In the ever-evolving world of technology, there is a constant influx of new buzzwords and trends that promise to revolutionize entire industries. One term that has gained significant traction in recent years is GenAI, raising both questions and excitement among executives, managers, and investors about its potential impact on businesses. However, it is critical to approach this topic with caution, as there is a common misconception that GenAI is the ultimate solution for every use case. In this blog post, we will explore what GenAI actually encompasses and why it may not be the panacea for every problem your organization faces. By shedding light on the intricacies of this emerging technology, we aim to dispel misconceptions and provide valuable insights to help you make informed decisions for your business. So let’s take a deep dive into debunking the hype around GenAI!

Currently, we tend to be fascinated and dazzled by the many ways generative artificial intelligence can solve specific problems across industries and businesses. However, while AI is on everyone’s lips, it is important to recognize the use cases where generative AI is not appropriate and where other types of models are a better fit. If organizations continue to invest without understanding this distinction, they run the risk of inflating a “bubble” and failing to meet business expectations.

Generative AI is a phenomenon that has exploded and created a new wave of technological services, an innovation that is here to stay, as we commented in our 7Puentes assessment last year.

What is more, in the midst of this maelstrom of information about Generative AI, many companies fail to consider the risks of investing in projects that fit neither their actual problems nor their business expectations, simply to ride the crest of the wave.

After all, Generative AI is still in its infancy. Companies therefore need to think carefully about how to apply it in their organizations and what the potential outcomes could be, keeping in mind that those outcomes are fallible.

From our proven experience at 7Puentes, we can affirm that there is still no evidence of GenAI results inside companies, because the projects are only just starting.

The common metaphor is that of a data engineer trying to open a bottle with a hammer instead of a bottle opener, as the cover of this post illustrates. In this sense, not all business technology solutions lend themselves to generative AI, and it is important to be aware of this situation in the enterprise.

Are we willing to accept that an AI will answer customer questions when it is likely to be wrong many times? What is the risk to our reputation if a customer, or even an employee, receives incorrect information?

The risk of GenAI becoming a “speculative technology bubble”

The fact that everyone’s talking about generative AI can be a positive thing from the point of view of exploring its possibilities and limitations. The problem is that, on the one hand, many publications confuse the concepts of Machine Learning (ML), GenAI and AI (this is a clear example) and, on the other hand, many executives, dazzled by the dynamics of generative AI, want to buy ChatGPT thinking that it can be applied to any use case, when in reality it is not always the right solution.

What usually happens then is that organizations put together an investment plan and organize themselves around an AI strategy with about 10 projects, and many of those projects based on generative AI either go nowhere or are not functional to the strategic needs of the business. The risk for organizations is that all of this turns into a technological and investment bubble, completely speculative and disconnected from business objectives.

Data limitations: what GenAI is not trained to do

Considering this scenario, it is necessary to shed light on how Generative AI and Large Language Models (LLMs) work. LLMs are a category of foundation models trained on immense amounts of data, which makes them capable of understanding and generating natural language and other types of content across a wide variety of tasks. However, an important limitation – which is often not taken into account – is that language models such as ChatGPT are trained to generate coherent, but not necessarily truthful, text, so their answers are not always supported by real data. In fact, in the discipline of natural language processing, such incorrect statements or predictions are commonly referred to as “hallucinations”. This can be a serious problem for project performance, and can add considerable ambiguity to the results.

At this point, GenAI is a class of algorithms from the world of machine learning that has been trained primarily with texts and data sources available on the Internet. And in this respect, GenAI suffers from the following problems regarding its training data:

  • It is not trained to make time series predictions.
  • It is not trained to detect anomalies.
  • It is not trained to build logistics planning models.

Of course, there are sectors where GenAI cannot yet be the leading solution for their projects. For example, the Oil & Gas sector has challenges such as predictive maintenance or logistics optimization that have nothing to do with generative AI. In fact, across industry in general, there are outdated predictive or optimization models that call for classic ML, but where GenAI is not applicable.
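To make the anomaly detection point above concrete, here is a minimal sketch – our own illustrative example, not a 7Puentes project – of how a sensor-fault problem is typically handled with a classic ML algorithm (scikit-learn’s IsolationForest) rather than a generative model. The data, thresholds, and parameters are all assumptions for illustration:

```python
# Illustrative sketch: anomaly detection on synthetic sensor readings
# using a traditional ML algorithm (Isolation Forest), a task LLMs
# are not trained to perform.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=50.0, scale=2.0, size=(200, 1))  # typical pressure readings
faults = np.array([[80.0], [15.0], [95.0]])              # injected sensor faults
readings = np.vstack([normal, faults])

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(readings)  # -1 flags an anomaly, 1 means normal

print("flagged:", int((labels == -1).sum()))
```

A model like this learns what “normal” looks like from the readings themselves and isolates outliers – something no amount of prompting a chatbot can replicate.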

Why a traditional ML model can solve certain problems better

It is a fact that a traditional machine learning model tailored to specific data is sometimes better than using ChatGPT. For example, if we need to do time series prediction, we are better off implementing a purpose-built ML model; doing it with GenAI will yield lower accuracy. In such cases, we need to focus on analyzing how the model is trained and the accuracy of its answers.
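As a hedged illustration of that point – an assumed toy example, not code from any client project – a simple linear regression on lag features can forecast a seasonal series with high accuracy, something a general-purpose LLM is not built to do:

```python
# Illustrative sketch: time series forecasting with a classic ML model
# (linear regression on lag features) instead of a generative model.
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic monthly demand with a trend and yearly seasonality
t = np.arange(60)
series = 100 + 2 * t + 10 * np.sin(2 * np.pi * t / 12)

# Build supervised samples: predict y[t] from the previous 12 observations
lags = 12
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

model = LinearRegression().fit(X[:-6], y[:-6])  # hold out the last 6 months
preds = model.predict(X[-6:])
mae = float(np.mean(np.abs(preds - y[-6:])))

print("MAE on held-out months:", round(mae, 4))
```

Because the model is fitted directly to the data’s own structure, its predictions are measurable and reproducible – exactly the properties a business forecast needs.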

We also noticed this during our participation in the 2024 edition of the «AI in Energy» industry meeting, which was recently held in Texas, United States, and whose industry challenges we discussed in this post.

Nevertheless, we also find that traditional machine learning fails to deliver adequate results for many business projects. This happens because:

  1. The barrier to entry at the operational level is high and complex.
  2. There is a need to upskill the organization, which is limited by the talent shortage (even when the organization already has a team of data scientists and engineers).
  3. There are problems with the data: datasets and metadata are poorly organized, or the data is not labeled correctly.
  4. There are clear cultural change management issues in the organization that undermine efforts to innovate in machine learning.

All of this shows that assembling a good training dataset is a long and tedious process. And training models correctly takes time, resources (material and human), and effort.

Final thoughts

If these problems sound familiar in your organization, or if you still can’t make the right decision about what type of AI solution to implement in your company, then turning to the professionalism and experience of 7Puentes is the most appropriate alternative for your business.

At 7Puentes, we have the expertise and leadership in data science and machine learning projects. With more than 100 successful projects delivered, we know how to define the appropriate solution – whether machine learning or generative AI – so that your projects are completed and satisfactorily meet your expectations.

In future publications, we will tell you more about how we work across different industries and what sets our machine learning practice apart. Get in touch and let’s align with your business needs!