...

Demystifying LLMs: What AI Is Not

Cristiano Valente
January 11, 2024
Read: 4 min

Just a year ago, hardly anyone had heard of large language models (LLMs), the technology behind ChatGPT. Now, these models are everywhere, revolutionising the way we work and interact with machines.

But with such great hype come the misconceptions: AI is not (yet?) the Swiss Army knife it tends to be depicted as. So let’s cut through the hype and see what the real deal is.

When speaking of Generative AI, most businesses are looking at leveraging LLMs. In this article, we are going to have a look at what this kind of Gen AI is not to demystify the concept.

LLMs are not databases

While LLMs are often touted as the new-age search engines, these are fundamentally different things. Even though they may complement each other, they are certainly not the same.

An AI model does not store facts like a database. Instead, it creates an abstract internal representation of information using statistical weights, which are used by a series of layers to produce an output. They are probabilistic machines – some even call them "stochastic parrots" although that is a bit of a hot debate.

So, rather than storing facts, they abstract them, often generating responses that seem factual but might not always be.

This is the reason why LLMs sometimes hallucinate information: they generate facts based on patterns they observe. If your prompt echoes their training data closely, they can be spot-on. Other times, not so much.

That is where plug-ins or techniques like retrieval-augmented generation (RAG) come into play.

RAG is a way to optimise the output of the LLM you are using, so it does not base its response on its training dataset but on a vast, trustworthy knowledge base. It is like giving your LLM a fact-checking assistant that pulls up-to-date info from a curated database to inform its responses.

Let's see what AI is not

LLMs are not calculators

For all of the aforementioned reasons, generative AI, while a whizz with words and images, often trips over even basic maths or logical puzzles. They are learning, though.

One common workaround is to have the model generate the code to solve the problem. It is like giving it a calculator when it is stuck on a maths problem. This effectively augments the model’s capabilities to do almost anything you can do with code.

Yet, even with this coding crutch, the solution is not always flawless because…

AI is non-deterministic

Traditional programming is deductive, a term borrowed from philosophy: you provide the logic, the program produces the results. This approach yields deterministic software; think of it as a reliable recipe: the same ingredients always bake the same cake.

AI, on the other hand, is like the chef learning from tasting dishes rather than following recipes. This is called inductive programming: you show the software examples, and it figures out a pattern, a model of your problem. This is why we call them models in the first place.

Generative AI delves deeper, using deep learning, an advanced technique inspired by our brain's neural networks, with layers upon layers of virtual neurons. Due to intricacies we are not going to dive into now, these deep learning models are inherently non-deterministic. Meaning, feed them the same input, and you may get an entirely different result every time.

If you have ever tried throwing the same prompt at ChatGPT multiple times, you know the variety can be quite a show.

You can try to tune this with the model's temperature setting, but it is more of a sticky plaster than a fix, and can bring its own quirks. You are still not getting a deterministic output.

In a nutshell, while it is groundbreaking, Generative AI is not ready to replace software engineering just yet.

You do not want to chat with everything

Lately, it seems like AI chatbots and prompt-based interfaces are popping up everywhere, from websites to code editors all the way to drawing tools.

While in some scenarios, the feature can be incredibly powerful in such tools, often a few clicks in a well-designed UI prove far more efficient and yield more predictable results.

You do not necessarily have to let your users chat with your bot. Sometimes, wrapping a sleek UI around your use case and generating a well-crafted prompt is the way to go.

Sure, it might feel futuristic to chime to your microwave to 'Cook at maximum power for 1 minute,' but let's be honest, hitting that quick-start button gets you to your snack faster 99% of the time.

Are you up to the challenge?

So, you have found the perfect use case for Generative AI, considering all the points above. Great, but there are still a few hurdles to clear.

AI is not just a plug-and-play solution; it is more of a team player with human intelligence (HI). Tailoring a model for your specific needs involves a hefty amount of data engineering:

  • Sourcing the right training sets;
  • Cleaning them up;
  • Setting up RAG;
  • Using techniques like Reinforcement Learning from Human Feedback (RLHF) to ensure it behaves as expected.

And, of course, let’s not forget about privacy. If you are handling sensitive information, entrusting it to an external or third-party provider might be a no-go. Are you planning to set up and maintain your own models? This is a whole new ball game, both complex and costly, although it is getting more and more affordable every day.

Finally, you also need to test the model for any possible exploits, workarounds, data leaks, regressions and the like.

Obviously, it is a big commitment and not something every company is ready or equipped to dive into.

Now that we know what AI is not

We live in a technological wonderland where AI is bursting with potential to revolutionise businesses in amazing ways. Yet, like any powerful tool, it comes with its risks and limitations.

Businesses diving into AI should do so with their eyes wide open and minds sharp, as success is all about smart, careful adoption.

Beware of anyone selling AI as a cure-all, magical solution. The best tool in your AI toolkit is still your own intelligence.

If you would like to understand how you can implement AI capabilities, drop us a line.

More on the topic

Everything we know, we are happy to share. Head to the blog to see how we leverage the tech.

Data diff validation in a blue green deployment: how to guide
Data Diff Validation in Blue-Green Deployments
During a blue-green deployment, there are discrepancies between environments that we need to address to ensure data integrity. This calls for an effective data diff...
January 31, 2024
GDPR & Data Governance in Tech
GDPR & Data Governance in Tech
The increasing focus on data protection and privacy in the digital age is a response to the rapid advancements in technology and the widespread collection,...
January 18, 2024
Data masking on Snowflake using data contracts
Automated Data Masking on Snowflake Using Data Contracts
As digital data is growing exponentially, safeguarding sensitive information is more important than ever. Compliance with strict regulatory frameworks, such as the European Union’s General...
January 17, 2024
Digital innovations in Ukraine
Top 9 Digital Innovations in Ukraine
Attention. Air raid alert. Proceed to the nearest shelter. Don’t be careless. Your overconfidence is your weakness. – Air raid alert app voice-over using the...
January 4, 2024
Doing business in Ukraine
Business in Ukraine: Recommendations for the Tech Sector
In December 2023, I made my first trip to Ukraine since Russia’s full scale invasion of the country in February 2022. It was, for all...
January 3, 2024
How to get started with data contracts
How to get started with data contracts
More and more modern companies are looking to get started with data contracts because they are a great way of increasing a business' data maturity...
December 28, 2023

Everything we know, we are happy to share. Head to the blog to see how we leverage the tech.