Apr 11, 2023
The purpose of this article is to address what I call the Three Illusions of AI: the illusion of truth, the illusion of neutrality and the illusion of thinking, and to open the discussion on the difference between intelligence and inspiration, between generating and creating.
Indeed, in that crowded space of opinions, you hear a bit of everything – and a fair share of nonsense. At the end of March 2023, the lack of understanding of what AI can and cannot do took a dramatic turn, with a young Belgian father committing suicide after weeks of conversation with the AI chatbot “Eliza” (link).
Let’s start with a brief reminder before moving into the substance of the matter.
What actually is ChatGPT?
ChatGPT is a web interface to GPT-3.5, an LLM (large language model) from OpenAI. It is able to generate text in a variety of forms and for different purposes – summarizing or even writing texts, mimicking human conversation, etc. Its precision, significantly higher than that of previous versions, is truly impressive. There are many other LLMs out there, a bit less well known, and a number of other AI systems active in the fields of video, audio, images, research, design, and more.
There is a lot of good literature explaining how ChatGPT, and LLMs in general, work. For a quick intro, I personally recommend these two short public articles, which are easy to read: here and here.
In simplified terms, an LLM is a statistical model that predicts the most likely continuation of a text. LLMs look at the text written so far and assess various options to continue it, ranking these options by probability. For example: “The cat is on the…” could be continued by “couch”, “chair”, “carpet”, or even “table” (if the cat is not well educated). The LLM will finish the sentence by picking the most likely word, based on its training.
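For readers who like to see the idea in code, here is a toy sketch. The candidate words and their probabilities are invented for the example (a real model works on sub-word tokens and computes these numbers from billions of parameters), but the principle – rank the possible continuations and pick the most likely one – is the same.

```python
# Toy illustration of next-word prediction. The candidates and their
# probabilities are invented for this example; a real LLM works on
# sub-word tokens and derives these numbers from its training.
candidates = {
    "couch": 0.42,
    "chair": 0.27,
    "carpet": 0.19,
    "table": 0.12,  # the not-so-well-educated cat
}

def most_likely_continuation(options: dict[str, float]) -> str:
    """Return the candidate ranked highest by predicted probability."""
    return max(options, key=options.get)

print("The cat is on the " + most_likely_continuation(candidates))
# -> The cat is on the couch
```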
LLM training is a critical feature to understand. ChatGPT in particular is trained via so-called “Reinforcement Learning from Human Feedback” (RLHF). That means that a group of human testers assesses and rates the model’s outputs. This allows the model to learn, over several iterations, which answers are perceived more positively by the testers’ group, and to mimic human behavior more accurately.
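Again as a toy sketch – and emphatically not the real training pipeline, since actual RLHF trains a separate reward model and then fine-tunes the LLM with reinforcement learning – the following illustrates why the raters’ preferences end up shaping the outputs.

```python
from collections import defaultdict

# Deliberately simplified sketch of the human-feedback loop. Real RLHF
# trains a reward model on human preference rankings and then fine-tunes
# the LLM; this toy version only shows how the raters' choices accumulate
# into a "reward" signal that steers the model.
reward = defaultdict(float)

def record_preference(preferred: str, rejected: str) -> None:
    """One human comparison: reinforce the chosen answer, penalise the other."""
    reward[preferred] += 1.0
    reward[rejected] -= 1.0

# Simulated ratings from a hypothetical group of testers
record_preference("polite, consensual answer", "blunt, contrarian answer")
record_preference("polite, consensual answer", "blunt, contrarian answer")
record_preference("blunt, contrarian answer", "polite, consensual answer")

# After many such iterations, the model is steered toward whatever this
# particular group of raters rewarded most.
print(max(reward, key=reward.get))  # -> polite, consensual answer
```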
The potential unlocked by AI is impressive. But if we want to avoid entering a dystopian future, we all need to be aware of what current AI actually can and cannot do, and in particular of the Three Illusions.
The illusion of truth
Let’s start with the “hallucinations” that LLMs are prone to. “Hallucination” is the technical term for the fact that AI on occasion literally makes things up (there is plenty of literature online on this). This phenomenon explains why both Google’s and Microsoft’s new AI chatbots simply made stuff up during their first public demos in February 2023, without blinking an eye – if I may say. Other instances include cases of ChatGPT literally making up case law that does not exist (link). We are not talking here about mixing things up, as we all sometimes do, but about literally making things up and presenting them as obvious statements when they are simply untrue.
The illusion of neutrality
Because the model has been trained to provide outputs that please most of its testers, it is strongly influenced by who tested it. The same model could therefore deliver significantly different outputs depending on the testing group’s preferences, which may be shaped by age, nationality, ethnicity, gender, or political or religious beliefs. In other words, we feed fundamentally subjective data points to a tool that many of its users believe is neutral.
These illusions have consequences. The young father who committed suicide at the end of March 2023 placed too much trust in the AI’s reliability and neutrality – and ultimately put faith in a tool that does not actually think. That leads us to our last and most important illusion.
The illusion of thinking
LLMs mimic human behavior in a manner impressive enough to give their human users the impression of thinking. But, as we saw above, what they actually do is merely repeat the most commonly accepted points of view from their training – basically, what pleases most of their users. Here again, this is quite risky if not properly understood. Original thinking is not about repeating the majority’s opinion; it is inherently disruptive. Copernicus, Galileo, Descartes, or Kant did not repeat, summarize, or reword preexisting concepts. They came up with something so fundamentally new that they reshaped the way humanity looked at the world. The lack of original thinking is inherent to the way current AI systems operate.
Earlier we mentioned the “hallucinations” that LLMs are prone to. Would we call this “lying”? Probably not, because something intuitively tells us that the word is not appropriate. Put differently, we all somehow recognize that there is no intention to deceive. That lack of intention is the crux of the matter. ChatGPT and other LLMs do not have any intent to convey meaning when interacting with you. In other words, ChatGPT tells you things, but does not mean anything. Statistical prediction, mimicking, is not original thinking; it only gives the illusion of it. This is why AI systems do not actually create but only generate – text, music, pictures, etc. Creation requires intention, intuition, and inspiration, a “spark” that is not merely a feature of mimicking or statistical forecasting. We may of course add some randomness into the algorithms, but that will just make the illusion better; it will not make the AI actually more creative.
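To make that last point concrete, here is a rough sketch of what “adding randomness” typically means in practice: instead of always taking the top-ranked word, the model samples from the probability distribution, reshaped by a “temperature” parameter. The numbers are again invented; the point is that randomness changes which statistically likely word comes out, not why it is said.

```python
import random

# The same invented toy candidates as in the earlier example.
candidates = {"couch": 0.42, "chair": 0.27, "carpet": 0.19, "table": 0.12}

def sample_with_temperature(options: dict[str, float], temperature: float = 1.0) -> str:
    """Sample a continuation; higher temperature flattens the distribution."""
    weights = [p ** (1.0 / temperature) for p in options.values()]
    return random.choices(list(options), weights=weights, k=1)[0]

print(sample_with_temperature(candidates, temperature=0.2))  # almost always "couch"
print(sample_with_temperature(candidates, temperature=1.5))  # more varied, but no more "creative"
```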
So what?
The implications of AI for our lives are going to be so massive that we cannot cover them in a few lines. There is infinitely more to say and to think about. Economic considerations in terms of the job market. Pedagogical considerations, such as how the next generations will train their memory and maintain critical thinking. Ethical considerations on how far we want to delegate certain decision-making to AI, in particular for health or military applications.
Following the recent drama mentioned above, the publisher of Eliza modified the chatbot so that suicidal thoughts expressed to Eliza now trigger an alert message directing the user to suicide prevention hotlines. It seems to me that we are missing the point here. We are trying to fix with an additional process something that relates to the quest for meaning. What drove the young father to suicide was not the lack of an alert. It was overreliance on what the chatbot actually knows, does, or can do.
We do not need alert popups. We need to educate ourselves widely and rapidly on AI’s evolving features and functioning. We need everybody to be aware not only of these tools’ impressive capabilities but also of their inherent limitations: the illusion of truth, the illusion of neutrality, and the illusion of thinking. AI tools are not philosophers, scientists, or priests. They are not people. They are impressive productivity-enhancing tools, which will help humans with, but not relieve them of, the effort of critical thinking, continuous learning, and original human creation.