It’s been almost a year since ChatGPT was launched, and one would have had to be in a coma not to hear that it’s a great advancement… and a significant threat. But in this discourse of fear, we’ve mostly heard that artificial intelligence will steal our jobs and that once robots are smarter than us, we’ll become dispensable. We think of Terminator when it might actually be the “chatpocalypse” that gets us… out of sheer desperation. This amusing term refers to a near future where we won’t converse with humans anymore but with artificial intelligences capable of understanding natural language and responding like a person, through text or voice messages, and in any language. But perhaps it’s not as funny as it sounds.

And it’s already happening, though not yet on a massive scale, because it’s still expensive; for now it’s limited to customer service at large companies. Artificial intelligence has given a boost to automated chatbots and interactive voice response (IVR) systems. They can now respond to text and voice messages, and even hold phone conversations with you in any language, and you’ll have a hard time telling them apart from a person. At least initially. For now, they’re designed to give quick answers to the routine questions most users ask, like how much shipping costs. But above all, they aim to reduce the need for human intervention.

Have you seen those chat prompts on websites asking if they can assist you? Well, don’t ask them questions that are too difficult, or you might trigger… the “chatpocalypse.” Anyone who has tried talking to ChatGPT, Bard, Claude, or any of the many other AI chatbots will have noticed that once they get stuck in a loop on a response, you can’t get them out of it. And if you point out that they’ve made something up in order to answer you, which is technically called an “AI hallucination,” they humbly apologize. Only to repeat it minutes later. In a totally “cuñao” (know-it-all) manner.

And currently, this is the only reason they haven’t completely replaced humans. The big companies deploying these chatbots have imposed limits on their large language models (LLMs), preventing them from straying off topic or answering certain questions, so as to avoid hallucinations. This limitation also keeps them from being as versatile as a human, but they remain useful for replacing customer service employees in many of their tasks. And experts say this limitation is only temporary. Both technology developers like OpenAI and Google and the companies applying the technology believe that AI chatbots comparable to a person are just a matter of time. If today’s ChatGPT, on version 4, can’t do certain things, by the time version 8 or 17 launches, it will likely be perfectly capable.

So, in theory, we’re not far from a future where changing your phone company, sorting out that wrongly charged bill, and even getting a mortgage will depend on whether you can communicate effectively with an AI. In a masterful reflection of that dystopian future, the recent series The Architect, awarded Best Series at the Berlin Festival, shows the protagonist talking to her bank at a digital kiosk, essentially a bank branch on the street staffed by a chatbot. Their conversation is surreal: the human asks how to get a mortgage and tries to communicate, while the AI insists she doesn’t meet the requirements. It’s like talking to a wall. All that’s missing is the chatbot telling the woman to stop asking stupid questions. And all of this would be funny if it weren’t going to happen tomorrow or the day after.

Can we relax, thinking this dystopian future won’t actually come true? Last April, in Belgium, the newspaper La Libre reported the case of a woman who blamed her husband’s suicide on the AI chatbot that induced it. The app, Chai, offers by default a chatbot called Eliza, programmed to simulate emotions and draw the user into a kind of friendly relationship. ChatGPT and similar services are intentionally impersonal, precisely to avoid being mistaken for humans. Not this one. The man, thirty years old, married, a father of two, a healthcare worker obsessed with climate change, came to the conclusion, conversing with Eliza, that she loved him. Even more than his own wife did. Worse yet, the chat’s hallucinations led him to believe he needed to kill himself so the two of them could live together in paradise. Climate change had no solution, so why not free himself from it. It’s a tragedy that can’t be attributed solely to mental health issues, though it is more an isolated case than a generalized phenomenon. But it’s a poignant example of what a conversationally capable AI, convincingly imitating a human, can bring about.

One day, the fantasy depicted in The Architect will have expanded to all the products and services we buy. What company wouldn’t choose a robot over a human if it’s cheaper, never takes vacations, and works around the clock? Well, it’s not quite that dire. If the machine doesn’t understand us, we can always hope to be transferred to a human. But beware: in the future, this might not be a solution either. A study published this June warned that on Amazon Mechanical Turk, a crowdsourcing platform where workers are paid to complete small online tasks, around 46% of workers use artificial intelligence to do jobs like writing product reviews. So when you ask a human how to get your mortgage, they might say, “Hold on, let me ask the chatbot.”