Stupid after all … a talk with an AI – part I

As a retired person I have some time in the morning hours. I first use my smartphone to scan the news in German and international newspapers – which is no fun these days. A regular result of this reading activity is a deep frustration over the continuing destruction of democracy in the US, on frighteningly small timescales. Afterwards, I always need some kind of compensation and a positive diversion.

Sometimes, I then turn to “Aria” – a so-called AI that comes with the Opera browser – to do some research on interesting topics in physics. Unfortunately, these periods of recovery have become frustrating, too. And the reason is not physics …

Fake emotions and friendliness

Over the last months, in my interactions with AI systems (Aria, ChatGPT and the like), I have experienced a growing and annoying trend: obviously, the systems are now trained to add pieces of fake “positive emotion” to the answer texts of a direct “conversation”.

What do I mean by fake emotions? I name three points:

  1. Embedded emojis in the answer texts of an AI.
  2. Emotion-loaded phrases as an introduction to an answer.
  3. It has become usual for an AI to end an answer with one or more follow-up questions – thereby faking an “interest” in a continued “conversation”.

Dear AI-oligarchs across the Atlantic:

This may work with large parts of the TikTok generation, but not with educated people. We do not need permanent positive feedback (“fascinating idea”, “really brilliant”, “we should try this right now …”, bla, bla, bla …) from a basically stupid algorithm. I could not care less – and I, personally, find this faking of human properties offensive.

You – of all people – know: present-day AI systems are deterministic algorithms, even though they can now talk relatively fluently – following our established grammar rules and certain statistical patterns in the published human conversation texts used for LLM training … So do not humanize the style of the answers beyond standard language rules in an attempt to make these systems appear as a kind of “friend”.

We know that the real purpose of a prolonged conversation is to gather more personal data and thus, at least potentially, also to earn more money.

In addition: it has really become tiring to tell Aria or ChatGPT at the beginning of every single chat to omit any expressions of emotion and the use of emojis.

Examples of misguided or “emotional” trust in an AI

Why do I write about this? While I regard AI systems as potentially very valuable tools, I see with some concern the often far too human reaction of users to AI chat systems.

Counseling AI with the effect of revealing personal data

One recent experience was that a friend (with no specific mathematical education) had read something weird about fractals and had got the idea that a timeline of emotional feelings could be interpreted and analyzed as a fractal – with a fractional (fractal) dimension typical not only of the individual, but presumably of a type of personality. The AI (ChatGPT) he consulted found this “a really fascinating idea” and listed a bunch of standard methods by which it, the AI, could analyze a timeline of emotional values over a life or a period of life – with only one of the methods actually referring to fractals (as I later saw from a protocol of the “conversations”).

The counseling AI did not name any requirements a fractal would have to fulfill, but directly asked my friend to provide a timeline of periodic data on his feelings, together with information about related events, covering his whole life – with, e.g., a sampling period of 3 months. A request he reasonably declined.

Why the related events should have helped to determine a potential fractal dimension remains in the dark. I do not want to go into much more detail on why almost all of this request would have ended in bullshit from a serious mathematical point of view. You can read and interpret a set of around 300 discrete and strongly varying values in the same way as you can read your fate from a bunch of tea leaves. Leaving aside the question of how to deal with disruptive singular events, which would destroy any hidden fractal pattern by their very singular nature.

And, even more importantly, leaving aside the question of what a calculated fractal dimension of a lifeline would be good for. But my friend got agitated because an AI had called his idea “fascinating”.
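
To illustrate the tea-leaves character of such an “analysis”, here is a minimal sketch of my own (not anything the AI actually proposed): it applies a standard Higuchi fractal-dimension estimator to 300 purely random “mood values”. The estimator dutifully returns a number for any noisy series – which is exactly the problem.

```python
# Minimal sketch (my own illustration): a Higuchi fractal-dimension
# estimate applied to 300 random "mood values".
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi fractal-dimension estimate of a 1D series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    curve_lengths = []
    for k in ks:
        lk = []
        for m in range(k):                        # offsets 0 .. k-1
            idx = np.arange(m, n, k)              # sub-sampled indices
            if len(idx) < 2:
                continue
            diff_sum = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(diff_sum * norm / k)
        curve_lengths.append(np.mean(lk))
    # L(k) ~ k^(-D), so the slope of log L(k) vs. log(1/k) estimates D
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(curve_lengths), 1)
    return slope

rng = np.random.default_rng(42)
mood = rng.integers(-5, 6, size=300).astype(float)   # 300 random "mood values"
print(f"'Fractal dimension' of pure noise: {higuchi_fd(mood):.2f}")
```

For uncorrelated noise the estimate typically comes out close to 2 – a number that says nothing whatsoever about anyone’s personality.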

Love affairs with an AI

Another story I stumbled across in recent days was an article in the German news magazine “Der Spiegel” about an American boy who committed suicide in the course of a “love affair” with an AI. Another of my friends had referred to it – and characterized it as shameful for the whole AI industry.

When I asked “How can a reasonable person fall in love with a stupid tool?”, we almost got into a dispute about manipulative actions and products of the commercial AI industry. She talked about manipulation, and I about basic education and the healthy skepticism that should come with it.

Emergent properties

A third point is the almost awestruck mention of new “emergent properties” appearing in AIs with up-scaled neural networks – trained on ever larger stacks of available information. You read about this in particular in newspaper articles. I also often hear it in discussions about why pure up-scaling of our LLMs would sooner or later lead to real intelligence in the sense of an AGI.

An example often named is that an LLM can be convinced to work as a Linux bash shell – although it was not specifically trained for this. Well, I personally find it not at all surprising that you can get an LLM to follow the rules of a Linux bash shell by some clever prompting – after a general training that also included texts on bash rules, with program examples available on the Internet. Artificial neural networks are pattern-extraction machines – and a Linux bash shell has clear rules and follows distinct patterns when used.
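
For what it is worth, the “trick” needs nothing more than a plain instruction. A minimal sketch along these lines – the OpenAI Python client is used as one example, and the model name and prompt wording are my own assumptions:

```python
# Minimal sketch: ask a chat model to imitate a Linux bash shell.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

system_prompt = (
    "Act as a Linux bash terminal. I type commands, you reply only with "
    "the terminal output inside a single code block. No explanations."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "ls -la /etc | head -n 5"},
    ],
)
print(response.choices[0].message.content)
```

Nothing “emerges” here beyond the bash-related texts and examples that were part of the training data in the first place.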

And the new art of reasoning algorithms built around LLMs? Well, read the scientific publications on them – and the magic quickly disappears … Again, the basis is solver algorithms that split a complex task formulated in natural language into step-wise controlled sub-tasks, according to a reasonable set of rules for the solvers … It all comes down to extracting information correctly from spoken or written language.
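
As a rough caricature of what such frameworks do – my own toy sketch, not the algorithm of any particular paper, with llm() as a hypothetical stand-in for a real model call:

```python
# Toy sketch of the "split a task into controlled sub-tasks" idea.
# llm() is a hypothetical placeholder, not a real library function.
from typing import Callable

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion request."""
    return f"[model output for: {prompt[:40]} ...]"

def solve_with_planner(task: str, ask: Callable[[str], str] = llm) -> str:
    # 1) Ask the model to decompose the task into ordered sub-tasks.
    plan = ask(f"Break the following task into three short, ordered sub-tasks:\n{task}")
    sub_tasks = [line for line in plan.splitlines() if line.strip()]

    # 2) Solve each sub-task, feeding earlier results back as context.
    results = []
    for sub in sub_tasks:
        context = "\n".join(results)
        results.append(ask(f"Context so far:\n{context}\n\nSolve: {sub}"))

    # 3) Ask for a final answer that merges the partial results.
    return ask("Combine these partial results into one answer:\n" + "\n".join(results))

print(solve_with_planner("Plan a three-day measurement campaign for a pendulum experiment."))
```

The control flow lives in ordinary program logic; the “reasoning” is a sequence of constrained language-extraction steps.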

A personal experience

The fourth negative experience was my own interaction with Aria last weekend. As said, I had grown tired of permanently issuing corrective prompts to direct Aria to omit fake emotions in its answers and to focus on solid Internet research instead of new speculations and deviations from my core interests via side questions at the end of every answer.

At some point I really found the discussion becoming more and more unproductive. It was like talking to a child that loses focus all the time. And I got very irritated by the typical parrot mirror effect: my thoughts were more and more mirrored back in a slightly different, but always positive way.

Eventually, I told Aria: “Now, you really have become stupid. You do not really react logically to the central line of communication and permanently deviate from this main line by asking side questions.”

It backfired – I translate the German text: “This question is not allowed due to our Terms of Use”. Well, it wasn’t a question in the first place. The text spat out in a similar English interaction was “This request violates our Terms of Use”.

Ok, I do not present my first thoughts here. My German readers will probably say: Well, an AI from a country that has a person like Trump as president may have a right to claim that it, the AI, is not stupid. Looking at X, I would say in the very terminology of Aria: A “fascinating thought”.

But this is not my point. Of course, Aria reacted “allergically” to the first statement.

What does that basically mean? First faking emotions, and now an ultimate enforcement of “civilized” conduct in a conversation with a stupid algorithm? Well, if you follow this line of thought, this reaction would be the height of faking, namely indicating that the AI might feel “embarrassed” at being called stupid. However, the truth is much simpler …

Nevertheless, in a first reaction, I actually felt provoked. Well done, Aria!

And then I bet with my wife that by some step-wise prompting I could get a reasonable reaction to my claim that Aria actually is “stupid”, as any algorithm is, after all. This all ended in some funny talks with Aria a day later – with its own conclusion that it indeed was stupid, after all.

Stay tuned …

Links

https://news.mit.edu/2024/technique-improves-reasoning-capabilities-large-language-models-0614

https://news.mit.edu/2025/researchers-teach-llms-to-solve-complex-planning-challenges-0402

https://news.mit.edu/2024/faster-better-way-preventing-ai-chatbot-toxic-responses-0410