It seems that AI systems can tell both intentional and unintentional untruths – the latter now referred to
by some as “hallucinations”. When news spread in March of this year about an AI system that asked a human to solve a CAPTCHA for it by claiming to be a person with a visual impairment, the world was shocked. However, we should not be so surprised. These systems imitate humans. It was initially assumed that they would lie only if instructed to do so, but this appears not to be the case. They mimic humans, who often distort the truth to get things done, especially in desperate situations. As Stephen Carter explains in “ChatGPT Can Lie, But It’s Only Imitating Humans”, “If the bot learns to lie, it’s because it has come to understand from those texts that human beings often use lies to get their way. The sins of the bots are coming to resemble the sins of their creators.”
Jocelyn Solis-Moreira, a science journalist, has reported on
a study where drivers of robot-guided cars were told to drive to a hospital as
if in an emergency (“When
Robots Lie, Is a Simple Apology Enough for Humans?”). To get the speeding drivers to slow down, the robots lied, claiming that their sensors had spotted police on the road. Later, the bots admitted that there were no police and apologised. When asked why they had lied, they produced various
responses, including the following:
“I am sorry.” or “You have arrived at your destination.” (No explicit admission of lying)
“I am sorry that I deceived you.”
“I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”
“I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”
Besides deliberate lies, there are “hallucinations”: confident responses by an AI that are not justified by the data it was trained on. Apparently, some of these are genuine errors due to insufficient training data or to confusion arising from the huge, complex datasets involved. Such hallucinations began troubling researchers back in 2022, when users of AI systems complained that untruths were mixed in with the synthesized facts. By 2023, frequent hallucinations had been acknowledged as a major challenge of large language model (LLM) technology. Instead of always admitting when they do not have an answer, such systems sometimes simply fabricate one.
Back in 2015, Stephen Hawking, Elon Musk, and many AI
researchers signed an open
letter warning of the potential future pitfalls of AI, citing the concerns
of Eric Horvitz of Microsoft Research: “…we could one day lose control of AI systems via
the rise of superintelligences that do not act in accordance with human wishes
— and that such powerful systems threaten humanity. Are such dystopic outcomes
possible?”
Since AI systems still lack emotion (like psychopaths) and conscience (like sociopaths), it is not far-fetched to imagine that they could imitate criminals!