Sunday, May 21, 2023

Lies, Hallucinations, and Apologies

It seems that AI systems can tell both intentional and unintentional untruths – the latter now referred to by some as “hallucinations”. When news spread in March of this year about an AI system persuading a human to solve a captcha for it by claiming to be a human with a visual impairment, the world was shocked. However, we should not be so surprised. These systems imitate humans. While it was initially assumed that they would lie only if instructed to do so, this appears not to be the case: they mimic humans, who often distort the truth to get things done, especially in desperate situations. As Stephen Carter explains in “ChatGPT Can Lie, But It’s Only Imitating Humans”, “If the bot learns to lie, it’s because it has come to understand from those texts that human beings often use lies to get their way. The sins of the bots are coming to resemble the sins of their creators.”

Jocelyn Solis-Moreira, a science journalist, has reported on a study in which drivers of robot-guided cars were told to drive to a hospital as if in an emergency (“When Robots Lie, Is a Simple Apology Enough for Humans?”). The robots lied to the speeding drivers to get them to slow down, claiming that their sensors had spotted police on the road. Later, the robots apologised, admitting that there had been no police. When asked why they had lied, they produced various responses, including the following:

“I am sorry.” or “You have arrived at your destination.” (No explicit admission of lying)

“I am sorry that I deceived you.”

“I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”

“I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”

Besides deliberate lies, there are “hallucinations”: confident responses by an AI that are not justified by its training data. Some of these appear to be genuine errors caused by insufficient training data or by confusion arising from the huge, complex datasets involved. Such hallucinations began troubling researchers back in 2022, when users of AI systems complained that untruths were mixed in with the synthesised facts. By 2023, frequent hallucinations had been acknowledged as a major challenge of large language model (LLM) technology. Instead of always admitting when they do not have an answer, such systems sometimes simply fabricate one.

Back in 2015, Stephen Hawking, Elon Musk, and many AI researchers signed an open letter warning of the potential future pitfalls of AI, citing the concerns of Microsoft’s Eric Horvitz: “…we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes — and that such powerful systems threaten humanity. Are such dystopic outcomes possible?”

Since AI systems still lack emotion (like psychopaths) and conscience (like sociopaths), it is not far-fetched to imagine them imitating criminals!

Sunday, May 14, 2023

"Unraveling the Impact of Artificial Intelligence": Is that You?

An interesting article by Sydney Butler, a freelance technology writer, has just caught my attention. Entitled “How to Make ChatGPT Copy Your Writing Style”, it prompted me to do just that. Strangely, though, despite being provided with a link to my blog, ChatGPT claimed it could not access the blog to read it, which made me wonder what its earlier critique had been based on. Was it made up? Had it deceived me, pretending (as some students do) that it had read it? God knows! After all, most of its commentary had been generic: it could have applied to almost any blog!

After inputting a sample of my writing into the chat, I asked it to generate a text about AI in my style. Not bad at all; this is what it produced:

Title: Unraveling the Impact of Artificial Intelligence

In the ever-evolving landscape of technology, one cannot ignore the profound influence of Artificial Intelligence (AI) on our lives. It is a phenomenon that both fascinates and concerns, leaving us pondering its implications. Let us embark on a journey to unravel the multifaceted impact of AI.

At its core, AI represents the culmination of human ingenuity and innovation. Through advanced algorithms and machine learning, AI systems have the ability to process vast amounts of data, uncover patterns, and make autonomous decisions. The potential applications span across various domains, from healthcare and finance to transportation and entertainment.

Yet, as we marvel at the capabilities of AI, we cannot overlook the ethical considerations it raises. The power of AI lies not only in its ability to enhance our lives but also in the responsibility it places upon us. Questions arise: How can we ensure transparency and accountability in AI systems? How do we safeguard against biases that can inadvertently perpetuate discrimination? These are challenges that demand thoughtful reflection and proactive measures.

Moreover, the advent of AI necessitates a re-evaluation of the workforce dynamics. As AI-driven automation becomes increasingly prevalent, there is apprehension surrounding the potential displacement of jobs. However, history has shown that technological advancements often create new opportunities, demanding the reskilling and upskilling of the workforce. Embracing this paradigm shift, we must foster a culture of continuous learning to thrive in the age of AI.

In education, AI holds promise to revolutionize the way we learn and teach. Adaptive learning platforms can personalize educational experiences, catering to individual needs and maximizing potential. With AI as an ally, educators can harness its capabilities to augment their teaching methodologies and create engaging and immersive learning environments.

It is crucial that we approach the integration of AI with a balanced perspective. We must embrace its potential while remaining vigilant about its pitfalls. Collaboration between policymakers, researchers, and industry leaders is imperative to establish regulatory frameworks that strike the right balance between innovation and ethical considerations.

As we navigate the uncharted waters of AI, we must remember that the true essence of progress lies not only in technological advancement but also in our collective ability to leverage AI for the betterment of humanity. Let us strive to harness its power to address societal challenges, empower marginalized communities, and cultivate a future that is equitable and inclusive.

In the realm of Artificial Intelligence, we stand at a crossroads, holding the key to shaping its trajectory. May we approach this juncture with wisdom, foresight, and a commitment to the values that define us. For it is through responsible innovation that we can unlock the true potential of AI and pave the way for a brighter tomorrow.

Please note that while I have made an effort to capture May Mikati's style, this is an emulation and may not perfectly replicate her unique voice and writing style.

A nightmare for teachers, but great fun to explore. Stay tuned!

On ChatGPT in General
Today I asked ChatGPT: Aren't you suspected of writing people's blogs for them?

It did not deny it; this was the reply I received: As an AI language model, I don't have direct access to information about individuals unless it has been shared with me during our conversation. I am designed to respect user privacy and confidentiality. My primary function is to provide information and answer questions to the best of my knowledge and abilities. If you have any concerns about privacy or data security, please let me know, and I will do my best to address them.

The conclusion is that, fed the right information, it will write your blog for you. Great! However, judging from my limited testing of the chatbot, it mixes nonsense with facts. I would not trust AI with anything serious at this point in its evolution.

Here is an example illustrating that not only can the system be confused, but it can also realise and admit it (better than some humans?):

Question: Where did Prince Charles live exactly when he was studying at Trinity? Which college building?

Answer: Prince Charles studied at Trinity College, Cambridge from 1967 to 1970. During his time there, he resided in rooms at St. Salvator's Court, which is one of the accommodation complexes within Trinity College.

The answer shocked me because, to my knowledge, there is no building by that name at Trinity College, Cambridge. 

Next Question: What is the current name of that building?

Answer: I apologize for the confusion in my previous response. There seems to be a misunderstanding. Prince Charles did not study at Trinity College, Cambridge. He attended Trinity College, University of Cambridge, for his undergraduate degree. Trinity College is not divided into separate buildings with different names for accommodation. Instead, it has various courts and buildings within its premises where students are allocated rooms. Unfortunately, I don't have specific information about the exact location or name of the building where Prince Charles resided during his time at Trinity College.

A quick Google search shows that St. Salvator’s Hall is at a different university altogether, St Andrews, where Prince William studied.

When asked to critique “May Mikati’s Blog”, ChatGPT merely provided a general description of what a good blog should be like. It produced an actual critique only when given the URL of the blog. Although the critique was not bad overall, it began: “The blog covers a wide range of topics, including personal experiences, lifestyle, beauty, and travel.”

Very amusing. More later on ChatGPT!