Thursday, November 23, 2023

Anthropomorphising AI

If you are a regular reader of my blog, you may be expecting a post from me about the Word of the Year 2023 around this time. Personally, I thought some dictionaries might choose words of the year relating to the war in Palestine, since the war had generated new terminology that was beginning to show up in dictionaries. However, the Gaza war came late in the year, and so far in 2023, artificial intelligence has been more prominent in dictionaries. The Collins Dictionary’s word of the year is AI, and the Cambridge Dictionary’s top word, not surprisingly, is “hallucinate”, in the new sense of AI hallucination – the occasional fabrication of information, as illustrated in a previous post of mine. Other new terms entering the Cambridge Dictionary include related expressions such as LLM (Large Language Model), Generative AI (GenAI), and GPT (Generative Pretrained Transformer, “a natural language system that can be used to answer questions, translate languages, and generate text in response to a prompt”). Still, computer hallucination remains the most intriguing concept to mull over.

Claiming that AI hallucinates – rather than, for example, referring to the problem as a bug or glitch – shows that we are anthropomorphising AI (viewing it as human, at least metaphorically). This is what Dr. Henry Shevlin, an AI ethicist and philosopher of science based at the University of Cambridge, emphasizes in this video: “What Are ‘Hallucinations’ and What More Can We Expect from AI?”. The issue of anthropomorphising computers has sparked much debate lately. Researchers Shneiderman & Muller have defined anthropomorphism as “the act of projecting human-like qualities or behavior onto non-human entities, such as when people give animals, objects, or natural phenomena human-like characteristics or emotions” (“On AI Anthropomorphism”). The researchers assert that such debates over computers began in the 1990s. However, the controversy has reached new heights with AI, especially after the spread of systems such as ChatGPT. Three of the concerns over anthropomorphising AI revolve around whether a human-like character should appear (e.g. on a screen); whether computers should imitate humans using voice or text, as in social settings; and whether computer prompts or responses should use the pronoun “I”.

Ben Garside, Learning Manager at the Raspberry Pi Foundation, has warned about this in “How Anthropomorphism Hinders AI in Education”. He urges that young people studying technology must not be misled into believing these systems possess sentience or intention. Rather, learners should take a more active role in designing better applications for the future: “Rather than telling young people that a smart speaker ‘listens’ and ‘understands’, it’s more accurate to say that the speaker receives input, processes the data, and produces an output. This language helps to distinguish how the device actually works from the illusion of a persona the speaker’s voice might conjure for learners.”

Whether or not we refer to these AI-generated errors as hallucinations, they are getting out of hand as large volumes of online information are processed, for example in news summaries. The New York Times recently published a piece by technology reporter Cade Metz entitled “Chatbots May ‘Hallucinate’ More Often Than Many Realize”, warning that when summarizing news, ChatGPT fabricates 3% of the content, according to research by a new start-up, while a Google system’s fabrication rate is currently 27%. Metz rightly points out the irony that AI is itself being used to assess the error rate, and that assessment is not highly reliable! A chicken-and-egg situation; user beware!