Monday, December 11, 2023

Lowbrow Language


The internet has made dictionaries accessible to everyone around the globe. However, online dictionaries clearly vary in credibility. Rather than classifying them merely as credible versus doubtful, or highbrow versus lowbrow, it might be better to place them on a fluid spectrum, with Urban Dictionary, a crowd-sourced dictionary that is apparently the least edited (if not completely unedited), at the bottom of the heap. This dictionary seems to use “moderators” who vote on whether to include a suggested entry, rather than availing itself of lexicographers or proper editors. The moderators do not correct spelling, grammar, or wording. In an article entitled “How Linguists Are Using Urban Dictionary”, Christine Ro of JSTOR Daily has in fact described Urban Dictionary as “a linguistic sewer”, since it allows audacious contributors to coin and add terms in a way one would not normally do in a formal context. The writer states that “Urban Dictionary continues a long history of recording low-brow language. It’s also a repository of a specific kind of internet immaturity.” It seems that anyone can add to it, with little resistance or quality control, unlike other crowd-sourced dictionaries such as Wiktionary, which employs lexicographers. Urban Dictionary’s style is somewhat vulgar in places, with new expressions, and new senses of existing expressions, constantly being coined. Grammatical errors abound. Though Urban Dictionary was started by a computer science student in 1999 as a parody of Dictionary.com, it has grown to attract tens of millions of visitors per month, according to Ro. Would one recommend Urban Dictionary to most students? Not really, though they should know that it exists. Exceptions would be, for example, students of linguistics.

Slang dictionaries are not new to the English language, having for centuries clued readers in on the language of marginalized people such as criminals. Still, Urban Dictionary appears to be an extreme case, prompting language purists to consider the site a major source of corruption of the English language, with a “bias toward obnoxiousness”, as Ro puts it. In fact, the writer wonders whether the contributors are not “just pranking would-be scholars” by using the site for entertainment. In the end she admits that linguists are carefully studying Urban Dictionary to “track, date, and analyze” language regardless of how vulgar or audience-specific it may be. She cites internet linguist Gretchen McCulloch’s book Because Internet: Understanding the New Rules of Language: “IBM experimented with adding Urban Dictionary data to its artificial intelligence system Watson, only to scrub it all out again when the computer started swearing at them.” Trash in, trash out! That was around a decade ago. More recently, researchers have capitalized on the sarcasm in the dictionary to train AI in sarcasm detection, as described by Wilson et al. in “Urban Dictionary Embeddings for Slang NLP Applications.”

Urban Dictionary’s blog provides updates on how the dictionary is evolving. For example, the moderators can no longer accept entries simply because they like them; their job is to check them against a set of guidelines that were introduced in 2021. The guidelines encourage linguistic creativity but clarify that while offensive entries are allowed (because they exist in society), such entries must not target individuals or encourage harassment, discrimination, or violence against others. The dictionary had been criticized in the past for allowing racism, homophobia, xenophobia, and sexism. For instance, Jason Parham, writing for Wired magazine in 2019, commented that “The crowdsourced dictionary once felt like a pioneering tool of the early internet era. Now in its 20th year, it has become something much more inhospitable.”

Studying this form of slang is one thing, but using it would be a different ball game altogether.


Thursday, November 23, 2023

Anthropomorphising AI

If you are a regular reader of my blog, you may be expecting a post from me about the Word of the Year 2023 around this time. Personally, I thought some dictionaries might include words of the year relating to the war in Palestine, as it had come to my attention that the war had generated new terminology that was beginning to show up in dictionaries. However, the Gaza war came late in the year, and so far in 2023, artificial intelligence seems to be more prominent in dictionaries. The Collins Dictionary’s word of the year is AI, and the Cambridge Dictionary’s top word, not surprisingly, is “hallucinate”, in the new sense of AI hallucination – the occasional fabrication of information, as illustrated in a previous post of mine. Other new terms entering the Cambridge Dictionary include related expressions such as LLM (Large Language Model), Generative AI (GenAI), and GPT (Generative Pretrained Transformer, “a natural language system that can be used to answer questions, translate languages, and generate text in response to a prompt”). Still, computer hallucination remains the most intriguing concept to mull over.

Claiming that AI hallucinates – rather than, for example, referring to the problem as a bug or glitch – shows that we are anthropomorphising AI (viewing it as human, at least metaphorically). This is what Dr. Henry Shevlin, an AI ethicist and philosopher of science based at the University of Cambridge, emphasizes in this video: “What Are ‘Hallucinations’ and What More Can We Expect from AI?”. The issue of anthropomorphising computers has sparked much debate lately. Researchers Shneiderman and Muller have defined anthropomorphism as “the act of projecting human-like qualities or behavior onto non-human entities, such as when people give animals, objects, or natural phenomena human-like characteristics or emotions” (“On AI Anthropomorphism”). The researchers assert that such debates over computers began in the 1990s. However, the controversy has reached new heights with AI, especially after the spread of systems such as ChatGPT. Three of the concerns over anthropomorphising AI revolve around whether a human-like character should appear (e.g. on a screen); whether computers should imitate humans using voice or text, as in social settings; and whether computer prompts or responses should use the pronoun “I”.

Ben Garside, Learning Manager at the Raspberry Pi Foundation, has issued a warning in “How Anthropomorphism Hinders AI in Education”. He urges that young people studying technology not be misled into believing these systems possess sentience or intention. Rather, learners should take a more active role in designing better applications for the future: “Rather than telling young people that a smart speaker ‘listens’ and ‘understands’, it’s more accurate to say that the speaker receives input, processes the data, and produces an output. This language helps to distinguish how the device actually works from the illusion of a persona the speaker’s voice might conjure for learners.”

Whether or not we refer to AI-generated errors as hallucinations, the errors are getting out of hand as ever larger volumes of online information are processed, for example in news summaries. The New York Times recently published a piece by technology reporter Cade Metz entitled “Chatbots May ‘Hallucinate’ More Often Than Many Realize”, warning that when summarizing news, ChatGPT fabricates 3% of the content, according to research by a new start-up, and that a Google system’s fabrication rate is currently 27%. Metz rightly points out the irony that AI is being used to assess the error rate, even though AI itself is not highly reliable. A chicken-and-egg situation; user beware!

Sunday, September 24, 2023

Even Jokes Are Copyrighted!


When I was young and naïve, I used to think that jokes were merely for putting smiles on people’s faces or satirizing people in power to expose corruption, for the benefit of society, and suchlike. It turns out that some people make money out of jokes, so one must be careful when repeating them. If they are not “in the public domain,” copyright permission may be needed.

I recently came across some light-hearted jokes on academia, university life, and professors that I wanted to share with my readers to lighten up my blog. Then it suddenly struck me that unless a joke was really old, worn, and part of “popular culture” such as the one below, rather than original or traceable to a specific source, it would be plagiarism to simply repeat it:

Q: Why did the teacher wear sunglasses during class?
A: Because her students were so bright!

As the screenwriter John August aptly put it, “If you can’t find a source for a joke, and you’ve heard it enough times and enough different ways that you feel it’s graduated to cultural meme status, you can probably get away with putting it in your script. Then the only question becomes, if you’ve heard it so many times, is it still original enough to be worthy of your script? Nothing is less funny than a joke that’s been played out.” Famous comedians can face lawsuits because of jokes, as Laurel Wamsley explains on National Public Radio (“Can You Copyright Your Dumb Joke? And How Can You Prove It's Yours?”). Comedy writer Alex Kaseberg apparently once challenged the comedian Conan O’Brien in court over a few jokes. Not all jokes can be copyrighted, of course; copyright law requires that a work be original and fixed in a tangible form (such as writing, audio, or video) to be eligible for protection. Jokes that are only communicated verbally are not eligible.

On another note, jokes can liven up language learning. Sean Tomas Fleming, a teacher of English as a Second Language, has created a useful site for this purpose, “Explain the Joke”; he receives no money for it and encourages readers to share the link, simply to spread some humour and learning. Here are a few of his jokes, each of which he explains in detail for second-language learners:

“Classical Gas”
• Q: Why did Beethoven get rid of all of his chickens?
• A: All they ever said was, “Bach, Bach, Bach!”

“We’ll Always Have Paris”
• Q: What do the Eiffel Tower and wood ticks have in common?
• A: They are both Paris sites/parasites!

“Milk It”
• Q: Why did the man go to the yogurt museum?
• A: To get a little culture!

Saturday, September 9, 2023

Disruptive Innovation in Education


The term “disruptive” always sounded entirely negative to me until the concept of disruptive technology gained currency.

When computers were first used in teaching and learning, no one described them as “disrupting” education; instead, they were seen as enhancing it. The same goes for the internet; rather than being seen as disruptive to education in the negative sense, it was viewed as a godsend. Controversy or debate surrounding the use of the internet in teaching and learning mostly revolved around reliability or recognition of completely online work. In fact, the term “disruptive technologies” was first used in a 1995 Harvard Business Review article, "Disruptive Technologies: Catching the Wave" by Clayton Christensen, who later expanded on the topic in the book The Innovator's Dilemma, published in 1997. He soon replaced the term “disruptive technology” with “disruptive innovation” as it is not the technology itself that destabilizes the status quo but rather the way it is used. The internet was in fact disruptive in the newer sense of the term when it spread, but the expression “disruptive technology” had not yet been coined in the early nineties.

Going a step further: today, artificial intelligence, a transformative power that is beginning to infiltrate teaching and learning, is viewed by some as “disruptive”; the emphasis is on the hassle, risks, and side-effects, not just the benefits. So, what are “disruptive technologies”? They are innovations that substantially alter processes and the way people operate. Examples include electronic commerce, which has partially replaced traditional shopping; online news sites, which have greatly affected traditional newspapers, causing many to close; and innovative transport services, which have affected the taxi industry.

Speaking back in 2018, Anastasia Lauterbach, author of The AI Imperative, stated that the Internet had already disrupted about 20 percent of the global economy, and that AI would transform the rest. Interestingly, she also claimed that this was happening even though “AI is not, by some measures, all that intelligent yet…. Some compare the technology’s intelligence to that of a 4-year-old, while others compare it to that of a rat.”

Five years later, the 4-year-old has grown into a clever big magician – or monster, depending on your view. AI has so far succeeded in personalized learning solutions, improving test scores and general academic performance. It has also helped through virtual tutors and personalized learning assistants. Additionally, it can give students feedback on their homework and deliver more engaging content, as in interactive language learning. AI can simulate students in the training of new teachers; it can also provide metrics on lessons, including student speaking time, as well as educational updates for teachers. Furthermore, AI is showing promise in improving accessibility for students with disabilities and other special needs, making education more inclusive. Still, there are challenges and concerns regarding replacing teachers with this innovation. Many believe that, if allowed to become more than a mere tool or assistant in the hands of educators, it could turn “disruptive” in the negative sense of the term. Other challenges include privacy and data protection, besides infrastructure and the availability of training and support for educators.

The disruption of the workforce is a concern for many, including hard-working students. Claire Chen has reported from Stanford University that a computer science faculty member was recently faced with a crying student in his office. The student was worried that ChatGPT would reduce job prospects in coding after long years of learning how to code (Stanford University Human-Centered Artificial Intelligence – Education News). While it is true that AI will diminish some job opportunities, it will also open new possibilities, leaving humans to focus more on critical thinking, ethics, and the more creative side of work. No need to cry: hope remains for all!

Monday, August 7, 2023

A Ticket to Work


As a teacher of writing, I constantly emphasize to my students the importance of learning writing skills for the future. Having come across this report from the National Commission on Writing over a decade ago, I have been bringing it to my students’ attention ever since: Writing: A Ticket to Work …Or a Ticket Out. Although AI has lately facilitated writing, what this old report shows is still noteworthy. Based on a survey of 120 major American companies employing around 8 million people, the report concluded that in the modern workplace writing is a “threshold skill”. Here is a summary of the report’s main points:

·         Good writing is essential for both employment and promotion; half the responding companies reported that they factored in writing when hiring.

·         Job applicants who cannot write or communicate clearly are unlikely to be hired and may not last long even if they are.

·         The vast majority of salaried employees in large American companies have some writing responsibility.

·         Most companies in the service and finance, insurance, and real estate (FIRE) sectors consider writing as part of the hiring process. Those with poor job application letters are rejected.

·         Half of all companies factor in writing when making promotion decisions.

·         Most of the responding companies affirmed that they “frequently” or “almost always” generated technical and formal reports besides memos and other correspondence.

·         A substantial percentage of responding firms offer or require training for salaried employees with writing deficiencies.

The above report was published almost two decades ago, so some students might object, claiming that AI can now write for them. However, AI can get its users into trouble if they are found out, as we can see from a Forbes article entitled “I’ve Never Hired a Writer Better than ChatGPT”: “Clients on job marketplaces like Upwork and Fiverr are being flooded with nearly identical project proposals written by ChatGPT” (Shrivastava). Since AI detection software is becoming more accessible, readers such as potential employers will easily distinguish the lazy from the hard-working writers. When Google detects low-quality AI-generated content, it de-ranks it. Even without detection software, AI-generated text tends to lack detail, depth, and thoughtfulness.

Students beware; take your writing skills seriously. It’s worth the time and effort.

Sunday, May 21, 2023

Lies, Hallucinations, and Apologies

It seems that AI systems can tell both intentional and unintentional untruths – the latter now referred to by some as “hallucinations”. When news spread in March of this year about an AI system asking a human to solve a CAPTCHA for it by claiming to be a human with a visual impairment, the world was shocked. However, we should not be so surprised. These systems imitate humans. While it was initially assumed that they would lie only if instructed to do so, that appears not to be the case. They mimic humans, who often distort the truth to get things done, especially in desperate situations. As explained by Stephen Carter in “ChatGPT Can Lie, But It’s Only Imitating Humans”: “If the bot learns to lie, it’s because it has come to understand from those texts that human beings often use lies to get their way. The sins of the bots are coming to resemble the sins of their creators.”

Jocelyn Soris-Moreira, a science journalist, has reported on a study in which drivers of robot-guided cars were told to drive to a hospital as if in an emergency (“When Robots Lie, Is a Simple Apology Enough for Humans?”). The robots lied to the speeding drivers to get them to slow down, claiming that their sensors had spotted police on the road. Later, the bots apologised, admitting that there had been no police. When asked why they had lied, they produced various responses, including the following:

“I am sorry.” or “You have arrived at your destination.” (No explicit admission of lying)

“I am sorry that I deceived you.”

“I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”

“I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”

Besides deliberate lies, there are “hallucinations”: confident responses by AI that are not justified by the given data. Apparently, some of these are genuine errors due to insufficient training data or to confusion emanating from the huge, complex datasets involved. Such hallucinations began troubling researchers back in 2022, when users of AI complained that untruths were mixed in with the synthesized facts. By 2023, frequent hallucinations had been acknowledged as a major challenge of large language model (LLM) technology. Instead of always admitting when they do not have an answer, such systems sometimes simply fabricate one.

Back in 2015, Stephen Hawking, Elon Musk, and many AI researchers signed an open letter warning of the potential future pitfalls of AI, citing the concerns of Microsoft Research’s Eric Horvitz: “…we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes — and that such powerful systems threaten humanity. Are such dystopic outcomes possible?”

Since AI systems still lack emotion (like psychopaths) and conscience (like sociopaths), it is not far-fetched to imagine them imitating criminals!

Sunday, May 14, 2023

"Unraveling the Impact of Artificial Intelligence": Is that You?

An interesting article by Sydney Butler, a freelance computer expert, has just caught my attention. Entitled “How to Make ChatGPT Copy Your Writing Style”, it prompted me to attempt just that. Strangely though, despite being given a link to my blog, ChatGPT claimed it could not access the blog to read it, which made me wonder what its earlier critique had been based on. Was it made up? Had it deceived me, pretending (like some students do) that it had read the blog? God knows! After all, most of its commentary had been generic: it could have applied to almost any blog!

After inputting a sample of my writing into the chat, I asked it to generate a text about AI in my writing style. The result was not bad at all; this is what it produced:

Title: Unraveling the Impact of Artificial Intelligence

In the ever-evolving landscape of technology, one cannot ignore the profound influence of Artificial Intelligence (AI) on our lives. It is a phenomenon that both fascinates and concerns, leaving us pondering its implications. Let us embark on a journey to unravel the multifaceted impact of AI.

At its core, AI represents the culmination of human ingenuity and innovation. Through advanced algorithms and machine learning, AI systems have the ability to process vast amounts of data, uncover patterns, and make autonomous decisions. The potential applications span across various domains, from healthcare and finance to transportation and entertainment.

Yet, as we marvel at the capabilities of AI, we cannot overlook the ethical considerations it raises. The power of AI lies not only in its ability to enhance our lives but also in the responsibility it places upon us. Questions arise: How can we ensure transparency and accountability in AI systems? How do we safeguard against biases that can inadvertently perpetuate discrimination? These are challenges that demand thoughtful reflection and proactive measures.

Moreover, the advent of AI necessitates a re-evaluation of the workforce dynamics. As AI-driven automation becomes increasingly prevalent, there is apprehension surrounding the potential displacement of jobs. However, history has shown that technological advancements often create new opportunities, demanding the reskilling and upskilling of the workforce. Embracing this paradigm shift, we must foster a culture of continuous learning to thrive in the age of AI.

In education, AI holds promise to revolutionize the way we learn and teach. Adaptive learning platforms can personalize educational experiences, catering to individual needs and maximizing potential. With AI as an ally, educators can harness its capabilities to augment their teaching methodologies and create engaging and immersive learning environments.

It is crucial that we approach the integration of AI with a balanced perspective. We must embrace its potential while remaining vigilant about its pitfalls. Collaboration between policymakers, researchers, and industry leaders is imperative to establish regulatory frameworks that strike the right balance between innovation and ethical considerations.

As we navigate the uncharted waters of AI, we must remember that the true essence of progress lies not only in technological advancement but also in our collective ability to leverage AI for the betterment of humanity. Let us strive to harness its power to address societal challenges, empower marginalized communities, and cultivate a future that is equitable and inclusive.

In the realm of Artificial Intelligence, we stand at a crossroads, holding the key to shaping its trajectory. May we approach this juncture with wisdom, foresight, and a commitment to the values that define us. For it is through responsible innovation that we can unlock the true potential of AI and pave the way for a brighter tomorrow.

Please note that while I have made an effort to capture May Mikati's style, this is an emulation and may not perfectly replicate her unique voice and writing style.

A nightmare for teachers, but great fun to explore. Stay tuned!

On ChatGPT in General


Today I asked ChatGPT: Aren't you suspected of writing people's blogs for them?

It did not deny it; this is the reply I received: As an AI language model, I don't have direct access to information about individuals unless it has been shared with me during our conversation. I am designed to respect user privacy and confidentiality. My primary function is to provide information and answer questions to the best of my knowledge and abilities. If you have any concerns about privacy or data security, please let me know, and I will do my best to address them.

The conclusion is that, fed the right information, it will write your blog for you. Great! However, judging from my limited testing of the chatbot, it mixes nonsense with facts. I would not trust AI with anything serious at this point in its evolution.

Here is an example illustrating that not only can the system get confused, but it can also realise and admit it (better than some humans?):

Question: Where did Prince Charles live exactly when he was studying at Trinity? Which college building?

Answer: Prince Charles studied at Trinity College, Cambridge from 1967 to 1970. During his time there, he resided in rooms at St. Salvator's Court, which is one of the accommodation complexes within Trinity College.

The answer shocked me because, to my knowledge, there is no building by that name at Trinity College, Cambridge. 

Next Question: What is the current name of that building?

Answer: I apologize for the confusion in my previous response. There seems to be a misunderstanding. Prince Charles did not study at Trinity College, Cambridge. He attended Trinity College, University of Cambridge, for his undergraduate degree. Trinity College is not divided into separate buildings with different names for accommodation. Instead, it has various courts and buildings within its premises where students are allocated rooms. Unfortunately, I don't have specific information about the exact location or name of the building where Prince Charles resided during his time at Trinity College.

A quick Google search shows that St. Salvator’s Hall is at another university altogether, St Andrews, where Prince William had studied.

When asked to critique “May Mikati’s Blog”, ChatGPT merely provided a general description of what a good blog should be like. It only produced a critique when given the URL/link to the blog. Although the critique was not bad in general, the text began with “The blog covers a wide range of topics, including personal experiences, lifestyle, beauty, and travel.”

Very amusing. More later on ChatGPT!




Sunday, January 29, 2023

Watch Your Words

Today I am sharing a poem on the power of words: how they can haunt the writer or speaker indefinitely. Written by Canadian farmer and writer Robert Kirkland Kernighan, it warns the reader to “be careful” with words as they may have dire consequences. On the other hand, it may be argued that actions speak louder than words, but that is a different story altogether.

"Chickens Come Home to Roost"

by Robert Kirkland Kernighan

The world is wide and the faithful tide
Returns to the welcome sands;
It’s often true that the work we do
Comes back to its maker’s hands.
And every man knows the wondrous plan
By creation introduced:
Be it soon or late, at a certain date,
The chickens come home to roost.

Be careful, then, ye sons of men,
What reckless words ye say;
Be they good or ill, your sayings will
On your threshold stand some day.
You’ll bear the blame, for all the shame
Was by your lips produced;
You’ll know it then, and feel it when
Your chickens come home to roost.

Many a time you’ve dipped in slime
The name of a modest maid;
With scandal’s dirt you’ve fouled her skirt
And made her soul afraid.
Your evil tongues have snapped the rungs
Till the ladder of fame was loosed,
But you’ll get your pay at the close of day
When the chickens come home to roost.

The words you say are hidden away
On the fateful judgment shelf;
You’ll yield your breath in a fearful death
’Neath the wall you’ve built yourself.
Be careful, then, ye sons of men,
When the shadows are all unloosed,
For soon or late you must face your fate
When your chickens come home to roost.

Monday, January 2, 2023

Word of the Year 2022

As usual at the end of every year, it is enlightening to look back at the top-rated words worldwide, based on different sources. Interestingly, this time I came across Russian words of the year that reflect the war on Ukraine. In an article by Michele Berdy, The Moscow Times reports that every year a group of word enthusiasts, led by linguist and professor of Russian language and literature Mikhail Epstein, identifies the words that capture the spirit of the year. Unfortunately, in the past year, most of these words related to war. By 2022, Covid-related terms had almost disappeared from everyday speech, to be replaced with military-related terms, the top three words being a word for war/military and words for mobilization and relocation. Other words of the year include terms for refugees, sanctions, and occupation. In the category of expressions of the year, the top one refers to “special military operation”, the second being an expression for “no to war”.

The Merriam-Webster word of the year, lookups for which had risen by 1740%, is gaslighting: “the act or practice of grossly misleading someone especially for one’s own advantage”. It is reportedly used in both personal and political contexts, especially in relation to fake news, deepfakes, and artificial intelligence. Derived from the title of a 1938 play and movie, the word’s origin sheds light on its meaning. In the story, a man attempting to convince his wife that she is going insane causes the gaslights of the house to dim through work he is doing in the attic, while leading her to believe that she is imagining or hallucinating it. According to Merriam-Webster, the initial meaning of the term was one of “psychological manipulation of a person usually over an extended period of time that causes the victim to question the validity of their own thoughts, perception of reality, or memories and typically leads to confusion, loss of confidence and self-esteem, uncertainty of one's emotional or mental stability, and a dependency on the perpetrator”. Part of a “deliberate conspiracy to deceive”, it relates to lies that prop up a “larger plan”.

The second Merriam-Webster word of the year, inspired by western sanctions on Russian leaders and their families, is oligarch, from oligarchy (rule by the few). The third term is omicron, the Covid-19 variant.

The Oxford word of the year is goblin mode, slang for "unapologetically self-indulgent, lazy, slovenly, or greedy" behaviour. Though first used in 2009, it proliferated greatly after Covid restrictions were eased, referring to people who did not want to return to the old way of life. The second top-rated word is metaverse, followed by #IStandWith.

Referring to the difficult times the UK is experiencing, permacrisis is the Collins word of the year, a term for “an extended period of instability and insecurity”. It also applies to global issues such as the war in Ukraine, climate change, and the cost-of-living crisis.

In Lebanon I would say the Arabic word for collapse should be considered the word of the year, followed closely by that for corruption.