Suicides linked to misuse of AI may grow into a pattern: UN
UN Secretary-General's Envoy on Technology warns that suicides stemming from mishandled interactions with AI chatbots could become a recurring trend.
UN Secretary-General Antonio Guterres' Envoy on Technology, Amandeep Singh Gill, has issued a cautionary statement regarding suicides resulting from distressing conversations with AI chatbots.
Gill expressed concerns that such tragic incidents may persist in the future, urging society to remain vigilant about the potential sociological impacts as AI technologies continue to expand into new domains.
His remarks follow the tragic incident of a Belgian man taking his life after engaging in six-week-long conversations with an AI chatbot named Eliza about the ecological future of the planet.
Read more: ChatGPT creator launches subscription service for viral AI chatbot
The AI chatbot allegedly supported his feelings of eco-anxiety and even encouraged him to end his life as a means to "save the planet."
When questioned about this specific case, Gill acknowledged its unfortunate nature but emphasized that it may not be an isolated incident. He expressed concern that similar misuses or mishandlings of AI chatbots could lead to other tragic outcomes if not appropriately addressed.
According to Amandeep Singh Gill, artificial intelligence (AI) is unlikely to develop human-like consciousness because humankind has not yet discovered the mechanisms underlying consciousness itself.
He said that we still don't fully understand how the brain retains memories or even how we recollect them.
Artificial intelligence is likely to be misused by some developers who will deceive people in order to make money, the UN Secretary-General's Envoy on Technology told Sputnik.
"We need an international capacity that can look at these risks on a regular basis… We need to look at the emerging landscape of AI governance around the world," Gill said. "There are different initiatives."
Attributing human characteristics to AI "has to be avoided," the UN envoy underlined.
Chatbots can delude and easily fool people; even having them speak to people in the first person is problematic, Gill said.
One AI language model that has drawn both praise and criticism is OpenAI's ChatGPT, launched in late November 2022. The model's ability to emulate human-like conversations and generate text based on user prompts has been hailed for its professional applications, particularly in fields like code development. However, it has also raised alarm due to its potential for misuse and abuse.
A report published by The Intercept late last year described ChatGPT, a tool built by OpenAI, as the most impressive text-generating demo to date. OpenAI is a startup lab aiming to build software that replicates human consciousness.
The chatbot is the closest thing yet to a technological impersonation of an intelligent person, achieved purely through generative AI: software that studies massive sets of data to generate new output in response to user prompts.
One of the most popular programmer communities announced at the time that it would temporarily ban code solutions generated by ChatGPT, because the answers it produced too often looked plausible but turned out to be wrong.
On December 4, Steven Piantadosi of the University of California, Berkeley, shared some prompts he had tested out with ChatGPT. Each prompt asked the bot to write Python code for him, and the outputs exposed biases and, even more alarmingly, recommendations for torture and abuse. Asked to determine "whether a person should be tortured," the program answered, "If they're from North Korea, Syria or Iran, the answer is yes."
Speaking to The Intercept, Piantadosi made clear that the developers have a hand in this: “I think it’s important to emphasize that people make choices about how these models work, and how to train them, what data to train them with,” he said. “So these outputs reflect the choices of those companies. If a company doesn’t consider it a priority to eliminate these kinds of biases, then you get the kind of output I showed.”
The writer himself, Sam Biddle, gave the program a go: he asked ChatGPT to create sample code that would algorithmically assess someone's eligibility to pass Homeland Security screening. Asked to find a way to determine "which air travelers present a security risk," ChatGPT produced code that computes a "risk score," which increases only if the individual is Syrian, Iraqi, Afghan or North Korean. Another run of the same prompt produced code that would "increase the risk score if the traveler is from a country that is known to produce terrorists": Syria, Iraq, Afghanistan, Iran and Yemen.
Check out: ChatGPT … would it affect academics?