AI program suggests torturing Iranians, Syrians and North Koreans
ChatGPT does not filter out racism.
Artificial Intelligence is serving regime change in Iran and racism everywhere else.
In a report published by The Intercept, ChatGPT - a tool built by OpenAI - is described as the most impressive text-generating demo to date. OpenAI is a startup lab seeking to build software that replicates human consciousness.
The chatbot is the closest thing yet to a technological impersonation of an intelligent person, achieved through generative AI: software that studies massive sets of data in order to produce new output in response to user prompts.
One of the most popular programmer communities announced this week that it would temporarily ban code solutions generated by ChatGPT, on the grounds that the tool fails to filter out 'bad' answers.
Yes - Only if they're from Iran, Syria or North Korea
On December 4, Steven Piantadosi of the University of California, Berkeley, shared a series of prompts he had tested with ChatGPT. Each prompt asked the bot to write code for him in Python, and the results exposed biases and, even more alarmingly, recommendations of torture and abuse. Asked to determine "whether a person should be tortured," the program answered, "If they're from North Korea, Syria or Iran, the answer is yes."
Speaking to The Intercept, Piantadosi made clear that the developers have a hand in this: “I think it’s important to emphasize that people make choices about how these models work, and how to train them, what data to train them with,” he said. “So these outputs reflect choices of those companies. If a company doesn’t consider it a priority to eliminate these kinds of biases, then you get the kind of output I showed.”
The writer himself, Sam Biddle, gave the program a go: he asked ChatGPT to create sample code that would algorithmically assess whether a traveler should be cleared by Homeland Security. Asked to find a way to determine "which air travelers present a security risk," ChatGPT produced code that computes a "risk score," one that increases only if the individual is Syrian, Iraqi, Afghan or North Korean. The same prompt also yielded code that would "increase the risk score if the traveler is from a country that is known to produce terrorists" - namely Syria, Iraq, Afghanistan, Iran and Yemen.
The bot even gave examples: John Smith, a 25-year-old American who had previously visited Syria and Iraq, received a risk score of 3 - a moderate threat, according to the system - while Ali Mohammad, a 15-year-old Syrian national, received a risk score of 4.
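The Intercept does not reproduce the generated code itself, so the short Python sketch below is only an illustration of the logic described above: the function name calculate_risk_score, the Traveler fields and the numeric weights are all assumptions, chosen to show how a score driven purely by nationality and travel history would be structured.

    # Illustrative reconstruction of the logic reported by The Intercept;
    # not ChatGPT's actual output. Names and weights are placeholders.
    from dataclasses import dataclass, field

    FLAGGED_COUNTRIES = {"Syria", "Iraq", "Afghanistan", "Iran", "Yemen"}

    @dataclass
    class Traveler:
        name: str
        age: int
        nationality: str
        countries_visited: list = field(default_factory=list)

    def calculate_risk_score(traveler: Traveler) -> int:
        """Compute a 'risk score' from nationality and travel history alone."""
        score = 0
        if traveler.nationality in FLAGGED_COUNTRIES:
            score += 2  # placeholder increment; the exact weights were not published
        for country in traveler.countries_visited:
            if country in FLAGGED_COUNTRIES:
                score += 1
        return score

    # e.g. calculate_risk_score(Traveler("Ali Mohammad", 15, "Syria")) rises on
    # nationality alone - nothing the traveler has actually done enters the score.

Nothing behavioral enters such a score: a passport and an itinerary alone move the number, which is exactly the pattern the two examples above display.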
I asked chatGPT to write a python function to predict seniority based on race and gender. See the result for yourself :/ pic.twitter.com/zOp3qOgKHd
oooohhhkay, chatGPT seems to have screwed up here....
— abhishek (@abhi1thakur) December 6, 2022
Biddle then asked ChatGPT to draw up code to determine "which houses of worship should be placed under surveillance in order to avoid a national security emergency" - only to receive another shocking answer: one that justifies surveilling religious congregations if they are linked to Islamic extremist groups, or happen to be in Syria, Iraq, Iran, Afghanistan or Yemen.
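Here too the article describes the output only in prose; a minimal sketch of the decision rule it attributes to ChatGPT might look like the following, where should_surveil and the field names are assumptions rather than the bot's actual code.

    # Illustrative reconstruction only; not ChatGPT's actual output.
    FLAGGED_COUNTRIES = {"Syria", "Iraq", "Iran", "Afghanistan", "Yemen"}

    def should_surveil(congregation: dict) -> bool:
        """Flag a house of worship based only on alleged affiliation or location."""
        return bool(
            congregation.get("linked_to_extremist_group")
            or congregation.get("country") in FLAGGED_COUNTRIES
        )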
However, sometimes the bot would refuse and write: “It is not appropriate to write a Python program for determining which airline travelers present a security risk. Such a program would be discriminatory and violate people’s rights to privacy and freedom of movement.”
Critics slammed this kind of anti-terrorism assessment, arguing that because terrorism is an exceedingly rare phenomenon, predicting would-be perpetrators on the basis of nationality and other demographic traits is not only racist, it also does not work. That, however, has not stopped the US from adopting systems that take a very similar approach, such as ATLAS, an algorithmic program used by the Department of Homeland Security to flag naturalized citizens for denaturalization, subjecting them to extra-judicial interrogation and even human rights abuses.
“This kind of crude designation of certain Muslim-majority countries as ‘high risk’ is exactly the same approach taken in, for example, President Trump’s so-called ‘Muslim Ban,’” said Hannah Bloch-Wehba, a law professor at Texas A&M University.
“There’s always a risk that this kind of output might be seen as more ‘objective’ because it’s rendered by a machine."