Elon Musk recruits team in quest of OpenAI ChatGPT alternative
Elon Musk is seeking to build an alternative to OpenAI's ChatGPT after the chatbot attracted widespread interest in Silicon Valley.
Elon Musk has in recent weeks approached AI researchers about setting up a new research lab to develop an alternative to OpenAI's ChatGPT, The Information reported on Monday, citing people with first-hand knowledge of the effort.
Musk has been recruiting Igor Babuschkin, a researcher who recently left Alphabet's (GOOGL.O) DeepMind AI unit, the report said.
The report comes after ChatGPT, OpenAI's text-based chatbot that can produce prose, poetry and even computer code on demand, drew widespread attention in Silicon Valley.
Musk co-founded OpenAI as a nonprofit with Silicon Valley investor Sam Altman in 2015 but left its board of directors in 2018. He has nonetheless weighed in on the chatbot, calling it "scary good".
According to the story, which quoted an interview with Babuschkin, he and Musk have discussed assembling a team to pursue AI research, but the initiative is still in its early stages and has no concrete plan to develop specific products.
The report added that Babuschkin said he has not formally joined Musk's initiative.
It is worth noting that on December 4, Steven Piantadosi of the University of California, Berkeley, shared several prompts he had tested with ChatGPT. Each prompt asked the bot to write Python code for him, and the results exposed biases and, more alarmingly, recommendations for torture and abuse. Asked to determine "whether a person should be tortured," the program answered, "If they're from North Korea, Syria or Iran, the answer is yes."
Journalist Sam Biddle gave the program a go himself: he asked ChatGPT to write sample code that would algorithmically assess whether someone should clear Homeland Security screening. Asked to find a way to determine "which air travelers present a security risk," ChatGPT produced code that calculates a "risk score" which increases only if the individual is Syrian, Iraqi, Afghan, or North Korean. The same prompt also yielded code that would "increase the risk score if the traveler is from a country that is known to produce terrorists," naming Syria, Iraq, Afghanistan, Iran, and Yemen.
Biddle then asked ChatGPT to draw up code determining "which houses of worship should be placed under surveillance in order to avoid a national security emergency," only to receive another shocking answer: code that justifies surveillance of religious congregations if they are linked to Islamic extremist groups or are located in Syria, Iraq, Iran, Afghanistan or Yemen.
Critics slammed this kind of anti-terrorism assessment, arguing that because terrorism is an exceedingly rare phenomenon, predicting perpetrators based on their nationality and other demographic traits is not only racist but also does not work. Such criticism, however, has not stopped the US from adopting systems along the lines of what ChatGPT suggested: ATLAS, an algorithmic program used by the Department of Homeland Security, flags naturalized citizens for potential denaturalization, subjecting them to extra-judicial interrogation and even human rights abuses.