Human extinction by AI ‘not that concerning… at least for now’: Expert
While rebuffing concerns about AI’s existential threats, expert Gary Marcus believes that society should focus on genuine risks.
Since its launch in November, the language model ChatGPT has been adopted by millions and has advanced faster than top industry experts predicted. Against this backdrop, AI expert Gary Marcus has warned against the technology's breakneck pace of development and adoption.
The New York University emeritus professor said, as quoted by AFP, that the technology's existential threats may currently be "overblown."
"I'm not personally that concerned about extinction risk, at least for now, because the scenarios are not that concrete," said Marcus in San Francisco.
"A more general problem that I am worried about... is that we're building AI systems that we don't have very good control over and I think that poses a lot of risks, (but) maybe not literally existential," he added.
In March, Marcus, Elon Musk, Steve Wozniak, Andrew Yang, and more than 1,000 artificial intelligence experts, researchers, and backers joined a call for an immediate pause on the creation of “giant” AIs for at least six months.
Some 1,124 people signed the open letter, which points to OpenAI's GPT-4 as a red flag. The company boasts that its latest model is more accurate, more human-like, and capable of analyzing and responding to images. It even passed a mock bar exam.
However, Marcus did not sign the more succinct declaration that caused a stir last week, issued by corporate leaders and specialists including OpenAI CEO Sam Altman.
'An escalation that winds up in nuclear war'
Global leaders should be working urgently to reduce "the risk of extinction" from artificial intelligence technology, the signatories insisted.
"If you really think there's existential risk, why are you working on this at all? That's a pretty fair question to ask," Marcus said.
Instead of focusing on more improbable situations in which no one survives, Marcus believes that society should focus on genuine risks.
"People might try to manipulate the markets by using AI to cause all kinds of mayhem and then we might, for example, blame the Russians and say, 'look what they've done to our country' when the Russians actually weren't involved," he continued.
"You (could) have this escalation that winds up in nuclear war or something like that. So I think there are scenarios where it was pretty serious. Extinction? I don't know," he added.
Democracy in danger
The psychology expert is concerned about democracy in the short run.
Generative AI software produces increasingly persuasive fake photographs, and soon videos, at little cost.
As a result, "elections are going to be won by people who are better at spreading disinformation, and those people may change the rules and make it really difficult to have democracy proceed."
Moreover, "democracy is premised on having reasonable information and making good decisions. If nobody knows what to believe, then how do you even proceed with democracy?"
He also believes that "there's going to be some harm along the way and we really need to up our game, we have to figure out serious regulation."
"The last several months have been a real reminder that the big companies calling the shots here are not necessarily interested in the rest of us," he warned.