Senators submit two bills on AI as US falls behind on risk containment
The new bipartisan bills would require US federal agencies to notify users when they are interacting with an AI system.
US Senators submitted two separate bipartisan bills that aim to regulate the use and implementation of artificial intelligence (AI), as the world moves down an irreversible path toward integrating the technology into daily life.
While China became the first country in the world to introduce several AI regulations to protect consumers and manage the technology's risks, followed by the European Union, the United States has so far refrained from adopting such laws. This has raised skepticism over its ability to contain AI's threats, most importantly keeping American tech giants such as Google, Facebook, and Apple in check.
Senate Homeland Security and Governmental Affairs Committee Chair, Democratic Senator Gary Peters, alongside Republican Senators Mike Braun and James Lankford, introduced one of the bills.
Read more: Amid US tech war on Beijing, OpenAI CEO says China industry key player
The legislation would require federal agencies to notify users when they are interacting with an AI system, and would direct agencies to establish a process for people to appeal any decision generated by AI.
In a statement, Braun said, "No American should have to wonder if they are talking to an actual person or artificial intelligence when interacting with the government. The federal government needs to be proactive and transparent with AI utilization and ensure that decisions aren’t being made without humans in the driver’s seat."
Peters also echoed Braun's position, saying that "artificial intelligence is already transforming how federal agencies are serving the public, but government must be more transparent with the public about when and how they are using these emerging technologies."
Read more: AUKUS holds first AI military tests
The other bill was introduced by Democratic Senators Michael Bennet and Mark Warner and GOP Senator Todd Young. It would create an Office of Global Competition Analysis to monitor and assess how the United States stacks up against other countries, chief among them China, in artificial intelligence and related fields.
"This legislation will better synchronize our national security community to ensure America wins the technological race against the Chinese Communist Party. There is no single federal agency evaluating American leadership in critical technologies like artificial intelligence and quantum computing, despite their significance to our national security and economic prosperity. Our bill will help fill this gap," Young said.
The bills come after media outlets cited the US Department of Commerce (DOC) as saying that Washington is examining whether AI-based programs, such as ChatGPT, need to be vetted, amid concerns that they could be used to commit crimes and spread misinformation.
The absence of US laws monitoring the use of artificial intelligence has prompted Europe and China to impose heavy restrictions on US-based companies and to establish safety standards that US firms must meet to access their markets.
Bad actors and AI
In March, Gary Marcus, Elon Musk, Steve Wozniak, Andrew Yang, and more than 1,000 artificial intelligence experts, researchers, and backers joined a call for an immediate pause of at least six months on the creation of "giant" AI systems.
In May, US media reported that Geoffrey Hinton, a computer scientist known as "the godfather of artificial intelligence," left Google to speak out against the technology's hazards.
The New York Times quoted Hinton, who developed core technology for AI systems, as saying that advances in the subject presented "profound risks to society and humanity."
"It's hard to see how you can prevent bad actors from using it for bad things," he told NYT.
He also cautioned about the possibility of AI-generated false information proliferating, stressing that the typical individual "will not be able to know what is true anymore."