US unveils national security plan to step up use of AI
The United States instructed the Pentagon and intelligence agencies on Thursday to increase their use of artificial intelligence to improve national security, the first such policy to confront challenges from competitors such as China.
According to officials, the new National Security Memorandum, published a year after President Joe Biden signed an executive order governing AI, aims to strike a balance between employing the technology to oppose rivals' military uses and establishing safeguards to protect civil liberties.
"This is our nation's first-ever strategy for harnessing the power and managing the risks of AI to advance our national security," National Security Advisor Jake Sullivan said during an address at Washington's National Defense University.
"We have to be faster in deploying AI in our national security enterprise than America's rivals are in theirs. They are in a persistent quest to leapfrog our military and intelligence capabilities."
'Imperative to accelerate AI adoption in national security'
A senior Biden administration official told reporters that the US intends to create national security AI applications in areas like cybersecurity and counterintelligence to reduce the possibility of a "strategic surprise" from its enemies.
"Countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities," the official added. "It's particularly imperative that we accelerate our national security community's adoption and use of cutting-edge AI capabilities to maintain our competitive edge."
Last October, Biden directed the National Security Council and the White House chief of staff to create the memorandum as part of an executive order aimed at positioning the United States to "lead the way" in global efforts to handle AI dangers.
The White House described the directive as a "landmark" action, directing federal agencies to establish new safety criteria for AI systems and requiring developers to share safety test findings and other key information with the US government.
AI military and intelligence rivalry
US authorities believe that fast-advancing AI technology will intensify military and intelligence rivalry among nations.
According to a second administration official, American security services have been directed to gain access to the "most powerful AI systems," which would require significant procurement efforts.
The official told reporters, "We believe that we must out-compete our adversaries and mitigate the threats posed by adversary use of AI." Most of the memorandum is public, but a classified annex primarily addresses threats from adversaries. The document, he added, aims to guarantee that the government is "accelerating adoption in a smart, responsible way."
Along with the initiative, the government intends to produce a framework document outlining "how agencies can and cannot use AI," according to the official.
In July, more than a dozen civil society organizations, including the Center for Democracy & Technology, sent an open letter to Biden administration officials, including Sullivan, urging that rigorous protections be built into the document to safeguard civil rights.
The letter notes that despite promises made, "little is known about the AI being deployed by the country's largest intelligence, homeland security, and law enforcement entities like the Department of Homeland Security, Federal Bureau of Investigation, National Security Agency, and Central Intelligence Agency."
"Its deployment in national security contexts also risks perpetuating racial, ethnic or religious prejudice, and entrenching violations of privacy, civil rights and civil liberties," the letter adds.
Former NSA chief appointed to OpenAI board: Responsible Statecraft
Back in July, an article by Responsible Statecraft indicated that artificial intelligence research organization OpenAI appears to be inching closer to the military-industrial complex with the appointment of newly retired US Army General and former National Security Agency (NSA) Director Paul M. Nakasone to its board of directors.
Nakasone was appointed to the organization's Safety and Security Committee, where he advises on security-related decisions, in an evident attempt to re-establish a safety-forward reputation amid growing wariness of AI technology.
The military-industrial complex
With a 38-year military career, including five years heading US Cyber Command, the former NSA director bridges the worlds of military defense and intelligence and private technology companies.
This phenomenon in Big Tech creates conflicts of interest and massive contracts which, according to an April 2024 Costs of War report, totaled "at least $53 billion combined" from 2019 to 2022. OpenAI, for example, is currently collaborating with the Pentagon on cybersecurity tools and on efforts to prevent veteran suicide.