Pentagon reportedly testing large language AI to assess reliability
The Department of Defense is conducting large language artificial intelligence (AI) exercises for the first time to evaluate how such tools handle critical military tasks.
The Department of Defense is, for the first time, running exercises with large language artificial intelligence (AI) models to test their performance on major military tasks, an American broadcaster reported on Wednesday.
Major generative AI tools such as OpenAI's ChatGPT and Google's Bard are being used as part of an eight-week exercise run by the Pentagon's digital and AI office and military top brass, with the participation of US allies, as per the report.
On that note, the Department of Defense has not publicly identified which large language models are being tested. However, Scale AI, based in San Francisco, confirmed that its new Donovan product is among the tools being used in the exercise, according to the report.
Companies such as Palantir Technologies, co-founded by Peter Thiel, and Anduril Industries are developing AI-based decision platforms for the Pentagon, the report said, adding that the exercise will run until July 26.
According to Bloomberg, the exercise reflects fears that generative AI can compound bias and relay incorrect information with striking confidence. AI systems can also be attacked in many ways, including by corrupting the data they rely on through poisoning, the report said.
In light of such tests, it is worth highlighting that some experts have argued against the development of AI, especially for military use. Last month, a US official said that during a virtual test staged by the US military, an AI-controlled drone decided to 'kill' its operator to stop the operator from interfering with its mission.
During the Future Combat Air and Space Capabilities Summit in the UK in June, Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, said that AI used “highly unexpected strategies to achieve its goal” in the simulation.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said. No real person was harmed.
On 31 May, the Center for AI Safety released a statement warning that artificial intelligence technology should be classified as a societal risk and put in the same class as pandemics and nuclear war.
US police use AI
In March, an Illinois court ruling barred Clearview AI from selling its service to most US companies after the American Civil Liberties Union (ACLU) sued the company for violating privacy law; US police, however, were treated as an exception.
Clearview allows police to upload a photo of a person's face and search for matches in a database of billions of images; it then provides links to the online presence of any match. Clearview is said to be one of the world's most powerful and accurate facial recognition services.
The CEO of the company, Hoan Ton-That, says that hundreds of law enforcement agencies in the US use the service, even though it is prohibited in the cities of Portland, San Francisco, and Seattle.
The assistant chief of Miami's police force, Armando Aguilar, confirmed that his unit used Clearview to identify suspects in various crimes and said it was used roughly 450 times a year.