Google, Microsoft, OpenAI and Anthropic form body to regulate AI
According to the tech giants, the Frontier Model Forum will focus on the "safe and responsible" development of new AI models.
Google, Microsoft, OpenAI, and Anthropic have announced a new industry body to oversee the safe development of the most advanced AI models.
The four influential firms founded the Frontier Model Forum, an organization focused on the "safe and responsible" creation of frontier AI models, meaning AI technology more sophisticated than the models currently available.
Brad Smith, President of Microsoft, said that "Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," adding that the initiative is "a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."
According to the forum members, its primary goals are to advance AI safety research, including guidelines for assessing models; to promote the responsible deployment of advanced AI models; to engage in dialogue with policymakers and academics about AI safety risks; and to support the development of beneficial applications of the technology, such as mitigating climate change and detecting illnesses like cancer.
The members also noted that other firms building frontier models are welcome to join the association.
Frontier models are described as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."
The move by the tech giants comes amid an increased sense of concern over the pace at which AI technologies have been developing as of late.
Representatives of seven major tech companies, namely Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, met with President Joe Biden at the White House on Friday, where they committed to a series of guardrails aimed at regulating the development of artificial intelligence.
In May, the Center for AI Safety warned that AI technology should be classified as a societal risk, on par with pandemics and nuclear war.
Geoffrey Hinton, dubbed the godfather of AI, quit Google in May, citing AI's "existential risk."
UN Security Council to convene historic discussions on risks of AI
The UN High Commissioner for Human Rights recently warned that advances in artificial intelligence pose a grave threat to human rights and called for safeguards to prevent violations.
UN Secretary-General Antonio Guterres' Envoy on Technology, Amandeep Singh Gill, has issued a cautionary statement regarding reported instances of suicide by people distressed after conversing with AI chatbots.
Gill expressed concerns that such tragic incidents may persist in the future, urging society to remain vigilant about the potential sociological impacts as AI technologies continue to expand into new domains.