Guterres endorses idea of global AI watchdog like nuclear agency
Artificial intelligence has been a subject of contention recently over its super capabilities and possible dangers.
UN Secretary-General Antonio Guterres making a statement outside the Security Council at UN headquarters on March 14, 2022 (AP)
On Monday, United Nations Secretary-General Antonio Guterres endorsed a proposal by some artificial intelligence executives to establish a global AI watchdog body similar to the International Atomic Energy Agency (IAEA).
Since ChatGPT launched six months ago and became the fastest-growing app of all time, generative AI technology, which can produce authoritative-sounding text from simple prompts, has fascinated the public. Concerns have also been raised about AI's potential to generate deceptive images and pose other dangers.
Last month, the Center for AI Safety released a statement that warned artificial intelligence (AI) technology should be classified as a societal risk and put in the same class as pandemics and nuclear wars.
Guterres told reporters that the warnings are loudest among the developers who designed AI and that "we must take those warnings seriously."
He announced plans to begin work by the end of the year on a high-level AI advisory body that would regularly review AI governance arrangements and offer recommendations on how to align them with human rights, the rule of law, and the common good. He added that he would be open to the idea of an agency "inspired by what the international agency of atomic energy is today."
Guterres called the model "very interesting" but emphasized that "only member states can create it, not the Secretariat of the United Nations."
The Secretary-General revealed a plan to appoint a scientific advisory board of experts and scientists.
Last month, doctors and health specialists warned that AI development should be halted unless it is regulated. In the same month, a key committee of European Parliament legislators approved a first-of-its-kind AI law that takes a risk-based approach, imposing obligations proportionate to the level of danger posed by a system and establishing requirements for providers of so-called "foundation models" such as ChatGPT.