Europe may introduce first AI law
A key committee of European Parliament lawmakers on Thursday adopted the EU's AI Act, a first-of-its-kind artificial intelligence law, bringing it one step closer to taking effect.
Last week, doctors and public health experts warned that AI development should be halted until it is regulated, arguing that it could endanger the health of millions of people and pose an existential threat to humanity.
The European AI Act is the first law of its kind in the Western world governing AI systems. China has already produced draft rules governing how firms develop generative AI products such as ChatGPT.
The legislation takes a risk-based approach to regulating AI, with obligations scaled to the level of risk a system poses. It also sets out requirements for providers of so-called "foundation models" such as ChatGPT, which have become a major concern for regulators given their sophistication and fears that they could displace even skilled workers.
The details
The AI Act divides AI applications into four risk categories: unacceptable risk, high risk, limited risk, and low or no risk.
Unacceptable risk applications are automatically prohibited and cannot be implemented in the bloc.
Among the prohibited applications are AI systems that use subliminal, manipulative, or deceptive techniques to distort behavior; systems that exploit the vulnerabilities of individuals or specific groups; biometric categorization systems based on sensitive attributes or characteristics; systems used for social scoring or trustworthiness evaluation; and AI systems used in risk assessments to predict criminal or administrative offenses.
Additionally, systems that create or expand facial recognition databases, as well as those that infer emotions in law enforcement, border management, the workplace, and education, also fall into this prohibited category.
Several MEPs pushed to extend the measures to cover ChatGPT, subjecting large language models and generative AI to the act's requirements.
Before making their models public, foundation model developers will be expected to implement safety checks, data governance mechanisms, and risk mitigations.
They will also be expected to verify that the data used to train their systems does not infringe on any intellectual property rights.
Ceyhun Pehlivan, counsel at Linklaters and co-lead of the law firm’s telecommunications, media and technology and IP practice group in Madrid, told CNBC that “The providers of such AI models would be required to take measures to assess and mitigate risks to fundamental rights, health and safety and the environment, democracy and rule of law,” adding that they would also be subjected to "data governance requirements, such as examining the suitability of the data sources and possible biases.”
Expert opinions
The Computer and Communications Industry Association expressed concern that the AI Act's scope had been widened too far, potentially capturing harmless forms of AI.
Boniface de Champris, policy manager at CCIA Europe, told CNBC via email that it is "worrying" that some useful and limited risk AI applications "would now face stringent requirements, or might even be banned in Europe."
“The European Commission’s original proposal for the AI Act takes a risk-based approach, regulating specific AI systems that pose a clear risk,” de Champris added.
“MEPs have now introduced all kinds of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous.”
Dessi Savova, head of continental Europe for the tech group at law firm Clifford Chance, said the EU rules would set a "global standard" for AI regulation. She warned, however, that other jurisdictions, notably China, the United States, and the United Kingdom, are quickly formulating their own approaches.
“The long-arm reach of the proposed AI rules inherently means that AI players in all corners of the world need to care,” Savova told CNBC via email.
“The right question is whether the AI Act will set the only standard for AI. China, the U.S., and the U.K. to name a few are defining their own AI policy and regulatory approaches. Undeniably they will all closely watch the AI Act negotiations in tailoring their own approaches.”
According to Sarah Chander, senior policy advisor at European Digital Rights, a digital rights advocacy organization located in Brussels, the regulations would compel foundation models such as ChatGPT to "undergo testing, documentation, and transparency requirements."
“Whilst these transparency requirements will not eradicate infrastructural and economic concerns with the development of these vast AI systems, it does require technology companies to disclose the amounts of computing power required to develop them,” Chander told CNBC.
“There are currently several initiatives to regulate generative AI across the globe, such as in China and the U.S.,” Pehlivan said.