Seizing the regulatory opportunity: the EU's Artificial Intelligence Act enters final negotiations
Rule of Law Daily reporter Wang Wei
According to a recent report by the German press agency DPA, the plenary session of the European Parliament passed the negotiating mandate on the draft Artificial Intelligence Act by an overwhelming majority: 499 votes in favor, 28 against and 93 abstentions. The bill thus enters the final stage of EU legislation to strictly regulate applications of artificial intelligence technology: the European Parliament, the European Commission and the member states will now hold "trilogue" negotiations to settle the bill's final terms. The German newspaper Business Daily reported that the Artificial Intelligence Act is expected to receive final approval by the end of this year, though it may take several years before it fully takes effect.
Preparing for negotiations
According to the report, the European Parliament decided to add more security controls for generative artificial intelligence tools such as ChatGPT, to ensure that the development and application of artificial intelligence comply with relevant EU laws and regulations.
As the world's first comprehensive legislation on artificial intelligence to advance through parliamentary procedure, the draft lays down detailed rules on security, privacy, transparency and non-discrimination. According to the European Parliament's website, the Parliament "is ready for negotiations to formulate the first artificial intelligence bill in history". Under the legislative agenda, the European Parliament will hold "trilogue" negotiations with the European Commission and the member states on the negotiating mandate, and European legislators hope to reach consensus on a final version of the bill before the end of this year.
In May this year, the European Parliament's Internal Market Committee and Civil Liberties Committee approved a draft negotiating mandate on the Artificial Intelligence Act proposal put forward by the European Commission in April 2021. The European Parliament and the Council of the European Union had already gone through several rounds of revisions and discussions of the draft, and after generative artificial intelligence applications such as ChatGPT emerged, EU legislators urgently debated issues not covered by the original text. The European Parliament stated that the proposal, if formally adopted, will become the world's first regulation on artificial intelligence.
The draft will apply to entities that place artificial intelligence systems on the market or put them into service in the EU, whether they are established in the EU or in a third country; to entities using artificial intelligence systems within the EU; and to entities in third countries whose systems produce output that is used in the EU or affects people in the EU.
Risk-tiered supervision
A prominent feature of the negotiating mandate on the Artificial Intelligence Act is its risk-based regulatory regime, designed to balance the innovative development of artificial intelligence against safety requirements. The draft divides artificial intelligence risks into four tiers: unacceptable risk, high risk, limited risk and minimal risk, each subject to different regulatory requirements. The use of high-risk artificial intelligence must be strictly regulated: system providers and users must comply with rules on data governance, record keeping, transparency and human oversight to ensure the stability, accuracy and safety of their systems. For violations, the draft sets fines of up to 30 million euros (1 euro is about 7.895 yuan) or 6% of global annual turnover.
The draft strictly prohibits artificial intelligence systems that pose unacceptable risks to human safety, including systems that deploy subliminal or deliberately manipulative techniques, exploit people's vulnerabilities, or are used for social scoring; it also expands the classification of high-risk artificial intelligence to cover harm to people's health, safety, fundamental rights or the environment. The draft further requires artificial intelligence companies to maintain human control over their algorithms, provide technical documentation, and establish risk-management systems for "high-risk" applications, and it sets up a dedicated supervisory regime for generative artificial intelligence such as ChatGPT. Each EU member state is to establish a supervisory body.
In addition, the draft introduces a regulatory sandbox mechanism (under which, subject to restrictive conditions and risk-management measures, enterprises are allowed to test innovative products, services and business models with real individual and enterprise users in a real market environment) to control risks while promoting innovation: artificial intelligence systems are developed, tested and validated in the sandbox to reduce risks before they are placed on the market or put into service. The draft also allows innovative artificial intelligence to be tested under real-world conditions within this mechanism, to encourage continued innovation by artificial intelligence enterprises.
Facing significant challenges
The EU's attempt to establish a unified legal framework for supervising artificial intelligence is an important symbolic moment in the global development of the technology. The EU's debates and experiments in effectively regulating artificial intelligence will have a global impact, and more and more countries may follow its lead in exploring such regulations.
The German Social Democratic Party expressed support for the European Parliament's decision: "The potential and risks of artificial intelligence should concern society as a whole." Some opponents, however, worry that over-regulation could impose higher costs and excessive burdens on European artificial intelligence companies.
To a considerable extent, the companies now leading the development of artificial intelligence are the same technology firms that over the past decade have faced antitrust risks, violated existing laws and caused data leaks. Against this backdrop, Sam Altman, CEO of OpenAI, said he would lead his team to withdraw from the European market if the Artificial Intelligence Act over-regulated AI.
Experts say that the EU's push for artificial intelligence legislation helps preserve its leading position in digital sovereignty and technology, and that such comprehensive legislation lets it seize the initiative in global artificial intelligence regulation. But the approach has its limitations: it is horizontal legislation that, rather than targeting specific fields of artificial intelligence application, tries to bring all artificial intelligence within its regulatory scope, and it will face many problems of interpretation at the implementation stage.
Some further point out that the EU has not reached a consensus on what impact artificial intelligence technology may have on humanity. Viewed dialectically, artificial intelligence can benefit and serve human beings, but it can also threaten or harm them if misused. Balancing innovation and restraint is a crucial challenge facing EU legislators. Moreover, the unified legal framework does not contain specific legal norms, so its practical effect may be "greatly diminished".
The international community believes that at present, the development of artificial intelligence technology is still in the exploration period. At this time, the EU enacted the "Artificial Intelligence Act", which released a strong signal that the EU is strengthening its regulations in the digital field. How to better balance and link technological innovation and institutional innovation needs to be constantly revised in practice.