Following three days of “marathon” talks, the Council presidency and negotiators from the European Parliament have reached a provisional agreement on the proposal for harmonised rules on artificial intelligence (AI), known as the Artificial Intelligence Act. The draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. This landmark proposal also seeks to stimulate AI investment and innovation in Europe.
“This is a historic achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”
Carme Artigas, Spanish secretary of state for digitalisation and artificial intelligence
The AI act is a landmark legislative effort with the potential to foster the development and uptake of safe and trustworthy AI across the EU’s single market by both private and public actors. The core idea is to regulate AI according to its capacity to cause harm to society, following a ‘risk-based’ approach: the higher the risk, the stricter the rules. As the first legislative proposal of its kind in the world, it could set a global standard for AI regulation in other jurisdictions, much as the GDPR has done, promoting the European approach to tech regulation on the global stage.
The main provisions of the provisional agreement
Compared with the initial Commission proposal, the main new elements of the provisional agreement are:
• rules on high-impact general-purpose AI models that could pose systemic risk in the future, as well as on high-risk AI systems
• a revised system of governance with some enforcement powers at the EU level
• extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in publicly accessible spaces, subject to safeguards
• better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting such a system into use.
In more specific terms, the provisional agreement addresses the following issues:
Definitions and scope
The compromise agreement aligns the definition of an AI system with the approach proposed by the OECD, to ensure that the definition provides sufficiently clear criteria for distinguishing AI from simpler software systems.
The provisional agreement further clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states’ competences in national security or any entity entrusted with tasks in this area. Furthermore, the AI act will not apply to systems used exclusively for military or defence purposes. Similarly, the agreement provides that the regulation will not apply to AI systems used solely for research and innovation, or to people using AI for non-professional reasons.
Classification of AI systems as high-risk and prohibited AI practices
The compromise agreement provides for a horizontal layer of protection, including a high-risk classification, to ensure that AI systems unlikely to cause serious fundamental rights violations or other significant risks are not captured. AI systems presenting only limited risk would be subject to very light transparency obligations, such as disclosing that content was generated by AI so that users can make informed decisions about its further use.
A wide range of high-risk AI systems would be authorised to access the EU market, but subject to a set of requirements and obligations. The co-legislators clarified and adjusted these requirements so that they are more technically feasible and less burdensome for stakeholders to comply with, for example as regards the quality of data or the technical documentation that SMEs must draw up to demonstrate that their high-risk AI systems comply with the requirements.
Because AI systems are developed and distributed through complex value chains, the compromise agreement includes changes clarifying the allocation of responsibilities and the roles of the various actors in those chains, in particular providers and users of AI systems. It also clarifies the relationship between responsibilities under the AI act and those that already exist under other legislation, such as the relevant EU data protection or sectoral legislation.
For some uses of AI, the risk is deemed unacceptable, and these systems will therefore be banned from the EU. The provisional agreement prohibits, among other things, cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.
Exceptions for law enforcement
Several changes to the Commission proposal concerning the use of AI systems for law enforcement purposes were agreed, taking into account the specificities of law enforcement authorities and the need to preserve their ability to use AI in their vital work. Subject to appropriate safeguards, these changes are meant to reflect the need to protect the confidentiality of sensitive operational data in relation to their activities. For example, an emergency procedure allows law enforcement authorities to deploy a high-risk AI tool that has not passed the conformity assessment procedure. A specific mechanism has been introduced, however, to ensure that fundamental rights are sufficiently protected against potential misuses of AI systems.
Furthermore, as regards the use of real-time remote biometric identification systems in publicly accessible spaces, the provisional agreement clarifies the objectives for which such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore exceptionally be allowed to use such systems. The compromise agreement provides for additional safeguards and limits these exceptions to cases involving victims of certain crimes, the prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.
AI foundation models and general-purpose AI systems
New provisions have been added to take account of situations where AI systems can be used for many different purposes (general-purpose AI), and where general-purpose AI technology is subsequently integrated into another high-risk system. The provisional agreement also addresses the specific case of general-purpose AI (GPAI) systems.
Specific rules have also been agreed for foundation models, large systems capable of performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code. The provisional agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for ‘high impact’ foundation models: foundation models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain.
A new governing structure
Following the new rules on GPAI models and the evident need for their enforcement at EU level, an AI Office within the Commission is set up, tasked with overseeing these most advanced AI models, contributing to the development of standards and testing practices, and enforcing the common rules in all member states. A scientific panel of independent experts will advise the AI Office on GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models.
The AI Board, composed of representatives of the member states, will remain a coordination platform and an advisory body to the Commission, and member states will play an important role in implementing the regulation, including the design of codes of practice for foundation models. Finally, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.
Penalties
Fines for violations of the AI act are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. This would be €35 million or 7% for violations involving banned AI applications, €15 million or 3% for violations of the AI act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information. The provisional agreement, however, provides for more proportionate caps on administrative fines for SMEs and start-ups in the event of infringements of the AI act.
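To make the “whichever is higher” rule concrete, the sketch below (Python; the function name and tier labels are hypothetical, only the amounts and percentages come from the provisional agreement) computes the fine ceiling for a given violation tier and global annual turnover.

```python
# Illustrative sketch of the AI act fine ceilings under the provisional
# agreement. Function name, tier labels, and structure are hypothetical;
# the fixed amounts and turnover percentages are those reported above.

AI_ACT_FINE_TIERS = {
    "banned_applications":   (35_000_000, 0.07),   # €35m or 7% of turnover
    "act_obligations":       (15_000_000, 0.03),   # €15m or 3% of turnover
    "incorrect_information": (7_500_000,  0.015),  # €7.5m or 1.5% of turnover
}

def max_fine_eur(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling: the fixed amount or the turnover share,
    whichever is higher."""
    fixed_amount, turnover_share = AI_ACT_FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Example: a company with €1 billion in global annual turnover deploying a
# banned AI application faces up to max(€35m, 7% of €1bn) = €70 million.
print(max_fine_eur("banned_applications", 1_000_000_000))  # 70000000.0
```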
The compromise agreement further states that a natural or legal person may file a complaint with the relevant market surveillance authority over noncompliance with the AI act and expect that such a complaint will be treated in accordance with that authority’s specific procedures.
Transparency and fundamental rights protection
The provisional agreement provides for a fundamental rights impact assessment before a high-risk AI system is put into use by its deployers. It also provides for increased transparency regarding the use of high-risk AI systems. Notably, some provisions of the Commission proposal have been amended to indicate that certain users of a high-risk AI system that are public entities will also be obliged to register in the EU database for high-risk AI systems. Moreover, newly added provisions put emphasis on an obligation for users of an emotion recognition system to inform natural persons when they are being exposed to such a system.
Measures to encourage innovation
The provisions concerning measures in support of innovation have been substantially modified compared with the Commission proposal, with the aim of creating a more innovation-friendly legal framework and promoting evidence-based regulatory learning.
Notably, it has been clarified that AI regulatory sandboxes, which are meant to establish a controlled environment for the development, testing, and validation of innovative AI systems, should also allow the testing of innovative AI systems in real-world conditions. Furthermore, new provisions have been added allowing AI systems to be tested in real-world conditions, subject to specific conditions and safeguards. To alleviate the administrative burden on smaller companies, the provisional agreement includes a list of actions to be undertaken to support such operators, as well as some limited and clearly specified derogations.
Entry into force
The provisional agreement provides that the AI act should apply two years after its entry into force, with some exceptions for specific provisions.
Next steps
Following the provisional agreement, technical work will continue in the coming weeks to finalise the details of the new regulation. Once this work has been concluded, the presidency will submit the compromise text to the member states’ representatives (Coreper) for endorsement.
Before formal adoption by the co-legislators, the entire text will need to be confirmed by both institutions and undergo legal-linguistic revision.
Historical context
The Commission proposal, presented in April 2021, is a key element of the EU’s policy to foster the development and uptake, across the single market, of safe and lawful AI that respects fundamental rights.
The proposal follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty. It seeks to promote investment and innovation in artificial intelligence, enhance governance and effective enforcement of existing law on fundamental rights and safety, and facilitate the development of a single market for AI applications. It complements other initiatives, such as the coordinated plan on artificial intelligence, which aims to accelerate investment in AI in Europe. The Council reached agreement on a general approach (negotiating mandate) on this file on December 6, 2022, and entered into interinstitutional negotiations with the European Parliament (‘trilogues’) in mid-June 2023.