The European Union's AI Legislation Has Been Finalized: Important Details to Consider
In Brief
Lawmakers within the European Union have reached a significant consensus to set a global standard for regulating artificial intelligence.

After a 36-hour negotiation marathon, European Union policymakers successfully navigated complex discussions to reach a political consensus on a global framework governing artificial intelligence (AI).
The AI Act, a landmark piece of legislation aimed at regulating AI and mitigating its potential harms, wrapped up its legislative process as key EU institutions resolved their differences in a crucial trilogue meeting on December 8.
In this political dialogue, which set a new benchmark for interinstitutional negotiations, leading EU representatives worked through an extensive agenda of 21 outstanding issues. Just before the clock struck midnight on December 8, European Commissioner Thierry Breton tweeted "Deal!" from Brussels.
Next year, the European Parliament is slated to cast its vote on the proposed AI Act, and the legislation is likely to come into force by 2025.
Pivotal AI Risks Being Addressed
The AI Act encompasses a variety of unacceptable risks, such as manipulation tactics, systems preying on vulnerabilities, and social scoring mechanisms. Lawmakers have succeeded in prohibiting the use of emotion recognition technologies in workplaces and educational settings, although there are exceptions for safety purposes, like assessing driver alertness.
Moreover, lawmakers placed a ban on predictive policing technologies that estimate an individual's likelihood of committing future offenses based on personal characteristics. There is also an effort to restrict tools that categorize individuals based on sensitive attributes, including race, political views, or religious beliefs. As articulated by EU Parliament officials, any AI systems deemed to pose unacceptable risks will be strictly prohibited. This category includes:
- Cognitive behavioral manipulation targeting individuals or specific vulnerable groups: for instance, voice-activated toys that may encourage harmful actions in children.
- Social scoring systems: assessing individuals based on their behavior, socio-economic status, or personal traits.
- Real-time biometric recognition technologies, such as facial recognition systems.
- Certain exceptions are on the table: for instance, remote biometric identification technologies that perform identification after a delay could be used in serious crime prosecutions, provided they receive judicial authorization.
In response to pressure from member states, the Parliament has shown flexibility regarding the prohibition of real-time biometric recognition, now allowing limited instances for law enforcement, specifically to avert terrorist activities or assist in locating specific individuals related to a defined set of serious offenses.
Transparency Around Generative AI
Generative AI systems like ChatGPT are now subject to fresh transparency mandates, part of broader efforts to address concerns surrounding AI-generated material. As stipulated, generative AI technologies must clearly disclose when content has been produced by AI, ensuring users are informed about its origins.
Additionally, these models are required to include safeguards against the generation of unlawful content, reflecting ethical considerations while ensuring compliance with established legal frameworks.
Furthermore, there is a mandate for organizations developing generative AI systems to openly provide summaries of copyrighted materials utilized during their training phases, thus enhancing accountability and transparency in the development of these technologies.
High-Risk AI Applications
To manage the potential risks that AI systems could pose, the European Union has established a thorough framework for high-risk AI applications. The regulations delineate two main categories for these systems, with particular emphasis on safety and respect for fundamental rights.
The first category covers AI systems integrated into products that fall under the EU's product safety regulations, spanning sectors such as toys, aviation, automobiles, medical equipment, and elevators, with the aim of ensuring safe AI integration across diverse industries.
The second category covers AI technologies in eight defined areas, which must be registered in an EU database. These areas comprise biometric identification, oversight of critical infrastructure, educational practices, employment standards, access to fundamental services, law enforcement, migration and border management, and assistance with legal interpretation.
Importantly, all high-risk AI systems, regardless of their classification, will undergo a meticulous evaluation prior to market entry and will be continuously monitored throughout their lifecycle.
Lawmakers have also proposed that public institutions and essential service providers conduct fundamental rights impact assessments for high-risk AI systems.
Enforcement is backed by significant fines, calculated as a percentage of global annual turnover or a fixed amount, whichever is greater: from 1.5% or €7.5 million for supplying incorrect information, up to 7% or €35 million for the most serious violations.
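The "whichever is greater" rule is simple arithmetic; a minimal sketch, using the top-tier €35 million floor and an illustrative 7% rate (the function name and sample turnover figures here are hypothetical, not part of the Act's text):

```python
def ai_act_fine(turnover_eur: int, percent: float, floor_eur: int) -> float:
    """Fine is a percentage of global annual turnover or a fixed floor,
    whichever is greater."""
    return max(turnover_eur * percent / 100, floor_eur)

# Hypothetical firm with €1 billion global turnover, top-tier breach:
# 7% of turnover (€70M) exceeds the €35M floor, so the percentage applies.
print(ai_act_fine(1_000_000_000, 7, 35_000_000))  # 70000000.0

# For a smaller firm (€100M turnover), the fixed floor dominates:
print(ai_act_fine(100_000_000, 7, 35_000_000))    # 35000000
```

In other words, the fixed amounts act as a floor so that smaller companies cannot shrink the penalty below a meaningful level.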
This underscores the dedication to maintaining accountability and protecting fundamental rights in the application of high-risk AI within vital public services.
Disclaimer
In line with the Trust Project guidelines, please be advised that the information on this page is not intended to serve as, and should not be construed as, legal, tax, investment, financial, or any other type of advice. It is crucial to only invest what you can afford to lose and to consult an independent financial advisor should you have any uncertainties. For further details, we recommend reviewing the terms and conditions along with the help and support resources offered by the issuer or advertiser. MetaversePost is committed to delivering accurate and unbiased news, yet market conditions can fluctuate without advance notice.

About the Author
Kumar is a seasoned tech journalist focused on the dynamic intersections of AI/ML, marketing technology, and emerging domains like cryptocurrencies, blockchain, and NFTs. With over three years of experience, Kumar has built a reputation for crafting engaging narratives, conducting enlightening interviews, and providing in-depth analyses. His expertise encompasses producing impactful content, including articles, reports, and research papers for leading industry platforms. With a unique blend of technical proficiency and storytelling skills, Kumar excels at breaking down complex technological ideas for audiences of varying backgrounds in an accessible and captivating manner.