On Wednesday, the European Parliament made history by approving the world’s first comprehensive regulatory framework for artificial intelligence, a primary area of tech investment. The provisional political agreement, reached in December, passed with 523 votes in favor, 46 against, and 49 abstentions in Wednesday’s session.
Thierry Breton, the European commissioner for the internal market, declared that Europe has now become a global trendsetter in AI. Roberta Metsola, the European Parliament President, hailed the new legislation as pioneering, asserting it will foster innovation while also preserving essential rights.
According to Metsola, AI is already a significant part of daily life, and now it will be part of legislation as well. Dragos Tudorache, who led the EU’s negotiations on the agreement, welcomed the deal but underscored that the greatest challenge remains its implementation.
The EU AI Act, first proposed in 2021, categorizes artificial intelligence technologies by risk level, from “unacceptable” (which results in an outright ban) down to high, medium, and low risk. The regulation is expected to enter into force in May, at the end of the legislative session, after final checks and endorsement from the European Council, with its provisions taking effect in stages from 2025.
Some EU nations, including major tech startup hubs like Germany and France, have favored self-regulation over government-led rules, citing concerns that excessive regulation could hamper Europe’s ability to compete with Chinese and US companies in the tech sector.
In response to the growing consumer impact of tech advancements and the market dominance of major players, the EU recently enacted stringent competition legislation to limit the power of US tech giants. The Digital Markets Act enables the EU to crack down on anti-competitive practices by major tech companies and requires them to open up their services in sectors where they have stifled smaller businesses and limited user choice. Six firms have been designated as gatekeepers: Alphabet, Amazon, Apple, Meta, Microsoft, and China’s ByteDance.
However, there are increasing concerns about the potential misuse of artificial intelligence as major companies like Microsoft, Amazon, Google, and chipmaker Nvidia continue to champion AI investment.
Governments are increasingly concerned about the potential use of deepfakes, AI-generated or manipulated media such as photos and videos, in the run-up to many critical global elections this year. Some AI companies, including Google, are implementing self-imposed restrictions to combat disinformation: Google has limited the election-related queries its Gemini chatbot can handle, with the changes already in effect in the US and India.
Dragos Tudorache praised the AI Act, positing that it puts humans in control of technology and promotes its use for societal progress and human potential. However, he emphasized that the Act is not the end, but rather a beginning for a new model of technology governance. He called for political energy to be focused on implementing the Act’s provisions on the ground.
Legal experts view the Act as a significant step forward in international AI regulation, suggesting it may inspire similar measures in other countries. Steven Farmer, an AI specialist at the international law firm Pillsbury, compared the AI Act to the EU’s General Data Protection Regulation (GDPR), suggesting that, like GDPR, it could become a global template toward which other jurisdictions converge.
Mark Ferguson, a public policy expert at Pinsent Masons, maintained that passing the Act was only the first step, and that companies will need to work with lawmakers to understand how it is to be implemented.
Nevertheless, Emma Wright, a partner at the law firm Harbottle & Lewis, warned that the rapid pace of technological change could render the Act outdated, especially given its extended implementation timeframe.