As public awareness of AI and machine learning technologies has grown, so has the demand for ethical safeguards and increased transparency. In response, the European Parliament introduced the AI Act, designed to address the risks presented by this rapidly evolving technology. The Act has been in force since August 1, 2024, and marks a milestone in the regulation of artificial intelligence. It prohibits AI applications that carry unacceptable risks and enforces stringent rules on high-risk uses in critical sectors such as healthcare, infrastructure, border control, education, justice, and everyday services. The newly established EU AI Office will oversee the Act's implementation, ensuring that AI technologies are developed and deployed in compliance with its regulatory standards.
The scope of the Act is broad: it applies to all bodies involved in the provision, deployment, importation, distribution, and manufacturing of AI systems marketed or used within the European Union, and it also extends to non-EU companies that place AI systems on the EU market. Non-compliance carries severe penalties, with fines ranging from €7.5 million or 1.5% of annual turnover to a staggering €35 million or 7% of annual turnover. Moreover, the Act reinforces the application of existing EU data protection law to AI technologies.
The Act is likely to influence global AI regulation and development and to prompt similar legislation worldwide.
How Will the EU AI Act Reshape the AI Landscape?
The EU AI Act aims to ensure product safety and reduce the risks AI systems may present to individuals. While this strategy works well for products with a specific purpose, it runs into difficulties with newer general-purpose models such as OpenAI's ChatGPT, Meta's Llama, or Google's Gemini, which are used for countless applications, making it difficult to assess all potential risks and design regulations accordingly. The Act addresses these difficulties by imposing a general obligation to avoid harm to fundamental rights. However, one of its co-architects in the European Parliament has noted that this regulatory framework does not suit the complexities of contemporary AI models. In addition, vague terminology, ambiguous definitions, and the involvement of multiple regulatory bodies add new legal uncertainty.
Another major criticism of the EU AI Act is its stringent regulation, which categorizes applications according to their level of risk and could thereby stifle AI innovation. High-risk use cases will face intense scrutiny and heavier sanctions. This approach risks deterring companies from pursuing cutting-edge AI research and may cause European firms to fall further behind their competitors in the U.S. and China, where regulation is more lenient. The Act's compliance requirements will also impose heavy financial and administrative burdens on the entire industry. SMEs will be hit particularly hard, and startups will think twice before entering the AI market.
By focusing heavily on regulation rather than innovation, Europe therefore risks falling further behind. In the global race to establish new AI companies, the United States leads, with China close behind, while the European Union already struggles to keep pace. Only three European firms make it into the top twenty of the Forbes Index.
Finally, the EU AI Act will make the region less attractive to international AI talent and investment. The EU does not lack talent but struggles to retain it. While Europe produces skilled researchers, many migrate to the U.S. for better salaries and more opportunities; for example, a third of the AI talent at American universities comes from the EU. Europe also lags in converting academic research into commercial success, with European researchers contributing minimally to major AI developments such as language models. In 2022, 54% of the creators of AI models were American, while only 3% came from Germany, the leading European country.
Conclusion
The EU AI Act aims to ensure ethical AI and protect citizens' rights, but it may result in overregulation, higher compliance costs, and reduced global competitiveness. To stay competitive, the EU should simplify regulations, harmonize markets, and encourage innovation. Companies and researchers prefer regions with fewer constraints, and Europe is not one of them. If businesses relocate to less regulated markets, Europe risks falling behind. By reducing government intervention, creating a business-friendly environment, and allowing companies to adapt to market demands, Europe can foster collaboration between academia and industry and enhance technological progress and economic growth.