AI Act Approved by EU, UK Focuses on International AI Safety Cooperation

EU Approves AI Act, Banning Unacceptable-Risk AI Applications and Ensuring Responsible Use

By Athena Xu

5/21, 09:18 EDT

Key Takeaway

  • The EU has approved the AI Act, the world's first major law regulating AI, emphasizing trust and accountability while banning unacceptable-risk applications such as social scoring.
  • The UK focuses on international cooperation for AI safety, diverging from the EU's regulatory approach and opening a new AI safety office in San Francisco.
  • Rapid advancements in AI by companies like OpenAI, Google, and Meta raise safety concerns; industry spending on generative AI is projected to hit $100 billion this year.

EU Approves AI Act

The European Union has taken a significant step in regulating artificial intelligence (AI) by approving the AI Act, the world's first major law aimed at governing AI technology. The EU Council announced the final approval of this comprehensive regulation, which aims to introduce a set of rules to ensure the responsible use of AI. "The adoption of the AI act is a significant milestone for the European Union," stated Mathieu Michel, Belgium’s secretary of state for digitization. He added, "With the AI act, Europe emphasizes the importance of trust, transparency, and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation."

The AI Act employs a risk-based approach, categorizing AI applications based on the level of threat they pose to society. Unacceptable AI applications, such as social scoring systems, predictive policing, and emotional recognition in workplaces and schools, are prohibited. High-risk AI systems, including autonomous vehicles and medical devices, are evaluated for their potential impact on health, safety, and fundamental rights. The regulation also covers AI applications in financial services and education, where there is a risk of bias in AI algorithms.

UK Pushes AI Safety at Seoul Summit

The UK government is advocating for enhanced AI safety measures at the Seoul summit, aiming to position itself as a global leader in managing AI risks. Michelle Donelan, the UK's Secretary of State for Science, Innovation, and Technology, emphasized the importance of responsible AI development. "There will be some agreements that we broker," Donelan stated. "We’ll be going to ask companies how they can go even further in showing they’ve built safety into the release of their models."

The UK aims to build on the AI Safety Summit hosted last year, part of Prime Minister Rishi Sunak's efforts to make AI safety a key aspect of his political legacy. The Seoul event will include representatives from countries such as China, the US, India, and Canada, marking another round of high-level ministerial talks. The UK announced a new AI safety office in San Francisco, diverging from the EU's comprehensive legislation approach and focusing on international cooperation.

Diverging Regulatory Approaches

Different countries have adopted varying approaches to AI regulation. The UK has opted not to "rush to regulate," focusing instead on understanding AI risks and fostering international cooperation. Michelle Donelan defended the UK's approach, stating that any legislation passed would likely be outdated by the time it came into force. "We want to lean in to and support innovation," she said. The British government also announced a new overseas office in San Francisco dedicated to AI safety. "There will always be slightly different approaches, what we want is commonality on taking this seriously," Donelan added.

In contrast, the European Union has enacted comprehensive legislation to place guardrails on AI technology, and some US cities and states have implemented laws limiting AI use in specific areas. The EU’s landmark AI Act, now formally approved, is expected to become a blueprint for global AI regulations once it enters into force.

Industry Developments and Safety Concerns

The AI industry continues to advance rapidly, with companies like OpenAI, Google, and Meta releasing new AI products. OpenAI's CEO, Sam Altman, highlighted the futuristic capabilities of their latest system, GPT-4o, by referencing the 2013 film "Her." However, the rapid pace of innovation has raised safety concerns. A key OpenAI safety researcher, Jan Leike, resigned this week, citing disagreements over the company's direction. "Safety culture and processes have taken a backseat to shiny products," warned Leike.

Despite these concerns, the industry is pushing forward. OpenAI released GPT-4o for free online, Google previewed a new AI assistant called Project Astra, and Meta continues to develop its Llama AI model. Dan Ives, an analyst at Wedbush Securities, estimates that spending on generative AI will reach $100 billion this year, part of a projected $1 trillion expenditure over the next decade.

Street Views

  • Mathieu Michel, Belgium’s Secretary of State for Digitization (Cautiously Optimistic on the AI Act):

    "The adoption of the AI act is a significant milestone for the European Union... With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation."