Equities

Big Tech Highlights AI at Viva Tech Amid EU Regulatory Scrutiny

Big Tech highlights AI's transformative potential at Viva Tech amid the EU's new AI Act and global safety commitments.

5/23, 04:27 EDT

Key Takeaways

  • At Viva Tech, Amazon and Google highlighted AI's transformative potential in sectors like supply chains and healthcare, emphasizing innovation with responsibility.
  • The EU's AI Act introduces stringent regulations for high-risk AI applications, focusing on trust, transparency, and accountability to foster European innovation.
  • Major tech firms committed to AI safety at the Seoul Summit, agreeing on frameworks to mitigate risks like automated cyberattacks and bioweapons threats.

AI's Potential Highlighted at Viva Tech

At the Viva Tech conference in Paris, U.S. technology leaders emphasized the transformative potential of artificial intelligence (AI) for global economies and communities. Amazon's Chief Technology Officer Werner Vogels and Google's Senior Vice President for Technology and Society James Manyika discussed the benefits AI can bring. Vogels highlighted AI's role in solving complex global problems, such as improving supply chains for essential commodities like rice. He cited examples from Jakarta, Indonesia, where AI connects smallholder rice farmers to financial services, improving efficiency across the supply chain.

Manyika focused on AI's advancements in health and biotechnology. He pointed to a version of Google's Gemini AI model tailored for medical applications, and to Google DeepMind's AlphaFold 3, which can model a wide range of biological molecules. Manyika also introduced Google's new "watermarking" technology to identify AI-generated text, images, and audio, which has been open-sourced for developers to build upon. He stressed the importance of balancing innovation with responsibility, especially in a year when a billion people around the world are voting and concerns about misinformation run high.

EU's AI Act and Regulatory Landscape

The European Union has taken a significant step in AI regulation with the approval of the AI Act, the world's first major law governing artificial intelligence. The AI Act takes a risk-based approach, categorizing AI applications according to the level of risk they pose. High-risk AI systems, such as those used in autonomous vehicles and medical devices, will be subject to stringent evaluations. The law also prohibits "unacceptable" AI applications, including social scoring systems and predictive policing.

Mathieu Michel, Belgium’s secretary of state for digitization, emphasized the importance of trust, transparency, and accountability in AI development. He stated, "With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation."

Big Tech's Commitments to AI Safety

In a landmark agreement at the Seoul AI Safety Summit, major tech companies, including Microsoft, Amazon, and OpenAI, committed to ensuring the safe development of advanced AI models. These companies agreed to publish safety frameworks outlining how they will measure and mitigate risks associated with their AI systems. The frameworks will include "red lines" for intolerable risks, such as automated cyberattacks and bioweapons threats. In extreme cases, companies will implement a "kill switch" to halt AI development if risks cannot be mitigated.

U.K. Prime Minister Rishi Sunak praised the agreement, stating, "These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI." The commitments build on previous agreements made at the U.K.'s AI Safety Summit in November 2023 and will be further refined with input from trusted actors, including governments.

Management Quotes

  • Werner Vogels, CTO of Amazon:

    "AI can be used to solve some of the world’s hardest problems... At the same time we need responsibly to use some of this technology to solve some of the world’s hardest problems."

  • James Manyika, SVP for Technology and Society at Google:

    "AI can lead to huge benefits from a health and biotechnology standpoint. A version of Google’s Gemini AI model recently released by the firm is tailored for medical applications and able to understand context relating to the medical domain."
    "Google open-sourced its watermarking tech so that any developer can build on it, improve on it... I think it’s going to take all of us, these are some of the things, especially in a year like this, a billion people around the world have voted, so concerns around misinformation are important."
    "I worry sometimes when all our narratives are just focused on the risks. Those are very important, but we should also be thinking about why are we building this technology?"
    "All of the developers in the room are thinking about how do we improve society, how do we build businesses, how do we do imaginative, innovative things that solve some of the world’s problems."