The European Union’s strategy to expand AI development also includes ramping up domestic infrastructure, enhancing access to training data, and helping firms navigate the AI Act
The European Union has unveiled an action plan for the development of the AI industry, vowing to “convert Europe’s strong traditional industries and its exceptional skills pool into powerful engines of AI innovation and acceleration.”
The “AI Continent Action Plan” was introduced by the European Commission, the administrative body of the EU, in a press release on Wednesday, April 9. The plan comes amid criticism from tech industry players and other stakeholders, who have argued that the EU’s policies are overly burdensome and could stifle innovation in the sector.
“Achieving our objectives in AI requires leadership both in the development and the use of AI. It involves sustained investment in infrastructure (such as computing power and networks), alongside advances in model development, and broad adoption throughout the economy,” the plan reads.
The plan seeks to help Europe’s AI industry compete more aggressively with the United States and China by ramping up domestic AI infrastructure. “The EU currently lags behind the US and China in terms of available data centre capacity, depending heavily on infrastructure set up and controlled by other parts of the world, which EU users access via the cloud,” it notes.
As part of its efforts to boost AI development in the region, the EU has announced plans to build a network of ‘AI factories’ and ‘AI gigafactories’. These factories are described as large-scale facilities housing graphics processing units (GPUs) that provide the computing power needed to develop and train cutting-edge AI models.
It is also seeking to establish research centres focused on making high-quality training data more accessible to startups in the region. The bloc has also said it will create a new entity called the AI Act Service Desk to help regional companies comply with its landmark AI Act.
“The upcoming AI Act Service Desk will be a central point of contact for companies seeking information and guidance,” the European Commission stated.
Navigating the provisions of the EU’s AI law has emerged as a thorny issue for AI companies operating in the region. The law stipulates obligations for all AI systems based on the level of risk they pose to society. The foundational models developed by the likes of OpenAI, Google, and French startup Mistral fall into the category of General Purpose AI Models (GPAIs).
Last year, the EU introduced a Code of Practice for developers of GPAI as part of the draft guidelines intended to enforce the AI Act. The draft Code requires these companies to release detailed information about their general-purpose AI models, including “information on data used for training, testing and validation” and the results of the testing processes the models have undergone.
It also proposes requiring GPAI developers to bring in external experts for independent testing and risk assessment of general-purpose AI tools.
The law’s focus on tackling risks related to AI has raised concerns among political and business leaders that Europe could miss out on harnessing the potential of AI.
“There’s almost this fork in the road, maybe even a tension right now between Europe at the EU level … and then some of the countries. They’re looking to maybe go in a little bit of a different direction that actually wants to embrace the innovation,” OpenAI’s Chief Global Affairs Officer Chris Lehane was quoted as saying by CNBC in February this year.