The rollout of the EU’s AI General-Purpose Code of Practice has exposed deep divisions between major technology corporations. Microsoft has signalled its intention to sign the European Union’s voluntary AI compliance framework, whilst Meta flatly refuses participation, calling the guidelines regulatory overreach that will stifle innovation.
Microsoft President Brad Smith told Reuters on Friday, “I think it’s likely we will sign. We need to read the documents.” Smith highlighted his company’s collaborative approach, noting, “Our goal is to find a way to be supportive, and at the same time, one of the things we really welcome is the direct engagement by the AI Office with industry.”
In contrast, Meta’s Chief Global Affairs Officer, Joel Kaplan, announced on LinkedIn that “Meta won’t be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
Kaplan argued that “Europe is heading down the wrong path on AI” and warned the EU AI code would “throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”
Early adopters vs. Holdouts
The technology sector’s divided response highlights divergent strategies for handling European regulatory compliance.
OpenAI and Mistral have signed the Code, positioning themselves as early adopters of the voluntary framework.
OpenAI announced its commitment, stating, “Signing the Code reflects our commitment to providing capable, accessible and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age.”
OpenAI joins the EU code of practice for general-purpose AI models as the second leading AI company to sign, after Mistral, according to industry observers tracking the voluntary commitments.
More than 40 of Europe’s largest businesses, including ASML Holding and Airbus, signed a letter earlier this month asking the European Commission to pause implementation of the AI Act, calling for a two-year delay.
Code requirements and timeline
The code of practice, published on July 10 by the European Commission, aims to provide legal certainty for companies developing general-purpose AI models ahead of mandatory enforcement beginning August 2, 2025.
The voluntary instrument was developed by 13 independent experts, with input from over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rights-holders, and civil society organisations.
The EU AI code establishes requirements in three areas. Transparency obligations require providers to maintain technical model and dataset documentation, while copyright compliance mandates clear internal policies outlining how training data is obtained and used under EU copyright law.
Safety and security obligations apply to the most advanced models under the category “GPAI with Systemic Risk” (GPAISR), which covers frontier systems such as OpenAI’s o3, Anthropic’s Claude 4 Opus, and Google’s Gemini 2.5 Pro.
Signatories will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The framework requires companies to document training data sources, implement robust risk assessments, and establish governance frameworks for managing potential AI system threats.
Enforcement and penalties
The penalties for non-compliance are substantial: up to €35 million or 7% of global annual turnover, whichever is greater. For providers of GPAI models specifically, the Commission may impose fines of up to €15 million or 3% of global annual turnover.
The Commission has indicated that if providers adhere to an approved Code of Practice, the AI Office and national regulators will treat that as a simplified compliance path, focusing enforcement on checking that the Code’s commitments are met rather than conducting audits of each AI system. This creates incentives for early adoption among companies seeking regulatory predictability.
The EU AI code represents part of the broader AI Act framework. Under the AI Act, obligations for GPAI models, detailed in Articles 50–55, become enforceable 12 months after the Act’s entry into force (2 August 2025). Providers of GPAI models placed on the market before this date must be compliant with the AI Act by 2 August 2027.
Industry impact and global implications
The divergent responses suggest technology corporations are adopting fundamentally different strategies for managing regulatory relationships in global markets. Microsoft’s cooperative stance contrasts sharply with Meta’s confrontational approach, potentially setting precedents for how major AI developers engage with regulators worldwide.
Despite industry opposition, the European Commission has refused to postpone. The EU’s Internal Market Commissioner Thierry Breton has insisted that the framework will proceed as scheduled, saying the AI Act is essential for consumer safety and trust in emerging technologies.
The EU AI code’s voluntary nature during its initial phase gives companies an opportunity to shape regulatory development through participation. However, mandatory enforcement beginning in August 2025 ensures eventual compliance regardless of voluntary code adoption.
For companies operating in multiple jurisdictions, the EU framework may influence global AI governance standards. The framework aligns with broader international AI governance developments, including the G7 Hiroshima AI Process and various national AI strategies, potentially establishing European approaches as international benchmarks.
Looking ahead
In the immediate term, the Code’s content will be reviewed by EU authorities: the European Commission and Member States are assessing the Code’s adequacy and are expected to formally endorse it, with a final decision planned by 2 August 2025.
The regulatory framework carries significant implications for AI development globally, as companies must balance innovation objectives with compliance obligations across multiple jurisdictions. The divergent industry responses to the voluntary code foreshadow potential compliance challenges as mandatory requirements take effect.