Meta opposes European Union AI Code of Practice

  • Writer: Privacy Law In Canada
  • Jul 20
  • 2 min read

You’ve probably heard that Meta calls the EU’s AI Code of Practice “overreach.”  No surprise there, given that Meta has invested billions in AI and the Code puts limits on AI use, particularly with respect to customers’ personal information.  But what does the Code say?


It’s called the General-Purpose AI Code of Practice, or GPAI Code.  The Code was published July 10, 2025, drafted by experts and designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models.  If a company agrees to be bound by the Code, it is signaling to customers and the world that it is compliant with the Act.  The alternative appears to be litigation.


The Act is binding law, so Meta has to comply.  However, Meta can do its thing, wait for regulators to levy fines and penalties, and then litigate the interpretation of the Act in the courts.  Completely legal.  As well, Meta can use its considerable clout to bring about changes to the Act – can you imagine the EU banning the use of Facebook?  Won’t happen, so the EU’s enforcement toolbox comes down to fines and penalties.


If the fines are large enough (Meta paid $5 billion to the FTC in a privacy settlement, and then settled with investors for a reported $8 billion over claims of wasted money), then maybe Meta will comply.  However, it still may mean years of litigation, and by the end the horses will have left the barn.

What’s in the Act?  Certain AI applications are banned: biometric identification and categorization of people; cognitive behavioural manipulation; social scoring; and facial recognition in public spaces.


AI activity is broken out into risk levels – high risk for AI that affects safety.  High risk has two categories: products, such as toys, cars, and medical devices; and other areas, such as the management and operation of critical infrastructure, education, employment, law enforcement, and border control.


Generative AI such as ChatGPT is not categorized as high risk; however, the Act still has rules for it.  On the other hand, AI models like GPT-4 may be high risk.


On the surface the Act seems like a good thing.  It would be nice if Meta were to publicly explain its concerns.  Government is expensive and slow – the gold-plated turtle.  Seeking regulatory approval for every new innovation would be impractical; that might be one concern.  Another may be that Meta needs your personal information to turn a profit.
