Major US technology firms push back against Europe
Elon Musk’s social media platform X has refused to comply with French prosecutors’ requests for its recommendation algorithm and access to user post data.
The French government began its investigation in January after receiving complaints from MP Eric Bothorel and senior civil servants alleging foreign interference and data manipulation via X’s algorithm.
This month, the case was escalated to the National Police’s cybercrime unit, where prosecutors are investigating alleged tampering with automated data systems and “fraudulent data extraction.”
In a post, X said it “categorically denies” the accusations and refused to hand over the requested data.
“French authorities have launched a politically motivated criminal investigation into X over alleged manipulation of its algorithm and alleged ‘fraudulent data extraction,’” the company wrote.
“X believes that this investigation distorts French law to serve a political agenda and, ultimately, to restrict free speech.”
X has stood by its decision not to comply with the data request, citing its legal right to refuse.
According to a spokesman for the Paris Prosecutor’s Office, authorities are requesting access only to X’s algorithm, not to private user data, in order to carry out technical verification based on concerns raised by experts and researchers.
The spokesman told CNBC that investigators would be bound by strict confidentiality and that access would be provided through a “secure process” to ensure data protection.
However, X has questioned the neutrality of the investigation, accusing the two appointed experts, David Chavalarias, director of the Paris Complex Systems Institute (ISC-PIF), and Maziyar Panahi, AI platform leader at the same institute, of bias.
The company pointed out that Chavalarias runs a public campaign called “Escape X” encouraging users to leave the platform, and that both researchers have collaborated on projects that X claims reflect “open hostility” toward it.
Meta pushes back against EU AI regulatory efforts
Meanwhile, in another regulatory standoff in Brussels, Meta announced it will not sign the European Union’s newly published code of practice for general-purpose AI (GPAI) models.
The voluntary code, released on July 10, is intended to help AI developers comply with the upcoming AI Act, the EU’s comprehensive regulation of artificial intelligence.
Meta’s global affairs chief Joel Kaplan said on LinkedIn that the company had “carefully reviewed” the EU code but found it riddled with “legal uncertainties” and obligations that go well beyond the scope of the AI Act.
“Europe is heading down the wrong path on AI,” Kaplan wrote.
Although voluntary, the code is designed to give early signatories greater legal certainty.
Signatories are expected to increase transparency around model training, safety risks, and copyright compliance, all of which will be legally required once the AI Act comes into force.
Under the AI Act, the EU can fine companies up to 7% of their global annual revenue for violations, making compliance a high-stakes issue.
Meta’s rejection of the code reflects broader industry concerns: more than 45 major organizations recently urged the EU to delay implementation of the AI Act by two years, citing ambiguity in its compliance requirements.
In contrast, Microsoft appears to be poised to support the EU framework.
“I think it’s likely we will sign. We need to read the documents,” Microsoft President Brad Smith told Reuters.
“Our goal is to be collaborative, and one of the things we really welcome is the direct engagement by the AI Office with industry,” he said.
OpenAI and France’s Mistral have already signed the EU code.
The Trump administration has repeatedly criticized Europe’s regulation of US technology companies, equating it with censorship.