Honest question: What does it mean to "raid" the offices of a tech company? It's not like they have file cabinets with paper records. Are they just seizing employee workstations?
Seems like you'd want to subpoena source code or gmail history or something like that. Not much interesting in an office these days.
Offline syncing of Outlook could reveal a lot of emails that would otherwise only exist on a foreign server. A lot of people save local copies of documents as well.
There's a lot of data on computers that are physically in the office. This doesn't change whether you are a tech company or not.
These days many tech company offices have a "panic button" for raids that will erase data. Uber is perhaps the most notorious example.
It's sad to see incentives this perverted: investing in raid countermeasures rather than adhering to local laws.
I had the same thought - not just about raids, but about raiding a satellite office. This sounds like theater begging for headlines like this one.
France24 article on this: https://www.france24.com/en/france/20260203-paris-prosecutor...
lol, they summoned Elon for a hearing on 420
> "Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events."
I wonder how he'll try to get out of being summoned. Claim 4/20 is a holiday that he celebrates?
> Claim 4/20 is a holiday that he celebrates?
Given his recent "far right" bromance that's probably not a good idea ;)
It hadn't occurred to me that might be the reason they picked 420
Oh, that was 100% on my mind when I wrote that. I was wondering how explicit to be about whose birthday Musk might be celebrating.
It's voluntary
They'll make a judgement without him if he doesn't turn up.
[dupe] Earlier: https://news.ycombinator.com/item?id=46868998
@dang The title here is misleading. The original is not.
X didn't raid the prosecutor's offices; the prosecutors raided X's.
I think they meant to put a period after raided.
Fixed. Adding that period put the title one character over the limit, so I used an "&" instead of "and" to make room.
Surprised the EU hasn’t banned it yet given that the platform is manipulated by Musk to destabilize Europe and move it towards the far right. The child abuse feels like a smaller problem compared to that risk.
"Manipulated by Musk to destabilize Europe and move it towards the far right" - this is a very strong claim to make about a fairly open platform where people can choose what to post and who to follow.
Also, could you clarify what the difference is between the near right and the far right? Do you have any examples of the near right?
This is obviously a diversion, but anyway: a bunch of the "American and European" "patriots" that he retweets 24/7 turned out to be people from Iran, Pakistan, India, and Russia. These accounts get likes by default from accounts with "wife of vet" in the bio and a generic old_blonde_women.jpeg avatar, i.e. bots.
https://en.wikipedia.org/wiki/Astroturfing
Elon fiddles with the algorithm to boost certain accounts. Some accounts are behind an auth wall and others are not. It’s open but not even.
And this destabilizes Europe... how?
> where people can choose
How true is this really?
We certainly have data points to show Musk has put his thumb on the scale
It is completely literally true. If you look at the "Following" feed, you will only get the people you are following.
de Gaulle was right wing, Hitler was far right. Bush was right wing, Mussolini was far right
The only way you can't tell these two things apart is if you're sub-70 IQ or pretending to be.
de Gaulle would be considered insanely far right today. Many aspects of Bush (assuming GW here) would be considered not in line with America's far-right today.
Assume good intent. It helps you see the actually interesting point being made.
They wrote "Bush was right wing" (unless it was edited), so what's your point in saying "Many aspects of Bush (assuming GW here) would be considered not in line with America's far-right today." ?
Nope no stealth edit, my bad.
My point still stands, "politics change and assessments of politicians change accordingly".
Bill Clinton's crime bill would be considered far right today.
Ronald Reagan's amnesty bill would be considered far left today.
Bad assumptions are just another form of stupidity.
It used to be a principle of the left to believe in free speech. Now that is called right wing.
There’s no such thing as free speech and there never has been. To believe there is, is to fundamentally fail to understand what a society even is.
In case you're not playing dumb, the term you're looking for would be centre right.
In my opinion, the reason they raided the offices over CSAM is that there are laws on the books for CSAM and not for social manipulation. If people could be jailed for manipulation, there would be no social media platforms, lobbyists, political campaign groups, or advertisements. People are already being manipulated by AI.
On a related note, given that AI is just a tool and requires someone to tell it to make CSAM, I think they will have to prove intent, possibly by grabbing chat logs, emails, and other internal communications. But I know very little about French or international law.
It's broader and mentioned in the article:
>French authorities opened their investigation after reports from a French lawmaker alleging that biased algorithms on X likely distorted the functioning of an automated data processing system. It expanded after Grok generated posts that allegedly denied the Holocaust, a crime in France, and spread sexually explicit deepfakes, the statement said.
I made a choice not to use Grok at all (I wasn't overly interested in the first place, but wanted to see how it compared to the other tools), because even just the Explore option shows photorealistic photos and videos of CSAM, CSAM-adjacent material, and other "problematic" content (such as implied bestiality).
Looking at the prompts below some of those images shows that even now there's almost zero effort at Grok to filter prompts that are blatantly trying to create problematic material. People aren't being sneaky and clever, wordsmithing subtle cues to bypass content filtering; they're often saying "create this" bluntly and directly, and Grok is happily obliging.
Given America passed PAFACA (intended to ban TikTok, which Trump instead put in hands of his friends), I would think Europe would also have a similar law. Is that not the case?
Are you talking about this [1]? I don't know whether the EU has the same policy. That law is about control by a foreign adversary.
I think that would come down to whether the USA would be considered a foreign adversary of France. I was under the impression we had been allies since the 1800s or so, despite some little tiffs now and again.
[1] - https://www.congress.gov/bill/118th-congress/house-bill/7521
EngineerUSA needs to vastly change his tone to avoid being flagged. I vouched for it because it's broadly true, but the wording could be a LOT better.
> The child abuse feels like a smaller problem compared to that risk.
I think we can and should all agree that child sexual abuse is a much larger and more serious problem than political leanings.
It's ironic, given you're commenting on a social media platform, but it's frightening what social media has done to us, with misinformation, vilification, and echo chambers, if people can now think political leanings are worse than murder, rape, or child sexual abuse.
Almost like the EU can't just ban speech on a whim the way US far right people keep saying it can.
Big platforms and media are only good if they try to move the populace to the progressive, neoliberal side. Otherwise we need to put their executives in jail.
Simply because if you were to ban this type of platform you wouldn't need Musk to "move it towards the far right" because you would already be the very definition of a totalitarian regime.
But whatever zombie government France is running can't "ban" X anyway because it would get them one step closer to the guillotine. Like in the UK or Germany it is a tinderbox cruising on a 10-20% approval rating.
If the "French prosecutors" want to find a child abuse case, they can check the Macron couple's Wikipedia pages.
Turns out you can't build a CSAM generator, release it, and then blame the users for using the feature.
Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.
> if requested by a savvy user
Grok brought that thought all the way to "... so let's not even try to prevent it."
The point is to show just how aware X was of the issue, and that it repeatedly chose to do nothing about Grok being used to create CSAM and probably other problematic and illegal imagery.
I don't really doubt they'll find plenty of evidence during discovery; it doesn't have to be physical. The raid stops office activity immediately, and it marks the point in time after which they can be accused of destroying evidence if they erase relevant information to hide internal comms.
Grok does try to prevent it. They even publicly publish their safety prompt. It clearly shows they have disallowed the system from assisting with queries that create child sexual abuse material.
The fact that users have found ways to hack around this is not evidence of X committing a crime.
https://github.com/xai-org/grok-prompts/blob/main/grok_4_saf...
>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user.
There is no way this is true, especially if the system is PaaS-only. Additionally, the system should have a way to tell when someone is attempting to bypass its safety measures and act accordingly.
Grok makes it especially easy to do so.
What makes Grok special compared to random "AI gf generator 9001" which is hosted specifically with the intent of generating NSFW content?
> What makes Grok special
X. xAI isn’t being raided. X is. If Instagram bought a girlfriend generator and built it into its app, it would face liability as well.
>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.
If every AI system can do this, and every AI system is incapable of preventing it, then I guess every AI system should be banned until they can figure it out.
Every banking app on the planet "is capable" of letting a complete stranger go into your account and transfer all your money to their account. Did we force banks to put restrictions in place to prevent that from happening, or did we throw our arms up and say: oh well the French Government just wants to pick on banks?
Every artist is capable of drawing CSAM. Every 3D modeler can render CSAM. Ban all computers !!
You can use photoshop to create CSAM too, should that be banned?
What can be asserted without evidence can also be dismissed without evidence. Your comment is NULL.
This is really amusing to watch, because everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing).
There's nothing special about Grok in this regard. It wasn't trained to be a MechaHitler, nor to generate CSAM. It's just relatively uncensored[1] compared to the competition, which means it can be easily manipulated to do what the users tell it to, and that is biting Musk in the ass here.
[1] -- see the Uncensored General Intelligence leaderboard where Grok is currently #1: https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard
> everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing)
Well, yes. You can make child pornography with any video-editing software. How is this exoneration?
I'm not talking about video editing software; that's a different class of software. I'm talking about other generative AI models, which you can download today onto your computer, and have it do the same thing as Grok does.
> How is this exoneration?
I don't know; you tell me where I said it was. I'm just stating the fact that Grok isn't unique here, and if you want to ban Grok because of it, then you also need to ban open-weight models, which can do exactly the same thing.
Well, you couldn't sue the maker of video-editing software because someone made child pornography with it. You would, quite sanely, go after the pedophiles themselves.
We don't go after Adobe for doing that. We go after the person who did it.
Whataboutism on CSAM, classy. I hope this is the rock bottom for you and that things can only look up from here.
No. I'm just saying that people should be consistent and if they apply a certain standard to Grok then they should also apply the same standard to other things. Be consistent.
Meanwhile what I commonly see is people dunking on anything Musk-related because they dislike him, but give a free pass on similar things if it's not related to him.