X Moves to Penalize Creators Posting Undisclosed AI War Content
X has announced new enforcement measures targeting creators who share AI-generated videos of armed conflict without clearly disclosing that the content is artificial.
The platform said violators will be temporarily removed from its revenue-sharing program.
Under the new policy, creators who post misleading AI-generated conflict footage without proper labeling will lose access to monetization for 90 days. Repeat violations after reinstatement will result in permanent removal from the revenue-sharing program.
The company said the decision reflects growing concerns about misinformation during times of war, noting how easily AI tools can be used to fabricate realistic but false visuals that mislead audiences.
Enforcement will rely on a combination of automated AI-detection tools and user-driven reporting through the platform’s community-based fact-checking system.
The goal, according to the company, is to reduce financial incentives for spreading deceptive content.
Critics argue the move addresses only part of a broader problem: AI-generated misinformation remains widespread outside conflict zones, including in politics and advertising, areas not covered by the new restrictions.
Source: TechCrunch