Meta Urged to Crack Down on AI Deepfakes During Wars as Oversight Board Flags ‘High-Risk’ Misinformation

Meta’s Oversight Board urges stronger rules, AI detection tools and ‘High Risk AI’ labels to curb deepfake misinformation spreading rapidly during wars and global crises.

As AI-generated deepfakes increasingly flood social media during global conflicts, Meta is facing growing pressure to overhaul how it handles such content. The company’s independent Oversight Board has called for stronger policies, better detection tools and clearer labels to help users identify AI-generated media—particularly during wars and crises where misleading content can quickly influence public perception.

In a detailed ruling issued on March 10, the Oversight Board said Meta must strengthen its moderation framework to tackle the rapid spread of AI-manipulated images, videos and audio. The panel warned that deepfakes shared during conflicts pose a serious risk of misleading the public and could potentially shape narratives around real-world events if left unchecked.

The recommendations came after the Board reviewed an AI-generated video posted on Facebook during the 12-day Israel-Iran war in June 2025. The clip falsely showed widespread damage in the Israeli city of Haifa and included text reading “Live now – Haifa Towards Down.” Fact-checkers later confirmed that the footage was entirely fabricated using artificial intelligence.

Despite the video’s misleading nature, Meta initially allowed the content to remain online. The Oversight Board overturned that decision, stating the post should have carried a prominent “High Risk AI” label because it had the potential to mislead audiences during a sensitive geopolitical conflict.

According to the Board, the scale and sophistication of AI-generated content are increasing rapidly, making it harder for users to distinguish authentic footage from fabricated material. The risk becomes particularly severe during conflicts or political crises, when viral videos can shape public opinion before fact-checkers intervene.

The ruling comes at a time when global tensions remain high, including the ongoing war involving the United States, Israel and Iran that began on February 28. The conflict has already triggered a surge of online war footage, much of which has been widely shared before being exposed as AI-generated or digitally manipulated.

To address these risks, the Oversight Board urged Meta to adopt stronger “content provenance” systems. These tools would provide clear information about how a piece of media was created or altered, allowing users to understand whether AI was involved in producing the content.

The panel also criticized Meta's current labeling system, which typically marks AI-modified posts with an "AI Info" tag. According to the Board, this approach is not robust enough to handle the growing volume and complexity of synthetic media circulating online.

One major concern highlighted in the ruling is Meta’s inconsistent use of the Coalition for Content Provenance and Authenticity (C2PA) standards, even for media generated using its own AI tools. The Board said stronger and more consistent implementation of such standards is essential to ensure transparency.
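
To give a sense of what C2PA provenance data looks like at the file level, the sketch below is a simplified, assumption-laden illustration (not Meta's implementation): in JPEG images, C2PA manifests are typically embedded as JUMBF boxes inside APP11 marker segments, and the snippet scans for those segments heuristically. The file name is hypothetical, and a real pipeline would use a full C2PA SDK to parse and cryptographically verify the manifest rather than merely detect its presence.

import struct
import sys

def find_c2pa_segments(path: str) -> list[bytes]:
    """Return payloads of APP11 segments that appear to carry
    C2PA/JUMBF data in a JPEG file (heuristic check only)."""
    payloads = []
    with open(path, "rb") as f:
        data = f.read()

    # A JPEG file starts with the SOI marker 0xFFD8.
    if not data.startswith(b"\xff\xd8"):
        raise ValueError("not a JPEG file")

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                    # lost marker sync; stop scanning
        marker = data[i + 1]
        if marker == 0xD9:           # EOI: end of image
            break
        if 0xD0 <= marker <= 0xD7:   # RSTn markers carry no length field
            i += 2
            continue
        # Other markers are followed by a 2-byte big-endian length
        # that includes the length bytes themselves.
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB:           # APP11, where JUMBF/C2PA data is embedded
            # Heuristic: look for the JUMBF box type and the "c2pa" label.
            if b"jumb" in payload and b"c2pa" in payload:
                payloads.append(payload)
        if marker == 0xDA:           # SOS: compressed image data follows
            break
        i += 2 + length
    return payloads

if __name__ == "__main__":
    # Hypothetical file name, used purely for illustration.
    target = sys.argv[1] if len(sys.argv) > 1 else "uploaded_frame.jpg"
    segments = find_c2pa_segments(target)
    if segments:
        print(f"Found {len(segments)} APP11 segment(s) that look like C2PA provenance data.")
    else:
        print("No C2PA markers found; the file may be unsigned or have had metadata stripped.")

The absence of such markers is itself part of the Board's concern: provenance data is easily stripped when media is re-uploaded or screen-recorded, which is why the ruling stresses consistent implementation across Meta's own AI tools as well as detection methods that do not rely on metadata alone.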

Among its recommendations, the Board proposed that Meta introduce new warning labels such as “High Risk” and “High Risk AI” for potentially misleading synthetic media. Some members also suggested that such posts should be downgraded in visibility or removed entirely to prevent widespread misinformation.

The panel further called on Meta to invest in advanced tools capable of detecting AI-generated content across multiple formats, including images, audio and video. It also recommended clearer penalties for users who fail to disclose that their content has been digitally created or manipulated.

In addition, the Board suggested creating a dedicated Community Standard specifically addressing AI-generated media. This policy would define rules for labeling synthetic content, preserving metadata that reveals how content was produced and ensuring proper disclosure when AI tools are used.

At the same time, the Oversight Board stressed that stronger moderation should not come at the cost of free expression. Instead, it argued that technology companies must find ways to help users identify deceptive AI content without broadly restricting speech.

Although the Board’s rulings on specific moderation cases are binding, its broader policy recommendations are not mandatory. However, Meta is required to publicly respond to each recommendation within 30 days, outlining whether it plans to implement the suggested changes.
