Meta is rolling out new measures to tackle impersonation and low-quality AI content on Facebook, responding to growing complaints that the platform has become flooded with what critics call “AI slop.”
Complaints about the issue have spread widely online and in the media, with reports from outlets such as Futurism, BBC, and Fast Company describing Facebook as increasingly overwhelmed by AI-generated spam and recycled content.
In response, Meta announced Friday that it is introducing improved detection tools to combat impersonation accounts while also updating its creator guidelines to clarify what qualifies as “original content.”
The changes build on an earlier effort launched last year when Meta announced a broader crackdown on spammy and unoriginal posts. That campaign targeted accounts that repeatedly reused other creators’ photos, videos, or text without adding meaningful new content.
Meta said the goal is to improve Facebook’s feed by prioritizing authentic creator work and reducing the spread of AI-generated spam and reposted material that has hurt the platform’s reputation. The company also outlined its strategy in more detail through its creator blog, emphasizing the importance of protecting original creators.
According to Meta, these efforts are already showing results. The company claims that views and watch time for original content on Facebook nearly doubled in the second half of 2025 compared with the same period a year earlier.
Meta also reported progress in dealing with impersonation. The company says it removed around 20 million impersonation accounts last year, and that impersonation reports targeting large creators dropped by 33%.

As part of the new push, Facebook is testing improvements to its content protection tools. These tools allow creators to detect when their Reels are reposted by impersonators and take action from a centralized dashboard.
Meta says upcoming updates will simplify the reporting process by allowing creators to submit multiple reports in one place. However, the current system mainly focuses on identifying duplicate content rather than detecting unauthorized use of a creator’s likeness — an area many creators say still needs better protection.
Facebook is far from the only platform facing challenges related to AI-generated content. This week, YouTube also announced plans to expand its AI deepfake detection tools to cover politicians, journalists, and other public figures.
Alongside the new moderation tools, Meta is also updating Facebook’s content guidelines to more clearly define what counts as “original content.”
Under the updated rules, content must be filmed or produced directly by the creator to qualify as original. Remix-style videos that add commentary, analysis, or new information may also qualify. By contrast, posts that simply re-upload someone else’s work or make only minor changes — such as adding captions, borders, or small edits — will be classified as unoriginal and deprioritized in Facebook’s feeds.
For Meta, the stakes are high. If creators feel their work is drowned out by AI-generated spam or copycat content, the company risks losing the creators who drive engagement and monetization across the platform.