Brand Safety Implications of 2025 Meta Policy Changes
Meta’s 2025 policy shift may increase brand risks, raising concerns over ad placement, safety, and ROI.
Meta has recently announced it will no longer fact-check content posted on its sites — relying instead on a “community notes” program similar to how content is evaluated on X. In a recent interview, the company’s CEO Mark Zuckerberg acknowledged that “more harmful content” will likely appear on Meta’s platforms, which collectively reach more than 5 billion monthly users globally.
With this change comes an increased likelihood that brands’ ads will appear alongside content that may be considered controversial (hate speech, misinformation, etc.). How toxic will environments like Facebook and Instagram become? And how quickly? That remains to be seen.
Meta’s standing protocol prohibiting ads from running alongside content labeled “misinformation” may be rendered impractical or impossible to implement, and we should assume that the “misinformation” label itself will take a new form on Meta platforms, if not cease to exist outright.
While parties in the EU and South America (notably Brazil) have voiced concerns, as of Jan. 10 the changes appear to be limited to the U.S. market.
As the controversy around these changes swirls, some users will likely abandon the Meta platforms (as we saw on X from 2022–2024, with user declines estimated as high as 24%). This could impact reach and reach efficiencies for advertisers, as Meta may be forced to scramble to make up for lost revenue.
At a base level, advertisers should be prepared to see the following over the ensuing months:
- An increase in “questionable” content across Facebook, Instagram, and Threads.
- Increased likelihood of brand ads appearing near or within content that could be construed as “questionable.”
- Potential increases in CPMs and reduced campaign reach.
- Lower engagement rates.
- Increases in CPCs (see the illustration below).
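To make the pricing dynamics concrete, the sketch below works through the unit-cost math with entirely hypothetical campaign numbers (the budget, impression, and click figures are illustrative assumptions, not Meta benchmarks):

```python
# Illustrative math only: hypothetical campaign numbers, not Meta benchmarks.
# Shows how a fixed budget produces higher unit costs when deliverable
# impressions and clicks contract.

def cpm(spend: float, impressions: int) -> float:
    """Cost per 1,000 impressions."""
    return spend / impressions * 1_000

def cpc(spend: float, clicks: int) -> float:
    """Cost per click."""
    return spend / clicks

budget = 10_000.00  # fixed monthly spend, USD

# Baseline vs. a scenario with ~20% fewer impressions and ~30% fewer clicks
# (both assumptions for illustration, not forecasts).
scenarios = {
    "baseline": {"impressions": 2_000_000, "clicks": 20_000},
    "reduced reach": {"impressions": 1_600_000, "clicks": 14_000},
}

for label, m in scenarios.items():
    print(f"{label}: CPM ${cpm(budget, m['impressions']):.2f}, "
          f"CPC ${cpc(budget, m['clicks']):.2f}")

# baseline: CPM $5.00, CPC $0.50
# reduced reach: CPM $6.25, CPC $0.71
```

The point is structural: if spend is held constant while deliverable impressions and engagement shrink, unit costs rise even before any change in auction dynamics.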
What Should Our Clients Do?
We don’t anticipate these changes will manifest as rapidly or dramatically as they did on X. It is possible that many users and advertisers will experience no negative impacts.
Generally, we would counsel clients as follows:
- Monitor the content within which your ads appear.
- Consider disabling ad comments.
- Examine your appetite for appearing with content you may consider questionable.
- Weigh the performance benefits of your media investment on Meta’s platforms vs. the potential brand consequences of your presence on them.
Which Brand Safety Controls Remain?
Meta’s current basic brand safety features appear not to be directly impacted: inventory filters, placement controls, content-type exclusions, topic exclusions, and comment filtering all remain.
But the following brand safety tools, introduced as recently as October 2024, may have a more ambiguous fate:
- Comment muting.
- Deeper partnerships with IAS, DoubleVerify, and others.
- Expanded ad placement controls (blocking appearances on specified profiles, including acceptance of third-party block lists).
The overall robustness of third-party partner offerings moving forward is unclear.
As recently as October 2024, for example, Integral Ad Science (IAS) touted its products’ ability to target misinformation with a solution that relies on more than just block lists. IAS’s Total Media Quality (TMQ) measurement system purports to add a layer of AI-assisted content analysis, intended to align with the framework established by the now-defunct Global Alliance for Responsible Media (GARM) and Meta’s own policies (as of October 2024).
While we expect the state of Meta policy to be in flux for a while, adherence to areas of the GARM framework — and advertisers’ appetite for such adherence — may prove impossible without third-party verification, and indeed may be viewed as “censorship” by some. Below are the specific framework exclusion items that seem most likely to be affected:
- Promotion, incitement or advocacy of violence, death or injury.
- Behavior or content that incites hatred, promotes violence, vilifies, or dehumanizes groups or individuals based on race, ethnicity, gender, sexual orientation, gender identity, age, ability, nationality, religion, caste, victims and survivors of violent acts and their kin, immigration status, or serious disease sufferers.
- Insensitive, irresponsible, and harmful treatment of debated social issues and related acts that demean a particular group or incite greater conflict.
Block list effectiveness should be relatively unchanged, and may become increasingly important, with third-party providers in the best position to provide comprehensive lists — though it’s possible that IAS’s historically close partnership with Meta may soften how these are built and deployed. The implementation of robust profile blocking will represent a significant added cost that few of our clients currently incur.
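For teams that do take on profile blocking, much of that added cost is in maintaining and reconciling lists from multiple sources. Here is a minimal, hypothetical sketch of that housekeeping step; the file names and one-column CSV layout are assumptions for illustration, not any real vendor or Meta upload format:

```python
# Hypothetical sketch: consolidating several third-party block lists into one
# deduplicated file for upload. File names and the one-column CSV layout are
# illustrative assumptions, not a real vendor or Meta format.
import csv

def load_block_list(path: str) -> set[str]:
    """Read profile/page URLs from a one-column CSV export."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

sources = [
    "vendor_a_blocklist.csv",
    "vendor_b_blocklist.csv",
    "internal_blocklist.csv",
]

merged: set[str] = set()
for path in sources:
    merged |= load_block_list(path)

with open("merged_blocklist.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for url in sorted(merged):
        writer.writerow([url])

print(f"Merged {len(sources)} lists into {len(merged)} unique entries.")
```

Deduplicating and normalizing up front matters because overlapping vendor lists inflate apparent coverage and complicate audits.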
We expect comment muting to be unaffected.
What Other Actions Can Brands Take?
Brands may opt to discontinue use of Meta properties if the safety risks are deemed too high. For perspective, no other social platform currently offers the reach (and therefore the volume efficiency) that the Meta properties represent. For comparison, here are the biggest social media platforms’ current estimated monthly users:
- Facebook: 3.07B
- Instagram: 2.11B
- Snapchat: 850MM
- Reddit: 850MM
- X: 611MM
- TikTok: 170MM
- Bluesky: 26.5MM (total users)
Indeed, the sum of all non-Meta platforms above is only about 2.5B, or roughly half of what Meta can deliver via Instagram and Facebook (a quick check of that math follows this paragraph). If advertisers decide to abandon Meta, they should be prepared for a significant decrease in program efficiencies across the board. Having said that, interest in Bluesky is very high right now, with usership growing rapidly. TikTok may cease to be a viable platform by January 19. And though many clients don’t use Reddit (personally, professionally, or commercially), we do think it remains a viable ad platform for certain niche audiences and products.
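As a sanity check on the figures above, here is a quick computation using the estimates as listed (note that monthly user counts overlap heavily across platforms, so the sum is a ceiling on incremental reach, not a count of unique people):

```python
# Quick check of the reach comparison, using the monthly-user estimates above.
meta = {"Facebook": 3.07e9, "Instagram": 2.11e9}
non_meta = {"Snapchat": 850e6, "Reddit": 850e6, "X": 611e6,
            "TikTok": 170e6, "Bluesky": 26.5e6}

meta_total = sum(meta.values())          # 5.18B
non_meta_total = sum(non_meta.values())  # ~2.51B

print(f"Non-Meta total: {non_meta_total / 1e9:.2f}B "
      f"({non_meta_total / meta_total:.0%} of Meta's {meta_total / 1e9:.2f}B)")
# Non-Meta total: 2.51B (48% of Meta's 5.18B)
```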
Finally, it should also be noted that independent watchdog groups have become increasingly sidelined in the past few years, with GARM effectively litigated out of existence, Sleeping Giants having essentially disbanded in 2021, and Check My Ads (co-founded by Sleeping Giants founder Nandini Jammi) operating with a relatively low profile. Between the absence of effective independent watchdogs and the gradual normalization of questionable content and/or behavior, brands can anticipate markedly less blowback from unseemly ad positioning than they may have experienced in the recent past.
Ideas Collide will continue to monitor developments at Meta and other social platforms to help ensure client brand safety as well as program efficiency and effectiveness.