What progress is being made in removing harmful content from ad-supported digital media?

Measuring brand safety:
how well are the platforms enforcing their policies?

The AANA understands the importance of brand safety to advertisers. As a member of the WFA's Global Alliance for Responsible Media (GARM), we can share important insights into the progress being made in removing harmful content from ad-supported digital media.

The GARM progress reports, which track seven participating digital platforms, give advertisers a framework for assessing how well the platforms are performing on brand safety by asking the following questions:

  • How safe is the platform for consumers?
  • How safe is the platform for advertisers?
  • How effective is the platform at enforcing its safety policy?
  • How responsive is the platform at correcting mistakes?

The latest GARM progress report shows the following progress and trends:

  1. Spam/Malware and Adult/Explicit Content continue to be the leading enforcement areas in terms of content blocked or removed and actors actioned across digital platforms. Technological developments are enabling faster, broader and more proactive enforcement of this content.
  2. Highly nuanced content, such as Crime & Harmful Acts to Individuals and Society, is heavily reliant on context and remains the most manual enforcement area: assessing it requires human moderation that cannot always be automated, which results in lower and less proactive enforcement rates for this type of content.
  3. Platforms are increasing their focus on Misinformation and Self-Harm content and sharing this data with GARM. Reporting at this finer level of sub-category detail provides a better picture of brand safety on those platforms.
  4. YouTube became the first and only media platform to receive independent accreditation for metrics intended to report on the safety of monetised content.

Read the full GARM Aggregated Measurement Report