Progress on the removal of hate speech

Update from Global Alliance for Responsible Media (GARM)

The Global Alliance for Responsible Media (GARM), the industry body consisting of the world’s biggest advertising companies — including a few Big Tech companies — has agreed to evaluate some issues collectively, including deciding how to better define hate speech across the entire industry.

In addition to the positions individual advertisers and members take on hate speech, the AANA believes that a solution driven at a global level through GARM will be the best way to align globally based social media platforms with the needs of advertisers worldwide. The WFA is coordinating this work through GARM, and the AANA receives regular updates and opportunities to provide input.

The WFA outlines its approach to GARM below, followed by an update specifically on the Facebook situation from Will Easton, Managing Director, Facebook ANZ.

Firstly, here is an update from the CEO of the World Federation of Advertisers (WFA), Stephan Loerke:

“Just over a year ago, we launched the Global Alliance for Responsible Media (GARM) with the overarching goal of removing harmful content online and eliminating inadvertent advertising support of it. The GARM charter sets out an industry-wide plan to drive more consistency, transparency and control. And the first GARM key building blocks are poised to be delivered later this month.

Over the last few weeks, the combination of the COVID pandemic, societal and racial justice protest movements, and the upcoming US presidential elections has exposed yet again the vulnerabilities of the platforms, and has revealed again the urgent need for them to put in place a more robust and consistent framework for fighting harmful content. Right now, many advertisers and their agencies are reviewing platform efforts in addressing these critical issues and considering their next moves.

Given those recent events, we see a need to rejig the GARM agenda and prioritize certain areas such as hate speech, while pushing forward on the deliverables we’ve already committed to.

These are the areas that we’re now focusing on in our work with platforms:

  1. Driving immediate adoption of the GARM's definitions for harmful content, with a focus on Hate Speech and Acts of Aggression
  2. Harmonizing the methodologies platforms use to report on harmful content
  3. Creating transparency via independent verification of measurement and reporting
  4. Improving adjacency controls that allow advertisers to make better informed decisions on the content and conversations they want to put their brands next to.

As you may have seen in the press, we are actively engaged on this accelerated agenda. We are doing this work via the GARM SteerTeam and Working Groups.

Our goal is to drive all platforms to develop a consistent plan to address the steps above in the coming weeks. And to then hold them to account in delivering those actions within the agreed timeframe. We anticipate that we will be able to update you on these aligned action plans next month, with a detailed walk-through in September.

We encourage all brands to join and become involved in the GARM Community to help shape the solutions that drive consumer safety and societal health.”

And the Facebook update from Will Easton:

GARM is activating and accelerating the work of their cross-industry working groups to focus on the following four areas:

  1. Definitions: Agree on the 11 standard definitions of harmful content outlined in the brand safety floor, with immediate focus on Hate Speech and Acts of Aggression
    • Our Status – Our goal is to align with the industry on standard definitions for the brand safety floor in August 2020. Meetings have already taken place with our policy team and GARM, and I am confident we are having the right conversations.
  2. Measurement: GARM is asking for 3 key metrics: Prevalence, Incidence, and Adjacency of hate speech and acts of aggression by platform and product type.
    • Our Status
      • Prevalence: We expect to report prevalence for hate speech in the November CSER (Community Standards Enforcement Report), barring additional COVID-19 challenges.
      • Content Actioned: In our CSER, we already report how much content we took action on and how much of it we found before a user reported it (our next report will be released in August 2020).
      • Adjacency: We are exploring methodologies to do this, but most likely won’t have anything to share until after we have the prevalence number (likely November 2020). We recognize the desire to have a long-term plan to report on this on a consistent basis.
  3. Audits: GARM wants agreement on independent, third-party measurement, audits and oversight.
    • Our Status –
      • We will undertake an independent, third-party audit of our content moderation systems to validate the numbers we publish in our CSER and confirm they are accurate. The RFP will be issued in August 2020 and the audit is expected to be conducted in 2021.
      • MRC Audit of Advertiser Controls – The MRC's goal is to cover all Facebook owned and operated properties with an Accreditation assessment of Brand Safety processes, reporting, and compliance with relevant MRC standards. This audit will be comprehensive and will take time. We are in continued discussions with the MRC on the overall process, plan to report progress by mid-August, and will communicate a plan for further marketplace updates thereafter.
  4. Suitability Controls: GARM is asking us to enhance advertiser controls on adjacency, giving advertisers options to ensure ads are not on or near Hate Speech and Acts of Aggression.
    • Our Status
      • We do have controls on surfaces like Audience Network, Instant Articles, and in-stream video. For Feed, we have met with many advertisers and are evaluating what such controls could look like. We know the best approach is to continue to focus aggressively on detecting and removing violating content from our platform, and our next report card on this will be in August.