
Digging Deeper Into Brand Safety


Overview 

Brand safety is a key factor whenever a creator works on a company’s campaign. 

On a macro level, brands strive to ensure that their content, organic or sponsored, does not appear adjacent to blatantly objectionable or illegal content, such as pornography, child exploitation, unlawful narcotics sales, violence, racism or sexism. 

However, on a micro level, many marketers take their efforts one step further to shield their content and ads from subject matter that, while not illegal, does not align with the message their brand wishes to convey. This is where tools such as Open Influence’s Go Prism creator listening platform can provide a valuable service. 

Brand Safety Efforts by Social Platforms 

Brand safety is also a priority for social platforms, as instances of companies’ content and potentially offensive fare appearing on the same screen will cause those companies to think twice about where they allocate their marketing dollars. 


TikTok’s Brand Safety Center provides details on the video creation platform’s partnerships with industry organizations Brand Safety Institute, Global Alliance for Responsible Media, IAB, and Trustworthy Accountability Group, as well as with third-party partners DoubleVerify, Integral Ad Science, and Zefr. 

The TikTok Inventory Filter, introduced last July, gives brands more control over content that appears next to their ads in the platform’s For You feed, assigning one of these four risk levels to content: 

  • Floor Content: Content that violates TikTok’s community guidelines, terms of service, and/or intellectual property policy. 
  • High-Risk Content: Glamorization or gratuitous depiction of mature themes. 
  • Medium-Risk Content: Fictional or entertainment depiction of mature themes. 
  • Low-Risk Content: Educational depiction of mature themes. 

Those risk levels are then used to populate the following inventory tiers: 

  • Full Inventory: Excludes Floor Content removed by moderation against TikTok’s community guidelines and some High-Risk Content, but ads may appear next to some content featuring mature themes. 
  • Standard Inventory: Excludes Floor Content and High-Risk Content. Ads will appear next to content that is appropriate for most brands but may contain some mature themes. 
  • Limited Inventory: Excludes all risk levels. Ads will appear next to content that does not contain mature themes. 
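To make the relationship between risk levels and tiers concrete, here is a minimal sketch in Python. The tier and risk-level names, the EXCLUDED_BY_TIER table, and the eligible_for_adjacency helper are illustrative assumptions, not part of any TikTok API, and the sketch does not capture nuances such as Full Inventory also filtering out some High-Risk Content.

    # Illustrative sketch of the risk-level / inventory-tier relationship described
    # above. Names and logic are hypothetical, not TikTok's actual implementation.
    EXCLUDED_BY_TIER = {
        "full": {"floor"},                              # Full Inventory
        "standard": {"floor", "high"},                  # Standard Inventory
        "limited": {"floor", "high", "medium", "low"},  # Limited Inventory excludes all risk levels
    }

    def eligible_for_adjacency(content_risk: str, tier: str) -> bool:
        """Return True if content at this risk level may appear next to ads
        bought under the given inventory tier (illustrative only)."""
        return content_risk not in EXCLUDED_BY_TIER[tier]

    # Example: low-risk (educational) content may appear under Standard Inventory,
    # but not under Limited Inventory, and Floor Content is never eligible.
    assert eligible_for_adjacency("low", "standard")
    assert not eligible_for_adjacency("low", "limited")
    assert not any(eligible_for_adjacency("floor", tier) for tier in EXCLUDED_BY_TIER)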

Meta (covering Facebook, Instagram, mobile monetization platform Meta Audience Network and messaging app WhatsApp) offers its own Inventory Filter feature, along with other brand safety options such as: 

  • Block Lists: Brands can upload lists of URLs where they do not want their content to appear. 
  • Topic Exclusions: Brands can ensure that their ads are not displayed on in-stream Facebook videos about gaming, news, politics, or religious/spiritual content. 
  • Content Type Exclusions: Brands can specify that their Facebook in-stream video ads not appear in live videos or in videos from publishers that are not on the platform’s publisher list. 
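As a rough illustration of how such settings might be represented in code, here is a minimal Python sketch of a campaign-level brand safety configuration. The class, its field names, and the placement_allowed check are hypothetical and do not correspond to fields in Meta’s Marketing API.

    # Hypothetical campaign-level brand safety settings modeled on the options
    # described above. Field names are illustrative, not Meta Marketing API fields.
    from dataclasses import dataclass, field

    @dataclass
    class BrandSafetySettings:
        inventory_filter: str = "standard"                        # e.g. "full", "standard", "limited"
        block_list_urls: set[str] = field(default_factory=set)    # URLs where ads must not appear
        excluded_topics: set[str] = field(default_factory=set)    # e.g. {"news", "politics"}
        exclude_live_videos: bool = True                          # content-type exclusion
        exclude_unlisted_publishers: bool = True

    settings = BrandSafetySettings(
        block_list_urls={"example-tabloid.com"},
        excluded_topics={"gaming", "news", "politics", "religion"},
    )

    def placement_allowed(publisher_url: str, topic: str, is_live: bool) -> bool:
        """Check a candidate in-stream placement against the brand's settings."""
        if publisher_url in settings.block_list_urls:
            return False
        if topic in settings.excluded_topics:
            return False
        if is_live and settings.exclude_live_videos:
            return False
        return True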

Despite the chaos that has prevailed since Elon Musk took over Twitter last October, the platform still has robust brand safety policies in place. Recent additions from Musk’s scaled-down staff include Adjacency Controls, pre-bid controls that help brands ensure their ads do not appear adjacent to tweets containing keywords they wish to avoid in relevance-ranked timelines, which account for the bulk of timelines on the platform. 
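As a rough sketch of the pre-bid idea, the snippet below declines to bid on an ad slot when nearby tweets contain keywords the brand wants to avoid. The should_bid function and its simple tokenization are illustrative assumptions, not Twitter’s actual Adjacency Controls implementation.

    # Illustrative pre-bid adjacency check: skip bidding on an ad slot if any of
    # the tweets ranked around it contain keywords the brand wants to avoid.
    import re

    def should_bid(adjacent_tweets: list[str], avoided_keywords: set[str]) -> bool:
        """Return False if any adjacent tweet mentions an avoided keyword."""
        for tweet in adjacent_tweets:
            tokens = set(re.findall(r"[a-z']+", tweet.lower()))
            if tokens & avoided_keywords:
                return False
        return True

    # Example: a brand avoiding crash coverage in the relevance-ranked timeline.
    timeline_slice = ["Breaking: major airline crash reported", "Loving this new recipe!"]
    print(should_bid(timeline_slice, {"crash", "layoffs"}))  # False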

Twitter also said last December that it would further expand its brand safety partnerships with DoubleVerify and IAS, planning to roll out post-bid brand safety reporting for tweets in the home timeline at scale to its advertising partners during the first quarter of this year.  

The platform said at the time that this reporting would give advertisers transparency into the context in which their ads are served, in line with the GARM Brand Safety and Suitability Framework. 

What Is GARM? 

The World Federation of Advertisers launched the Global Alliance for Responsible Media at the Cannes Lions International Festival of Creativity in June 2019 as a cross-industry initiative to combat the monetization of potentially harmful content on social platforms, improving online safety for both advertisers and consumers. 

GARM is made up of advertisers, agencies, industry organizations, media companies, and platforms. Working groups formed by its members focus on specific challenges and issues and develop solutions for the wider community to vote on and adopt. 

The WFA explained on the GARM website, “GARM’s role is to act as the forum for the creation of solutions that will improve online safety for both consumers and advertisers. By creating a working forum where all parts of the online advertising system can meet, GARM’s ambition is to get the digital media ecosystem working together on the shared priorities that will lead to the removal of harmful content from advertiser-supported social media.” 

Brand Safety on a Micro Level 

Open Influence’s Go Prism creator listening platform gives brands a thorough social media background check on creators they are considering partnerships with, analyzing those creators’ posts to surface potentially objectionable content and examples of unusual engagement. 

Go Prism lets brands go one step further, though, enabling them to vet creators’ activity history across six distinct categories that, while not necessarily illegal or in violation of social platforms’ respective regulations, may not jibe with the brand image they are trying to project: alcohol, drugs, politics, profanity, religion, and sexual content.
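To picture the kind of screening involved, here is a toy Python sketch that counts how often a creator’s past posts touch each of those six categories. The keyword lists and the screen_posts helper are purely illustrative assumptions; Go Prism’s actual analysis is proprietary and relies on far richer signals than keyword matching.

    # Toy keyword-based screen over a creator's post history, illustrating the six
    # suitability categories described above. Keyword lists are purely illustrative.
    from collections import Counter

    CATEGORY_KEYWORDS = {
        "alcohol":   {"beer", "wine", "vodka", "cocktail"},
        "drugs":     {"weed", "cannabis", "edibles"},
        "politics":  {"election", "senator", "congress"},
        "profanity": {"damn", "hell"},
        "religion":  {"church", "mosque", "temple"},
        "sexual":    {"onlyfans", "nsfw"},
    }

    def screen_posts(posts: list[str]) -> Counter:
        """Count how many historical posts touch each suitability category."""
        hits = Counter()
        for post in posts:
            words = set(post.lower().split())
            for category, keywords in CATEGORY_KEYWORDS.items():
                if words & keywords:
                    hits[category] += 1
        return hits

    history = ["Cocktail night with friends!", "Vote in the election tomorrow"]
    print(screen_posts(history))  # Counter({'alcohol': 1, 'politics': 1})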

“Brand safety is important to ensure that the creators that brands select are who they say they are. The brand safety component of Go Prism functions as a complete social media background check.” 

Steve Kilsdonk, SaaS sales executive, Open Influence 

Go Prism’s brand safety functionality saved a brand from a potentially embarrassing public relations fiasco last year. 

A leading baby products brand had zeroed in on a creator whose content was focused on motherhood, and it seemed like a perfect match, until Go Prism surfaced a host of older posts referencing the creator working as an escort, appearing in pornography and discussing drug use. Without Go Prism, the brand would have had to spend hours examining years of content across multiple social platforms to make that discovery on its own.

Open Influence, a global creator marketing agency, is always looking for ways to create memorable brand experiences with creators both online and offline. Start your free trial of OI’s Go Prism creator listening platform today.
