In pursuit of both performance and fairness, brands aim to become smarter about brand safety


When it comes to brand safety and fit, marketers often struggle to balance their priorities.

Bad habits are hard to reverse. Just ask someone who’s tried to make a New Year’s resolution.

But more and more advertisers are starting to rethink their programmatic media buying practices from a more sustainable, diverse and focused perspective.

“Aligning your brand values with media investment is the next evolution in targeted advertising,” said Rachel Lowenstein, global managing director of inclusive innovation at WPP-owned Mindshare.

The question, however, is how. Like plans to hit the gym in January, there’s a big difference between saying and doing, especially when it comes to smarter brand safety.

Safety dance


Advertisers naturally seek to protect themselves against fraud, awkward placements and inadvertently funding misinformation, but in doing so, they often rely on overzealous or blunt brand safety filters that steer them away from news and diverse content.

While well-intentioned industry efforts like the Global Alliance for Responsible Media (GARM) have emerged in recent years to set global standards for brand safety and suitability in pursuit of a more sustainable digital media environment, putting those standards into practice at scale remains a big challenge, said Chris Vargo, CEO and founder of contextual categorization startup Socialcontext, which is backed by academics at the University of Colorado.

For example, last year, GARM and the IAB Tech Lab worked together to develop a content taxonomy that helps third-party verification providers avoid problematic content without hampering a publisher’s ability to monetize.

The standard identifies 11 categories of sensitive content, including hate speech, terrorism, obscenity and explicit sexual content, and four levels of risk tolerance: high, medium, low and a “floor” that sets the benchmark for the kind of content most advertisers want to avoid altogether, such as pornography.

Using the GARM standards, an advertiser could differentiate between a newsworthy article about terrorism and an article promoting terrorism.
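To make that distinction concrete, here is a minimal, hypothetical sketch of how a buyer might encode the framework as a suitability filter: a set of sensitive-content categories plus a risk tier per page, with “floor” content always excluded. The category labels, field names and helper function are illustrative assumptions, not the official taxonomy strings or any vendor’s API.

```python
# Hypothetical sketch of a GARM-style suitability filter: sensitive-content
# categories plus tiered risk levels, where "floor" content is always blocked.
# Labels, field names and classify logic are illustrative, not official.
from dataclasses import dataclass
from enum import IntEnum


class Risk(IntEnum):
    LOW = 1      # e.g., dispassionate news reporting on a sensitive topic
    MEDIUM = 2
    HIGH = 3
    FLOOR = 4    # content virtually all advertisers avoid, such as pornography


@dataclass
class PageClassification:
    url: str
    category: str  # one of the sensitive-content categories, e.g. "terrorism"
    risk: Risk     # how the page treats the topic, not merely whether it appears


def is_suitable(page: PageClassification, tolerance: Risk) -> bool:
    """Accept a placement only if its risk is at or below the brand's tolerance."""
    if page.risk is Risk.FLOOR:
        return False
    return page.risk <= tolerance


# A news analysis of a terrorist attack vs. a page glorifying the attack:
news_piece = PageClassification("example.com/analysis", "terrorism", Risk.LOW)
propaganda = PageClassification("example.com/forum-post", "terrorism", Risk.FLOOR)

assert is_suitable(news_piece, tolerance=Risk.MEDIUM) is True
assert is_suitable(propaganda, tolerance=Risk.HIGH) is False
```

Under this kind of scheme, the risk tier reflects how a page handles a topic rather than whether the topic merely appears, which is what lets the news analysis through while the propaganda stays blocked.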

“But, unfortunately, there is no really precise way to distinguish between informative content and dangerous content,” Vargo said. “Even the best contextual technology today cannot precisely analyze these elements.”

And worthy content like that can get caught in the brand safety net.

For example, according to recent research from contextual targeting company GumGum, the majority of climate change-related content is safe to advertise against, even though it gets flagged as unsafe for mentioning … climate change.

GumGum determined that over a 30-day period at the end of last year, nearly 60% of unique pages in its publisher network containing climate change-related keywords were “safe” according to GARM standards. In just one month, brands using basic blocklists to avoid climate change content missed out on 52.8 million impressions – and that’s just across GumGum’s own inventory.

Not safe for work

Brands that don’t take a more refined approach to brand safety and suitability risk unnecessarily blocking content that addresses important social or environmental issues – and diminishing the scale (and performance) of their campaigns in the process.

There’s no reason why objective, factual or positive reporting on potentially sensitive topics should lose the opportunity to monetize because of an overly restrictive blocklist.

“Exclusion lists as a main strategy are archaic, [and] holding companies and partners continue to be urged to refine or reduce their use as much as possible,” said David Murnick, a member of the Brand Safety Institute advisory board and former executive vice president of investment media operations and partnerships/brand safety lead at Dentsu.

“If you put broad terms like News, LGBTQ and Black Lives Matter on an exclusion list, you’ll end up removing a lot of good, positive or neutral content that is covered in a safe and generally appropriate manner,” Murnick said.

And some of the content overshadowed could even have a positive impact on a brand’s bottom line.

Ads placed alongside content about sustainability, recycling and nature conservation – some of which gets classified as climate change-related and therefore blocked – are associated with a direct increase in purchase intent compared to content about climate change denial, according to research published in December by the Brand Safety Institute, Nielsen Innovate and AdVerif.ai, an Israeli startup that builds tools to fight hate speech and fake news.

With that in mind, brands need to strike a balance that allows them to maintain reach without monetizing junk.

“The real practical challenge is the scale,” said Or Levi, CEO and founder of AdVerif.ai. “If you only target sites with positive content, you probably won’t have that much scale, which means you also want to look at sites with neutral, topical content while avoiding places that promote misinformation.”

Safety valve

Socialcontext has a new approach to help ad buyers identify news on social and diversity issues that would typically be overblocked based on traditional brand safety and fit methods.

Rather than starting with a list of “problematic” keywords, Socialcontext develops definitions of concepts, such as gender equality and racial equality, using academic research. Professors in the field are invited to validate the definitions.

University of Colorado graduate students take those codified descriptions and classify thousands of articles from the open web as pro-diversity or not. These manually classified articles then serve as a training set for machine learning models. Once the models detect content promoting racial or gender equity, for example, advertisers can unblock articles that their blocklists would otherwise have excluded.
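Socialcontext hasn’t published its model internals, but the workflow described above can be sketched in broad strokes: hand-labeled articles train a text classifier, and its predictions rescue pages a keyword blocklist would otherwise drop. The snippet below is a minimal illustration assuming scikit-learn; the sample data, blocklist terms, confidence threshold and function names are all hypothetical.

```python
# Minimal illustration of the labeled-data-to-unblocking workflow, not
# Socialcontext's actual model. Sample texts, blocklist, threshold and
# function names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Articles hand-coded by reviewers: 1 = promotes gender/racial equity.
labeled_texts = [
    "New scholarship program expands access for women in engineering",
    "Op-ed praising a city's pay-equity audit and its results",
    "Forum thread mocking workplace diversity initiatives",
    "Post dismissing pay-gap research as a hoax",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a simple linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(labeled_texts, labels)

# The blunt keyword approach the model is meant to override.
BLOCKLIST = {"gender", "race", "black lives matter"}


def allow_placement(article_text: str, threshold: float = 0.8) -> bool:
    """Unblock a keyword-flagged article if the model is confident it is pro-equity."""
    keyword_blocked = any(term in article_text.lower() for term in BLOCKLIST)
    if not keyword_blocked:
        return True
    pro_equity_prob = classifier.predict_proba([article_text])[0][1]
    return pro_equity_prob >= threshold
```

In practice the training set would be the thousands of hand-classified articles mentioned above rather than a toy list, but the shape of the pipeline is the same: the classifier acts as a second opinion on pages the blocklist would otherwise discard.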

“For the right advertiser, sponsoring content about gender equality or women’s athletics doesn’t just help with their DEI initiatives,” said Vargo of Socialcontext. “It also helps them reach the right audience.”

Mindshare is working with Socialcontext to gather additional publisher information for a more nuanced approach to brand safety.

The point is, despite all the ink spilled on the problem, publishers – especially minority-owned publishers – are still forced to contend with overly aggressive blocklists that lead to unfair demonetization. Publishers serving the LGBTQ community, for example, are at a disadvantage because their content regularly uses terms such as “transgender,” “lesbian” or “bisexual.”

“As part of intentional investment and inclusion in your overall media mix, I posit that brand safety needs to evolve further to respond to modern issues, and that’s something we’re working on,” said Lowenstein of Mindshare. “Despite all the good that brand safety does, there are still challenges to overcome in order to best protect human safety.”

