Many brands and agencies are using a blunt-knife approach to brand safety, resulting in unintentional discrimination in advertising, writes Adapt's Liam McNamee
Over the past few years, we’ve seen an increase in the number of brands demonstrating their social stance. And today, these brands are choosing to buy their media from a diverse range of creators and ethical sources.
However, businesses are under pressure to prove their integrity, and they fear their advertisements might appear next to content that could damage their reputation.
Brand integrity is of the utmost importance: 54% of consumers said they would think negatively of a brand that runs ads alongside content created by people whose morals don’t align with their own.
You can see why brand safety is such a hot topic in marketing, and this is especially apparent in the programmatic world.
If you’re unsure what brand safety involves, it essentially encompasses all the measures advertisers take to protect their brand from the backlash they could face by running ads next to harmful content.
The problem with over-blocking
However, over-blocking content can do more harm than good. The vast majority of businesses still take a blunt, blanket approach when it comes to protecting their brand’s reputation.
This overprotective approach has led to a large proportion of content being excluded even when it’s safe and suitable for the brand. We see this again and again in negative keyword lists and blocklists.
Industry-standard blocklists contain a vast array of terms related to race, ethnicity and sexual orientation. As Channel Factory’s Conscious Project put it in 2021: “Industry standard advertising practices [are] unfairly penalising content creators within various groups, including the LGBTQ+, BIPOC and API communities, as well as content relating to important aspects of the human experience, including social issues, mental health and wellness and identity.”
How detrimental are negative keyword lists and blocklists?
In 2019, CHEQ’s ‘How Keyword Blacklists are Killing Reach and Monetization’ report examined what type of content was being excluded by an industry-standard blacklist of 2,000 keywords.
The report found that 57% of safe articles were incorrectly flagged and blocked from serving ads, because brands were using overprotective negative keyword lists and blocklists.
These lists are incredibly harmful to creators and publishers, who are left unable to monetise their own content.
Other statistics the CHEQ study uncovered included:
- 73% of safe LGBT news-related content was blocked due to keywords like “gay”, “lesbian”, “bisexual” and “same-sex marriage”
- 75% of safe history-related news content was blocked
- 65% of content relating to movies and TV was blocked
Why blocklists have become unethical
Most companies do not update blocklists regularly, which is why a great deal of suitable content is deemed harmful.
Times have changed drastically over the past few years. A list of negative keywords that may have been necessary in 2017 could be absolutely pointless today.
Unfortunately, these outdated blocklists are unfairly harming marginalised communities across the globe, and the problem needs urgent attention from our industry.
This overwhelming fear has led brands into the trap of unethical exclusion: brands and agencies are blocking all content related to marginalised groups and communities.
As a result, content related to gender, sexual orientation, race, ethnicity, social issues and identity (to name a few) is being demonetised, when it should be able to be monetised like any other safe content.
Positioning yourself in the market as a diverse and inclusive business has never been more important. Today, 60% of consumers prefer to associate themselves with companies that actively show they are committed to creating an online experience that is inclusive for all.
How to create inclusive advertising
Of course, it’s important to focus our attention on brand safety. But as an industry, we need to think of new ways to implement brand safety without excluding creators based on their sexual orientation, race, or ethnicity.
This new approach needs to be one that can monetise positive content, which will, in turn, benefit wider society.
1. Review your blocklists regularly
Your blocklists and negative keywords need to be tailored to your brand and checked frequently. Don’t just keep adding new words; remove the unnecessary ones too.
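A practical starting point is to audit your list against a sample of content you already know is brand-safe and see which keywords are doing the most unnecessary blocking. The sketch below is a minimal, illustrative example in Python; the file names and the simple substring matching are assumptions, and a real setup would use whatever export your ad platform or verification partner provides.

```python
# Minimal sketch of a blocklist audit, assuming you can export your negative
# keyword list and a sample of known brand-safe article titles/summaries as
# plain text files (one entry per line). File names here are hypothetical.
from collections import Counter

def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip().lower() for line in f if line.strip()]

def audit_blocklist(keywords, safe_articles):
    """Count how often each negative keyword would block content you consider safe."""
    hits = Counter()
    for article in safe_articles:
        for keyword in keywords:
            if keyword in article:
                hits[keyword] += 1
    return hits

if __name__ == "__main__":
    keywords = load_lines("negative_keywords.txt")
    safe_articles = load_lines("safe_article_sample.txt")
    hits = audit_blocklist(keywords, safe_articles)

    blocked = sum(1 for a in safe_articles if any(k in a for k in keywords))
    print(f"{blocked}/{len(safe_articles)} safe articles would be blocked")

    # Keywords that block the most safe content are the first candidates for review
    for keyword, count in hits.most_common(10):
        print(f"{keyword}: blocks {count} safe articles")
```

Keywords that repeatedly block content you know to be safe are the obvious candidates for removal at your next review.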
2. Review your brand safety processes
It’s always worth reviewing your existing brand safety processes. You can then reshape your strategy so that it actively encourages inclusivity.
3. Whitelist creators
You should always search for and connect with brands and creators that share your values. Once you have added them to your whitelist, your ad spend helps to monetise positive and inclusive content.
4. Consider your brand safety partnerships
Working with a third-party brand safety and suitability partner can strengthen your own measures. These partners usually rely on human input, which helps ensure content is categorised with diversity in mind.
In conclusion...
As an industry, we need to embrace changes to brand safety measures to ensure we are not blocking words, phrases or languages in a way that excludes minority groups.
All voices should be represented in the content you fund, not just a select few. When that content can be monetised, creators can keep producing work that highlights diverse and marginalised communities.
It’s time to move forward with your approach to brand safety, because when you do, you will be actively helping to create an online world that promotes positivity and inclusivity.