
YouTube Uses “Raters” To Gauge “Borderline Content” For Shadow Bans

We knew that YouTube introduced algorithms back in January of 2019 to reduce recommendations of “extremist” or “borderline” content. Well, at the recent Code Conference, which took place at the same time as E3, we learned a bit more about YouTube’s recommendation feature and how it will shadow ban “borderline content”.

During a Q&A session, Kevin Roose, a tech columnist who penned a hit-piece against YouTube for the New York Times, asked YouTube CEO Susan Wojcicki if the platform was having a “radicalizing” effect on the way people engage with the content. Wojcicki talked completely around the question, but did offer some insight into the way the shadow ban filters work for recommendations, which was highlighted in a tweet by Josh Pescatore.

If you’re unable to view the video, Wojcicki expounds on the recommendation filters for “borderline content” by talking about the “raters” used to gauge that content, saying…

“We offer – as you know and you’ve researched – a broad range of opinions. And we have looked in all of these different areas across the board and we see that we are offering a diversity of opinions. So when people go and they look up one topic – whether it’s politics, or religion, or knitting – we’re going to offer a variety of other content associated with it.

“But we have taken these radicalization concerns very seriously, and that’s why at the beginning of January we introduced some new changes in terms of how we handle recommendations. I know you’re familiar with them; you’ve referenced them. But what we do – just for [a bit] of background for other people – is that we basically have an understanding of what’s content that’s borderline. When that content is borderline we determine that based on a series of different interv– [cuts off] raters that we use that are representative of different people across the U.S., and when we determine that we’re able to build a set of understanding of what’s considered borderline content and then reduce our recommendations.

“So we have reduced by 50% the recommendations that we’re making of borderline content. And we’re planning to roll that out to all countries – to 20 more countries – this rest of the year.”

Now, if you want to view the full interview, you can do so over on Recode’s YouTube channel, where they have the full 41-minute segment up.

But to break down the important part of the exposition regarding this topic, we need to go back over Wojcicki’s stumble and recovery, when she first started to say “interviews” but changed it to “raters”.

So if we take, prima facie, what was first mentioned before addressing what was said thereafter, it would appear that YouTube gauges content based on interviews it holds with certain people – presumably from different demographics – across the U.S., to determine whether or not certain kinds of content actually qualify as “borderline content”.

Whatever answer the interview respondents return is likely how YouTube proceeds.

However, when we look at the situation from the perspective of YouTube using “raters” – or people who rate the content – a slightly different picture develops of how the recommendation feature is being shaped for users. “Raters” specifically seems to point to people who view and rate content on YouTube; based on those ratings, the algorithm then determines whether or not content deemed “borderline” is worth recommending.
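YouTube hasn’t published how any of this actually works under the hood, but the flow Wojcicki describes – raters label content, the system builds an understanding of what counts as “borderline,” and recommendations of that content get cut by roughly 50% – is easy to sketch in rough terms. Below is a minimal, purely hypothetical Python illustration; every name and number in it (borderline_score, RECOMMENDATION_PENALTY, the 0.5 threshold, and so on) is our own invention and nothing YouTube has confirmed.

```python
# Purely illustrative sketch of a rater-driven "borderline" downranking
# pipeline, based only on Wojcicki's public description. None of these
# names, thresholds, or numbers come from YouTube itself.

from statistics import mean

BORDERLINE_THRESHOLD = 0.5     # hypothetical cutoff for "borderline"
RECOMMENDATION_PENALTY = 0.5   # the "reduced by 50%" figure from the quote

def borderline_score(rater_labels: list[float]) -> float:
    """Average the raters' judgments (0.0 = fine, 1.0 = borderline)."""
    return mean(rater_labels) if rater_labels else 0.0

def adjusted_rec_weight(base_weight: float, rater_labels: list[float]) -> float:
    """Cut a video's recommendation weight if raters deem it borderline."""
    if borderline_score(rater_labels) >= BORDERLINE_THRESHOLD:
        return base_weight * RECOMMENDATION_PENALTY
    return base_weight

# Example: three of four hypothetical raters flag a video as borderline,
# so its recommendation weight is halved and it surfaces less often.
print(adjusted_rec_weight(1.0, [1.0, 1.0, 1.0, 0.0]))  # -> 0.5
print(adjusted_rec_weight(1.0, [0.0, 0.0, 1.0, 0.0]))  # -> 1.0
```

The point the sketch makes plain is that, under this kind of scheme, suppression hinges entirely on what the rater pool decides rather than on anything the viewer has done.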

In either case, instead of the organic recommendation engine surfacing videos based on what you like (or who you subscribe to), those decisions are being made based on ideological preferences or determinations handed down by YouTube’s “raters”.

More pointedly, this means that content rated “borderline” by the raters is being pushed out of the recommendation feeds. In turn, content that would typically appear in the recommended bar no longer appears there – in effect a form of censorship, or what we call a soft shadow ban.

YouTuber Weaponized Nerd Rage did a short video examining the impact of this new feature on potential viewership and user acquisition.

Right now it’s hard to get a proper gauge on how impactful this will be on certain channels.

YouTube wasn’t very forthcoming about the exact channels hit by this new feature, and so far, given all the other changes the platform has made, people have mostly been focused on trying to stabilize their channels rather than worrying about this new kind of suppression technique.

That’s not to mention that every YouTuber that wasn’t a mainstream media source was negatively impacted by YouTube’s focus on its “authoritative” algorithm, which saw the platform giving more prominence and promotion to mainstream media outlets and corporations via the trending tab, as revealed in a detailed report by Coffee Break.

YouTubers have also had to deal with the VoxAdPocalypse, which resulted in a number of channels being fully demonetized or terminated.

I imagine that as the shadow ban on the recommendation feature continues rolling out to more regions around the globe, we’ll be able to better gauge its impact on various channels and how it affects video discovery across the platform.

(Thanks for the news tip, Weaponized Nerd Rage)
