We previously wrote about Disqus’ plans to “improve” the user experience by moderating comments on publisher platforms more proactively. They’ve decided to take it a step further by introducing Disqus users to some of the new features set to roll out soon, including tools to shadowban users just as Reddit and Twitter do, and machine-learning detection for what they deem to be “toxic content”.
On the site, Kim Rohrer explains that…
“The Disqus Platform supports a diversity of websites and discussions; with such a large network of publishers and commenters, having a policy against hateful, toxic content is critical. While we do periodically remove toxic communities that consistently violate our Terms and Policies, we know that this alone is not a solution to toxicity. Oftentimes these communities simply shift to another platform. Ultimately, this does not result in higher quality discussions, and it does not stop the hate. In order to have a real, lasting impact, we need to make improvements to our product. Which is why, if at all possible, we work with publishers to encourage discourse (even unpopular or controversial discourse!) while helping to eliminate toxic language, harassment, and hate.”
In order to take measures against entire communities, Disqus will be implementing a form that can be used to tattletale on a website or channel.
You can see what the form looks like over on the TOS Violation Submission site.
If a site is caught violating Disqus’ terms of service, Disqus may warn the publisher, ask the site owner to “implement moderation improvements”, or temporarily ban the site from using the service.
They say this is not only done to improve the community but also to appease advertisers…
“[…] we know that advertisers do not want their content to display next to toxic comments. Leveraging our moderation technology, we will provide more protection for advertisers, giving them more control over where they display their content.“
While the machine-learning algorithm for detecting “toxic” content struck some as bothersome, the one thing that really set off a lot of people was the open admission to using shadow banning. Rohrer explains…
“We’re working on two features right now, Shadow banning and Timeouts, that will give publishers more options for managing their communities. Shadow banning lets moderators ban users discreetly by making a troublesome user’s comments only visible to that user. Timeouts give moderators the ability to warn and temporarily ban a user who is exhibiting toxic behavior.”
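Disqus hasn’t published how its shadow banning is implemented, but the description above – a troublesome user’s comments remain visible only to that user – maps to a simple per-viewer filter. Here’s a minimal, hypothetical sketch of that idea; all names are illustrative, not Disqus’ actual API:

```python
# Hypothetical sketch of a shadowban filter: a shadowbanned user's
# comments are hidden from everyone except that user, who sees the
# thread as if nothing happened. Not Disqus' actual implementation.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

def visible_comments(comments, viewer, shadowbanned):
    """Return only the comments the given viewer is allowed to see."""
    return [
        c for c in comments
        if c.author not in shadowbanned or c.author == viewer
    ]

comments = [
    Comment("alice", "Great article!"),
    Comment("troll", "Some toxic remark"),
    Comment("bob", "Interesting point."),
]
banned = {"troll"}

# The shadowbanned user still sees all three comments...
print(len(visible_comments(comments, "troll", banned)))  # 3
# ...while everyone else only sees two, with no ban notice shown.
print(len(visible_comments(comments, "alice", banned)))  # 2
```

The discreetness is the whole point: because the filter is applied per viewer rather than deleting anything, the banned user gets no signal that they’ve been silenced.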
Reddit and Twitter already do the same thing. In fact, Twitter has been proud of its shadow banning, openly admitting it uses shadowbans to fragment discussion.
(Embedded tweet from William Usher, @WilliamUsherGB, April 5, 2017.)
The funny thing about it is that all of this totalitarianism has reportedly affected the mental health of the average American, with all this political tension noted to have raised stress levels in America.
The other funny thing about it is that when some people in the comment section on the Disqus site commented that this measure seems like another step toward censorship… to no one’s surprise, the comment was censored.
Some people are thankful to have more tools for moderation. Others are leery as to where this new position of authority that Disqus has bestowed upon itself will lead. Some see it as a way for Disqus to finally curb communication on sites that espouse opinions and discussions that are “unpopular” with the SJW types.