In light of the Christchurch, New Zealand shooting that took place on March 15th, 2019, Microsoft announced that it wants to work with other tech giants to suppress and censor any kind of “violent” or “toxic” content that they feel shouldn’t be spread, shared, or discussed online.
In a blog post on the Microsoft website, Microsoft president Brad Smith explained…
“Words alone are not enough. Across the tech sector, we need to do more. Especially for those of us who operate social networks or digital communications tools or platforms that were used to amplify the violence, it’s clear that we need to learn from and take new action based on what happened in Christchurch.”
Smith advocated for improving existing technology to better identify and classify extremist or violent content so that it can be censored and removed from the internet, creating a blackout on the source material as quickly as possible.
Smith goes into more detail on how this would be accomplished: the tech giants would effectively become technocratic arbiters of content curation, with all recorded and shared content identified by machine-learning AI systems and quickly barred if it’s determined to be “extremist” or “violent”. Smith writes…
“First, we need to focus on prevention. We need to take new steps to stop perpetrators from posting and sharing acts of violence against innocent people. New and more powerful technology tools can contribute even more than they have already. We must work across the industry to continue advancing existing technologies, like PhotoDNA, that identify and apply digital hashes (a kind of digital identifier) to known violent content. We must also continue to improve upon newer, AI-based technologies that can detect whether brand-new content may contain violence. These technologies can enable us more granularly to improve the ability to remove violent video content. For example, while robust hashing technologies allow automated tools to detect additional copies already flagged as violent, we need to further advance technology to better identify and catch edited versions of the same video.”
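The quote contrasts hash matching against known content with the harder problem of catching edited copies. Here’s a minimal, hypothetical sketch of that distinction in Python — not PhotoDNA itself, whose actual algorithm is proprietary — using a cryptographic hash (which breaks under any edit) versus a toy perceptual “average hash” (which survives a small brightness change):

```python
import hashlib

def exact_hash(data: bytes) -> str:
    # Cryptographic fingerprint: changing even one byte yields a completely different hash.
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels):
    # Toy perceptual hash ("aHash"): mark each pixel as above/below the frame's mean brightness.
    # `pixels` is a small 2D grid of grayscale values, standing in for a downsampled video frame.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a: str, b: str) -> int:
    # Bits that differ between two perceptual hashes; a small distance suggests the same content.
    return sum(x != y for x, y in zip(a, b))

# An 8x8 "frame" and a lightly edited copy (uniform brightness boost).
original = [[10 * r + c for c in range(8)] for r in range(8)]
edited = [[v + 3 for v in row] for row in original]

# The exact hashes no longer match after the edit...
print(exact_hash(bytes(sum(original, []))) != exact_hash(bytes(sum(edited, []))))  # True

# ...but the perceptual hashes are still identical (distance 0), so the copy is flagged.
print(hamming(average_hash(original), average_hash(edited)))  # 0
```

This is why Smith distinguishes “robust hashing technologies” from further work on edited versions: exact fingerprints are trivially defeated by re-encoding or trimming, while perceptual fingerprints tolerate small edits at the cost of occasional false matches.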
But it doesn’t end there.
Microsoft also wants to pool resources with other tech giants, creating a globalized, one-tech-nation command center from which they could initiate a protocol to remotely lock down content deemed “violent” or “extreme” by the AI, a little like how the Architect in The Matrix was able to reconfigure events or influence outcomes through the mainframe without anyone even knowing what was going on. Various YouTubers, such as Arch Warhammer, discussed this possibility well before Microsoft announced its plans. Smith continues…
“Second, we need to respond more effectively to moments of crisis. Even with better progress, we cannot afford to assume that there will never be another tragedy. The tech sector should consider creating a “major event” protocol, in which technology companies would work from a joint virtual command center during a major incident. This would enable all of us to share information more quickly and directly, helping each platform and service to move more proactively, while simultaneously ensuring that we avoid restricting communications that are in the public interest, such as reporting from news organizations.”
Now this is a key thing to keep in mind here. Smith is saying that the general public should not be allowed to spread, share, discuss, or access this information independently of what the AI allows them to access, nor should they be allowed to disseminate the information or content outside of what’s permitted on major mainstream news organizations, much like how New Zealand blocks public access to the shooter’s manifesto and video while mainstream media is allowed to share and discuss their contents.
The last bit focuses on Microsoft’s aim to work with other companies to fight against increasingly “toxic” discourse. Smith writes…
“Finally, we should work to foster a healthier online environment more broadly. As many have noted, while much of the focus in recent days rightly has been on the use of digital tools to amplify this violence, the language of hate has existed for decades and even centuries. Nonetheless, digital discourse is sometimes increasingly toxic. There are too many days when online commentary brings out the worst in people. While there’s obviously a big leap from hateful speech to an armed attack, it doesn’t help when online interaction normalizes in cyberspace standards of behavior that almost all of us would consider unacceptable in the real world.”
So here, Smith is talking about big tech working together to censor and stifle what they determine to be “hate speech”.
Keep in mind that hate speech is protected by the First Amendment in America. Speech that incites violence is considered unlawful, however. But that isn’t what Smith is talking about. He’s talking about “language of hate” and “toxic” discourse as determined by the likes of Microsoft, Google, Facebook, or Twitter. As we can already see in the case of Twitter, censorship and policy enforcement oftentimes moves in one direction: against people who don’t agree with Leftist politics.
We usually see appeals for civility used as the excuse to increase censorship of the internet. If Microsoft’s plan does go through and they manage to coalesce with Facebook, Twitter, and Google on stricter content-sharing policies and digital social interactions, then they’ll basically be carrying out exactly what the Christchurch shooter wanted. He specifically stated in his manifesto that his plan was to accelerate censorship and gun control policies in order to sow discord and division among politically demarcated groups, in hopes of eventually inciting a civil race war.
(Thanks for the news tip s_fnx)