During the last week, Apple, Google, and AWS took steps to ensure that Parler was made inaccessible. Likewise, YouTube temporarily removed TalkRADIO’s channel because the UK radio station questioned “expert” advice on COVID-19. TalkRADIO’s content is already regulated by Ofcom, the UK’s broadcasting regulatory body, raising questions as to why YouTube believes it should censor content already subject to government regulations.
Over the past few years, private companies and other organisations — banks, payment processors, universities, social media companies, etc. — have taken on a role in censorship by deplatforming people or revoking services. Whilst freedom of speech laws often prevent governments from censoring citizens, no such laws compel private companies and organisations to provide platforms to people. Likewise, most companies cannot be forced to offer services, and most can revoke them (e.g., close a bank account) without legal repercussions.
At its heart, the argument for free speech on private companies’ platforms is a clash of two rights: people’s right to freedom of speech vs. companies’ right to freedom of association. Free speech is typically framed in terms of governments being able to regulate speech; however, free speech is nothing without a strong cultural element: one must resist the gut instinct to silence others in order to uphold freedom of speech, and freedom of speech is nothing without the freedom to offend. But why would private companies want to platform extremely offensive language?
The severe cultural pressure on Apple and Google to remove Parler from their respective app stores sets a dangerous precedent that will only get worse. Both Apple and Google have removed plenty of apps from their stores in the past, for various reasons. However, removing a social media app — the attempted silencing of millions — is not the same as removing an app over quality concerns or fraudulent claims.
Danger for Secure Messaging Apps?
Governments have been looking for a means to ban — or backdoor — secure messaging apps since PGP. Arguments are typically framed in terms of “your safety” against criminals and terrorists. The argument for banning Parler was to ensure public safety by stopping alleged incitement of violence. Does the argument sound familiar?
Secure messaging apps remain a bastion of free speech between a limited group of people, and many secure messaging apps have been used to organise political protests against governments. Politicians — despite using these apps themselves — have already called for secure messaging apps to be banned.
Moreover, claims of misinformation on secure messaging apps give politicians another angle to attack digital privacy.
Apple and Google have mixed records when it comes to digital privacy. Both were named in the Snowden revelations as NSA partners. However, Google engineers were reportedly furious to learn that the NSA was snooping on unencrypted traffic between their datacentres, and Apple famously refused to unlock an iPhone for the FBI. Despite banning Parler, both are currently unlikely to ban secure messaging apps… but what happens in the future when a messaging app is used to allegedly incite violence?
Politicians have made significant inroads into abusing the digital privacy of citizens, although calls to directly regulate or backdoor secure messaging apps have, to date, failed.
With the increased cultural focus on censorship and misinformation, expect politicians to change tack. It wouldn’t surprise me if overzealous politicians now begin using the language of “incitement” and “misinformation” to target secure messaging apps.