
US platforms have recently been removing content: that's good, but it is not the solution

For years, platform operators such as Facebook, Twitter and Google were reluctant to take on social responsibility. When it came to hatred on social networks, they invariably invoked an American notion of free speech: anyone may say anything, as long as the vaguely worded community standards are not violated.

This led to absurd situations: while certain forms of nudity drew sanctions, incitement against minorities went unpunished.

There was a system behind all of this, because the business model of ad-funded platforms is to keep users on the screen for as long as possible in order to show them as many ads as possible. And that works best with loud, shrill content.

YouTube, for example, is known for a recommendation algorithm that serves users increasingly radical videos over time. Viewers disappear down the “rabbit hole”: each new video recommendation leads deeper into the dark corridors of extremist thought. Facebook, too, has for years given extreme opinions extra visibility, provided they provoke enough interaction.

Platform operators are beginning to grasp the collateral damage

But a turning point slowly seems to be emerging. In the past week, YouTube removed tens of thousands of videos by followers of the “QAnon” conspiracy ideology. For a few days now, Facebook has barred its users from denying the Holocaust and no longer sells advertising space for anti-vaccination messaging. Apparently the platform operators have understood that their ever-faster interaction machine could cause incalculable damage in the heated political climate of 2020.


Occasionally, however, platform operators stray into areas where intervening in the formation of political opinion can have delicate consequences. Facebook and Twitter, for example, throttled the spread of a report by the tabloid “New York Post” about alleged revelations concerning Joe and Hunter Biden. In Facebook's case, this happened even before the fact check was complete. The stated reason was fear of election manipulation.

The report on the Biden family had glaring argumentative weaknesses. But it also came from the newsroom of a newspaper that does its own reporting. When platforms set themselves up as judges of what is true and false in journalism, is that really the form of oversight one should wish for ahead of the upcoming presidential election?

The platforms must be able to justify their decisions

Social networks have gained enormous social importance over the past ten years. They now resemble power grid operators: Facebook, Twitter and Google provide critical infrastructure for political exchange. As such, they also bear responsibility for ensuring that their systems operate free from interference and discrimination.

Electricity grid operators, for example, must prove that they protect their systems against cyberattacks. Likewise, social networks should establish concrete rules for how they deal with election manipulation. In the “New York Post” case, Twitter responded in exemplary fashion and was able to cite the reasons the article could not be shared. Facebook, when asked, could say nothing.

For the infrastructure of political exchange to function smoothly during election campaigns, it must also be ensured that hate speech is not given preferential reach for the sake of short-term profit. Instead of merely enforcing the banal and obvious, such as the ban on Holocaust denial, platform operators should be required to disclose their algorithms. Only then will we find out how serious social networks really are about their social responsibility, and whether their business model has long since become a threat to democracy.
