Hans-Bredow-Institut

Amélie Heldt studied French and German law at the universities of Paris Nanterre and Potsdam.
After passing the German first state examination, she took a supplementary training programme in Design Thinking at the Hasso-Plattner-Institut in Potsdam and worked in the legal department of Universal Music. She completed her two-year clerkship at the Superior Court of Justice in Berlin, working inter alia for the Berlin Opera Foundation, in the media and copyright practice of the law firm Raue LLP, and for the GIZ in Cambodia. Since May 2017, she has been a junior researcher in the research programme “Transformation of Public Communication” at the Hans-Bredow-Institut and a PhD candidate supervised by Prof. Wolfgang Schulz. She is also an associated researcher at the Humboldt Institute for Internet and Society in Berlin.
“Upload-filters: bypassing classical concepts of censorship?”
Keywords: freedom of expression, censorship, democratic legitimation, upload-filters, prior restraints

Protecting human rights from automated decision-making might not be limited to the relationship between intermediaries and their users. Under the European human rights framework, fundamental rights are in principle only applicable vertically, i.e. between the State and the citizen. Where does that leave the right to freedom of expression when user-generated content is deleted by intermediaries due to an agreement with a public authority? We must address this question in light of the use of AI to moderate online speech and the (so far lacking) regulatory framework for it.
In 2017, there were important changes regarding the use of upload-filters in the EU to prevent the spread of terrorist and extremist content online. Via a code of conduct adopted in the context of the EU Internet Forum, four companies (Facebook, Twitter, YouTube and Microsoft) committed themselves to hashing such content and sharing it in a common “Database of Hashes”.
Considering that upload-filters 1) operate before user-generated content is published (unlike re-upload-filters) and 2) screen all content regardless of suspicion, the potential risks for free speech are very high. It is therefore necessary to analyze whether this type of action can be subsumed under the notion of censorship and whether its categorization as public or private censorship is still appropriate.
One could argue that if the EU Commission (or another public stakeholder) pushes private IT companies to use AI to filter and delete user-generated content, the censorship could indirectly be state-driven. The detour via a code of conduct makes it possible to delete legal content, even though censorship is clearly forbidden by Art. 5 of the German Basic Law (in contrast to Art. 10 ECHR, under which prior restraints are not forbidden per se). In German constitutional law, the interpretation of censorship is limited to state action, and scholars are not inclined to widen its scope of application. However, by using soft law instruments to engage intermediaries to deploy upload-filters, the State could potentially bypass the prohibition of censorship.
How does the constitutional protection of fundamental rights relate to the use of upload-filters on digital platforms when soft law instruments are being used? Is it an “unholy alliance” (Birnhack/Elkin-Koren, 2003) or a necessary cooperation to govern the digital environment? How up-to-date is our definition of state action when it comes to online communication? And how relevant is it that the four companies involved are the biggest players on the market? This leads to the fundamental question of whether we need a clear legal basis for the use of intelligent filters or other types of AI in digital communication.