Alexandra Giannopoulou

IViR, University of Amsterdam

Alexandra Giannopoulou is a postdoctoral researcher at the Blockchain and Society Policy Lab at the Institute for Information Law (IViR), University of Amsterdam. She is an associate researcher in the Information and Commons Governance group at the Institute for Communication Sciences (ISCC) in Paris, and she has also worked as a research fellow at the Humboldt Institute for Internet and Society (HIIG) in Berlin.

She holds a PhD in law from the Center for Legal and Economic Studies of Multimedia (CEJEM) at the University of Paris II Panthéon-Assas. Her thesis, entitled "The Creative Commons licenses" and supervised by Professor Jérôme Huet, was defended in December 2016.

Algorithmic systems: is the consent in the details?
The exponential growth of algorithmic data processing has put applications of advances in artificial intelligence at the forefront of the European Commission's agenda for the digital single market. The broad vision for algorithmic technologies is to reengineer current power structures and to create fair, transparent decision-making systems whose impact will improve society as a whole. The development of algorithmic systems is largely data-driven, and with this vision therefore come regulatory and legal challenges. As machine learning processes and the use of artificial intelligence (AI) continue to evolve, legal and computer science scholars are expressing concerns about the consequences of creating data-intensive information societies.

Within the legal field, scholars have explored questions related to data ownership, privacy, and copyright, and their role in algorithmic technologies. However, the challenges posed by efforts to implement appropriate consent mechanisms for the ubiquitous use of data in everyday transactions and in decision-making processes affecting individuals have yet to be resolved.

Despite extensive research on the role of consent in data protection and privacy in general, the particular nature of informed (and explicit) consent in the context of mass data processing and of "black box" algorithms has not been fully explored. Recently, the urgency of reevaluating the conditions of consent in AI was underlined in the 2017 report of the AI Now Institute at New York University. According to the Institute's researchers, "adaptive algorithms are changing constantly, such that even the designers who created them cannot fully explain the results they generate (…) We must ask how 'notice and consent' is possible or what it would mean to have 'access to your data' or to 'control your data' when so much is unknown or in flux." The urgency of reviewing the current consent rules has also been highlighted in the Article 29 Working Party's action plan for 2017.

Consent constitutes one of the cornerstones of data protection regulation, functioning as an expression of individual autonomy and of privacy self-management. The challenge emerging from algorithmic technologies is the growing dissonance between notice-and-consent regulation and the production, collection, and processing of data. While existing data protection rules and the General Data Protection Regulation (GDPR) account for human consent, the diversity of personal data leaves ample room for doubt. For example, it remains unclear whether data derived from already existing input data can be qualified as personal, and what consent mechanisms are required in that case. What is more, consent to the input data used in automated decision-making becomes illusory when the functioning of the algorithmic model remains elusive to the affected individuals.

The presentation will explore the challenges that algorithmic systems pose to the traditional approach to consent. Safeguarding privacy in the algorithmic era is interlinked with the lack of foresight in the mechanisms of machine learning and AI. The evolution of digital consent mechanisms towards automated consent will be explored in light of current policies surrounding the legibility requirement and the application of the right to an explanation, as well as granular solutions that go beyond privacy self-management and include accountability of the state and of technology producers, in order to reshape the consent requirement in the current technological discourse.

A Campolo et al. (2017), AI Now Institute 2017 Report. Available online; last accessed 31 May 2018
L Edwards and M Veale (2017), "Enslaving the algorithm: from a 'right to an explanation' to a 'right to better decisions'?", research paper. Available online; last accessed 31 May 2018
P Hacker (2017), "Personal data, exploitative contracts, and algorithmic fairness: autonomous vehicles meet the Internet of things", International Data Privacy Law, doi:10.1093/idpl/ipx014
B Hugenholtz (2017), "Data Property: Unwelcome Guest in the House of IP", Better Regulation for Copyright, pp 65-77; forthcoming in Kritika: Essays on Intellectual Property, Vol. III. Available online; last accessed 31 May 2018
M L Jones, E Edenberg and E Kaufman (2018), "AI and the Ethics of Automating Consent", IEEE Security & Privacy, forthcoming
D Kamarinou, C Millard and J Singh (2016), "Machine learning with personal data", Queen Mary School of Law Legal Studies Research Paper No 247/2016. Available online; last accessed 31 May 2018
A Levendowski (2017), "How copyright law can fix artificial intelligence's implicit bias problem", Washington Law Review, forthcoming. Available online; last accessed 25 Oct 2017
G Malgieri and B Custers (2017), "Pricing privacy – the right to know the value of your personal data", Computer Law and Security Review: The International Journal of Technology Law and Practice, doi:10.1016/j.clsr.2017.08.006
M Perel and N Elkin-Koren (2017), "Black box tinkering: Beyond disclosure in algorithmic enforcement", Florida Law Review, 69:181-221
A Ramalho (2017), "Data producer's right: Powers, Perils and Pitfalls", Better Regulation for Copyright, pp 51-58. Available online; last accessed 31 May 2018
E Sedenberg and A L Hoffmann (2016), "Recovering the History of Informed Consent for Data Science and Internet Industry Research Ethics", research paper. Available online; last accessed 31 May 2018
T Wan Kim and B Routledge (2017), "Algorithmic transparency, a right to explanation and trust", working paper. Available online; last accessed 31 May 2018