Aviva de Groot

Tilburg Institute for Law, Technology, and Society (TILT).

Aviva de Groot aims to identify explanatory benchmarks and modalities for providing lay users with rights-relevant understanding of data-driven technologies and their applications. In particular, she focuses on automated decision-making processes. With privacy at the core, her interests more broadly concern humans and technology, their mutual shaping, and how this affects our understanding of human rights protections. She has professional and research experience in fields where technology supports human interaction and where humans interact with machines.

In dumber times, the transformation of citizens' reality into their administrative truth already proved to be a tricky process. Years of professional legal experience in a specialized field of administrative law, dealing with phantom vehicle registrations, taught her how the complex interplay of law, policy, and technology should discourage a narrow focus on any one element. Her Information Law master's thesis research dealt with privacy concerns arising from the use of social robots in health care settings, focusing in particular on levels of transparency and understanding in human-robot interaction.

Publications:
"Nothing comes between my robot and me" - co-author, Conference proceedings CPDP 2018, Hart Publishing (forthcoming).
"The Handbook of Privacy Studies" - co-editor, Amsterdam University Press, October 2018.

Abstract:
To explain automated decisions, we need human rationales for explanations.
This paper responds to the debate on seeking explanations of decisions made with, or supported by, artificial intelligence. Europe's General Data Protection Regulation has fueled much of this discussion, as it directly seeks to regulate the practice. The regulation demands transparent processing of personal data and, arguably, explanations of individual decisions made with 'solely' automated means. In the words of the Article 29 Working Party guidelines on automated decision making and profiling: "subjects will only be able to challenge a decision or express their view if they fully understand how it has been made and on what basis."17

The discussion, however, is not confined to this legal instrument, nor to its territory. The problematisation of 'decision making 2.0' lies in the perceived decline of meaningful, intelligent human interaction with these systems. The more autonomously a system 'learns,' the harder it becomes for humans to evaluate the output models and underlying correlations in terms of their meaning, merits, and usability.18 The computational logic increasingly departs from that of human reasoning, and depending on what form of transparency we demand, explanations of decisions may produce answers that we are not able to follow logically, intuitively, or both.19

However, taking these technologies, their affordances, and their perceived resistance to explainability as the starting point of enquiry, while casting them as "the agential character[s] in the drama"20 of perceived data-driven harms, gets legal and social scholars caught up in the work and grammar of system designers. Doubts about the usefulness of transparency easily follow. Instead of deploring inscrutable machines, it seems more productive to define what we demand to understand about machines' analyses and, importantly, how to serve decision subjects' need to understand these explanations.

In different legal fields, we have rules that protect our human rights and freedoms from opaque (human) decision making. Analyzing our rationales for 'giving reasons' and building a broader theoretical framework can support the demands we make of the field of system design and operation.21 When are we legally satisfied with answers? Our mental constraints on processing information mean that we sometimes 'satisfice,'22 to the point where this leads to legal fictions. How do we account for that? Also, information is not detachable from communication: a thorough exploration of the values of explanations takes into account the "epistemic and ethical norms" for adequate communication.23
As a first step towards building such a framework, this paper analyses general legal theory and human rights law and scholarship for the values of explanations per se. As various scholars note, giving reasons in itself shows respect for a decision subject's dignity, autonomy, and personhood,24 and in doing so supports their own decisions and life choices. Why is that, and when, and how are these values considered to be served? These functions are well established in privacy literature and in rulings of the European Court of Human Rights, which in turn inform the EU's data protection regime. This analysis will support and add to the later stages of the project, when specific legal domains are researched.


17 Article 29 Data Protection Working Party, “Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679 WP 251rev.1.”
18 Rich Caruana et al., “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission,” in KDD 2015 (Sydney, NSW, Australia: ACM, 2015), https://www.microsoft.com/en-us/research/publication/intelligible-models-healthcare-predicting-pneumonia-risk-hospital-30-day-readmission.
19 Andrew D. Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines (Draft),” Fordham Law Review, Forthcoming, February 19, 2018, https://papers.ssrn.com/abstract=3126971 p.13, 46.
20 Daniel Neyland and Norma Möllers, “Algorithmic IF … THEN Rules and the Conditions and Consequences of Power,” Information, Communication & Society 20, no. 1 (January 2, 2017): 45–62, https://doi.org/10.1080/1369118X.2016.1156141.
21 The recent paper by Doshi-Velez and Kortz acknowledges the value of such an analysis, but moves quickly past this enterprise to how the findings translate to the digital realm. Finale Doshi-Velez et al., “Accountability of AI Under the Law: The Role of Explanation” (Berkman Center Research Publication, forthcoming, November 2017).
22 Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, May 30, 2015), https://papers.ssrn.com/abstract=2609777.
23 Onora O’Neill, “Transparency and the Ethics of Communication,” in Transparency: The Key to Better Governance?, ed. Christopher Hood and David Heald (Oxford, UK: Oxford University Press, 2011).
24 Katherine Strandburg, “Decision-Making, Machine Learning and the Value of Explanation,” December 16, 2016, http://www.dsi.unive.it/HUML2016/assets/Slides/Talk%202.pdf; Antoinette Rouvroy, “‘Of Data and Men’. Fundamental Rights and Freedoms in a World of Big Data.,” Council of Europe, Directorate General of Human Rights and Rule of Law. T-PD-BUR(2015)09REV (2016); Lilian Mitrou, “The General Data Protection Regulation: A Law for the Digital Age?,” in EU Internet Law: Regulation and Enforcement, ed. Tatiana-Eleni Synodinou et al. (Springer International Publishing, 2017), 19–56; Paul Schwartz, “Data Processing and Government Administration: The Failure of the American Legal Response to the Computer,” Hastings L.J. 43 (1991): 1321; Andrew D. Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines (Draft),” Fordham Law Review, Forthcoming, February 19, 2018, https://papers.ssrn.com/abstract=3126971.