Eduard Fosch-Villaronga

Center for Commercial Law Studies, Leiden University

Dr. Eduard Fosch-Villaronga is a Marie Skłodowska-Curie Postdoctoral Fellow at the eLaw Center for Law and Digital Technologies of Leiden University in the Netherlands. He is co-leader of the Ethical, Legal and Societal Issues Working Group of the European COST Action CA16116 on wearables. Previously, he was a researcher at the Microsoft Cloud Computing Research Center at Queen Mary University of London in the UK and at the Law, Governance, and Technology Department at the University of Twente in the Netherlands. Eduard holds an Erasmus Mundus Joint Doctorate (EMJD) in Law, Science, and Technology, in which he addressed the legal and ethical implications of the use and development of healthcare robot technology. Beyond the mobility plan of his Ph.D. program (University of Bologna, Italy; University of Turin, Italy; Mykolas Romeris University, Lithuania; Leibniz University Hannover, Germany; Autonomous University of Barcelona, Spain; and University of Pittsburgh, USA), Eduard has held visiting research positions at the Center for Engineering Education and Outreach (CEEO) at Tufts University in the United States and at the Laboratoire de Systèmes Robotiques at EPFL in Lausanne, Switzerland. He also holds degrees from the University of Toulouse (LL.M.), the Autonomous University of Madrid (M.A.), and the Autonomous University of Barcelona (LL.B.).

How transparent is your AI? The GDPR’s transparency requirement in the age of robots and AI

Artificial intelligence (AI) capabilities are growing rapidly, and with them robotic systems that are connected to the cloud. Cloud computing encompasses various deployment models and involves multiple service layers and service providers, with supply chains that are often opaque. This complexity and lack of transparency are exacerbated when service-oriented architectures and cloud computing are applied to physical devices, a model also known as “Robot as a Service” (RaaS). Because transparency is now a fundamental principle for data processing under the General Data Protection Regulation (GDPR), we explore how robot and AI systems may comply with this requirement. We address this topic from a legal, technical, and user perspective.

To understand transparency in the context of robots and AI, it is necessary to first investigate the ratio legis of this legal requirement. In particular, we analyze the conciseness, intelligibility, accessibility, and plainness required by Article 12 GDPR, in connection with Articles 13 and 14 GDPR. Second, we will discuss the complex nature of AI and cyber-physical ecosystems and the challenges this raises for meeting the transparency requirements of the GDPR. To illustrate these practical challenges, we will explore different examples of AI and RaaS employed at home, in schools, or in public spaces (e.g., the companion robot Buddy and the delivery robots of Starship Technologies). While the law clearly states which information data controllers have to provide to data subjects when they collect and process their data, the technology-agnostic nature of the regulation not only fails to provide sufficient guidance but even impedes understanding of how this right could be ensured in AI environments. Third, we will address the question of user needs and expectations regarding the transparency of the system, especially in light of the diversity of target user characteristics, including vulnerable groups such as users of assistive robot and AI technologies.

To address these transparency challenges, the paper will explore several technical ways to facilitate fulfillment of the transparency provision. Suggested approaches include both preventive solutions, such as the formalization of the GDPR in RuleML for transparency purposes or the certification of privacy compliance (e.g., via the European Privacy Seal), and reactive solutions, for instance providing direct feedback to the user, explaining ambiguities, or guiding the user implicitly on how to interact with the robot instead of explicitly describing its capabilities outside of the use context. Yet we argue that these solutions might nevertheless have to be complemented with the development of an algorithmic impact assessment and cross-disciplinary educational mechanisms.
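The idea of formalizing the GDPR's information duties as machine-checkable rules can be illustrated with a minimal, hypothetical sketch. The snippet below treats a simplified subset of the Article 13(1) disclosure duties as a rule set and checks a robot's privacy notice against it; all class names, field names, and the selection of duties are illustrative assumptions, and an actual RuleML encoding would be an XML rulebase far richer than this.

```python
# Hypothetical sketch: a simplified subset of the GDPR Art. 13(1)
# information duties encoded as machine-checkable rules. Names and the
# selection of duties are illustrative only, not any real library's API.
from dataclasses import dataclass, field

# Disclosures a controller must provide when collecting personal data
# (simplified selection for illustration).
REQUIRED_DISCLOSURES = {
    "controller_identity",   # Art. 13(1)(a)
    "processing_purposes",   # Art. 13(1)(c)
    "legal_basis",           # Art. 13(1)(c)
    "recipients",            # Art. 13(1)(e)
}

@dataclass
class PrivacyNotice:
    """Disclosures a robot or AI service presents to the data subject."""
    disclosures: set = field(default_factory=set)

def transparency_gaps(notice: PrivacyNotice) -> set:
    """Return the required disclosures the notice fails to provide."""
    return REQUIRED_DISCLOSURES - notice.disclosures

# Example: a delivery robot's notice that omits the data recipients.
notice = PrivacyNotice(disclosures={
    "controller_identity", "processing_purposes", "legal_basis",
})
print(sorted(transparency_gaps(notice)))  # -> ['recipients']
```

Such a checker could support the preventive approach sketched above: a notice with an empty gap set passes, while any missing item can be flagged to the controller before deployment.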

Taking the requirement of transparency seriously also raises broader ethical and social considerations, such as to whom and to what extent robots and AI systems should be transparent. Since assistive technologies are frequently targeted at children, individuals with disabilities, or older people, who may not fully understand transparency explanations and might not be able to challenge the underlying technology, we propose a differentiated approach that does justice to different system characteristics, user needs, and user interests. In particular, we suggest following a purpose-limitation and risk-reduction approach.