Yonatan Wexler

Yonatan Wexler, OrCam

Dr. Yonatan Wexler is an award-winning researcher in the field of Computer Vision who has conducted research at the University of Maryland, Oxford University, the Weizmann Institute of Science, and Microsoft. His focus is on efficient use of visual information to enable exciting new abilities.

Yonatan currently leads the R&D team at OrCam, a company that has developed a unique device for people who are blind and visually impaired. OrCam’s mission is to harness the power of artificial vision by incorporating pioneering technology into a wearable platform which improves the lives of people who are blind and visually impaired.

Tal Eden

Tal Eden, Sheba Innovation Center

Tal works as the Chief Data Scientist at the Sheba Innovation Center. In this capacity he leads projects in the fields of big data and AI, utilizing the full scale of the hospital's data.
Besides being an expert data scientist, Tal is also a licensed clinical psychologist and is currently pursuing his PhD at the University of Haifa, under the supervision of Prof. Tomer Shechner, researching the connection between smartphone usage and psychological well-being.

Tal has years of industry experience as an algorithms designer and as a data scientist. He has worked in global corporations as well as in cutting-edge start-up companies. In the past several years he has been following his passion for introducing AI and big data R&D into the fields of medicine and psychology.

Lital Helman

Lital Helman, Ono Academic College Faculty of Law

Dr. Lital Helman is a Senior Lecturer at the Ono Academic College Faculty of Law in Israel. Her research focuses on Intellectual Property Law and Law and Technology. Her articles have been published in leading journals around the world. Dr. Helman holds an S.J.D. (Doctor of Juridical Science) degree from the University of Pennsylvania Law School, which she pursued as a J. William Fulbright scholar. She also served as a Fellow with the Kernochan Center at Columbia University Law School and with the Engelberg Center at New York University School of Law. Prior to her legal studies, Dr. Helman taught C++ courses at Hacker Software. She is also a co-founder and board member of GradTrain.com, an AI-driven platform for international students.

Abstract:
Decentralized Patent Systems
Patent law is the paradigmatic case for a system in which effectiveness depends not only on the substantive law in place but also—and even more so—on establishing a structure that supports good decisionmaking on a case-by-case basis.

Currently, the patent system features a centralized structure from beginning to end. The United States Patent and Trademark Office (PTO) is solely responsible for examining inventions for patentability, as well as for publishing the granted patents to the public, and increasingly, for managing post-grant proceedings. Is a centralized structure optimal for patent decisionmaking? This Paper argues that the answer is almost certainly negative. The centralized nature of the patent system results in inefficiency of the patent examination process, high error rates in the examination outputs, and slow introduction of new technology to the public. These are critical issues that hinder the very purpose of the patent system.

This paper posits that the patent system would function better in a decentralized structure. It begins with a theoretical analysis of the costs and benefits of centralized and decentralized systems and outlines the criteria for selecting between the two. It then shows that better results are forthcoming for the patent system in a decentralized structure.

What would a decentralized patent system look like? I propose to build on new technological developments in the areas of both blockchain and Artificial Intelligence (AI). Inventors would submit patent applications to a shared patent record on a blockchain (or another architecture), instead of filing them with the PTO. After a grace period, in which inventions remain secret, patent applications would be reviewed by scientists for patentability, with the help of AI technology. With time, more responsibility would shift from examiners to the AI to determine patentability. Following the review, patents would be available to the public on the decentralized record.

A decentralized model fosters a strategy focused on diversity, information gathering and efficiency. This model would boost productivity and reduce the error rate in the patent examination process. The system would also create beneficial spillovers by exposing industry scientists to new ideas early in their conception and would thus boost progress and innovation. Further, the decentralization of the patent record would dramatically improve the effectiveness of the record, by including in the record robust information, use cases, and self-executable smart contracts. Finally, the system would implement technology that would simplify and facilitate the process of patent registration. As a result of all these improvements, the decentralized system would spur innovation and patenting.

Joab Rosenberg

Joab Rosenberg, Epistema

Mr. Rosenberg (Ret. Colonel) has had an extensive career as a data scientist and analyst. He is the former Deputy Head Analyst for the Israeli Government, overseeing the work of almost 1,000 analysts. His academic background is originally in Physics and Mathematics, and later in the Philosophy of Science and Epistemology.

He is a lecturer on “Advanced Analytics” in the Honors Track of the Interdisciplinary Center Herzliya (School of Government).

Mr. Rosenberg founded Epistema (www.episte.ma) in 2015 and acts as the company's CEO. Epistema offers a business platform for trusted decision-making, based on a unique AI algorithm that follows and scores expert discussions.

Antonio Santangelo

Antonio Santangelo, Nexa Center for Internet and Society at the Politecnico of Turin (Italy)

Antonio Santangelo is the Executive Director of the Nexa Center for Internet and Society at the Politecnico of Turin (Italy). He teaches Semiotics at the University of Turin and Semiotics and Philosophy of Language, Textual Semiotics and New Media Languages at the University eCampus (Como, Italy).

He is the author of many articles published in Italian and international reviews, of many book chapters and of the Handbook of TV Quality Assessment (UCLan Publishing 2013), Sociosemiotica dell’audiovisivo (Aracne 2013: Audiovisual sociosemiotics), Le radici della televisione intermediale (Aracne 2012: The roots of inter-medial television) and Il gioco delle finte realtà (Vicolo del Pavone 2012: The game of fictional realities).

With Gian Marco De Maria, he has edited La Tv o l’uomo di immaginario (Aracne 2012: Television or the imaginary man); with Guido Ferraro, Uno sguardo più attento (Aracne 2013: A more attentive glance), I sensi del testo (Aracne 2017: The senses of the text), Narrazione e realtà (Aracne 2017: Narration and reality); with Giorgio Borrelli and Giovanni Sgrò, Il valore nel linguaggio e nell'economia (Libellula 2017: The concept of value in language and economy).

Benedikt Fecher

Benedikt Fecher, Alexander von Humboldt Institute for Internet and Society

Since 2017, Benedikt Fecher has headed the “Knowledge & Society” research programme at the Alexander von Humboldt Institute for Internet and Society. The programme addresses issues at the interfaces of science and digitisation, and of education and digitisation.

Benedikt is co-editor of the blog journal Elephant in the Lab, which critically examines the scientific system, and a member of the editorial board of Publications, an open access journal. In his research, Benedikt deals with questions concerning the governance of science and innovation, in particular with the topics of impact and third mission, open science/open access, and research infrastructures.

Abstract:
This two-hour workshop deals with research impact and the question of how scientific knowledge transpires into society. A specific focus lies on what quality means in transfer activities. Participants will learn basic concepts of societal impact, learn how to use a model for knowledge transfer, and apply it to their own research topics.

Avigdor Gal

Avigdor Gal, Technion – Israel Institute of Technology

Avigdor Gal is a Professor at the Technion – Israel Institute of Technology, where he leads the Data Science & Engineering program. He specializes in various aspects of data management and mining, with about 150 publications in journals, books, and conferences in the Database, Artificial Intelligence, and Machine Learning communities.

He has served as a program co-chair and general co-chair of several top-tier conferences and as a PI of multiple national and European projects in which data is used for smart cities, better food production, traditional and social media, and more.

Avigdor Gal is a recipient of the test-of-time award at DEBS 2018 and the prestigious Yannai award for excellence in academic education.

Prof. Karine Nahon

Karine Nahon is the elected president of the Israel Internet Association (ISOC-IL), an Associate Professor of Information Science in the Lauder School of Government and Ofer School of Communications at the Interdisciplinary Center at Herzliya (IDC), Israel, and an Affiliated Associate Professor in the Information School at the University of Washington (UW). She is a member of the Social Media Lab (SoMe Lab) and affiliated faculty at the Center for Communication and Civic Engagement at the University of Washington. She is the former director of the Virality of Information (retroV) research group and former director of the Center for Information & Society at the University of Washington.

Using interdisciplinary lenses, her research focuses on the politics of information and information politics. More specifically, she studies power dynamics and network gatekeeping in social media, and the role of virality and information flows in elections and in politics in general. Professor Nahon is the author of the book Going Viral (2013, co-authored with Jeff Hemsley). The book received the ASIS&T Best Information Science Book Award and was included in the American Library Association's Outstanding Academic Titles. She has published over 80 research papers in her area in top-tier journals such as JASIS&T, ARIST, JCMC, IJoC, ICS and TIS. Since 2013 she has served as co-chair of the Digital and Social Media track at HICSS.

In addition to her academic role, she also helps shape information policy and promote transparency and accountability through leadership roles on national and international bodies. Currently, she is a board member of Wikimedia and of the Freedom of Information Movement. In the past she was a member of the Chief Information Officer (CIO) Cabinet and represented Israel at the UN on the committee for science and technology.

Vivian Ng

Vivian Ng, University of Essex School of Law and Human Rights Centre

Vivian Ng is a Senior Research Officer at the University of Essex School of Law and Human Rights Centre, working with the Human Rights, Big Data and Technology Project. Her work focuses on investigating the human rights implications of big data, smart technology and artificial intelligence, considering the scope and meaning of human rights in the digital age, and developing legal responses at the international and multi-stakeholder forum levels. This includes examining the specific risks and effects of a range of digital technology, as well as a holistic view of their full and interconnected implications for human rights.

She holds an LLM International Human Rights and Humanitarian Law from the University of Essex, and a BSocSc Political Science and Corporate Communication from the Singapore Management University. Vivian previously worked as a consultant for the International Federation for Human Rights (FIDH). Prior to that, she provided research support for the mandates of UN Human Rights Committee Member Professor Sir Nigel Rodley and Swiss Chair of International Humanitarian Law Professor Noam Lubell, and worked with the Office of the UN High Commissioner for Human Rights Regional Office for South-East Asia.

Alina Trapova

Alina Trapova, Bocconi University

Alina Trapova completed her LLB at the University of Sheffield (UK) and later obtained her LLM in IP law at Queen Mary University of London (UK), specialising in trade mark and copyright law. She then worked in legal policy within the IFPI, the international body representing the recording industry worldwide.

She has also worked at the EUIPO’s Boards of Appeal as a Legal Assistant to the President of the Boards where, among other things, she drafted trade mark appeal decisions and coordinated the Presidium meetings.

She also has experience in the private legal sector, tech start-up counselling and IP conference management.

Presently, she is a researcher at Bocconi University in Milan, where she pursues a doctorate degree, while also acting as teaching assistant in various IP related courses. Her current research interests focus on copyright licensing, copyright ownership and machine learning. Alina also teaches law and licensing to fashion and design students in a fashion academy in Milan.

Abstract:
Machine learning and copyright law – a net of ownership claims
Machines capable of producing creative output date back as early as the 1970s: Harold Cohen’s AARON in the realm of art (1970s),[1] David Cope’s Emmy regarding music (1980),[2] and Ray Kurzweil’s Cybernetic Poet regarding poetry (1980s).[3] What characterises these is that there was always clearly a human in control, in whom copyright authorship would be vested - either the programmer or the user of the software. With the advent of machine learning, one of the most prominent applications of artificial intelligence (AI), the traditional understanding of copyright authorship, and naturally of ownership, in the context of such works is disturbed. Certain features of the creative process are automated to the extent that it often becomes difficult to draw the line between which aspects the machine produced prompted by its user, what flows from the original software written by the programmer, and what, if anything, the machine learning algorithm generated itself based on the large datasets it is fed.

This article critically analyses the problem of allocating authorship in creative output from the perspective of EU copyright law, where most traditions inevitably focus on the human factor. The reasoning of "no human author, no copyright" leads to significant practical problems, eventually undermining human authorship and affecting the market for low-creativity works.

In search of a working solution, this paper draws a parallel to section 9(3) of the UK’s Copyright, Designs and Patents Act 1988 - a viable option, yet only for certain types of AI.[4]

In other circumstances, where AI would employ a higher degree of decision-making and even exhibit cognition and originality, it is often the case that the arrangements necessary for the creation of a work are not undertaken by any person, but by the machine itself. This limits the human input to the mere pressing of a single button without exercising any further control. In such a case, a different ex ante presumption of authorship is suggested - another common law legal construct, namely the US doctrine of works made for hire. Such an approach, unconventional for the EU copyright tradition, might indeed be the least disruptive presently available solution to the authorship conundrum, as it reflects the “artificial” aspect of the creative process, which in turn justifies an “artificial” legal construct.

[1] Chris Garcia, 'Harold Cohen and AARON—A 40-Year Collaboration' (Computer History Museum, 23 August 2016), accessed 25 September 2018.
[2] David Cope, 'Emmy Vivaldi' (YouTube, 12 August 2012), accessed 25 September 2018.
[3] Ray Kurzweil, 'CyberArt Technologies' (Kurzweil Cyber Art, -), accessed 25 September 2018.
[4] Copyright, Designs and Patents Act 1988 (hereinafter, CDPA).

 

Eduardo Magrani

Eduardo Magrani, Institute for Internet and Society of Rio de Janeiro (ITS Rio)

Ph.D. and Coordinator of the Institute for Internet and Society of Rio de Janeiro (ITS Rio). Senior Fellow at the Alexander von Humboldt Institute for Internet and Society in Berlin. Eduardo Magrani has been working with public policy, Internet regulation and Intellectual Property since 2008. Professor of Law and Technology and Intellectual Property at FGV Law School, UERJ, IBMEC and PUC-Rio. Researcher and Project Leader at the FGV Center for Technology & Society (2010-2017).

Author of the books "The Internet of Things" (2018), "Among Data and Robots: Ethics and Privacy in the Age of Artificial Intelligence" (forthcoming, 2018), “Digital Rights: Latin America and the Caribbean” (2017) and "Connected Democracy" (2014). Associated Researcher at the Law Schools Global League and Member of the Global Network of Internet & Society Research Centers. Ph.D. and Master of Philosophy (M.Phil.) in Constitutional Law at the Pontifical Catholic University of Rio de Janeiro, with a thesis on the Internet of Things and Artificial Intelligence through the lenses of Privacy Protection and Ethics. Bachelor of Laws at PUC-Rio, with academic exchange at the University of Coimbra and Université Stendhal-Grenoble 3. Lawyer, active in the fields of Digital Rights, Corporate Law and Intellectual Property.

Magrani has been strongly engaged in the discussions about Internet regulation and was one of the developers of Brazil's first comprehensive Internet legislation: Brazil's Internet Bill of Rights (“Marco Civil da Internet”). Eduardo coordinated the Access to Knowledge Brazil Project at FGV as Project Manager, participating in the copyright reform, intermediary liability and Internet regulation policy debates in Brazil. He has been Coordinator of Creative Commons Brazil and of the Digital Rights: Latin America and the Caribbean Project since 2012, jointly with prestigious Latin American organizations. Currently he coordinates several projects as Coordinator of Law and Technology at ITS Rio.

Abstract:
The continuous interaction between intelligent devices, sensors and people points to the increasing amount of data being produced, stored and processed, increasingly changing our daily life in various respects. On one hand, the context of hyperconnectivity can bring economic benefits to the State and companies, as well as convenience to consumers. On the other hand, increasing connectivity brings significant challenges in the spheres of privacy protection and contemporary ethics, impacting, ultimately, democracy itself. This thesis addresses, from the regulatory point of view, some of these challenges faced by the current rule of law arising from the advance of the scenario called the Internet of Things.

Anna Ivanova

Anna Ivanova, University of Cape Town

Anna Todorova Ivanova is completing her LL.M. studies and dissertation at the Department of Public Law, University of Cape Town, South Africa.

She completed her undergraduate degree at the University of Cologne, Germany, with a specialisation in European and International Law, and the First State Exam at the Higher Regional Court in Cologne.

Her current research focuses on Artificial Intelligence – the present state of development and future opportunities – and the possibility of introducing legal personality to AI agents under international law with reference to the legal framework this could require.



Abstract:
Legal personality of Artificial Intelligence under International Law

To offer a deeper understanding of the topic, this work will first examine the concept of legal personality, its meaning and its application in the legal framework of international law over the years. Without claiming advanced technological knowledge in scientific areas like robotics and engineering, the presentation will then provide an overview of the latest developments concerning Artificial Intelligence. The questions introduced throughout the introduction will engage with the nature and different forms of legal personhood and its connection to intelligence and/or consciousness.


Introduction:

– Legal personality within the international law
– Artificial Intelligence

If Artificial Intelligence is not yet defined within the legal framework, where should the link between these two terms lie in the legal context? Do rights and obligations reflect the ability to participate in legal life and organize legal interaction? Are they connected to the ability to cause and experience damage? Could a closer relation be established between the legal personhood of AI and that of humans, or rather that of organised entities and corporations?

 

Research and analysis:

– Self-learning machines and algorithms, Advanced Robotics, Emotional Intelligence
– “Electronic Personhood” - domestic law systems and international law

The subject of reference will be international law and recent developments in EU law, such as the European Parliament initiative to introduce an “electronic personhood”. Many of the conceptual ideas of international law first gained their legal existence in domestic legal life. For this reason the presentation will examine some regulations and “visions” of national legislation, for example in Estonia and the US. The present state will be shown with regard to concepts of future developments and research projects, such as quantum computing and generated machine consciousness.

– Legal Personality of AI – present and future

Could the concept of legal personality be applied to AI at its present stage of development, or are modifications required? Would such modifications become necessary at a later point, and what could they look like? How would possible legal reforms affect society, also with reference to sociological and psychological perspectives?
Conclusion and Discussion:

– Advantages, dangers?

The advantages and dangers that the different options bear now and for the future, such as creating a robotic or technical veil which, similar to the corporate veil, may at some point need to be pierced.
In the final part of the dissertation some conclusions will be drawn in accordance with future prognoses.

 

Eliane Bucher

Eliane Bucher, Nordic Centre for Internet and Society

Eliane Bucher is an Assistant Professor with the Nordic Centre for Internet and Society at BI Norwegian Business School in Oslo, Norway. She completed her doctorate in management at the University of St. Gallen, where she is also a lecturer in media- and communications management. Her current research centers on digital labor practices and platforms – including algorithmic labor – as well as modes of collaborative/access-based consumption.

Abstract:
The Alienating Effect of being managed by Artificial Intelligence
Ever more decisions in economic life, both online and offline, are significantly shaped, enabled and predicted by machine-learning algorithms (e.g. Agrawal, Gans & Goldfarb, 2018a, 2018b; Russell & Norvig, 2016; O’Neil, 2016; Domingos, 2015; Pasquale, 2015). Whether it is being accepted for a car loan, receiving a suggestion for a business contact on a social networking site or being allotted a task on a crowd working platform, algorithms are working in the background to aid, approve and allot.

However, there is also a dark side to this: while these algorithms certainly have potential upsides, for example in that they may render (managerial) decision-making more rational, this contribution aims to shed light on the potential downsides as well by scrutinizing whether individuals subjected to automated decision-making “without humans in the loop” may experience alienation. Consider that when treated unfairly, one generally has the right to face one's accuser and to understand how a specific verdict or decision has come to pass.

This is not applicable in the case of artificial intelligence. AI-facilitated decision-making processes are, however, often marked, at least to the person concerned by these decisions, by varying degrees of opacity in terms of (1) which specific data sources and points were considered [opaque input variables], (2) which specific calculations, weights and probabilities were applied [opaque algorithm], and (3) which specific outputs were translated into which decisions according to which thresholds [opaque output conditions]. In current practice, ‘opaque and invisible models are the rule, and clear ones very much the exception’ (O’Neil, 2016). Against this backdrop, we posit that the opacity linked to predictive AI creates a considerable potential for alienation vis-à-vis an anonymous non-human “accuser” (i.e. decision-making entity).

In this contribution, we thus intend to look at how individuals – users, employees, customers and citizens – are impacted by the advent of algorithmic decision-making mechanisms. More to the point, we explore the potential for alienation inherent in settings where the human element is no longer ‘in the loop’: How do individuals feel when they struggle to understand a decision made by a predictive model, when they have trouble connecting their actions in real life to the decisions taken by the AI, and when they seemingly have no or limited means to impact or appeal a decision? (see Table 1 for potential key questions). Here, we follow the social-psychological tradition, which assumes alienation to be a conscious subjective experience marked by individual perceptions such as powerlessness, normlessness or isolation (Seeman, 1959; Dean, 1961; Clark, 1959). In particular, we are interested in (1) how individuals make sense of decisions which are derived on the basis of predictive AI and (2) how they perceive their own role and agency in the face of predictive AI.

Table 1: Key Questions pertaining to the alienating effect of predictive AI
(Dimensions of alienation based on Seeman, 1959. Potentially alienating effects illustrated by: AI-based rejection of a car loan; AI-based lowering of a credit score; AI-based display of an offensive web ad.)

Possible Dimension of Alienation | Illustrative perception | Proposed Key Question
Meaninglessness | The outcome does not make sense to me | How do individuals make sense of decisions which are derived on the basis of predictive AI?
Normlessness | It is unclear how the outcome came to be | (as above)
Estrangement | The outcome does not adequately reflect my situation, actions or performance | (as above)
Powerlessness | I don’t have any means to influence the outcome | How do individuals perceive their own role and agency in the face of predictive AI?
Isolation | I cannot compare the current outcome to alternative outcomes in similar/other situations | (as above)

 

The proposed contribution thus seeks to complement the ongoing normative discourse on AI with an individual and human-centered perspective. We intend to shed light on individuals’ sensemaking and perceived agency in an environment marked by predictive AI. In particular, we are interested in how far individuals feel alienated vis-à-vis non-human agents in the decision-making process. Here, we look into the ethics of AI not just from a deontological perspective (which outcome is fair?) but from a subjective perspective (which outcome is perceived as fair?) as well.

It is conceivable that certain AI-based decisions fulfill objective criteria of fairness but do not match individual notions of fairness. A performance evaluation may thus be objectively fair in the sense that it holds up in comparison with a large population of similar cases. However, due to individual tendencies to overestimate one’s own skill and performance in certain areas (i.e. ‘inflated self-assessment’ of logical reasoning or specific knowledge), it may still subjectively be perceived as unfair (e.g. Kruger & Dunning, 1999).

The reverse is also possible. It is conceivable that individuals perceive AI-governed decisions as objectively fair when in reality these may suffer from systematic bias (e.g. the causality vs. correlation fallacy) which may not be accounted for. A particularly striking example is cited by O’Neil (2016), where the length of prison sentences was determined (among other factors) by probabilistic algorithms that relied on demographic factors such as family ties, friends and neighborhood, which were of no legal relevance and should not have factored into the sentencing decision.

The proposed contribution will first outline the central dimensions of alienation in the context of predictive AI. Here we will put particular emphasis on the dimensions of opacity of algorithmic decision-making processes and how they relate to alienation. Furthermore, we will scrutinize two aspects within the alienation theme, namely (1) the question of sensemaking, where we will look into objective and subjective perceptions of fairness, as well as (2) individual conceptualizations of role and agency vis-à-vis AI. Derived from this, we propose a framework based on transparency and embeddedness as a basis for AI implementation and governance. While our contribution will be set up in the social-psychological tradition (Seeman, 1959; Dean, 1961; Durkheim, 1893), we will draw on insights from the structural tradition (e.g. Marx, 1978; Kalekin-Fishman, 2015) to interpret our results and to set them into a broader societal context.

 

References
Agrawal, A. K., Gans, J. S., & Goldfarb, A. (2018a). Exploring the impact of artificial intelligence: Prediction versus judgment (No. w24626). National Bureau of Economic Research.
Agrawal, A., Gans, J. S., & Goldfarb, A. (2018b). Prediction machines: The simple economics of artificial intelligence. Boston: Harvard Business Review Press.
Dean, D. G. (1961). Alienation: Its meaning and measurement. American Sociological Review, 753-758.
Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will remake our world. Basic Books.
Durkheim, E. (1893/1984). The division of labour in society. Basingstoke: Macmillan.
Kalekin-Fishman, D., & Langman, L. (2015). Alienation: The critique that refuses to disappear. Current Sociology, 63(6), 916-933.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121.
Marx, K. (1978 [1844]). Economic and philosophical manuscripts of 1844. In R. C. Tucker (Ed.), The Marx-Engels reader: 66–125. London: W.W. Norton & Company.
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson Education Limited.
Seeman, M. (1959). On the meaning of alienation. American Sociological Review, 24, 783–791.

 

Tammy Katsabian

Tammy Katsabian, Hebrew University of Jerusalem

Tammy Katsabian is a PhD student at the law faculty of the Hebrew University. She is writing her thesis under the guidance of Prof. Guy Davidov. Her thesis focuses on the way the internet platform and virtual technology have influenced labour rights in a way that requires new understandings and redefinitions of basic concepts in this field, including fundamental workers’ rights. Her research consists of a series of articles focusing on the effects of the internet on three issues: the right to privacy of employees; working time; and freedom of association.

Tammy has an LL.M. degree from Yale Law School and Tel-Aviv University (cum laude) and an LL.B. degree from Bar Ilan University (cum laude). Alongside her studies in the PhD program, Tammy works as the academic advisor of the Women’s Rights at Work Clinic.

Abstract:
Employees’ Privacy in the Internet Age – AI and the Human Aspect
The internet age has created a crisis in the notion of employees’ privacy. The AI revolution, with its various new surveillance technologies, enables the employer to supervise her employees at almost any time and in almost any place. Starting from the stage of being a candidate, and throughout the employment period, employees are constantly watched over, even in their personal time and space. Alongside that, AI enables the employer not only to collect information on the employee in various ways, but also to automatically process and analyze big data and to draw conclusions from it about the behavior and character of the employee.

This constant supervision, however, is also possible due to modifications in the behavior of individuals and society. Thus, in order to accurately grasp the modern privacy dilemma, I believe that we need to add a sociological dimension to the technological understanding of AI and big data and to explore how the internet age has urged the individual to “live in public” and society to participate in collective supervision of its members. Following this, the presentation will bring to the fore the sociological literature on the internet and society. A gap between privacy in the books and privacy on the ground will be exposed, pointing especially to the paradox of people willingly sharing more and more information with others while at the same time wanting to keep it private (at least to some extent). These sociological insights shed new light on the legal discourse.

So far, attempts in the legal literature to deal with the modern privacy crisis have focused on creating a flexible and contextual understanding of privacy (or its violation). This is indeed helpful for broadening the scope of privacy to include new phenomena, but in practice, at least in the context of the workplace, such legal structures are outdated, and in particular cannot provide the necessary degree of determinacy and predictability to employers and employees. Given the power imbalance in employment relations, such indeterminacy is likely to prove detrimental especially to employees, thus making it difficult to protect them against infringements of privacy (broadly conceived, including the right to a private life).

The presentation argues that this gap can be addressed by adding a procedural protection to the right to privacy, which is easier to implement. Three concrete proposals are advanced along these lines: mandating anonymous CVs before the interview stage (to prevent the screening of candidates at this preliminary stage based on “Googling”); creating incentives for developing workplace-specific privacy rules in cooperation with employee representatives; and mandating a “cooling-off” period of one month before dismissals that are based on employees’ private behavior (which is now especially common online).

Maciej Kuziemski

Maciej Kuziemski, Science Policy Research Unit (SPRU), University of Sussex

Maciej Kuziemski is a PhD researcher at the Science Policy Research Unit (SPRU) at the University of Sussex, UK, and a 2018-19 Fulbright-Schuman Visiting Research Fellow with the Program on Science, Technology & Society at the Harvard Kennedy School. While at Harvard, Maciej will examine the US government’s algorithmic practices.

In his doctoral research Maciej is studying sociotechnical imaginaries of algorithms and their influence over public sector practices in the US and the UK. His interests include citizen empowerment, policy design and public sector innovation.

Laurens Naudts

Laurens Naudts, KU Leuven Centre for IT & IP Law

Laurens Naudts is a doctoral researcher in law at the KU Leuven Centre for IT & IP Law. His research interests focus on the interrelationship between artificial intelligence, ethics, justice, fairness and the law. Laurens’ Ph.D. research reconsiders the concepts of equality and data protection within the context of machine learning and algorithmically guided decision-making.

As a researcher, Laurens has also been actively involved in several national and EU-funded (FP7 and H2020) research projects, including inter alia iLINC, Preemptive and, currently, VICTORIA (Video Analysis for Investigation of Criminal and Terrorist Activities).

Laurens was formerly appointed as a researcher at the European University Institute (Centre for Media Pluralism and Media Freedom) where he performed scientific research for the project "Strengthening Journalism in Europe: Tools, Networking, Training".

Maria Lillà Montagnani

Maria Lillà Montagnani, Bocconi University

Maria Lillà Montagnani is Associate Professor of Commercial Law at Bocconi University, where she teaches and researches in the field of Intellectual Property Law. She has an LL.M. in Intellectual Property (Queen Mary University of London) and a Ph.D. in Competition Law.

She is the Director of ASK, a Bocconi research center devoted to the promotion of cultural planning and cultural institutions management. She has been Faculty Affiliate at the Berkman Center for Internet & Society of Harvard University, Scholarship holder at the Max Planck Institute for Intellectual Property, Competition and Tax Law of Munich, and Visiting Scholar at the CCLS (Centre for Commercial Law Studies) at Queen Mary University of London.

Her research activity has mostly concerned the interplay between IP law and technology, on which topics she has published in major international journals.

Alek Tarkowski

Alek Tarkowski, Centrum Cyfrowe Foundation

Dr. Alek Tarkowski (1977) is a sociologist, copyright reform advocate and researcher of digital society. He is co-founder and President of Centrum Cyfrowe Foundation, a think-and-do tank building a digital civic society in Poland; co-founder of Communia, a European advocacy association supporting the digital public domain, and of the Polish Coalition for Open Education (KOED); and founder and Public Lead of Creative Commons Poland. He was a New Europe Challenger in 2016 and a Leadership Academy of Poland alumnus in 2017. In 2016-2017 he co-chaired the global strategy process for the Creative Commons Network. He is a member of the Steering Committee of the Internet Governance Forum Poland, the Program Board of the School of Ideas at SWPS University of Social Sciences and Humanities, and the Commonwealth of Learning's Center for Connected Learning.

He was formerly a member of the Polish Board of Digitisation, an advisory body to the Minister of Digitisation (2011-2016), a member of the Board of Strategic Advisors to the Prime Minister of Poland (2008-2011), responsible for issues related to the development of digital society, and a Junior Fellow at the McLuhan Program on Culture and Technology, University of Toronto. He is co-author of the strategic report "Poland 2030" and of the Polish official long-term strategy for growth. He is a policy expert on open content and copyright policies, open and digital education, and digital skills, and has been involved for over a decade in building a digital commons and public domain in Poland and abroad.

He is the author of "Alfabet nowej kultury i inne teksty" [The Alphabet of New Culture and Other Texts], a collection of essays written together with Mirosław Filiciak. A sociologist of new media, he obtained his PhD in sociology from the Institute of Philosophy and Sociology, Polish Academy of Sciences. He teaches at the School of Ideas, SWPS University of Social Sciences and Humanities, and at the Artes Liberales Institute, University of Warsaw.

Lior Zalmanson

Lior Zalmanson, University of Haifa

Dr. Lior Zalmanson is a lecturer (assistant professor) at the Information and Knowledge Management Department, University of Haifa.

His research interests include social media, online engagement, commitment, internet business models, creative experimentation, the sharing economy and algorithmic management. His research has won awards and grants from the Fulbright Foundation, the Dan David Prize, Google, the Marketing Science Institute, and the Social Informatics SIG, among others.

His research has been covered by The Times, The Independent, PBS, and Fast Company, among others. His research has been published in Management Information Systems Quarterly and MIT Sloan Management Review. In 2016 he was appointed as a research fellow at the Metropolitan Museum Media Lab. In 2017, Lior was a visiting assistant professor at NYU Stern, where he taught the "Information Technology for Business and Society" course.

Lior is also the founder of the Print Screen festival, Israel's digital culture festival, which connects internet researchers, activists and artists.

Furthermore, in his parallel life he is a grant- and award-winning playwright and screenwriter whose recent film (about drone operators) had its debut at the 2016 Tribeca Film Festival.


Areas of Interest for Student Supervision
User engagement in online environments. Trust and commitment between users and websites. Internet business models for online content. Algorithmic Management and the Sharing Economy.

Selected Recent Publications
Balestra, M., Zalmanson, L., Cheshire, C., Arazy, O., & Nov, O. (2017). "It was fun, but did it last?" The dynamic interplay between fun motives and contributors' activity in peer-production. PACM on Human-Computer Interaction, 1(2), 21.
Zalmanson, L., & Oestreicher-Singer, G. (2016). Turning Content Viewers Into Subscribers. MIT Sloan Management Review, 57(3), 11.
Oestreicher-Singer, G., & Zalmanson, L. (2013). Content or community? a digital business strategy for content providers in the social age. MIS Quarterly, 37(2), 591-616.

Eduard Fosch-Villaronga

Eduard Fosch-Villaronga, eLaw Center for Law and Digital Technologies, Leiden University

Dr. Eduard Fosch-Villaronga is a Marie Skłodowska-Curie Postdoctoral Fellow at the eLaw Center for Law and Digital Technologies of Leiden University in the Netherlands. He is co-leader of the Ethical, Legal and Societal Issues Working Group of the European COST Action CA16116 on Wearable Robots. Previously he was a researcher at the Microsoft Cloud Computing Research Centre at Queen Mary University of London in the UK and at the Law, Governance, and Technology Department at the University of Twente in the Netherlands. Eduard holds an Erasmus Mundus Joint Doctorate (EMJD) in Law, Science, and Technology, in which he addressed the legal and ethical implications of the use and development of healthcare robot technology. Beyond the mobility plan of his Ph.D. program (University of Bologna, Italy; University of Turin, Italy; Mykolas Romeris University, Lithuania; Leibniz University Hannover, Germany; Autonomous University of Barcelona, Spain; and University of Pittsburgh, USA), Eduard has held visiting research positions at the Center for Engineering Education and Outreach (CEEO) at Tufts University in the United States and at the Laboratoire de Systèmes Robotiques at EPFL in Lausanne. He also holds degrees from the University of Toulouse (LL.M.), the Autonomous University of Madrid (M.A.), and the Autonomous University of Barcelona (LL.B.).

Abstract:
How transparent is your AI? The GDPR’s transparency requirement in the age of robots and AI

Artificial intelligence (AI) capabilities are growing rapidly, and with them robotic systems that are connected to the cloud. Cloud computing encompasses various deployment models and involves multiple service layers and service providers, with supply chains that are often opaque. This complexity and non-transparency are exacerbated when service-oriented architectures and cloud computing are applied to physical devices, also known as “Robot as a Service” (RaaS). Because transparency is now a fundamental principle for data processing under the General Data Protection Regulation (GDPR), we explore how robot and AI systems may comply with such a requirement. We address this topic from a legal, technical, and user perspective.

To understand transparency in the context of robots and AI, it is necessary to first investigate the ratio legis of this legal requirement. In particular, we analyze the conciseness, intelligibility, accessibility, and plainness requirements of Article 12 GDPR, in connection with Articles 13 and 14 GDPR. Secondly, we will discuss the complex nature of AI and cyber-physical ecosystems and the challenges this raises for meeting the transparency requirements of the GDPR. To illustrate these practical challenges, we will explore different examples of AI and RaaS employed at home, in schools or in public spaces (e.g., the companion robot Buddy and delivery robots by Starship Technologies). While the law clearly states which information data controllers have to provide to data subjects when they process and collect their data, the technology-agnostic nature of the regulation not only fails to provide sufficient guidance but even impedes understanding of how this right could be ensured in AI environments. Thirdly, we will address the question of user needs and expectations regarding transparency of the system, especially in light of the diversity of target user characteristics, including vulnerable groups such as users of assistive robot and AI technologies.

In order to address the transparency challenges, the paper will explore several technical ways to facilitate the fulfillment of the transparency provision. Suggested approaches include both preventive solutions, such as the formalization of the GDPR in RuleML for transparency purposes or the certification of privacy compliance (e.g., via the European Privacy Seal), and reactive solutions, for instance providing direct feedback to the user, explaining ambiguities, or guiding the user implicitly on how to interact with the robot, instead of explicitly describing its capabilities outside of the use context. Yet we argue that these solutions might nevertheless have to be complemented with the development of an algorithmic impact assessment and cross-disciplinary educational mechanisms.

Taking the requirement of transparency seriously also raises broader ethical and social considerations, such as to whom and to what extent robots and artificially intelligent systems should be transparent. Since assistive technologies are frequently targeted at children, individuals with disabilities, or older people, who may not fully understand transparency explanations and might not be able to challenge the underlying technology, we propose a differentiated approach that does justice to different system characteristics, user needs, and user interests. In particular, we suggest following a purpose-limitation and risk-reduction approach.

Moran Yemini

Moran Yemini, Center for Cyber Law & Policy

Moran Yemini is a Senior Fellow at the University of Haifa's Center for Cyber Law & Policy (CCLP), a litigation partner at the law firm of Herzog, Fox & Neeman, and a Visiting Fellow at the Information Society Project at Yale Law School.

Moran has published articles in the fields of law, communications and philosophy (including both theoretical and empirical work), which have been frequently cited in academic articles as well as by U.S. federal courts. His research interests span various aspects of the intersection of technology, political theory, and law, including the relationship between technology and morality, the theory of Internet governance, constitutional rights in the digital age, and online freedom of expression.

Moran holds an LL.B., magna cum laude, from Tel-Aviv University, an LL.M. from New York University School of Law (where he studied under the merit-based Vanderbilt Scholarship), and a Ph.D. in law from the University of Haifa. In addition to degrees in law, Moran’s academic background also includes an undergraduate degree (B.A.), magna cum laude, in communications and a graduate degree (M.A.), magna cum laude, in political science, both from Tel-Aviv University.

Abstract:
Traditional free speech theory has identified three main justifications for freedom of speech: the attainment-of-truth argument, which holds that freedom of speech assists in advancing knowledge and discovering truth; the argument from liberal democracy, which holds that freedom of speech is essential to collective self-governance; and the argument from personal autonomy, which holds that freedom of speech reflects respect for individuals' autonomy and rationality and promotes individual self-fulfillment.

Although there are significant potential tensions between these various free speech justifications, courts rarely engage in a normative-substantive analysis of the rationales underlying free speech and generally tend to assume that unfettered speech necessarily promotes all of these (and other) rationales simultaneously. But it may no longer be possible to duck these complexities, as digital technologies, and particularly communicative artificial intelligence (AI), are pushing free speech theory and doctrine in profound ways.

Our current technologically-mediated system of free expression, in which AI speech is becoming increasingly present, puts into question one of the basic premises of free speech doctrine: that there is no such thing as too much speech. Our current system of free expression validates and amplifies the established critiques of the attainment-of-truth argument, which rests on the assumption that in a "marketplace of ideas" truth will emerge triumphant. It similarly calls for a reevaluation of the relationship between a regime of free speech and the goal of promoting democracy. Within these larger questions, one of the main points of concern is whether AI speech deserves protection and, if so, on what grounds.

Moreover, at an even more fundamental level, the development of strong AI, potentially capable of independent "thinking," may also force a reexamination of the relevancy and applicability of the autonomy justification of free speech to an environment in which it may no longer be possible to draw clear lines between human and non-human communication. This presentation will briefly discuss these challenges to free speech theory and doctrine.

Natalie Pompe

Natalie Pompe, University of Zurich

Natalie Pompe is a PhD Candidate at the University of Zurich and holds a Master of Law from the University of Zurich and an LL.M. from King's College London. In July 2018 she joined HIIG as a Fellow in the research field ‘Data, actors, infrastructures: The Governance of data-driven innovation and cyber security.’ Her PhD thesis investigates the concept of algorithmic information distribution and its effect on democratic will-building processes. She addresses the question of whether the current algorithmic information distribution in the digital public sphere is still in accordance with the constitutional framework of Swiss democracy. She has a strong interest in new technologies shaping our society and in the protection of human agency as a driver for innovation and societal progress. Her research finds real-world application in the project ‘We.Publish’, which creates a media cryptocurrency in Switzerland and a blockchain-based technology for journalists with the aim of preserving and enhancing the quality of journalism. Apart from her research, she is a yoga and meditation teacher.

Abstract:
Abstract for the ‘AI: Ethical and Legal Implications’ Conference

Artificial intelligence affects the global community as a whole, which calls for a global discourse on the basis of fundamental democratic values. However, traditional regulatory responses linked to the concept of jurisdictions create a fragmentation in legal norms. In other words, various regulatory actors respond to the same technical processes individually, resulting in a fragmented use of AI technologies across jurisdictions. This article discusses the dynamics between technological development and its interaction with both legal norms and social values. Within this article I explore how the fragmentation in legal regulation affects not only technical development but also the regulatory processes themselves. Furthermore, the responsible regulators are not only subject to different regulatory regimes and concepts of democracy; different social norms also apply in different jurisdictions. Fragmented application of AI technologies shapes experiences with technologies that, on the one hand, can act as a social glue for society and, on the other, create the informational basis for democratic discourse about their regulation.

An example of this dynamic between legal fragmentation and the application of technology is the development of China as a leader in facial recognition. The combination of the Chinese population size and the unregulated use of personal photos has allowed Chinese companies to successfully commercialize facial recognition algorithms. While Chinese citizens are getting used to facial recognition in a variety of settings, the introduction of this technology in other countries has created societal resistance.

My hypothesis in this article is that there is a circular dynamic between regulatory response and experience with new innovation. As explained, the lack of legal harmonization on AI issues will create different experiences with AI technology across the world. How society perceives and experiences new technologies influences the debates on arising regulatory challenges: such experiences shape our values, and information exchange and debates shape our realities. In other words, common experiences create a shared understanding of an issue that needs a regulatory response.

A likely conclusion of the analysis of legal fragmentation would be that legal responses should be more interoperable. However, a closer look at the obstacles to harmonisation attempts shows how closely connected social and legal norms are. New innovations usually come with high hopes of solving various problems of mankind. Whilst technical advancement and economic trade have been globalised, social norms are still very fragmented. Deliberative democratic discourse develops in the interplay between institutionalised will-formation and political opinion-formation along informal channels. The latter is strongly affected by the experience with new technologies and the information provided about them. Consequently, I conclude that in order to pave the way towards an inclusive and deliberative Global Governance, we need to analyse the informational basis and the access to, or experience with, technologies. As a method in this article I apply the concept of interoperability developed by Urs Gasser and John Palfrey, which distinguishes between legal, data, technical and social layers of interoperability.

Luisa Scarcella

Luisa Scarcella, University of Graz

Luisa Scarcella is a Ph.D. candidate at the Department of Tax and Fiscal Law of the Karl-Franzens University of Graz (Austria), where she has been working as a research and teaching assistant. Her thesis concerns the tax treatment of cryptocurrencies, with a special focus on Bitcoin and blockchain technology. Recently she was awarded a scholarship for doctoral students by the Austrian Ministry of Research.

Within her research, she has also been focusing on the digital economy and the impact of AI on the labour market from a tax law perspective. She received her Master of Laws degree from the University of Udine (Italy) in 2015. Before starting her Ph.D., she interned at the European Investment Bank (Luxembourg) in 2013 and at the European Central Bank (Frankfurt am Main) in 2015. She is also one of the coordinators of the Finance, Law and Economics Working Group of the Young Scholars Initiative – INET.

Abstract:
“Profiling and automated decision making in tax matters. Possible aspects of compatibility with the new GDPR”
As highlighted by the OECD in more than one report, the introduction of big data combined with advanced data analytics represents a great opportunity for tax administrations. These instruments will enable tax administrations to collect their revenue more efficiently and to assess tax avoidance and tax evasion cases more precisely.

During the last years, tax administrations around the world have been adopting technologies in order to improve their technical procedures. Recognizing the unfeasibility of checking every single taxpayer, they have started to implement taxpayer “profiling” systems for risk management purposes. These systems allow the analysis of an enormous amount of data, mainly provided by third parties with whom the taxpayer engages, and after these data are processed, only the taxpayers considered more “risky” are subjected to audits.

In the past, new technologies were able to simplify the relationship between tax administration and taxpayer (e.g. pre-filled tax returns), and certainly, in the near future, big data analytics and the possibility of using AI, as recommended by the OECD and other international organizations, will be an indispensable instrument to counteract tax fraud and other tax-related crimes. At the same time, since tax administrations are among the public administrations collecting and storing the largest amounts of citizens’ data, the right to privacy plays a fundamental role in how tax administrations should use these technologies while ensuring the required level of protection of the private sphere.

While the OECD has promoted the use of new technologies in many of its policy recommendations, it has so far never referred to data protection. At the European level, however, the new General Data Protection Regulation (GDPR) has been adopted. How these new technologies can be, and are, implemented by national tax administrations when deciding which taxpayers will be subjected to auditing will need to be assessed also in light of these new provisions.

In particular, this paper aims to address whether the profiling activity carried out by algorithmic software used by tax administrations falls under the definition of “profiling” in the new GDPR and, if so, whether the auditing of a taxpayer identified through this activity should be considered the outcome of an automated decision. Moreover, the paper will assess the compatibility of these instruments with the GDPR and how provisions should be drafted at the national level in order to strike the right balance between ensuring taxpayers’ data protection and collecting the revenue that serves the general public interest.

Elena Beretta

Elena BerettaPolitecnico di Torino

Elena Beretta received the M.Sc. in Cooperation, Development and Innovation in Global Economy from the University of Turin (Department of Economics and Statistics "Cognetti de Martiis") in September 2016; she worked on an experimental thesis investigating the diffusion of innovation using agent-based models.

She earned a second-level Master's degree in Data Science for Complex Economic Systems at the Collegio Carlo Alberto in Moncalieri (TO) in June 2017.
From April 2017 to September 2017 she completed an internship at DESPINA, the Laboratory on Big Data Analytics at the Department of Economics and Statistics of the University of Turin, working on the NoVELOG project ("New Cooperative Business Models and Guidance for Sustainable City Logistics").

In November 2017 she began collaborating as a PhD student and effective member with the Nexa Center for Internet & Society at the Politecnico di Torino, working on a project on Data and Algorithm Ethics under the supervision of Professor Juan Carlos De Martin.

Her current PhD research focuses on the field of Fair Machine Learning.

Abstract:
Many decision-support software systems today make use of artificial intelligence techniques, fed with large amounts of citizens’ data, to generate recommendations or to make automatic decisions. Credit scores, loan-granting decision systems, institution rankings, employment application screeners, and workplace wellness/control programs are just some examples. The decisions generated by these software systems have increasing relevance for many facets of life: based on the collected data, their algorithms can deny a loan or reject a job application.

Given such relevance and impact on people's lives, a fundamental question is: "What does it mean, in an operational way, for AI software to be ethical?"

A first aspect emerging from this question is the need for a standard by which to measure ethical impact. For example, most of the problems related to biased results arise from concerns about the dataset. Sampling theory, and statistical theory more generally, provides different methods for measuring distortion; in statistics the term bias is used with regard to two concepts, the sample and the estimator. However, what statistical theory does not provide are benchmark metrics to assess whether an outcome or a sample should be considered ethical.

In addition, most collected data have no labels or descriptions referring to their context and acquisition process, nor measures of their representativeness with respect to the source population, or of their quality.

On the basis of these motivations, and building on current research efforts in this direction, in my PhD work I propose a conceptual and operational data labeling framework, the “Ethical and Socially-Aware Labels (EASAL)”. At the current state of our research, the framework is based on three metrics:
i) disproportion;
ii) collinearity and correlation;
iii) inherent data quality.

Experiments on real datasets will show benefits and limitations of this approach.
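Since the abstract does not define the three metrics operationally, the following sketch is only a hypothetical illustration, not the EASAL implementation: it shows how disproportion, collinearity/correlation, and a simple data-quality proxy might be computed for a tabular dataset. All function names, formulas and toy data are invented for this example.

```python
# Hypothetical sketch of the three metrics listed above (disproportion,
# collinearity/correlation, inherent data quality). This is NOT the authors'
# EASAL implementation; all names, formulas and toy data are illustrative.
import numpy as np
import pandas as pd

def disproportion(df: pd.DataFrame, column: str, reference: dict) -> float:
    """Total variation distance between the observed group shares in `column`
    and a reference distribution (e.g. census shares)."""
    observed = df[column].value_counts(normalize=True)
    groups = set(observed.index) | set(reference)
    return 0.5 * sum(abs(observed.get(g, 0.0) - reference.get(g, 0.0)) for g in groups)

def max_collinearity(df: pd.DataFrame) -> float:
    """Largest absolute pairwise Pearson correlation among numeric features."""
    corr = df.select_dtypes(include=np.number).corr().abs().to_numpy()
    np.fill_diagonal(corr, 0.0)
    return float(corr.max()) if corr.size else 0.0

def completeness(df: pd.DataFrame) -> float:
    """Share of non-missing cells, one crude proxy for inherent data quality."""
    return float(df.notna().mean().mean())

# Toy example: the resulting dictionary could be attached to a dataset as a label.
df = pd.DataFrame({
    "gender": ["f", "m", "m", "m", "m", "f"],
    "age": [23, 35, 41, 29, None, 52],
    "income": [21000, 48000, 52000, 39000, 61000, 30000],
})
label = {
    "disproportion": disproportion(df, "gender", {"f": 0.5, "m": 0.5}),
    "max_collinearity": max_collinearity(df),
    "completeness": completeness(df),
}
print(label)
```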

Shirley Ogolla

Shirley OgollaAlexander von Humboldt Institute of Internet & Society (HIIG)

Shirley Ogolla is a researcher at the Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin, investigating emerging forms of Internet-enabled participation. At HIIG, Shirley’s research focuses on workers’ participation on digital platforms, examining new forms and processes of employee participation online.

Shirley has a background in Media Studies from Humboldt University of Berlin, Sorbonne University in Paris and New York University. She spent the summer of 2017 at the Berkman Klein Center for Internet and Society working on machine learning bias, artificial intelligence and ethics. Shirley is also the co-founder of collective no:topia, an Italian-German artist collective based in Torino and Berlin, building interactive art installations on technologically embedded futures for the broader public (http://collectivenotopia.com).

Abstract:
Rethinking ML bias - Ensuring inclusive design to mitigate biases

This paper introduces an inclusivity framework to foster a higher degree of participation in the design of Machine Learning (ML) systems. Contrary to the muted and manageable impacts of other technological shifts, exclusion in ML-enabled automated systems will be much larger and more encompassing because of the scale of deployment and the rapid nature of operation. We need to ensure that the already marginalized are not further neglected in our race forward to enrich our lives with technology. I believe that this will be a concrete first step in changing some of the core methods as practiced in ML today to better address the issues of fairness and interpretability.

The overestimation of the autonomy of ML systems (Walsh, 2017) dominates public debates and demands an appropriate disambiguation in order to make room for more immediate and important discussions such as inclusion. Moreover, information asymmetries among users, designers, practitioners and scholars call for compensation through educational and professional training. Our societal ideas and ideals must be implemented in the design of ML systems, as everyone will eventually be affected by their implementation in daily practices. When it comes to decision-making by algorithms, some processes still lack explanation, which prevents the results from being human-interpretable (Lipton, 2017). This is especially concerning in the case of decision-making about humans and their access to credit (Datta, 2017), health (Hart, 2017) or justice (Angwin et al., 2016), for instance.

Furthermore, the diversity of the underlying data is crucial to the success of ML systems in the long run (Hardesty, 2016). As an algorithm is only as good as the data it works with, it relies on large-scale data in order to detect patterns and make predictions (Barocas & Selbst, 2015). These data reflect human history, and therefore reflect biases and prejudices of prior decision makers that reinforce the marginalisation of minorities. The dataset compilation process today only represents a certain group of people, and it can be improved (Howard et al., 2017).

I propose a framework, rooted in social science research, that will ensure compliance with the proposed inclusivity matrix (Table 1) for the academic and applied communities. Despite the socio-economic and historical distinctions between cultures and nations, there are global cross-cutting issues that occur when addressing inclusion in the context of ML systems. Issues regarding transparency, explainability, accountability and liability in ML systems must be taken into account. All stakeholders involved, from design to development, maintenance and end-of-life of these systems, should be considered in this process of attributing responsibility and liability to ensure culturally and contextually sensitive inclusive standards.

Current practice and research agree on abstract moral imperatives for the inclusive design of such systems, but there is very little concrete guidance for practitioners on how to apply these principles to their work. I propose this framework of inclusive design to ensure a high degree of participation in ML systems, giving recommendations for diverse stakeholders and justifying my call for inclusion at the policy level, with the aim of providing tangible guidance for designers and policy makers.

Table & References

References

Walsh, Toby “Elon Musk is wrong. The AI singularity won't kill us all” Wired.co.uk. (2018), accessed 23.04.2018: https://www.wired.co.uk/article/elon-musk-artificial-intelligence-scaremongering

Lipton, Zachary C. "The Doctor Just Won't Accept That!." arXiv:1711.08037 (2017)

Datta, Anupam. "Did Artificial Intelligence Deny You Credit?." The Conversation (2017), accessed 23.04.2018: https://theconversation.com/did-artificial-intelligence-deny-you-credit-73259

Hart, Robert. "When Artificial Intelligence Botches Your Medical Diagnosis, Who's to Blame?." Quartz (2017), accessed 23.04.2018: https://qz.com/989137/when-a-robot-ai-doctor-misdiagnoses-you-whos-to-blame/

Angwin, Julia & Mattu, Surya. "Machine Bias." ProPublica (2016), accessed 23.04.2018: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Hardesty, Larry.“Data diversity. Preserving variety in subsets of unmanageably large data sets should aid machine learning”. MIT News Office (2016), accessed 23.04.2018: https://news.mit.edu/2016/variety-subsets-large-data-sets-machine-learning-1216

Barocas, Solon & Andrew D. Selbst. "Big data's disparate impact." Cal. L. Rev. 104 (2016): 671

Howard, Ayanna, Cha Zhang, & Eric Horvitz. "Addressing bias in machine learning algorithms: A pilot study on emotion recognition for intelligent systems." Advanced Robotics and its Social Impacts (ARSO) IEEE Workshop (2017)

Table 1. Inclusivity-Matrix

(1) Bias prevention
(2) Bias detection
    (a) ML
        (i) data
            (1) labeling
            (2) generating
    (b) non-ML
        (i) transfer knowledge; transfer methods; build models
            (1) Social Scientists
            (2) Legal experts
            (3) Policy Makers
            (4) Media
            (5) Public at large
        (ii) Educational changes in the training of all the above
    (a) ML
        (i) dataset
        (ii) modeling
        (iii) training
        (iv) implementation
    (b) non-ML
        (i) overcome information asymmetries
        (ii) demystify the field of ML
        (iii) develop knowledge & expertise on the nature of ML practices
        (iv) rediscover methods & apply them interdisciplinarily
        (v) “enlighten” the ML community on ethics, etc., at different maturity levels (education; academia; industry)

Felix Ologbonyo

Felix OlogbonyoUniversity of Ilorin

Felix Ologbonyo has expansive experience in rendering legal advisory services to a wide spectrum of clientele covering local and transnational corporations, with an impressive background in dispute resolution. He is listed in the “Nigerian Top Executives in the Law, Legal and Information Services Industry 2015”, a publication highlighting successful executives in the Law, Legal and Information Services Industry. A member of the Panel of Neutrals at the Lagos Multi-door Courthouse, the first and foremost court-connected Mediation Centre in Africa, Felix has settled over 100 cases referred to him by the courts, in which terms of settlement were signed and entered as consent judgements.

He holds LLB (University of Ilorin), BL (Nigerian Law School) all with Magna Cum Laude and Master’s Degree in law (University of Lagos) in addition to obtaining General and Advanced Certificates in Intellectual Property from WIPO Academy, Switzerland. He is currently rounding off a Master’s programme in International Law and Diplomacy at the University of Lagos and has undertaken a PhD programme at the University of Ilorin, Nigeria.

Abstract:
BRAVE NEW WORLD? LEGAL QUESTIONS FOR AN AI DOMINATED SOCIETY
I. Overview
Our world is a kaleidoscopic one. It changes and keeps on changing at a rate never imagined. From the industrial revolution to the development of the computer, the world has continued to witness unprecedented changes in systems and processes. Most recently in human history, Artificial Intelligence (AI) emerged and has continued to topple conventional standards and introduce new perspectives to every field, thereby creating a new world.

AI is the demonstration of human intelligence by computers, which enables them to perform tasks which require natural human intelligence. It is the demonstration of human cognitive capacities by computers. Through the emergence of AI, the world has redefined itself, as machines are being made to grow, learn from experience, adjust to new inputs, and accomplish specific tasks (which previously could only be performed by humans) more effectively and efficiently. As the use of computers drives the life of the 21st-century man, AI introduces a brave new world with novel legal questions.

II. Research Questions
AI brings to the fore the nature of the personality which law ascribes to computers. In this brave new world, the new “artificial man” is poised to redefine the law, yet the law has not taken cognizance of him and his proclivities. The tentacles of the law have not been stretched to encapsulate a definition of computers as possessing legal personality, even though computers now exhibit superhuman intelligence.

AI complicates responsibility and liability. The law naturally presumes man is intelligent and capable of reason. However, AI creates an intelligent “artificial man” with mechanical will and reason who is also a legal nonentity, yet this “man” performs tasks which create legal liabilities and consequences that interrogate the definition of mens rea.

Many computers exhibiting AI are not limited by space and distance, and very soon their operations may not be limited by time. Where they invoke liabilities, determining which law should apply becomes a problem. With the prevalence of data harvesting through AI, the law must redefine the right to privacy or its essence will be permanently lost. AI has also complicated the ownership, recognition and enforcement of intellectual property and the rights arising therefrom, and has therefore raised novel questions in this regard.

III. Methodology
This study will review the literature available on this subject and generate original thoughts to extensively address the various novel legal questions facing an AI-dominated society.

IV. Expected Results
This study is expected to discuss the above questions and many more, provide theoretical basis for addressing them, and advance conceptual and jurisprudential arguments to extend the frontiers of law.

References
Boden, Margaret A., Artificial Intelligence and Natural Man, Basic Books, New York, 1997
Cole G.S.: Tort Liability for Artificial Intelligence and Expert Systems, 10 Computer L.J. 127 (1990).
Greenblatt N.A.: Self-Driving Cars and the Law. IEEE Spectrum, p. 42 (16 February 2016)

Dr. Maayan Perel

Research Topic (with professor Niva Elkin-Koren) 
The research focuses on the theoretical aspects of enforcement by algorithm, as they are reflected in copyright law. Computer programs are used today both to safeguard and to enforce the rights of copyright holders. In the digital realm, different algorithmic mechanisms have been developed to execute the job that was once reserved solely for humans: they filter infringing material (e.g., the use of Content ID) and automatically remove allegedly infringing material (Notice & Takedown). This research will examine the theoretical aspects of this type of enforcement, focusing on how it interacts with the concepts of trust, private autonomy and transparency.

Education 
S.J.D. University of Pennsylvania Law School (2014)
LL.M. Benjamin N. Cardozo School of Law (2009)
LL.B., Haifa University, Faculty of Law (2008)
B.A., Haifa University, Department of Economics (2007)

 
Publications

Perel & Elkin-Koren, Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement, Florida Law Review __ (forthcoming 2017).

Perel & Elkin-Koren, Accountability in Algorithmic Copyright Enforcement, 19 Stan. Tech. L. Rev. __ (forthcoming 2016).

Elkin-Koren & Perel, Understanding Algorithmic Governance, in Oxford Handbook of International Economic Governance and Market Regulation (Eric Brousseau, Jean-Michel Glachant, & Jérôme Sgard Eds.) (Oxford University Press, forthcoming 2017).

Maayan Perel, From Non-Practicing Patents (NPEs) to Non-Practicing Patents (NPPs): A Proposal for a Patent Working Requirement, 83 U. Cin. L. Rev. 747 (2015).

Maayan Perel, An Ex Ante Theory of Patent Valuation: Transforming Patent Quality into Patent Value, 14 J. High Tech. L. 148 (2014).

Maayan Perel, Reviving the Gatekeeping Function: Optimizing the Exclusion Potential of Subject Matter Eligibility, 23 Alb. L.J. Sci. & Tech. 237 (2013).

Maayan Filmar, A Critique of In re Bilski, 20 DePaul J. Art Tech. & Intell. Prop. L. 11 (2009).
 
Active presentations

Accountability in Algorithmic Enforcement: Lessons from Copyright Enforcement by Online Intermediaries, International Conference: The Many Faces of Innovation, Bar-Ilan University, Faculty of Law & Ono Academic College, Jerusalem, Israel (January 5-6, 2016) (in English)
 
Accountability in Algorithmic Enforcement: Lessons from Copyright Enforcement by Online Intermediaries, Oxford Handbook on IP, Tel Aviv University, Tel Aviv, Israel (December 7-8, 2015) (in English)
 
Can We Really Count on Algorithmic Implementations of Notice and Takedown? An Empirical Investigation of Algorithmic Copyright Enforcement, 2015 Conference on Empirical Research on Copyright Issues (CERCI), Chicago-Kent College of Law, Chicago, IL (November 20, 2015) 
 
Accountability in Algorithmic Enforcement: Lessons from Copyright Enforcement by Online Intermediaries, The 3rd Annual Young TAU Workshop for Young Scholars in Law, Tel Aviv University, Tel Aviv, Israel (October 26-27, 2015) (in English)
 
Challenging The Black Box: On the Accountability of Algorithmic Law Enforcement, Openness and Intellectual Property, University of Pennsylvania, Philadelphia, PA (July 22-24, 2015)
 
Are Algorithms Trustworthy Law Enforcers? Lessons From Algorithmic Copyright Enforcement, Trust and Empirical Evidence in Law Making and Legal Process, University of Oxford, Oxford, UK (June 19-20, 2015) 

UNH 4th Annual IP Scholars' Roundtable, From NPEs to NPPs: A Proposal for a Patent Working Requirement, University of New Hampshire, NH, November 2014

From NPEs to NPPs: A Proposal for a Patent Working Requirement, Departmental Seminar, Academic Center of Law and Business, Ramat Gan, November 2014
 
From NPEs to NPPs: A Proposal for a Patent Working Requirement, Departmental Seminar, Carmel Academic Center, November 2014
 
From NPEs to NPPs: A Proposal for a Patent Working Requirement, Annual Intellectual Property Law Workshop, College of Management, October 2014

From NPEs to NPPs: A Proposal for a Patent Working Requirement, Law, Society and Technology Seminar, University of Haifa, June 2014

An Ex Ante Method of Patent Valuation: Transforming Patent Quality into Patent Value, Law in a Transnational World, Tel Aviv University, August 2013

An Ex Ante Method of Patent Valuation: Transforming Patent Quality into Patent Value, Annual Intellectual Property Law Workshop, Tel Aviv University, October 2013

IPSC, An Ex Ante Method of Patent Valuation: Transforming Patent Quality into Patent Value, Benjamin N. Cardozo School of Law, New York, August 2013

            
Works in Progress

Blackbox Tinkering for Studying Fair Use (Perel & Elkin-Koren) Chicago-Kent Center for Empirical Studies of IP (CESIP)

Experience

Netanya Academic College, starting Fall 2016 (permanent faculty)
Carmel Academic Center, Summer 2016 (adjunct lecturer)
Foley Shechter LLP, 2012-2016 (Associate)
Zefat Academic College, 2015-2016 (adjunct lecturer)

Fields of Interest
Intellectual Property, Patent Law, Trademark Law, Copyright Law


Yafit Lev-Aretz

Yafit Lev AretzThe City University of New York

Yafit Lev-Aretz is an Assistant Professor of Law at Zicklin School of Business, Baruch College, City University of New York. Professor Lev-Aretz is also a research fellow at the TOW center at Columbia Journalism School. Professor Lev-Aretz studies self-regulatory regimes set by private entities and the legal vacuum they create. She is especially interested in the growing use of algorithmic decision-making, intrusive means of news dissemination, choice architecture in the age of big data, and the ethical challenges posed by machine learning and artificially intelligent systems.

Her research also highlights the legal treatment of beneficial uses of data, such as data philanthropy and the data for good movement, striving to strike a delicate balance between solid privacy protection and competing values. Previously, Professor Lev-Aretz was a research fellow at the Information Law Institute at NYU, and an intellectual property fellow at the Kernochan Center for the Law, Media, and the Arts at Columbia University.

Abstract:
The term “data philanthropy” has been used to describe the sharing of private sector data for socially beneficial purposes, such as academic research and humanitarian aid. The recent controversy over an academic researcher’s alleged misuse of Facebook users’ data on behalf of Cambridge Analytica has brought data philanthropy into the spotlight of public debate. Calls for data ethics and platform transparency have highlighted the urgent need for standard setting and democratic oversight in the use of corporate data for public ends. Data philanthropy has also received considerable scholarly attention in various academic disciplines but has, until now, been virtually overlooked by the legal literature. This Article explains and starts filling in the resulting research gap by providing the first legal accounting of data philanthropy. Following a detailed description of current developments and scholarly thinking, this Article homes in on a normative assessment of privacy risks that are often cited as a conceptual and practical barrier to data philanthropy.

The Article refines the scope of data philanthropy's informational risks and proposes a framework for mitigating some of these risks through the Fair Information Practice Principles (FIPs). Specifically, the purpose specification and use limitation principles, which limit data collection to ex-ante specified purposes, are discordant with the unanticipated, ex-post quality of data philanthropy. This Article suggests adopting a new “data philanthropy exception,” which accounts for the existence and nature of the privacy risks, the time frame for action, the social risks of using the data, and the allowed retention time following the reuse. This Article concludes that the data philanthropy exception reinforces the values at the heart of the FIPs, provides guidance in a field that currently operates in a legal vacuum, and introduces the possibility of responsible sharing by and to smaller market participants.

Karni Chagal

Karni ChagalThe Center for Cyber Law & Policy

Karni is a Ph.D. candidate at the University of Haifa Faculty of Law, writing on "The Reasonable Algorithm" under the supervision of Professor Niva Elkin-Koren.

Karni is an attorney admitted to the NY, CA & Israel Bar, and holds an LL.M. degree from Stanford Law School. She is a founding partner at Lexidale, a consultancy firm specializing in comparative research pertaining to law & regulation.

Abstract:
"I am an algorithm, not a product": Why Product Liability is Not Well-Suited for Certain Types of Algorithms
Over the years, mankind has come to rely more and more on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms' self-learning abilities now enable us to entrust machines with professional decisions, for instance in the fields of law, medicine and accounting.

Advanced and safe as such machines may be, however, they are still bound to occasionally cause damage. From a tort law perspective, machines have generally been treated as "products" or "tools" in the hands of their manufacturers or users. Whenever they caused damage, therefore, they were subject to legal frameworks such as product liability or negligence of the humans involved. A growing number of scholars and entities, however, now acknowledge that the nature of certain "sophisticated" or "autonomous" decision-makers requires different treatment than that of their "traditional" predecessors. In other words, while technology is always advancing, many believe that there is something 'special' in today's self-learning algorithms that separates them from the general group of "machines" and warrants a different tort legal framework that would apply to them.

Sophisticated machines that outperform humans have existed for a long time. What is it that separates "traditional" algorithms and machines, which for decades have been subject to the traditional product liability framework, from what I would call "thinking algorithms", which seem to warrant their own custom-made treatment? Why have autopilots, for example, traditionally been treated as "products", while autonomous vehicles are suddenly perceived as more "human-like" systems that require different treatment? Where is the line between the two drawn?

Different scholars have touched upon this question. Some have referred in general to whether the underlying system fully replaces humans or not; others to whether the system may outperform humans or not; while others have focused on the level of autonomy the system possesses. Said distinctions, however, are only partial and have not been discussed in depth. A Roomba vacuum robot, for example, does replace a person in the act of cleaning; it (arguably) does so in a manner that outperforms the human cleaner, and it possesses sufficient levels of autonomy to "decide" on its own which direction to turn to continue cleaning. Is the Roomba a "thinking algorithm" that requires a special legal framework? If the answer is negative, then what is it that does distinguish the different types of algorithms?

The article would offer a systematic analysis of the parameters that differentiate traditional systems from those that deserve separate legal treatment. It would, in other words, provide rules of thumb that could be applied in the various fields taken over by algorithmic decision makers in order to identify the line between ordinary algorithms and their "thinking" counterparts that warrant a different legal perspective. It would also analyze why current product liability rules are not well-suited for the latter type of algorithms.

Kamil Mamak

Kamil MamakJagiellonian University

Kamil Mamak, Ph.D. in Law, Ph.D. candidate in philosophy. He is the author of the blog criminalfuture.com, which covers topics at the intersection of law, philosophy, and technology. He has authored several dozen articles and book chapters in criminal law and philosophy. His book "Future of Criminal Law" (in Polish), published by Wolters Kluwer Poland, has sold out twice. He is a member of the board of Cracow's Institute of Criminal Law. For the last five years, he has been working at the Jagiellonian University Law Clinic.

Abstract:
Ethical dilemmas and Self-driving Cars: perspective of Criminal Law
The rise of autonomous robots, especially cars, brings us to a point at which we feel obliged to answer questions connected with ethical and legal dilemmas. There is an ongoing discussion on how to program crash algorithms in autonomous or self-driving cars. Many scientists from a wide range of disciplines are trying to shed light on the problem, and arguments have been put forward by philosophers, ethicists and lawyers. I think there is a common belief that we should prepare ethical solutions which will be incorporated into car algorithms.

The recent fatal crash involving Uber's autonomous car clearly showed the public that the problem is real. The online survey Moral Machine by MIT is collecting data about this issue. It presents scenarios in which we have to decide how the car should behave, and sometimes who should die. The far-reaching aim of the survey, and of the discussion, is to prepare useful guidance. One of the problems I covered in my Ph.D. thesis is connected with this issue. I looked at it from the perspective of criminal law and found that criminal law sets some impassable boundaries for imposing ethical solutions in crash algorithms. One specific limitation is that we cannot differentiate between lives based on their "quality". For example, criminal law forbids provisions under which a young child is deemed more valuable to society than a middle-aged man.

Another issue is that in criminal law there is no possibility of counting the number of people who should die: life, from the perspective of the law, is a value that cannot be quantified. In that sense, there is a problem with the utilitarian approach, which seems to be widely accepted as a solution to this kind of dilemma. Criminal law also provides a hierarchy of values which should be used in algorithms: for example, life ranks above health, and health above property. A further issue is that, even if we finally decide legally how algorithms should behave, from the perspective of criminal law we will be forced to accept the self-preservation instinct. This means that it will never be forbidden to hack one's own car and reprogram the crash algorithms so that we, as owners, are always privileged in a crash.

This last thought could be unacceptable to the public, because it means that "the more expensive car you buy, the better protection you have". My considerations are based on the Polish doctrine of criminal law, but the principles are broadly similar among civil law countries.

Aviva de Groot

Aviva de GrootTilburg Institute for Law, Technology, and Society (TILT).

Aviva de Groot aims to identify explanatory benchmarks and modalities for providing rights relevant understanding of data driven technologies and their applications to laymen users. In particular, she focusses on automated decision making processes. With privacy at the core, her interests more broadly concern humans and technology, their mutual shaping and how this affects our understanding of human rights protections. She has professional and research experience in fields where technology supports human interaction and where humans interact with machines.

In dumber times, the transformation of citizens' reality into their administrative truth already proved to be a tricky process. Years of professional legal experience in a specialized field of administrative law, dealing with phantom vehicle registrations, taught her how the complex interplay of law, policy and technology should discourage a narrow focus on any one element. Her Information Law master's thesis research project dealt with privacy concerns arising from the use of social robots in health care settings, focusing in particular on levels of transparency and understanding in human-robot interaction.

Publications:
"Nothing comes between my robot and me" - co-author, Conference proceedings CPDP 2018, Hart Publishing (forthcoming).
"The Handbook of Privacy Studies" - co-editor, Amsterdam University Press, October 2018.

Abstract:
To explain automated decisions, we need human rationales for explanations.
This paper responds to the debate on seeking explanations of decisions made with, or supported by, artificial intelligence. Europe's General Data Protection Regulation has fueled many of these discussions, as it directly seeks to regulate the practice. The regulation demands transparent processing of personal data and, arguably, explanations of individual decisions made with 'solely' automated means. In the words of the Article 29 Working Party guidelines on automated decision making and profiling: "subjects will only be able to challenge a decision or express their view if they fully understand how it has been made and on what basis."17

The discussions however are not confined to this legal instrument, nor its territory. Problematisation of 'decision making 2.0' lies in the perceived decline of meaningful, intelligent human interaction with these systems. The more autonomous a system 'learns,' the more complex it becomes for humans to evaluate the output models and underlying correlations in terms of their meaning, merits, and usability.18 The computational logic increasingly departs from that of human reasoning, and depending on what form of transparency we demand, explanations of decisions may produce answers that we are not able to follow logically, intuitively, or both.19 

However, taking these technologies, their affordances, and their perceived immunities for explainability as the starting point of enquiries, while noting them as "the agential character[s] in the drama"20 of perceived data driven harms, gets legal and social scholars caught up in the work and grammar of system designers. Doubts about the usefulness of transparency easily follow. But instead of deploring inscrutable machines, it seems more productive to define what we demand to understand about machines' analyses. And, importantly, how to serve decision subjects' need to understand these explanations.

In different legal fields, we have rules that protect our human rights and freedoms from opaque (human) decision making. Analyzing our rationales for 'giving reasons' and building a broader theoretical framework can support our demands on the field of system design and operation.21 When are we legally satisfied with answers? Our mental constraints for processing information mean we sometimes 'satisfice',22 to the point where this leads to legal fictions. How do we account for that? Also, information is not detachable from communication. A thorough exploration of the values of explanations takes into account the "epistemic and ethical norms" for adequate communication.23
As a first step towards building such a framework, this paper analyses general legal theory, human rights law and scholarship for the values of explanations per se. As various scholars note, giving reasons by itself shows respect for a decision subject's dignity, autonomy, and personhood,24 and in doing so supports their own decisions and life choices. Why is that, and when, and how are these values considered to be served? These functions are well established in the privacy literature and in rulings of the European Court of Human Rights, which in turn inform the EU's data protection regime. This analysis will support and add to the later stages of the project, when specific legal domains are researched.

 

17 Article 29 Data Protection Working Party, “Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679 WP 251rev.1.”
18 Rich Caruana et al., “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission,” in KDD 2015 (Sydney, NSW, Australia: ACM, 2015), https://www.microsoft.com/en-us/research/publication/intelligible-models-healthcare-predicting-pneumonia-risk-hospital-30-day-readmission.
19 Andrew D. Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines (Draft),” Fordham Law Review, Forthcoming, February 19, 2018, https://papers.ssrn.com/abstract=3126971 p.13, 46.
20 Daniel Neyland and Norma Möllers, “Algorithmic IF … THEN Rules and the Conditions and Consequences of Power,” Information, Communication & Society 20, no. 1 (January 2, 2017): 45–62, https://doi.org/10.1080/1369118X.2016.1156141.
21 The recent paper of Doshi-Velez and Kortz acknowledges the value of such an analysis, but rushes through the enterprise to how the findings translate to the digital realm. Finale Doshi-Velez et al., “Accountability of AI Under the Law: The Role of Explanation” (Berkman Center Research Publication, forthcoming, November 2017).
22 Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, May 30, 2015), https://papers.ssrn.com/abstract=2609777.
23 Onora O’Neill, “Transparency and the Ethics of Communication,” in Transparency: The Key to Better Governance?, ed. Christopher Hood and David Heald (Oxford, UK: Oxford University Press, 2011).
24 Katherine Strandburg, “Decision-Making, Machine Learning and the Value of Explanation,” December 16, 2016, http://www.dsi.unive.it/HUML2016/assets/Slides/Talk%202.pdf; Antoinette Rouvroy, “‘Of Data and Men’. Fundamental Rights and Freedoms in a World of Big Data.,” Council of Europe, Directorate General of Human Rights and Rule of Law. T-PD-BUR(2015)09REV (2016); Lilian Mitrou, “The General Data Protection Regulation: A Law for the Digital Age?,” in EU Internet Law: Regulation and Enforcement, ed. Tatiana-Eleni Synodinou et al. (Springer International Publishing, 2017), 19–56. Paul Schwartz, “Data Processing and Government Administration: The Failure of the American Legal Response to the Computer,” Hastings L.J. 43 (1991): 1321; Andrew D. Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines (Draft),” Fordham Law Review, Forthcoming, February 19, 2018, https://papers.ssrn.com/abstract=3126971.

 

 

Prof. Tal Zarsky

Tal ZarskyThe Center for Cyber Law & Policy

Prof. Tal Zarsky is the Vice Dean of the University of Haifa's Faculty of Law. His research focuses on Information Privacy, Cyber-Security, Internet Policy, Social Networks, Telecommunications Law, Online Commerce, Reputation and Trust. He has published numerous articles and book chapters in the U.S., Europe and Israel. His work is often cited in a variety of contexts related to law in the digital age.

Among others, he participated in the Data Mining without Discrimination project, funded by the Dutch Research Council (NWO), as well as other national and international research projects. He has advised various Israeli regulators, legislators and commercial entities on related matters. He has served on a variety of advisory boards and is a frequent evaluator of articles and research grants for various international foundations.

Prof. Zarsky was a Fellow at the Information Society Project at Yale Law School and a Global Hauser Fellow at NYU Law School. He completed his doctoral dissertation, which focused on Data Mining in the Internet Society, at Columbia University School of Law. He earned a joint B.A. degree (law and psychology) at the Hebrew University with high honors and his master's degree (in law) from Columbia University.

Wolfgang Schulz

Wolfgang SchulzHans-Bredow-Institut for Media Research
Alexander von Humboldt Institute for Internet and Society
Professor of Media Law and Public Law including its theoretical basis at the Faculty of Law of the University of Hamburg, Director of the Hans-Bredow-Institut for Media Research at the University of Hamburg, Director of the Alexander von Humboldt Institute for Internet and Society in Berlin. Research areas: Freedom of Speech, Media Regulation, Law and Technology, Internet Governance.

Eldar Haber

Eldar HaberThe Center for Cyber Law & Policy

Dr. Eldar Haber is an Associate Professor (Senior Lecturer) at the Faculty of Law, Haifa University and a member of the Haifa Center for Law & Technology (HCLT) and the Center for Cyber Law & Policy (CCLP). He served as a fellow and a faculty associate at the Berkman-Klein Center for Internet & Society at Harvard University (2015-2018).

His main research interests consist of various facets of law and technology including cyber law, intellectual property law (focusing mainly on copyright), privacy, civil rights and liberties, and criminal law. His works were published in various flagship law reviews worldwide, including top-specialized law and technology journals of U.S. universities such as Harvard, Yale and Stanford, and he is the author of Criminal Copyright (Cambridge University Press, 2018).

Over the years, he has won several academic awards such as the IAPP best privacy paper award in the EU (2017). His works are frequently presented in various workshops and conferences around the globe, and were cited in academic papers, governmental reports, the media, and U.S. Federal courts.

Ronald Leenes

Ronald Leenes

Tilburg Institute for Law, Technology, and Society (Tilburg University)

Prof.dr. Ronald Leenes (1964) is a full professor of regulation by technology at the Tilburg Institute for Law, Technology, and Society (Tilburg University) and director of the same institute. TILT currently has about 50 researchers.

His primary research interests are techno-regulation, conceptual issues with respect to privacy, data protection in practice, data analytics, accountability and transparency, regulatory failure, robotics and human enhancement. He has been deeply involved in seven EU projects (FP6, FP7, H2020), both as researcher, work package lead and member of the management team.

He was principal investigator and TILT project coordinator in the EU FP7 Robolaw project studying ethical and legal issues of robotics, human enhancement and neurosciences. He has been involved as project lead and researcher in many projects for the European Commission and Dutch ministries.

Courtney Bowman

Courtney Bowman

Palantir Technologies

Courtney Bowman co-directs Palantir’s Privacy and Civil Liberties Engineering Team.
As lead for Palantir Technologies’ in-house Privacy and Civil Liberties Team, Courtney Bowman works extensively with local government (including law enforcement, criminal justice, health and social services) and philanthropic partners to develop technology-driven solutions to information sharing and inter-agency cooperation in a manner that respects applicable privacy, security, and data integrity requirements.

Additionally, Bowman and his team actively engage with members of Palantir’s esteemed advisory council and the privacy advocacy world at large to explore privacy and data protection challenges on the horizon and ensure that the broader concerns from the community are addressed in the design and implementation of Palantir’s software platforms.

Prior to Palantir, Courtney earned degrees in Physics and Philosophy at Stanford University and worked as a quantitative and economic analyst at Google.

Dalit Ken Dror Feldman

Dalit Ken Dror Feldman

The Center for Cyber Law & Policy


Dalit Ken-Dror Feldman holds an LL.B. (Summa Cum Laude) and a B.Sc. in Computer Science (Magna Cum Laude) from Haifa University (2001); an LL.M. in Commercial Law (Magna Cum Laude) from Tel-Aviv University (2008); and a Ph.D. in Law from Haifa University (2018). Her research interests focus on Law & Technology and Legal Ethics, including the regulation of technologies such as Artificial Intelligence and Unmanned Aerial Vehicles, Cyber Security and Cyber Terrorism, Intellectual Property and Open Access, Software Law and Information Law.

Dalit Ken-Dror Feldman is the legal supervisor of the Law, Technology and Cyber Clinic at the Haifa Center for Law and Technology, Faculty of Law, University of Haifa and a Lecturer at Zefat Academic College.

 

Abstract:
Autonomous Vehicles – Criminal Liability Challenges
Autonomous vehicles differ in their degree of automation. In 2014, SAE (Society of Automotive Engineers) International suggested six levels of driving automation. These levels are defined by the degree of dependence on human driving: for example, level 0 refers to no automation at all, whereas level 1 is referred to as "driver assistance". In levels 0-2 a human driver monitors the driving environment, whereas in levels 3-5 an automated driving system monitors the driving environment. When we talk about autonomous vehicles we usually refer to level 3 and above.
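For readers unfamiliar with the taxonomy, the following minimal sketch encodes the six levels referenced above; the level names follow the commonly cited SAE J3016 terminology, and the helper function merely restates the abstract's working definition that "autonomous" usually means level 3 and above.

```python
# Sketch of the six SAE J3016 (2014) driving-automation levels referenced above.
# Level names follow the commonly cited SAE terminology; illustrative only.
SAE_LEVELS = {
    0: ("No Automation", "human driver monitors the driving environment"),
    1: ("Driver Assistance", "human driver monitors the driving environment"),
    2: ("Partial Automation", "human driver monitors the driving environment"),
    3: ("Conditional Automation", "automated driving system monitors the driving environment"),
    4: ("High Automation", "automated driving system monitors the driving environment"),
    5: ("Full Automation", "automated driving system monitors the driving environment"),
}

def is_autonomous(level: int) -> bool:
    """The abstract's working definition: 'autonomous vehicle' usually means level 3 and above."""
    return level >= 3

print(SAE_LEVELS[3][0], "- autonomous:", is_autonomous(3))
```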

Autonomous vehicles are based on artificial intelligence and machine learning. Therefore we assume that the more hours a car accumulates on real roads, the better it performs. During its training process, as well as in real life, autonomous vehicles might be involved in car accidents, as indeed has happened already.

On January 20, 2016, the driver of a Tesla car was killed in China when the car crashed into the back of a road-sweeping truck. On May 7, 2016, the driver of a Tesla car was killed in an accident while the car was in self-driving mode. In March 2018 we witnessed another deadly Tesla car accident, this time in California. During 2018, however, we also witnessed a new kind of autonomous-vehicle accident: the victim was a pedestrian crossing the road in Arizona, and an Uber car was involved in that accident.

These car accidents illustrate well why we should find a legal solution to criminal liability where a criminal offense is committed by an autonomous vehicle. In order to impose criminal liability, the existence of two elements is required: (1) the existence of criminal conduct; (2) general knowledge or intent with respect to the behavioral element (except in strict liability, for example).

In addition, with regard to the first element, proof of a certain result is sometimes also required. In the research, I examined several suggested models of criminal liability concerning artificial intelligence and four possible categories of criminal offenses in which an automated vehicle might be involved: (1) offenses of strict liability; (2) driving under the influence of alcohol; (3) negligent driving; (4) vehicular homicide. As a test case I referred to Israeli criminal law. The research was conducted with the help of the Law, Technology and Cyber Clinic, University of Haifa, Faculty of Law, and led by Dr. Dalit Ken-Dror Feldman.

 


See Automated Driving Levels of Driving Automation are Defined in New SAE International Standard J3016 available at: https://web.archive.org/web/20170104162849/https://www.sae.org/misc/pdfs/automated_driving.pdf.
Josh Horwitz & Heather Timmons, The Scary Similarities between Tesla's (TSLA) Deadly Autopilot Crashes, QUARTZ (September 20, 2016) available at: https://qz.com/783009/the-scary-similarities-between-teslas-tsla-deadly-autopilot-crashes.
Bill Vlasic & Neal E.Boudette, Self-Driving Tesla Was Involved in Fatal Crash, U.S. Says, THE NEW YORK TIMES (June 30, 2016) available at: https://www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html
Jason Green, Tesla: Autopilot was on During Deadly Mountain View Crash, THE MERCURY NEWS (March 30, 2018) available at: https://www.mercurynews.com/2018/03/30/tesla-autopilot-was-on-during-deadly-mountain-view-crash.
Alex Lubben, Self-Driving Uber Killed a Pedestrian as Human Safety Driver Watched, VICE NEWS (March 19, 2018) available at: https://news.vice.com/en_us/article/kzxq3y/self-driving-uber-killed-a-pedestrian-as-human-safety-driver-watched.

 

 

 

 

Alexandra Giannopoulou

Alexandra Giannopoulou

IVir, University of Amsterdam

Alexandra Giannopoulou is a postdoctoral researcher at the Blockchain and Society Policy Lab at the Institute for Information Law (IViR), University of Amsterdam. She is an associate researcher at the Institute for Communication Sciences (ISCC) in Paris, within the research group Information and Commons Governance and she has also worked as a research fellow at Humboldt Institute for Internet and Society (HIIG) in Berlin.

She holds a PhD in law from the Center for Legal and Economic Studies of Multimedia (CEJEM) at the University of Paris II Panthéon-Assas. Her thesis, entitled "The Creative Commons licenses" and supervised by Professor Jérôme Huet, was defended in December 2016.

Abstract:
Algorithmic systems: the consent is in the details?
The exponential use of algorithmic processing of data has put applications of advances in artificial intelligence at the forefront of the European Commission’s agenda for the digital single market. The broad vision of algorithmic technology applications is to reengineer current power structures and to create fair, transparent decision-making systems whose impact will improve society as a whole. The development of algorithmic systems is largely data-driven. Consequently, with this vision come regulatory and legal challenges. As machine learning processes and the use of artificial intelligence (AI) continue to evolve, legal and computer science scholars express concerns about the consequences of the creation of data-intensive information societies.

Within the legal field, scholars have explored questions related to data ownership, privacy, and copyright, and their role in algorithmic technologies. However, the challenges posed by efforts to implement appropriate consent mechanisms in the ubiquitous use of data in everyday transactions and decision-making processes affecting individuals have yet to be resolved.

Despite the extensive research on the role of consent in data protection and privacy in general, the particular nature of informed (and explicit) consent in the context of mass data processing and of “black box” algorithms has not been fully explored. Recently, the urgency of reevaluating the conditions of consent in AI has been underlined by the 2017 report of the AI Now Institute at New York University. According to the Institute’s researchers, “adaptive algorithms are changing constantly, such that even the designers who created them cannot fully explain the results they generate (…) We must ask how ‘notice and consent’ is possible or what it would mean to have ‘access to your data’ or to ‘control your data’ when so much is unknown or in flux.” Furthermore, the urgency of reviewing the current consent rules has also been highlighted in the Article 29 Working Party’s Action Plan for 2017.

Consent constitutes one of the cornerstones of data protection regulation, functioning as an expression of the autonomy of individuals and of privacy self-management. The challenge emerging from algorithmic technologies is the growing dissonance between notice-and-consent regulation and the production, collection, and processing of data. While existing data protection rules and the General Data Protection Regulation (GDPR) account for human consent, the diversity of personal data leaves ample room for doubt. For example, it remains unclear whether data derived from already existing input data can be qualified as personal, and what consent mechanisms are required in that case. What is more, consent to the use of input data in automated decision-making appears to become illusory when the functioning of an algorithmic model remains elusive to the affected individuals.

The presentation will explore the challenges brought forward by algorithmic systems to the traditional approach to consent. Addressing this particularly pressing area of safeguarding privacy in the algorithmic era is interlinked with the lack of foresight in the mechanisms of machine learning and AI. The evolution of digital consent mechanisms towards automated consent will be explored taking into consideration current policies surrounding the legibility requirement, the application of the right to an explanation, as well as granular solutions that go beyond privacy self-management and include accountability of the state and of technology producers, in order to reshape the consent requirement in the current technological discourse.

Bibliography
A Campolo e.a. (2017), AI Now Institute 2017 Report. Available online: https://ainowinstitute.org Last accessed 31 May 2018
L Edwards and M Veale (2017), “Enslaving the algorithm: from a ‘right to an explanation’ to a ‘right to better decisions’?”, Research paper. Available online: https://ssrn.com/abstract=3052831 Last accessed 31 May 2018
P Hacker (2017), “Personal data, exploitative contracts, and algorithmic fairness: autonomous vehicles meet the Internet of things”, International Data Privacy Law doi:10.1093/idpl/ipx014
B Hugenholtz (2017), “Data Property: Unwelcome Guest in the House of IP”, Better regulation for copyright, pp 65-77. Forthcoming in Kritika. Essays on Intellectual Property, Vol. III. Available online: https://juliareda.eu/wp-content/uploads/2017/09/2017-09-06_Better-Regulation-for-Copyright-Academics-meet-Policy-Makers_Proceedings.pdf Last accessed 31 May 2018
D Kamarinou, C Millard and J Singh (2016), “Machine learning with personal data”, Queen Mary School of Law Legal Studies Research Paper No 247/2016. Available online: https://ssrn.com/abstract=2865811 Last accessed 31 May 2018
A Levendowski (2017), “How copyright law can fix artificial intelligence’s implicit bias problem”, Washington Law Review, forthcoming. Available online: https://ssrn.com/abstract=3024938 Last accessed 25 Oct 2017
G Malgieri and B Custers (2017), “Pricing privacy – the right to know the value of your personal data”, Computer Law and Security Review: The International Journal of Technology Law and Practice, doi: 10.1016/j.clsr.2017.08.006
M Perel and N Elkin-Koren (2017), “Black box tinkering: Beyond disclosure in algorithmic enforcement”, Florida Law Review, 69:181-221
A Ramalho (2017), “Data producer’s right: Powers, Perils and Pitfalls”, Better regulation for copyright, pp 51-58. Available online https://juliareda.eu/wp-content/uploads/2017/09/2017-09-06_Better-Regulation-for-Copyright-Academics-meet-Policy-Makers_Proceedings.pdf
Last accessed 31 May 2018
E Sedenberg and A L Hoffmann (2016), “Recovering the History of Informed Consent for Data Science and Internet Industry Research Ethics”, Research paper. Available online: https://ssrn.com/abstract=2837585 Last accessed 31 May 2018
T Wan Kim and B Routledge (2017), “Algorithmic transparency, a right to explanation and trust”, Working Paper, Available online: https://www.business-ethics.net Last accessed 31 May 2018
M L Jones, E Edenberg and E Kaufman (2018), “AI and the Ethics of Automating Consent”, IEEE Security & Privacy, forthcoming

Amélie Heldt

Amélie Heldt

Hans-Bredow-Institut

Amélie Heldt studied French and German Law at the universities of Paris Nanterre and Potsdam.

After passing the German first state examination, she took a supplementary training programme in Design Thinking at the Hasso-Plattner-Institut in Potsdam and worked in the legal department of Universal Music. She completed her two-year clerkship at the Superior Court of Justice in Berlin, working inter alia for the Berlin Opera Foundation, in the media and copyright practice of the law firm Raue LLP, and for the GIZ in Cambodia. Since May 2017, she has been a junior researcher within the research programme “Transformation of Public Communication” at the Hans-Bredow-Institut and a PhD candidate with Prof. Wolfgang Schulz. She is an associated researcher with the Humboldt Institute for Internet and Society in Berlin.

Abstract:
“Upload-filters: bypassing classical concepts of censorship?”
Keywords: freedom of expression, censorship, democratic legitimation, upload-filters, prior restraints

Protecting human rights from automated decision making might not be limited to the relationship between intermediaries and their users. In the European human rights framework, fundamental rights are in principle only applicable vertically, i.e. between the State and the citizen. Where does that leave the right to freedom of expression when user-generated content is deleted by intermediaries due to an agreement with a public authority? We must address this question in light of the use of AI to moderate online speech and its (lacking) regulatory framework.

In 2017 there were important changes regarding the use of upload-filters in the EU in order to prevent the spread of terrorist and extremist content online. Via a code of conduct in the context of the EU Internet Forum, four companies (Facebook, Twitter, YouTube and Microsoft) have committed themselves to hash content and share it in a common “Database of Hashes”.
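To illustrate the mechanism described above, the following is only a hedged sketch of hash-based upload filtering; it is not the actual EU Internet Forum system, which reportedly relies on proprietary and typically perceptual hashing rather than the exact-match cryptographic hash used here, and all names and data are illustrative.

```python
# Hedged sketch of a hash-matching upload filter checked before publication.
# The real "Database of Hashes" relies on proprietary (typically perceptual)
# hashing; SHA-256 exact matching is used here only to keep the example simple.
import hashlib

SHARED_HASH_DATABASE = set()  # hypothetical shared database of flagged-content hashes

def content_hash(data: bytes) -> str:
    """Hex digest identifying the uploaded content."""
    return hashlib.sha256(data).hexdigest()

def flag_content(data: bytes) -> None:
    """A participating platform adds flagged content to the shared database."""
    SHARED_HASH_DATABASE.add(content_hash(data))

def allow_upload(data: bytes) -> bool:
    """Screen every upload before publication; block it if its hash is known."""
    return content_hash(data) not in SHARED_HASH_DATABASE

# Once one platform flags a file, every platform screening uploads against the
# shared database will block further uploads of the identical file.
flag_content(b"example flagged file")
assert allow_upload(b"example flagged file") is False
assert allow_upload(b"different, unflagged file") is True
```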

Considering that upload-filters 1) operate before the user-generated content is published (unlike re-upload-filters) and 2) screen all content regardless of suspicions, the potential risks for free speech are very high. It is therefore necessary to analyze whether this type of action can be subsumed under the notion of censorship and whether its categorization in public or private censorship is still appropriate.

One could argue that if the EU Commission (or another public stakeholder) pushes private IT companies to use AI to filter and delete user-generated content, the censorship could indirectly be state-driven. The detour via a code of conduct makes it possible to delete legal content, although censorship is clearly forbidden by Art. 5 of the German Basic Law (in contrast to Art. 10 ECHR, under which prior restraints are not forbidden per se). In German constitutional law the interpretation of censorship is limited to state action, and scholars are not inclined to widen the scope of application. However, by using soft law instruments to engage intermediaries to use upload-filters, the State could potentially bypass the prohibition of censorship.

How does the constitutional protection of fundamental rights interact with the use of upload-filters on digital platforms when soft law instruments are involved? Is this an “unholy alliance” (Birnhack/Elkin-Koren, 2003) or a necessary cooperation to govern the digital environment? How up-to-date is our definition of state action when it comes to online communication, and how relevant is it that the four companies involved are the biggest players on the market? This brings us to the fundamental question of whether we need a clear legal basis for the use of intelligent filters or other types of AI in digital communication.

Rebecca Scharlach


Film University Babelsberg.

Rebecca Scharlach is 25 years old and a so-called digital native. She completed her Bachelor's degree in media studies at the University of Siegen in early 2016. As a media studies graduate student at the Film University Babelsberg, she focuses primarily on Internet research.

During her studies she has been working in several research projects, among others, at the Alexander von Humboldt Institute for Internet and Society.

Currently, she works in two research projects under the leadership of Prof. Dr. Jens Eder (Film University) and Prof. Dr. Judith Ackermann (University of Applied Sciences Potsdam). Until September 2018, she was a scholarship holder of the German National Scholarship Program (Deutschlandstipendium). Combining science with practice is very important to her.

Over the last few years, she has worked for several media productions (e.g. SBS Australia, UFA), as well as on student projects involving social media storytelling. At the moment, she is working on her Master's thesis and aspires to do a PhD in the field of Internet research.

Abstract:
I post, therefore I am? The consequences of losing our backstage to AI.

The datafication (van Dijck 2014; Schöneberger 2013, 29; Fritsch 2018, 1) of our lives, as well as the value of our personal data for companies like Facebook, is not a new phenomenon (Spiekermann & Korunovska 2016, 1; Larsson 2018). Snowden and Cambridge Analytica raised awareness of data violation and surveillance; nonetheless, the majority of people using Social Media every day are not aware of how it affects their behavior, nor of how they constantly help algorithms and AI to develop. We may be very aware of the issues with Social Media and still not quit. Has Social Media become part of our identity?
Erving Goffman analyses the interaction and performances of individuals using the theater as a metaphor. He argues that we behave differently on the front stage with an audience watching us.

Furthermore, he describes the backstage as the personal room of the actor (the individual): "The actor’s behavior will be different in a private, backstage environment, however, as no performance is necessary." (Bullingham & Vasconcelos 2013, 1). Goffman uses the mask as a metaphor for our differing behaviors in different environments (Goffman 1990, 57).

Marion Fourcade stated in her lecture: "Facebook has entrance to the backstage" (see Fourcade 2018, 00:54:02 min). I would like to expand on this and argue that we have lost our backstage through the rise of artificial intelligence on Social Media platforms.

In my presentation I will talk about our online identity, our masks and the value of online privacy. I would like to address the question of how Social Media influences our behavior, and ultimately what (data) privacy means for diverse groups of people and what value it has for the individual.

Did we all become Social Media method actors? Is being visible on Social Media platforms more important than our personal data? Ultimately, these questions lead to the question of how AI does and will affect our behavior on Social Media. Instagram will be the object of study. I focus on younger adults who grew up with Social Media and have long-standing experience with it. The presentation will be based on early-stage research for my Master's thesis. My research will analyze material collected through a currently ongoing online survey. I will present the theoretical background and first results, as well as the problems I encountered and how I tackled them.

At the "Artificial Intelligence: ethical and legal implications" workshop I intend to present my first research results, as well as open a discussion about my project.

References
Bullingham, Liam and Ana C. Vasconcelos. 2013. "'The presentation of self in the online world': Goffman and the study of online identities". Journal of Information Science 39 (1), pp. 101-112. January 13, 2013. DOI: https://doi.org/10.1177/0165551512470051.
Fourcade, Marion. 2018. "Social order in the digital society". Lecture in the series Making sense of the digital society. https://www.hiig.de/events/marion-fourcade-digital-society.
Fritsch, Karin. 2018. "Towards an Emancipatory Understanding of Widespread Datafication". SSRN. December 2, 2018. DOI: http://dx.doi.org/10.2139/ssrn.3122269.
Goffman, Erving. 1990. The presentation of self in everyday life. London: Penguin.
Goffman, Erving. 1955. "On face-work: an analysis of ritual elements in social interaction". Psychiatry: Journal for the Study of Interpersonal Processes 18, pp. 213-231.
Larsson, Stefan. 2018. "Algorithmic governance and the need for consumer empowerment in data-driven markets". Internet Policy Review. May 15, 2018. DOI: 10.14763/2018.2.791.

Kuba Piwowar


SWPS University of Social Sciences and Humanities.

I am a sociologist and a data analyst at Google in Warsaw, where I lead the Insights Team, which is responsible for providing Polish businesses with knowledge about digital consumers and market trends, from both horizontal and vertical perspectives. I am currently working on my PhD at the University of Social Sciences and Humanities, where, from a cultural studies perspective, I research algorithmic bias/fairness and the ethics of A.I. I am a member of the Polish Society of Market and Opinion Researchers. I graduated in sociology from the University of Social Sciences and Humanities, hold postgraduate degrees in gender studies from the Polish Academy of Sciences and in art history from Collegium Civitas, and completed the Marketing Academy at the Wharton School of the University of Pennsylvania.

Abstract:
“Sources and ethical implications of bias in A.I.: overview and critical analysis of current research”

In recent years, the discussion about artificial intelligence has become pervasive and has brought to the table practitioners and theorists across different industries and disciplines, but it has been far from culturally diverse. The tech industry's trouble with the disproportionate share of men working on data, algorithms and A.I. has resulted in diversity & inclusion programmes being introduced in Silicon Valley companies, addressing gender, race and sexual orientation, among other topics. A.I. algorithms fuelled by ‘digital footprints’ - data collected from billions of signals every day - face the threat of bias resulting from the interests of those who gather, analyse and utilise these data, as well as of mirroring the simple but pervasive line that divides stable and growing economies from developing countries with scarce technology penetration and almost non-existent internet access.

The purpose of the presentation is to explore the current state of research on the ethical implications of artificial intelligence and algorithms from the perspective of the social sciences and humanities, and to map the current discussion.

First, I identify the most important actors in the field of A.I., including businesses and academia. Then, by analysing research papers on the topic, as well as businesses' efforts to address it, I place the discussion around ethical A.I. in the broader context of bias in decision making, looking at selected sources from which it can stem. Whether it comes from incomplete datasets, spurious assumptions or the cultural background that forms an always-on context for algorithm designers, bias has a pervasive impact on societies. It can amplify inequalities (with examples coming from researchers like Virginia Eubanks or Cathy O’Neil) or give governments dangerous weapons to oppress minorities (as Stanford researchers Michal Kosinski and Yilun Wang pointed out in their study of deep neural networks being more accurate than humans at detecting sexual orientation). In the presentation I analyse the sources of bias and their implications, pointing out significant gaps in the research that has already been conducted, as well as limitations of these studies, as perceived by their authors.

Juxtaposing the business and academic takes on ethics in big data and autonomous systems gives a picture of what “fairness” looks like and what is perceived as “ethical”, but also sheds light on underexplored areas. To fill the gap, I go through the efforts of selected academic centers, including research and educational projects, as well as the discussion around the role of the humanities in developing autonomous systems, building a case for the idea that only interdisciplinary efforts can make artificial intelligence a successful project.

The proposed analysis will be a starting point for research to be included in my PhD dissertation, aimed at understanding computer scientists', designers', product managers', data practitioners' and analysts' take on bias in data, ways of mitigating it and, eventually, its effect on algorithms and autonomous machines, including, but not limited to, artificial intelligence.

Thiago Dias Oliva


Law School of the University of São Paulo

Thiago Dias Oliva is a Ph.D. student in International Law at the University of São Paulo and head of research on freedom of expression at InternetLab, an independent research center based in São Paulo, Brazil. Thiago holds a master's degree in Human Rights (2015) from the same university, for which he was granted scholarships from the São Paulo Research Foundation (FAPESP) and the German Academic Exchange Service (DAAD). He is the author of the book "Sexual minorities and the limits of free speech" (2015).

Abstract:
CONTENT REMOVAL TECHNOLOGIES: NEW CHALLENGES TO FREEDOM OF EXPRESSION 

The digital environment has brought new ways of communicating and sharing content. Internet platforms like Facebook, Twitter and YouTube – protected from third party content liability in the USA pursuant to §230 of the Communication Decency Act – have enabled users to produce, publish and disseminate all sorts of content, which meant taking freedom of expression and access to information to a whole new level of enjoyment. With the increase in content production and growth in number of accesses, however, new challenges have arisen: the dissemination of defamatory content, non-consensual intimate images, hate speech, fake news, as well as the increase of copyright violations. In this context, internet platforms have been pushed by public authorities in many countries to increasingly moderate content.

The move has led these companies to adjust their terms and conditions of use, design more sophisticated frameworks for identifying illegal content and hire more people to work as moderators. Due to the huge amount of work required to moderate content, internet platforms have started developing artificial intelligence to automate decision making regarding content removal. Both Facebook and YouTube have already implemented technology for removing copyright-protected content, terrorist propaganda and non-consensual intimate images. It looks for "hashes": unique digital fingerprints that these companies automatically assign to certain videos and images, allowing all content with the same fingerprint to be taken down very quickly.
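
As a rough illustration of how fingerprint matching of this kind can work, the sketch below computes a simple "average hash" for images and compares fingerprints by Hamming distance. This is a well-known toy technique, not the proprietary hashing these platforms actually deploy; the function names and the matching threshold are illustrative assumptions only.

```python
# Illustrative "average hash" fingerprinting for images, compared by Hamming
# distance. The platforms' actual fingerprinting systems are proprietary and
# far more robust; this toy version only shows why all copies of an already
# fingerprinted image can be matched and taken down very quickly.
from PIL import Image  # Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size, grayscale, and encode each pixel as brighter or
    darker than the mean; the resulting 64-bit integer is the fingerprint."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")


def is_known_content(path: str, database_of_hashes: set, threshold: int = 5) -> bool:
    """Flag an upload whose fingerprint is (almost) identical to a stored one."""
    candidate = average_hash(path)
    return any(hamming_distance(candidate, known) <= threshold
               for known in database_of_hashes)


# Hypothetical usage:
# database = {average_hash("previously_removed.jpg")}
# print(is_known_content("re_uploaded_copy.jpg", database))  # likely True
```

The threshold trades recall for precision: a larger value catches more re-encoded or slightly altered copies, but also increases the risk of matching unrelated images, which is one way such systems can remove lawful, socially valuable content.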

Other technologies developed to remove text-based content are underway. Despite removing content in a very effective manner, technologies such as these may cause human rights violations whenever they fail to understand context and recognize specific content as socially valuable, whether from a freedom of expression or an access to information perspective. Mindful of these challenges, the present study intends to further understand and analyze existing automated content removal technologies from a legal perspective, in order to assess the risks they pose to freedom of expression and access to information in the digital environment.

 

Keywords: Content removal technologies; Internet platforms; Freedom of Expression; Access to Information

Christoph Lutz

Nordic Centre for Internet & Society

Christoph Lutz is an associate professor at the Nordic Centre for Internet and Society, BI Norwegian Business School (Oslo). His research interests are digital and social media, with a particular emphasis on their social implications in terms of privacy and participation. With a background in sociology and a PhD in Management from the University of St. Gallen (Switzerland), Christoph has become increasingly involved in researching the digital economy, including social aspects of the sharing economy, the gig economy and social robots. His work has been published in major journals in communication, information systems and business/management research.


Abstract:
Ethical, Legal, and Social Issues of Social Robots: Findings from Expert Workshops

Introduction and Literature Review
Social robots are increasingly employed in industry, healthcare, and in households (Bekey, 2012). The application of social robots comes with numerous benefits. Therapeutic robots, for instance, can alleviate daily tasks for older patients and help doctors monitor their patients’ recovery (Beane & Orlikowski, 2015). Assistive robots in autism therapy facilitate the communication between physicians and 

This contribution aims to identify key ethical, legal, and social issues (ELSI) of social robots, particularly in therapy and education, and to explore solutions to these issues. To do so, we draw on data from the Workshop Series on the Ethical, Legal and Social Issues of Robots in Therapy and Education. Between 2015 and 2017, researchers from different disciplines and practitioners participated in structured discussions on ELSI and possible solutions. Here, we summarize the key findings of the workshops, addressing two research questions: What are the key ELSI of social robots, as perceived by experts? What solutions may help overcome these challenges?

Methods
Starting as a standalone workshop in 2015, the workshop turned into a workshop series in which participants discussed pressing ELSI of social robots in solution-oriented roundtables. Three further workshops were held in 2016 and 2017: one in Barcelona, Spain, as part of the New Friends 2nd International Conference on Social Robots in Therapy and Education in November 2016; another in Yokohama, Japan, as part of the International Symposia on Artificial Intelligence (isAI) in November 2016; and a final one at the Edinburgh Centre for Robotics, Scotland, as part of the European Robotics Forum in March 2017. To recruit participants from different backgrounds, a call for papers targeting engineers, legal practitioners, psychologists, ethicists, philosophers, and cognitive scientists was distributed through social media, university channels, and other channels, including mailing lists (e.g., euRobotics, ECREA, AoIR) and word of mouth. Two websites were created, and 43 participants across various fields took part in the four workshops. With the consent of the participants, the workshop discussions were recorded and analyzed.

Preliminary Results
Preliminary analysis of the data indicates five main ELSI: 1) privacy, 2) legal uncertainty, 3) autonomy and agency, 4) the impact of social robots on employment, and 5) human-machine interaction and dehumanization. In the full analysis reported in the paper, we discuss these five challenges and solutions to them in depth, outlining sub-issues and sub-discourses.

Prof. Niva Elkin-Koren


The Center for Cyber Law & Policy

Niva Elkin-Koren is a Professor of Law at the University of Haifa, Faculty of Law and a Faculty Associate at the Berkman Klein Center at Harvard University. She is the Founding Director of the Haifa Center for Law & Technology (HCLT) and a Co-Director of the Center for Cyber Law & Policy. During 2009-2012 she served as Dean of the Faculty of Law at the University of Haifa.

Her research focuses on innovation policy and access to knowledge, digital governance, online intermediaries, and the legal implications of AI and big data.

Prof. Elkin-Koren has been a Visiting Professor of Law at Harvard University, Columbia Law School, UCLA, NYU, George Washington University and Villanova University School of Law. She is the Chair of the Scientific Advisory Council of the Alexander von Humboldt Institute for Internet and Society in Berlin, a member of the Executive Committee of the Association for the Advancement of Teaching and Research in Intellectual Property (ATRIP), and a member of the Scientific Advisory Board of the Munich Intellectual Property Law Center (MIPLC) at the Max Planck Institute for Innovation and Competition. She is also a member of the editorial boards of the Journal of the Copyright Society (since 2009), the Journal of Information Policy (since 2010) and the Internet Policy Review (since 2016).

Prof. Elkin-Koren received her LL.B from Tel-Aviv University Faculty of Law in 1989, her LL.M from Harvard Law School in 1991, and her S.J.D from Stanford Law School in 1995.
