Karni Chagal

The Center for Cyber Law & Policy

Karni is a Ph.D. candidate at the University of Haifa Faculty of Law, writing on "The Reasonable Algorithm" under the supervision of Professor Niva Elkin-Koren.

Karni is an attorney admitted to the New York, California, and Israel bars, and holds an LL.M. degree from Stanford Law School. She is a founding partner at Lexidale, a consultancy firm specializing in comparative research on law and regulation.

Abstract:
"I am an algorithm, not a product": Why Product Liability is Not Well-Suited for Certain Types of Algorithms
Over the years, humankind has come to rely more and more on machines. Technology is ever advancing, and beyond delegating physical and purely computational tasks to machines, algorithms' self-learning abilities now enable us to entrust them with professional decisions, for instance in the fields of law, medicine, and accounting.

Advanced and safe as such machines may be, however, they are still bound to occasionally cause damage. From a tort law perspective, machines have generally been treated as "products" or "tools" in the hands of their manufacturers or users. Whenever they caused damage, therefore, they were subject to legal frameworks such as product liability or negligence on the part of the humans involved. A growing number of scholars and entities, however, now acknowledge that certain "sophisticated" or "autonomous" decision-makers, by their nature, require different treatment than their "traditional" predecessors. In other words, while technology is always advancing, many believe there is something "special" about today's self-learning algorithms that separates them from the general category of "machines" and warrants a different tort law framework.

Sophisticated machines that outperform humans have existed for a long time. What separates "traditional" algorithms and machines, which for decades have been subject to the traditional product liability framework, from what I call "thinking algorithms," which seem to warrant their own custom-made treatment? Why, for example, have autopilots traditionally been treated as "products," while autonomous vehicles are suddenly perceived as more "human-like" systems that require different treatment? Where is the line between these kinds of machines drawn?

Several scholars have touched upon this question. Some have referred in general terms to whether the underlying system fully replaces humans; others to whether the system may outperform humans; while others have focused on the level of autonomy the system possesses. These distinctions, however, are only partial and have not been discussed in depth. A Roomba vacuum robot, for example, does replace a person in the task of cleaning; it (arguably) does so in a manner that outperforms the human cleaner, and it possesses sufficient autonomy to "decide" on its own which direction to turn in order to continue cleaning. Is the Roomba a "thinking algorithm" that requires a special legal framework? If the answer is negative, then what does distinguish the different types of algorithms?

The article offers a systematic analysis of the parameters that differentiate traditional systems from those that deserve separate legal treatment. It provides, in other words, rules of thumb that can be applied in the various fields taken over by algorithmic decision-makers in order to identify the line between ordinary algorithms and their "thinking" counterparts that warrant a different legal perspective. It also analyzes why current product liability rules are not well-suited for the latter type of algorithms.