PI: Karni Chagal-Feferkorn
Algorithmic decision-makers dominate many aspects of our lives. Beyond simply performing complex computational tasks, they often replace human discretion and even professional judgement. As sophisticated and accurate as they may be, autonomous algorithms may cause damage.
A car accident could involve both human drivers and driverless vehicles. Patients may receive an erroneous diagnosis or treatment recommendation from either a physician or a medical algorithm. Yet because algorithms were traditionally considered “mere tools” in the hands of humans, the tort framework applying to them is significantly different from the framework applying to humans, potentially leading to anomalous results in cases where humans and algorithmic decision-makers could interchangeably cause damage.
This Article discusses the disadvantages stemming from these anomalies and proposes to develop and apply a “reasonable algorithm” standard to nonhuman decision makers—similar to the “reasonable person” or “reasonable professional” standard that applies to human tortfeasors.
While the safety-promotion advantages of a similar notion have been elaborated on in the literature, the general concept of subjecting nonhumans to a reasonableness analysis has not been addressed. Rather, current anecdotal references to applying a negligence or reasonableness standard to autonomous machines have mainly dismissed the entire concept, primarily because “algorithms …”