What happens when decisions made by an artificial intelligence platform lead to injury or damage? Is the “system” responsible? Fox Rothschild’s Chris Temple discusses legal liability when AI decisions made without human input go wrong.
More and more process systems deployed in public infrastructure and in industrial, commercial and residential applications are not only automated; increasingly, they are managed and directed by autonomous, non-human agents. Until recently, these systems were a combination of mechanical hardware and software code with some level of human worker input and control.
Now, however, more sophisticated and powerful artificial intelligence platforms allow industrial internet of things (IIoT)-networked “smart” equipment and robots to communicate with one another, identify issues and devise solutions through human-like reasoning processes. In other words, machines are identifying problems, devising solutions and, in many instances, autonomously executing the steps necessary to implement the chosen solution. These systems also simultaneously record the data upon which their decisions were based.
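As an illustration of what “recording the data upon which decisions were based” might look like in practice, the sketch below shows a minimal decision-provenance record. It is a hypothetical example only; the Python structure, the field names and the record_decision helper are assumptions made for illustration, not a description of any particular vendor’s system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """Hypothetical snapshot of one autonomous decision, captured for later review."""
    timestamp: str           # when the decision was made (UTC, ISO 8601)
    sensor_inputs: dict      # the readings the system acted upon
    model_version: str       # which software/model version produced the decision
    candidate_actions: list  # the options the system considered
    chosen_action: str       # the action it autonomously executed
    rationale: str           # any machine-generated explanation of the choice

def record_decision(sensor_inputs, model_version, candidates, chosen, rationale=""):
    """Capture the basis for a decision at the moment it is made, not after the fact."""
    return DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        sensor_inputs=dict(sensor_inputs),
        model_version=model_version,
        candidate_actions=list(candidates),
        chosen_action=chosen,
        rationale=rationale,
    )
```

The design point of a record like this, whatever its actual form, is that it is created contemporaneously with the decision, which matters for the evidentiary questions discussed below.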
Who – or What – is Responsible When Things Go Wrong?
Machine-learning or smart process equipment, robots and related software present unusual challenges in applying product liability laws as they presently exist. For example, if a machine makes a “decision” that causes injury or damage, who or what is legally responsible for the non-human decision-making, particularly if the hardware and software performed precisely as they were intended and without a demonstrable defect or malfunction of any kind?
In the “old” days, the focus of attention would have been on the human operator or control room technician, whose conduct would be investigated and evaluated. In modern circumstances, however, the smart process system – a combination of equipment and software – is the only apparent culprit. Since a machine cannot be liable (at least not yet), we can expect the law to assign liability to some person or legal entity, including, for example, the hardware manufacturer, the software providers, the sellers of the process, the installers of the process equipment and software, and the owners of the process or the facility where the process is located.
The law has a number of candidates from whom payment of damages may be coerced, but how the law will impose liability in likely scenarios requires consideration now. Counsel and internal compliance professionals within organizations need to consider the potential liability risks of the technology used in today’s operations.
The Evolution of American Tort Law to Create Liability Where None Previously Existed
Over the past 50 years, American tort law has consistently evolved to compensate for injuries or damages caused by newly emerging products or services. It is not wholly unfair to conclude that American tort law will continue to evolve such that, where there is an injured party, the law will bend and twist to make someone pay compensation for damages. Thus, the laws in effect at the time a product was manufactured may not be the same laws applied years later, after someone is injured. Many companies have designed, marketed and sold a product under one set of product liability laws, only to find that liability was later imposed based on a standard that did not exist when the product left the plant floor.
In a recent decision, Air and Liquid Systems Corp. v. DeVries, 586 U.S. ___ (2019), for example, the U.S. Supreme Court held that a manufacturer supplying critical equipment to the U.S. Navy during the wartime exigencies of World War II in 1943 should have issued a written warning based on a legal duty that would not be recognized by the law until 2019 – more than 75 years later. At the time the product was made and supplied, the manufacturer could not have anticipated any such legal obligation in the distant future, but it was nonetheless held liable for its 1943 product.
When marketing a product, it is necessary to consider the current legal obligations, but it is also prudent for those vested with stewardship and compliance to consider what the legal obligations of the buyers or sellers may be in the future, especially for a product that is relatively new to commerce.
What is new in today’s marketplace are tasks and functions being undertaken without human input or intervention. In a situation where decisions are made and solutions are executed by artificial intelligence without human involvement, how does the law assess potential liability when things go wrong? And perhaps more importantly, how will the law assess liability in the future?
If the smart process system was not defective and performed as intended, but an injury-causing decision occurred, can we recreate the circumstances and the machine’s reasoning process and judgment as a technical matter, or in a manner suitable for evidentiary requirements in a subsequent court proceeding? In other words, can the assembled system of interconnected equipment “testify” based on stored data? If the data becomes unavailable, questions will arise as to whether evidence has been destroyed. If data is available, how does one recreate what the machine knew or should have known about the circumstances and conditions, and how does one measure whether the machine’s response was reasonable or appropriate as a matter of law? Stated another way, will the law develop a standard under which a smart machine is considered to be dumb and, if so, who will be responsible for a dumb decision?
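One way stored data could support that kind of after-the-fact reconstruction, and help answer whether records were later altered or destroyed, is a tamper-evident, append-only decision log in which each entry is cryptographically chained to the one before it. The sketch below is a simplified illustration of that general technique under assumed names and formats; it is not a statement of any evidentiary standard.

```python
import hashlib
import json

def append_entry(log, decision_data):
    """Append a decision record, chaining it to the previous entry's hash.

    Any later edit or deletion breaks the chain, which is what makes the
    log tamper-evident rather than merely a plain record.
    """
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    payload = json.dumps(decision_data, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"prev_hash": prev_hash, "payload": payload, "entry_hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash; returns True only if no entry was altered or removed."""
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev_hash + entry["payload"]).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True
```

In a scheme like this, a missing or edited entry surfaces as a verification failure, giving the operator, and potentially a fact-finder, a concrete signal that records are incomplete, which is the code-level analogue of the spoliation question raised above.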
Can Management Anticipate Future Liability?
Unfortunately, there is no crystal ball that allows organizations to predict accurately the course of future liability, but general counsel and compliance officers can expect lawyers to develop alternative theories of liability peculiar to smart process systems and equipment. For example, does a company have independent liability for making, installing or relying upon a smart machine to make potential life-or-death decisions without human intervention, especially where the machine is expected to “learn” and evolve over time and experience, unaided by human influence? Marketing materials for companies engaged in the development, design and sale of smart process systems very often tout the increased safety and reliability of a fully interconnected IIoT-based installation. Will such claims give rise to tort liability in the event that injury or damage occurs, especially where there is no record that the old system, operated by human workers, ever had a problem?
Some Practical Considerations While We Wait for the Courts
The courts have not yet answered these and many other related questions, and there is nothing to suggest that courts will soon offer much guidance. These matters take time, as the Supreme Court’s decision more than 75 years after the fact demonstrates. While waiting for the law to catch up to rapidly emerging technologies, however, companies engaged in the development, design, sale, acquisition, installation and use of smart process systems should consider the following:
- Do purchase order, contract or licensing documents purporting to allocate liability for damages caused by the future operation of the smart process system (or any of its components, including software) take into account the impact of potential future changes in the law that may reassign or reallocate liability in ways different from today’s tort and contract law?
- Can marketing claims about the safety and reliability of smart process systems, offered as replacements for existing systems that rely at least in part on human workers, provide potential evidence for lawyers in the event of an injury-causing decision?
- Does the process system have the ability to collect and maintain data in a legally and technically sufficient form that would permit the subsequent recreation of events and of the decision-making process, such that the system itself can collectively act as a “witness” to the circumstances causing injury or damage?
Much attention has been given to cybersecurity concerns over IIoT hacking risks, and tech ethics experts continue to explore questions about artificial moral agents. But sellers, buyers and users of smart process systems in the U.S. need to consider how American tort law will apply liability principles to this emerging technology, and who (or what) will be the legally responsible party when artificial intelligence decisions made without human input go wrong.