Revista Ius et Praxis, Año 28, Núm. 1, 2022, pp. 3-19
Talca, Chile, 2022
Article
THE PRICE OF AUTONOMY: LIABILITY STANDARDS FOR COMPLEMENTARY AND SUBSTITUTIVE MEDICAL ROBOTICS AND ARTIFICIAL INTELLIGENCE
FRANK ANTHONY PASQUALE III*
Brooklyn Law School
Abstract
When AI or robotics assist a professional, they are tools. In medicine, the doctrine of “competent human intervention” has shifted liability away from those who make devices and toward the professionals who use them. However, the professional in such scenarios should not bear the entire burden of responsibility. Tools can be defective, and vendors of defective, complementary AI and robotics should be held responsible for negligence. The burden of proof will still be on the plaintiff to demonstrate that not only a skilled medical professional, but also the maker of the tools used by such a professional, should be held liable for a preventable adverse outcome.
When AI and robotics replace, rather than merely assist, a skilled medical professional, the burden should shift. The vendor of such computational systems needs to take on responsibility for errors and accidents. In the medical field, there has long been a standard of competent professional supervision of the deployment of advanced technology. When substitutive automation short-circuits that review, it is both defective and unreasonably dangerous. Nevertheless, at the damages phase of litigation, the vendor of the substitutive AI should be entitled to explain how damages should be mitigated based on its AI’s performance relative to the extant human- or human-machine-based standard of care. Such responsibility for explanation will serve an important information-forcing function in areas where public understanding is often limited by trade secrecy.
As law and political economy methods demonstrate, law cannot be neutral with respect to markets for new technology. It constructs these markets, making certain futures more or less likely. Distinguishing between technology that substitutes for human expertise and that which complements professionals is fundamental not just to labor policy and the political economy of automation, but also to tort law.
Keywords
artificial intelligence, liability, tort.
* Frank Anthony Pasquale III is an expert on the law of artificial intelligence (AI), algorithms, and machine learning. He is a Professor of Law at Brooklyn Law School, Brooklyn, U.S.A., a Visiting Scholar at the AI Now Institute, an Affiliate Fellow at Yale University's Information Society Project, and a member of the American Law Institute. Before coming to Brooklyn Law, he was Piper & Marbury Professor of Law at the University of Maryland. He is co-editor-in-chief of the Journal of Cross-Disciplinary Research in Computational Law (CRCL), based in the Netherlands, and a member of an Australian Research Council (ARC) Centre of Excellence on Automated Decision-Making & Society (ADM+S). His book The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) has been recognized internationally as an important study of the law and political economy of information. His latest book, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020), develops a political economy of automation focused on professionalization, in which human capacities are the irreplaceable center of an inclusive economy. Email: pasquale.frank@gmail.com. I wish to thank Prof. Dr. Carolina Riveros Ferrada for the invitation to publish this piece. I wish to thank Brooklyn Law School’s summer research fund for supporting this research. I also wish to thank anonymous reviewers and Prof. Anita Bernstein for thoughtful comments. Any errors remain my own responsibility. This essay was commissioned by the Academia Sinica Law Journal and is planned to be published there, as well as in a collection on AI and robotics law.
1. Introduction
Robotics and AI in medicine raise critical liability questions for the medical profession.
Consider the case of robotically assistive surgical devices (RASDs), which surgeons use to
control small cutting and grasping devices. If a surgeon’s hand slips with a scalpel, and a vital
tendon is cut, our intuitive sense is that the surgeon bears the primary responsibility for the
resultant malpractice suit. But the vendor of an RASD may eventually market a machine which
has a special “tendon avoidance subroutine,” akin to the alarms that automobiles now sound
when their sensors indicate a likely collision. If the tendon sensors fail, and the warning does
not sound before an errant cut is made, may the harmed patient sue the vendor of the RASD?
Or only the physician who relied on it?
Similar problems arise in the context of some therapy apps. For example, a counselor
may tell a patient with substance use disorder (SUD) to use an app in order to track cravings,
states of mind, and other information helpful to those trying to cure addictions. The app may
recommend certain actions in case the counselor cannot be reached. If these actions are
contraindicated and result in harm to the patient or others, is the app to blame? Or the doctor
who prescribed it? Home health aide businesses may encounter similar dilemmas as they
deploy so-called “care robots”1.
Of course, in neither the surgical nor the mental health scenario is the answer
necessarily binary. There may be shared liability, based on an apportionment of responsibility.
But before courts can trigger such an apportionment, they must have a clear theory upon
which to base the responsibility of vendors of technology.
This article develops such an approach. What is offered here is less a detailed blueprint for liability determinations than a binary approach to structure policy discussions on liability for harm caused by AI and robotics in medical contexts2. The binary is the distinction between substitutive and complementary automation3. When AI and robotics substitute for a physician,
1 For a fascinating overview of legal issues raised by care robots, see BLACK (2020). For examples of medical automation gone awry, see WACHTER (2015).
2 This article will draw on common law principles in many jurisdictions, in order to inform a general policy discussion. It does not attempt to give detailed legal guidance, or to map how courts presently handle cases involving AI and complex computation in medical contexts. Rather, cases and other legal materials are drawn upon to illustrate the complement/substitute distinction.
3 This distinction may also be styled as a contrast between artificial intelligence (AI) and intelligence augmentation (IA). However, that contrast would probably confuse matters at present, given that much of what is called AI in contemporary legal and policy discussions is narrow enough to be IA.
