TY - JOUR
T1 - How much can potential jurors tell us about liability for medical artificial intelligence?
AU - Price, W. Nicholson
AU - Gerke, Sara
AU - Cohen, I. Glenn
N1 - Funding Information:
This work was supported by a grant from the Collaborative Research Program for Biomedical Innovation Law, a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784). I. Glenn Cohen serves as a bioethics consultant for Otsuka Pharmaceuticals on its Abilify MyCite digital medicine product and on the ethics advisory board for Illumina. No other potential conflict of interest relevant to this article was reported.
Publisher Copyright:
© 2021 Society of Nuclear Medicine Inc. All rights reserved.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - Artificial intelligence (AI) is rapidly entering medical practice, whether for risk prediction, diagnosis, or treatment recommendation. But a persistent question keeps arising: What happens when things go wrong? When patients are injured and AI was involved, who will be liable, and how? Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, hospitals that implement AI tools for physician use, and developers who create those tools in the first place. If physicians are shielded from liability (typically medical malpractice liability) when they use AI tools, even if patient injury results, they are more likely to rely on these tools, even when the AI recommendations are counterintuitive. On the other hand, if physicians face liability for deviating from standard practice, whether or not an AI recommends something different, the adoption of AI is likely to be slower, and counterintuitive recommendations, even correct ones, are likely to be rejected. In this issue of The Journal of Nuclear Medicine, Tobia et al. (1) offer an important empirical look at this question, which has significant implications for whether and when AI will come into clinical use.
AB - Artificial intelligence (AI) is rapidly entering medical practice, whether for risk prediction, diagnosis, or treatment recommendation. But a persistent question keeps arising: What happens when things go wrong? When patients are injured and AI was involved, who will be liable, and how? Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, hospitals that implement AI tools for physician use, and developers who create those tools in the first place. If physicians are shielded from liability (typically medical malpractice liability) when they use AI tools, even if patient injury results, they are more likely to rely on these tools, even when the AI recommendations are counterintuitive. On the other hand, if physicians face liability for deviating from standard practice, whether or not an AI recommends something different, the adoption of AI is likely to be slower, and counterintuitive recommendations, even correct ones, are likely to be rejected. In this issue of The Journal of Nuclear Medicine, Tobia et al. (1) offer an important empirical look at this question, which has significant implications for whether and when AI will come into clinical use.
UR - http://www.scopus.com/inward/record.url?scp=85098741947&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098741947&partnerID=8YFLogxK
U2 - 10.2967/jnumed.120.257196
DO - 10.2967/jnumed.120.257196
M3 - Review article
C2 - 33158905
AN - SCOPUS:85098741947
SN - 0161-5505
VL - 62
SP - 15
EP - 16
JO - Journal of Nuclear Medicine
JF - Journal of Nuclear Medicine
IS - 1
ER -