Extract

As with many new technologies, artificial intelligence (AI) in healthcare raises new opportunities and new dilemmas. To understand these, it is helpful to review the values relevant to implementing AI, including machine learning applications, in healthcare. This essay identifies a wide variety of values relevant to AI in healthcare and explains how AI applications can lead to conflicts between and among these values.

Introduction

Artificial intelligence has been transforming healthcare, from diagnosis to treatment recommendations and prognosis. AI systems can learn to identify tumors in medical images or use EEG data to predict which SSRI will be effective for individual patients.1 AI can use large datasets to help identify which molecules have potential for drug development,2 or predict the next epidemic.3 As applications of AI proliferate, it seems that no aspect of healthcare will be left unchanged.

Like any computer application, AI systems take input and produce output in the form of data. It is characteristic of AI systems, however, that they must “learn” from a large amount of input before they can provide any useful or reliable output. This distinguishes them from traditional computer applications such as calculators, which deliver a correct answer with certainty the first time and every time. AI is helpful for tasks for which we cannot specify a straightforward method that produces a correct answer every time. We have traditionally relied on human intelligence for such tasks, but once an AI system is trained, it can often produce answers more accurately and more quickly than humans can.
