
Researchers say use of artificial intelligence in medicine raises ethical questions

Stanford School of Medicine News, March 17, 2018

Artificial intelligence is hard at work crunching health data to improve diagnostics and help doctors make better decisions for their patients. But researchers at the Stanford University School of Medicine say the furious pace of growth in the development of machine-learning tools calls for physicians and scientists to carefully examine the ethical risks of incorporating them into decision-making.

In a perspective piece published March 15 in The New England Journal of Medicine, the authors acknowledged the tremendous benefit that machine learning can have on patient health. But they cautioned that the full benefit of using this type of tool to make predictions and take alternative actions can’t be realized without careful consideration of the accompanying ethical pitfalls.

“Because of the many potential benefits, there’s a strong desire in society to have these tools piloted and implemented into health care,” said the lead author, Danton Char, MD, assistant professor of anesthesiology, perioperative, and pain medicine. “But we have begun to notice, from implementations in non-health-care areas, that there can be ethical problems with algorithmic learning when it’s deployed at a large scale.”

Among the concerns the authors raised are:

  • Data used to create algorithms can contain bias that is reflected in the algorithms and in the clinical recommendations they generate. Also, algorithms might be designed to skew results, depending on who’s developing them and on the motives of the programmers, companies, or health-care systems deploying them.
  • Physicians must adequately understand how algorithms are created, critically assess the source of the data used to create the statistical models designed to predict outcomes, understand how the models function, and guard against becoming overly dependent on them.
  • Data gathered about patient health, diagnostics, and outcomes become part of the “collective knowledge” of published literature and of the information collected by health-care systems, and might be used without regard for clinical experience and the human aspect of patient care.
  • Machine-learning–based clinical guidance may introduce a third-party “actor” into the physician-patient relationship, challenging the dynamics of responsibility in the relationship and the expectation of confidentiality.

“We need to be cautious about caring for people based on what algorithms are showing us,” Char said. “The one thing people can do that machines can’t do is step aside from our ideas and evaluate them critically.”
