The American Medical Association (AMA) has published new guidelines for the development, implementation and use of artificial intelligence (AI) in healthcare. The set of seven principles aims to reduce risks for patients and physicians while maximizing benefits, according to HealthITAnalytics.
“The AMA recognizes the enormous potential of artificial intelligence in healthcare to improve diagnostic accuracy, treatment outcomes and patient care,” said AMA President Jesse M. Ehrenfeld. “However, this technology is ethically controversial and carries potential risks, so it requires special oversight.”
The first principle concerns oversight, in which non-governmental organizations play an important role. The second principle focuses on transparency, which the AMA considers essential to establishing the trust needed for the successful use of AI in healthcare. The AMA proposes that disclosure of information about the design, development and use of these tools be required by law. The third principle addresses how the use of the technology should be disclosed and documented.
The fourth principle details the AMA’s approach to generative AI and directs healthcare organizations to develop and implement policies that anticipate and help manage the technology’s risks before it is used. The fifth principle focuses on privacy and cybersecurity: the AMA emphasizes that patient privacy and data security should be a top priority when developing and deploying AI in healthcare. The sixth principle concerns identifying and mitigating algorithmic bias to ensure fairness.
The final principle addresses physician liability. According to the AMA, liability for the use of AI-enabled products should be limited and consistent with current legal approaches.
In addition, the press release notes that introducing AI into clinical practice should not eliminate human assessment of each patient’s individual characteristics or replace the physician’s clinical judgment.