Before You Say “Aye” to AI…
In April 2024, the Federation of State Medical Boards (FSMB) released “Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice.” Artificial Intelligence (AI) has become a hot policy topic in recent years as federal and state policymakers grapple with how best to regulate the area. The document grew out of a recommendation from a task force FSMB convened to monitor AI developments, and FSMB adopted it to give physicians and state medical boards guidance on incorporating AI into practice responsibly and ethically.
Currently, AI is used in health care primarily to manage administrative tasks such as scheduling and documenting clinical visits, though in some instances it is also being used in clinical decision support systems. While most providers may already be comfortable with AI and can see the immediate benefit of using it for paperwork and administrative tasks, several areas of concern should not be overlooked, especially where AI is being used for clinical decisions. As recent reports and news articles have highlighted, bias can sometimes be found in products built with AI. A physician therefore needs to take regular, reasonable steps to identify and address potential biases in any system utilizing AI, and ultimately, as the FSMB document notes, AI-generated clinical information should always be checked for accuracy.
The FSMB document repeatedly encourages physicians to educate and inform themselves as the AI landscape evolves, and to remain aware of developments and changes in AI policy, because the care of the patient is ultimately always the physician’s responsibility. Practitioners are also urged to strike a balance between underutilizing AI, and thereby inadvertently overlooking helpful tools for patient care, and over-relying on it, possibly neglecting other tools and compromising the quality of care. FSMB cautions that once a provider decides to use AI for clinical decision support, they also “accept responsibility to respond appropriately to the AI’s recommendations.” Whether the practitioner accepts or rejects an AI recommendation, the guidance notes, there should be a solid, appropriately documented rationale behind that decision. The choice of how to treat the patient always lies with the practitioner, who remains accountable for that decision.
The new document also emphasizes the importance of protecting health information, data, and privacy. Protecting patient information has been a major topic in health privacy over the last few years, and practitioners should take steps to ensure such information is kept secure. This includes being aware of the security measures in place and complying with current, relevant laws. Practitioners should also ensure that any information AI creates and records in a patient record is accurate.
Additionally, the document notes that the regulatory process will have difficulty keeping pace with AI’s technological development. FSMB therefore recommends that state medical boards and other relevant parties focus on how AI is utilized and governed through already established ethical principles. The guidance goes on to identify seven such principles and makes several recommendations under each.
Examples of recommendations include:
1. Transparency and Disclosure
- Licensees should be required to maintain transparency about the use of AI in health care.
2. Education and Understanding
- FSMB and its partners in the medical education community should identify structured educational resources for physicians, medical boards, and patients about AI in healthcare. Such programs should include resources to help understand how AI works, its benefits, potential risks, and implications for patient care.
3. Responsible Use and Accountability
- Hospital systems, insurers, or others who select AI tools to support clinical decision making should provide physicians with education about AI tools, access to performance reports of the individual tools, and should design a process for regular review of the efficacy of the tools.
4. Equity and Access
- Efforts should be made to ensure equitable access to the benefits of AI for all patients.
5. Privacy and Data Security
- Licensees should generally be informed about how patient data will be used and be prepared to convey this to patients.
6. Oversight and Regulation
- FSMB should explore and pilot ways in which AI can aid medical boards in decision-making, with the potential to shift from a reactive to proactive system.
7. Continual Review and Adoption of Laws and Regulations
- Policy makers should consider the impact of AI on fundamental legal principles such as the definition of the practice of medicine and the impact of AI on the corporate practice of medicine.
Finally, FSMB notes that this document is only a starting point: the use of AI will need to be continually monitored and the policy around it regularly refined. Moving forward, policy development should be a collaborative effort involving not only regulatory agencies but also physicians, health systems, data scientists, and state medical boards. Like telehealth technologies, AI is a useful tool meant to enhance, not replace, human judgment and accountability in the provision of care. Although medical boards cannot directly regulate AI, they can oversee and recommend how physicians use AI and other technological advancements to provide care.
Last week, the Bipartisan Senate AI Working Group released its roadmap for AI policy, Driving US Innovation in Artificial Intelligence. Because AI potentially spans multiple existing Senate committees, the working group was created to complement the committees’ work on this developing policy area. The roadmap examines many fields AI may impact, health care among them. It is intended as guidance for lawmakers on future regulatory action and thus does not contain specific policy recommendations; rather, it is meant to inform the development of such recommendations as the appropriate committees take up the work of regulating AI.
Some of the specific health-related suggestions include:
- Consider legislation that both supports further deployment of AI in health care and implements appropriate guardrails and safety measures to protect patients…This includes consumer protection, preventing fraud and abuse, and promoting the usage of accurate and representative data.
- Consider legislation that would provide transparency for providers and the public about the use of AI in medical products and clinical support services, including the data used to train the AI models.
- Consider policies to promote innovations of AI systems that meaningfully improve health outcomes and efficiencies in health care delivery.
As the foregoing examples show, some of the suggestions from the Bipartisan Senate AI Working Group touch on the same concerns raised in the FSMB document, particularly transparency (to both the patient and the provider) about when AI is being utilized and putting protections in place for patients. These are health care policy issues that will need to be monitored as AI policy develops.
To read more about the recommendations, download a copy of Driving US Innovation in Artificial Intelligence; a complete copy of FSMB’s “Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice” can be accessed from the link provided.