The authors of "Automation Complacency: Risks of Abdicating Medical Decision Making" (AI Ethics) will present their work at a CLE event hosted by Columbia University Bioethics on November 13 at 6:15 pm. The webinar will explore what leads people to rely on AI, including time constraints and automation bias.
The risks of AI in clinical decision making range from errors to interference with building trust in science. Reliance on big data and correlation also differs from reliance on clinical trial data and on chemical, biological, and physics-based computational analyses. AI may create illusions of understanding, which the clinician then passes on to the healthcare consumer.
Explainability is often at the forefront of discussions about appropriate reliance on AI. Open questions remain about whether reliance on AI that has been proven highly accurate, precise, and specific is justified in the absence of an explainable algorithm. Challenges also arise in distinguishing the possibility of significant benefits from the hype. Beyond machine learning, the use of generative AI adds further complexity to the ethical analysis. Both LLMs and LMMs in the healthcare arena carry risks as well as benefits. The ability of AI to create new content is an opening both for errors (hallucination, poor summaries of journal articles, incorrect attribution of ideas, incomplete recommendations) and for flaws, including bias. Generating new images, text, code, synthetic data, summaries of meeting notes, and chats and emails for patient communications brings special issues to the forefront of AI ethics in healthcare.
The authors will address this complexity and the ethical issues it raises. Michael Saadeh, lead author, will be joined by Joel Janhonen, Camille Castelyn, Emily Beer, and David Hoffman.