Published Works

Papers that members of the group authored or co-authored as part of the forum are listed below:

Laretto, P., Bowers, O., Zimmerman, A. The Dark Side of Wearables: A Potential Fertility Surveillance Network. Bioethics Today. September 2, 2025. https://bioethicstoday.org/blog/the-dark-side-of-wearables-a-potential-fertility-surveillance-network/ The paper discusses apps that track biometric data, including respiration, heart rate, sleep/wake cycles, and movement; how training data is collected; the ability to estimate the likelihood of pregnancy from that data; and the ability of law enforcement and others to access data that seems unrelated to pregnancy.

Saadeh, M.I., Janhonen, J., Beer, E., Castelyn, C., Hoffman, D. Automation complacency: risks of abdicating medical decision making. AI Ethics (2025). https://doi.org/10.1007/s43681-025-00825-2 This work investigates automation complacency in relation to decision support systems used in healthcare contexts, especially their impact on clinicians, patients, and the quality of care. While AI and decision support systems can enhance efficiency and outcomes in healthcare, the potential for automation bias carries clinical perils. These include eroded vigilance, impoverished therapeutic relationships, and potentially poorer outcomes regarding overall well-being. This work highlights these concerns to urge actors in the health sector to integrate technology in a way that spares cognitive resources without compromising the essential role of human experts in making medical decisions. To ensure decision support improves patient care, it is crucial to balance computational processing of information with embodied local expertise; we provide a possible starting point for mindful integration. The implementation of systems in the clinical context should encourage vigilance and guard against fatigue and complacency. There is reason to be excited about increasingly efficient and available care. If the risks of automation complacency are avoided, shared time and resources can be used to preserve and promote valuable interactions, insights, and holistic aspects of care.

Janhonen, J., Zimmerman, A., Bowers, O. TikTok’s Influence on Social Health and Democracy: A Bioethical Perspective. Digit. Soc. 4, 52 (2025). https://doi.org/10.1007/s44206-025-00210-w This paper explores China’s potential to use TikTok to harm adolescents in the US through demoralization, failure to moderate dangerous content, and anti-democratic sentiment. It examines TikTok from a bioethical viewpoint, focusing on its effects on social health and democracy. Human engagement with social media, and the platform’s sway over users, appears to narrow user identities, fuel outgroup animosity, and exacerbate societal polarization. We call attention to the risks posed by TikTok’s connection to an authoritarian government that may have an incentive to demoralize and confuse. Inorganic trends could distort a country’s national image, shaping attitudes with tailored representations. As a discipline devoted to bringing insights and new perspectives to inform public deliberation, bioethics should address social media and its influence on sense-making and the health of democratic societies.

Zimmerman, A. and Lively, C. (2025) EMTALA and State Abortion Bans: Juggling a Power Struggle. Voices in Bioethics. https://doi.org/10.52214/vib.v11i.13414 The paper is an explainer for bioethicists. Written by lawyers, it explores the reasoning in relevant case law. The authors argue that EMTALA should require abortions when they are needed to stabilize patients despite state laws, while noting relevant counterarguments and the reasoning of Supreme Court justices who may weigh in further in future cases.

Fain, R. Save the Pigs: An Argument Against Xenotransplantation (For Now). Bioethics Today. March 17, 2025. https://bioethicstoday.org/blog/save-the-pigs-an-argument-against-xenotransplantation-for-now/ Dr. Fain’s practical viewpoint suggests that efforts to explore alternatives, especially reducing waste and improving the efficiency of organ procurement organizations, are morally required before resorting to wide-scale xenotransplantation. She also addresses the “ick” factor that prevents people from registering as donors. This piece arose from the efforts of a group of forum members to contextualize xenotransplantation in animal ethics. That group was led by Emily Beer and included Olivia Bowers and Anne Zimmerman.

Zimmerman, A. and Lively, C. NRP, The Dead Donor Rule, and Consent Requirements in Post-Death Organ Donation. The Health Lawyer (American Bar Association). August 28, 2024. https://innovativebioethicsforum.com/wp-content/uploads/2025/08/nrp-pdf.pdf This article focuses on normothermic regional perfusion (NRP). The authors take the position that NRP must require separate, explicit consent in addition to consent to donate. The article touches on family and donor expectations, the deviation from perceptions of the line between life and death, the dead donor rule, and strategies to ensure ethical practices in organ transplantation.

Zimmerman, A., Lively, C.P., Bowers, O. The Right to Information During Migration: A Call for Transparency and Internet Access. Columbia University Academic Commons (2024). https://doi.org/10.7916/ykx1-yk52 This paper explores the rights to accurate information and to access tools, including the internet, during the migration and settlement process. The example explored is migrants who traveled north from southern states in the United States under false pretenses about opportunity and jobs. The paper argues that the benefits migrants bring to communities certainly should be spread throughout states, as should the costs and the infrastructure necessary to assimilate them. Transparency and information, however, are important rights that allow migrants to make informed decisions about their futures. When Governor DeSantis arranged flights from Texas (not Florida) to Martha’s Vineyard, passengers were recruited under false pretenses. The fraud violated the human right to information set forth in the Principles and Guidelines on the Human Rights Protections of Migrants in Vulnerable Situations. A copy is available on ResearchGate. DOI: 10.13140/RG.2.2.26621.17128

Janhonen, J. Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00372-8 This article describes an alternative starting point for embedding human values into artificial intelligence (AI) systems. As applications of AI become more versatile and entwined with society, an ever-wider spectrum of considerations must be incorporated into their decision-making. However, formulating less-tangible human values into mathematical algorithms appears incredibly challenging. This difficulty is understandable from a viewpoint that perceives human moral decisions as stemming primarily from intuition and emotional dispositions, rather than logic or reason. Our innate normative judgements promote prosocial behaviours that enable collaboration within a shared environment. Individuals internalise the values and norms of their social context through socialisation. The complexity of the social environment makes it impractical to consistently apply logic to pick the best available action. This has compelled natural agents to develop mental shortcuts and rely on the collective moral wisdom of the social group. This work argues that the acquisition of human values cannot happen just through rational thinking, and hence, alternative approaches should be explored. Designing receptiveness to social signalling can provide context-flexible normative guidance in vastly different life tasks. This approach would approximate the human trajectory for value learning, which requires social ability. Artificial agents that imitate socialisation would prioritise conformity by minimising detected or expected disapproval while associating relative importance with acquired concepts. Sensitivity to direct social feedback would be especially useful for AI that possesses some embodied physical or virtual form. The work explores the necessary faculties for social norm enforcement and the ethical challenges of navigating based on the approval of others.

Zimmerman, A., Janhonen, J. and Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8 This paper argues that people have a tendency to anthropomorphize technology. As technology becomes more lifelike, people engage with it as they would with other people, yet technology cannot return affection, love, or even some of the feelings associated with being a mere acquaintance. The nature of friendship is changing with technological artefacts like Replika. As people use technology as a tool for caring and companionship, we risk overlooking societal necessities like personal care for older adults and community for those who may be isolated or experiencing loneliness.

Investigating the Darker Side of AI and Its Impact on Human Relationships. SciTube Video. https://scitube.io/investigating-the-darker-side-of-ai-and-its-impact-on-human-relationships/

Zimmerman, A., Janhonen, J. & Saadeh, M. Attention Span and Tech Autonomy as Moral Goods and Societal Necessities. Digital Society (2023) (acknowledgement to Camille Castelyn). https://doi.org/10.1007/s44206-023-00053-3 In this paper, we argue that attention span should be protected and cultivated as a societal and moral good and that individuals should have the autonomy to choose where they direct their attention without technological impediments. We use the term “distraction technology” and describe the ethical impetus to regulate it.

Zimmerman, A., Janhonen, J., Saadeh, M., Castelyn, C., and Saxén, H. Values in AI: bioethics and the intentions of machines and people. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00242-9 Artificial intelligence has the potential to impose the values of its creators on its users, those affected by it, and society. The intentions of creators as well as investors may not comport with the values of users and broader society. Environmental, social, and governance (ESG) considerations and metrics should include ethical technology, wellness, public health, and societal wellbeing. This paper concludes that the process by which technology creators infuse values should be couched in bioethical and general ethical considerations, be reflective of multiple potential intentions, and entail a willingness and process to adapt the AI after the fact as the circumstances of its use change.

Hoffman, D. N., Zimmerman, A., Castelyn, C., & Kaikini, S. (2022). Expanding the Duty to Rescue to Climate Migration. Voices in Bioethics, 8. https://doi.org/10.52214/vib.v8i.9680 Since 2008, an average of twenty million people per year have been displaced by weather events. Climate migration creates a special setting for a duty to rescue. A duty to rescue is a moral rather than legal duty and requires a bystander to take an active role in preventing serious harm to someone else. This paper analyzes the idea of expanding a duty to rescue to climate migration. We address who should have the duty and to whom the duty should extend. The paper discusses ways to define and apply the duty to rescue as well as its limitations, arguing that it may take the form of an ethical duty to prepare.

Pending

Janhonen, J., Bowers, O., Saadeh, M. Criteria for Precautionary and Biocentric Geoengineering. This work explores the application of precautionary thinking to assess bio-geoengineering strategies aimed at enhancing the nature-based capture and burial of atmospheric carbon. By leveraging balancing mechanisms that operated during Earth’s past periods of high atmospheric carbon, such interventions could promote negative climate feedbacks that reduce warming.