Published Works

Papers that members of the group co-authored as part of the forum are listed below:

Zimmerman, A., Janhonen, J., Saadeh, M. et al. Values in AI: bioethics and the intentions of machines and people. AI Ethics (2022). Abstract: Artificial intelligence has the potential to impose the values of its creators on its users, those affected by it, and society. The intentions of creators as well as investors may not comport with the values of users and broader society. Users also may mean to use a technological device in an illicit or unexpected way. Devices change people’s intentions as they are empowered by technology. What people mean to do with the help of technology reflects their choices, preferences, and values. Technology is a disruptor that impacts society as a whole. Without knowing who intends to do what, it is difficult to rely on the creators of technology to choose methods and create products that comport with user and broader societal values. The AI is programmed to accomplish tasks according to chosen values or is doing so through machine learning and deep learning. We assert that AI is quasi-intentional and changes people’s intentions. Investors wishing to promote or preserve public health, wellbeing, and wellness should invest in ethical, responsible technology. Environmental, social, and governance (ESG) considerations and metrics should include ethical technology, wellness, public health, and societal wellbeing. This paper concludes that the process by which technology creators infuse values should be couched in bioethical and general ethical considerations, reflective of potential multiple intentions, and should entail a willingness and process to adapt the AI after the fact as the circumstances of its use change.

Hoffman, D. N., Zimmerman, A., Castelyn, C., & Kaikini, S. (2022). Expanding the Duty to Rescue to Climate Migration. Voices in Bioethics, 8. Abstract: Since 2008, an average of twenty million people per year have been displaced by weather events. Climate migration creates a special setting for a duty to rescue. A duty to rescue is a moral rather than legal duty and imposes on a bystander to take an active role in preventing serious harm to someone else. This paper analyzes the idea of expanding a duty to rescue to climate migration. We address who should have the duty and to whom the duty should extend. The paper discusses ways to define and apply the duty to rescue as well as its limitations, arguing that it may take the form of an ethical duty to prepare.

Pending Publication

Zimmerman, A., Janhonen, J., Saadeh, M., Attention Span and Tech Autonomy as Moral Goods and Societal Necessities (acknowledgment to Camille Castelyn) (publication in Digital Society pending; preprint not available). In this paper, we argue that attention span should be protected and cultivated as a societal and moral good and that individuals should have the autonomy to choose where they direct their attention without technological impediments. Attention span allows people to devote their attention to meaningful tasks and to focus on tasks or topics that require and deserve deeper concentration. Technology often diverts attention from the meaningful to the less so, and changes how we cultivate attention span. We explore the concept of distraction technology, referring to technologies that divert attention and harm attention span. While recognizing the many benefits of information technology, it is important to be aware of and address the negative impacts of distraction technology on attention span and tech autonomy. Rather than giving in to technology that needlessly interrupts, scatters thoughts, and invades spaces where people once concentrated, we argue that there is a moral duty to protect attention span and tech autonomy. Apps, computers, and smartphones have allowed faster, more efficient research and communication, but at the expense of some depth. Ethicists and the responsible tech ecosystem should support policies that preserve attention span and limit distraction. Technology developers should make design decisions that aim to keep people on task rather than distracting them. Legislators, regulators, and the tech industry should work toward responsible technology that aligns with human interests.

Janhonen, J., Zimmerman, A., Beer, E., Human/AI Relationships: Exploring the Ethical Issues. This paper argues that people have a tendency to anthropomorphize technology. The more lifelike technology becomes, the more people engage with it as they would with other people, yet technology cannot return affection, love, or even some of the feelings associated with being a mere acquaintance. The nature of friendship is changing with technological artefacts like Replika. While people use technology as a tool for caring and companionship, we risk overlooking societal necessities like personal care for older adults and community for those who may be isolated or experiencing loneliness.

Bowers, O., Lively, C., Zimmerman, A., The Right to Information During Migration. This paper explores the rights to correct information and to access to tools, including the internet, during the migration and settlement process. The example explored is migrants who traveled north from southern states in the United States under false pretenses about opportunity and jobs. The paper argues that the benefits migrants bring to communities should be spread throughout states, as should the costs and the infrastructure necessary to assimilate them. However, transparency and information are important rights that allow migrants to make informed decisions about their futures.