The Center for Practical Bioethics (CPB) and Cerner Corporation collaborated to present “Defining Ethical Guidelines for Augmented Intelligence” on August 9, 2019. KC Digital Drive representatives were invited to attend. From 8 am to 5 pm, attendees heard lectures on ethics, augmented intelligence (AI), and healthcare, and discussed those topics in three contexts: design and development, implementation and dissemination, and user experience.

CPB and Cerner recognized that healthcare will soon be saturated with AI and that their organizations are in a unique position to develop and implement ethical AI strategies to meet our region’s unique needs. In late 2018, they began collaborating on this initiative. A steering committee led by CPB’s Matthew Pjecha, MS, and Cerner’s Lindsey Jarrett, PhD, met throughout the first half of 2019 to plan this workshop, the starting point for Kansas City’s healthcare community to address the ethical challenges of AI in healthcare. Over 70 professionals were individually invited to participate.

Invited stakeholders represented three broad categories. The first was design and development, including engineers, researchers, computer scientists, data specialists, and others who create AI tools. The second was dissemination and implementation, including clinicians, providers, payers, and others who use AI tools. The third was patients and community, including those who would be subject to, and potentially at risk from, AI tools, from patients, their advocates, and other beneficiaries to corporate and government policymakers. The organizations represented ranged from startups to community health agencies, and from faith-based educational institutions to the private IT sector. Three of the day’s speakers each had expertise in one of these categories. The fourth, an ethics expert from CPB, was equipped to give an ethics crash course. Seven facilitators from Cerner’s UX team helped Pjecha and Jarrett organize and facilitate the event. Cerner executive sponsors Eva Karp and Dick Flanigan served as an advisory council for the initiative.

After a welcome from the workshop conveners, the first speaker began the day’s learning. That speaker was Kathy Pham, MS, Computer Scientist and Product Leader at Harvard and Mozilla. She spoke on Ethics in Design and Development of Healthcare Intelligence. Pham started with an explication of her background as a software engineer, researcher, and consultant.

Throughout her work, she noticed a recurring theme: design and development teams not fully understanding the systems they were designing software for, and especially not the people who would be using that software. Aviation software, Pham explained, is a clear example of why this matters: when developers lack an understanding of what it’s like to be a pilot in the cockpit, the software those pilots use may be less effective.

Because of this, one of Pham’s most emphasized points was the reminder that “we are not our users.” Software built solely from the developer’s own experience will not work the way it is supposed to. Accordingly, more and more attention has been paid to the human rights, justice, and ethical concerns raised by the failures of AI, and, Pham argues, there are not enough data professionals trained to think about these issues. This led to her two main points. The first: power exists with the builders, meaning the developers and designers of software will influence the outcomes of its use. The second: honor expertise, meaning it is important to include the other fields involved in a system throughout the entire development process.

Pham’s example of the development cycle is as follows: scope, research, synthesize, ideate, prototype, validate, and trust. Many teams only bring in people from outside the development team for those last two phases, but Pham argues that this expertise should be involved throughout to help identify blind spots. This may help in understanding why technology hasn’t worked for all populations. In this way, Pham argues, human-centered design can be prioritized.
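To make that point concrete, here is a minimal sketch of what requiring outside expertise at every phase might look like. It is an illustration assumed for this writeup, not code from Pham’s talk; the phase names come from her cycle, while the function, reviewer labels, and plan are hypothetical.

```python
# A minimal sketch (an assumption, not code from the talk): treating outside
# expertise as a required check at every phase of Pham's cycle, not just the
# last two.

PHASES = ["scope", "research", "synthesize", "ideate",
          "prototype", "validate", "trust"]

def phases_missing_outside_expertise(reviewers_by_phase):
    """Return the phases with no reviewer from outside the dev team."""
    gaps = []
    for phase in PHASES:
        reviewers = reviewers_by_phase.get(phase, [])
        if not any(r != "engineering" for r in reviewers):
            gaps.append(phase)
    return gaps

# Hypothetical plan that consults outsiders only at the end -- the pattern
# Pham warns against: every earlier phase is flagged as a blind spot.
plan = {
    "validate": ["engineering", "clinician"],
    "trust": ["engineering", "patient advocate"],
}
print(phases_missing_outside_expertise(plan))
# ['scope', 'research', 'synthesize', 'ideate', 'prototype']
```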

The next speaker was Ryan Pferdehirt, PhD, Director of Membership and Ethics Education at the Center for Practical Bioethics. He gave attendees an Ethics 101 Crash Course. The talk started with a definition of ethics as: 1) a general pattern of one’s way of life; 2) a set of rules of conduct or moral code; 3) inquiry about ways of life and rules of conduct. Ethics, then, relates to one’s character. 

Pferdehirt explained that ethics is not merely something for academics to contemplate. Everyone engages with questions of right and wrong, and learning how to ask the right questions is itself part of ethics. He was sure to clarify that ethics is not about determining right and wrong, but rather the study of it, emphasizing the point with the example of ethics as a process rather than a factory.

Pferdehirt went on to explain normative versus applied ethics. The former includes deontology, virtue ethics, and consequentialism, while the latter concerns obligations, such as in bioethics and business. 

For him, principlism was most relevant in this context, particularly with regard to patient interactions. Ideally, these interactions embody four principles: autonomy (patient self-determination), beneficence (the obligation to do good), nonmaleficence (“do no harm”), and justice (fairness and equity). Healthcare ethics dilemmas arise when two or more of these principles come into conflict: for example, when a patient refuses a treatment due to their faith, in the risk-benefit analysis that comes with chemotherapy, or in the sometimes prohibitive costs of medication.

Pferdehirt went on to present his proposed ethical framework for AI: fairness, inclusion, accountability, transparency, data privacy and security, and safety and reliability. The cycle for applying such a framework runs from identifying an ethical problem, through collecting information, evaluating it, considering alternatives, and making a decision, to acting and implementing, after which the cycle begins again. The ethics consultations built on this process are best done in groups with a variety of voices and perspectives.
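As one way to picture how a group might work through that framework, here is a minimal sketch, assumed rather than taken from the talk, of the six tenets encoded as a review checklist; the sepsis-alert model and the evidence notes are hypothetical.

```python
# A minimal sketch, not from the talk: Pferdehirt's six tenets as a checklist
# an ethics consultation group could walk through for a given AI tool.

TENETS = ["fairness", "inclusion", "accountability", "transparency",
          "data privacy and security", "safety and reliability"]

def unreviewed_tenets(evidence):
    """Return the tenets with no documented evidence for this tool."""
    return [t for t in TENETS if not evidence.get(t, "").strip()]

# Hypothetical consultation notes for a sepsis-alert model.
notes = {
    "fairness": "error rates compared across demographic groups",
    "transparency": "clinicians can see the inputs behind each alert",
}
print(unreviewed_tenets(notes))
# ['inclusion', 'accountability', 'data privacy and security',
#  'safety and reliability']
```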

Pferdehirt closed with a distinction between ethics and morality. Morality, he explained, is something natural that changes as culture changes. Ethics are the rules societies put in place to uphold a given morality. 

After lunch, attendees heard from Mark Hoffman, PhD, Chief Research Information Officer at Children’s Mercy Hospital. Hoffman spoke on Ethics in Implementation and Dissemination of Healthcare Intelligence. He began his talk by explaining that he has conceded his drive to work to Siri. However, there are some things that Siri doesn’t know, such as roads that are subject to sudden traffic jams or construction on smaller roads. When talking about ethics in AI, then, it is important to be cognizant of what the algorithm can’t take into account when making decisions.

Hoffman reinforced the earlier discussion of ethics, stating that it is about how we make decisions; it involves both rules and utility, and it is situational. He went on to describe medicine as a stream of life-affecting decisions, often with life-or-death consequences. Ethics in medicine, then, includes considering how to deliver the best outcome, how to deliver safe care, how to manage costs, and how to respond to patient priorities and values.

He then pivoted to discussing the need for AI and algorithms in healthcare. The reasons include the limits of human memory and perception, major opportunities to improve the quality of care, efficiency pressures and the shortage in the healthcare workforce, the high costs of healthcare, and the knowledge explosion. But the algorithms that can help close these gaps in human ability do not just happen. Most data is “cleansed” before it is fed into an algorithm, meaning the person behind the algorithm has a large impact. The quality of that person’s supervision is hugely important, but the need for it is not always recognized or acknowledged by developers.
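As a hypothetical illustration of Hoffman’s point, consider how a single cleansing rule as simple as dropping incomplete records quietly decides whose data the algorithm ever gets to see; the records below are invented for the example.

```python
# An invented illustration: "cleansing" data by dropping incomplete records
# is a human choice that shapes what the algorithm is trained on.

records = [
    {"age": 70, "clinic": "urban", "blood_pressure": 150},
    {"age": 64, "clinic": "urban", "blood_pressure": 138},
    {"age": 59, "clinic": "rural", "blood_pressure": None},  # reading missing
    {"age": 72, "clinic": "rural", "blood_pressure": None},  # reading missing
]

# The cleansing step: keep only complete records.
cleaned = [r for r in records if r["blood_pressure"] is not None]

# The quiet consequence: rural patients vanish from the training data.
print({r["clinic"] for r in cleaned})  # {'urban'}
```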

It is therefore important to understand the policy environment of given data, rather than reading it and trying to understand it in a vacuum. When incorporating AI into medical decision making, we must have high confidence in the logic and rules of the process. There is currently no policy framework for the implementation and outcomes of this process, nor a framework for assessing the risks involved. To start introducing AI into healthcare, then, it would be logical to begin in lower-risk areas such as administration. It is also crucial to understand the limitations of AI, and especially to recognize when humans are needed to supplement it.
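One common pattern consistent with that advice is a human-in-the-loop gate. The sketch below is an assumed illustration, not a system described at the workshop; the tasks, risk labels, and confidence threshold are made up.

```python
# A minimal human-in-the-loop sketch (an assumed pattern): the AI acts alone
# only on low-risk, high-confidence cases; everything else routes to a person.

def route(task, risk, confidence):
    """Decide whether the AI may act alone or a human must review."""
    if risk == "low" and confidence >= 0.95:  # illustrative threshold
        return f"AI handles: {task}"
    return f"human review required: {task}"

print(route("schedule a follow-up appointment", "low", 0.98))
print(route("adjust a medication dosage", "high", 0.99))  # always escalated
```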

The final speaker was Paul Weaver, Vice President of User Experience and Human Factors Research at Cerner Corporation. He spoke about Killer Robots, Boss Fights, and Healthcare: Trust Can’t be Artificial. Weaver has experience in the video game industry, and he drew on video game and film history to explain the general wariness people tend to have about AI, a wariness reinforced by rhetoric from prominent figures in business and the media.

Weaver covered several decades of film and video game history, noting themes of “killer robots” but also of lifelike robots with human qualities. The ability to give human qualities to AI, he argues, is where AI can be implemented in a way that makes a real difference. This matters because AI can work longer and more consistently than humans, finding gaps that people will inevitably miss. Still, particularly when life-or-death decisions are involved, reasonable trust in AI requires consistently high-quality decisions.

Between speakers, workshop attendees engaged in group activities that built on the speakers’ insights, developing individual and group perspectives on how ethics in AI is already practiced, and how it should be practiced, throughout the design and development, implementation and dissemination, and user experience phases. Using the six-tenet framework set out by Pferdehirt, groups were asked to list what each of the six components should mean in each of those three phases, given their occupational role.

The workshop’s purpose was to examine the concerns of Kansas City area stakeholders and to propose solutions. Key questions addressed fairness, inclusion, accountability, transparency, data privacy and security, and reliability and safety. The three main objectives were for each individual to share with the participants at their table what each keyword looks like from their professional and personal perspective; for each table to discuss and record what each keyword should look like across design, development, implementation, dissemination, and user experience; and for the room to reach consensus on recommendations for incorporating ethics across the pillars of healthcare intelligence.

At the end of the day, the whole room came together for a large group session structured around the common themes facilitators observed during the table sessions. Ten tables of about five participants each contributed to the workshop, and facilitators processed roughly 360 sticky notes of recommendations to develop the themes for the discussion: bias and diversity, auditability, feedback loops, limitations in models, humans and AI, and data use and ownership. CPB and Cerner aim to produce a final report by the end of 2019.

During breaks, attendees could explore interactive AI demos from Mozilla: Survival of the Best Fit, Stealing Ur Feelings, and A Week with Wanda. Survival of the Best Fit is an interactive game that shows players how machine learning software can be biased, especially in automated hiring decisions. Stealing Ur Feelings is an interactive film experience that demonstrates how social networks and other apps use facial analysis to collect data on users’ emotions. A Week with Wanda is a web-based simulation meant to explore the risks and rewards of AI: Wanda is an AI assistant who interacts with users over the course of a week to try to improve their lives, but quickly does just the opposite. All three projects were recipients of Mozilla’s 2018 Creative Media Awards.
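For readers curious about the mechanism Survival of the Best Fit demonstrates, here is a toy, invented example of how a model “trained” on biased historical hiring decisions reproduces the bias rather than merit; the schools, scores, and decisions are fabricated for illustration.

```python
# A toy, invented illustration: a rule learned from biased hiring history
# encodes the bias, not candidate quality.

history = [
    {"school": "A", "score": 90, "hired": True},
    {"school": "A", "score": 70, "hired": True},   # weak candidate, hired
    {"school": "B", "score": 95, "hired": False},  # strong candidate, rejected
    {"school": "B", "score": 88, "hired": False},
]

def hire_rate(school):
    """'Training': estimate the hire rate per school from past decisions."""
    past = [r for r in history if r["school"] == school]
    return sum(r["hired"] for r in past) / len(past)

# 'Prediction': the learned rule ignores scores entirely.
for school in ("A", "B"):
    print(school, hire_rate(school))  # A 1.0, B 0.0 -- history becomes policy
```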
