Policy Talks 2024: A National Convening on Ethics, Narratives, and Artificial Intelligence
We are thrilled to announce Policy Talks 2024! Some details about this event are below, but we welcome you to send us any questions you have: firstname.lastname@example.org.
Policy Talks is an invitation-only event produced by The Center for Practical Ethics. It brings together a group of academics, business leaders, legal experts, and policymakers to learn about recent industry practices, explore current academic educational and research models, and consider ethical challenges and goals.
On Friday, February 9th from 9:30 a.m. – 4 p.m., we will discuss this year’s topic, “When Human Narratives Meet Artificial Intelligence: Responsible Design and User Protection.”
Topic – When Human Narratives Meet Artificial Intelligence:
Responsible Design and User Protection
Research on Artificial Intelligence continues to advance at a rapid pace, with developments closely tracked by a host of governments, regulatory agencies, and oversight bodies. No one is sure what the future will bring, either for the technology itself or for the laws and policies that govern it. What is clear is the transformative power of AI for human experience.
Part of the trouble with prediction, regulation, and ethics in this area is that the term ‘Artificial Intelligence’ refers to a sprawling and diverse field of technology. It is not the case that every AI system is like every other. Task-respondent AI systems function differently from AI systems designed to simulate agents within teams or dynamic decision-making. Different goals yield different methods, functions, applications, and outcomes, and moral relevance and outcomes change when we consider various aspects of human-AI activity. How to design an AI system with goals such as trustworthiness in mind is one question; how doctors should make use of recommendations from AI is another. These questions differ and cannot be answered by the same moral concepts, methods, or practices.
To narrow the focus and frame our conversations, the topic for Policy Talks 2024 is “When Human Narratives Meet Artificial Intelligence: Responsible Design and User Protection.” Narratives encode and convey information among humans with an efficiency, memorability, and impact that mere propositions cannot. Narratives are powerful tools for communication and understanding, and AI systems that make use of narrative features and dynamic functions affect user psychology in powerful ways.
Our focus on narrative is not about students cheating on their creative writing papers using AI systems. It is about how some AI systems utilize narrative features and how they impact the psychology of human agents. We are familiar with algorithms on social media that can maintain and transmit narratives which strongly reinforce beliefs that desperately need to be questioned. Similarly, AI systems used in police training can support a narrative—operating below the level of a user’s conscious awareness—that has devastating consequences.
How are we to understand narratives and AI? What moral obligations do users and developers have to ensure safe and ethical use of these systems? What guardrails ought to be in place for design and application? These are open questions that need to be answered. Policy Talks 2024 is our contribution to mapping out the moral landscape for AI, ethics, and narratives. To explore answers, we will examine existing policy proposals, analyses and ethical concepts from philosophy, methods and technologies from data and computer science, concepts of narrative and their features from narratology, and research from psychologists on the impact on human agents. We will examine this underexplored area of human-AI interaction, ask hard questions, and work toward practical solutions to real-world problems.
Working groups are the heart of Policy Talks. They are composed of the experts we have invited in order to gain a better understanding of the context, risks, and opportunities of our topic. Each person is assigned to a working group and joins in an intimate yet vigorous discussion about the topic with fellow experts and stakeholders. A notetaker* will keep track of the ideas the group covers. The notes and ideas generated in working groups provide us with the base materials for the initial report we send to all attendees. This report serves as our launching point as we dive deeper into our investigation and draft the white paper. You can learn more about the Policy Talks process at this link.
Users and Trust
How is trust in AI systems understood, gained, used, and abused?
Are there ways to help policymakers ensure that insights from diverse fields and industries (medicine, law enforcement, etc.) are consistently included?
Designers and Communicative Interaction
What assumptions are users and designers making about communicative interactions with AI systems?
Education and Development
How can we train budding computer scientists and engineers to build AI systems with ethical constraints in mind?
Guidance and Guardrails
What guidelines should be adopted to ensure the ethical development and adoption of AI technology for organizations or institutions?
Societal Values and Public Impact
What changes will AI bring for workers, social institutions, disadvantaged groups, etc.?