Risk and Harm: Unpacking Ideologies in the AI Discourse

Gabriele Ferri, Inte Gloerich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

We examine ideological differences in the debate surrounding large language models (LLMs) and AI regulation, focusing on the contrasting positions of the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) Institute. The study employs a humanistic HCI methodology, applying narrative theory to HCI-related topics and analyzing the political differences between FLI and DAIR as they bear on LLM research. Two conceptual lenses, “existential risk” and “ongoing harm,” are applied to reveal the organizations’ differing perspectives on AI’s societal and cultural significance. Adopting a longtermist perspective, FLI prioritizes preventing existential risks, whereas DAIR emphasizes addressing ongoing harms and human rights violations. The analysis further discusses the two organizations’ stances on risk priorities, AI regulation, and the attribution of responsibility, ultimately revealing the diverse ideological underpinnings of the debate on AI and LLMs. Our findings highlight the need for further study of longtermism’s impact on vulnerable populations, and we urge HCI researchers to attend to the subtle yet significant differences in the discourse on LLMs.
Original language: English
Title of host publication: Proceedings of the 5th International Conference on Conversational User Interfaces, CUI 2023
Editors: Minha Lee, Cosmin Munteanu, Martin Porcheron, Johanne Trippas, Sarah Theres Völkel
Place of publication: New York
Publisher: Association for Computing Machinery
Number of pages: 6
ISBN (Print): 9798400700149
Publication status: Published - 2023
Event: CUI '23: ACM 5th Conference on Conversational User Interfaces - Eindhoven, Netherlands
Duration: 19 Jul 2023 – 21 Jul 2023
