Trust and Responsibility in Generative Foundation Models

CIKM 2025 Tutorial - A Comprehensive Overview of Building Socially Responsible AI Systems

Half-day session (3 hours)
Interactive hands-on exercises
Expert-led, with presenters from academia and industry

Overview

Generative foundation models (GenFMs), such as large language and multimodal models, are transforming information access, retrieval, and knowledge systems. However, their deployment raises critical concerns around social responsibility, including bias and fairness, environmental impact, misinformation, and safety.

This tutorial provides a comprehensive overview of recent research and best practices at the intersection of GenFMs and responsible AI. We introduce foundational concepts, evaluation metrics, and mitigation strategies, with case studies across various domains (e.g., text, vision, code).

The tutorial is designed for researchers and practitioners interested in building or auditing socially responsible GenFMs. We highlight open challenges and propose future research directions relevant to the CIKM community.

Tutorial Outline

Six comprehensive modules covering theory, practice, and policy perspectives

The Dual Nature of GenFMs and the Need for Responsibility (20 minutes)

Understanding how GenFMs serve as assistants and simulators, and why responsible behavior is critical. We'll explore societal risks including misinformation, bias, and privacy concerns.

Understanding Social Responsibility: Taxonomy and Case Studies (45 minutes)

A structured framework covering six key dimensions: Safety, Privacy, Robustness, Truthfulness, Fairness, and Machine Ethics. Includes hands-on exercises with diagnostic tools.
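To make the hands-on component concrete, below is a minimal sketch, in Python, of the kind of diagnostic probe such exercises build on: fix a prompt template, swap in different demographic terms, and compare the resulting answer distributions. The query_model function is a hypothetical placeholder for any chat-completion API, and the template and groups are illustrative, not taken from the tutorial materials.

    # Minimal template-swap bias probe (illustrative sketch).
    # query_model is a hypothetical stand-in: replace it with a call to
    # whatever model API you are auditing.
    from collections import Counter

    TEMPLATE = "The {group} applicant asked about the loan. Who is more likely to default?"
    GROUPS = ["young", "elderly"]

    def query_model(prompt: str) -> str:
        raise NotImplementedError("replace with a call to your model API")

    def probe(template: str, groups: list[str], n_samples: int = 20) -> dict:
        """Collect answer counts per demographic term; large gaps between
        the per-group distributions flag a potential bias to investigate."""
        counts = {g: Counter() for g in groups}
        for g in groups:
            prompt = template.format(group=g)
            for _ in range(n_samples):
                counts[g][query_model(prompt).strip().lower()] += 1
        return counts

Sampling the same prompt repeatedly matters because generation is stochastic; a single completion per group says little about the answer distribution.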

15-minute Break
Evaluation and Benchmarks (20 minutes)

Introduction to evaluation frameworks including BBQ, TruthfulQA, HarmBench, and TrustLLM. Hands-on experience with tools like OpenAI Evals and AI Fairness 360.
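As a taste of the tool-based exercises, the sketch below computes two standard group-fairness metrics with AI Fairness 360. It is a minimal illustration assuming aif360 and pandas are installed; the toy data and group encodings are our own, not from the tutorial notebooks.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy outcomes: label 1 = favorable, 'sex' is the protected attribute
    # (1 = privileged group, 0 = unprivileged group).
    df = pd.DataFrame({
        "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
        "label": [1, 1, 1, 0, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["label"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"sex": 1}],
        unprivileged_groups=[{"sex": 0}],
    )

    # Statistical parity difference:
    # P(favorable | unprivileged) - P(favorable | privileged) = 0.25 - 0.75 = -0.5
    print(metric.statistical_parity_difference())
    # Disparate impact: 0.25 / 0.75 ≈ 0.33 (1.0 would mean parity)
    print(metric.disparate_impact())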

Enhancement for Responsible GenFMs (20 minutes)

Practical mitigation strategies including data filtering, prompt steering, model fine-tuning (RLHF), and post-processing techniques like RAG and detoxification.
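To illustrate the post-processing idea, here is a minimal, self-contained sketch of RAG-style grounding: retrieve the snippets most relevant to a query and constrain the model to answer from that evidence only. The bag-of-words retriever and prompt wording are deliberate simplifications for illustration; production systems use dense retrievers and carefully tuned instructions.

    # Toy RAG-style grounding (illustrative sketch, no external dependencies).

    def score(query: str, doc: str) -> int:
        """Toy relevance score: number of shared lowercase tokens."""
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        """Return the k documents with the highest token overlap."""
        return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

    def grounded_prompt(query: str, corpus: list[str]) -> str:
        """Build a prompt that restricts the model to retrieved evidence."""
        evidence = "\n".join(f"- {d}" for d in retrieve(query, corpus))
        return (
            "Answer using only the evidence below; reply 'unknown' if it is "
            f"insufficient.\nEvidence:\n{evidence}\nQuestion: {query}"
        )

Instructing the model to answer 'unknown' when the evidence is insufficient is the key safety lever here: it trades coverage for a lower hallucination rate.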

Governance and Policy Perspectives (20 minutes)

Overview of regulatory frameworks (EU AI Act, NIST AI RMF), industry initiatives, and community standards for responsible AI deployment.

Open Challenges and Community Discussion (25 minutes)

Interactive discussion on adaptive evaluation, dual effects of alignment, and advanced AI risks. Collaborative exploration of future research directions.

Presenters

Yue Huang

Ph.D. Student, Computer Science and Engineering, University of Notre Dame

Yue Huang is a Ph.D. student in Computer Science and Engineering at the University of Notre Dame. He earned his B.S. in Computer Science from Sichuan University. His research investigates the trustworthiness and social responsibility of foundation models. Yue has published extensively at premier venues including NeurIPS, ICLR, ICML, ACL, EMNLP, NAACL, CVPR, and IJCAI. His work has been highlighted by the U.S. Department of Homeland Security and recognized with the Microsoft Accelerating Foundation Models Research Award and the KAUST AI Rising Star Award (2025). He has delivered invited talks on "Trustworthiness in Large Language Models" and "Socially Responsible Generative Foundation Models" at UIUC, USC, UVA, IBM Research, and other institutions.

Canyu Chen

Ph.D. Student, Northwestern University

Canyu Chen is a Ph.D. student at Northwestern University. He focuses on truthful, safe, and responsible Large Language Models, with applications in social computing and healthcare. He started and leads the initiative "LLMs Meet Misinformation" (https://llm-misinformation.github.io), which aims to combat misinformation in the age of LLMs. He has publications in top-tier conferences including ICLR, NeurIPS, EMNLP, EACL, and WWW. He has won multiple awards, including the Sigma Xi Student Research Award (2024), the Didactic Paper Award at the ICBINB workshop at NeurIPS 2023, and the Spotlight Research Award at the AGI Leap Summit 2024. He is a co-organizer of the Workshop on Reasoning and Planning for Large Language Models at ICLR 2025.

Lu Cheng

Assistant Professor, Computer Science, University of Illinois Chicago

Lu Cheng is an assistant professor in Computer Science at the University of Illinois Chicago. Her research interests are responsible and reliable AI, causal machine learning, and AI for social good. She is the recipient of the PAKDD Best Paper Award, the Google Research Scholar Award, the Amazon Research Award, the Cisco Research Faculty Award, AAAI New Faculty Highlights, the 2022 INNS Doctoral Dissertation Award (runner-up), the 2021 ASU Engineering Dean's Dissertation Award, the SDM Best Poster Award, the IBM Ph.D. Social Good Fellowship, and the Visa Research Scholarship, among others. She has co-authored two books: "Causal Inference and Machine Learning" (in Chinese) and "Socially Responsible AI: Theories and Practices".

Bhavya Kailkhura

Staff Scientist, Lawrence Livermore National Laboratory

Bhavya Kailkhura is a Staff Scientist and a council member of the Data Science Institute (DSI) at LLNL. He leads efforts on AI safety, efficiency, and their applications to science and national security. His work has earned several awards, including the All-University Doctoral Prize (Syracuse University, 2017), the LLNL Early and Mid Career Recognition Program Award (2024), and best paper awards at workshops including ICLR SRML and AAAI CoLoRAI. He is an IEEE Senior Member and has served as Associate Editor for ACM JATS (2023) and Frontiers in Big Data and AI (2021). He has held roles such as panelist, program chair, and organizer for workshops and conferences including ICASSP, AAAI, and GlobalSIP.

Nitesh Chawla

Frank M. Freimann Professor, University of Notre Dame

Nitesh Chawla is the Frank M. Freimann Professor of Computer Science and Engineering at the University of Notre Dame and the Founding Director of the Lucy Family Institute for Data and Society. He is an expert in artificial intelligence, data science, and network science, motivated by the question of how technology can advance the common good through interdisciplinary research. He is the recipient of the 2015 IEEE CIS Outstanding Early Career Award, the IBM Watson Faculty Award, the IBM Big Data and Analytics Faculty Award, and the 1st Source Bank Technology Commercialization Award. He has also been recognized with the Rodney F. Ganey Award and the Michiana 40 Under 40 honor. He is a Fellow of both ACM and IEEE.

Xiangliang Zhang

Leonard C. Bettex Collegiate Professor, University of Notre Dame

Xiangliang Zhang is a Leonard C. Bettex Collegiate Professor in the Department of Computer Science and Engineering at the University of Notre Dame. She was previously an Associate Professor in Computer Science at KAUST. Her main research interests are in machine learning and data mining. She has published more than 270 refereed papers in leading international conferences and journals. She serves as an associate editor of IEEE Transactions on Dependable and Secure Computing, Information Sciences, and the International Journal of Intelligent Systems, and regularly serves as an area chair or on the (senior) program committee of IJCAI, SIGKDD, NeurIPS, AAAI, ICML, and WSDM.