

The ACM CHI Conference on Human Factors in Computing Systems (CHI 2025), the annual flagship event in human-computer interaction, concluded successfully on May 1. As the most influential conference in the field worldwide, a CCF (China Computer Federation) Class A conference, and an A* (flagship) conference in the Core Conference Ranking, CHI is widely recognized for the quality of its accepted papers. The CDI Digital Innovation Center had four works accepted this year, spanning human-robot interaction, reflection-based serious games, conversational agent design, and sustainable design. Below are the research highlights presented by CDI members.
01
Papers
● GenComUI: Exploring Generative Visual Aids as Medium to Support Task-Oriented Human-Robot Communication
● Walk in Their Shoes to Navigate Your Own Path: Learning About Procrastination Through A Serious Game
02
Late-Breaking Work
● Align with Me, Not TO Me: How People Perceive Concept Alignment with LLM-Powered Conversational Agents
03
Student Design Competition
● HabitAt: Bridging Humans and Wildlife toward a Sustainable Future
01 PAPERS
GenComUI: Exploring Generative Visual Aids as Medium to Support Task-Oriented Human-Robot Communication
Yate Ge, Meiying Li, Xipeng Huang, Yuanda Hu, Qi Wang, Xiaohua Sun, and Weiwei Guo†
Keywords Human-Robot Interaction, Robot Programming, Service Robots, Conversational Interaction, Large Language Models, Generative UI

Abstract
This work investigates the integration of generative visual aids in human-robot task communication. We developed GenComUI, a system powered by large language models (LLMs) that dynamically generates contextual visual aids, such as map annotations, path indicators, and animations, to support verbal task communication and facilitate the generation of customized task programs for the robot. This system was informed by a formative study that examined how humans use external visual tools to assist verbal communication in spatial tasks. To evaluate its effectiveness, we conducted a user experiment (n = 20) comparing GenComUI with a voice-only baseline. Through both qualitative and quantitative analysis, the results demonstrate that generative visual aids enhance verbal task communication by providing continuous visual feedback, thus promoting natural and effective human-robot communication. Additionally, the study offers a set of design implications, emphasizing how dynamically generated visual aids can serve as an effective communication medium in human-robot interaction. These findings underscore the potential of generative visual aids to inform the design of more intuitive and effective human-robot communication, particularly for complex communication scenarios in human-robot interaction and LLM-based end-user development.
https://dl.acm.org/doi/10.1145/3706598.3714238
Walk in Their Shoes to Navigate Your Own Path: Learning About Procrastination Through A Serious Game
Runhua Zhang, Jiaqi Gan, Shangyuan Gao, Siyi Chen, Xinyu Wu, Dong Chen, Yulin Tian, Qi Wang†, and Pengcheng An†
Keywords Procrastination, Serious Games, Learning, Reflection

Abstract
Procrastination, the voluntary delay of tasks despite potential negative consequences, has prompted numerous time and task management interventions in the HCI community. While these interventions have shown promise in addressing specific behaviors, psychological theories suggest that learning about procrastination itself may help individuals develop their own coping strategies and build mental resilience. However, little research has explored how to support this learning process through HCI approaches. We present ProcrastiMate, a text adventure game where players learn about procrastination’s causes and experiment with coping strategies by guiding in-game characters in managing relatable scenarios. Our field study with 27 participants revealed that ProcrastiMate facilitated learning and self-reflection while maintaining psychological distance, motivating players to integrate newly acquired knowledge in daily life. This paper contributes empirical insights on leveraging serious games to facilitate learning about procrastination and offers design implications for addressing psychological challenges through HCI approaches.
https://dl.acm.org/doi/10.1145/3706598.3715271
02 LATE-BREAKING WORK
Align with Me, Not TO Me: How People Perceive Concept Alignment with LLM-Powered Conversational Agents
Shengchen Zhang, Weiwei Guo†, and Xiaohua Sun
Keywords Concept Alignment, Grounding, Conversational Agents, Large Language Models, Human-Agent Interaction

Abstract
Concept alignment—building a shared understanding of concepts—is essential for human and human-agent communication. While large language models (LLMs) promise human-like dialogue capabilities for conversational agents, the lack of studies to understand people’s perceptions and expectations of concept alignment hinders the design of effective LLM agents. This paper presents results from two lab studies with human-human and human-agent pairs using a concept alignment task. Quantitative and qualitative analysis reveals and contextualizes potentially (un)helpful dialogue behaviors, how people perceived and adapted to the agent, as well as their preconceptions and expectations. Through this work, we demonstrate the co-adaptive and collaborative nature of concept alignment and identify potential design factors and their trade-offs, sketching the design space of concept alignment dialogues. We conclude by calling for designerly endeavors on understanding concept alignment with LLMs in context, as well as technical efforts to combine theory-informed and LLM-driven approaches.
https://dl.acm.org/doi/10.1145/3706599.3720126
03 STUDENT DESIGN COMPETITION
HabitAt: Bridging Humans and Wildlife toward a Sustainable Future
Yu-chieh Cheng*, Zixuan Zhang*, and Huiting Huang*
Keywords Sustainable Cities and Communities, Human-Wildlife Coexistence, Participatory Workshop, Empathy-Driven Interaction Design

Abstract
Urban pollution poses significant challenges to both human and wildlife health, necessitating innovative approaches to promote sustainable coexistence. This study explores the potential of human-animal interaction as a lens to foster environmental awareness and empathy. We propose HabitAt, a conceptual design prototype that leverages animals' superior sensory capabilities to detect urban pollution, enabling a deeper understanding of environmental conditions while encouraging sustainable behavior. To inform the design, we conducted a participatory workshop where participants engaged in role-playing activities to experience urban environments from multiple perspectives. Observations from the workshop were synthesized into HabitAt's design, which integrates ecological data visualization and interactive elements to foster deeper human-animal connections and encourage everyday actions in environmental protection. Moving forward, we will refine app features, resolve technical challenges, and expand its applications to better support sustainable urban development.
https://dl.acm.org/doi/10.1145/3706599.3720313
