A collaborative paper led by Chris Chen wins award at TAS '24

The assistant professor of communication design explored whether coaching users in prompting during their interactions with text-to-image generative AI tools might shape their perceptions of and engagement with prompting, and in turn influence their trust calibration in the system.

Chris Chen stands at podium during the Second International Symposium on Trustworthy Autonomous Systems
Cheng "Chris" Chen, assistant professor of communication design, presents at the Second International Symposium on Trustworthy Autonomous Systems (TAS '24) in Austin, Texas. Held on the campus of the University of Texas, the Sept. 15-18 event featured talks on the use of autonomous systems in transportation, healthcare, sustainability, robotics, law enforcement, defense and surveillance.

Cheng "Chris" Chen, a prolific researcher on the psychology of communication technologies, including social media, AI and generative AI, earned a best paper award this week at the Second International Symposium on Trustworthy Autonomous Systems (TAS '24) in Austin, Texas, held on the University of Texas campus.

The assistant professor of communication design was recognized for a collaborative project examining the effects of prompt coaching on users' perceptions, engagement and trust in the AI system. The article is titled "Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users' Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools."

Cheng Chen with S. Shyam Sundar
Chen stands with one of her frequent collaborators, S. Shyam Sundar, a Penn State professor and the director of the Center for Socially Responsible Artificial Intelligence.

Chen published the paper with Eunchae Jang and S. Shyam Sundar of Penn State University and Sangwook Lee of the University of Colorado Boulder. Sundar also served as the symposium's keynote speaker.

"Users had a strong need for prompt coaching when they interacted with text-to-image generative AI tools," Chen said. "By providing assistance that helps users specify their prompts, we found that they elaborate on their prompts, which further increased their ability to align their trust with the AI's true trustworthiness, a process known as trust calibration."

Chen presented a second paper at the symposium examining explanation timing, titled "When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI Systems." Chen co-authored the paper with Sundar, as well as Mengqi Liao of the University of Georgia.

According to the TAS '24 program, the event featured 25 accepted papers presented as talks, focusing on the use of autonomous systems in transportation, healthcare, sustainability, robotics, law enforcement, defense and surveillance.

In recent months, Chen has published and presented research examining how to communicate algorithmic bias through training data, how patients view their interactions with individualized AI doctors, and how users perceive autoplay features on video platforms. In July, she presented in person at the 2024 International Communication Association Conference in Gold Coast, Australia.