
Spring 2026 Student Research
Research Team 1
Meghan Economos | Ediz Kerim | Benjamin Liddell | Gavin Murphy

As generative AI becomes increasingly embedded in digital advertising production, firms face growing pressure to disclose AI involvement. While prior research shows that AI-origin disclosures often reduce credibility and consumer evaluations, little is known about how disclosure affects consumers during advertisement exposure. This study examines how AI disclosure design (presence, wording, and timing) shapes visual attention, physiological arousal, perceived authenticity, and advertising effectiveness. Across three within-subject laboratory experiments, participants view digital image and video advertisements while eye-tracking, facial expression analysis, and galvanic skin response data are collected alongside survey measures. We test whether disclosure imposes an attentional and authenticity cost that alters ad and brand attitudes. By integrating biometric process measures with evaluative outcomes, this research shifts focus from post-exposure judgments to real-time cognitive and emotional processing, offering theoretical and practical insight into responsible AI transparency in digital media.
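
A minimal sketch of how the attentional cost of a disclosure might be quantified from the eye-tracking stream: dwell time inside an area of interest (AOI) drawn around the disclosure label. The file name, column names, AOI coordinates, and 60 Hz sampling rate below are illustrative assumptions, not the study's actual pipeline.

import pandas as pd

SAMPLE_MS = 1000 / 60  # assumed 60 Hz eye-tracker sampling interval

# assumed columns: participant, condition, x, y (gaze coordinates in pixels)
gaze = pd.read_csv("gaze_samples.csv")

# assumed bounding box of the disclosure label on a 1920x1080 ad
AOI = {"x0": 1500, "x1": 1900, "y0": 980, "y1": 1060}

in_aoi = (gaze["x"].between(AOI["x0"], AOI["x1"])
          & gaze["y"].between(AOI["y0"], AOI["y1"]))
gaze["aoi_ms"] = in_aoi * SAMPLE_MS  # ms of dwell contributed by each sample

# total dwell time on the disclosure per participant and disclosure condition
dwell = gaze.groupby(["participant", "condition"])["aoi_ms"].sum()
print(dwell.groupby(level="condition").mean())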

Research Team 2
Alaina Mcclanen-Clemons | Sawyer Smith | Ruby Voke

Visual Attention and Engagement in AI-Generated Fashion Media: When Visual Scrutiny Diverges from Preference
The growing use of generative AI in digital fashion media raises important questions about how consumers perceive and interact with AI-generated imagery compared to human-created content. This study examines differences in visual attention, behavioral engagement, and perceptual responses when individuals view AI-generated versus human-generated fashion images in a social media environment. Using a controlled laboratory experiment, participants will view curated fashion boards resembling Pinterest-style interfaces while their visual attention is recorded using eye-tracking technology. Behavioral engagement will be captured through image “save” (pinning) behavior, complemented by post-exposure perception measures. Drawing on theories of processing fluency, visual attention, and authenticity perceptions, the study proposes that AI-generated imagery may attract greater visual scrutiny due to perceptual irregularities, yet generate lower engagement because users perceive human-created content as more authentic. The research further examines whether the presence of human faces moderates these effects. By integrating biometric attention data with behavioral engagement outcomes, this study contributes to emerging research on AI-generated media, consumer authenticity judgments, and human–AI interaction in visually driven digital platforms.
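
A minimal sketch of how the save (pinning) outcome could be summarized across the proposed cells (AI-generated vs. human-created source, face present vs. absent). The file and column names are hypothetical, and a confirmatory test of the moderation would model the binary saves hierarchically rather than averaging per participant.

import pandas as pd

# assumed columns: participant, source, has_face, saved (0/1 per image)
pins = pd.read_csv("pin_behavior.csv")

# mean save rate per participant within each source x face-presence cell,
# then averaged across participants for a descriptive 2 x 2 table
rates = pins.groupby(["participant", "source", "has_face"])["saved"].mean()
print(rates.groupby(level=["source", "has_face"]).mean().unstack())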

Research Team 3
Karoline Shipton | Margot Hartley | Aniko Kittridge

Believing the Machine: Human Trust and the Impact of AI Language Confidence and Elaboration
As artificial intelligence increasingly functions as a cognitive partner in decision-making, understanding how users evaluate and trust AI recommendations becomes critical. While prior research has emphasized algorithmic accuracy, considerably less attention has been given to how the presentation of AI output, particularly linguistic confidence and elaboration, shapes trust independent of correctness. This study proposes a research agenda examining how these communication cues influence user trust, attention, and emotional engagement during AI-assisted decision-making. Drawing on persuasion theory, trust in automation, and the Elaboration Likelihood Model, the research investigates how confidence framing, explanatory detail, and information accuracy interact to shape user responses. To examine these effects, the study proposes three randomized, counterbalanced within-subject laboratory experiments in which participants evaluate AI-generated recommendations modeled after ChatGPT-style outputs. Experiment 1 manipulates AI language confidence (high vs. low) and information format (text-only vs. text with image). Experiment 2 varies elaboration depth (low vs. high detail) and information format. Experiment 3 examines the moderating role of information accuracy (correct vs. incorrect) in combination with AI language confidence. Across all experiments, multimodal biometric data, including eye tracking, galvanic skin response, and facial expression analysis, will capture real-time cognitive and affective reactions, complemented by survey-based trust measures. By integrating NeuroIS methods with behavioral outcomes, the study advances understanding of how linguistic presentation cues shape trust calibration and reliance in human–AI decision environments.
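
A minimal sketch of one counterbalancing scheme consistent with the design of Experiment 1, assuming a cyclic Latin square over the four confidence-by-format cells. The condition labels are illustrative, and a fully balanced square would additionally control first-order carryover.

from itertools import product

# the four cells of Experiment 1's 2 x 2 design (labels are illustrative)
conditions = list(product(["high_confidence", "low_confidence"],
                          ["text_only", "text_with_image"]))

def rotated_order(participant_id: int) -> list:
    """Cyclic rotation: across every block of four participants, each
    condition appears once in each serial position (a simple Latin square)."""
    k = participant_id % len(conditions)
    return conditions[k:] + conditions[:k]

for pid in range(4):
    print(pid, rotated_order(pid))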

Research Team 4
Felipe Bravo | Noah Lepore | Adelisa Pelinkovic | Jillian Rossman

Why Think When You Can Prompt? Student Reliance on Generative AI and Cognitive Outsourcing
Generative AI tools such as ChatGPT are reshaping how students engage with academic tasks, raising concerns about cognitive offloading and reduced productive engagement. Prior research documents widespread adoption but says little about how AI availability alters students’ real-time cognitive effort, visual attention, stress responses, and performance. This study proposes a within-subjects experiment examining how generative AI assistance during preparation (reading) and assessment (quiz-taking) influences engagement and outcomes. Participants will complete study–quiz cycles under AI-assisted and AI-unassisted conditions while multimodal biometric and behavioral data are collected, including eye tracking, galvanic skin response, facial expression analysis, and quiz performance. Results will indicate whether AI availability reduces cognitive effort and perceived strain, shifts attentional allocation, dampens physiological arousal, and differentially affects multiple-choice versus short-answer performance. Integrating NeuroIS measures with behavioral and survey data provides objective evidence on whether generative AI functions as a learning scaffold or a substitute for cognitive effort in higher education.
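
A minimal sketch of the within-subjects performance comparison, assuming quiz scores recorded per participant, condition, and item type in long format. The paired t-test here stands in for whatever confirmatory model the team adopts, and all file and column names are hypothetical.

import pandas as pd
from scipy import stats

# assumed columns: participant, condition (ai_assisted / unassisted),
# item_type (multiple_choice / short_answer), score
scores = pd.read_csv("quiz_scores.csv")

# one row per participant x item type, conditions side by side
wide = scores.pivot_table(index=["participant", "item_type"],
                          columns="condition", values="score").reset_index()

# paired comparison within each item type
for item_type, grp in wide.groupby("item_type"):
    t, p = stats.ttest_rel(grp["ai_assisted"], grp["unassisted"])
    print(f"{item_type}: t = {t:.2f}, p = {p:.3f}")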

Research Team 5
Finn Norman | Favour Mamudu | Mariana Vargas Diaz | Zoha Bilal

As AI-based decision-support systems increasingly shape decision-making, users often override algorithmic predictions when outputs conflict with their expectations. This study proposes a dynamic model of trust calibration in AI-driven sports analytics, conceptualizing trust as enacted reliance measured through accept-override decisions. We examine how expectation violation magnitude and explanation quality jointly influence reliance. Using a repeated-measures experimental design, participants generate their own forecasts, view AI predictions, and decide whether to accept or override them; expectation violation is modeled continuously as the discrepancy between user and AI forecasts. Integrating perspectives from NeuroIS and HCI, we incorporate eye tracking and galvanic skin response to capture pre-decisional cognitive effort and arousal preceding override behavior. Multilevel analyses will test how expectation violations predict reliance, how explanations moderate these effects, and whether trust predicts behavior beyond perceived surprise. The study advances research on algorithm aversion and explainable AI by positioning trust as a dynamic, process-level phenomenon.
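
A minimal sketch of the multilevel analysis described above, treating expectation violation as the absolute user–AI forecast discrepancy and fitting a trial-level logistic model with a participant random intercept. The variational Bayes estimator in statsmodels is one convenient option (lme4's glmer in R would be the more conventional choice), and all file and variable names are assumptions.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# assumed columns: participant, user_forecast, ai_forecast,
# explanation_quality, override (0/1 per trial)
trials = pd.read_csv("override_trials.csv")

# continuous expectation violation: discrepancy between user and AI forecasts
trials["violation"] = (trials["user_forecast"] - trials["ai_forecast"]).abs()

# trial-level logistic model with a random intercept per participant;
# the interaction tests whether explanation quality moderates violation effects
model = BinomialBayesMixedGLM.from_formula(
    "override ~ violation * explanation_quality",
    vc_formulas={"participant": "0 + C(participant)"},
    data=trials,
)
print(model.fit_vb().summary())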
