AI Interviews Deepen Survey Responses But May Also Shape Opinions, New Study Finds
COLUMBUS, OH – A new study from researchers at IDEA at The Ohio State University and the University of Houston, led by Dr. Ryan Kennedy, has found that using AI chatbots to interview people on political topics can produce much richer, more detailed data than traditional surveys. The experiment revealed that while participants provided more thoughtful answers to the AI, the process of explaining their reasoning also seemed to polarize their opinions slightly.
For years, researchers have struggled to understand not just what people think, but why they think it. This research tested whether AI could help solve this problem by conducting in-depth interviews on a large scale.
The experiment included 2,243 U.S. adults recruited through CloudResearch. They were randomly assigned to one of five groups to discuss their views on immigration and trade tariffs. Some groups had a conversation with an AI chatbot that could ask up to three follow-up questions, while others took a standard survey with either closed-ended questions or a single, generic open-ended prompt.
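To illustrate the design, here is a minimal sketch of the kind of adaptive interview loop the study describes, with an AI interviewer capped at three follow-up questions. The question text, the `generate_followup` stub, and the `respond` callback are illustrative assumptions; the study's actual prompts and model are detailed in the working paper.

```python
# Minimal sketch of an adaptive AI-interview loop (illustrative only).
from typing import Callable, Optional

MAX_FOLLOWUPS = 3  # the article says the chatbot asked up to three follow-ups


def generate_followup(transcript: list[str], n_asked: int) -> Optional[str]:
    """Stand-in for an LLM call that drafts a probing follow-up question.

    Here it cycles through generic probes up to the cap; a real implementation
    would condition each question on the transcript so far.
    """
    probes = [
        "What experiences or information shaped that view?",
        "Can you say more about why that matters to you?",
        "Is there anything that could change your mind?",
    ]
    return probes[n_asked] if n_asked < MAX_FOLLOWUPS else None


def interview(opening_question: str, respond: Callable[[str], str]) -> list[str]:
    """Run one adaptive interview and return the full transcript."""
    transcript = [f"Q: {opening_question}", f"A: {respond(opening_question)}"]
    for n_asked in range(MAX_FOLLOWUPS):
        question = generate_followup(transcript, n_asked)
        if question is None:  # the interviewer decides it has probed enough
            break
        transcript.extend([f"Q: {question}", f"A: {respond(question)}"])
    return transcript


# Toy usage with a canned respondent standing in for a survey participant:
if __name__ == "__main__":
    canned = iter(["I support higher tariffs.", "Jobs in my town.",
                   "Fair trade matters.", "Probably not."])
    print("\n".join(interview("What is your view on trade tariffs?",
                              lambda q: next(canned))))
```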
The results showed that the AI interviewer was highly effective at getting more information from participants.
- People who chatted with the AI gave significantly more reasons for their views: on average, 0.75 additional reasons on immigration and 0.63 on tariffs.
- They also wrote more than twice as many words to explain their positions, about 70 additional words on immigration and 57 on tariffs.
A key finding was that this extra effort did not make the experience worse for participants. Although the AI interviews took significantly longer to complete (a median of roughly 14 minutes versus 6 minutes for the control group), satisfaction barely dropped, falling just 0.06 points on a seven-point scale.
Could AI Tools Shape the Outcome?
The study also carries a critical caution for researchers: the AI interview was not just a passive measurement tool; it actively shaped the opinions it was measuring. After conversing with the AI to justify their initial answers, participants’ views were slightly more polarized on both immigration (+0.064) and tariffs (+0.045) when measured again later.
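The article does not say how polarization was quantified; one common operationalization is attitude extremity, the distance of a rating from the scale midpoint, compared before and after the interview. The sketch below uses that assumed measure, an assumed 1–7 attitude scale, and toy numbers, not the study’s data.

```python
# Sketch of one common polarization measure (assumed, not the study's metric).
MIDPOINT = 4.0  # midpoint of an assumed 1-7 attitude scale


def extremity(rating: float) -> float:
    """Distance from the neutral midpoint; larger means a more extreme view."""
    return abs(rating - MIDPOINT)


def mean_polarization_shift(pre: list[float], post: list[float]) -> float:
    """Average change in extremity; positive values indicate polarization."""
    shifts = [extremity(b) - extremity(a) for a, b in zip(pre, post)]
    return sum(shifts) / len(shifts)


# Toy example: most respondents drift slightly away from the midpoint.
pre = [5.0, 3.0, 6.0, 2.0]
post = [5.2, 2.9, 6.1, 1.9]
print(round(mean_polarization_shift(pre, post), 3))  # 0.125
```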
The authors conclude that researchers should treat AI interviewers as a powerful new tool, but with an important caveat: AI-enabled interviewing substantially enriches the depth and specificity of survey data without appreciably harming respondent satisfaction, yet it can subtly shift measured attitudes. AI should therefore be treated not only as a measurement instrument but also as a potential source of attitudinal change.
Read the full working paper to learn more about the study’s methods and findings.