How generative AI and AI agents are impacting the market research playbook
By Andrew Littlefield · 6 min. read · May 14, 2025

Market research has always been a game of scale.
Getting deep insights from participants (especially in qualitative studies) has historically been time-consuming, expensive, and limited to small sample sizes. It's easy to survey 1,000 people if you're asking yes-or-no questions. It's much harder when you want open-ended responses and thoughtful answers. Small teams are often left making decisions without the hard data to back them up.
“Many organizations say they’re data-driven,” says Dr. Stefano Puntoni, Sebastian S. Kresge Professor of Marketing at The Wharton School. “But most decisions are still made based on habit, gut feelings, or whatever the competition did yesterday.”
That’s starting to change.
Generative AI and agent-based tools are giving researchers new ways to scale open-ended insights, test ideas at speed, and even build “synthetic” versions of real customers to pressure-test decisions. But for all its promised benefits, this new technology comes with some major challenges and drawbacks as well.
It’s time to break down how generative AI is reshaping the market research playbook: what it unlocks, where it falls short, and what to watch for next.
4 ways generative AI is being used in research today
Generative AI is beginning to affect research workflows in a few different ways. These tools have the potential to speed up market research and generate more insights, but you should also account for their weaknesses.
1. Survey creation and refinement
Large language models (LLMs) are already being used to draft, translate, and refine survey questions. They can A/B test question phrasings, assist with translation, and adapt tone or complexity for different audiences. Tools like Outset.ai and Meaningful are baking these capabilities into streamlined workflows.
But automating survey creation comes with risk. LLMs are built to be helpful — sometimes too helpful.
“There’s a real danger that AI-generated survey questions reinforce confirmation bias,” says Dr. Puntoni. “Models like ChatGPT tend to agree with the user and validate their assumptions. If a researcher already has a bias, AI can amplify it, not challenge it.”
AI researchers call this tendency sycophancy.
2. Qualitative analysis and theme generation
One of AI’s biggest breakthroughs for researchers is in summarizing open-ended responses at scale.
“Doing a qualitative study with 1,000 participants used to be impossible,” Dr. Puntoni says. “Now it’s feasible, even for small teams.”
LLMs excel at pattern recognition and language summarization, which makes them particularly adept at surfacing common themes from long-form responses. Researchers can extract insights, compare sentiment across demographics, or find recurring language.
That said, these models still hallucinate (meaning they make stuff up). LLMs sometimes invent insights or generalize too broadly.
“LLMs aren’t optimized to tell you the truth,” says Dr. Puntoni. “They’re optimized to sound correct. That makes it very easy for users to trust what feels like confident output, even when it’s wrong.”
This is where human review remains critical. AI can surface patterns, but only human researchers can validate whether those patterns are meaningful or misleading.
3. Real-time chatbot moderators and virtual interviews
One potentially promising application of AI in research is the use of autonomous agents to run interviews or moderate focus groups in real time.
These agents can (theoretically) be scripted to follow interview guides, adapt to user responses, and conduct dozens (or thousands) of interviews simultaneously. Unlike human moderators, they never get tired, and they log everything. Combined with automated analysis, agent-led moderation enables the kind of scale and repeatability that qualitative research has historically lacked.