The Focus Group Just Got Fired

October 15, 2025      Kevin Schulman, Founder, DonorVoice and DVCanvass

I ran a lot of focus groups in a prior life and they were the bane of my existence.  People told me I was good at moderating. I wasn’t.

My “skill” was tolerating the strange theater of it all, the artificial setting, the too-small snacks, and the polite nodding of strangers paid to opine about things they didn’t really care about.

Every session was a small production.

  • The bully in the room who feigned expertise, and the lemmings who nodded along in permission.
  • The illusion of representativeness, which meant flying to the four corners of the country to sit behind mirrored glass.
  • A “cast of thousands” in the back room pretending to listen even though they’d already decided what they wanted to do. The focus group existed to bless it with the faint aroma of validation.

Qualitative research is useful, a way to see how people talk and act in real life. But even the good kind, structured, observational, genuinely curious, still feels hollow. It has no math, no model, no sense of confidence or robustness. It’s insight without infrastructure.

Now, along comes something that fixes nearly every one of those problems.

A team from Colgate-Palmolive published a study showing that LLMs can replicate human research, not approximate, replicate.

And not in a hand-wavey, “sort of similar” way. Their model hit 90% of human test–retest reliability, meaning it agreed with real consumers about as often as real consumers agree with themselves.

The trick wasn’t more data. It was smarter design.

They didn’t ask the AI to pick a number on a 1-to-5 scale; that’s where most LLM “survey” experiments go wrong. Models regress to the mean; everything becomes a 3. In the human world, it’s called satisficing, or just being mentally lazy.

Instead, they asked the model to respond like a person, to explain how likely it would be to buy something and why. Then they translated that response into numbers using semantic similarity.

  • The AI writes a short open-ended answer (“Seems interesting, maybe worth a try if it’s not too expensive”).
  • That answer gets converted into a long list of numbers, each number represents part of the sentence’s meaning. You can picture it as a coordinate in a space with hundreds or thousands of invisible dimensions, one for “positivity,” one for “price sensitivity,” one for “enthusiasm,” and so on. The whole sentence becomes a single point in that space.
  • They use math, cosine similarity, to see how well the machine’s points in that space match the human ones. Two sentences that mean roughly the same thing point in the same direction; two that mean opposite things point in opposite directions.
  • The result is a probability distribution across the 1–5 survey scale, not a single guess.

In plain terms, it takes the nuance of language — tone, uncertainty, qualification — and quantifies it. It’s a smarter form of open-ended question, one that doesn’t rely on a human coder to interpret or summarize. It’s like turning every respondent into a well-written focus group transcript, but with statistics attached.
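The steps above can be sketched in a few lines of code. The paper’s actual pipeline uses a learned sentence-embedding model with hundreds of dimensions; this minimal sketch swaps in a toy bag-of-words embedding just to show the geometry, and the anchor phrases and temperature value are illustrative assumptions, not the study’s.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding": word counts as a sparse vector.
    # A real pipeline would use a sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors: dot product
    # divided by the product of their lengths.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical anchor phrases for a 1-to-5 purchase-intent scale.
ANCHORS = {
    1: "definitely would not buy this product",
    2: "probably would not buy this product",
    3: "might or might not buy this product",
    4: "probably would buy this product",
    5: "definitely would buy this product",
}

def scale_distribution(answer, temperature=0.1):
    # Compare the open-ended answer to each scale anchor, then
    # softmax the similarities into a probability distribution
    # over the 1-5 scale (not a single point guess).
    sims = {k: cosine(embed(answer), embed(v)) for k, v in ANCHORS.items()}
    exps = {k: math.exp(s / temperature) for k, s in sims.items()}
    total = sum(exps.values())
    return {k: e / total for k, e in exps.items()}

dist = scale_distribution(
    "Seems interesting, I probably would buy this if the price is right"
)
```

The output is a full distribution across the scale, so hedged language (“probably,” “if the price is right”) shows up as probability mass spread around a peak rather than a flat 3.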

Why this matters 

Our sector spends almost nothing on donor insight. We pour millions into channels, targeting, and “optimization,” yet operate with an understanding of our supporters that wouldn’t pass a freshman sociology class.  We know who gave and when but not why.  We know what they clicked but not what they care about.

And then we “personalize” it by inserting someone’s name or adding a single line to the “midlevel” mailing referencing their higher giving. It’s personalization sans the person.

You can’t build competence or connection, two of the basic psychological needs that drive loyalty, if you treat people like widgets.

And now, you don’t need to commission a $200,000 segmentation study or fly to four cities to find out what people think. You can DIY your own insight by asking better questions and analyzing meaning with math.

When done right, what people provide in a survey isn’t just predictive, it’s causal.  It’s understanding.  Focus groups were an expensive way to listen badly. This is a cheap way to listen well.

Kevin