AI Chatbots, Donor Questions, and the Quiet Rules That Govern Trust
A new study looked at what happens when a donor on a charity website opens the chat window to ask a few questions before deciding whether to give.
The experiment had three moving parts:
- Sometimes the responses came from a human donor care agent identified as such
- Sometimes they came from an AI chatbot clearly labeled as such
- Sometimes the responses came from an AI chatbot that included an avatar image
- And within both AI conditions, the replies were written either in concrete, detail-oriented language (process, facts, specifics) or in abstract, values-based language (impact, hope, social equity).
The pattern that emerged is straightforward.
- When the AI used concrete, procedural, fact-driven language, trust in the organization went up and donation intention rose with it.
- When the same AI used abstract, moral, values-based language, the opposite happened: message credibility, organizational trust, and willingness to give all dropped.
- The human agent delivering the very same abstract language paid no penalty.
Why the asymmetry? People carry a stereotype of AI as competent, factual, and cold. Machines are expected to be precise, not moral. So when the chatbot started talking like a human advocate, donors judged it against a role it shouldn’t be playing, and credibility evaporated.
The researchers confirmed this mechanism: message credibility was the fulcrum, and the lower credibility caused by the AI’s “off-brand” abstraction transferred into lower trust in the organization.
But the most interesting finding is the plot twist: when the researchers swapped the bot icon for a human-like face, the entire abstract-language penalty disappeared. Same text, same script, same AI training; the only difference was the image.
That single cue pushed participants to evaluate the AI through a human schema, which made the abstract moral framing feel appropriate again. The mismatch vanished because the underlying expectation had shifted.
This isn’t evidence that human cognition is infinitely flexible, but it is evidence that our norms for AI are not yet formed, and that people fall back on rigid heuristics to fill the gap. If it looks like a bot, machine rules apply: be concrete, stay factual, avoid moral reasoning. If it looks like a person, human rules apply: abstraction is fair game. These schema switches are fast, shallow, and decisive, and they explain far more of the persuasive outcome than anything happening at the level of message craft.
That’s the part fundraisers should pay attention to. AI persuasion is not simply a question of whether you “use a chatbot.” It’s a question of whether the donor perceives the agent as a machine or a quasi-human, because the acceptable bandwidth of communication shifts completely depending on that frame. Keep the AI visually coded as a bot and you must keep its language concrete. Push it visually toward human and you can widen the expressive range—at some risk, but with far more flexibility.
The implications are vast as this fast-forward future unfolds. AI voice, for instance, is still in its infancy, and yet its ability to sound human while minimizing latency has leapt forward in the last several months. Imagine a much less expensive pricing model for charities, one with the psychological upside of a human caller and the operational advantages of a machine: better consistency, no fatigue, no within-call variance, and the ability to run scripting and trait-based personalization at scale.
The conclusion isn’t that humans will fall for anything with a face. It’s that donor expectations for AI are in a fluid, transitional phase.
Kevin



Can you point me to the study, please? Very interesting.