Bipartisan For Real

April 5, 2024      Kevin Schulman, Founder, DonorVoice and DVCanvass

In the vast theatre of human interest, the spotlight fiercely chases the extremes and magnifies them in media, op-ed pieces, political discourse, fundraising campaigns, and advocacy efforts. It's the outliers who grab headlines, sparking debates and commanding clicks with their dramatic deviations from the norm. Meanwhile, the great, moderate majority, who are more likely to storm the clearance rack than the Bastille, get pushed into the backdrop or off stage entirely.

This silent majority, embodying the gentle slope of the bell curve, tends to agree on a lot, making them the unsung heroes of consensus and the overlooked champions of common sense.

Here's the data broken out by gender, income, ethnicity, education, and the ultimate slugfest: Trump vs. Biden voters. The slugfest turns into a snooze fest, wildly uninteresting in its consensus.

There are two fundraising points to all this. The first comes from the founder of the firm that did this work, Todd Rose, a high-school dropout turned Harvard-educated neuroscientist:

  • If nonprofits think the name of the game is persuasion, when in reality it should be about revealing unity, then “you’re going to spend lots of money thinking you’re bridging divides that actually didn’t exist. And your money and your time will make things worse.”

I'd concur; psychological reactance is powerful. A question I'd raise, however, is whether charities catering to a minority viewpoint on Issue A, B, or C are remotely interested in the consensus view and bridging divides. And if not, is that a system feature or a bug?

The second fundraising point requires a detour into the methodological weeds. The methodology Rose uses is not the standard, artificial ranking and rating of issues and values in a vacuum. Use those methods and you get noise and an artificial read on reality.

Instead, he uses a choice-based, forced trade-off approach that is wonky behind the scenes but simple in execution.  People see two policy positions or values and pick the one they prefer.  That subtle, forced choice results in much more reliable, valid data.  
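
To make the mechanics concrete, here is a minimal sketch of what a paired forced-choice task could look like behind the scenes. The policy statements and the pairing logic are invented for illustration; this is not Rose's actual instrument.

```python
import random

# Hypothetical policy statements -- stand-ins, not the actual survey items.
STATEMENTS = [
    "Expand job training programs",
    "Lower payroll taxes",
    "Increase infrastructure spending",
    "Strengthen small-business lending",
]

def make_pair_tasks(statements, n_tasks):
    """Draw random pairs; each task shows two statements and asks for one pick."""
    return [tuple(random.sample(statements, 2)) for _ in range(n_tasks)]

# A respondent's answer is simply which of the two they preferred.
for a, b in make_pair_tasks(STATEMENTS, n_tasks=3):
    print(f"Which do you prefer?\n  1) {a}\n  2) {b}\n")
```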

  • If you're going to do research, hire an expert. You do get what you pay for, and garbage in, garbage out is a high probability with amateur hour.

This choice methodology is what we use for testing fundraising packages, concepts, and new product development (e.g., a sustainer offer or a mid-level offer). People look at two holistic concepts side by side, each featuring lots of "parts" (logo, program name, key message, tangible benefit, psychological benefit), and pick the one they like best. It's akin to an eye test. We never make the mistake of asking why they picked Concept A over B. And we don't ask for artificial, garbage ratings or rankings of the "parts".
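
As a rough illustration of assembling holistic concepts from "parts", here's a sketch; the attribute names and levels are invented for the example and are not the actual PreTest Tool inputs.

```python
import itertools
import random

# Hypothetical "parts" and levels, invented for illustration.
PARTS = {
    "program_name": ["Monthly Heroes", "Sustainer Circle"],
    "key_message": ["Feed a family every month", "Your gift, multiplied"],
    "benefit": ["Quarterly impact report", "Members-only updates"],
}

def build_concepts(parts):
    """Cross every level of every part to enumerate the holistic concepts."""
    keys = list(parts)
    return [dict(zip(keys, combo)) for combo in itertools.product(*parts.values())]

concepts = build_concepts(PARTS)  # 2 x 2 x 2 = 8 full concepts

# A choice task shows two complete concepts side by side -- never isolated parts.
concept_a, concept_b = random.sample(concepts, 2)
print("Concept A:", concept_a)
print("Concept B:", concept_b)
```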

The backend analysis is modeling that gives us the answers we need, i.e., what drove preference and choice. It works because we match real-world decision making, which is quick, heuristic judgments on the whole product. When was the last time you stood in front of the toothpaste aisle and mentally and separately weighed the logo, color scheme, marketing message, and price point of the 83 different choices? Never. You made a relatively quick choice and would be very unlikely to faithfully report why, since much of the decision was subconscious.
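
One common way to do that kind of backend modeling is a paired-choice (conditional logit / Bradley-Terry style) regression: dummy-code each concept's parts, take the difference between the two concepts in a task, and fit a no-intercept logistic regression so the coefficients estimate how much each part pulled choice. The toy data and the use of scikit-learn below are assumptions for illustration, not the actual DonorVoice backend.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: one row per choice task. Each column is (A's dummy - B's dummy)
# for a dummy-coded part, so values are -1, 0, or +1.
X_diff = np.array([
    [ 1,  0, -1],
    [-1,  1,  0],
    [ 0, -1,  1],
    [ 1,  1,  0],
    [-1,  0, -1],
    [ 0,  1,  1],
])
chose_A = np.array([1, 0, 1, 1, 0, 1])  # 1 = respondent picked concept A

# No intercept: with difference coding this is the standard paired-logit setup.
model = LogisticRegression(fit_intercept=False).fit(X_diff, chose_A)

# Coefficients approximate each part's contribution to preference.
for part, weight in zip(["name_X", "message_Y", "benefit_Z"], model.coef_[0]):
    print(f"{part}: {weight:+.2f}")
```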

Traditional research often ignores this reality. Our methodology, which we call the PreTest Tool, matches up with the real world and has the added benefit of testing thousands of concepts at once; it's like A/B testing on steroids.
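
The "thousands of concepts at once" claim is just combinatorics plus pooling: each respondent sees only a handful of paired tasks, while the model borrows strength across everyone to cover the full design space. A back-of-envelope sketch with invented numbers:

```python
import random

# Invented design: 5 parts with 4 levels each -> 4**5 = 1,024 holistic concepts.
levels_per_part = [4, 4, 4, 4, 4]
n_concepts = 1
for levels in levels_per_part:
    n_concepts *= levels
print(n_concepts, "possible concepts")  # 1024; a sixth part pushes it past 4,000

# Each respondent gets only a few paired tasks drawn from the full space;
# pooled across hundreds of respondents, the whole design gets covered.
tasks_per_respondent = 10
tasks = [random.sample(range(n_concepts), 2) for _ in range(tasks_per_respondent)]
print("First three tasks (concept ids):", tasks[:3])
```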

Kevin

P.S. A shout out to Robert Tigner, Regulatory Counsel at TNPA, who flagged the article for me.
