Why We Don’t Trust Data

December 13, 2013      Admin

Angie Moore, in Tuesday’s FS Online post Predicting Weather is not Like Predicting Donations, notes that a good many fundraisers deny themselves the benefits of predictive modeling simply because, just as with the weather forecast, they don’t understand what’s behind it.

Given the “increasingly crowded and competitive landscape”, Angie urges fundraisers to drop their current, and generally crude, approach to segmentation and start understanding the benefits of predictive modeling. I urge you to read her post.

Which brings me to today’s question:

Why, in a trade that is so data-driven, do so many fundraisers needlessly destroy their bottom line by clinging to old approaches?

We fundraisers spend a lot of time developing the offer — writing (and sometimes incessantly re-writing) an appeal, holding four meetings to decide on red vs. gold foil labels, debating whether that cuddly polar bear premium is more compelling than another furry friend.

Sadly, only a fraction of that time is generally spent assessing who among our donors should get the offer, and why.

Fact is, when armed with a conventional spreadsheet program and a computer with a ‘copy’ function, the human brain can get us into a lot of trouble. Other professions seem to have learned the benefits of sophisticated data analysis — predictive modeling and scoring — far faster than ours.

  • Pilots spend a lot of time training to trust instruments in bad weather and on dark nights.
  • Nurses use the Apgar scoring system to quickly assess the health of newborns.
  • Mortgage brokers qualify their customers based on a FICO credit score created by Fair Isaac Corporation.

In short, these professions have learned that the unaided human brain belongs on the bench when the optimal bottom-line result is at stake.

One of the reasons so many fundraisers, their analysts and consultants needlessly damage their organizations and clients is that the human brain, even when bolstered by an Excel spreadsheet and some pivot tables, can process information in only a limited number of mental buckets. That’s why segmenting on so few data points — recency, frequency, monetary value (RFM) — has such conventional appeal.

And when this happens, the result is almost always a selection or segmentation plan filled with far too many donors who respond poorly or not at all.
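To make the contrast concrete, here is a minimal sketch in Python (scikit-learn on wholly synthetic data). It is mine, not a method Angie or any vendor prescribes, and every field name, coefficient and cutoff in it is hypothetical. The point is simply that a model assigns a score to every donor across many inputs at once, while the spreadsheet approach sorts donors into a handful of coarse buckets.

```python
# Hypothetical illustration only: crude RFM buckets vs. a scored response model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Made-up donor file: recency (months since last gift), frequency
# (lifetime gift count), monetary (average gift size), plus one overlay (age).
recency = rng.integers(1, 61, n)
frequency = rng.integers(1, 25, n)
monetary = rng.gamma(2.0, 25.0, n)
age = rng.integers(25, 90, n)

# Synthetic "responded to the appeal" outcome, only so the sketch runs end to end.
logit = -2.5 - 0.04 * recency + 0.12 * frequency + 0.004 * monetary + 0.01 * age
responded = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Conventional approach: a handful of crude RFM buckets, mailed by rule of thumb.
rfm_bucket = (recency <= 12).astype(int) + (frequency >= 3).astype(int)
print("Donors per crude RFM bucket:", np.bincount(rfm_bucket))

# Model-driven approach: a score for every donor, rankable and cut wherever
# the economics of the appeal say to cut.
X = np.column_stack([recency, frequency, monetary, age])
X_train, X_test, y_train, y_test = train_test_split(
    X, responded, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

top_decile = np.argsort(scores)[-len(scores) // 10:]
print("Predicted / actual response in the top-scored decile:",
      round(scores[top_decile].mean(), 3), round(y_test[top_decile].mean(), 3))
print("Overall response rate in the test set:", round(y_test.mean(), 3))
```

In practice the inputs would come from your own donor file and whatever overlays you license; the mechanics stay the same, and the mailing decision becomes a question of where to cut a ranked list rather than which mental bucket feels right.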

So why don’t more fundraisers trust sophisticated data? More importantly, why, for example, don’t they act on what the data from a predictive model or scoring system tells them?

Carl Richards, the “Sketch Guy” who writes on data and analytics for the Your Money section of the New York Times, offers this list of reasons why so many folks don’t trust data:

  • It’s not natural. “Despite the data telling us one thing, it seems we have a hard time ignoring cultural expectations that run counter to the numbers.”
  • We believe we’re the exception. “…We’re disinclined to think something applies to us because, of course, we’re above average.” … ”The rules only apply to everyone else.”
  • It might not work. Yes, the data suggests that a particular course of action makes more sense over the long term. However, what if our window isn’t long enough? If we aren’t convinced it will work, then why do it?
  • We don’t know the data exists. Habits, hunches, superstitions. How often do we make decisions based on one of these options? And we excuse the decisions because ‘we didn’t know better.’

Of course, ‘guessing’ or relying on ‘hunches’ or ‘superstitions’ where your organization’s bottom line is concerned isn’t very good fundraising.

Equally, merely understanding the reasons why so many of us are data-phobic is no guarantee that we’ll change our behavior.

However, maybe this realization will cause more fundraisers to open their eyes, ask a few more questions, and get a few steps closer to trusting the sort of data that will improve results.

Roger


4 responses to “Why We Don’t Trust Data”

  1. John Haydon says:

    Roger – The tendency towards guessing is also prevalent in email and Facebook marketing. The data is there, begging to be used to help hit the target. Maybe examples of how to use this data need to be highlighted? Maybe the process needs to be outlined? Hope to see you at NTC!

  2. Jay Frost says:

    “Trusting data” and “trusting data modeling” are two entirely different things. Raw data may be finite, but it does have specific meaning and can be employed to bring real, measurable results in fundraising. Data modeling, no matter how well fashioned, makes assumptions. Sometimes those assumptions are good; often they are not. The examples you cite in this blog entry seem to me more representative of effective uses of data than of data modeling, which is both why they work and why we intuitively trust them. For example, instrument flying by pilots is sensible because the data tells us exactly where we are in relation to weather, other aircraft, and the runway. If we used that instrument data to determine where we think we should fly and where we should land, more like what fundraisers often do with modeling, that would truly be flying by the seat of the pants. And a very dangerous thing to do. This is not to knock data modeling. It can be enormously valuable as a way of testing assumptions and new approaches. But modeling is much closer to a crystal ball than an altimeter.

  3. Caity says:

    There’s no doubt that poor data modeling can be built on poor assumptions. The same goes for conventional wisdom and myth-based fundraising strategy, like the assumption that mailing someone three times in one month is not a good idea.

    Any good modeler will reveal not only the assumptions behind any model, but also the possible repercussions of applying those assumptions, thus scientifically answering the question of whether mailing someone three times in a month is better than mailing them twice or once.

    Jay, your equating stochastic modeling to a crystal ball is quite ridiculous, and undermines the rigorous scientific process that different modeling techniques have undergone to be accepted in the scientific community.

    Contrary to your claim, proper modeling is much closer to an altimeter than to a crystal ball. Here’s a link to an article from NASA that demonstrates how even altimeters are sometimes wrong, and still they are used. [ See http://asrs.arc.nasa.gov/publications/directline/dl9_low.htm ].

    They are used for the same reason models should be used: they are tools to help you get where you need to go, not, as you note, a substitute for knowing where you want to go.

    Neither an altimeter nor a model can tell you where you want to go, but both can guide you to the best way to get there.

  4. Rick Malchow says:

    Roger,

    Thanks for the article. I am hopeful that the non-profit industry is starting to get some traction with data modeling. We now have affordable demographic data for overlays, and we are figuring out our own unique implementations. The early attempts at using commercial world methodology simply did not take into account the nuances of the non-profit donor file.

    Data modeling is a powerful, precise mathematical process that allows us to move far beyond traditional RFM segmentation. Frequency in traditional segmentation is usually a simple multiple-gift or single-gift designation, whereas a model can predict response rate across the entire range of gift counts. Last time I looked, donors with 12+ gifts were responding at 15% for a certain organization.

    Today we have some very affordable linear data available, such as age, education, presence of children and income. Modeling is the essential tool for capturing the power of that data in conjunction with recency, frequency and monetary value.

    For those who still want to see their traditional segments, no problem. Regular segmentation can easily be assigned on the names from a model select.

    All the best,

    Rick