Eminence vs Evidence In Fundraising – Part 2: What is “Proper” Research

July 27, 2017      Roger Craver

Judging from readers’ comments on Part 1 of this series, there’s a deeply felt need for collecting — and sharing — proper fundraising research and findings.

Two questions arise. Just what is ‘proper’ research? And ‘how’ can this information best be shared?

What Is ‘Proper’ Research?

The definition of what constitutes ‘proper’ research can get complicated — overly complicated — very fast. Since I’m neither a philosopher, a theologian, nor a scientist, I’ll let others with that competence attempt a definition.

My criteria for ‘proper’ research include ‘evidence-based’, ‘empirical’ findings arrived at through appropriate testing methodologies. (For more on this, see my post The Curse of Testing Illiteracy.)

Others will disagree with my criteria and substitute their own. That’s fine with me because you’re going to make decisions based on what you consider ‘proper’ or not. (The Stanford Encyclopedia of Philosophy contains pages and pages on what constitutes ‘evidence’ if you’re inclined to dive deeper into the definitional pond.)

Quite frankly, this discussion of proper evidence and proper research can go on and on, round and round, until we all chase our mental tails to the point of collapsing into a puddle of paralysis and inaction.

An academic researcher will view what counts as proper evidence one way; a rushed fundraiser searching for an answer to a practical problem, another.

I believe that at this stage in our trade’s evolution from myth and habit toward the greater use of evidence, what matters is that we keep moving in the direction of ‘evidence’. In short, we should always be skeptical of how perfectly the ‘evidence’ was arrived at, but never paralyzed into inaction as we wait for the perfect to emerge.

Let me illustrate.

From Evidence to Action with a Pause for Nitpicking       

Two years ago, at the height of the media meltdown in the UK caused by the overuse of industrial solicitation tactics and punctuated by the death of Olive Cooke, two ‘eminences’ of British fundraising swung into action to remedy, or at least relieve, a disastrous situation.

Ken Burnett and Giles Pegram, CBE, set out to change the sector’s mindset and actions where the treatment of donors is concerned. They established The Commission on the Donor Experience, launching 28 collaborative projects: a series of practical ‘best practice’ reports that focus on putting the donor, not the fundraising organization’s financial targets, at the heart of fundraising strategies.

Over the next 24 months thousands of volunteers were involved … the reports were prepared … then released for use free of charge. (You can see all the Commission’s project summaries here.)

THEN … no sooner had The Agitator proclaimed that the new Commission on the Donor Experience (CDE) was A REALLY BIG Deal than criticism of this hugely ambitious project sprang forth.

Ian MacQuillin, director of Rogare, the fundraising think tank at Plymouth University’s Hartsook Centre for Sustainable Philanthropy, led the charge in a piece titled OPINION: THE COMMISSION ON THE DONOR EXPERIENCE—A GOOD THING IN ITSELF, BUT PHILOSOPHICALLY CONFUSED, which you can read in its entirety here. [Disclosure note: I am a board member of the Hartsook Centre for Sustainable Philanthropy.]

Ian wrote, “The Agitator article unreservedly praises the CDE’s recommendations, which it says are “backed by a cornucopia of practical tips and recommended actions that lead to better experiences for donors”.

“But at no point do Roger and Tom recommend a critical reading of the overarching blueprint and the 28 projects to evaluate whether this “cornucopia of practical tips” and the theory on which they are based will actually deliver what they claim to. They simply take for granted that what the CDE has said is correct.

“And that’s a problem, because the Commission for the Donor Experience is flawed in places, and while I said that I would have loved to have done some of their projects at Rogare, there are others that Rogare would not have published at all, frankly, because the quality of the research isn’t up to standard.”  [Emphasis by The Agitator.]

Here at The Agitator we don’t take much for granted, let alone recommend that our readers blindly “take for granted” what the Commission says. And we sure don’t accept on blind faith that a project with as much potential as the Commission is flawed simply because it’s academically imperfect.

As much as I respect and support academic research and the evangelical efforts of Ian to advance it, I fear the polarizing dangers to the sector that come from placing too much emphasis on elevating the quality of academic research over the experiential evidence of practitioners.

This passage from Ian’s post illustrates how many academics are likely to view the work of the Commission on the Donor Experience:

“The philosophical flaw with the Commission on the Donor Experience is that it begged the question that improving the donor experience was the right thing to do. Because of this, the CDE did not seek evidence for many of its claims, considering evidence not to be required because they were ‘self-evident’, and dismissing arguments that contradicted its pre-established general conclusions.

“This philosophical flaw leads to a methodological flaw. Some CDE research looks suspiciously like it was constructed in order to arrive at particular outcomes, rather than to genuinely test a hypothesis, which would have left open the possibility that the hypothesis (i.e. the pre-established conclusion) could be falsified.”

On the other hand, thoughtful practitioners understand that it’s too risky to stay stuck in the status quo while waiting for irrefutable evidence. As Mark Phillips put it in his response to Ian’s critique:

“I haven’t read all the reports in the CDE yet and I imagine I will disagree with some of what I see. I wholeheartedly support your drive for evidence, but I would suggest that the flatlining of charitable giving over the last decade might be evidence enough that we have hit the buffers on transaction-based fundraising. Relationship Fundraising most definitely offers us an alternative that, in the absence of any other approach, is perhaps the best hope we have.”

For now, fundraisers must become far more skilled at accessing and assessing the practicalities of academic research (evidence-based). At the same time, they must become equally skilled at accessing and assessing the recommendations of experienced practitioners (experience- and eminence-based).

In short, we must turn to and learn from both if we are to make advances.

For now, I’ll give the last word in this ‘debate’ to Ian, because he has apt advice on how we should all approach information — whether eminence/experience-based or evidence-based.

“It is now the responsibility of fundraisers to read these reports critically (and all of them, certainly not just the blueprint [summary]), to identify the questions we need to ask about what evidence either proves or falsifies the CDE’s claims, how we can improve on them, and what we need to know next and do next. And there is much here to build on.”

Roger

P.S. On Monday, in the final part of this series, I’ll turn to the question of ‘how’ we access and share information, and offer some thoughts on the part each of us should play.

10 responses to “Eminence vs Evidence In Fundraising – Part 2: What is “Proper” Research”

  1. Ken Burnett says:

    Roger,

    You say, ‘My criteria for “proper” research include “evidence-based”, “empirical” findings arrived at through appropriate testing methodologies’. I agree. But though I read Ian MacQuillin’s critical post about CDE – all 3,300 words – it makes such a basic error I could not take it seriously. It may be true that I’m ‘philosophically confused’ but I dislike being so accused for flawed reasons. The main point of Ian’s polemic is that CDE did not first test the assumption that brought it into existence: that something had to be done in response to the barrage of criticism in the public prints about intrusive, aggressive, excessive, inappropriate, unwelcome approaches from fundraisers, particularly targeted at the elderly and vulnerable.

    Had Ian asked us (he has all my contact details and Giles’s too) he could have spared the sector at least half of those 3,300 words, and done us all a favour. Ian says

    ‘…it had decided on its main conclusions at the outset, and then set out to collect evidence that supported those, rather than genuinely investigating the issue with an open agenda.’

    This is nonsense. At no time did CDE ever set out to prove or justify whether improving the donor experience is, or isn’t, a good thing. We assumed that from the beginning, because the evidence that it wasn’t working as well as it should was, in our view, perfectly clear. It’s a point of view we’ve been advocating consistently for decades. And hundreds of fundraisers flocked to endorse it. So this is, and always has been, clear. CDE set out from the start to document how charities might best set about improving the donor experience.

    Why would we – ageing volunteers taking on a huge task with minuscule resources – waste our time proving something that we see as so obvious? Mercedes-Benz don’t waste time wondering whether their cars should be pleasant things to own and drive. Why would we doubt this simple truth: if supporting our causes is an enjoyable, rewarding experience, donors will do more of it. If it isn’t a pleasant experience, donors will soon stop.

    If anyone wants to attempt to prove or disprove that simple viewpoint, good luck to them. And to any sponsor who wants to sink good money in the process. But CDE’s mission, simply, was to document best practice in providing the best practical donor experience, across as many areas as we could manage – not as a fait accompli, but as a series of works in progress.

    Criticism based on such an elementary flaw is not helpful. It is not academic rigour; it’s point scoring. Not what we need right now.

  2. Lisa Sargent says:

    – Revenue growth of over 10X
    – Retention rate increase of 12+ percentage points
    – Active donor file growth of more than 8X
    – Avg newsletter response rate increase of 5+ percentage points

    Is this evidence enough for improving the donor experience? Because our case study on MQI is featured as part of the Commission on the Donor Experience, via SOFII.

    And if that isn’t evidence enough, then I don’t know what is.

    (Disclosure: Denisa Casement, Sandie Collette, and I authored the case study. It’s 47 pages long, so pour the coffee. It came to fruition as a direct result of Ken Burnett’s persistence and patience in seeking evidence-based material, and the generosity of MQI in letting us share their copyrighted work. http://sofii.org/case-study/lessons-from-merchants-quay-ireland)

  3. Hi Roger,

    “If you give a donor a great experience, she will give more and give for longer”

    We have no ‘evidence’ for this that would stand academic scrutiny. But there are masses of ‘evidence’ in the everyday sense. The Commission’s outputs have evidence based on best practice running through them like a stick of rock.

    I regard Ian MacQuillin as a respected colleague and good friend. We have been having this debate on several fora in the UK.

    You write: “As much as I respect and support academic research and the evangelical efforts of Ian to advance it, I fear the polarizing dangers to the sector that come from placing too much emphasis on elevating the quality of academic research over the experiential evidence of practitioners.”

    Quite. I hope your readers agree.

    Best, Giles

  4. A theme running through this debate, which Ken Burnett raises in part 1 and Roger elaborates on in part 2, is that it is polarized between academics and practitioners.

    I don’t think it is polarized between academics and practitioners (simply because most academics couldn’t give a stuff about professional practice in fundraising).

    However, it is polarized between those who see a need for more robust evidence in fundraising than we currently have, and those who consider a lower standard of robustness of evidence to be acceptable.

    And this is complicated by a mistaken assumption that those calling for more evidence want that to be of an ‘academic’ standard.

    Throughout this debate, ‘evidence’ is being conflated with ‘academic’. Yet there are many standards of evidence. For example, the criminal justice system in the UK operates two standards of evidence: ‘beyond reasonable doubt’ for criminal cases, and ‘balance of probabilities’ for civil cases.

    So we need to change the terms of the debate to remove this confusion.

    This is really about epistemology – how we justify our beliefs.

    There are many standards of evidence we can advance for justifying our beliefs, not just ‘academic’ evidence. Evidence means being able to provide the best supporting arguments (whatever they are and whatever their source – including the experiences of practitioners), bearing in mind that someone might always say “sorry, that’s just not good enough”.

    At which point, you have to either convince them that it is good enough, or find better evidence with which to justify your belief to them. What’s not an option is shrugging your shoulders and saying: “Well, it’s self-evident, mate, innit!”

    Roger says above that it is too risky to stay stuck in the status quo while waiting for “irrefutable” evidence.

    Yet the evidence does not need to be “irrefutable”; it only has to be the best evidence you currently have, allowing for the fact that better evidence might come along or your current evidence might actually be refuted at some point.

    I’ve seen a few conversations in the Fundraising Chat group on Facebook in which fundraisers have been asked for evidence by their boards to demonstrate the efficacy of a particular piece of fundraising they’re proposing. Of course, they can’t provide it, either because it doesn’t exist or they don’t know where to look for it (Roger describes what a pain-in-the-ass it is to track down such evidence in part 3).

    These fundraisers are often quite upset that the boards won’t just take their word for it that it works.

    Yet if certain stakeholders, such as boards, don’t TRUST fundraisers because they have little knowledge (i.e. they don’t have the evidence) of what fundraisers do, fundraisers need to give them the CONFIDENCE to trust them by providing that knowledge (evidence). And that means giving it to them in the form and to the standard of evidence they (the skeptical stakeholders) want or need, not the form or standard we want to give them. If they want academic proof, then give them academic proof, because they’re clearly not accepting the “experiential evidence of practitioners” (aka folk wisdom).

    Roger writes: “As much as I respect and support academic research and the evangelical efforts of Ian to advance it, I fear the polarizing dangers to the sector that come from placing too much emphasis on elevating the quality of academic research over the experiential evidence of practitioners.”

    OK, two things. First, I am not evangelising ‘academic research’. I am, however, unashamedly evangelising the practice of supporting your beliefs with the best available evidence and theory you can, and accepting that you might just be wrong in that.

    Second, even if we are talking about academic v practitioner standards of research, think carefully before you relegate academic research to second place behind the “experiential evidence of practitioners”. Fundraising lays great claim to being a ‘profession’. So think about how other professions might incorporate academic research and the balance they strike with established best practice.

    As for the two sides in this supposed polarization (academics v practitioners) not connecting, as Ken suggests in part 1: my think tank Rogare is the engine that sits astride the academic-practitioner boundary, with the job of translating academic ideas into professional practice, and we have a 100-strong International Advisory Panel (growing all the time) packed with fundraisers who want more theory and more evidence underpinning their professional practice.

  5. I must put on record a counter to Ken Burnett’s implication that I have not been in contact with the CDE during the course of its existence.

    I have in fact met with CDE officials and members on a number of occasions and exchanged emails directly with Ken and Giles Pegram (as have my colleagues at the Hartsook Centre for Sustainable Philanthropy), and debated, in an extremely courteous and mutually respectful way, with Giles via social media, email, phone and in person, as he says in his comment.

    The criticisms of the CDE I made in my blog (which Giles called an “excellent critique”, BTW) can have come as little surprise to anyone intimately connected with the commission, as I have shared them all previously.

    If I persisted with a criticism that Ken considers to be a “basic flaw” in my argument that he cannot take seriously, it’s because I did not receive a response or rebuttal that I considered sound or persuasive.

  6. Having been both a practitioner and an academic, I don’t subscribe to the view that academic research is ‘better’ than practitioner research, and I know Ian doesn’t either. I do, however, subscribe to the view that there is good-quality research and poor-quality research, and both academics and practitioners are very capable of producing either. Sadly, when something makes it into print or the news there can be a tendency to accept it as fact, when in fact it is nothing of the kind. I’ve seen some abysmal work gain traction over the years. Why? Because it was covered in Third Sector or some other outlet.

    I’ve managed to acquire a reputation for being ‘grumpy’ over the years because in the past I’ve spoken out when I’ve seen someone drawing erroneous conclusions from the evidence, drawing quantitative conclusions from qualitative evidence, including leading questions in a survey, claiming to measure something that in fact they are not measuring, or worse – wasting funding by adding precisely zip to the current state of our knowledge. I get angry at academics too – when they focus on mindless minutiae – or try to research something that practitioners have long had the answer to. And, as I’ve just said, academic research is quite capable of being poorly designed and executed too.

    Ian is saying that we shouldn’t accept the findings of new research simply because of who the authors are, whether their hearts are in the right place, or where the work was published. I’ve lost count of the times I’ve been told to ‘zip it’ because John or Alison meant well.
    Really?

    Somehow we need to get to the point where we routinely assess the quality of the work before we dive in and apply it – but how one does that is not immediately intuitive. It’s for that reason that we now teach folks how to do it in the IoF and AFP suites of professional qualifications. The quest for ‘perfection’ Roger alludes to is illusory – but there is nothing wrong with equipping folks with the skills they need to assess evidence both for its quality and for its relevance to the task at hand. Too many folks leave professional education with a knowledge of research and theory – but when they encounter a real-world problem they haven’t the first clue where to start. They have ten theories or pieces of evidence that they think might help them and ten relevant personal experiences that could offer solutions – but they haven’t been taught how to prioritize and thus decide which one to try first. Getting that right would massively improve decision making, and we build to that in the Advanced Diploma.

    To Lisa’s point about evidence, I don’t think Ian was saying that focusing on the donor experience would not be a good thing. We teach these things in class. I suspect that he is re-articulating a concern we had when the Commission first got started. Rather than step back and identify WHY the profession had lost its way (sending our seniors over 300 solicitations in a month, pressurizing folks with dementia to give, and swapping the names and addresses of sometimes very vulnerable people), the Commission appeared to assume from the outset that the core of the ‘problem’ was that the donor experience had to be improved.

    Well, at one level that makes perfect sense, right?

    The more interesting question though is how did we get to the point where we allowed those abuses to occur? What caused the symptoms we are now attempting to treat? Fix those issues and you prevent the ugly distortion of the donor experience from ever happening again. Fail to address them and when the dust settles in five years we’ll be right back where we started because those unaddressed forces will have triumphed once again.

    Yes, storytelling might need to be improved – but perhaps a bigger issue is the ‘business’ and (separately) ‘short-term’ culture that pervades many of our boards … A total lack of interest in the fundraising body of knowledge on the part of directors of development … That fundraising is still a profession you can join knowing nothing … A failure to think systematically about the ethics that (should) govern our sector … A failure to learn from the experiences of other sectors (notably around the concept of relationships) … A failure on the part of society to recognize fundraising as a profession … A failure to attract the finest talent into our profession to shake up current thinking … And, fundamentally, a failure to understand what that beautiful thing called philanthropy is and what is (or rather should be) its relationship with fundraising …

    If we could assemble funding what would you spend your money on?

    Exactly ….

  7. Lisa Sargent says:

    To Adrian’s comment, short-term culture is — from what I’ve seen — the single most destructive underlying issue in all of this. It’s what gave us the firebomb that is ‘churn and burn’ and left donors stumbling around in the smoldering remains.

  8. Tom Ahern says:

    See how brief Lisa’s comment is? Practitioner. Vs. the others.

    The distinction I value isn’t between “bad” research and “good” research, judged by impeccable academic standards of excellence. Penny Burk’s research might not hold up in a court of academic appeals (so I’m told) … but it’s helpful to the practitioner. It clarifies a few things.

    Whereas long research documents purporting to help, which require an interpreter, a dictionary and a bloodhound to decipher, are useless.

    So, my most valued distinction is: Can I use it or can’t I? If I can’t even understand it, I can’t use it.

  9. Pamela Grow says:

    🙂 Just the other day, I responded to The Agitator’s feedback request widget with the comment, “I wish you had *like* buttons for comments.” Lisa, that culture aspect is why all of our courses include up to eight team members, in an effort to get everyone on the same page.

    Tom, amen!

  10. With the greatest respect, Tom, if you don’t distinguish between good and bad research (why the scare quotes?), then you should.

    Good research draws robust and reliable conclusions based on the evidence it amasses to support or falsify its hypotheses. Bad research does not. As Adrian Sargeant has already pointed out, that’s nothing to do with whether the research is done by an academic or a practitioner; it’s whether it’s done by a good or a bad researcher.

    You say you want research that you can use. Well, you cannot use bad research, because bad research is useless.

    PS, short enough answer for you?