Nudging to Improve Human Experience or Simply to Sound Smart?
There seems to be an obsession with biases lately and, by extension, a flurry of consultants and agencies slapping “behavioral science” on their letterhead. Biases are real and important to understand, but treating them as the one and only answer is dangerous.
When examined solely through distinct biases, human behavior appears concrete and predictable. Just slap a social proof message on your ask string, use a single victim’s story instead of statistics, reduce ‘friction’ with fewer form fields, or make monthly your default giving option. Fundraising in a behavioral science box.
The reality is that human decision making is heterogeneous, fluid and fuzzy. But the allure of biases is so strong that when a new effect doesn’t fit any of the existing biases, we simply create a new one. Our complex decision-making process is reduced to party tricks.
This leads to the overly simplified view you’ll often hear preached at conferences (usually by people with no academic training in the field). Chances are you’ve read or heard that we rely heavily on biases, that they cause behavior, that they explain everyone’s decisions, and that they mostly lead to systematic errors.
The serious risks of such a view (we tried to bust it here) are already showing:
1. We miss all the nuance. Treating biases as laws that apply to everyone, everywhere makes us ignore their intricacies. It’s so appealing to consider them ubiquitous that we overlook the beauty and importance of individual or contextual differences.
2. We’re quick to dismiss them. Because we’re unaware of their boundaries or complexities, we’re quick to dismiss biases after a single failed replication or application. We misinterpret that failure as proof the bias doesn’t exist, contributing to a climate of skepticism towards scientific research. But human behavior is sensitive to circumstances. If we fail to replicate an effect in a given context, that doesn’t mean the originally observed effect was false. It’s as unwise to dismiss a whole effect after one failure as it is to expect the same effect in everyone, everywhere.
3. We don’t experiment. We believe biases are laws that cover every situation and every person unconditionally. We’re unaware that they’re merely tendencies that hold for some people, under certain circumstances. So we skip a vital step before applying them to a new context: testing (a minimal sketch of what such a test can look like follows this list). This results in ill-informed, or failing, interventions, which in turn deepen skepticism towards both the interventions and the research behind them.
4. We’re all experts. Biases spread like wildfire, making people feel familiar with them. But while most know the biases’ names, very few can claim true knowledge of their meaning or their complexities. Just because we can talk about them, or spot examples of them around us, doesn’t mean we can analyze a particular issue or design and implement an effective intervention. Reading a book on behavioral science doesn’t make us scientists! We might think we know enough when, in reality, we’ve barely scratched the surface.
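To make point 3 concrete: before rolling a nudge out, run it as a controlled test and check whether the lift you see could plausibly be noise. The sketch below is a hypothetical illustration, not a recipe: the scenario (a default monthly option versus a standard ask page), the visitor and donation counts, and the function name are all invented for the example, and a simple two-proportion z-test is only one reasonable way to read such a result.

```python
import math

def two_proportion_ztest(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference in conversion rates between two variants."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: control ask page vs. page with a default monthly option
p_a, p_b, z, p = two_proportion_ztest(conversions_a=210, visitors_a=5000,
                                      conversions_b=258, visitors_b=5000)
print(f"control: {p_a:.1%}, variant: {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")
```

If the p-value is large, or the sample is small, the honest conclusion is “we don’t know yet” in this context, not “the bias failed” or “the bias works here”.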
When it comes to understanding and influencing human behavior, there is no substitute for experience and deep knowledge. Half-knowledge is far more dangerous than no knowledge at all, as it can lead to misguided applications or testing (see Myth #3).
Behavioral economics is not magic, and human behavior is complex. There are many contextual influences and simultaneous effects that need to be considered. We get the appeal of oversimplifying and overgeneralizing, but it won’t get results. To give this science a chance to make a real impact, we need to start studying human behavior with the respect it deserves.
Kiki