Richard Thaler, the father of ‘nudge theory’, has been awarded the Nobel economics prize. But what is nudge theory? Does it actually work? And why should we care?
The concept describes a relatively subtle policy shift that encourages people to make decisions in their broad self-interest. It’s not about penalising people financially if they don’t act in a certain way; it’s about making it easier for them to make a certain decision.
“By knowing how people think, we can make it easier for them to choose what is best for them, their families and society,” wrote Richard Thaler and Cass Sunstein in their book Nudge, which was published in 2008.
A good recent example can be found in UK pension policy. To increase worryingly low pension saving rates among private sector workers, the Government in 2012 mandated employers to establish “automatic enrolment” schemes.
This meant that workers would be automatically placed into a firm’s scheme, and contributions would be deducted from their pay packet, unless they formally requested to be exempted. The theory was that many people actually wanted to put more money aside for retirement but they were put off from doing so by the need to make what they feared would be complicated decisions.
The idea was that auto enrolment would make saving the default for employees, making it easier for them to do what they really wanted to do and pushing up savings rates. Since auto enrolment was introduced, active membership of private sector pension schemes has jumped from 2.7 million in 2012 to 7.7 million in 2016.
Organ donation is another example of an area where nudge policy is seen to have worked. Spain operates an opt-out system, whereby all citizens are automatically registered for organ donation unless they choose to state otherwise.
This is different from the UK where donors have to opt in. The Spanish opt-out system is one of the reasons Spain is a world leader in organ donation. France also switched to an opt-out regime this year. Theresa May said at the Tory Party conference that the UK would do the same.
The theory is the same as with pensions: deep down, most people would want their organs to save someone else’s life if they died in an accident, but for various reasons they never get around to registering as donors.
Now, 10 years after Nudge was published, it’s clear that many nudges turned out to be highly cost-effective compared to more traditional policies. It’s also clear that things do not always go as planned. Sometimes nudges fail, bring about unintended side effects, or even backfire.
One example of a nudge that did not work out as expected: when employees at four major U.S. universities were offered the chance to precommit to future savings, their savings went down over the nine months that followed.
To understand why changes in choice architecture may have unintended effects, it is crucial to realize that the people who are making the decisions—potential organ donors and university employees, for instance—are not always naive and passive targets. It is true that sometimes people may simply choose a default option without giving it a second thought, but that’s not always the case—sometimes they will try to make sense of how a choice is presented, and their interpretation can profoundly influence their behavior. As we enter the second post-Nudge decade, policymakers should consider and evaluate how their nudges are being interpreted to ensure they have the intended effects.
People are often unsure about which option to choose. Whether to consent to organ donation, how much to save for retirement: these are difficult and important decisions, clouded by uncertainty. But this does not mean that people will always follow the easiest path.
Instead, people may look for cues in the choice architecture that can help them come to a decision. They may try to make sense of who is presenting the choice to them and of why this choice architect is presenting options in a particular way. Finally, people may consider what their own response will signal to the choice architect and to other people.
Researchers have long recognized that defaults can be particularly powerful when they are interpreted as implicit endorsements or recommendations. Thaler and Sunstein noted in Nudge that “in many contexts defaults have some extra nudging power because consumers may feel, rightly or wrongly, that default options come with an implicit endorsement from the default setter, be it the employer, government, or TV scheduler.” But this kind of sensitivity to social cues implicit in choice architecture can also bring about unwanted or unexpected responses to nudging attempts.
As for the dip in retirement savings observed after universities introduced a precommitment option, a small detail in the plan’s design seemed to be the culprit. The original Save More Tomorrow plan presented the precommitment option only to people who had previously failed to enroll in a 401(k) plan. In contrast, the more recent implementation gave employees a direct choice between starting to save today and starting to save later. Employees may have inferred that their employer did not consider contributing to the plan urgent. Why else would it offer the option to delay?
What these examples have in common is that the decision makers were not passive targets; they were active sensemakers—in other words, people were attempting to glean information from subtle cues in choice architecture—information about the goals and beliefs of the policymaker and information about what would be a fitting response. They were actively looking for signals to help them figure out what was going on and how they should act.
In a recent article in Behavioral Science & Policy, David Tannenbaum, Craig Fox, and I argued that it is time for an updated framework of choice architecture (choice architecture 2.0); this update incorporates an explicit analysis of the implicit social interactions between decision makers and policymakers. Choice architecture 2.0 recognizes that people sometimes, though not always, act as social sensemakers when confronted with a decision and that this factor can be critical to the success or failure of nudges and other behavioral policy interventions.
Once you know where to look, it’s easy to find more examples in which people seem to act as social sensemakers in response to changes in choice architecture. Credit card customers lowered their average monthly payments after minimum-repayment information was introduced to their credit card statements, possibly because they interpreted the minimum repayment number as a suggested amount. Physicians prescribed less aggressive pain medication when the menu from which they selected had the aggressive options lumped into a single category, possibly because patients inferred that options listed separately were more popular among peer doctors. Shoppers in stores that introduced a small surcharge on the use of plastic bags were more likely to bring their own reusable bags from home, in part because the surcharge implicitly communicated social norms about waste reduction.
Sometimes nudges work because people are concerned about what their behavior signals to the policymaker, or to others around them. Think of airline captains who started flying more fuel efficiently, or of doctors who reduced their rate of inappropriate antibiotics prescription. In both cases, the improvement in behavior occurred after the decision makers—the airline captains and the doctors—had simply learned that they were part of a research study. Instead of interpreting these so-called Hawthorne effects as a nuisance in empirical research, we should view them as potent new tools in the nudging toolbox.
At the same time, nudges sometimes fail or backfire for the same reason: people care about what their behavior signals.
These and other cases indicate the need for a more systematic understanding of social sensemaking in choice architecture. As a first step toward that goal, policymakers and researchers should routinely engage in a social-sensemaking audit, which would identify potential triggers of social sensemaking to help design more effective policy.
In other words, auditors would ask: When do decision makers act as naive, passive targets, and when do they act as sensemakers who look for social cues in the choice environment? Although empirical research on this question is scarce, it seems reasonable to think that people are more likely to engage in sensemaking when they are uncertain about their preferences, when they distrust the person or institution they hold responsible for the design of a choice environment, or when they notice a change in the choice environment.
A social-sensemaking audit also aims to anticipate the type of inferences that decision makers are most likely to make about the beliefs and intentions of the choice architect. If sensemaking occurs, will it lead to greater compliance with the intended goal, as with the implicit social norms driving reuse of plastic bags, or is there potential for backfiring, as with the option to precommit to retirement saving that was provided to university employees?
Incorporating these considerations into our understanding of choice architecture can prevent the implementation of unsuccessful nudges and promote more effective ones, including some that have been overlooked in the past. A nudge is, after all, about steering human behaviour, not forcing an about-turn.