By Cass Sunstein and Richard Thaler

Rating: 8/10

Best Line #1: Some kind of nudge is inevitable. Choice architecture, whether public or private, must do something.

Best Line #2: Never underestimate the power of inertia; and remember that this power can be harnessed.

If economics is the dismal science, behavioral economics is the human one. I think it might be the most predictive, too. A firm grasp of behavioral models, especially at the micro scale, can work very well at predicting behavior with very little information. I think this discipline, combined with psychology and literature (fiction and non-fiction), forms the core of understanding our species. Study these three things and you’ll be an anthropologist/sociologist/philosopher in no time. Without realizing it.

And though there are many books on the topic, I think the best place to start developing proficiency in the concepts is with our feature of the week, Nudge. Because what, exactly, do you do with this information once you understand the biases, frameworks, and cognitive dynamics that Kahneman and others give us? I think you start with an investment in choice architecture. Which is what this book is all about.

Over the week, we covered a few critical concepts with the articles linked below.

Choice Architecture At The Grocery Store

A Nudge Towards Better Information Diets

The Matters Most Sequence

The Best Hundred Dollars You Won’t Spend

Now we’ll review the remaining core elements and hopefully come away with a new view of how choices are styled, how nudges are made, and what we can do to make the best of those things.

You Nonconformists Are All Alike

To conform or not conform. That is the question. We face it every day. Social proof compels us to follow the crowd, do what others do, and feel the safety and validity that comes from doing so. This is especially true with creative endeavors where the desire to conform is so great that we actually just copy what everyone else is doing.

We sometimes think copycat behavior is born of incompetence, but I think it’s more often about insecurity. A musician who mimics a style other people established? A content creator doing the same stuff others do on YouTube? A company ripping off Apple’s design philosophy? Blame it on social proof. Herd instinct. Imitation is the sincerest form of flattery and, in this case, the copycat wants to be like the original because it feels secure.

Take me for example. When I’m unsure what to write next, I just ask “What would Seth Godin do?” because whatever he’s done, it’s socially acceptable, it’s admirable to me, and thus it makes me feel secure in the choice I’ll make. Is that copycat? Sure. Nothing’s new under the sun. But it’s also a certain kind of enlightened conformity. Everyone has a leader. Everyone conforms to something.

This is important because it explains a bedrock dynamic within behavioral economics and the applied technique of the “nudge”. As our authors explain,

Why are we easily nudged by others? We like to conform.

Again, it’s an insecurity. Someone might question us if we don’t conform. Because someone (I don’t know who) is watching. Always watching. This points to the spotlight effect and it, again, leads to our conforming behavior. From the authors:

One reason why people spend so much effort conforming to social norms is that they think that others are closely paying attention to what they are doing.

Conversely, if you want to be more resistant to nudges, develop the antithesis of the spotlight effect. Take the view that no one is watching and no one cares. If you hold that view for long enough, you’ll probably find yourself wearing flip-flops to work and eating pasta with your hands.

Which would be awkward.

So again, behavioral economics and nudges, on the whole, are driven by this desire for conformity. Even the weirdos are weird together. In the same way. So is there a way to use this power for good? I think leadership can benefit greatly. Management, too.

The Open, Transparent Leader

A little nudge, if expressed confidently, can have major consequences for a group’s conclusions or opinions.

There are people who believe the Earth is flat. Why? Because someone told them and they believed it? I don’t think so. First, let’s remember that conformity plays a role here. Some people aren’t really fervent believers in a flat earth; they are just anti-establishment and are looking for a home. So they don’t believe it deeply but they happen to follow someone who does. I imagine that makes up some portion of the flat earth group.   

Then there are others who have effectively been nudged in this direction because they’ve been invited to see the evidence for themselves. They’ve flown on a plane and failed to recognize any curvature in the earth. They’ve gone down internet rabbit holes and been offered patterns to connect for themselves. They’ve been given the most seductive and powerful challenge of all: “Don’t believe me? See for yourself.”

And so they have.

Open, transparent confidence is a very powerful nudge in two ways: first, it gives people license to conform (I’m following the leader because they sure seem confident); second, it gives the hesitant ones a way to persuade themselves. “I’m not sure so I better take a look for myself.” That transparency doesn’t capture everyone but, when people start examining and the confidence doesn’t waver under the scrutiny, it captures quite a few.

Whole movements have ebbed and flowed on this sort of impassioned openness. For good or for ill. So why does this matter? Because the same dynamics that lead people to believe in a flat earth (confident, transparent thought leadership from someone) can lead people to believe they can lose weight, get a college degree, build a company, or give to charity.

In all these instances, what makes the nudge powerful is the lack of a “sales factor” and the lack of force. Confident transparency invites others to choose. No arguments, no “money back guarantees”. Only an invitation. This open choice, when offered with confidence and transparency, will be attractive to some. Not all. Never all. But some. Sometimes many.

And that’s just within the realm of leadership. Not everyone wants or needs to be a leader at all times. We will all be architects, though. We offer up choices to others all the time. We are also subject to the choices others offer us. So the remaining sections of the review have a broader appeal, perhaps. Choice architecture is all around us. Here’s what it’s made of and how we can use it to the best possible end.  

The Components of Choice Architecture

Our authors provide five components to the framework, listed here as actions:

  1. Provide incentives
  2. Map consequences and outcomes
  3. Provide defaults
  4. Give feedback
  5. Expect error

Provide Incentives

So what do I get if I choose this? Or that? What do I get? Incentives, be they negative or positive, drive everything we do. Their power is quite obvious in that we consider them before we do just about anything. But the real nature of incentives isn’t so obvious or appreciated in a given moment.

For example, when offered a chocolate bar or an apple, people regularly choose to eat a chocolate bar if they can have it today and regularly choose an apple if they can’t eat the free offering until tomorrow. The incentives for either choice are the same. The chocolate bar is just as unhealthy today as it is tomorrow and yet we choose it today. Because the incentive of its sweet enjoyment carries more weight with us, here and now, regardless of its unhealthiness.

So while we all understand incentives are important, what we occasionally forget is that incentives compete. Indulgent, short-term incentives constantly battle against healthier, but less pleasurable, long-term incentives. The better we understand this, the better we can make choices through the next component of choice architecture.

Map Decisions To Outcomes

Imagine if the aforementioned test were done slightly differently. Imagine people were offered an apple or a chocolate bar. But the offer wasn’t …

“Do you want an apple or a chocolate bar?”

but instead …

“Do you want a healthy apple that will satiate you and leave you feeling good about your choice or an unhealthy chocolate bar that will spike your blood sugar and leave you feeling guilty?”

This is simplistic and wildly suggestive, of course, but it’s also not untrue. Expanding choices to correlate them to likely outcomes is a critical part of designing the right architecture.

Provide Defaults

In some ways, this third component is more about the chooser than the architect. If a person adopted a rule to never eat chocolate or any other refined sugar, then the choice of apple vs. chocolate is no contest. A default that favors the long-term benefit is widely considered a great thing for everyone to adopt.

In the policy realm, this is why we now have deferred compensation (retirement) as a default benefit to employees. This is also why we have default regulations, required minimums, for the way many things are produced.

Defaults play to the status quo bias and the broader bias of loss aversion. It’s far easier to accept what is already established than to expend the mental energy to make a reversal. The classic example, as it relates to choice architecture, is the study of opt-in versus opt-out organ donation programs. In 2003, research published in the journal Science reported that Germany’s opt-in system had a 12% consent rate whereas Austria’s opt-out system had an astonishing consent rate of 99.98%. There’s more to the story, I’m sure, but it’s as classic and powerful an example of the “nudge” effect as anything we could come across. Best of all, it’s a useful way of thinking about how to harness the power of inertia to your advantage.

Give Feedback

What good is a good decision if you don’t know it’s good? Awkward phrasing aside, this is probably the most important component of the choice architecture system. As our authors put it,

Feedback is crucial to improving performance. Well-designed systems tell people when they are doing well and when they are making mistakes.

When it comes to nudges, I find this feedback mechanism most common in software notifications. How many of us have been prompted … er, nudged to use our software because of some thingie that pops up on the status bar? In the case of software, the nudge offers an incentive—you’ll acquire new information—and notice, too, that these notifications are “always on” by default. Think about that.

Expect Error

Choices need to be made as simple as possible. I struggle with this because I occasionally give detailed instructions in certain circumstances and it never goes well. To “expect error” is to expect that people will regularly make bad choices or misunderstand their options, even with the best architecture behind the design. Whether dealing with children or executives (same thing?), clear choices and outcomes reduce the chance of error. But they don’t eliminate the possibility. So expecting error, and anticipating a response to it, is crucial.

For example, if you were a dietician attempting to help a person choose a diet, I don’t think the options should include, say, a ketogenic regimen. That’s likely prone to massive error. Unless, of course, it was slowly developed as an option over time so that the person starts with the choice of “ketogenic two meals a day”. A smaller choice, with less likelihood of error, is a nudge towards future success.


Some kind of nudge is inevitable. Choice architecture, whether public or private, must do something.

This might be the best line of the book. Go ahead. Use these techniques. You’re going to do some of this regardless and you’re going to be affected by these things, too. To be mindful of the nudges we all must face, and be deliberate in ways we nudge others, is to operate on a higher level.

As much as this book is a foundation for behavioral economics, we’ve not really focused on specific behavioral economic concepts. Instead, we’ve looked at applied methods that stem from the discipline. So I’d feel terrible if I didn’t at least share some core principles that I think our authors summarize well. Beneath the five items below are a number of psychological and behavioral concepts that aren’t stated but are very present in each idea. To learn more, read this excellent book! And then read many more books after it, such as Thaler’s follow-up Misbehaving, Cialdini’s Influence and Pre-Suasion, and Daniel Gilbert’s Stumbling on Happiness.

To tide you over in the meantime, here are five psychological principles that underlie human behavior, as quoted from our authors:

  • People say they will do something but seldom follow through (it’s easy to pledge, hard to act)
  • Self-control restrictions are easier to adopt if they take place some time in the future.
  • Loss aversion: people hate losing something more than they love gaining something.
  • Money illusion: people mistake the value of a dollar today to be the same as the value of a dollar tomorrow.
  • Inertia is a powerful, powerful force.

There is a lot of wisdom in these five humble sentences. And a lot more within this fine work. You can buy it here on Amazon.

Mental Models and Principles

  • A choice architect has the responsibility for organizing the context in which people make decisions.
  • There is no such thing as a “neutral” design.
  • The key is that choices are not blocked off or significantly burdened. If people want to do stupid things, the policy won’t make it hard for them. The policy just tries to nudge them the right way.
  • Never underestimate the power of inertia.
  • Kahneman and Tversky identified three major heuristics that translate to biases of judgement: anchoring, availability, and representativeness.
  • Anchors serve as nudges. The more you ask for, the more you tend to get.
  • Availability is the ease with which people can recall examples of something, which in turn makes that thing seem much more likely than it may be.
  • Self-control can be illuminated by thinking about an individual as containing two semiautonomous selves: a far-sighted “planner” and a myopic “doer”. The planner speaks from your reflective system; this is Mr. Spock. The doer is your automatic system and is Homer Simpson.
  • Social influences come in two basic categories: information and peer pressure.
  • Humans make mistakes. A well-designed system expects its user to err and is as forgiving as possible.
  • Attitudes toward risk depend on the frequency with which investors monitor their portfolio.
  • The invisible hand works best when products are simple and purchased frequently.