Research shows that most people follow simple rules, or heuristics, about half the time or more. Making sound decisions is hard, and achieving the best possible ("optimal") outcomes is harder still, so individuals often fall back on shortcuts and basic rules instead.
From business leaders and policymakers to doctors and judges, people frequently choose what feels right rather than attempting to figure out the best possible actions. This often occurs with complex, important decisions where new information must be considered.
It is well understood among psychologists and other scientists that people are not always rational. The late Daniel Kahneman, a renowned psychologist and winner of the 2002 Nobel Prize in Economics, coined the phrase "heuristics and biases" for the mental shortcuts people use and the errors that result.
However, some people are more rational than others, or at least some people are rational sometimes. Certainly, no one is suggesting that we all go through life acting solely on gut feelings, even if it sometimes appears that way while watching the news. The question is, how often are we rational, and what are the common pitfalls we fall into?
Simple Rules We All Use
In an article published in the journal Management Science last year, my coauthor Michele Garagnani and I investigated precisely that. We conducted a large study with problems chosen to represent significant decisions where new information is important.
Generally speaking, you have some idea of whether an action might be good or bad (a business strategy, a medical treatment, a financial policy). Then you receive some information about how it's going so far (first-quarter profits, the results of a medical test, early economic indicators). Finally, you must decide whether to continue or do something else.
You might think this is simple. If it's going well (high profits, good test results, lower inflation), stick with it. If not, do something else, right?
Wrong. That's a rule of thumb: "win-stay, lose-shift," also known as simple reinforcement. The rule is often correct, but not always. Sometimes, the best thing to do after positive feedback is to change.
For instance, a doctor might see that you are responding well to a mild medication and conclude that a more aggressive treatment could achieve even better results. Or unexpectedly high profits could indicate that the market is ready for new products. In decisions where new information is involved, rational choices require updating beliefs accurately (technically, using a principle called Bayes' rule), and people generally struggle to do so.
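To make the contrast concrete, here is a minimal sketch with made-up numbers rather than anything from our study. A "win-stay, lose-shift" decision-maker reacts only to the last outcome, while a Bayesian decision-maker updates a belief about whether the action is good; with a skeptical enough prior, even a win can leave that belief below 50%, so changing course after positive feedback can be the rational move.

```python
# A toy contrast between "win-stay, lose-shift" and Bayesian updating.
# All probabilities here are illustrative assumptions, not study parameters.

def win_stay_lose_shift(current_action, last_outcome_good):
    """Simple reinforcement: keep the action after a win, switch after a loss."""
    return current_action if last_outcome_good else 1 - current_action

def bayes_update(prior_good, outcome_good, p_win_if_good=0.7, p_win_if_bad=0.4):
    """Update the belief that the action is good after one noisy outcome.

    p_win_if_good: chance of a good outcome if the action is truly good.
    p_win_if_bad:  chance of a good outcome if the action is truly bad.
    """
    like_good = p_win_if_good if outcome_good else 1 - p_win_if_good
    like_bad = p_win_if_bad if outcome_good else 1 - p_win_if_bad
    evidence = like_good * prior_good + like_bad * (1 - prior_good)
    return like_good * prior_good / evidence  # Bayes' rule

# Reinforcement says "stay" after any win, regardless of beliefs:
print(win_stay_lose_shift(current_action=1, last_outcome_good=True))  # -> 1 (stay)

# Start skeptical (30% belief that the action is good) and observe one win.
belief = bayes_update(prior_good=0.3, outcome_good=True)
print(f"Belief after one good outcome: {belief:.2f}")  # ~0.43, still below 50%

# A Bayesian may still rationally switch after that win, because one good
# result is not enough to overturn a sufficiently skeptical prior.
```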
People also frequently use other simple rules. A major one is decision inertia: ignoring new information and stubbornly repeating the previous decision, for example, always buying the same brand of cereal or car even when dissatisfied with it (although sometimes there may be good reasons to be loyal to a brand).
Most People Follow Simple Rules Most of the Time
In our experiment, 268 people, in two groups, made numerous sequential decisions involving new information, with pay based on successful outcomes. We used problems where reinforcement and decision inertia were sometimes correct and sometimes wrong, so we could see whether people made rational choices or merely followed rules of thumb that sometimes happened to be right.
We then used statistical tools called finite mixture models to estimate which rule each person used most of the time. Nearly half the people (48%) were "mostly rational," meaning that they generally made the correct choice, even when some rules offered the wrong answer. A little over a quarter (28%) were primarily reinforcers, chasing wins and losses (here's a tip: if you're like that, don't buy stocks).
About 15% of people consistently showed decision inertia, repeating their initial choice without any changes. And about 8% adhered to other rules which also ignored belief updating.
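For readers curious about the mechanics, here is a heavily simplified sketch of the classification idea. The rules, choices, and error rate below are hypothetical, and the real analysis estimates type proportions and error rates jointly rather than fixing them. Each candidate rule predicts a choice on every trial, and a person is assigned to whichever rule best explains their observed choices, allowing for occasional mistakes.

```python
import math

# Toy type classification (assumed, simplified setup).
# A person following rule r with error rate eps matches r's prediction
# with probability 1 - eps on each trial, so their log-likelihood is:
#   sum of log(1 - eps) over matches plus log(eps) over mismatches.

EPS = 0.1  # assumed "trembling hand" error rate

def log_likelihood(choices, predictions, eps=EPS):
    return sum(
        math.log(1 - eps) if c == p else math.log(eps)
        for c, p in zip(choices, predictions)
    )

def classify(choices, rule_predictions):
    """Assign a person to the rule with the highest likelihood."""
    return max(
        rule_predictions,
        key=lambda rule: log_likelihood(choices, rule_predictions[rule]),
    )

# Hypothetical data: what each rule would have chosen on five trials,
# plus one person's actual choices.
rules = {
    "rational (Bayes)": [1, 0, 0, 1, 1],
    "reinforcement":    [1, 1, 0, 1, 0],
    "inertia":          [1, 1, 1, 1, 1],
}
person = [1, 1, 0, 1, 0]
print(classify(person, rules))  # -> "reinforcement"
```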
Those numbers could be seen as a glass half full or half empty: half the people mostly follow rules of thumb (in our study), but half the people are generally rational even in somewhat difficult decisions.
The reality is more complicated. We also used a more sophisticated statistical method, called hidden Markov models, to estimate whether people were switching rules across decisions and how often they used each rule. The point is that half the people might be "mostly rational," but that leaves open whether "mostly" means 51% of the time or the vast majority of the time.
The answer, at least on average in our study, is 70%. So even if half of our participants were rational most of the time, a significant portion of their decisions weren't.
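To give a flavor of where a number like that comes from, here is a bare-bones, two-state sketch of the hidden Markov idea, with invented parameters rather than our estimates. The hidden state is the rule in use on a given trial, rules can switch between trials, and the standard forward recursion tracks how likely each state is as the choices come in; averaging those probabilities estimates the share of trials spent being rational.

```python
# Minimal two-state hidden Markov sketch (illustrative parameters only).
# Hidden state per trial: 0 = "rational", 1 = "rule of thumb".
# Observation per trial: 1 if the choice matched the rational prediction.

STATES = (0, 1)
INIT = [0.5, 0.5]            # assumed initial state probabilities
TRANS = [[0.8, 0.2],         # rules are "sticky": switching is rare
         [0.2, 0.8]]
EMIT = [[0.1, 0.9],          # rational state: 90% rational-looking choices
        [0.5, 0.5]]          # heuristic state: coincides with rational half the time

def forward(observations):
    """Filtered probabilities P(state at t | choices up to t) for each trial."""
    filtered = []
    alpha = list(INIT)
    for obs in observations:
        # Weight each state by how well it explains the current observation...
        alpha = [alpha[s] * EMIT[s][obs] for s in STATES]
        # ...normalize to get P(state | data so far)...
        total = sum(alpha)
        alpha = [a / total for a in alpha]
        filtered.append(alpha)
        # ...then let the state possibly switch before the next trial.
        alpha = [sum(alpha[r] * TRANS[r][s] for r in STATES) for s in STATES]
    return filtered

# Hypothetical choice record: mostly rational-looking, with some misses.
obs = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
probs = forward(obs)
share_rational = sum(p[0] for p in probs) / len(probs)
print(f"Estimated share of trials in the rational state: {share_rational:.0%}")
```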
Worse, when we paid participants more (higher monetary incentives), they used simple rules more often, up to 40% of the time. This is an instance of the reinforcement paradox: paying more for a win makes people react more strongly to it and rely on reinforcement more often.
It also turns out that nearly everyone is rational sometimes. Even those labeled as mostly following a rule of thumb also engaged in rational thinking 20% of the time on average. And they mixed it up across rules, for example sometimes following inertia even if they were primarily following reinforcement.
This shows that reducing people to a single type of behavior (e.g., "reinforcers") is always too simplistic. Putting it all together, we estimated, across all participants and all decisions, that rational decision-making was used 43% of the time (and reinforcement 35%).
What We Learned About Decision-Making
What does this all mean, beyond this particular study?
First, there are indeed "types" of decision-makers among us, at least for the problems we used: one size doesn’t fit all. Some people fall more often for reinforcement or inertia, and some people most often try to figure things out when given a chance.
Second, and perhaps most importantly, no one is purely rational, or a pure reinforcer, or acting solely out of inertia, even for specific issues. People who mostly follow their gut (reinforcers) also try to figure things out sometimes, and people who generally think things through also fall for simple rules a significant portion of the time.
It isn’t just that one size doesn’t fit all, but rather, that often, one size doesn’t fit one.