Debiasing Decision Making: How Hard Can It Be?
One of our obsessions at BehavioralSight is how we, as business people, make decisions. What leads us to think one path is better than another? How are we affecting each other’s evaluations? Are we any good at this? Are our decisions just random chance?
In recent years, the broader behavioral science universe has also been interested in this, and has helped us out with many solid strategies to help us mitigate bias and noise in our decision making. Some of these include: 1) involve decision makers with diverse domains of expertise; 2) define the list of attributes that are important to consider, before the decision-making process begins; and 3) have all decision makers independently assess their options across those attributes before discussing them as a group. In a 2019 article, authors Daniel Kahneman, Dan Lovallo, and Olivier Sibony bake these best practices and others into a decision-making process they developed called the Mediating Assessment Protocol (MAP). Recommendations such as these excite us (yes, excite us!), as they seem promising, straightforward, and relatively easy. But what happens when we, as oh-so-enlightened behavioral scientists, try to use these decision strategies ourselves? We are humbly reminded that implementing them can be more difficult than it seems.
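To make the three strategies above concrete, here is a minimal sketch of what "independent assessment across predefined attributes" can look like in practice. This is purely illustrative and not from the Kahneman, Lovallo, and Sibony article; the attribute names, raters, and scores are all hypothetical.

```python
# Illustrative sketch: aggregate independent, attribute-by-attribute
# assessments before any group discussion takes place.
# Attribute names, raters, and scores below are hypothetical.

ATTRIBUTES = ["experimental_methods", "theory_knowledge", "communication"]

# Each rater scores each option on each predefined attribute (1-5),
# independently -- no discussion until every assessment is recorded.
independent_scores = {
    "rater_a": {"candidate_1": [4, 3, 5], "candidate_2": [2, 5, 3]},
    "rater_b": {"candidate_1": [5, 3, 4], "candidate_2": [3, 4, 3]},
}

def mean_profile(scores_by_rater, option):
    """Average each attribute's score across raters for one option."""
    per_rater = [scores[option] for scores in scores_by_rater.values()]
    n = len(per_rater)
    return [sum(vals) / n for vals in zip(*per_rater)]

# The group discussion then starts from these mediating assessments,
# one attribute at a time, rather than from overall gut impressions.
for option in ["candidate_1", "candidate_2"]:
    profile = mean_profile(independent_scores, option)
    print(option, dict(zip(ATTRIBUTES, profile)))
```

The point of the structure is visible even in this toy version: each option gets a profile of attribute-level averages, so the conversation can center on where assessments diverge rather than on whoever speaks first.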
Here are some of the barriers we’ve experienced:
It feels like a waste of time.
The first barrier is that, quite simply, it can feel like an unnecessary time suck. Why invest all this time in creating a detailed, structured, rigid process when we can just ... not?
The trick is to realize that this process isn’t actually brand new: it’s replacing or updating whatever process is already in place. Some process is necessarily already being used, whether it’s one executive acting as dictator, a roundtable discussion, or multiple unstructured meetings. These less formal approaches to decision making also take time, even if no one’s keeping track of it. Moreover, in their article, Kahneman and his coauthors point out that MAP has been shown to take less time than these default processes, as it creates an “orderly work flow.”
I was reminded of this recently when our team was making an internal hiring decision. At the outset, the tasks of creating decision criteria, aligning on them, defining their metrics, and completing independent assessments for each phase seemed like a LOT of extra work, and I felt we might be overthinking it. But when the team needed less than 20 minutes of discussion to decide which candidates to move forward, it was clear that the initial investment had paid off, in both time saved and confidence in our decision.
It can be tempting to poke holes.
Even after recognizing the value of creating a formalized decision process, it can sometimes still be tempting to veer off course. It’s relatively easy to be clinical about decisions from the outside, but once inside, our vision can get foggy and it can be hard to remember why sticking with the structure is so important.
For example, one of the best practices of structured decision making that I mentioned is making a list of attributes before evaluation begins, and focusing on how each option performs on each of those attributes throughout the decision process. In the case of the hiring decision my team was making, the ‘options’ were the candidates, and the ‘attributes’ were things like experience using experimental methods and knowledge of behavioral science theory. There were a couple of times when I saw a candidate who didn’t seem, by our measures, to perform particularly well on one of those attributes, but who came in with some other quality that made me want to keep her as a top contender anyway. Maybe she went to my alma mater, or shared a similar sense of humor. And if we had those characteristics in common, surely we’d make good teammates ... right?
Having a list of relevant attributes in front of me didn’t all of a sudden make me blind to these other desirable traits, or completely prevent me from trying to convince myself that they were just as important as the attributes we had listed. However, it did force me to take a more honest, rigorous look at whether the candidate was truly a good fit for what we were looking for, rather than just going with my gut reaction that I liked her and wanted her on my team. And a nudge like that alone is likely to lead to outcomes that are less biased and less arbitrary than what you had before.
You might be forgoing some of the ‘fun’.
As this past year has perhaps made more obvious than ever before, half of what sometimes makes work bearable is going into an office and venting to ... ahem, I mean, having fruitful discussions with... your colleagues whenever you so choose. But while this might feel gratifying, it’s also not great for unbiased decision making. As someone naturally inclined towards talking through my every thought and feeling, this is perhaps the hardest part of good “decision hygiene” for me.
In the case of our hiring decision, after each candidate we interviewed, all I wanted to do was text my colleague about that one answer I thought was shockingly good, or that potential red flag I thought might be hidden in a response to a different question. But unfortunately, decades of decision-making research tell us that it is best to wait until each team member has come up with, and written down, their own thoughts before discussing them as a group.
While this took some momentary discipline, in return, I got to see our “group mind” in action, in data. When the results of our independent assessments of candidates were revealed, I got to see what our team really thought, not just what someone said in a one-off response to my text. My whole team got to learn about each other and our group, providing a different sort of gratification that our prior process never enabled.
So what's the gist?
It’s easy to think that theory will provide us with an easy fix to the trials and tribulations of decision-making. But alas, having the theoretical knowledge might be the easy part — it’s real-world implementation that’s tricky. Of course, that doesn’t mean the effort isn’t worth it, but it does mean we need to once again reckon with the fact that behavioral science isn’t immediately transformative pixie dust. It takes our own sweat and good faith efforts for the magic to happen in real life.