The Confidence Trap: When Data Leads Us Astray

Knowledge is power, but information is not
—David Lewis

There’s a certain type of leader I think of as the “metric maniac.” They are quite common in the age of AI and the Internet of Things. The metric maniac loves data of all kinds. They take Peter Drucker’s clever maxim—“What gets measured gets managed”—a step too far, thinking that by measuring everything, they can manage everything.

What Drives the Metric Maniac?

The motives behind the metric maniac’s love of information vary from person to person. Some love data because it makes them feel confident, in the know, and in control. Others may love an abundance of data because it makes them look good—they collect a bunch of what The Lean Startup author Eric Ries calls vanity metrics, putting together a portfolio of attractive but misleading measures that create a rosy picture.

Whenever I sell a business, I always get long lists of questions from the potential buyers who are metric maniacs. Nine times out of ten, it’s pointless data collection, usually around the areas of sales and finance. The buyers want to know how many widgets we sell in Puerto Rico and on and on and on. But the long list of information they want won’t tell them what they need to know: Is the business growing? Is it making more or less money than it used to?

Without the proper context, metrics mean nothing. In fact, that truth is a driving force behind CEO Software—a tool that helps CEOs single out the metrics that matter most and stay on top of them weekly.

A Lesson from the Racetrack

The metric maniac fails to see that reams of additional information do little but cloud their grasp of the situation. One of my favorite demonstrations of this comes from a study done at a racetrack in the 1970s.

The psychologist Paul Slovic of the Oregon Research Institute asked a set of expert horse-race handicappers to predict the outcomes of forty races. To help them in the task, he presented them with a list of eighty-eight variables pulled from charts of certain horses’ past performance—things like the weight of the rider in the race, the number of days since the horse’s last race, the horse’s age, its total wins in the previous year, etc. From this list of eighty-eight, Slovic asked the experts to pick out the five pieces of information they would most want to know as they predicted the outcomes of the forty races. He then asked the same experts to choose further variables they would want to know if they were only allowed ten, twenty, or forty.

Each handicapper judged all forty horse races under four different conditions: with their top five variables for the first round of predictions, their top ten for the second, their top twenty for the third, and their top forty for the last. Slovic was interested in the “stresses caused by information overload.” Would these experts make more accurate, confident, and consistent predictions when they had forty data points at their fingertips as opposed to five?

A typical metric maniac might be surprised by the results. The experts’ accuracy in ranking the top five finishers in each race remained essentially flat whether they had five, ten, twenty, or forty variables available (in fact, the experts whose accuracy decreased with more information outnumbered those whose accuracy increased). What did change was the experts’ confidence. As more data points became available, their confidence steadily increased—despite, as we’ve seen, no increase in accuracy. “These results,” writes Slovic, “should give some pause to those of us who believe we’re better off by getting as many items of information as possible, prior to making a decision.”

More recent research has echoed Slovic’s finding. A paper by researchers from the Universities of Chicago and Toronto, for example, reported on three studies and concluded that “when judges receive more information, their confidence increases more than their accuracy, producing substantial confidence–accuracy discrepancies.” In other words, the results of Slovic’s horse-racing study have been replicated many times.

More data does make you more confident. More data does not make your decisions more accurate.

The Perils of Overconfidence

You may already have an objection in the chamber: “Yes, but the handicappers’ accuracy stayed flat, so what’s the harm of tracking lots of information?”

Consider two things. First, the high confidence the additional data gave the experts is not necessarily a good thing. In fact, I’d call it a risk. When the horse-race handicapper has forty variables in front of him, his confidence is sky-high—and he is therefore likely to place a much larger, riskier bet. Really, though, he should feel no more confident than when he had only those five variables available.

The same is true of metric maniacs, who frequently operate with a false sense of certainty thanks to all the comforting charts and spreadsheets at their disposal. That false certainty can lead them to make costly mistakes and to live in a sort of fantasy world.

Second, the collection of data takes up a lot of time and effort that could be spent elsewhere. The effective manager helps people spend these finite resources to move the needle on the right metrics, not to collect every conceivable bit of information about all the metrics. 

Consider one striking study conducted by three Bain partners. They found that at one large company, a single weekly meeting of the executive leadership team ate up 300,000 hours of employee time annually. The bulk of that time was accrued as the two or three layers below the executive team held their own meetings, where they would collect and present information “needed” by executives. And the 300,000 figure represents only the time spent in actual meetings—not the time employees spent preparing for them.

The point of Slovic’s study is not, of course, that the handicappers could have made accurate predictions with any old set of five data points. Instead, they were at their most consistent and accurate when using the small set of data they had carefully selected from the much longer list of eighty-eight.

As a leader, you could come up with a list of metrics to track your team’s overall performance that is just as long as that list of eighty-eight, and possibly much longer. Your task is to think up that long list and then, using your best judgment, narrow it down to the handful that represent true success for your team.

If you’re prone to falling into the trap of data collection for its own sake, just remember: More data does make you more confident. More data does not make your decisions more accurate.
