
Big Data Debates: Machines Vs. Humans


Are machines better than humans at making decisions? Writing on the Harvard Business Review Blog, Andrew McAfee has formulated yet another M-Law for the big data age: “As the amount of data goes up, the importance of human judgment should go down.”

McAfee reviews years of studies comparing algorithms with expert judgment and concludes that we should not rely on experts anymore: “The practical conclusion is that we should turn many of our decisions, predictions, diagnoses, and judgments—both the trivial and the consequential—over to the algorithms. There’s just no controversy any more about whether doing so will give us better results.” (Italics mine; explanation below).

But Tom Davenport argues, also on the HBR Blog, that “intuition has an important role to play… Neither an all-intuition nor an all-analytics approach will get you to the promised land.” Jeffrey Heer (University of Washington and Trifacta) is a bit more assertive: “Human judgment is at the center of successful data analysis. This statement might initially seem at odds with the current Big Data frenzy and its focus on data management and machine learning methods. But while these tools provide immense value, it is important to remember that they are just that: tools…  To ‘facilitate human involvement’ across ‘all stages of data analysis’ [John Tukey’s words] is a grand challenge for our age.”

Kate Crawford warned us last year: “Numbers can’t speak for themselves, and data sets—no matter their scale—are still objects of human design… Biases and blind spots exist in big data as much as they do in individual perceptions and experiences.” Laszlo Bock, senior vice president of people operations at Google, has concluded, based on data from tens of thousands of job interviews and subsequent employee performance, that “Big Data… has tremendous potential to uncover the 10 universal things we should all be doing. But there are also things that are specifically true only about your organization, and the people you have and the unique situation you’re in at that point in time. I think this will be a constraint to how big the data can get because it will always require an element of human insight.”

Providing a great example of human fallibility (we see only what we want to see), McAfee links to the very interview from which this quote is taken to prove his point (in his second post on the subject) that an algorithmic approach to talent management “works so much better.” Similarly, he links to the relevant Wikipedia article for the first expert he relies on to debunk experts, psychologist Paul Meehl. In his 1954 work, Clinical vs. Statistical Prediction, which compared the clinical or case-study method prevalent at the time to the statistical or “mechanical” method, Meehl argued—the Wikipedia article tells us—that “the mechanical tool will make a prediction that is 100% reliable. That is, it will make exactly the same prediction for exactly the same data every time. Clinical prediction, on the other hand, does not guarantee this.”

So the “better results” in the McAfee quote above turn out to be the reliability of precise and accurate repetition; in other words, automation and the many improvements it has brought to our lives. If we narrow “human judgment” down to decisions such as when to shift gears in a car, then McAfee is absolutely right: in most situations, we will indeed get “better results” if we trust the machine.

For anything beyond shifting gears, however, why remove humans from the decision-making process? As a matter of fact, you can’t and you shouldn’t (as Meehl was eager to repeat for more than 40 years). Humans design the algorithms in the first place and embed in them all their biases and, in many cases, wrong assumptions. Humans are also the ones needed to change the algorithms when they come to understand (or think they understand) the algorithms’ biases and the answer to the question “why,” a question that machines do not tend to ask.

In the 1940s and early 1950s, both the statistical classification of mental patients and the alternative clinical method resulted in lobotomies for many of those patients. That was not only medically wrong, but also morally wrong. Humans, as in the case of lobotomy, may eventually conclude that the decisions based on their algorithms were wrong. Can machines do that?

Finally, the trouble with decisions (of the non-gear-shifting kind) is that it is often difficult to determine whether we got “better results,” because the decision itself changes everything. The decisions by Google, the most data- and algorithm-driven company around, to first buy and then sell Motorola cannot be evaluated, because these decisions changed the market for cell phones (twice), and possibly other markets as well. What would have happened if Google had not made these moves? We simply don’t know.

Google’s shareholders should just trust Larry Page’s gut instincts. Yes, gut instincts, the ones that make or break CEOs and their companies. We may or may not get excited about Google’s grand plan to make our brains irrelevant, but could a machine have made the decision to acquire DeepMind? Does anyone believe that “as the amount of data goes up,” we should expect Larry Page to be replaced by a machine?