Truth in the digital age: Blame people (not algorithms) for the filter bubble

What is the meaning of truth? Even a quick glance at the dictionary, which is supposed to provide at least some superficial degree of clarity, reveals how challenging the concept of truth is:

(a) that which is in accordance with facts or reality

but also…

(b) a fact or belief that is accepted as true

Without embarking on a big metaphysical detour, we can see what the problem is. In an ideal world, (a) and (b) would be identical, yet in the real world they hardly overlap. In other words, many of the things we believe are not true in the sense of (a). And an even greater number of truths, in the sense of (a), exist independently of our beliefs, without ever being accepted in the sense of (b). Mark Twain famously noted that the former was more problematic than the latter: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” Wrong convictions are more detrimental than ignored facts, not least because they are harder to rectify.

As early as 360 BC, Plato denounced subjective experience as something artificial and distinct from truth. Humans, he argued, are like prisoners in an underground cave: what they perceive as reality is merely a reflection of reality projected onto the cave’s wall. Only philosophers, Plato thought, are able to escape the cave and experience truth as it really is, rather than its distorted appearance. Fast forward 2,320 years or so, and we can see similarities between Plato’s cave and the image of a modern family gathered in front of a TV set during television’s golden age. Watch anything for long enough and you may come to equate it with reality, detaching yourself from any alternative versions of the world.

Long before we started worrying about the digital filter bubble, newspapers and television had been exposed for their ability to misinform and conspire against the truth. Ironically, the media’s capacity to deceive is captured best by the media itself: from Citizen Kane’s (1941) “I’ll provide the war” (a reference to William Randolph Hearst allegedly manufacturing a war in Cuba to boost newspaper sales), to the underrated Wag the Dog (1997), in which a spin doctor (Robert De Niro) secretly hires a famous Hollywood producer (Dustin Hoffman) to fabricate a war with Albania, distracting voters from the president’s affair with an intern and securing his re-election. Although these examples involve deliberate distortions of reality, they also highlight the thin line between reality and fiction. As William James observed: “Truth is something that happens to an idea.” Put differently, veracity is not a static property of ideas, so the status of truth is always open to question. Whether true or not, though, ideas have consequences, most notably the power to influence our behaviours and belief systems.

This is not to say that the digital revolution has not exacerbated the problem. In fact, it has made truth even more fragile by introducing two significant changes. The first is the volume of information, which has grown exponentially. As Herbert Simon noted, “a wealth of information creates a poverty of attention”. The only way our brains can cope with this sensory overload is by making attention selective. And despite the unconscious and automatic nature of selection, it is still driven by motivation. Both what we ignore and what we attend to are influenced by our intentions, which means that we mostly see what we want to see, and hear what we want to hear – not just metaphorically speaking.

The second new element introduced by the digital revolution is personalisation. Rather than broadcasting, the web can narrowcast, creating one-to-one communications with consumers. This is where data and algorithms join forces to help our brains select information, giving us more of what we allegedly want. Indeed, personalisation is to attention what morphine is to pain relief: effortless, soothing, and addictive. And like morphine, it provides an artificial sense of comfort, albeit an intellectual one. Personalisation conveys the illusion of familiarity and predictability. It reinforces our existing ideas, attitudes, and beliefs, enhancing our egos. It is as if we lived surrounded only by our best friends or loving family members: people whose views are identical to ours, and whose primary function is to validate our opinions (e.g., through likes, retweets, and empathic comments).

However, make no mistake: blaming the algorithms for the filter bubble is as naïve as blaming social media (or the selfie stick) for digital narcissism. It is quintessentially human to prefer a self-enhancing version of truth, one which aligns with our existing beliefs and therefore makes us feel smart, to one that creates cognitive dissonance. Given the choice, most of us would opt for a version of reality that confirms our belief system and core values rather than for one that questions them. The alternative is intellectually demanding, for it requires the constant revision of our models of the world, and existentially draining, for it creates a sense of uncertainty and moral relativism that would make most people extremely anxious and depressed. In short, people are happy in their own bubbles; they are not forced to live in them by technology. The best we can hope for is that they acknowledge the existence of other bubbles, while persuading themselves that their own bubble contains a higher proportion of truth.
