
Our biases get in the way of understanding human behaviour

Brain Scanner is Simon Oxenham's weekly column that sifts the pseudoscience from the neuroscience

By Simon Oxenham

17 August 2016

We like to think people with different political views are psychologically different

Simon Dawson/Bloomberg via Getty Images

Can we ever study ourselves without our expectations affecting our conclusions? A damning report suggests that bias on the part of researchers has made vast numbers of studies in social psychology unreliable.

Social psychology is the study of how human behaviour is affected by other people, and it seems to be particularly vulnerable to unreliable findings and conflicting explanations. Part of the problem is a failure to acknowledge failed replications. Findings about how stereotypes affect a person’s attainment, for example, continue to be cited in new studies even after other teams have failed to replicate the original results.

Publication bias is partly to blame, as many journals are more likely to publish interesting findings than careful studies showing a previous result may not be true. But researchers have their own expectations to blame too, suggests an analysis by Lee Jussim at Rutgers University in New Brunswick, New Jersey, and his team.

Telling a story

The team concluded this after examining a number of prominent papers that are still widely cited even though successive experiments have failed to replicate their findings. Taking a closer look, Jussim’s team found that in many cases the original researchers had come to a conclusion that fitted their data, but had not ruled out alternative conclusions that could have explained the data equally well.

The conclusions that researchers favour seem to be the ones that fit a compelling narrative, telling a neat and interesting story about ourselves. Subsequent experiments showing that these narratives may be wrong are less likely to be cited – researchers prefer papers that support the story.

The problem seems to be particularly pronounced when it comes to politically motivated research. In 2003, a review of previous studies famously found that people who vote conservatively have more rigid and dogmatic personalities.

The study made waves and is often still referred to by the media, even though a larger review in 2010 found these psychological differences between voters of different political leanings are minimal.

The original study has been cited by other research papers 1093 times since 2011, but the more rigorous, less compelling 2010 review has been cited only 60 times over the same period. We like to think that there are huge, sweeping differences between people of different political persuasions – a more nuanced picture doesn’t fit such an engaging narrative.

Blinding bias

The effects of such bias can be even worse if researchers fail to take certain important measures when designing their experiments. We’ve known for a long time that if a researcher isn’t blind to an experiment’s conditions – for example, knowing whether they are giving someone a drug or a placebo – this can influence the study, making it more likely to produce the outcome the researcher is hoping for.

Blinding researchers to their experimental conditions is particularly important in social psychology studies, because researchers usually interact directly with the participants. Failure to blind experimenters has led to a distorted picture of what happens when people sniff the hormone oxytocin, for example.

Jussim has found that researchers in this field routinely fail to properly blind themselves to experimental conditions. Analysing 63 experiments, his team found that only 15 declared that the researchers had been blinded to their participants’ conditions.

Racial prejudice?

Even when data is collected correctly, and the right statistical analyses appear to have been applied, researchers can still draw conclusions that are completely wrong. Put simply, we often fail to see answers that we aren’t looking for.

An example of this is a study in 2001 that found that a commonly used psychological test could show whether people are racist. The test involved showing volunteers words in rapid succession and asking them to quickly classify each word as desirable or undesirable. If a volunteer associated the word “black” with negative-sounding words, this was taken as a sign of prejudice and contributed to an overall implicit bias score based on various factors in the test.

The researchers found that those who scored highly for implicit bias were also more racist in person – as judged by the researchers themselves, based on behaviour such as how friendly participants were towards black members of the research team. But an independent reanalysis eight years later found that a high implicit bias score more often meant the opposite – that a participant was showing a pro-black bias. The researchers had not spotted this because they had been misled by their own expectations.

This illustrates how alternative interpretations of data can overturn initial conclusions that seemed clear-cut at first. But when our expectations are overturned, it is essential that we accept the new findings and make them public, rather than sweeping them under the carpet because they get in the way of a nice story.
