Science, as a process of discovering knowledge, is approximately 450 years old (Wootton, 2015) and is one of the best ways we can understand how the world works. It also allows scientists to challenge widely held assumptions that might be incorrect and, for those especially interested in its application, improve the human condition (Trafimow & Osman, 2022).
Attempts to Limit Subjectivity
One problem with science is that it is human (Skinner, 1965). As much as it puts systems in place to reduce all the aspects that make us human, such as emotions, desires, and values, they are still present to varying degrees. They influence choices as to what research question to address, how to address it, how to analyze and report the findings, and what conclusions are drawn. Sometimes these human elements are critical to the dogged pursuit of discoveries that have benefited the world. But sometimes they get in the way. So, as a community, scientists try to adhere to conventions that limit the impact of subjectivity. For instance, they use peer review to scrutinize the methods used and the analyses conducted, whether these are in line with accepted conventions, and whether the conclusions are valid.
These processes don’t always work. The scientific community knows that problems arise, which come to light when we find that replication of standard effects fails on a monumental scale. The replication crises in psychology (Shrout & Rodgers, 2018), economics (Page, Noussair, & Slonim, 2021), and the medical sciences (Coiera et al., 2018) have brought this to the fore. But, that said, science is a self-correcting system, and while it might be slow to do this, communities of researchers generally try to improve on past failings. Without being honest about its problems, science wouldn’t be in a position to try to correct them, and this honesty is a prized value.
Scientists Pursuing a Particular Agenda
The focus here is to highlight another issue that has always existed, but is increasing in magnitude, and that is scientists as advocates pursuing a particular agenda (e.g., Eagly, 2016; Pielke, 2004). Why might more of this be happening? The speculation here is that if you want to be relevant and recognized by your university for what you do, you need to get funded, and that funding is often contingent on doing research that is of societal relevance. None of this in and of itself is a problem, but it starts to become one when researchers mix their own political views, values, and emotional investment into the way they conduct research, and then use this to advocate strongly for one position over another. At that point, they aren’t doing science.
As messy and noisy as it is, science tries to be objective, with the ideal being that the priority is to find out how things are, or exclude what they aren’t. Whatever the implications of that discovery are, it is for others in society to layer it with value judgments. Of course, this is also naïve, since no scientific research is absent of value judgments for the reasons mentioned earlier (Kincaid, Dupré, & Wylie, 2007; Trafimow & Osman, 2022). But some protection mechanisms are in place to at least show just how much values dilute objectivity. Recognizing the role of values isn’t a major problem, as long as scientists explicitly state their intentions as advocates because of their personal stake in promoting a politically motivated claim.
The deeper problem lies when scientists acting as advocates use science as a shield to hide behind so that they can comfortably say that the claims they make are objective. Maybe they do this inadvertently, or maybe knowingly; in either case, it is unethical. But even this wouldn’t be as significant a problem if it weren’t for one other factor: the sleight of hand used to stifle challenges to claims made by scientists acting as advocates.
The Objectivity Illusion
The sleight of hand is the objectivity illusion (Robinson et al., 1995), and it goes something like this:
- I believe I am objective, and, so, when I make a claim and refer to evidence in support of it, the claim and the evidence, in turn, are objective.
- If someone disagrees with my claim, then as long as they are open-minded and rational, I can persuade them to accept my claim.
- If someone still disagrees, then they are unreasonable and possibly irrational because their reasoning is flawed by the errors and biases in their thinking (Ross, 2018).
There is no easy route to talking on a level playing field with anyone who is working under this illusion. What we have is an impasse. The problem is that science depends on challenge and critique to self-correct and improve, and that can’t happen if disagreement is treated as a flaw.
The points I’m making here aren’t new (e.g., Armstrong, 1979; Richardson & Polyakova, 2012; Treves, 2019), but I hope that they are worth making again. The problems highlighted here haven’t gone away, which means that whatever processes are in place to address them aren’t working and may instead be encouraging them to happen more.
There is a message for all of us: No one is immune from the objectivity illusion, and, for scientists, it is especially harmful.