Science, as a process of discovering knowledge, is approximately 450 years old (Wootton, 2015) and is one of the best ways we have of understanding how the world works. It also allows scientists to challenge widely held assumptions that might be incorrect and, for those especially interested in its application, it can be used to improve the human condition (Trafimow & Osman, 2022).
Attempts to Limit Subjectivity
One problem of science is that it is human (Skinner, 1965). As much as it puts systems in place to rein in the aspects that make us human, such as our emotions, desires, and values, these are still present to varying degrees. They influence choices as to what research question to address, how to address it, how to analyze and report the findings, and what conclusions to draw from them. Sometimes these human elements are critical to the dogged pursuit of discoveries that the world has benefitted from. But sometimes they get in the way. So, as a community, scientists try to adhere to conventions that limit the impact of subjectivity. For instance, they use peer review to scrutinize the methods used, whether the analyses conducted follow accepted conventions, and whether the conclusions are valid.
These processes also don’t always work. The scientific community knows that problems arise, which come to light when we find that replications of standard effects fail on a monumental scale. For example, the replication crises in psychology (Shrout & Rodgers, 2018), economics (Page, Noussair, & Slonim, 2021), and the medical sciences (Coiera et al., 2018) have brought this to the fore. But, that said, science is a self-correcting system, and while the correction may be slow, communities of researchers generally try to improve on past failings. Without honesty about these problems, science wouldn’t be in a position to correct them, and that honesty is a prized value.
Scientists Pursuing a Particular Agenda
The focus here is to highlight another issue that has always existed, but one that is increasing in magnitude, and that is scientists as advocates, those pursuing a particular agenda (e.g., Eagly, 2016; Pielke, 2004). Why might advocacy be becoming more common? The speculation here is that to be relevant and recognized by your university for what you do, you need to get funded, and funding is often contingent on doing research of societal relevance. None of this is a problem in and of itself, but it becomes one for the following reason: when researchers mix their own political views, values, and emotional investment into the way they conduct research, and then use the research to advocate strongly for one position over another, they aren’t doing science.
As messy and noisy as science is, it tries to be objective, with the ideal being that the priority is to find out how things are, or to rule out what they aren’t. Whatever the implications of a discovery, it is for others in society to layer it with value judgments. Of course, this too is naïve, since no scientific research is free of value judgments, for the reasons mentioned earlier (Kincaid, Dupré, & Wylie, 2007; Trafimow & Osman, 2022). But some protective mechanisms are in place to at least reveal how much values dilute objectivity. Recognizing the role of values isn’t a major problem, as long as scientists explicitly state their intentions as advocates because of their personal stake in promoting a politically motivated claim.
The deeper problem arises when scientists acting as advocates use science as a shield to hide behind so that they can comfortably claim that what they are saying is objective. Maybe they do this inadvertently, or maybe they do it knowingly; either way, it is unethical. Even this wouldn’t be as significant a problem if it weren’t for one other factor: the sleight of hand used to stifle challenges to the claims made by scientists acting as advocates.
The Objectivity Illusion
The sleight of hand is the objectivity illusion (Robinson et al., 1995), and it goes something like this:
- I believe I am objective, so when I make a claim and refer to evidence in support of it, the claim and the evidence are, in turn, objective.
- If someone disagrees with my claim, then as long as they are open-minded and rational, I can persuade them to accept my claim.
- If someone still disagrees, then they are unreasonable and possibly irrational, because their reasoning is flawed by the errors and biases in their thinking (Ross, 2018).
There is no easy route to talking on a level playing field with anyone who is under this illusion, because what we have is an impasse. The problem is that science depends on challenge and critique in order to self-correct and improve, which can’t happen if disagreement is treated as a flaw.
The points I’m making here aren’t new (e.g., Armstrong, 1979; Richardson & Polyakova, 2012; Treves, 2019), but I hope they are still worth making again, because the problems highlighted here haven’t gone away. That also means that whatever processes are in place to address them aren’t working, and may instead be encouraging them to happen more.
Finally, there is a message for all of us: No one is immune from the objectivity illusion, and, for scientists, this is especially harmful.