September 2015

The Objectivity Thing

Written by Janet D. Stemwedel

(or, why science is a team sport)

One of the qualities we expect from good science is objectivity. And, we're pretty sure that the scientific method (whatever that is) has something to do with delivering scientific knowledge that is objective (or more objective than it would be otherwise, at any rate).

I'm here to tell you that it's more complicated than that -- at least, if you're operating with the picture of the scientific method you were taught in middle school. What we'll see is that objectivity requires more than a method; it takes a team.

(I'll briefly note that my discussion of objectivity, subjectivity, and scientific knowledge building owes much to Helen E. Longino's book, Science as Social Knowledge.)

But let's start at the beginning. What do we mean by objectivity?

It may be useful to start with the contrast to objective: subjective. If I put forward the claim "Friday Night Lights is the best television series ever!" you may agree or disagree. However, you might also point out that this looks like the kind of claim where it seems wrong to assert there's a definite truth value (true or false). Why? Because it seems unlikely that there's a fact of the matter "in the world" about what is the best television series ever -- that is, a fact outside my head, or your head, or someone else's head. "Friday Night Lights is the best television series ever!" is a subjective claim. It isn't pointing to a fact in the world, but rather to a fact about my experience of the world. There is no reason to think your experience of the world will be the same as mine here; it's a matter of opinion what the best TV show is.

Of course, if we want to be more precise, we can note that facts about my (subjective) experience of the world are themselves facts in the world (since I'm in the world while I'm having the experience). However, these are not facts in the world that you could verify independently. This means if you want to know how the world seems to me, you'll have to take my word for it. Moreover, social scientists and opinion pollsters (among others) work very hard to nail down an objective picture of a population's subjective experience, trying to quantify opinions about TV shows or political candidates or new flavors of potato chips.

Generally speaking, though, we look to science to deliver something other than mere opinions. What we hope science will find for us is a set of facts about the world outside our heads. This brings us to one sense of the word objective: what the world is really like (as opposed to merely how it seems to me).

Another sense of objective heightens the contrast with the subjective: what anyone could discover to be so. We're looking for facts that other people could discover as well, and trying to make claims whose truth other people could verify independently. That discovery and verification is generally taken to be conducted by way of some sense organ or another, so we probably need to modify this sense of objective to "what anyone with reasonably well-functioning sense organs could discover to be so".

There's a connection between these two senses of "objective" that captures some of the appeal of science as a route to knowledge.

One of the big ideas behind science is that careful observation of our world can bring us to knowledge about that world. This may seem really obvious, but it wasn't always so. Prior to the Renaissance, recognized routes to knowledge were few and far between: what was in sacred texts, or revealed by the deity (to the select few to whom the deity was revealing truths), or what was part of the stock of practical knowledge passed on by guilds (but only to other members of these guilds). If you couldn't get your hands on the sacred texts (and read them yourself), or have a revelation, or become a part of a guild, you had to depend on others for your knowledge.

The recognition that anyone with a reasonably well-functioning set of sense organs and with the capacity to reason could discover truths about the world -- cutting out the knowledge middleman, as it were -- was a radical, democratizing move. (You can find a lovely historical discussion of this shift in an essay by Peter Machamer, "The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy," in the book Scientific Controversies: Philosophical and Historical Perspectives.)

But, in pointing your sense organs and your powers of reason at the world in order to know that world, there's still the problem of separating how things actually are from how things seem to you. You want to be able to tell which parts of your experience are merely your subjective impression of things and which parts of your experience reflect the structure of the world you are experiencing.

Can the scientific method help us with this?

Again, this depends on what you mean by the scientific method. Here's a fairly typical presentation of "the scientific method", found on the ScienceBuddies website:

The steps of the scientific method are to:

  1. Ask a Question
  2. Do Background Research
  3. Construct a Hypothesis
  4. Test Your Hypothesis by Doing an Experiment
  5. Analyze Your Data and Draw a Conclusion
  6. Communicate Your Results

Except for the very last bullet point (which suggests someone to whom you communicate your results), this list of steps makes it look like you could do science -- and build a new piece of knowledge -- all by yourself. You decide (as you're formulating your question) which piece of the world you want to understand better, come up with a hunch (your hypothesis), figure out a strategy for getting empirical evidence from the world that bears on that hypothesis (and, one hopes, that would help you discover whether that hypothesis is wrong), implement that strategy (with observations or experiments), and know more than you did before.

But, as useful as this set of steps may be, it's important to remember that the scientific method isn't an automatic procedure. The scientific method is not a knowledge-making box where you feed in data and collect reliable conclusions from the output bin. More to the point, it's not a procedure you can use all by yourself to make objective knowledge. The procedure is a good first step, but if you're building objective knowledge you need other people.

Here's the thing: we find out the difference between objective facts and subjective impressions of the world by actually sharing a world with other people whose subjective impressions about the world differ from our own. (Given the opacity of what's in our minds, there also needs to be some kind of communication between us and these people with whom we're sharing the world.) We discover that some things don't seem the same to all of us: Not everyone likes Friday Night Lights. Not everyone finds knock-knock jokes hilarious. Not everyone hates the flavor of asparagus. Not everyone finds a '66 Mustang beautiful.

But, if you had the world all to yourself, how would you be able to tell which parts of your experience of the world were objective and which were subjective? How, in other words, would you be able to distinguish the parts of your experience that were more reflective of actual features of the world you were experiencing from the parts of your experience that were more reflective of you as the experiencer?

It's not clear to me that you could.

If you had the world to yourself, maybe making this distinction just wouldn't matter. By definition, your experience would be universal. (Still, it might be helpful to be able to figure out whether some bits of your experience were more reliable in identifying real features of the world that mattered for your well-being -- judging "This fire feels great!" as you were sitting down in the blaze wouldn't elicit an opposing view, but it might present problems for the continued functioning of your body.)

Our confidence that our experiences are tracking features of the world outside our head depends on our interaction with other people. And let's be clear that we don't just need other people to help us identify squishy "value judgments" about what feels good, tastes bad, is the best album, etc. Those senses we use to get knowledge about the world can deceive us, and the sensory information they deliver can be influenced by expectations and by past experiences. However, if we can compare notes with someone else, pointing her sense organs at the same piece of the world at which we're pointing ours, we have a better chance of working out which parts of that experience are forced by features of the world bumping against human sense organs (i.e., the parts of our experiences of the world where there's agreement) and which are due to the squishy subjective stuff (i.e., the parts of our experiences of the world where there's a lot of disagreement).

Comparing notes with more people should get us closer to working out "what anyone could see (or smell, or taste, or hear, or feel)" in a particular domain of the world. Finding the common ground among people whose subjective experiences vary greatly doesn't guarantee that what we agree about gives us the true facts about how the world really is, but it surely gets us a lot closer than any of us could get all by ourselves.

It's worth noting that even if the textbook bulleted list version of the scientific method makes it look like you could go it alone, real scientific practice builds in the teamwork that makes the resulting knowledge more objective.

One place you can see this is in the ideal of reproducible experiments. If you're to be able to claim that a particular experimental set-up produces a particular observable outcome (where you'll probably also want to provide an explanation for why this is so), you first want to nail down that this set-up produces that outcome more than once. More than this, you'll want to establish that this set-up produces that outcome no matter who conducts the experiment, and whether she conducts the experiment in this lab or some other lab. Without some kind of check that the results are "robust" (i.e., that they can be reproduced following the same procedure), there's always the worry that the exciting results you're seeing might be the result of an equipment malfunction, or a mislabeled chemical reagent -- or even of your eyes deceiving you. But if others can follow the same procedures and produce the same results, the odds are better that the results are coming from the piece of the world you think they are.

Peer review, whether of the formal pre-publication sort or the less formal post-publication conversations scientific communities have, is another element of scientific practice that depends on teamwork. Here's how I described peer review in a post of yore:

Peer review describes the formal process through which manuscripts that have been submitted to journal editors are then sent to reviewers with relevant expertise for their evaluation. These reviewers then reply to the journal editors with their evaluation of the manuscript -- whether it should be accepted, resubmitted after revision, or rejected -- and their comments on particular aspects of the manuscript (this conclusion would be more solid if it were supported by this kind of analysis of the data, that data looks more equivocal than the authors seem to think it is, this part of the materials and methods is confusingly written, the introduction could be much more concise, etc., etc.). The editor passes on the feedback to the author, the author responds to that feedback (either by making changes in the manuscript or by presenting the editor with a persuasive argument that what a reviewer is asking for is off base or unreasonable), and eventually the parties end up with a version of the paper deemed good enough for publication (or the author gives up, or tries to get a more favorable hearing from another journal).

This flavor of peer review is very much focused on making sure that papers published in scientific journals meet a certain standard of quality or acceptability to the other scientists who will be reading those papers. There's a lot of room for disagreement about what sort of quality is produced here, about how conservative reviewers can be when faced with new ideas or approaches, about how often reviewer judgments can be overturned by the judgment of editors (and whether that is on balance a good thing or a bad thing). As we've discussed before, the quality control here does not typically include reviewers actually trying to replicate the experiments described in the manuscripts they are reviewing.

Still, there's something about peer review that a great many scientists think is important, at least when they want to be able to consult the literature in their discipline. If you want to see how your results fit with the results that others are reporting in similar lines of research, or if you're looking for promising instrumental or theoretical approaches to a tenacious scientific puzzle, it's good to have some reason to trust what's reported in the literature. Otherwise, you have to do all the verification yourself.

And this is where a sort of peer review becomes important to the essence of science...

The scientist, looking at the world and trying to figure out some bit of it, is engaged in theorizing and observing, in developing hunches and then testing those hunches. The scientist wants to end up with a clearer understanding of how that bit of the world is behaving, and of what could explain that behavior.

And ultimately, the scientist relies on others to get that clearer understanding.

To really trust our observations, they need to be observations that others could make as well. To really buy our own explanations for what we observe, we need to be ready to put those explanations out for the inspection of others who might find some flaw in them, some untested assumption that doesn't hold up to close scrutiny.

Science may be characterized by an attitude toward the world, an attitude that gets us asking particular kinds of questions, but the systematic approach to answering these questions requires the participation of other people working with the same basic assumptions about how we can engage with the world to understand it better. Those other people are peers, and their participation is a kind of review.

In both the ideal of reproducibility and the practice of peer review, we can see that the scientist's commitment to producing knowledge that is as objective as possible is closely tied to an awareness that we can be wrong and a desire not to be deceived -- even by ourselves.

Science is a team sport because we need other people in order to build something approaching objective knowledge.

However, teamwork is hard. In a follow-up post, I'll take up some of the challenges scientists face in playing as a team, and how this might bear on the knowledge building scientists are trying to accomplish.