Saturday, August 4, 2018

Writing Workshop: Define Your Research Question




[Note: Most of the material from these writing workshop blog posts, plus a lot more never-blogged material, is now available in my book, The Writing Workshop: Write More, Write Better, Be Happier in Academia.]

One of the central tasks in any research program is to define the research question. This happens gradually as you review the literature. To continue with the hamster example from the last post, let’s say that you started your literature review with a vague interest in rodents and emotions. But as you read, you discovered a particular interest in hamsters, and also in the question of how animals’ environment affects their moods. So you came up with the idea of studying how body temperature affects mood in hamsters.

Then you started thinking about methods: What’s the best way to manipulate a hamster’s body temperature, and what’s the best way to measure its mood? Continuing to review the literature, you decide to shave the hamsters and measure their moods with mood rings.

Or perhaps the methods came first: Your advisor happened to have a bunch of rodent cages and a bag of tiny mood rings lying around, and you were looking for a way to use them. So you started reading the literature on rodents and mood rings, and you decided to study the effect of body temperature on hamster anger.

Or maybe your project was conceived by your advisor or senior members of your lab, and given to you to work on. This is a common way for a new researcher to get a project. But regardless of who came up with your research question, it must meet three criteria in order to be viable:

Criterion 1. The question must be interesting. (Don’t worry, I’ll explain what that means in a second.)

Criterion 2. The question must not already be answered (unless the study is an intentional replication).

Criterion 3. The question must be answerable with the methods available to you.

A caveat to Criterion 3 is that some projects (especially in the sciences) are actually about developing new methods. In that case, you could restate the criteria as (1) The new method must help to answer some interesting question; (2) The new method must represent some kind of improvement over existing methods; (3) The new method must be feasible to develop given the time and resources available to you.

For most early-career researchers, Criterion 3 is the easiest to meet. Ideas that are obviously impractical tend to be weeded out early on. Criterion 2 is also relatively easy to meet, provided that you do your homework by reviewing the literature and get feedback on your project ideas from someone who has expertise in the area. Unfortunately, researchers don’t always do these things. Earlier this year, I reviewed a study (a manuscript submitted to a journal) where the authors had probably spent about two years collecting data to answer a question that (unbeknownst to them) had already been thoroughly answered. This was not an intentional replication-- the authors seemed completely unaware of the dozen or so previous studies, starting in the 1970s, which had been done in a variety of labs to address this same question.

Again, to be clear: This was not a replication. Replications are good and important and necessary. This was just authors doing a project without first reviewing the literature. (Or reviewing it very poorly.) It’s a shame that the authors didn’t propose this study as a registered report. At least then, they could have gotten feedback from us (the reviewers) earlier, and they could have used that feedback to improve their research design rather than wasting months or years collecting unpublishable data. Happily, situations like that are rare. If you’ve done a thorough literature review as described in the previous post and gotten feedback from experts, you should be able to fulfill Criterion 2 just fine. The real challenge for new researchers is Criterion 1.

When you fail to meet Criterion 1, you often hear it in the form of the criticism, So what? People say, “This research doesn’t answer the ‘so what’ question.” Or reviewers may complain that the work is ‘atheoretical’ or ‘lacks impact.’ Often the word ‘interesting’ comes up. Reviewers say, “Why is this interesting?” or “The author hasn’t told me why I should be interested in this.”

When I was a new researcher, I found this criticism frustrating in the extreme. What do you mean, ‘so what’? I would fume. What do you mean, ‘why is it interesting?’ How should I know what you find interesting? If you’re not interested, don’t read it! I don’t think your work is interesting either!

I see now that I misunderstood the criticism. Criterion 1 isn’t actually about interest, in the subjective sense that I might be interested in knitting, while you are interested in military history or cooking or video game design. Criterion 1 is actually about how your narrow, specific research question connects to some bigger, broader theoretical question that people in your field generally care about.

For example, I’ve done a lot of studies of preschoolers’ counting knowledge. If I write a grant proposal in which I describe a study comparing children’s performance on two counting tasks, a reviewer might say, “So what? Why is this interesting?”

The reviewer doesn’t mean that early counting itself isn’t an interesting topic. In fact, the reviewer probably does research in the same area. (That’s why they were invited to review the proposal.) What they mean by ‘so what’ is that I (the author) haven’t explained how this study I’m proposing will help to answer some bigger theoretical question that developmental psychologists care about.

Every research question is actually a set of related questions, ranging from the broadly theoretical to the narrowly operational. Inexperienced researchers too often focus exclusively on the specific, methodological questions-- talking about the work at the lab or subfield levels in the figure below-- and fail to describe their work at the disciplinary or public levels, which interest more people.

[Figure: the levels of a research question, from public to disciplinary to subfield to lab]
Anyone who proposes original research for funding or publication must define their research questions at all of these levels. The most common reason that grants don’t get funded is that the reviewers look at them and say, Meh, who cares? That’s a sign that the authors didn’t pay enough attention to the higher-level questions.

This lesson was clearly illustrated to me a couple of years ago, when I served as a reviewer on a panel for a federal funding agency. We reviewed a few dozen grant proposals, two of which described essentially the same work. One of these came from a well-known senior researcher whom I will call Prof. Hotshot. The other came from Prof. Hotshot’s own former PhD student, who had recently been hired into a tenure-track job at another university. I’ll call this person Prof. Newbie.

Prof. Hotshot’s proposal described the work in very general terms--at the public and disciplinary levels--while including just enough methodological detail (described at the subfield level) to reassure reviewers that the work would be done right. Prof. Newbie’s proposal described the work in too much methodological detail, focusing on the subfield and even lab levels.

The work was close enough to my own that I could understand Prof. Newbie’s proposal, and I could see that both authors were talking about the same set of studies. I gave Prof. Newbie’s grant a higher score just because I like to root for the underdog, and Prof. Hotshot had plenty of grants already. But everyone else on the panel rated Prof. Hotshot’s proposal much higher-- they said it was clear and compelling, whereas Prof. Newbie’s proposal was boring and overly technical. In the end, Prof. Hotshot’s proposal was funded and Prof. Newbie’s was not. (True story.) The moral of this story is that you must be able to describe your research in words that non-specialists can understand, or they’ll shrug their shoulders and say, Meh, so what?

The Elevator Pitch


It’s also good to have an elevator pitch. This is a brief description of your research at the public or disciplinary level. The term ‘elevator pitch’ comes from a hypothetical scenario wherein you are at a conference, and you find yourself riding up in the elevator with Dr. Famous-In-Your-Field. You introduce yourself to Dr. Famous, who politely asks you what you work on. Knowing that you only have a couple of minutes before the elevator ride is over, what do you say?

A good elevator pitch has two parts. First is the headline: A fairly concrete, one-sentence summary of the work you do. This is your first answer to Dr. Famous’s polite question about your work. Then you stop talking, and let Dr. Famous indicate whether they want to hear more or not. If Dr. Famous encourages you to continue, then go ahead and give the elaboration, which should take no more than one minute. Again, after you say your piece, be quiet. Let Dr. Famous ask you questions to guide the rest of the conversation.

Define Your Research Question and Elevator Pitch


The activity for this post is to define your own research question at the public, disciplinary, subfield, and lab levels, and to compose and practice your elevator pitch. To get you started, here are some examples from Chemistry, Developmental Biology, English, Neuroscience, Philosophy, Political Science, and Psychology.

When you are ready to draft your own descriptions and elevator pitch, here is a worksheet to help you. I suggest you spend 10 minutes or so drafting, and then spend the rest of the meeting time sharing and revising your descriptions and elevator pitch based on feedback from your writing buddies. This is a case where outside feedback is indispensable. If you try to use your own intuition to judge how your descriptions sound to people outside your research area, you will only be talking to yourself.
