Tuesday, August 28, 2018

Writing Workshop: Articles

Featuring IMRaD, the friendly hourglass!

[Note: Most of the material from these writing workshop blog posts, plus a lot more never-blogged material, is now available in my book, The Writing Workshop: Write More, Write Better, Be Happier in Academia.]

Across the sciences, research is shared in the form of empirical journal articles. An article has four sections: Introduction, Method, Results and Discussion, sometimes abbreviated IMRaD. Personally, I think of Imrad as the name of a friendly animated hourglass. The top half of the hourglass is about the question you asked in the study; the bottom half is about the answer you found.

Traditionally, researchers didn’t start writing an article until after they had finished the whole study. The problem was that if the findings weren’t exciting enough, journals didn’t want to publish them. (You can read about the weird and exploitative economics of subscription-based journals here, and listen to me rant about them here.) In recent years, the psychology research community has addressed the problem of publication bias (i.e., the problem where journals don’t want to publish boring findings) with a new type of article, the registered report.

For a registered report, you write the introduction and method sections before you collect any data. You send these sections (called a ‘Stage 1 manuscript’) to a journal, where they get reviewed. Once the reviewers and editor are happy with your plan, you get an in-principle acceptance (IPA, sometimes called a ‘pre-acceptance’) from the journal. Then you go ahead and do the study according to the plan you laid out. As long as you follow the agreed-upon plan, your article is guaranteed acceptance in the journal. Writing a registered report is sort of like writing a PhD dissertation: You make a plan (the introduction and method sections), submit your plan to a committee (the journal editor and reviewers) for approval, and then do the work.

Even if you aren't writing an actual registered report to send to a journal, you can still preregister your study. You might do this if you like the idea and the process of preregistration, but want to send your work to a journal that doesn't offer registered reports. (For a list of journals that do offer them, click on the 'Participating Journals' tab, here.) Writing a preregistration is similar to writing a registered report, except that no one reviews it. Also, it is likely that no one will hold you accountable if your published study deviates from your original plan, because not many reviewers or readers take the time to go back and read preregistrations. Still, preregistration has the potential to make your thinking clearer and your decision-making process more transparent, so I applaud you for doing it.

My favorite place for preregistrations is the Open Science Framework (OSF), but there are lots of other places to register studies, including ClinicalTrials.gov and other government-run clinical registries; SocialScienceRegistry.org; Egap.org; RIDIE; ResearchRegistry.com; and others. Most of what I say in this post about registered reports is also true for unreviewed preregistrations.

But regardless of whether you write the introduction and method section before or after you collect the data, you will definitely have to write them. You will also have to write a results section, a discussion, a title, and an abstract. This IMRaD format is often drawn in the shape of an hourglass because articles start out broad and general, get narrower and more specific in the middle, and then return to broad and general discussion at the end.

If you read the earlier post on defining your research question, you may remember that the diagram of different levels (public level, disciplinary level, etc.) had the shape of a funnel: Wide at the top and narrow at the bottom. That funnel fills most of IMRaD’s top half. Its purpose is to show how the broad (public-level or disciplinary-level) question at the top is connected (through the brief literature review) to the narrower, subfield-level question that appears at the end of the introduction. The lab-level description goes in the method section. (If you didn’t read that post yet, go back and read it now. I’m going to be talking about those levels throughout this post.)

Because of this handy structure, authors know what information to include and where to put it, and readers know where to look for the information they want. Remember that most people don’t read scientific papers like novels, from beginning to end. Instead, they scan the article to find the information they need. Your job as an author is to help them by putting information where they expect to find it. Fig. 1 (below) summarizes the anatomy of IMRaD. Let’s consider each part in more detail.

Title, abstract and keywords

This is the stuff that helps people find the paper in online searches, and gives them initial information about the paper so that they can decide whether they want to read it.

Title

I like titles that state either the central finding of a study (e.g., Saturated fat consumption is not linked to heart disease) or the study’s contribution (e.g., Ab initio calculations for the E2 elimination reaction mechanism in ricinoleic acid). Readers searching the literature are basically shopping for information, just like they would shop for a backpack. They enter some search terms and then scroll through the search results, clicking on things that look promising. For backpacks, they scroll through thumbnail photos; for papers, they scroll through titles.

So when you choose a title, try to make it as clear and informative as possible. Also try to pitch your title at the most general level you can without misrepresenting its content. How easy this is to do will depend on how technical your work is, but hopefully you can come up with a title that uses words people in your discipline know if not words the public knows. The title is the face of your paper to the world, and it should be as accessible as you can make it.

Abstract

The abstract is a brief (usually under 250 words) summary of the whole paper. The abstract is one of the most important parts of a paper— more people will read it than anything else except the title, so it’s worth working hard on. I don’t think I’ve ever written a paper where I revised the abstract fewer than ten times. The structure of the abstract mimics the hourglass structure of the whole paper, except that there’s just a sentence or two for each section: A sentence or two for the broad, public- or disciplinary-level question; then a sentence or two each for the narrow (subfield-level) question, the method, the results and the discussion. Here’s a useful annotated example of an abstract.

Keywords

Choosing keywords requires a balance. If you choose only overly general search terms (e.g., ‘fat’ and ‘heart attacks’) your paper will be buried in a pile of ten thousand search results. If the terms you choose are overly specific (‘electrocardiographically defined clinical endpoints’) they won’t help anyone find the paper, because no one types those terms into a Google Scholar search.

I usually choose a combination of keywords at the public level (e.g., ‘children,’ ‘preschool’, ‘numbers,’ ‘counting’) and disciplinary level (‘cognition,’ ‘development’, ‘cardinality,’ ‘magnitudes’). If there’s something new, unusual, or interesting about the methods used in the paper, I might add a keyword for that (e.g., ‘fNIRS’ or ‘latent mixture model’). 

Introduction

This is the first section of the actual paper. Here, you introduce the topic of the paper and the questions asked in the study, both broad and narrow. This is also where you review the literature to show how you got from the broad, theoretical, disciplinary-level question to your narrow, specific operational (subfield-level) question. 

For a traditional article, you write the introduction after you already know what the results are. For a registered report, you write the introduction before you’ve started collecting data, so you don’t know yet what the results will be. Once you get an in-principle acceptance for your registered report, you don’t change the introduction anymore— the one that the reviewers approved is the one that ends up in the final paper, no matter how the results come out. 

Opening

The first few sentences of your introduction should raise the general topic or problem of your study. Pitch your opening at the public level if at all possible. Even if your work is very technical, strive to make the first and last sentence or two understandable to a broad audience. If your work is less technical (e.g., if you do behavioral experiments in psychology, like I do), you should definitely be able to write an opening that non-scientists can follow.

Anecdotes can make effective openings, as can hypothetical situations (e.g., ‘Imagine that you are hiking in the Alps . . .’); so can interesting facts or references to current events (e.g., ‘The world was amazed in 2016 to see video of Antarctic penguins playing ice hockey with fish skulls. But how did they learn the rules?’). The point is just to invite readers into the topic in a friendly and accessible way.

In psychology, people sometimes give the advice, ‘Start by talking about real people, not about psychologists.’ What that means is that you should have an opening and introduce your big question before jumping into your literature review. An example of an opening (i.e., talking about real people) is, ‘Imagine that you are in line at a café. You have to decide whether you will order something familiar, or try something new. This is known as an explore/exploit problem.’ An example of jumping into the lit review (i.e., talking about psychologists) is, ‘Pearl and Nunn (2013) found that 85% of explore/exploit decisions could be described by their Penguin Labyrinth model . . .’ Note that this rule is violated very often in published articles, many of which have no opening and no statement of a big problem, jumping directly into the literature review. It’s one of the many ways that most published articles are badly written.

Big question

After introducing the general topic of your study in your opening, your next task is to raise the broad, general, theoretical question behind the study. This is a public-level or disciplinary-level question that readers of the paper are likely to understand and care about. One of the most common reasons that scientific papers are boring (even to people who care about the work in principle) is that the authors fail to identify the big question motivating the study. This makes the study seem trivial.

Literature review

The literature review (often shortened to ‘lit review’) creates the logical path between the big question and the little question. For any big question, there are any number of ways it could be investigated. Say your big question is, “How is penguin cognition like or unlike human cognition?” This is an interesting question, but unanswerable in that form. You have to find something specific, something you can measure.

That’s where the literature review comes in. It explains the history of studies in the area, the methods that exist, and the interesting ideas to be tested. It explains how you arrived at the specific question, “When Emperor Penguins (Aptenodytes forsteri) play chess, do they favor the Sicilian defense or the queen’s gambit?” 

Note that the brief lit review that goes in the introduction to a paper is a very different thing than an extended, article-length lit review. A common mistake by early-career scientists is to try to pile everything they know about a topic into the lit review of a paper. This reflects a misunderstanding of how the two kinds of lit review are different.

The purpose of an article-length lit review is to summarize a whole body of literature. If you’re a student, the purpose may be to prove to a faculty committee that you’ve learned enough about that literature to get on with your research project. The purpose of the brief literature review included in the introduction to an IMRaD article is just to provide enough context to bring readers up to speed on why you operationalized your big question in a particular way. Everything in a brief literature review must be ‘need to know’— that is, you only mention it if the reader needs to know it in order to understand this particular study.

Resist the temptation to put everything you know about a topic, everything you've ever read that could possibly be relevant, into the lit review of your article. Don't do that. Think about what the reader needs to know.

Little question

At the end of the literature review comes the narrow, specific, operational question that you tested in this study. It should be at approximately the subfield level. (Lab-level details about exactly what you measured and how go in the method section.)

The fork

The fork (picture a fork in a road, not a dinner fork) is a few sentences at the end of the introduction identifying at least two plausible outcomes for the study, and what each outcome would mean for the questions of interest.

For example, let’s say that the prevailing theory of penguin cognition is that penguins are quiet, pro-social creatures who go out of their way to avoid conflict. But another theory predicts that they will become aggressive when playing chess, because each penguin sees the black and white pieces as little enchanted penguins who will only be freed from their spell when the penguin playing on their behalf wins the chess game.

You have established through your literature review that chess openings are a reasonable measure of emotional state in penguins, with use of the Sicilian defense indicating that a penguin is feeling hostile and aggressive, and use of the queen’s gambit indicating that the penguin prefers a quieter, more positional game.

The fork is the part where you say, ‘If more penguins choose to play the Sicilian defense, this will support the claim by Grossman & Liljeholm (1992) that penguins become aggressive in defense of their tiny, enchanted brethren (the chess pieces). If more penguins choose the queen’s gambit, this will be consistent with the established theory of Chernyak, Dosher, et al. (1941) that penguins prefer to avoid social conflict across a range of situations.’ In other words, you show that there are (at least) two different ways your findings could plausibly come out, and they would have different implications for our understanding of penguin cognition (or whatever it is you are studying).

The case of the missing fork. Just as a lot of published articles lack openings, a lot of them also lack forks. But a fork is even more important than an opening, because it shows that the authors have thought through the design and aren’t wasting time studying a question whose answer is obvious. Too often, I’ve reviewed papers where the authors put time and resources into producing a finding that, in retrospect, seems inevitable: Tennis racket ownership is related to playing tennis; anxious parents have anxious children; kids who read a lot get better at reading; and so on.

I understand how this happens. Researchers, especially inexperienced ones, struggle to come up with study ideas that are doable and yet haven’t already been done. Their thinking goes something like, ‘X and Y are both things we can measure. Maybe they’re related. Has anyone shown that they’re related? No? Great! That’s our study!’ They don’t stop to ask about the fork: Are there really two different plausible outcomes here? Realistically, is it possible that X and Y are not related? If we show that they’re related, what will we have learned that we didn’t know before? Identifying your study’s fork requires you to think through these questions. 

One reason I like reviewing registered reports is that I can catch fork-less studies and encourage the authors to revise their designs before they waste precious time collecting data that don’t teach us much. But you can also use the notion of a fork to make your own work better. In general, when a study design seems kind of boring, not exactly bad but just sort of meh . . . ask yourself whether it’s another case of a missing fork.

Method

This is the section where you describe what you measured in the study, and how you measured it. If you analyzed data in a way that was complicated or innovative or otherwise special, you describe that here, too. Your method section should include enough detail for a reader of your article to evaluate whether you did the study correctly and whether your findings can be trusted. The exact format of a method section differs from field to field, as does the information that will be required in it. In psychology everyone uses the APA (American Psychological Association) format; other fields have other formats. 

Sometimes people say that your method section should include enough detail to allow a reader to replicate the study, but I disagree—method sections are not long enough for that. In order for someone to replicate your study, they really need copies of your stimulus materials, video of your research team doing the procedure, your analysis code, and so on. That stuff can be made available in the supplemental materials, and links to it can appear in the method section. 

If you are writing a registered report, your method section must be extremely detailed and specific, because a registered report requires you to make as many methodological and analytical decisions as possible before data collection starts. This is an important safeguard against researcher degrees of freedom, the flexibility that lets researchers try a bunch of different ways of collecting and analyzing data until they hit on something that seems to work or shows a significant effect. (Doing this is fine when you identify it as post-hoc exploration; it’s not fine when it’s presented as a series of decisions made ahead of time in order to test a particular hypothesis.) Here’s an example of what journals typically require for the method section of a registered report.

Results

This is the section where you describe what you found. Figures usually summarize findings better than text alone; I’ll say more about figures in a later post. If you have some (not too many) numbers to present, a table can work well. Personally, I don’t love giant tables of statistical output, but different analysis methods produce data that lend themselves to different types of presentation, so you do you. Take your cue from whatever you consider to be the clearest and most readable papers in your own subfield.

It is sometimes said that the results section should contain only results and no interpretation, because all the interpretation should be reserved for the discussion section. I don’t quite agree. I think the results section should include enough context for each result so that readers can follow what’s being reported. For example, I would not say only this:

Two-tailed binomial test: 16/19 participants, p=.004; 95% CI: .604-.966; probability of success=.842; BF 27.05.

Instead, I would say this: 

Results from the replication matched those of Experiment 1, with 16 of 19 toddlers choosing the non-yielding or ‘winner’ puppet (two-tailed binomial test p=.004; 95% CI: .604-.966; probability of success=.842). The Bayes Factor was 27.05, which is strong evidence in favor of the hypothesis that toddlers chose the non-yielding puppet either more or less than 50% of the time.

But again, conventions differ from one field to another, so you do you.
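If you happen to work in Python, the statistics quoted in the example above can be reproduced with scipy's binomtest (a sketch; the 16-of-19 numbers come from the hypothetical puppet example in the text, not from a real dataset):

```python
# Reproducing the example statistics above (16 of 19 'successes').
# Requires scipy >= 1.7. The numbers come from the hypothetical
# example in this post, not from real data.
from scipy.stats import binomtest

res = binomtest(k=16, n=19, p=0.5, alternative="two-sided")
ci = res.proportion_ci(confidence_level=0.95)  # Clopper-Pearson ('exact') by default

print(f"p = {res.pvalue:.3f}")                 # p = 0.004
print(f"proportion = {16 / 19:.3f}")           # 0.842
print(f"95% CI: {ci.low:.3f}-{ci.high:.3f}")   # 0.604-0.966
```

Tools like this make it easy to report both the bare statistics and the contextualized sentence around them.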

For a registered report, there are two parts to your results section. First you report the outcomes of all registered analyses—all the things you said you were going to measure and compare. Second, you can report ‘exploratory analyses’—analyses that were not included in the introduction and method section you submitted previously.

For instance, a new analytic approach might become available between the time you got your in-principle acceptance and the time you completed the work. Or a particularly interesting and unexpected finding may emerge from your data. You can definitely include these in your results, but you have to clearly label them as exploratory (i.e., you shouldn’t pretend that they were predicted ahead of time), and you should be careful not to base your conclusions entirely on them, especially if that means ignoring the results of your registered analyses.

Discussion

Brief recap

As I discussed in the post on reviewing literature, many readers come to the general discussion after reading only the title, the abstract and the figures. So I like to start the discussion by briefly recapping, in just a couple of clear, nontechnical sentences, the big and little questions of the study and the main finding(s). I feel like it brings everyone up to speed. Not everyone does this, and sometimes reviewers complain about it, and sometimes I argue with them, and sometimes I take it out. But I usually start with a recap because as a person who usually skips the introductions to papers that I read, I appreciate when authors recap for me at the beginning of the discussion.


Next you explain how the results point to one direction or the other of the ‘fork’ identified at the end of the introduction. In an ideal world, your results would always support one side of the fork or the other; in real life, results are often inconclusive, mixed, weak, boring or unsatisfying in some other way. If you have done this study as a registered report, those things won’t prevent the work from being published. If you are doing a regular article, you will probably face pressure from reviewers to do additional data analyses, perhaps collect more data, and perhaps rewrite the introduction in order to tell a story with a more satisfying arc. Or you just won’t be able to publish the work because reviewers will say that it lacks impact. 

This is a big reason that I like registered reports. Of course, even without registered reports, journals are supposed to be records of science, not supermarket tabloids. So in theory, they shouldn’t care about good stories or exciting news. But in practice publishers are looking to make money, and journals want to be cited, because people care about journal impact factors. (Don’t get me started on how statistically illiterate it is to apply journal impact factors to papers or people—Stephen Curry summed it up best: The stupid, it burns.)

Discussion of unexpected/serendipitous/exploratory findings. As mentioned above, even in a registered report you can add analyses that only occurred to you after registration, and you can report discoveries that you didn’t go looking for. Depending on the kind of statistics you use, you may not enjoy the same level of certainty with post-hoc analyses as you would with tests of a priori predictions, but you can still report everything and talk about what you think it means. Often, unexpected findings become the inspiration for future studies. 

Note that this distinction between looked-for and un-looked-for results typically arises only in registered reports. That’s because only registered reports force authors to make their predictions clear ahead of time. When you report unexpected results in a regular article, you are likely to face pressure from reviewers to rewrite the article to provide more ‘context’ for the unexpected results (i.e., to write the paper as though the thing you found was actually what you set out to look for.) 

Limitations and caveats

No study is perfect; no study answers all questions; most studies include some caveats that readers should keep in mind when interpreting the results; every study leaves many questions yet unanswered. Some papers end by discussing a study’s limitations, which is a shame. The end of a paper is a highly visible position, and it should be used to highlight the study’s most important findings and implications.

The way that Joshua Schimel puts it in his excellent book Writing Science is that instead of saying ‘yes, but’, you should say ‘but, yes.’ In other words, rather than talking about your study’s most important take-home points and then ending the paper with its limitations, you should reverse the order. State your findings along with any limitations, and then end the paper by talking about the biggest implications and main take-home points that you want the reader to remember.

Take-home points

At the end of the paper, you get to state what you think are the most important implications or ‘take-home points’ (i.e., what you want readers to learn or remember) from the study. Ideally, your hourglass will be the same width at the bottom as at the top, meaning that the end of the paper discusses the study at the same level of generality as the beginning. If the paper sets out to address a big, broad question and ends up with only a narrow answer, it means your hourglass is wider at the top than at the bottom, and the reader will feel like you promised more than you delivered. If the paper sets out to answer a modest question but arrives at an answer with much bigger implications, the hourglass will be narrow at the top and wide at the bottom, and the reader will feel like the introduction undersold the importance of the results.

When you’re doing a regular article, you’re under pressure to avoid both these problems and re-write the introduction after you see the data so that the top and bottom of the hourglass are the same width. With a registered report, the introduction is written ahead of time and doesn’t change based on the data, so papers may not have the same poetic balance. The stories won’t be as good. Which is fine, because you’re not writing a poem or a novel—you’re writing science.

References

These are formatted according to the journal’s guidelines. If you use a tool like BibTeX or Zotero, then formatting the in-text citations and references is easy. If you do it the old-fashioned way, then by the time you get to the final draft of the paper, there are usually some missing or extra references from in-text citations that were added or deleted in later revisions. Make sure to double-check these as part of the final proofread.

Supplemental materials

An empirical study will include materials that aren’t described in the actual text of your paper, but that you want to make available when your paper is published. For example, in order to help people replicate the study, you might want to share the stimuli, the full text of surveys or assessments, or video of the researchers doing the procedure. 

What information should go in the supplementary materials depends on what is important and non-obvious in the study. For example, if you gave kids a standardized, published test to measure their vocabulary, you can just cite it like you would any other source. But if you designed your own survey or test or online task and gave it to participants, you should include it in the supplementary materials so that people can see exactly what you did.

The same is true for video of the procedure: If all the researcher did was tell the participant to sit down at a computer and do a perception task, you don't need video of that. But if you had 3 research assistants performing a puppet show (as we do in many of our studies with infants and toddlers), then you should include videos of all the puppet shows so that people can reproduce them. 

Another thing you can put in the supplementary materials is permutations of the data analysis that don't seem central enough or important enough to put in the text of the paper. For example, say that you administered a 100-question survey to participants, and you decided to exclude all the participants who completed your survey in under five minutes. You might do this because you don’t think it was possible to really read and answer 100 questions thoughtfully in under five minutes. So (you assume) the under-five-minutes people were probably just clicking through the survey and marking answers without reading them, and excluding their answers from the data analysis is a reasonable thing to do.

But what if a reader thinks, "Hey! That might be cheating! How do we know you didn't just exclude those participants in order to get the results you wanted?" In order to answer that kind of question you can include, in the supplemental materials, a version of the analysis that included all participants, so that readers who are curious about it can see how that would have changed the results. 
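To make this concrete, here is a minimal Python/pandas sketch of reporting such an analysis both ways, with and without the exclusion. The column names and numbers are invented for illustration:

```python
import pandas as pd

# Hypothetical survey data; the column names and values are made up
# for illustration, not taken from a real study.
df = pd.DataFrame({
    "participant": [1, 2, 3, 4, 5],
    "completion_minutes": [12.0, 3.5, 9.0, 4.8, 15.0],
    "score": [71, 55, 64, 58, 80],
})

# Main analysis: exclude likely click-throughs (under five minutes).
included = df[df["completion_minutes"] >= 5]

# Supplemental version: all participants, so skeptical readers can compare.
print("Mean score, exclusions applied:", included["score"].mean())
print("Mean score, all participants:  ", df["score"].mean())
```

Putting the all-participants version in the supplemental materials lets readers see for themselves whether the exclusion changed the results.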

Of course, if you have made your data publicly available along with the paper (an excellent open-science practice that many journals and funders now require) you can also skip the alternative analyses, because people who want to know what would have happened if you had done the analysis differently can just download the data and do the analysis themselves.

In addition to sharing data, it's good to share your code (the scripts that you used to program the experiment and to analyze the data) so that any reader who wants to can reproduce your analysis. (For this reason, it's good to use statistical software like R or Matlab, which saves a record of the scripts you used, rather than one like SPSS, which uses drop-down menus. Or you can use the free, friendly and all-around awesome JASP, which gives you drop-down menus but also saves a complete record of everything you did.) 

Some journals may ask you to archive all these supplemental materials on their website, but most will be happy if you archive the materials on a separate site (e.g., Open Science Framework, GitHub, etc.) and just include a link in the article. 

Variations on the Standard Hourglass

A paper presenting multiple experiments

Customs differ from one discipline to another, but I’ll describe the format in my own field of psychology. When we are presenting more than one experiment, we modify the hourglass slightly (see Fig. 2). 

The title, abstract, keywords, introduction, references and supplementary materials are all the same as in a single-experiment paper. The difference is in the method, results and discussion. First, there is a general method section. This describes the parts of the method that were the same for the whole series of experiments. For example, if participants for all experiments were recruited in the same way, that recruitment process is described in the general method section. 

Then there is a separate method and results/discussion section for each individual experiment. It starts by briefly explaining what question this experiment was meant to answer (i.e., the rationale for the experiment), and then describes whatever aspects of the method were unique to this experiment. (Often this results in wording like, “The procedure was the same as in Experiment 1, except that participants were shown pictures of animals rather than vehicles.”) This method section is followed by a combined ‘results and discussion’ section for just that experiment. 

Finally, after all of the experiments have been presented, there is a general discussion section that discusses all of them as a group. This follows the format of the discussion in a standard hourglass paper.

A paper where the authors did not collect new data

Not every article presents new data. Some papers do something new with data that have already been published. For example, meta-analyses do a new analysis of data from multiple previous studies; secondary analyses use existing datasets to answer new questions; and modeling projects use existing datasets to build and test new formal models. 

Today, researchers can make their data public on sites such as Open Science Framework and GitHub, allowing other researchers to re-use them. This is a good thing, because it lets us get more value (as a society) out of the public investments we made in those original research projects where the data were collected.

These papers are usually still structured like an IMRaD hourglass, but instead of explaining in detail how the data were collected, they must describe the datasets they used. For papers where the empirical results are really important (such as secondary analyses and meta-analyses), the datasets are described in detail. This is usually done in the first part of the method section, in the same place where the collection of new data would be described. 

Some modeling papers use widely available datasets, such as census data or sports results, to build and test new statistical models. Here, the main contribution of the paper is the model itself, not the empirical results, so the authors just need to say enough about the dataset to give people a general idea of the information it contains, along with a citation to the source. For example, they might say, “We used the Economist Intelligence Unit dataset (2012), which provides country, risk and industry analysis for 200 countries worldwide.” The full reference would appear in the reference list at the end of the paper: Economist Intelligence Unit. (2012). EIU CountryData. [Data file]. London: Economist Intelligence Unit. (http://www.eiu.com/home.aspx). In these papers, the method section is devoted to presenting the model.

A paper presenting both new empirical data and a new statistical model

Often when I present this post in our writing workshop, a student asks me how to structure a paper where they have collected new data answering some empirical question, and they have also come up with an innovative way to model those data. In this case, the variation on the standard hourglass is similar to a multi-experiment paper, where the first ‘experiment’ describes the data collection and the empirical results (description and standard analysis of the results) and the second ‘experiment’ presents the model and modeling results.

Try not to write this kind of paper. It just confuses everyone. Most reviewers are either experts in a content area or experts in modeling. Very few can evaluate both parts of a paper like this, which complicates the peer review process. Most readers are in the same boat—they care either about the empirical findings or about the model, but not both. So no one is really happy with the paper. If possible, write two papers instead. First publish a standard empirical paper presenting the new data; then publish a separate modeling paper. That way people are less confused, and you get two publications on your CV instead of one.

Comments

  1. Thank you so much for making this great book freely available! I'm very much looking forward to using it with my lab writing group this semester, and hope it will save much repeating of myself with my grad students. One suggestion for this chapter-- you may want to include preregistration alongside registered reports. Many of the same things apply (e.g., what you submit for a registered report is basically the same as what you preregister, or at least that's how we do it, and you want to make the same distinction between preregistered and exploratory analyses in writing up the results and discussion). We preregister everything, but most journals we submit to don't do registered reports, and we are sometimes working with existing datasets. It would be nice to highlight preregistration as another good tool for more open science in cases where registered reports aren't a viable option for these or other reasons.

    1. Oh, good idea! I've added a couple of paragraphs on preregistration near the top. Thanks for your kind words about the book. Good luck with your writing group!