Why and how to use this handbook

What is open neuroimaging? Neuroimaging research is notoriously plagued by flexible analysis pipelines [Carp, 2012], insufficient or opaque reporting [Nichols et al., 2017], and low statistical power [Button et al., 2013; Turner et al., 2018]. These pathologies undermine the reliability of our collective findings, and make transparency and reproducibility all the more important [Poldrack et al., 2017]. Broadly, the aim of open neuroimaging is to adopt better practices to counteract these problems and allow ourselves greater confidence in our work.

The primary goal of this handbook is to promote practices for improving the transparency and reproducibility of neuroimaging research. For example, we promote data management standards that facilitate data sharing and re-use, which have tremendous humanitarian and financial benefits [Milham et al., 2018]; we promote practices like software containerization that allow others to more easily re-run your analyses [Ghosh et al., 2017]. This handbook does not cover every aspect of open neuroimaging, but we point to other resources for a more complete picture; e.g., the ReproNim training course [Kennedy et al., 2019].


“My definition of open science is probably not the same as yours. And that’s okay.” —Kirstie Whitaker [Whitaker, 2019]

The term “open science” means different things to different people: for example, open source software, data sharing, open access publishing, accessible educational resources, diversity, inclusivity, equitable power structures, and so on [Whitaker, 2019]. Some practices can be particularly challenging for early career researchers (ECRs) [Allen & Mehler, 2019; Poldrack, 2019]: ECRs are often incentivized to get results as quickly as possible, and senior scientists sometimes do not appreciate the benefits of open science practices. Luckily, open science is not all-or-nothing. You shouldn’t feel pressured to adopt “all” open science practices at once. Adopt whatever practices seem most useful and accessible to you, and integrate them into your workflow. Every little bit helps.


Make open science work for you. Incorporating best practices into your everyday workflow is the best way to learn.

Open science practices are not just ideals; they have practical, concrete benefits for you, your colleagues, and the field. For example, you may need to revisit a dataset or analysis after a month or a year. Taking the time to make your workflow as transparent and reproducible as possible now will save you a lot of time down the road. Future-proof your workflow by adopting best practices from the start and making your analysis open “by design” [Halchenko & Hanke, 2015]. Your future self will thank you.