Resources

Here are some resources I found useful for developing this workshop. I’ll do my best to keep it updated. If you come across a resource you feel is helpful, submit it on the GitHub discussion board for this workshop (https://github.com/rcalinjageman/esci/discussions) and I’ll add it to this list and feature it in a blog post.

Background Reading

  • For neuroscientists, the most lucid explanation of the importance of sample-size planning is a recent commentary by Yarkoni (2009). This paper explains why small sample sizes are problematic even if results are statistically significant.

  • The fact that sample sizes are too small in the neurosciences is now well-documented. Here are three eye-opening readings:

    • Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. https://doi.org/10.1038/nrn3475

    • Szucs, D., & Ioannidis, J. P. A. (2017). When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment. Frontiers in Human Neuroscience, 11. https://doi.org/10.3389/fnhum.2017.00390

    • Carneiro et al. (currently a preprint). Effect sizes and statistical power in the rodent fear conditioning literature: A systematic review. http://dx.doi.org/10.1101/116202

  • Run-and-check is a common practice, but not a good one. Here’s a modern source and a classic source:

    • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–66. https://doi.org/10.1177/0956797611417632

    • Anscombe, F. J. (1954). Fixed-Sample-Size Analysis of Sequential Observations. Biometrics, 10(1), 89. https://doi.org/10.2307/3001665

  • Understanding effect sizes can be challenging. Here’s an excellent source that makes everything clear:

    • Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 1–12. https://doi.org/10.3389/fpsyg.2013.00863

  • Finally, here are some other sources that are well worth checking out:

Planning for Power

Dealing with Uncertainty and Publication Bias

There are already lots of sources and tools for planning for power. Although the approach is easy to adopt, it is important to remember that:

  • Effect sizes in the published literature may be biased, and

  • Effect sizes estimated from small samples are often uncertain.

Therefore, it is a good idea to hedge your sample-size estimates against both bias and uncertainty.

Ken Kelley’s group has an approach that does this:

  • The R package of tools is called BUCSS (Bias- and Uncertainty-Corrected Sample Sizes): https://cran.r-project.org/web/packages/BUCSS/index.html

  • The website designingexperiments.com has web apps that let you plan for power in this careful way without having to learn R. Scroll to the bottom of the web apps page, select your design, and load the appropriate web app.

  • A paper describing this approach is here: Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty. Psychological Science. Advance online publication. https://doi.org/10.1177/0956797617723724
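To make the basic arithmetic concrete, here is a minimal sketch of conventional power-based sample-size planning for a two-group comparison, using the normal approximation, along with the crudest possible hedge: planning on a deliberately pessimistic effect size. The function name and the example effect sizes are my own illustration; BUCSS applies a far more principled correction for bias and uncertainty than simply shrinking d by hand.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample test of standardized
    effect size d (normal approximation to the t distribution)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Planning on the published effect size (d = 0.5, possibly inflated by
# publication bias) versus a deliberately pessimistic d = 0.3:
print(n_per_group(0.5))  # 63 per group
print(n_per_group(0.3))  # 175 per group
```

Note how sensitive the answer is to the assumed effect size: shrinking d from 0.5 to 0.3 nearly triples the required sample, which is exactly why a principled bias/uncertainty correction matters.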

Sequential Testing

If you are going to plan for power, sequential testing can make data collection more efficient, especially in the exploratory phase of research. Lakens offers an excellent tutorial:

  • Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701–710. https://doi.org/10.1002/ejsp.2023
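To see why sequential testing requires adjusted thresholds, here is a toy simulation (my own illustration, not Lakens's exact procedure): a one-sample z test of a true null hypothesis, re-run after every batch of observations. Uncorrected "run-and-check" peeking inflates the false-positive rate well past the nominal .05; a Pocock-style per-look alpha of .0158 for five looks keeps it near .05.

```python
import math
import random
from statistics import NormalDist

def stops_significant(looks, batch, alpha_per_look, rng):
    """One null study: a one-sample z test (mu = 0, sigma = 1), re-run
    after each batch; True if any look crosses the threshold."""
    z_crit = NormalDist().inv_cdf(1 - alpha_per_look / 2)
    data = []
    for _ in range(looks):
        data += [rng.gauss(0, 1) for _ in range(batch)]
        z = (sum(data) / len(data)) * math.sqrt(len(data))
        if abs(z) > z_crit:
            return True
    return False

def false_positive_rate(alpha_per_look, sims=2000, seed=1):
    rng = random.Random(seed)
    return sum(stops_significant(5, 20, alpha_per_look, rng)
               for _ in range(sims)) / sims

print(false_positive_rate(0.05))    # run-and-check: well above .05
print(false_positive_rate(0.0158))  # Pocock-style correction: near .05
```

Real sequential designs (as in Lakens's tutorial) use formal alpha-spending functions rather than this fixed per-look threshold, but the qualitative lesson is the same: pre-plan the looks and adjust the threshold.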

Planning for Precision

Planning for precision is also known as Accuracy in Parameter Estimation (AIPE). In this approach, the researcher's goal is to control the noise/error in the result: a sample size is selected that will give a reasonable margin of error relative to the research question and the scale of measurement.
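As a concrete sketch of the idea (my own illustration; the tools below implement exact methods), the normal-approximation sample size for estimating a single mean with a desired margin of error is:

```python
import math
from statistics import NormalDist

def n_for_moe(sigma, moe, confidence=0.95):
    """n giving a confidence interval for a mean with half-width ~ moe.
    Normal approximation; treats sigma as known, so slightly optimistic."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma / moe) ** 2)

# On an IQ-like scale (sigma = 15), targeting a +/-5-point margin of error:
print(n_for_moe(15, 5))    # 35
print(n_for_moe(15, 2.5))  # 139
```

Because precision improves only with the square root of n, halving the target margin of error roughly quadruples the required sample size, so the target should be set relative to the research question and the scale of measurement.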

One set of readings and tools is from Geoff Cumming and his collaborators:

Another set of readings and tools is from Ken Kelley and his collaborators:

  • MBESS is a free R package that contains many useful functions, among them functions for planning for precision (which Kelley terms AIPE). The functions in MBESS are complex, but they can be used with a wide variety of experimental designs. https://cran.r-project.org/web/packages/MBESS/index.html

  • DesigningExperiments.com has free web applications built on the functions in MBESS, which makes them easier to use. Given that they can handle complex designs, it is not surprising that the learning curve is a bit steep even for the web applications.

  • Kelley and his colleagues have a number of papers and sources on the AIPE approach. Also recommended is the excellent book Designing Experiments and Analyzing Data:

    • Maxwell, S. E., Delaney, H. D., & Kelley, K. (2018). Designing Experiments and Analyzing Data: A Model Comparison Perspective (3rd ed.). New York: Routledge.

    • Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample Size Planning for Statistical Power and Accuracy in Parameter Estimation. Annual Review of Psychology, 59(1), 537–563. https://doi.org/10.1146/annurev.psych.59.103006.093735

    • Kelley, K. (2007). Sample size planning for the coefficient of variation from the accuracy in parameter estimation approach. Behav Res Meth, 39(4), 755–766. https://doi.org/10.3758/BF03192966

    • Kelley, K., & Maxwell, S. E. (2003). Sample Size for Multiple Regression: Obtaining Regression Coefficients That Are Accurate, Not Simply Significant. Psychological Methods, 8(3), 305–321. https://doi.org/10.1037/1082-989X.8.3.305

SAS has tools that enable planning for precision:

Planning for precision is also perfect for Bayesians. John Kruschke has written a book and provides excellent tools for what he terms the Bayesian New Statistics:

Other Approaches to Planning

There are lots of other good ways to plan your studies; I couldn't cover them all. Here are three noteworthy papers and approaches.

  • Planning for Evidence - If you like Bayesian hypothesis testing, a very good approach is to plan for evidence rather than sample size. That is, you can commit to collecting data until you achieve clear evidence for your hypothesis or for the null hypothesis. It sounds scary because data collection is therefore open-ended, yet simulations show this can actually be a very efficient approach.

  • Planning for Stability - Not too different from planning for precision, this approach selects a sample size that provides enough information that subsequent replications can be expected to achieve similar results.

    • Lakens, D., & Evers, E. R. K. (2014). Sailing From the Seas of Chaos Into the Corridor of Stability: Practical Recommendations to Increase the Informational Value of Studies. Perspectives on Psychological Science, 9(3), 278–292. https://doi.org/10.1177/1745691614528520

  • Gelman’s approach: