# Common Mistakes in Using Statistics - Spotting Them and Avoiding Them

## 2010 Summer Statistics Institute Course, University of Texas at Austin

## May 24 - 27, 2010

### External Links

Rice Virtual Lab in Statistics Sampling Distribution Simulation

Bioconsulting Confidence Interval Simulation

R. Webster's Confidence Interval Simulation

W. H. Freeman's Confidence Interval Simulation

The Rice Virtual Lab in Statistics Confidence Interval Simulation

Rice Virtual Lab in Statistics Robustness Simulation

Claremont University's Wise Project's Statistical Power Applet

Jerry Dallal's Simulation of Multiple Testing

This applet simulates the results of 100 independent hypothesis tests, each performed at the 0.05 significance level. Click the "test/clear" button to see the results of one set of 100 tests (that is, for one sample of data). Click the button twice more (once to clear, once to run another simulation) to see the results of another set of 100 tests (i.e., for another sample of data). Notice as you continue that (i) which tests give type I errors (i.e., are statistically significant at the 0.05 level) varies from sample to sample, and (ii) which samples give a type I error for a given test varies from test to test. (To see the latter point, it may help to focus on just the first column.)
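The applet's behavior can be mimicked with a few lines of code. The sketch below (not part of the applet itself; the function name and seeds are illustrative) runs 100 independent tests in which every null hypothesis is true, so each test has a 5% chance of a type I error, and shows that the count and positions of the errors change from sample to sample:

```python
import random

def simulate_tests(n_tests=100, alpha=0.05, seed=None):
    """Simulate n_tests independent hypothesis tests in which every null
    hypothesis is true; each test rejects (a type I error) with
    probability alpha."""
    rng = random.Random(seed)
    return [rng.random() < alpha for _ in range(n_tests)]

# Repeat the "test/clear" experiment a few times: which tests are
# (falsely) significant varies from one sample of data to the next.
for sample in range(3):
    results = simulate_tests(seed=sample)
    errors = [i for i, rejected in enumerate(results) if rejected]
    print(f"sample {sample}: {len(errors)} type I errors at tests {errors}")
```

On average about 5 of the 100 tests reject in each run, but any particular run may show more or fewer, and at different positions.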

Negative Consequences of Dichotomizing Continuous Predictor Variables

Content is similar to that of the course notes, but includes embedded links and additional information. Continually under construction.
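One negative consequence of dichotomizing a continuous predictor is easy to demonstrate by simulation. The sketch below (an illustration under assumed simulated data, not taken from the linked page) generates a response that depends linearly on a continuous predictor, then compares the correlation with the response before and after a median split of the predictor; the split discards information, so the correlation weakens:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) * sx * sy)

rng = random.Random(0)
n = 2000
x = [rng.gauss(0, 1) for _ in range(n)]          # continuous predictor
y = [xi + rng.gauss(0, 1) for xi in x]           # response depends linearly on x

median_x = statistics.median(x)
x_dich = [1.0 if xi > median_x else 0.0 for xi in x]  # median split

r_full = corr(x, y)
r_dich = corr(x_dich, y)
print(f"correlation with continuous x:  {r_full:.2f}")
print(f"correlation after median split: {r_dich:.2f}")
```

For normally distributed predictors, a median split shrinks the correlation by a factor of roughly 0.8, which translates into a loss of statistical power equivalent to discarding a substantial fraction of the data.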