**Chapter 1: Descriptive Statistics**

Frequency distributions, scatter diagrams, averages— mean, median, and mode, linear regression, populations vs. samples, histograms, range, percentiles, deciles, quintiles, quartiles, variance, sigma notation, standard deviation for populations and for samples, distributions— skewed, platykurtic, leptokurtic, bimodal
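As a quick taste of the chapter's core measures, here is a minimal sketch using Python's standard-library `statistics` module (the data set is made up, not from the book):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # made-up data

mean = statistics.mean(data)       # 40 / 8 = 5.0
median = statistics.median(data)   # average of the middle two values, 4 and 5 = 4.5
mode = statistics.mode(data)       # 4 appears most often
pop_sd = statistics.pstdev(data)   # population standard deviation (divides by n)
samp_sd = statistics.stdev(data)   # sample standard deviation (divides by n - 1)

print(mean, median, mode, pop_sd)  # 5.0 4.5 4 2.0
```

Note the two standard deviations: `pstdev` treats the data as a whole population, while `stdev` treats it as a sample and divides by \(n-1\), the distinction the chapter draws.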

**Chapter 2: Probability**

Outcomes, sample space, events— independent, complements, mutually exclusive, Venn diagrams

**Chapter 3: Conditional Probability**

\(\mathcal{P}(A\mid B)\) notation, definition of conditional probability, Bayes’ Theorem and its proof, generalized Bayes’ Theorem
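A worked example of the chapter's centerpiece, \(\mathcal{P}(A\mid B) = \mathcal{P}(B\mid A)\,\mathcal{P}(A)/\mathcal{P}(B)\), with the denominator expanded by the law of total probability (the disease-testing numbers below are invented for illustration):

```python
# Made-up numbers: a rare disease and an imperfect test.
p_disease = 0.01               # P(A): prior probability of disease
p_pos_given_disease = 0.95     # P(B|A): test sensitivity
p_pos_given_healthy = 0.05     # false-positive rate

# Law of total probability gives P(B), the overall chance of a positive test:
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' Theorem:
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # 0.161
```

Even a 95%-sensitive test leaves only about a 16% chance of disease after one positive result, because the prior is so small, which is exactly the kind of surprise conditional probability delivers.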

**Chapter 3½: Looking Forward to the Next Four Chapters**

The Future— zero samples, the Past— one sample, the Present— two samples, the Present— three or more samples

**Chapter 4: The Future—Zero Samples**

Poisson distributions, e, factorial, continuous vs. discrete variables, exponential distributions— three forms, permutations and combinations, Bernoulli variables, binomial distributions, hypergeometric distributions, multinomial distributions, extended hypergeometric distributions, normal distributions— Gaussian distributions, normal curves to approximate binomial distributions
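The chapter's ingredients (factorials, combinations, and e) assemble directly into the binomial and Poisson probability formulas. A minimal sketch, with made-up parameters:

```python
import math

def binomial_pmf(n, k, p):
    """P(X = k) = C(n, k) * p^k * (1-p)^(n-k)"""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) = lam^k * e^(-lam) / k!"""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Probability of exactly 3 heads in 10 fair coin flips:
print(binomial_pmf(10, 3, 0.5))        # 120 / 1024 = 0.1171875

# Probability of exactly 2 arrivals when the average rate is 3:
print(round(poisson_pmf(2, 3.0), 4))
```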

**Chapter 4½: The Art of the Sample**

Null hypothesis — \(H_0\), the problem of induction— Hume’s problem, the problem of small samples, type I and type II errors, levels of significance, The Ten Rules of Fair Play, data mining, cherry picking, data snooping, pilot samples, alternative hypotheses, one-tail vs. two-tail propositions, dealing with sensitive questions in a survey, dealing with bad luck in surveys, simple random surveys, systematic samples, cluster sampling, stratified samples, outliers, statistical significance vs. actual significance, 13 alternatives to saying “\(H_0\) is tenable.”

**Chapter 5: The Past—One Sample**

Why no one knows what time it is, Normal Distributions— large samples, but a small part of the population, z-scores, determining sample size, confidence intervals, Central Limit Theorem, point estimates, Wald confidence intervals vs. Agresti-Coull confidence intervals, finite population correction factors, Normal Distributions— large samples that are a large part of the population, Student’s t-Distribution, Lilliefors test for normality, standardizing data, cumulative normal frequency, Wilcoxon Signed Ranks test— the Median test, uniform distributions, symmetric distributions, Sign test, power of a test, data— nominal, ordinal, interval, ratio, parametric vs. nonparametric statistics, Sign test for nominal data, Kolmogorov-Smirnov goodness-of-fit test for uniform distributions and for normal distributions, Chi-Squared test for goodness of fit, the Lie Detector test, the is-the-sample-too-variable test, sequences— random, cyclical, trends, Runs test
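The large-sample confidence interval for a mean that anchors this chapter is \(\bar{x} \pm z \cdot s/\sqrt{n}\). A minimal sketch with invented data (the value 1.96 is the familiar z for 95% confidence, read from a standard normal table):

```python
import math
import statistics

# Made-up measurements, repeated to get a "large" sample (n = 40 > 30):
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.0, 12.1] * 4
n = len(sample)

xbar = statistics.mean(sample)     # point estimate of the population mean
s = statistics.stdev(sample)       # sample standard deviation (n - 1)
z = 1.96                           # 95% confidence, standard normal table

half_width = z * s / math.sqrt(n)
print(f"95% CI: ({xbar - half_width:.3f}, {xbar + half_width:.3f})")
```

For small samples the chapter swaps z for a value from Student's t-Distribution, and when the sample is a large fraction of the population it multiplies in the finite population correction factor.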

**Chapter 5½: Secrets of the Binomial Proportion**

Starting with a small sample of a Bernoulli variable, we determine the confidence interval for \(\pi\), the proportion of “good” items in the underlying population, a small history of the problem, Monte Carlo method, the journal article (from *The Journal of Fredometrika*) which describes a new approach to the problem
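The Monte Carlo idea mentioned here can be sketched generically: simulate many small samples of a Bernoulli variable and watch how the sample proportion \(\hat{p}\) scatters around the true \(\pi\). (This is only the general technique; the journal article's actual method is not reproduced, and all numbers below are invented.)

```python
import random

random.seed(42)        # fixed seed so the simulation is repeatable
pi_true = 0.3          # assumed true proportion of "good" items
n = 15                 # small sample size
trials = 10_000

estimates = []
for _ in range(trials):
    good = sum(1 for _ in range(n) if random.random() < pi_true)
    estimates.append(good / n)

# The simulated sampling distribution of p-hat centers near pi:
print(round(sum(estimates) / trials, 3))
```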

**Chapter 6: The Present—Two Samples**

Paired samples, Two Paired Samples \( (\mu_1-\mu_2) \) test, Wilcoxon Signed Ranks test for two paired samples, Signs test for two paired samples, Signs test for paired samples of Hot & Cold, Two Proportions with 2 samples in 2 categories, independent samples, Two Large Independent Samples test, Two Independent Samples test when \(\sigma_1\) and \(\sigma_2\) are known, F-Distribution test, Two Small Independent Samples test where the populations are normal and the standard deviations are roughly equal, Two Small Independent Samples test where the populations are roughly normal but the standard deviations are quite different from each other (a.k.a. the Smith-Satterthwaite test), Mann-Whitney test, Chi-Squared test for 2 samples in many categories, contingency tables, one sample with two variables, Chi-Squared test with Yates Correction for 2 samples in 2 categories, Fisher’s Exact test for 2 samples in 2 categories
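As an illustration of the Smith-Satterthwaite case listed above (two small independent samples with quite different standard deviations), here is the Welch statistic \(t = (\bar{x}_1 - \bar{x}_2)/\sqrt{s_1^2/n_1 + s_2^2/n_2}\) computed by hand on made-up data:

```python
import math
import statistics

a = [5.1, 4.8, 5.6, 5.0, 5.3]   # made-up sample 1
b = [4.2, 4.9, 3.8, 4.5]        # made-up sample 2

n1, n2 = len(a), len(b)
m1, m2 = statistics.mean(a), statistics.mean(b)
v1, v2 = statistics.variance(a), statistics.variance(b)   # sample variances

t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Smith-Satterthwaite approximate degrees of freedom:
df = ((v1 / n1 + v2 / n2) ** 2
      / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)))

print(round(t, 3), round(df, 1))
```

The approximate df always lands between the smaller of \(n_1-1, n_2-1\) and \(n_1+n_2-2\), which is why the equal-variance pooled test (with exactly \(n_1+n_2-2\) df) is preferred when its assumption holds.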

**Chapter 7: The Present—Many Samples**

One-Way ANOVA test for independent samples, weighted averages, Post-test for One-Way ANOVA for independent samples, One-Way ANOVA for matched samples (blocked samples), Post-test for One-Way ANOVA for matched samples, Two-Factor ANOVA with one observation per cell, Post-test for Two-Factor ANOVA, ANOVA tables, Two-Factor ANOVA with several observations per cell, Kruskal-Wallis test, Post-test for Kruskal-Wallis, Chi-Squared test for nominal data with three or more samples, correlation vs. causation
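The heart of one-way ANOVA is the ratio \(F = \mathrm{MS}_{\text{between}}/\mathrm{MS}_{\text{within}}\). A minimal sketch on three made-up independent samples:

```python
import statistics

groups = [                      # made-up data, three independent samples
    [6.0, 8.0, 7.0],
    [5.0, 4.0, 6.0],
    [9.0, 8.0, 10.0],
]

k = len(groups)                          # number of samples
N = sum(len(g) for g in groups)          # total observations
grand_mean = sum(sum(g) for g in groups) / N

# Variation of group means around the grand mean (between-groups):
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups)
# Variation of observations around their own group mean (within-groups):
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)        # df = k - 1
ms_within = ss_within / (N - k)          # df = N - k
F = ms_between / ms_within
print(F)                                 # 12.0
```

A large F says the group means differ by more than the within-group scatter can explain; the decision comes from the F-Distribution table with \(k-1\) and \(N-k\) degrees of freedom.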

**Chapter 7½: Emergency Statistics Guide**

**Chapter 8: Finding Regression Equations**

Linear regression, prediction intervals, Pearson Product Moment Correlation Coefficient (r), coefficient of determination, multiple regression, normal equations, coefficient of multiple determination (R²), adjusted coefficient of multiple determination, design variables, dummy variables, saturated models, multicollinearity, step down method, nonlinear regression, logarithmic curves, reciprocal curves, power curves, exponential curves, parabolic curves, two independent variables with possible interaction, logistic regression
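The least-squares line and Pearson's r come straight from the sums of squares \(S_{xx}\), \(S_{yy}\), \(S_{xy}\). A minimal sketch on made-up points:

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]       # made-up data
y = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

sxx = sum((xi - xbar) ** 2 for xi in x)
syy = sum((yi - ybar) ** 2 for yi in y)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b = sxy / sxx                        # slope of the least-squares line
a = ybar - b * xbar                  # intercept
r = sxy / math.sqrt(sxx * syy)       # Pearson Product Moment Correlation

print(round(b, 3), round(a, 3), round(r, 4))
```

The coefficient of determination is simply \(r^2\), the fraction of the variation in y accounted for by the line; with more than one predictor, the same idea generalizes to R² via the normal equations.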

**The Field Guide**

Future — The population is known and you want to know what the sample will look like. You start with zero samples: Hypergeometric Distribution, Extended Hypergeometric Distribution, Binomial Distribution, Multinomial Distribution, Poisson Distribution, Exponential Distribution, Normal Distribution

Past— The sample is known and you want to know what the population was that gave this sample. You start with one sample: Normal Distribution — \(n > 30\) and the sample is small compared with the population, Normal Distribution — \(n > 30\) and the sample is large compared with the population, Student’s t-Distribution, Binomial Distribution (large sample, \(n > 30\)), Binomial Distribution (small sample, \(n \leq 30\)), Kolmogorov-Smirnov goodness-of-fit test, Lilliefors test, Wilcoxon Signed Ranks test, Sign test — Does the population have that median?, Sign test for Nominal Data, Chi-Squared test (goodness of fit), Chi-Squared test (Lie Detector), Chi-Squared test (Is the population too variable?), Runs test

Present — You start with two samples and want to know how they compare with each other: Two Paired Samples \((\mu_1-\mu_2)\), Wilcoxon Signed Ranks test, Sign test for two paired samples, Sign test for two paired samples of nominal data, Two Proportions in two categories, Two Large Independent Samples, \(n \geq 30\), Two Independent Samples (\(\sigma_1\) and \(\sigma_2\) known), F-Distribution test, Two Small Independent Samples with roughly equal standard deviations, Two Small Independent Samples (Smith-Satterthwaite) with very different standard deviations, Mann-Whitney test (a.k.a. Wilcoxon Rank-Sum test), Chi-Squared test (\(\chi^2\)) for two samples of nominal data in multiple categories, One Sample with Two Variables, Chi-Squared test (\(\chi^2\)) — Yates Correction, Fisher’s Exact test

Present— You start with three or more samples and want to know how they compare with each other: One-Way ANOVA (independent samples), Post-test for One-Way ANOVA (independent samples), One-Way ANOVA (matched samples), Post-test for One-Way ANOVA (matched samples), Two-Factor ANOVA (one observation per cell), Post-test for Two-Factor ANOVA (one observation per cell), Two-Factor ANOVA (multiple observations per cell), Kruskal-Wallis test, Post-test for Kruskal-Wallis, Chi-Squared (\(\chi^2\)) for three samples of nominal data

**Tables**

Table A: Binomial Coefficients

Table B: Kolmogorov-Smirnov (one sample)

Table C: Standard Normal Curve (area from 0 to z)

Table D: Standard Normal Curve (area from \(-\infty\) to z)

Table E: Standard Normal Curve (area from –z to z)

Table F: Student’s *t*-Distribution

Table G: Lilliefors

Table H: Wilcoxon Signed Ranks

Table I: Sign test

Table J: Chi-Squared (\(\chi^2\))

Table K: Runs test

Table L: Mann-Whitney (Wilcoxon Rank-Sum)

Table M: Fisher’s Exact test

Table N: *F*-Distribution

Table O: Kruskal-Wallis test

Table P: Binomial Proportion Intervals

Index