
Understanding Advanced Statistical Methods

Peter H. Westfall and Kevin S. S. Henning
Publisher: Chapman & Hall/CRC
Publication Date: 2013
Number of Pages: 543
Format: Hardcover
Series: Texts in Statistical Science
Price: 79.95
ISBN: 9781466512108
Category: Textbook
Table of Contents

Introduction: Probability, Statistics, and Science
Reality, Nature, Science, and Models
Statistical Processes: Nature, Design and Measurement, and Data
Models
Deterministic Models
Variability
Parameters
Purely Probabilistic Statistical Models
Statistical Models with Both Deterministic and Probabilistic Components
Statistical Inference
Good and Bad Models
Uses of Probability Models

Random Variables and Their Probability Distributions
Introduction
Types of Random Variables: Nominal, Ordinal, and Continuous
Discrete Probability Distribution Functions
Continuous Probability Distribution Functions
Some Calculus: Derivatives and Least Squares
More Calculus: Integrals and Cumulative Distribution Functions

Probability Calculation and Simulation
Introduction
Analytic Calculations, Discrete and Continuous Cases
Simulation-Based Approximation
Generating Random Numbers

Identifying Distributions
Introduction
Identifying Distributions from Theory Alone
Using Data: Estimating Distributions via the Histogram
Quantiles: Theoretical and Data-Based Estimates
Using Data: Comparing Distributions via the Quantile–Quantile Plot
Effect of Randomness on Histograms and q–q Plots

Conditional Distributions and Independence
Introduction
Conditional Discrete Distributions
Estimating Conditional Discrete Distributions
Conditional Continuous Distributions
Estimating Conditional Continuous Distributions
Independence

Marginal Distributions, Joint Distributions, Independence, and Bayes’ Theorem
Introduction
Joint and Marginal Distributions
Estimating and Visualizing Joint Distributions
Conditional Distributions from Joint Distributions
Joint Distributions When Variables Are Independent
Bayes’ Theorem

Sampling from Populations and Processes
Introduction
Sampling from Populations
Critique of the Population Interpretation of Probability Models
The Process Model versus the Population Model
Independent and Identically Distributed Random Variables and Other Models
Checking the iid Assumption

Expected Value and the Law of Large Numbers
Introduction
Discrete Case
Continuous Case
Law of Large Numbers
Law of Large Numbers for the Bernoulli Distribution
Keeping the Terminology Straight: Mean, Average, Sample Mean, Sample Average, and Expected Value
Bootstrap Distribution and the Plug-In Principle

Functions of Random Variables: Their Distributions and Expected Values
Introduction
Distributions of Functions: The Discrete Case
Distributions of Functions: The Continuous Case
Expected Values of Functions and the Law of the Unconscious Statistician
Linearity and Additivity Properties
Nonlinear Functions and Jensen’s Inequality
Variance
Standard Deviation, Mean Absolute Deviation, and Chebyshev’s Inequality
Linearity Property of Variance
Skewness and Kurtosis

Distributions of Totals
Introduction
Additivity Property of Variance
Covariance and Correlation
Central Limit Theorem

Estimation: Unbiasedness, Consistency, and Efficiency
Introduction
Biased and Unbiased Estimators
Bias of the Plug-In Estimator of Variance
Removing the Bias of the Plug-In Estimator of Variance
The Joke Is on Us: The Standard Deviation Estimator Is Biased after All
Consistency of Estimators
Efficiency of Estimators

Likelihood Function and Maximum Likelihood Estimates
Introduction
Likelihood Function
Maximum Likelihood Estimates
Wald Standard Error

Bayesian Statistics
Introduction: Play a Game with Hans!
Prior Information and Posterior Knowledge
Case of the Unknown Survey
Bayesian Statistics: The Overview
Bayesian Analysis of the Bernoulli Parameter
Bayesian Analysis Using Simulation
What Good Is Bayes?

Frequentist Statistical Methods
Introduction
Large-Sample Approximate Frequentist Confidence Interval for the Process Mean
What Does Approximate Really Mean for an Interval Range?
Comparing the Bayesian and Frequentist Paradigms

Are Your Results Explainable by Chance Alone?
Introduction
What Does by Chance Alone Mean?
The p-Value
The Extremely Ugly "pv ≤ 0.05" Rule of Thumb

Chi-Squared, Student’s t, and F-Distributions, with Applications
Introduction
Linearity and Additivity Properties of the Normal Distribution
Effect of Using an Estimate of σ
Chi-Squared Distribution
Frequentist Confidence Interval for σ
Student’s t-Distribution
Comparing Two Independent Samples Using a Confidence Interval
Comparing Two Independent Homoscedastic Normal Samples via Hypothesis Testing
F-Distribution and ANOVA Test
F-Distribution and Comparing Variances of Two Independent Groups

Likelihood Ratio Tests
Introduction
Likelihood Ratio Method for Constructing Test Statistics
Evaluating the Statistical Significance of Likelihood Ratio Test Statistics
Likelihood Ratio Goodness-of-Fit Tests
Cross-Classification Frequency Tables and Tests of Independence
Comparing Non-Nested Models via the AIC Statistic

Sample Size and Power
Introduction
Choosing a Sample Size for a Prespecified Accuracy Margin
Power
Noncentral Distributions
Choosing a Sample Size for Prespecified Power
Post Hoc Power: A Useless Statistic

Robustness and Nonparametric Methods
Introduction
Nonparametric Tests Based on the Rank Transformation
Randomization Tests
Level and Power Robustness
Bootstrap Percentile-t Confidence Interval

Final Words

Index