Index

A
Advanced mixture modeling, 603–606
Aiken, Leona S., 26–51
Alternating least squares scaling (ALSCAL), 238
Alternative models for binary outcomes, 35–36
Analysis of variance (ANOVA)
computations, 12–13
introduction of concept, 8
Anderson, Rawni A., 718–758
Anselin, Luc, 154–174
Approximate discrete model, 427–428
Assumptions, violations of, 28–29
Autocorrelation, 459–460
global, 156–158
local, 158–159
B
Baraldi, Amanda N., 635–664
Bayesian configural frequency analysis, 86–87
Bayesian hierarchical models
hierarchical spatial models, 167–168
and imaging analytics, 188–189
Bayesian methods of analysis
and mediation analysis, 343
probability and inference, 186–187
Bayesian models for fMRI, 189–191
Beauchaine, Theodore P., 612–634
Binary classification tree, 682
Binary logistic regression, 33–37
Binary variables, 58–59
Binomial test, 108–109
Blokland, Gabriëlla A. M., 198–218
Bootstrap methods, 126–131
Brose, Annette, 441–457
Brown, Timothy A., 257–280
Buskirk, Trent D., 106–141
C
Card, Noel A., 701–717
Case diagnostics, 47–49
Casper, Deborah M., 701–717
Categorical methods, 52–73
categorical variables, 52–61
measuring strength of association between, 58–61
testing for significant association between, 52–58
conclusions and future directions, 71–72
effect sizes, 64–66
key terms, 55
logistic regression, 66–71
binary response, 66–69
proportional odds model, 69–71
symbols used, 53–55
Categorical variables
measuring strength of association between, 58–61
testing for significant association between, 52–58
Cell frequencies, and configural frequency analysis, 87
Chi-square test, 119–122
Classical statistical approaches, overview of, 7–25
Classification and regression trees, 683–690
cut-point and variable selection bias, 686–687
examples of, 691, 692
instability of trees, 690
interpretation, 688–690
prediction and interpretation, 688
recursive partitioning, 683–684
split selection criteria, 684–686
stopping and pruning, 687
Classification techniques
See Clustering and classification techniques
Cluster analysis and MDS, 239–240
Clustering and classification techniques, 517–550
chapter notation, 522
concluding remarks, 543
finite mixture and latent class models, 530–543
absolute fit assessment, 542
class-specific item response probabilities, 533
constrained latent class models, 535–537, 539–540
diagnostic classification models, 535, 536–537, 539
instrument calibration vs. respondent scaling, 533
investigating relative performance, 540
item-fit assessment, 542–543
item response probabilities for five assessment items, 539
latent classes as attribute profiles, 535
local/conditional independence assumption, 533
model-data fit at different levels, 540–541
for multiple quantitative response variables, 532–533
parameter constraints via the Q-matrix, 535–536
parameter values for five assessment items, 538
person-fit assessment, 543
relative fit assessment, 541
for single quantitative response variables, 531–532
software packages for, 540
statistical structure of unrestricted latent class model, 534–535
unconstrained latent class models, 533–535
foundational terminology, 519–522
exploratory vs. confirmatory techniques, 520–521
nonparametric vs. parametric model-based techniques, 520
observations vs. variables, 519
variable types vs. measurement scales, 519–520
glossary of key terms, 544–546
introduction to, 517–518
nonparametric techniques, 522–530
additional example, 530
agglomerative vs. divisive approaches, 522
basic concepts, 522
distance measures for multivariate space, 525–526
graphical representation, 523
K-means clustering, 527–529
measures of intercluster distance, 526–527
numerical representation, 522–523
partitioning cluster methods, 527–529
pre-processing choices for hierarchical techniques, 523–527
software for, 529–530
range of applications, 518
standardization formulas for cluster analysis, 524
suggested readings, 521–522
books, 521
peer-reviewed publications, 521–522
professional associations, 522
Cochran’s Q, 115–117
Coefficient interpretation, 37–38, 41
The column problem, 145–146
Configural frequency analysis (CFA), 74–105
appropriate questions for, 75–76
base models, 78–80
future directions, 102
null hypothesis, 80–81
sample models and applications, 89–102
longitudinal CFA, 93–97
mediation configural frequency analysis, 97–102
prediction configural frequency analysis, 89–91
two-group CFA, 91–93
significance tests for, 81–86
protecting α, 81–82
sampling schemes, 82–86
six steps of, 86–89
symbols and definitions, 103
technical elements of, 76–81
Contextual variable fallacies, 720–725
avoiding hierarchically nested data structures, 724–725
confusing moderation with additive effects, 724
direct effect and evidence of mediation, 721–722
mistaking mediation for moderation, 720–721
testing mediation with constituent paths, 721
using cross-sectional models to test mediation, 722–724
Continuous time, models of, 416–426
conclusions and future directions, 426–427
first-order differential equation model, 420–423
second-order differential equation model, 423–426
Control variables, associations with, 61–64
Coombs’ contribution to MDS, 238
Correspondence analysis, 142–153
application to other data types, 151
canonical correspondence analysis, 151–152
correspondence analysis displays, 146–147
introductory example, 143
measure of fit, 150–151
multiple correspondence analysis, 147–150
principal component analysis and multidimensional scaling, 143–147
statistical inference, 152
Coxe, Stefany, 26–51
Curve estimation methods, 131–137
D
Data mining, 678–700
binary classification tree, 682
conclusion, 698
exemplary techniques, 681–697
classification and regression trees, 683–690, 691, 692
ensemble methods, 690–697
introduction to, 678–681
classical statistics, 678–679
neo-classical statistics, 679–681
other techniques, literature, and software, 697–698
Deboeck, Pascal R., 411–431
Dellinger, Anne, 4
Density estimation, nonparametric, 131–135
Deviation, concepts of, 87–88
Diagnostics, model and case, 47–49
Differential item functioning, 65–66
Diggle-Kenward selection model, 649, 660
Ding, Cody S., 235–256
Discrete-time survival factor mixture model, diagram, 605
Distance measures, 48–49
Donnellan, M. Brent, 665–677
Dowsett, Chantelle, 4
Dynamic causal models, 192
Dynamic factor analysis, 441–457
background, 442–444
five steps for conducting
between-person differences, 449, 451
empirical illustration, 446, 447, 448, 449, 451
person-specific models, 448–449
research questions, 446
study design and data collection, 446–447
variable selection and data preprocessing, 447–448
future directions, 451–454
adaptive guidance, 453–454
idiographic filters, 454
non-stationarity, 452–453
glossary, 455
synopsis, 454–455
technical background, 444–445
Dynamical systems, 411–431
approximate discrete model, 427–428
attractors and self-regulation, 414–416
concept of, 412–413
conclusions and future directions, 426–427
language of, 413–414
latent differential equation modeling, 428–430
models of continuous time, 416–419
first-order differential equation model, 420–423
second-order differential equation model, 423–426
E
Edgeworth, Francis Y., 8
Edwards, Michael, 4
Effect sizes
and categorical methods, 64–66
introduction of concept, 9
recommendations for best practice, 23–24
Eigenvalues, 21–22
Electroencephalography, and statistical parametric mapping, 177
Enders, Craig K., 635–664
Ensemble methods, of data mining, 690–697
bagging, 690–691
predictions from ensembles, 693–695
random forests, 691–693
randomness, 696–697
variable importance, 695–696
Error covariance matrix, 184–185
Estimation theory, 12–13
Event history data analysis, 486–516
conclusion, 514
continuous state space, 511–514
continuous time formulation, 493–499
basic concepts, 493–494
examples, 497–498
rate and probability, 499
specifications and estimation, 496–497
discrete state space, 509–511
discrete time formulation, 492–493
hazard-rate framework, 492–493
motivation, 488, 490–492
censoring and time-varying covariates, 488
illustration of the censoring problem, 488, 490–491
initial statement of the solution, 491–492
observability of the dependent variable, 506–507
problems created for standard techniques, 489
repeated events, 507–509
time-dependent covariates, 502–506
basic ideas, 502–504
data management, 505–506
exogeneity of covariates, 504
illustration, 506
survivor function, 504–505
time-independent covariates, 499–502
coefficients, 500–502
illustration, 502
Excess zeros, concept of, 43–44
Extensions to space-time, 160–162
F
Factor analysis
fallacies, 739–743
default use of orthogonal rotation, 741
misuse of principal components, 739–740
number of factors retained in EFA, 740–741
other issues in factor analysis, 742
summary, 742–743
using CFA analysis to confirm EFA analysis, 741–742
and MDS, 239
Finite mixture modeling, 551–611
advanced mixture modeling, 603–606
conclusion, 606–607
future directions, 607
history of mixture modeling, 554–557
finite mixture modeling, 554–555
latent class analysis, 555–556
the more recent past, 556–557
as latent variable models, 552
list of abbreviations, 607–608
as a person-centered approach, 552–554
Fisher, Ronald A., 8
Fisherian school of statistics, 8
Fisher’s exact test, 119–122
Frequentist configural frequency analysis, 86–87
Friedman’s test, 115–117
Functional magnetic resonance imaging (fMRI)
and analytic models and designs, 183–184
Bayesian models for, 189–191
and statistical parametric mapping, 177
G
Gaussian processes, 460
General linear model (GLM)
overview of, 9
three classes of, 13–20
time series model at the voxel level, 185
Generalized linear models (GLiM), 26–51
common examples, 31, 33–46
binary logistic regression, 33–37
multinomial logistic regression, 31, 37–38
ordinal logistic regression, 31, 38–39
other GLiMs, 46
Poisson regression, 31, 39–44
two-part models, 44–46
diagnostics, 47–49
introduction to, 26–27
maximum likelihood estimation, 30–33
multiple regression, 27–29
pseudo-R-squared measures of fit, 46–47
summary and conclusions, 49–50
three components of a GLiM, 29–30
Genes, quantitative analysis of, 219–234
association analysis, 227–233
case-control association tests, 227–229
family-based association tests, 230–232
genome-wide association studies, 232–233
population stratification, 229
quality control and prior data cleaning, 227
linkage analysis, 221–226
background, 221–222
types of, 222–226
overview of genetic data, 219–221
DNA variation, 220–221
obtaining genotypic data, 220
significance of linkage, 226–227
summary, 233
Genetics, twin studies, 198–218
classical twin model, 202–215
assumptions of the model, 205–208
extensions to the model, 208–211
multivariate modeling, 211–215
structural equation modeling, 203–205
introduction and overview, 198–202
twin studies and beyond, 215
Global autocorrelation, 156–158
Global configural frequency analysis, 79
Gossett, William S., 8
Gottschall, Amanda C., 338–360
Greenacre, Michael J., 142–153
Growth mixture model, diagram, 605
H
Harshman, R.A., 8
Hau, Kit-Tai, 361–386
Heteroscedasticity, 28
Hierarchical linear model, 185–186
History of traditional statistics, 8–9
Hox, Joop J., 281–294
Hurdle regression models, 44–45
I
Imaging data, analysis of, 175–197
analytic methods, 180–182
foundational issues in neuroimaging, 181
model basics, 181–182
analytic models and designs
functional magnetic resonance imaging models, 183–184
positron emission tomography, 182
Bayesian methods of analysis, 186–187
Bayesian models for fMRI, 189–191
classic frequentist probability, 187–189
conclusion and future directions, 195
dynamic causal models, 192
early approaches based on the general linear model, 176–177
functional connectivity, 191–192, 193–195
history of imaging methods and analyses, 176
modeling serial correlation, 184–185
time series general linear model at the voxel level, 185
multilevel models, 185–186
expectation maximization, 185–186
multivariate autoregressive models, 192–193
parameter estimation, 182
spatial normalization and topological influence, 177–182
statistical parametric mapping, 179–180
steps from image acquisition to analysis, 180
statistical parametric mapping, 177
structural equation modeling, 193–195
Individual differences MDS models, 238
Individual differences scaling (INDSCAL), 238, 246–247
Influence measures, 49
Intensive longitudinal data
See Longitudinal data, intensive
Interaction
Interpretation, recommendations for best practice, 23–24
Introduction, 1–6
J
Johnson, David, 4
K
K-means clustering, 527–529
Kadlec, Kelly M., 295–337
Kendall’s τ, 117–119
Kisbu-Sakarya, Yasemin, 338–360
Kruskal-Wallis test, 113–115
Kruskal’s contribution to MDS, 237–238
L
Land use planning models, 170–171
Latent class analysis, 557–584
a brief history of, 555–556
mediation model, 605
missing data, 573–584
model building, 565–573
model estimation, 561–565
model formulation, 557–558
model interpretation, 558–561
Latent differential equation modeling, 428–430
Latent mixture modeling, diagram, 605
Latent profile analysis, 584–606
example of, 592–600
example of latent class regression, 602–603
latent class regression, 600–601
model building, 590–592, 601–602
model estimation, 590
model formulation, 584–587, 601
model interpretation, 587–590
post hoc class comparisons, 603
Latent transition model, diagram, 605
Latent variable interpretation, 35
Latent variable measurement models, 257–280
conclusion, 276–277
confirmatory factor analysis, 260–266
exploratory factor analysis, 258–260
extensions of confirmatory factor analysis, 269–273
future directions, 277–279
higher-order models, 273–276
hybrid latent variable measurement models, 266–269
selected output for confirmatory factor analysis, 263
selected output for exploratory structural equation modeling, 268–269
Lee, Jason and Steve, 4
Leverage measures, 48
Limited dependent variables, 28–29
Linear regression model, 163–165
Linearity, 28–29
Linkage analysis
model-based linkage, 222–223
model-free linkage, 223–226
Little, Todd D., 1–16, 387–410, 718–758
Local autocorrelation, 158–159
Location models, 169–170
Logistic regression, 66–71
binary response, 66–69
model fit, 66–67
parameter interpretation, 68–69
proportional odds model, 69–71
Longitudinal configural frequency analysis, 93–97
Longitudinal data, intensive, 432–440
challenges and opportunities, 438–439
idiographic-nomothetic continuum, 434–437
reactivity, 436–437
review of, 433
recurring themes, 433–434
sources of data, 433
statistical models, 437–438
Longitudinal data analysis, 387–410
advances in modeling, 406–407
conclusion and discussion, 406–407
importance of, 387–389
multilevel modeling approach, 389–397
curvilinear growth curve model, 392–393
error structures, 396
linear growth curve model, 389–392
nonlinear growth curve model, 393–394
spline curve models, 395
time-constant covariates, 396–397
time-varying covariates, 397
structural equation modeling approaches, 397–406
autoregressive cross-lagged models, 401–402
curvilinear latent curve model, 398–399
general assumptions, 405–406
latent difference score models, 403
linear latent curve model, 397–398
nonlinear latent curve model, 399–401
parallel process latent curve model, 403–404
second-order latent curve model, 404–405
Longitudinal mediation, 351–353
Lucas, Richard E., 665–677
M
MacKinnon, David P., 338–360
Magnetic resonance imaging
and analytic models and designs, 183–184
Bayesian models for, 189–191
and statistical parametric mapping, 177
Mair, Patrick, 74–105
Marsh, Herbert W., 361–386
Masyn, Katherine E., 551–611
Maximum likelihood
estimation, 30–33
McArdle, John J., 295–337
McNemar’s test, 110–113
Mean, estimation of, 460
Measure of fit, 150–151
Measurement error fallacies, 725–730
ignorance of latent mixture and multilevel structure, 728–729
individual items and composite scores, 726–728
the myth about numbers, 725–726
reliability and test length, 728
unreliability and attenuated effects, 729–730
Mediation analysis, 338–360
causal inference in, 348–351
experimental designs, 350–351
principal stratification, 350
sequential ignorability assumption, 348–350
estimating the mediated effect, 340–342
assumptions, 341
coefficients approach, 340–341
covariates, 341
multiple mediators, 341–342
point estimation, 340–342
standard error, 342
history, 339
longitudinal mediation, 351–353
autoregressive models, 352
latent change score models, 352–353
latent growth curve models, 352
person-centered approaches, 353
three (or more)-wave models, 351–353
two-wave models, 351
mediation analysis in groups, 345–348
moderation and mediation, 346–347
multilevel mediation, 347–348
modern appeal, 339–340
significance testing and confidence interval estimation, 342–345
Bayesian methods, 343
categorical and count outcomes, 343–344
effect size measures, 343
non-normality, 344
small samples, 344–345
summary and future directions, 353–354
Mediation configural frequency analysis, 97–102
decisions concerning type, 98–102
four base models for, 97–98
Medical imaging
Bayesian models for fMRI, 189–191
connectivity of brain regions, 191–192
issues in neuroimaging, 181
and statistical parametric mapping, 177
Medland, Sarah E., 198–218, 219–234
Meta-analysis, 701–717
advanced topics, 715–716
alternative effect sizes, 715
artifact corrections, 715–716
multivariate meta-analysis, 716
analysis of mean effect sizes, 710–713
fixed-effects means, 711
heterogeneity, 711–712
random-effects means, 712–713
coding effect sizes, 707–710
computing effect sizes, 709–710
correlation coefficient, 708
odds ratios, 709
standardized mean differences, 708–709
coding study characteristics, 707
introduction to, 701–702
moderator analyses, 713–715
categorical moderators, 714–715
limitations to, 715
single categorical moderator, 713–714
single continuous moderator, 714
problem formulation, 702–705
appropriate questions for meta-analysis, 702–703
critiques of meta-analysis, 703–704
identifying goals and research questions, 703
limits of, 704–705
strengths of, 705
searching the literature, 705–707
defining a sampling frame, 705
identifying criteria, 705
search techniques and resource identification, 705–707
Metric MDS model, 237
MIMIC data, 323–334
Missing data fallacies, 730–732
attempting to prepare for missing data, 732
missing-data treatments and notion of “cheating,” 730–732
Missing data methods, 635–664
artificial data example, 636–637
atheoretical missing data handling methods, 639–641
averaging available items, 639–640
last observation carried forward imputation, 640
mean imputation, 639
similar response pattern imputation, 640–641
conclusion, 661–662
data analysis examples, 653–657
complete data, 653–654
missing at random data, 654–655, 656
missing completely at random data, 654, 655
not missing at random-based approaches revisited, 656–657
not missing at random data, 655–656
improving missing at random-based analyses, 650–653
dealing with non-normal data, 652–653
role of auxiliary variables, 650–651
missing at random (MAR), 642–648
maximum likelihood estimation, 645–648
multiple imputation, 642–645
stochastic regression imputation, 642
missing completely at random (MCAR), 641–642, 654
deletion methods, 641–642
regression imputation, 641
missing data mechanisms, 637–639
not missing at random, 648–650
planned missing data designs, 657–661
longitudinal designs, 661
three-form design, 659–661
two-method measurement, 658–659
Model diagnostics, 47
Modeling
See individual types of modeling
Models
See individual model types
Moderation, 361–386
analysis of variance, 364–365
classic definition of, 362–363
confounding nonlinear and interaction effects, 379–380
distribution-analytic approaches, 377–378
further research, 379
graphs of interaction effects, 363–364
interactions with more than two continuous variables, 381–382
latent variable approaches, 374–375, 378
and mediation, 346–347, 380–381
moderated multiple regression approaches, 365–373
disordinal interactions, 371–372
interactions with continuous observed variables, 369–371
multicollinearity involved with product terms, 372–373
power in detecting interactions, 372
standardized solutions for models with interaction terms, 368
tests of statistical significance of interaction effects, 368–369
multilevel designs and clustered samples, 383
multiple group SEM approach to interaction, 375
non-latent approaches
for observed variables, 364
traditional approaches to interaction effects, 373–374
SEMs with product indicators, 375–377
latent interaction, 376–377
separate group multiple regression, 365
summary, 378–379
tests of measurement invariance, 382–383
vs. causal ordering, 380–381
Molenaar, Peter C. M., 441–457
Morin, Alexandre J. S., 361–386
Mosing, Miriam A., 198–218
Mulaik, S. A., 8
Multidimensional scaling (MDS), 235–256
basics and applications of MDS models, 240–254
computer programs for MDS analysis, 251–252
individual differences models, 246–250
metric model, 242–243
new applications, 252–254
nonmetric model, 243–246
using maximum likelihood estimation, 250–251
variety of data, 240–242
a brief description of MDS(X) programs, 253
and cluster analysis, 239–240
conclusion, 254
future directions, 254–255
historical review, 237–240
four stages of MDS development, 237–239
and principal component analysis, 143–147
terminology and symbols, 236
Multilevel models
diagram of latent class model, 605
the hierarchical linear model, 185–186
Multilevel regression modeling
conclusion, 291
future directions, 291, 293
introduction, 281–282
key terms and symbols, 292
methodological and statistical issues, 289–291
assumptions, 289–290
further important issues, 290–291
sample size, 290
typical applications, 282–286
individuals within groups, 282–283
measurement occasions within individuals, 283–286
Multilevel structural equation modeling
conclusion, 291
future directions, 291, 293
introduction, 281–282
key terms and symbols, 292
methodological and statistical issues, 289–291
assumptions, 289–290
further important issues, 290–291
sample size, 290
typical applications, 286–289
latent curve modeling, 286–287
Multinomial logistic regression, 31, 37–38
Multiple regression
assumptions, 27–28
limited dependent variables, 28–29
Multivariate statistics, 20–23
Mun, Eun-Young, 74–105
Murray, Alan T., 154–174
N
Nagengast, Benjamin, 361–386
Negative binomial regression, 31, 42–43
Neyman, Jerzy, 8
Neyman-Pearson school of statistics, 8
Non-normality, 28, 344
Nonmetric MDS model, 237–238
Nonparametric statistical techniques, 106–141
classical nonparametric methods, 108–122
comparing more than two samples, 113–117
comparing two dependent samples, 110–113
comparing two independent samples, 109–110
methods based on a single sample, 108–109
nonparametric analysis of nominal data, 119–122
nonparametric correlation coefficients, 117–119
curve estimation methods, 131–137
density estimation, 131–135
extensions to multiple regression, 137
simple nonparametric regression, 135–137
future directions, 138–139
glossary of terms, 139–140
modern resampling-based methods, 122–131
applying permutation tests to one sample, 122–123
bootstrap confidence interval methods, 129–130
bootstrap methods, 126–129
bootstrap methods and permutation tests, 131
general permutation tests, 122
other applications of bootstrap methods, 130–131
statistical software for conducting, 137
vs. parametric methods, 137–138
Null hypothesis
in configural frequency analysis (CFA), 80–81
O
Optimization modeling, 168–171
Ordinal logistic regression, 31, 38–39
Ordinal variables, 59–61
Organization of Handbook of Quantitative Methods, 2–4
Orthogonal rotation, 741
Overdispersed Poisson regression, 42–43
Overdispersion, 36–37, 41–42
P
P calculated values
introduction of, 8
Parameter estimates and fit statistics, 450
Pearson, Karl and Egon S., 8
Pearson’s computations, 10–11
Permutation tests, 122–126
applying to one sample, 122–123
applying to two samples, 123, 125–126
full enumeration of eight samples, 124
Person-specific process, 443, 448–449, 450, 451, 455
Petersen, Trond, 486–516
Poisson regression, 31, 39–44
Population stratification, 229
Positron emission tomography
and analytic models and designs, 182–183
and statistical parametric mapping, 177
Practical significance effect sizes, 9, 13–15
Preacher, Kris, 4
Prediction configural frequency analysis, 89–91
Preference MDS models, 247
Price, Larry R., 175–197
Principal component analysis, 143–147
Proportional odds model, 69–71
Pseudo-R-squared measures of fit, 46–47
Q
Q-matrix, 535–536, 537
Quantitative research methodology, common fallacies in, 718–758
concluding remarks, 743, 748
contextual variable fallacies, 720–725
avoiding hierarchically nested data structures, 724–725
confusing moderation with additive effects, 724
direct effect and evidence of mediation, 721–722
mistaking mediation for moderation, 720–721
testing mediation with constituent paths, 721
using cross-sectional models to test mediation, 722–724
factor analysis fallacies, 739–743
default use of orthogonal rotation, 741
misuse of principal components, 739–740
number of factors retained in EFA, 740–741
other issues in factor analysis, 742
summary, 742–743
using CFA analysis to confirm EFA analysis, 741–742
introduction to, 718–720
measurement error fallacies, 725–730
ignorance of latent mixture and multilevel structure, 728–729
individual items and composite scores, 726–728
the myth about numbers, 725–726
reliability and test length, 728
unreliability and attenuated effects, 729–730
missing data fallacies, 730–732
attempting to prepare for missing data, 732
missing-data treatments and notion of “cheating,” 730–732
statistical power fallacies, 736–739
lack of retrospective power and null hypothesis, 737
nonsignificance and null hypothesis, 736–737
statistical power as a single, unified concept, 736
summary and recommendations, 737–739
statistical significance fallacies, 732–735
alternative paradigms, 735
alternatives and solutions, 734–735
p-values and strength of effect, 733
p-values reflect replicability, 734
relationship between significant findings and study success, 734
significance of p-value and research hypothesis, 733
statistical significance and practical importance, 733–734
summary checklist, 743–748
R
R Code, 332–334
Raju, N.S., 8
Ram, Nilam, 441–457
Regional configural frequency analysis, 79–80
Regression analysis, spatial, 162–168
Regression mixture model, diagram, 605
Regression specification, 163
Regression time series models, 475–478
Replicability, 18–20, 24
Rey, Sergio J., 154–174
Rhemtulla, Mijke, 4
The row problem, 145
Rupp, André A., 517–550
S
Scatterplot smoothing, 135–137
Secondary data analysis, 665–677
advantages and disadvantages, 667–668
conclusion, 675
measurement concerns in existing data sets, 671–673
missing data in existing data sets, 673–674
primary research vs. secondary research, 666–667
sample weighting in existing data sets, 674–675
steps for beginning, 669–671
Selig, James P., 387–410
SEM-CALIS, 325–329
SEM-Mplus, 329–332
Serial correlation, modeling, 184–185
Sign test, 110–113
Significance testing
in configural frequency analysis, 88–89
and control variables, 61–64
Significant association, testing for, 52–58
Significant difference, introduction of term, 8
Snedecor, George W., 8
Software, statistical
development of, 2
for finite mixture and latent class models, 540
for nonparametric techniques, 137, 529–530
Space-time, extensions to, 160–162
Spatial analysis, 154–174
autocorrelation analysis, 156–162
conclusion, 171–172
exploratory spatial data analysis, 155–162
spatial autocorrelation analysis, 156–159
spatial clustering, 160–162
spatial data, 155–156
spatial optimization modeling, 168–171
spatial regression analysis, 162–168
other spatial models, 166–168
spatial dependence in the linear regression model, 163–165
spatial effects in regression specifications, 163
specification of spatial heterogeneity, 165–166
Spatial data, 155–156
Spearman’s ρ, 117–119
Statistical approaches, overview of traditional methods, 7–25
ANOVA computations, 12–13
brief history of traditional statistics, 8–9
general linear model, 9, 13–20
variance partitions, 9–12
Statistical estimation theory, 12–13
Statistical inference, 152
Statistical parametric mapping, and medical imaging, 177
Statistical power fallacies, 736–739
lack of retrospective power and null hypothesis, 737
nonsignificance and null hypothesis, 736–737
statistical power as a single, unified concept, 736
summary and recommendations, 737–739
Statistical significance, 15–18
fallacies, 732–735
alternative paradigms, 735
alternatives and solutions, 734–735
p-values and strength of effect, 733
p-values reflect replicability, 734
relationship between significant findings and study success, 734
significance of p-value and research hypothesis, 733
statistical significance and practical importance, 733–734
p calculated values, 8
recommendations for best practice, 23
vs. practical significance, 9
Strobl, Carolin, 678–700
Structural equation modeling, 193–195
common factors and latent variables
benefits and limitations of including common factors, 315
common factors with cross-sectional observations, 315–316
common factors with longitudinal observations, 316–317
common factors with multiple longitudinal observations, 317–319
the future of, 319–321
and longitudinal data analysis, 397–406
as a tool, 311–315
creating expectations, 312
estimating linear multiple regression, 313–315
as general data analysis technique, 311–312
statistical indicators, 312–313
Structural equation models, 295–337
appendix: notes and computer programs, 321–334
example of structural equation model fitting, 322–334
fitting simulated MIMIC data with R Code, 332–334
fitting simulated MIMIC data with SEM-CALIS, 325–329
fitting simulated MIMIC data with SEM-Mplus, 329–332
fitting simulated MIMIC data with standard modeling software, 323–325
reconsidering simple linear regression, 321–322
common factors and latent variables, 302–311
benefits and limitations of including common factors, 315
common factor models, 303–304
common factor models within latent path regression, 305
common factors with cross-sectional observations, 315–316
common factors with longitudinal observations, 316–317
common factors with multiple longitudinal observations, 317–319
invariant common factors, 305–307
multiple repeated measures, 307–311
concept of, 298–302
issues with means and covariances, 302
missing predictors, 300
path analysis diagrams, 299–300
true feedback loops, 302
unreliability of both predictors and outcomes, 301–302
unreliable outcomes, 301
unreliable predictors, 300–301
confirmatory factor analysis, 296–297
current state of research, 298
currently available SEM programs, 311
definition of, 295–296
the future of, 319–321
linear structural equation model (LISREL), 297
with product indicators, 375–377
as a tool, 311–315
creating expectations, 312
estimating linear multiple regression, 313–315
and general data analysis, 311–312
statistical indicators, 312–313
T
T-test, introduction of, 8
Taxometrics, 612–634
conclusion, 630
other important considerations, 627–630
number of indicators, 628
other approaches, 629–630
replication, 628–629
sample size, 628
skew, 628
performing a taxometric analysis, 617–627
assessing fit, 623–626
interpreting results, 626–627
selecting suitable indicators, 617–620
taxon group and complement class, 618, 626
winnowing indicators, 620–623
problems with imprecise measures, 613–614
taxometric methods, 614–617
latent mode factor analysis, 617
maximum covariance, 615–616
maximum eigenvalue (MAXEIG), 616–617, 618
mean above minus below a cut (MAMBAC), 614–615, 616, 622, 625
Testing for significant association, 52–58
Thompson, Bruce, 7–25
Time series analysis, 458–485
commonly used terms, notations, and equations, 483–484
concluding remarks and future directions, 482–484
fundamental concepts, 459–461
autocorrelation, 459
estimating mean, variance, and autocorrelation, 460
moving average and autoregressive representations, 460–461
partial autocorrelation, 459–460
strictly and weakly stationary processes, 459
white noise and Gaussian processes, 460
intervention and outlier analysis, 471–473
regression time series models, 475–478
regression with autocorrelated errors, 476
regression with heteroscedasticity, 477–478
time series forecasting, 469–471
forecasting example, 471
updating forecasts, 470–471
time series model building, 464–469
diagnostic checking, 466
illustrative example of, 467–469
model identification, 464–465
model selection, 466–467
parameter estimation, 466
transfer function models, 473–475
univariate time series models, 461–464
nonstationary time series models, 462–463
seasonal time series models, 462–463, 463–464
stationary time series models, 461–462
vector time series models, 478–482
cointegrated processes, 480–481
correlation and partial correlation matrix functions, 478–479
identification of, 481–482
nonstationary vector time series models, 480–481
stationary vector time series models, 479–480, 482
Tomazic, Terry J., 106–141
Traditional statistical approaches, overview of, 7–25
Transfer function models, 473–475
Truncated zeros, 44
Twin model, classical, 202–215
assumptions of the model, 205–208
degrees of genetic similarity, 206
equal environments, 206–207
generalizability, 205
genotype-environment correlation, 207–208
genotype-environment interaction, 207
random mating, 205–206
extensions to the model
data from additional family members, 210–211
liability threshold model, 209–210
sex limitation, 208–209
multivariate modeling
causal model, 214
common pathway model, 211–213
cross-sectional cohort and longitudinal designs, 213–214
independent pathway model, 213
latent class analysis, 214–215
structural equation modeling, 203–205
Two-group configural frequency analysis, 91–93
Two-part models, 44–46
U
Univariate statistics, 9–12, 22–23
V
Variables
See individual variable types
Variance, estimation of, 460
Variance partitions, 9–12, 20–23
Vector time series models, 478–482
Verweij, Karin J. H., 198–218
Von Eye, Alexander, 74–105
Von Weber, Stefan, 74–105
W
Walls, Theodore A., 432–440
Wang, Lihshing Leigh, 718–758
Watts, Amber S., 718–758
Wei, William W. S., 458–485
Weighted Euclidean Model, 238, 246–247
Wen, Zhonglin, 361–386
West, Stephen G., 26–51
What if There Were No Significance Tests (Mulaik, Raju, Harshman), 8
White noise process, 460
Wilcoxon Mann Whitney test, 109–110
Wilcoxon signed rank test, 110–113
Willoughby, Lisa M., 106–141
Woods, Carol M., 52–73
Wu, Wei, 387–410
Z
Zero-inflated regression models, 45–46
Zimmerman, Chad, 4