Index

Tables and figures are indicated by an italic t and f following the page/paragraph number.

A. C. Nielsen Company, 103
Achen, C. H., 32
Adams, A. N., 5, 43, 493, 501n18
Adcock, R., 343, 344
additive models, 340–41
Adhikari, P., 6, 174n7
Afrobarometer, 4t, 7, 221, 222t, 225f, 245, 392t
age-weight curves graphs, 428–29f, 429–30, 434–35, 435f, 442–45, 445t, 446f
aggregation. see data aggregation
agree-disagree scales, 116–19
Ahern, K., 35
Aitchison, J., 289, 290
Algeria, 222t, 223, 224, 225f, 240n3
Al Zamal, F., 566
American Association for Public Opinion Research (AAPOR), 1, 79, 278–79, 543, 589–90, 603n6
American Community Survey (ACS), 36, 59
American Muslims, 183, 192–95, 200, 201nn1–2
American National Election Study (ANES)
background, 389–90, 404n1
causal inference, 300–301
contextual cues in, 65, 97–98
costs of, 81, 84
described, 4t, 28, 30, 90
design effect in, 88
face-to-face surveys, 58, 80, 81, 89–91, 300–301
intention stability, 40
mass polarization, 344
mode studies, 89
nonresponse rates, 80–81
panel attrition, 35–38
reliability of, 32, 343
response rates as standard, 574n1
sampling design, 58, 80, 94n8, 491, 535, 549n4
satisficing in, 68–69, 69t
social desirability effects, 67
weighting in, 301
The American Panel Survey, 4t, 28
AmericasBarometer, 215
AmeriSpeak, 43n1
analysis, presentation, 7–8
anchoring vignettes, 235, 588, 597, 603n13
Andrews, F., 121–22
Android Data Gathering System (ADGYS), 212–17
ANES Time Series Study, 301
Ansolabehere, S., 5, 89, 343
Arab Barometer
applications of, 7, 221, 222t
data quality assessment, 224, 225f
described, 4t, 392t, 393
topics included in, 226, 227–28t, 240n2
website, 245
Arab world surveys. see MENA surveys
Armstrong, J. S., 595
AsiaBarometer, 392t, 393
Asian Americans, 183–84, 189, 195–97
Asian Barometer, 4t, 392t, 393
aspect ratio, 472–76, 474–76f
Associated Press, 150
Atkeson, L. R., 5, 43, 493
Australia, 402–3, 404n12
Bafumi, J., 329
Bahrain, 222t
Barabas, J., 499
Barberá, P., 354, 563, 567, 568
bar charts, 454–56, 455f, 457f
Bartels, L. M., 35, 345
Battaglia, M. P., 38
Bayesian Item Response Theory models, 592
Bayesian vs. non-Bayesian modeling, 618–19
Bayes’s law, 293
Benstead, L. J., 7, 236, 241n4
Berkman, M., 330–31
Berry, J. A., 6
best practices
bivariate graphs, 464–65, 477, 478n3
expert surveys, 589–91, 603nn6–7
graphs, 437–38, 448–52, 449f, 451f, 477
for qualitative research, 513, 516, 531n1
question wording, 115–16, 116t
univariate graphs, 448–52, 449f, 451f, 477
bias
acquiescence, 40
bias-variance trade-off, 343–44
CMV biases, 586–87, 603n5
correction in CSES, 398, 404n11
in expert surveys, 586–88, 603n5
intergroup conflict and, in MENA surveys, 236–37
item, in group consciousness, 380
margin of error, 625–26
nonresponse (see nonresponse bias)
positivity, in expert surveys, 587
respondent vulnerability and, in MENA surveys, 235–36
response biases reduction, 595–99, 602
seam, 39–41
selection, in social media surveys, 559–60
social desirability (see social desirability bias)
in subnational public opinion, 327
time-in-sample, 37–39, 44nn7–10, 492, 501n18
binomial probability mass function, 276
bivariate graphs
aspect ratio, 472–76, 474–76f
best practices, 464–65, 477, 478n3
categories of, 464
jittering, 466–68, 469f
labeling points, 468–71, 470f, 478n4
line plots, 471–72, 472f, 478n5
maps, 464, 474–75, 474f
multiple subset/single display, 471–72, 472f, 478n5
overview, 463
plotting symbols, 449–50, 465–66, 467f, 471–72, 472f, 478n5
scatterplots (see scatterplots)
subsequent presidents effects, 435–36, 435f
tables vs., selection of, 445–46
variance (R2) plotting, 433, 434f
visual perception importance, 446–48
Blavoukos, S., 589
Blaydes, L., 236
Blumberg, S. J., 56
Bode, L., 567
Bond, R., 354
Bonica, A., 354
Bonneau, R., 568
Bormann, N., 396, 404n7
Boroditsky, L., 255
Brace, P., 7, 327
Bradburn, N. M., 115, 116t
Brady, H., 198
Brier scores, 623
British Election Study (BES), 4t, 390, 401, 409
Bruce, A., 164
Bryant, L. A., 6
bubble plots, 450, 478n1
Burden, B. C., 58
Burstein, P., 317, 318
Butler, D., 390
Buttice, M. K., 598
Campbell, D. T., 15, 121–22
Canadian Election Study, 390
Canadian Survey of Labour and Income Dynamics, 39
Candidate Emergence Study, 4t, 584, 587, 591, 603n4
Caughey, D., 347, 348
Center for Strategic Studies, 246
central limit theorem, 80
Chandler, J., 490
Chapel Hill Expert Surveys (CHES), 584, 590, 597, 604n12
Chen, M. K., 260
Chen, Q., 36
Ching, P. L. Y. H., 38
Chouhoud, Y., 6
Citizen Participation Study, 198
Ciudadanía surveys, 212, 218n4
Cleveland, W. S., 447, 450, 471, 476
Clinton, J. D., 342, 588, 601
CMV biases, 586–87, 603n5
cognitive aspects of survey methodology (CASM), 16–17
cognitive interviewing
in expert surveys, 596
in MENA surveys, 234–35, 235t
in qualitative research, 508–9, 512–17, 531n1
total survey error and, 16–17
cognitive psychological model, survey process, 513
Cohen, J. E., 321
Collier, D., 343, 344
Collins, C., 39
Columbia studies, 389
Comparative National Elections Project (CNEP), 392t, 394
Comparative Study of Electoral Systems (CSES)
age-weight curves, 442–45, 445t, 446f
bias correction, 398, 404n11
case selection, 396–98, 397t, 404nn7–10
democratic regimes defined, 396, 404n7
described, 4t, 388–89
development of, 394–96, 404nn4–6
face-to-face surveys, 401–2, 401t
fieldwork, 394, 401–3, 401t, 404nn12–13
funding of, 403, 404n14
incentivization effects, 404n13
modules, themes, 395t
multilevel data structure, 398–99
nonprobability samples, 404
nonresponse bias, 402, 404n11
online option, 402–3, 404n12
party system dimensionality, 400
political knowledge distribution, 399–400
question wording, 399–400
response rates, 401–3, 401t, 404nn12–13
sampling error, 402, 404n11
statistical inference in, 290–92, 291t, 297nn10–12
telephone surveys, 401t, 402
websites, list, 409
computer-assisted personal interviews (CAPIs), 3, 54, 223. see also developing countries/CAPI systems
computer-assisted telephone interviewing (CATI), 484
conditioned reporting, 38–39
confidentiality
context surveys, 542–43, 550nn16–19
expert surveys, 591, 603, 603n7
informed consent, 522, 528–29
qualitative research, 529–30
conflict-induced displacement surveys, 164–72, 169–70t, 174nn7–10, 175n11
Conrad, F. G., 40
construct validity, 343
content validity, 343
context in social research
cognitive foundations of, 547–48
community, networks, 538, 542, 549n3, 550n26
concepts, definitions, 534–35, 548n1, 549n3
confidentiality, privacy, 542–43, 550nn16–19
contiguity, 542, 551n33
data collection, management, 540–43, 541f, 549n13, 550n22, 550nn16–20, 551n27
descriptors vs. mechanisms, 536–38, 549n10
ethical issues, 542–43, 550nn21–23
expert surveys, 587–88
functional assignments, 539, 549n12
hypothesis testing, 106
language/opinion relationships, 256–57
multilevel models, 551n31
multiple contexts, 548
neighborhood effect, 99
opinion formation, 98–100
random intercepts modeling, 545–46
relationships, 538
respondent characteristics, 535, 549n4
risk-utility trade-off, 542–43, 550n18
samples, balanced spatial allocation of, 106–10, 108f
samples, proportional allocation of, 103–6, 105f
sampling error randomness, 549n11
slope coefficients modeling, 546
snowball sampling, 542, 550n26
socialization, 98–100
social media surveys, 570
spatial distribution, 100–103, 102f, 104–5f, 110nn2–3
statistical inference, 545–47, 547f, 551nn27–33
stratified sampling, 544–45, 550n25
subpopulations, superpopulations, 538–40, 549n12
surroundings, properties of, 536–38, 549n10
surveys and, 535–36, 549n4
unit dimensionality, 544, 550n24
variability, 543–45, 550nn24–26
convergent validity, 343
Converse, P., 390
Cook Political Report, 619
Cooperative Congressional Election Study (CCES)
applications of, 28
described, 4t
expert raters, 594
question wording, 536
TSE approach, 86–89, 87t, 93, 94n6
Coppedge, M., 598, 601
Cornell Anonymization Tool, 550n20
costs
of ANES, 81, 84
automated voice technology in, 610
developing countries/CAPI systems, 211–14
exit polls, 150
face-to-face surveys, 84, 91
hard to reach populations, 160–61
Internet surveys, 78–85, 83–84t
low-incidence populations, 189–90, 199–200
mail surveys, 13, 22–23
mixed mode surveys, 63, 64t, 70, 515
Couper, M. P., 42
cross-national polling. see Comparative Study of Electoral Systems
Current Population Survey (CPS), 36, 39, 40
Daily Kos, 611, 613, 622, 629
Dalrymple, K. E., 567
Danish National Election Study, 409
Danziger, S., 257
data aggregation
Bayesian vs. non-Bayesian modeling, 618–19
expert surveys, 587, 588, 599–600, 603n5
fundamentals vs. poll-only, 619–21
social media surveys, 563–64, 567–68
statistical inference, 614–17, 615–16f, 618f
subnational public opinion, 325–28, 345–46
data collection
context surveys, 540–43, 541f, 549n13, 550n22, 550nn16–20, 551n27
exit polls, 145–46
Internet surveys, 90
overview, 5, 6
total survey error, 18–19 (see also total survey error)
data visualization. see graphs
Debels, A., 36
Deng, Y., 36
density sampling, 186–87
designated market areas (DMAs), 103–4, 104–5f, 110n3
designs
of ANES, 88
data collection (see data collection)
of exit polls, 143–47, 149
expert surveys, 589–91, 601–2, 603nn6–7
hard to reach populations, 174n9
Internet surveys, 58
language/opinion relationships, 252–53, 258–59, 262
longitudinal (panel) surveys, 29–30, 41–43, 44n11
mixed design, 586f, 589
mixed mode surveys, 54–59, 56f (see also mixed mode surveys)
multiple rater, 586f, 588, 600–601
Nepal Forced Migration Survey, 166–67, 174nn8–9
nested-experts, 586f, 587–88
overview, 2–3
question wording (see question wording)
sampling (see sampling designs)
single-rater, 585–87, 586f
subnational public opinion, 319
target-units mapping, 585–89, 586f
developing countries/CAPI systems
Android Data Gathering System (ADGYS), 212–17
benefits of, 211–14
coding error, 208–10
costs, 211–14
data sets, 209, 218nn3–4
error in, 207–8
fraud, 210
GPS coordinates, 215–17, 237, 239t
interview time, 216
overview, 7
PAPI surveys, 208
paradata, 212, 215–16, 218n5
partial question time, 216
photographs, 216–17, 239t
questionnaire application error, 208
sample error, 210–11
survey administration, 215
video/voice clips, 216
DIFdetect, 377–78
Dijkstra, W., 116t
Dillman, D. A., 59, 116t
Dorussen, H., 589
dot plots, 458–59, 458f, 460f, 478n2
Druckman, J. N., 486
DuGoff, E. H., 302
Dutch Parliamentary Election Studies, 4t, 409
Early Childhood Longitudinal Study, National Center for Education Statistics, 35
Edelman, M., 142, 149
Edison Research, 148, 151, 153n3
Egypt, 222t, 225f, 229, 240n3
election forecasting. see poll aggregation, forecasting
election polling generally
challenges in, 2, 13–14
cross-national, development of, 391–94, 392t
data sets, readily accessible, 3, 4t
disclosures, 278–79
forecasting, 612–13
misses, causes, effects of, 1–2
Electoral Integrity Project, 583, 590, 603n1
encoding specificity principle, 256
Enns, P. K., 348, 454
Erikson, R. S., 326, 349
estimation, inference. see hypothesis testing; statistical inference
ethical issues
context surveys, 542–43, 550nn21–23
MENA surveys, 238–40, 239t
qualitative research, 528–30
social media surveys, 558–59
Eurobarometer, 4t, 391–92, 392t
European Community Household Panel Survey, 39
European Election Studies, 4t, 392t
European Social Survey, 4t, 119–20, 119–20f, 126, 127f, 392t, 393, 536
exit polls
absentee voters, 147–49
coding systems, 145
costs, 150
data collection, 145–46
design of, 143–47, 149
error in, 146, 152
estimates, 147–48
in-person early vote, 149–50
interviewers, 144–45, 152
methodology, 143–47
models, 147–48
multivariate estimates, 151
online panels, 150
precinct-level data, 150–51
predictive value of, 142, 147–48, 630
public voter file-based, 150–51
questionnaires, 145, 151
response reliability, 142
roles of, 142–43
sampling, 143–45
state polls, 144
technology in, 151–52
by telephone, 147–50
vote count comparison, 146–47
experiments. see survey experiments
expert surveys
advantages of, 584, 601
anchoring vignettes, 588, 597, 603n13
applications of, 583–84
bias in, 586–88, 603n5
certainty measures, 598, 602–3
CMV biases, 586–87, 603n5
coding designs, 589
cognitive interviewing, 596
confidentiality, 591, 603, 603n7
context in, 587–88
data aggregation, 587, 588, 599–600, 603n5
DW-NOMINATE scores, 594, 598
generalizability coefficient, 593, 603n10
hypothesis testing, 587
inter-rater agreement, 591–92, 603n9
item response theory models, 592, 600–601
measurement error reduction, 599–601
mixed design, 586f, 589
multiple rater design, 586f, 588, 600–601
nested-experts design, 586f, 587–88
null variance, 592, 603n9
pooled measures, 593, 603n10
positivity bias in, 587
reliability, validity, 587–88, 590, 593–95, 598–99, 603n11
response biases reduction, 595–99, 602
response rates, 590–91
sampling designs, 589–90, 593–95, 602, 603n11
single-rater designs, 585–87, 586f
standards, best practices, 589–91, 603nn6–7
target-unit point estimates, 592
target-units mapping design, 585–89, 586f
terminology, 585
timing, speed control, 584
uncertainty measures, 591–93, 603nn8–10
variance, 590, 592, 602, 603n10
exponential random graph models, 546
Facebook, 556, 562, 575n4
face-to-face surveys
CAPI systems as quality control, 7
costs, 84, 91
cross-national polling, 401–2
CSES, 401–2, 401t
in developing countries, 211
don’t know responses, 54
hard to reach populations, 155–56, 158
history of, 55, 79, 610
in-depth individual interviews, 512–13, 521–24, 531n4
language/opinion relationships, 259, 262
MENA, 241n14
mixed mode designs, 53, 59, 70
open-ended responses, 65
PAPI, errors in, 208
satisficing, 68–69
social desirability bias, 67
survey experiments, 496
survey mode transitions, 79
TSE approach to, 13, 79–81
factor analysis, 341
Fausey, C. M., 256
Findley, B., 8
Fink, A., 116t
Fiske, D., 121–22
FiveThirtyEight.com, 1, 609–13, 620, 621, 629
Fleiss equation, 284
focus groups, 510–12, 521–24, 531n4
Folz, D. H., 116t
Fowler, F. J., 116t
Frankel, L. L., 35
French National Election Study, 4t, 409
Fricker, S. S., 40
Gaines, B. J., 484–85, 487, 499
Gallup, G., 389
Gallup American Muslim study, 194
Gallup Poll Daily, 194
Gallup Presidential Approval series, 434–36, 435f
Gallup World Poll, 4t, 392t, 393
Garcia, J. A., 252
Gayo-Avello, D., 560, 561
General Social Survey (GSS), 28, 97, 535, 574n1
generational/cohort trends, graphing, 412, 413f, 421–23, 423f, 427–29, 428–29f, 436–37, 436f, 442, 443–44t, 444f
Gengler, J. J., 235
Genre, V., 599
Genso Initiatives Web surveys, 212, 218n4
German Federal Election Studies, 4t
GfK Knowledge Networks, 28, 30, 38, 43n1, 58, 76, 77, 150, 371
Ghitza, Y., 353–54, 411, 412, 417, 421, 424, 433–36
Gibson, J., 326
Gideon, L., 174n5
Gill, J., 7
Gillum, R. M., 236
Gimbel, K., 8–9
Gimpel, J. G., 5
Global Barometer program, 393
Golder, M., 396, 404n7
González-Bailón, S., 563
GPS coordinates, 3, 215–17, 237, 239t
graphical perception theory, 447, 450, 476
graphs
advantages of, 440–46, 441f, 441t, 443–45t, 444f, 446f
age-weight curves, 428–29f, 429–30, 434–35, 435f, 442–45, 445t, 446f
bar charts, 454–56, 455f, 457f
best practices, 437–38, 448–52, 449f, 451f, 477
bivariate (see bivariate graphs)
bubble plots, 450, 478n1
complicated displays, 449–50, 449–51f
dot plots, 458–59, 458f, 460f, 478n2
election turnout, voting patterns, 424–28, 426–27f
full scale rectangle, showing, 450–52, 453f
generational/cohort trends, 412, 413f, 421–23, 423f, 427–29, 428–29f, 436–37, 436f, 442, 443–44t, 444f
histograms, 442, 444f, 446, 452, 455, 460–63, 462f
income effects, 419–22f, 420–21
information processing, 442, 443–44t, 444f, 446–48, 459
jittering, 466–68, 469f
labeling points, 468–71, 470f, 478n4
line plots, 471–72, 472f, 478n5
model building, 417–23, 419–23f
model checking, 430–33, 431–34f
multipanel, 464–65, 471–72, 472f, 478n3, 478n5
outliers, 478n4
overview, 8, 410–11, 439–40
period effects, 429, 429f
pie charts, 452–54
plotting symbols, 449–50, 465–66, 467f, 471–72, 472f, 478n5
poll design, construction, 412, 414–15f
purpose of, 448
raw data, 411–16, 413–18f
results, interpretation of, 423–30, 426–29f
results, presentation of, 435–36f, 433–37
sampling weights, 412–16, 416–18f
univariate (see univariate graphs)
Green, K. C., 595
group consciousness
additive measures, 380
attachment, 368, 369t
classical test theory, 369–70
data set, 371–72, 372t
described, 364–65
differential item functioning (DIF), 364, 370, 377–78, 378t
evaluation, 366, 367t, 374–75, 375t
identity importance, 367–68, 368t, 374–75, 375t
independent variable approaches, 380
item bias, 380
item response theory, 370–71, 380–81
Kaiser criterion, 372
measurement of, 363–64, 369–71
measurement precision, 375, 376f
methodology, 372–78, 373–75t, 376f, 377–78t
model fit assessment, 375–77, 377t
Mokken scale analysis, 372–73, 382n2
monotonicity, 373, 374t
recoded variables, 373, 374t
self-categorization, 365–66, 365t, 375, 375t
summary statistics, 378–79, 379t
2PL model, 373–74
unidimensionality, 373, 373t, 382n3
validity, 379–80
Groves, R., 13, 15, 188
H. M. Wood, 22–23
Haberman, S. J., 377
Hanretty, C., 351
hard to reach populations. see also low-incidence populations
categories of, 156–57
contacting, 155–56
contextual factors, 174n10
costs, 160–61
design, 174n9
disproportionate sampling, 160, 174n3
forced migrants, 162, 164–72, 169–70t, 174n4, 174nn7–10, 175n11
full roster approach, 158
identification of, 158–62, 170–71, 174n4
incentives, 163
insurgency conflict study, 174n8
internally displaced people, 6, 162, 174n4
interviewers, training, 172–73
interviewing, 163–64, 174n6
locating, 161–62, 174n4
nonresponse, 162–63, 174n5
persuasion of, 162–63, 174n5
research, approach to, 172–73
respondent-driven sampling, 159
respondent identification/recruitment, 519
response rates, 163, 170–71, 175nn13–14
scoring, 164, 174n6
screening methods, 158, 167–68
snowball (chain referral) sampling, 159
He, R., 565
Hecht, B., 564
Heckathorn, D. D., 159
Hensler, C., 545
Hersh, E. D., 356n8
hierarchical linear regressions, 545–47, 547f
high-effort cases, 191–92
Hillygus, D. S., 5, 32, 35, 43, 492–93
HIPAA, 550n17
histograms, 442, 444f, 446, 452, 455, 460–63, 462f
Homola, J., 7
Hong, Y., 259
Horn, J. L., 253
Huckfeldt, R., 106
Huffington Post, 1, 611, 613, 615, 616f, 618f, 619, 621, 622, 626
hypothesis testing
ANES, 300–301
context surveys, 106
expert surveys, 587
Internet surveys, 80, 82–85, 83–84t, 88, 91
low-incidence populations, 190–92, 199
incentives
CSES, 404n13
hard to reach populations, 163
mail surveys, 163
in MENA surveys, 231t
in qualitative research, 520–21
response rates and, 19–20
in survey experiments, 488, 491, 497, 500n10
in-depth individual interviews, 512–13, 521–24, 531n4
India, 404n9
Informal Sector Service Center (INSEC), 165–66
informed consent, 522, 528–29
insurgency conflict study, 174n8
internally displaced people (IDP), surveying. see hard to reach populations
International Social Survey Programme, 392t, 393
Internet surveys
advantages of, 76–77, 90–91
costs, 78–85, 83–84t
coverage issues in, 20, 57–58
criticisms of, 77–78
data collection, 90
designs, 58
hard to reach populations, 155–56
hypothesis testing, 80, 82–85, 83–84t, 88, 91
language/opinion relationships, 259, 262
MENA, 241n14
mixed mode, 84t, 85, 93
modality, qualitative differences in, 89–91
mode selection, 91–94
mode studies, 88–89
nonresponse rates, 80–81
online panels, 491–93
open-ended responses, 65
panels, 77
presentation effects, 66–67
quality, 78–85, 83–84t, 92–94
quantifying quality of, 85–89, 87t
representativeness effects, 60–61, 61t, 71n5
response rates, 58–59, 62–63, 62t, 77, 90–91
sampling error, 20–22, 81
sampling methods, 77, 79–81
satisficing, 68–69
statistical inference, 279, 297n5
survey mode effects, 22–23, 70
survey mode transitions, 79
total survey error, 17, 78, 86–89, 87t, 94n8
TSE approach to, 13
weighting (modeling), 77, 81
interviewer-administered questionnaires (IAQs), 54, 65–68
inverse probability problem, 293
Iraq, 222t, 224, 225f, 240n3
Israel, 630
item response theory. see also latent constructs
in expert surveys, 592, 600–601
group consciousness measurement (see group consciousness)
hierarchical group model, 346–47, 356n5
latent constructs, modeling, 8, 341–42, 356n3
Jackman, S., 32, 300–301, 342, 343, 600
Jackson, N., 43, 492–93
Jacobs, L. R., 320
Jacoby, W. G., 8, 448
Jerit, J., 499
Jessee, S. A., 350–51
Jeydel, A. S., 324
jittering, 466–68, 469f
Johnston, R., 101–2
Jordan, 222t, 223, 224, 225f, 240n3, 246
Jost, J. T., 568
Jungherr, A., 560
Junn, J., 6
Jürgens, P., 560
Kacker, M., 600
Kalman filter model, 617, 618f, 626
Kalt, J. P., 322
Karp, J. A., 8, 43, 493
Kastellec, J. P., 439
Katosh, J. P., 38
Kaushanskaya, M., 256
keeping in touch exercises (KITEs), 42
King, G., 603n13
Kitschelt, H., 590
Klar, S., 497
Klašnja, M., 9
Knight Foundation, 150
Koch, J., 348, 454
Kosslyn, S. M., 448
Krosnick, J., 118
Krupnikov, Y., 8
Kselman, D. M., 590
Kuklinski, J. H., 484–85, 487, 499
Kuwait, 222t, 225f, 240n3
labeling points, 468–71, 470f, 478n4
Laennec, R. T. H., 128
Landry, P., 241n4
language barriers, 188–90, 194
language/opinion relationships
bilingualism, 189, 196–97, 250, 256–57, 260–64
cognitive effects, 255–57, 266
cognitive sophistication, 263
culture influences, 256–59, 262
diglossia, 232
effect sizes, 265
framing effects, 261–62
future-time reference, 260–61
gendered vs. non-gendered tongues, 255, 260, 262
generational status, 263
grammatical nuances in, 255, 260
interviewer effects, 262, 267n5
linguistic determinism, 254
measurement equivalence, 253, 266nn2–4
memory effects, 251, 256
MENA surveys, 231t, 231–32, 241n9
monolingualism, 264
multilingual polls, 251
online polls, 262
overview, 7, 249–51
regression models, 265–66
research design, 252–53, 258–59, 262
research studies, 253–54
survey response effects, 259–64
thinking for speaking, 255, 259–60, 263
thought, automatic influence on, 257
validation of, 258–59, 264
LAPOP surveys, 212, 218n4
latent constructs. see also item response theory
additive models, 340–41
bias-variance trade-off, 343–44
computational challenges, 353, 356nn6–7
consumer confidence, 340
data disaggregation, 325–28, 345–46
data sets, 355
dimensionality assessment, 352–53
dyadic representation, 349, 351–52
emIRT, 346, 356n3
factor analysis, 341
group level applications, 348–49
group level measurements, 345–47, 356
income/opinion relationships, 353–54, 356n8
individual level applications, 344–45
individual level measurements, 340–44, 355
IRT modeling, 8, 341–42, 356n3
Markov chain Monte Carlo algorithms, 353, 356n6
mixed measurement responses, 342
multilevel regression/post-stratification, 328–32, 346, 566–67
non-survey-based data, 354
no-U-turn sampler, 353, 356n6
overview, 338–39
polarization, 344
policy liberalism/mood, 339, 348, 356n2
political knowledge, 339–41, 344–45
racial prejudice, resentment, 340, 349
spatial modeling, 356n5
spatial voting, 350–51
subnational opinion measurement, 353–54, 356n8 (see also subnational public opinion)
uncertainty, 356n7
validity/reliability modeling, 342–44
variation measurement, 348
Latin American Public Opinion Project, 4t, 536
Latino Barometer, 392t, 393, 536, 548
Latino National Political Survey, 183
Latino National Survey, 182, 252
Lauderdale, B. E., 351
Lavine, H., 498
Lax, J. R., 330, 345
Lazarsfeld, P., 389
Lebanon, 222t, 225f, 232
Le Brocque, R., 35
Lee, T., 252, 254
Lenski, J., 151
Lenz, G., 318
Lenz, H., 589
LeoGrande, W., 324
Leoni, E. L., 439
Lepkowski, J. M., 42
leverage-saliency theory, 188
Levine, A. S., 486–87, 497
Levinson, S., 255
Lewis, D. E., 588, 601
Lewis, J. B., 350
LGBT surveys. see group consciousness
Libya, 222t, 223, 224, 225f, 229, 232
Likert, R., 118
Lilien, G. L., 600
Lin, Y.-R., 565
Lindeberg-Feller central limit theorem, 290
line plots, 471–72, 472f, 478n5
Link, M. W., 89
list sampling, 185–86
Liu, W., 566
living with the silence, 524
Local Governance Performance Index (LGPI), 241n4
LOESS lines, 615–16, 626
log-ratio transformation, 289–92, 291t, 297nn9–14
longitudinal (panel) surveys
acquiescence bias, 40
advantages of, 31–32, 44n3
background, 29–31, 43nn1–2
challenges in, 33, 41, 44n4
continuity, innovation in, 33–34
cross-sectional design, 29–30
designs, 29–30, 41–43, 44n11
measurement error, 37–42
modeling approaches, 33
online survey panels, 30
panel attrition in, 34–37, 43n2, 44nn5–6
panel conditioning, 37–39, 42, 44nn7–10
question wording in, 33–34, 42
retrospective design, 30
sampling designs, 30–31, 42–43
seam bias, 39–41
weighting, 33, 36, 41, 44n4, 44n6
low-incidence populations. see also hard to reach populations
American Jews, 193
American Muslims, 183, 192–95, 200, 201nn1–2
Asian Americans, 183–84, 189, 195–97
background, 182–83
cooperation, gaining, 188–90
estimation, inference, 190–92, 199
language barriers, 188–90, 194
measurement error, 189, 199
Mormons, 193
nonresponse bias, 188–90, 199
political activists, 183, 197–98
question wording, 194, 200
religious affiliation, 183, 193
sampling, 183–87, 198–99
survey methods, 188–90
Luke, J. V., 56
Lupia, A., 484
Lust, E., 241n4
Lynn, P., 39
MacKuen, M., 90
Maestas, C. D., 598
mail surveys
advantages, limitations of, 18
complex designs, 314n4
cost-error tradeoffs, 13, 22–23
cross-national, 401–3, 401t
donation solicitations, 486–87
don’t know responses, 54
exit polls vs., 6, 148
hard-to-count measure, 164
hard to reach populations, 155–56, 160, 163, 164, 166, 175n14
history of, 55, 79
incentives, 163
interviewer gender effects, 241n15
low-incidence populations, 199
mixed mode, 53–55, 58–63, 61–62t, 70
nonresponse, 54
open-ended responses, 65
panel designs, 30, 42
presentation effects, 66
representativeness effects, 60–61, 61t, 71n5
response rates, 62–65, 62t, 64t, 402
sampling designs, 21
social desirability bias, 22, 68
survey mode transitions, 79
TSE approach to, 13, 79–81
validation of, 85–91
Makela, S., 8, 411, 412
Malawi, 223
Malik, M. M., 562
Malouche, D., 237, 241n4
Mann, C. B., 38
maps, 464, 474–75, 474f
margin of error, 625–26
Marian, V., 256
Markov chain Monte Carlo (MCMC) method, 617
Markov chains, 617
Markus, G., 394
Marquis, K. H., 39
Martinez i Coma, F., 587
matching algorithms and weights
graphs, 412–16, 416–18f
longitudinal (panel) surveys, 33, 36, 41, 44n4, 44n6
nearest-neighbor propensity score matching, 302–4
propensity scores, 191, 302, 304–5
sampling in, 20–22
subclassifications matching, 302, 304
McArdle, J. J., 253
McIver, J. P., 326
MCMCpack, 342
Mechanical Turk, 79, 90, 91, 490, 492, 500n12, 500n17
MENA surveys
anchoring vignettes, 235
behavior coding, 233–34
cognitive interviewing, 234–35, 235t
data quality assessment, 224, 225f, 240n3
data sets, 220–23, 221f, 222t, 223f, 240n3, 241n4, 245–46
democracy, support for, 224, 225f, 226, 240n3
environmental challenges, 231t, 231–32
ethical issues, 239–40, 239t
gender effects, 229, 235–36, 241n15
household selection, 237–38
incentives in, 231t
intergroup conflict, bias and, 235–36
interviewer effects, 235–36, 246–48
language barriers, 231t, 232, 241n9
latent constructs variation measurement, 348
measurement error, 233–36, 235t, 241n14
mode impacts, 237, 241n14
nonresponse, 233, 238, 241n15
parliamentary election 2014, 241n7
public service provision, 241n4
questionnaires, 231t, 232
question wording, 226–29, 227–28t, 240n2, 241n10
Q x Qs, 232–33, 241n10
refusal, 238
religious dress effects, 236–37
representation error, 237–38
research challenges, 229, 231t, 241n7
respondent vulnerability, bias and, 235–36
response rates, 231t, 233, 241n11
social networks, 231t, 233, 241n11
survey genre, 229, 231t
total survey error, 233–34, 234t
Messing, S., 354
Michigan Survey Research Center. see American National Election Study (ANES)
Middle East Governance and Islam Dataset, 245
Milgram experiment, 499n4
Miller, W. E., 37, 317, 326, 390
Miller-Stokes problem, 326
Mitchell, J. S., 235
Mitofsky, W., 142, 149
mixed mode surveys
combining modes, 63–65
contextual cues in, 65
costs, 63, 64t, 70, 515
coverage issues in, 55–58, 56–57f
described, 5, 53–55
designs, 54–59, 56f
expert surveys, 586f, 589
mode effects, 69–70
nonresponse error, 58–59
open-ended responses, 65
presentation effects, 66–67
representativeness effects, 60–61, 61t, 71n5
response rates, 58–59, 62–63, 62t, 64t, 70
sampling designs, 58–59
satisficing, 68–69, 69t
social desirability effects, 65, 67–69, 69t
straight lining, 69
survey mode effects, 22–23
validation testing, 59–63, 61–62t, 71nn3–5
modus tollens, 292–93
Mokdad, A. H., 89
Monte Carlo simulations, 622
Moore, J. C., 39
Morocco, 222t, 223, 224, 225f, 236, 240n3
Morstatter, F., 563
Morton, R. B., 487, 492
MTurk, 79, 90, 91, 490, 492, 500n12, 500n17
Multi-Investigator Study, 484
multilevel regression/post-stratification, 328–32, 346, 566–67
multipanel graphs, 464–65, 471–72, 472f, 478n3, 478n5
multiple rater design surveys, 586f, 588, 600–601
Muslim American Public Opinion Survey (MAPOS), 195, 200
Mutz, D., 484, 489, 490, 499n4, 500n11
Nagler, J., 568
Nall, C., 356n8
National Asian American Survey, 182, 184, 196
National Black Election Study, 182
National Election Pool, 148, 153n3
National Election Studies, 325, 409
National Health Interview Survey (NHIS), 55–56
National Household Education Survey, 574n1
National Opinion Research Center, 4t
National Politics Study, 182
National Survey of Black Americans, 182
nearest-neighbor propensity score matching, 302–4
Nepal Forced Migration Survey
background, 164–65
challenges in, 172–73
design, implementation, 166–67, 174nn8–9
female respondents, 171
Maoist insurgency, 165–66
response rates, 170–71, 175nn13–14
sampling frame, method, 167–72, 169–70t, 174n10, 175nn11–14
nested-experts design surveys, 586f, 587–88
Newsome, J., 8–9
New York Times, 620, 626, 629
New Zealand, 402–3, 404n12
Nie, N., 198
non-Bayesian modeling, Bayesian vs., 618–19
nonresponse bias
ANES, 80–81
CSES, 402, 404n11
hard to reach populations, 162–63, 174n5
Internet surveys, 80–81
low-incidence populations, 188–90, 199
mail surveys, 54
mixed mode surveys, 58–59
Twitter, 556
null hypothesis significance test, 292–94
Oberski, D., 5
O’Brien, R. M., 603n10
O’Connor, B., 561
Ogunnaike, O., 257
Ohio registered voter study, 100–103, 102f, 104–5f, 106–10, 108f, 110nn2–3
Olson, K., 22–23, 36
Oman, 222t
online surveys. see Internet surveys
Page, B. I., 318
Palestine, 222t, 225f, 240n3
Palestinian Center for Policy and Survey Research, 246
Pan, J., 344
Panel Study of Income Dynamics, 28, 39, 42, 81
panel surveys. see longitudinal (panel) surveys
Paolacci, G., 490
paradata, 212, 215–16, 218n5
Park, D. K., 329
PATT estimation, 303–5
Peltzman, S., 322
Pennacchiotti, M., 566
Perceptions of Electoral Integrity (PEI), 590, 591, 603n8
Pereira, F. B., 345
Pérez, E. O., 7, 252, 254, 260–62
Pew American Muslim study, 193–94
Pew Asian-American Survey, 197
Pew Global Attitudes Survey, 392t, 393
Pew Global Research, 4t, 392t
Pew Research Center, 4t, 245
Phillips, J. H., 330, 345
photographs, 216–17, 239t
pie charts, 452–54
Pilot Asian American Political Survey, 182
plotting symbols, in graphs, 449–50, 465–66, 467f, 471–72, 472f, 478n5
Plutzer, E., 330–31
political activists, 183, 197–98
poll aggregation, forecasting
aggregation statistics, 614–17, 615–16f, 618f
challenges in, 623–29
data sources, collection, 624
forecasting statistics, 617–23
overview, 609–10
pollster quality in, 621
predictive value of, 628–29
single polls vs., 627–28
state level polls in, 622
statistical inference, 614–24, 628–29
technology developments, 610–13
uncertainty in, 624–27
undecided respondents in, 621–22
Pollster, 611, 615–17, 616f, 618f, 626
Popescu, A.-M., 566
population average treatment effects
complex data, causal inference with, 300–303
methodology, 303–5
overview, 299–300, 312–13
post-stratification weights, 301
simulation study, 305–9, 307t, 308f, 313nn1–3
social media/political participation study, 309–12, 310f, 312t, 314n4
weighting for differential selection probabilities, 301
weighting to adjust for unit nonresponse, 301
presidential election results, 323–24
Proctor, K., 8
Program on Governance and Local Development (GLD), 221, 222t, 224, 225f, 241n4, 246
propensity scores, 191, 302, 304–5
Public Opinion Quarterly, 294–96
Qatar, 222t, 235, 246
qualitative research
benefits of, 505
cognitive interviewing (see cognitive interviewing)
concepts, definitions, 506–7
concurrent, 509
confidentiality, 529–30
data management, organization, 525–26
ethical issues, 528–30
file naming, storage, 526
findings, analysis/reporting of, 525–28
focus groups, 510–12, 521–24, 531n4
group/interview management, 523–24
incentives in, 520–21
in-depth individual interviews, 512–13, 521–24, 531n4
informed consent, 522, 528–29
integration of, 507–10
limitations of, 507
observers, 524–25
participants, respect for, 530
post-administration, 509–10
probes, 514, 524
professional respondents, 519–20
project discovery, 507–8
protocol development, 521
question asking, 524
question wording, 514–16, 521
rapport, 522–23
reports, formal, 527–28
research plans, 516–17
respondent identification/recruitment, 518–20
screening criteria, 518
standards, guidelines for, 513, 516, 531n1
survey creation, refinement, 508–9
training, 519, 522, 531n4
usability testing, 514–15
question wording
agree-disagree scales, 116–19
best practices, 115–16, 116t
characteristics, coding, 122, 125, 125f
cognitive processes and, 116–20, 119–20f
common method variance, 121
described, 5–6
design choices, 116–20, 119–20f
in longitudinal (panel) surveys, 33–34, 42
low-incidence populations, 194, 200
meta-analysis, 120–21
multi trait-multi method approach, 118, 121–23, 126, 127f
predictive value, 123–26, 123f, 126–27f
qualitative research, 514–16, 521
quality estimation, 121–22
quasi-simplex model, 118
reliability, 121
responses, unreliability in, 113–14, 118–19
satisficing, 116–17
scale correspondence, 127
seam effect reduction via, 40
smartphone monitoring, 129
SQP project, 124–30, 126–27f, 134–37
in survey experiments, 483–84, 486–87
survey mode effects, 22–23
Quirk, P. J., 484–85, 487, 499
RAND American Life Panel, 28, 43n1
random digit dial phone surveys, 90
random sample surveys, 79–80, 92, 100–103, 102f, 110n2
Rao, D., 566
Rasinski, K., 17
Ratkiewicz, J., 568
Ray, L., 590, 600
Razo, A., 9
RealClearPolitics, 611, 614–15, 615f, 626
referenda results, 324
regression trees, 123–24, 123f
relational database management systems, 541, 541f
representative sampling, 92
Revilla, M., 117, 129
Rips, L. J., 17, 40
RIVA Training Institute, 522, 523, 531n4
Rivero, G., 563
Rivers, D., 342
Robinson, J. G., 164
Rodden, J., 330, 343
Roper, E., 389
Rothschild, D., 77, 565
Ruths, D., 566
Ryan, C., 251
Saiegh, S. M., 351
Sala, E., 39
sampling designs
address-based, 20, 55
ANES, 58, 80, 94n8, 491, 535, 549n4
clustering, 21
density sampling, 186–87
described, 5
expert surveys, 589–90, 593–95, 602, 603n11
high-effort cases, 191–92
list sampling, 185–86
longitudinal (panel) surveys, 30–31, 42–43
mixed mode surveys, 58–59
post-stratification, 190–91
primary sampling units (PSUs), 327
qualitative research, 518–20
simple random sampling, 543–44, 550nn24–26
stratified random sampling, 106–10, 108f, 184–85, 187
stratifying, 21
subnational public opinion, 326–27
in survey experiments, 488–91, 494–95, 498–99, 500nn10–11
Saris, W. E., 5
satisficing, 17, 68–69, 116–17
Saudi Arabia, 222t, 224, 225f, 240n3
scatterplots
applications of, 464
aspect ratio, 475–76, 475–76f
axis labels in, 466, 467f
data presentation in, 450–52, 453f
jittering, 468, 469f
point labels in, 468–71
Schaffner, B. F., 5, 89
Schlozman, K., 198
Schneider, S. K., 8
Schoen, H., 560
Schuler, M., 302
sdcMicro, 550n20
self-administered questionnaires (SAQs), 54, 66–68
Senate Election Studies, 535
Shapiro, R. Y., 320
Shone, B., 289–90
Si, Y., 8
Silver, N., 609, 612
simulations, 324–25
single-rater design surveys, 585–87, 586f
Sinharay, S., 377
Sjoberg, L., 595
Skoric, M., 560
Slobin, D., 255, 259
Smit, J. H., 116t
Smyth, J. D., 22–23
Snell, S. A., 5
Sniderman, P., 484, 499n2
Snyder, J. M., 343
social desirability bias
face-to-face surveys, 67
mail surveys, 22, 68
telephone surveys, 67
Twitter surveys, 556, 561, 569, 575n3, 575n12
Social & Economic Survey Research Institute, 246
social exchange theory, 163, 174n5
social media data. see Twitter
social media/political participation study, 309–12, 310f, 312t, 314n4
South Bend Study, 106
Spahn, B. T., 300–301
spatial Durbin model, 551n32
spatial voting, 350–51
Sprague, J., 106
SQP2.0 project, 124–30, 126–27f, 134–37
standards, guidelines. see best practices
Stanley, J., 15
statistical inference
aggregation, 614–17, 615–16f, 618f
Bayesian vs. non-Bayesian modeling, 618–19
binomial outcomes, 275–78, 277–78t, 286–87
Brier scores, 623
case studies, 294–96
certainty, 286
compositional data, 286
context surveys, 545–47, 547f, 551nn27–33
data disaggregation, 325–28, 345–46
errors in, 279–84, 297nn6–7
forecasting, 617–23
fundamentals vs. poll-only, 619–21, 628–29
hierarchical linear regressions, 545–47, 547f
Internet surveys, 279, 297n5
item characteristic curves, 370–71
Kalman filter model, 617, 618f, 626
LOESS lines, 615–16, 626
log-ratio transformation, 289–92, 291t, 297nn9–14
margin of error treatment, 284–88, 285t, 297n8
Markov chains, 617
multilevel regression/post-stratification, 328–32, 346, 566–67
multinomial outcomes, 275–78, 277–78t, 289
null hypothesis significance test, 292–94
null variance, in expert surveys, 592, 603n9
poll aggregation, forecasting, 614–24
pooled measures, 593, 603n10
proportions, 288–89
random sampling, 279, 297n5
simulations, 324–25
uncertainty, 278–79
variation matrix, 290
Sterba, S. K., 549n11
Stipak, B., 545
Stokes, D. E., 317, 326, 390
Stone, W. J., 598, 600
stratified random sampling, 106–10, 108f, 184–85, 187
structural equation models (SEM), 121–22
Stuart, E. A., 302
subclassifications matching, 302, 304
subnational public opinion
bias in, 327
cross-sectional measures of, 327
data disaggregation, 325–28, 345–46
data sets, 318, 320–21
dyadic representation model, 317
elite preferences, 320, 328
geographic sorting, 320, 328
ideology measures, 326
income/opinion relationships, 353–54, 356n8
multilevel regression/post-stratification, 328–32, 346, 566–67
observations, number of, 318–19
opinion-policy linkage, 317–21
overview, 7–8, 316–17, 331–32
quality/effects relationships, 317–18
reliability, 326
research designs, 319
research studies, 321
sampling, 326–27
simulations, 324–25
surrogates, 321–24
Sudan, 222t, 225f
Sumaktoyo, N. G., 348
surrogate demographic variables, 322–23
surrogates, 321–24
survey designs. see designs
survey experiments
applications of, 484, 495
background, 483–84
behavioral vs. treatment outcomes, 485, 500n6
benefits of, 484–88, 494–95, 498
concepts, definitions, 483, 487, 499n1
embedded, 496–97
expressed preferences, 496
field experiments, 486–88, 500nn6–9
incentives in, 488, 491, 497, 500n10
laboratory experiments, 485–86, 491, 497, 500n5
measurement limitations, 495–98
MTurk, 79, 90, 91, 490, 492, 500n12, 500n17
natural experiments, 485, 500n5
online panels, 491–93
participant limitations, 488–89, 500n10
professional subjects, 492–93, 501n18
question wording in, 483–84, 486–87
random assignments vs., 485, 499nn4–5
real-world generalizability, 487, 489–90, 500n8
representative sample recruitment, 491–93, 501n19
revealed preferences, 496–98
sample diversity, 489–91, 498–99, 500n11
sampling designs in, 488–91, 494–95, 498–99, 500nn10–11
subject pools, 492
time-in-sample bias, 492, 501n18
validity of, 487–88, 490
Survey of Income and Program Participation (SIPP), 35, 39, 40
Survey of LGBT Americans, 364–66, 365t, 371, 372t. see also group consciousness
Swedish National Election Studies, 4t, 409
Syria, 222t
target-units mapping design, 585–89, 586f
Tausanovitch, C., 330, 348, 349, 350
Tavits, M., 260, 262
telephone surveys
coverage issues in, 55–57, 56–57f
CSES, 401t, 402
in developing countries, 211
hard to reach populations, 156–58
history of, 79, 610
language/opinion relationships, 259, 262
MENA, 241n14
mixed mode designs, 53
open-ended responses, 65
presentation effects, 66–67
random digit dial phone surveys, 90
social desirability bias, 67
survey mode transitions, 79
TSE approach to, 13, 79–81
validity of, 90
Tessler National Science Foundation, 224, 225f
think-aloud protocols, 16–17, 235
thinking for speaking, 255, 259–60, 263
time-in-sample bias, 37–39, 44nn7–10, 492, 501n18
Time Sharing Experiments for the Social Sciences (TESS), 484, 499n4
total survey error
comparability error, 23–24
conversational/flexible interviewing, 18
coverage error, 20
data collection, 18–19
Internet surveys, 17, 78, 86–89, 87t, 94n8
interviewer error, 18
item-level nonresponse, 18–19
measurement, 16–18
overview, 3–5, 13–14
post survey error, 23
principles of, 14–15, 16f, 33
reliability assessment, 32
respondent error, 16–17
response modes, 17
response process stages, 17
sampling error, 20–22
standardized interviewing, 18
survey mode effects, 14, 22–23
unit-level nonresponse, 19–20
validity, internal vs. external, 15
total survey quality, 14, 24
Tourangeau, R., 17, 158, 161–63, 174n1
Transitional Governance Project (TGP), 221, 222t, 224, 225f, 241n4, 246
Traugott, M. W., 38
true population proportion calculation, 276
Tucker, J., 568
Tufte, E. R., 439
Tukey, J. W., 448
Tumasjan, A., 560
Tunisia, 222t, 223, 224, 225f, 229, 230, 232, 237, 241n4, 246–48
Twitter
benefits of, 555–57, 575nn4–5
bots, spammers, 562, 567–68, 575n7
challenges of, 557–59, 571
changes over time, 569
computational focus groups, 565
contextual data, 570
data aggregation, 563–64, 567–68
data archives, 556, 575n4
data sets, 559
ethical issues, 558–59
fake accounts, 562–63, 567–68, 575n7
ideology estimation, 566–67
keyword selection, 565
multilevel regression/post-stratification, 328–32, 346, 566–67
nonresponse bias, 556
panels, 569
political activist opinions, 570
polling, funding/interest in, 556, 575n3
public opinion identification, 559–61, 564–65
research agenda, 571–74
research collaborations, 570, 575n5
response rates, 556, 567, 574n1
selection bias, 559–60
sentiment analysis, 561, 565, 569, 575n12
social desirability bias, 556, 561, 569, 575n3, 575n12
subpopulation studies, 569
topics, 560, 575n11
tweet counting methods, 560–61
user representativeness, 561–63, 565–67, 575n9
validation, 568
U.S. Census, 4t, 70, 185, 187
UC-Davis Congressional Election Study, 4t, 587
uncertainty measures
expert surveys, 591–93, 603nn8–10
latent constructs, 356n7
in poll aggregation, forecasting, 624–27
statistical inference, 278–79
unconditional positive regard, 523
United Arab Emirates, 222t
United Kingdom, 630
univariate graphs
bar charts, 454–56, 455f, 457f
best practices, 448–52, 449f, 451f, 477
dot plots, 458–59, 458f, 460f, 478n2
histograms, 442, 444f, 446, 452, 455, 460–63, 462f
information processing, 459
overview, 452, 478n2
pie charts, 452–54
Unwin, A., 410
Vaccari, C., 567
Van Bruggen, G. H., 598, 600
Vandecasteele, L., 36
Van Ham, C., 587
Varieties of Democracy Project (V-Dem), 4t, 583, 589, 603n2
Verba, S., 198
verbal probing, 235, 235t
video recording, 525
visual perception theory, 447, 450, 476
Vivyan, N., 351
vote share plotting. see graphs
voting behaviors. see also American National Election Study (ANES)
change, measurement of, 31–32, 44n3
intention stability, 40
mixed mode surveys, validation testing, 59–63, 61–62t, 71nn3–5
panel conditioning effects, 37–39, 44nn7–10
spatial voting, 350–51
vote share graphing (see graphs)
Vowles, J., 8
Vox, 629
Wang, W., 411, 412, 430
Ward, R., 257
Warshaw, C., 8, 330, 347, 348, 349
Washington Post, 620, 629
Weisberg, H. F., 3, 14, 15
Whorf, B., 254
Williams, K. C., 487, 492
Witt, L., 36
World Values Survey, 4t, 221, 224, 225f, 245, 392, 392t
Wright, G. C., 326
Xu, Y., 344
Yemen, 222t, 225f, 240n3
YouGov, 28, 30, 38, 76, 77, 88, 94n6, 492–93
Young, M., 43, 492–93
Youth-Parent Socialization Panel study, 30
Zaller, J., 262
Zanutto, E. L., 302
Zell, E. R., 38
Zogby International, 195
Zupan, M. A., 322