
PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).

date: 24 August 2019

(p. 528) (p. 529) Subject Index

Accuracy ratio (AR), 350–352
Adaptive component selection and smoothing operator (ACOSSO), 287
Additive coefficient models (ACMs)
SBK method in, 162–170
Cobb-Douglas model, application to, 168–170
Additive models, 129–207
overview, 149–153, 176–179
ACMs, SBK method in, 162–170
Cobb-Douglas model, application to, 168–170
additive diffusion models, 201
future research, 173
GDP growth forecasting and, 151–152
noisy Fredholm integral equations of second kind, 205–207
nonparametric additive models, 129–146 (See also Nonparametric additive models)
nonstationary observations, 202–205
PLAMs, SBK method in, 159–162
housing data, application to, 162
related models, 190–202
additive diffusion models, 201
missing observations, 200–201
nonparametric regression with repeated measurements, 192–193
nonparametric regression with time series errors, 191–192
panels of time series and factor models, 197–198
panels with individual effects, 193–196
semiparametric GARCH models, 198–199
simultaneous nonparametric equation models, 201–202
varying coefficient models, 199–200
SBK method in, 153–159
housing data, application to, 157–159
SBK estimator, 154–157
SBS method in, 170–172
smooth least squares estimator in additive models, 179–190 (See also Smooth least squares estimator in additive models)
variable selection in, 255–262
Huang-Horowitz-Wei adaptive group LASSO, 257–258
Meier-Geer-Bühlmann sparsity-smoothness penalty, 258–260
Ravikumar-Liu-Lafferty-Wasserman sparse adaptive models, 260–262
Xue SCAD procedure, 262
Additive regression, 149–153
Aid to Families with Dependent Children (AFDC)
AR (Accuracy ratio), 350–352
ARMA models, 447, 449–450
Artificial neural network (ANN) method, 347
Asymptotic normal inference, 65–93
assumptions, 70–77
deconvolution, 67–68, 90
density, 67, 89–90
estimation methods, 70–77
examples, 67–69
for fixed α, 77–79
functional linear regression, 91–92
with possible endogenous regressors, 68–69
Hilbert scales, 70–72
ill-posedness, 70
implementation, 89–93
T estimated, 90–93
T known, 89–90
mean square error, rate of convergence of, 73–77
model, 67–69
nonparametric instrumental regression, 69, 92–93
overview, 65–67, 93
regularization, 72–73
selection of parameter, 87–89
test statistics, 79–82
φ0 fully specified, 79–80
φ0 parametrically specified, 80–82
for vanishing α, 82–87
estimated operator, 85–87
with known operator, 83–85
rate of convergence, 87
Autoregressive models
nonlinear autoregressive models, 431–434
selected proofs, 441
nonparametric models with nonstationary data, 459–460
semiparametric models with nonstationary data, 463–464
Average square prediction error (ASPE)
Canadian Census Public Use data and, 328f
federal funds interest rate and, 337f
GDP growth and, 334f
housing data and, 332f
wages and, 326f
Averaging regression, 230–232
asymptotic optimality of, 235–236
computation, 234–235
feasible series estimators, 237f
jackknife model averaging (JMA) criteria, 233
numerical simulation, 236
selected proofs, 245–247
weights, 237t
Basel Committee on Banking Supervision, 346
Berkson error model, 103t, 104, 108t
Bertin-Lecué model, 290–291
Binary choice models, 467–468
Black-Scholes model, 346
Box-and-whisker plots, 341n
Brownian motion
nonstationary time series and, 449, 456
time series and, 377, 379, 383–387, 390–392, 417n
Bunea consistent selection via LASSO, 288
Canadian Census Public Use data
data-driven model evaluation, application of, 327–329
average square prediction error (ASPE), 328f
empirical distribution function (EDF), 328f
RESET test results, 327t
revealed performance (RP) test results, 329t
CAP (Cumulative accuracy profile) curve, 350–352, 351f
Cauchy sequences, 10
Chebyshev polynomials, 17–19, 19f
Chen-Yu-Zou-Liang adaptive elastic-net estimator, 266
Chromosomes
SVM method and, 361–365, 363–366f, 365t
Classical backfitting
relation to smooth least squares estimator in additive models, 187–189
Cobb-Douglas model
GDP growth forecasting and, 151f, 168–170, 170f
SBK method, application of, 168–170
Cointegrated systems
automated efficient estimation of, 398–402
cointegration of nonlinear processes, 471–473
efficient estimation of, 395–398, 417n
selected proofs, 413–415
Cointegrating models
nonlinear models with nonstationary data, 452–458
nonparametric models with nonstationary data, 460–463
semiparametric models with nonstationary data
varying coefficient cointegrating models, 464–466
varying coefficient models with correlated but not cointegrated data, 466–467
Comminges-Dalalyan consistent selection in high-dimensional nonparametric regression, 288–289
Component selection and smoothing operator (COSSO), 284–287
Conditional independence
convolution equations in models with, 110–111
Consumer Price Index (CPI)
semilinear time series and, 435, 436f
Convolution equations, 97–125
estimation and, 122–124
plug-in nonparametric estimation, 122–123
regularization in plug-in estimation, 123–124
independence or conditional independence, in models with, 100–111
Berkson error model, 103t, 104, 108t
classical error measurement, 103t, 104–106, 108t
conditional independence conditions, in models with, 110–111
measurement error and related models, 102–107
regression models and, 107–110
space of generalized functions S*, 100–102
overview, 97–100, 124–125
partial identification, 120–121
solutions for models, 112–119
classes of, 115–119
existence of, 112–115
identified solutions, 119–120
support and multiplicity of, 115–119
well-posedness in S*, 121–122
COSSO (Component selection and smoothing operator), 284–287
Co-summability, 471–473
Cramer-von Mises statistics, 79, 377
CreditReform database
SVM method, application of, 366–368
data, 366–367t
testing error, 368t
training error, 368t
Cross-validation, 224–226
asymptotic optimality of, 226–227
feasible series estimators, 237f
numerical simulation and, 236
selected proofs, 239–245
typical function, 226f, 237t
Cumulative accuracy profile (CAP) curve, 350–352, 351f
Data-driven model evaluation, 308–338
overview, 308–311, 336–338
box-and-whisker plots, 341n
Canadian Census Public Use data, application to, 327–329
average square prediction error (ASPE), 328f
empirical distribution function (EDF), 328f
RESET test results, 327t
revealed performance (RP) test results, 329t
empirical illustrations, 325–336
federal funds interest rate, application to, 335–336
average square prediction error (ASPE), 337f
empirical distribution function (EDF), 338f
time plot, autocovariance plot and partial autocorrelation plot, 336–337f
GDP growth, application to, 333–335
average square prediction error (ASPE), 334f
empirical distribution function (EDF), 334f
housing data, application to, 330–333
average square prediction error (ASPE), 332f
empirical distribution function (EDF), 332f
methodology, 311–319
bootstrap, validity of, 315–319
empirical distribution of true error, 313–315
Monte Carlo simulations, 319–324
cross-sectional data, 319–320
optimal block length, 340n
time series data, 320–324, 321–324t
scaling factors, 340n
stationary block bootstrap, 340n
wages, application to, 325–327
average square prediction error (ASPE), 326f
empirical distribution function (EDF), 326f
nonoptimal smoothing, implications of, 325–327
Deconvolution, 67–68, 90
Default prediction methods
overview, 346–348
accuracy ratio (AR), 350–352
artificial neural network (ANN) method, 347
cumulative accuracy profile (CAP) curve, 350–352, 351f
maximum likelihood estimation (MLE) method, 347
ordinary least square (OLS) method, 347
performance evaluation, 349t
quality of, 348–352
receiver operating characteristic (ROC) curve, 350–352, 351f
structural risk minimization (SRM) method, 347
Density and distribution functions, 24–32
asymptotic normal inference, density, 67, 89–90
bivariate SNP density functions, 27–28
first-price auction model, application to, 32
general univariate SNP density functions, 24–27
MPH competing risks model, application to, 31–32
SNP functions on [0,1], 28–29
uniqueness of series representation, 29–31
EMM (Extended memory in mean), 448, 454
Empirical distribution function (EDF)
Canadian Census Public Use data and, 328f
federal funds interest rate and, 338f
GDP growth and, 334f
housing data and, 332f
wages and, 326f
Error
average square prediction error (ASPE)
Canadian Census Public Use data and, 328f
federal funds interest rate and, 337f
GDP growth and, 334f
housing data and, 332f
wages and, 326f
Berkson error model, 103t, 104, 108t
integrated mean square error (IMSE), 220–222
selection and averaging estimators, 231f
spline regression estimators, 223f
mean-squared forecast error (MFSE), 224
measurement error
convolution equations in models with independence, 102–107
Liang-Li penalized quantile regression for PLMs with measurement error, 295–296
Liang-Li variable selection with measurement errors, 267
Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272
Estimation
asymptotic normal inference, 70–77
convolution equations and, 122–124
plug-in nonparametric estimation, 122–123
regularization in plug-in estimation, 123–124
least squares estimation (See Least squares estimation)
nonparametric additive models, 131–142
bandwidth selection for Horowitz-Mammen two-step estimator, 139–140
conditional quantile functions, 136–137
known link functions, 137–139
unknown link functions, 140–142
regression equations, 485–498 (See also Regression equations)
SBS method in additive models, 170–172
sieve estimation, 33–34
sieve regression and, 215–247, 219–220 (See also Sieve regression)
smooth least squares estimator in additive models, 179–190 (See also Smooth least squares estimator in additive models)
stochastic processes, 501–517 (See also Stochastic processes)
time series, 377–476 (See also Time series)
of unconditional moments, 46
Evolutionary model selection
SVM method and, 361–365, 363–366f, 365t
Exchange rates
semilinear time series and, 438, 438f
Extended memory in mean (EMM), 448, 454
Federal funds interest rate
data-driven model evaluation, application of, 335–336
average square prediction error (ASPE), 337f
empirical distribution function (EDF), 338f
time plot, autocovariance plot and partial autocorrelation plot, 336–337f
First-price auctions, 8–9
density and distribution functions, application of, 32
Fourier analysis
convolution equations and, 112, 114, 117, 119
non-polynomial complete orthonormal sequences and, 23
orthonormal polynomials and, 20
Fubini’s Theorem, 45, 411–412
Generalized method of moments (GMM), 65–66, 495–497
Genetic algorithm (GA), 361–365, 363–366f, 365t
German CreditReform database
SVM method, application of, 366–368
data, 366–367t
testing error, 368t
training error, 368t
Government bonds
semilinear time series and, 435–438, 437f
Greater Avenues for Independence (GAIN program)
overview, 507–509
statistics, 508t
stochastic dominance procedure for gradient estimates, 507–517
overview, 507–509
all covariates, 514
bandwidth estimates, 509t, 510–514
dominance tests, 516–517
empirical results, 509–517
parameter estimates, 514–515
probability values, 516–517
significant nonparametric gradient estimates, 510t
significant returns by group, 511t
test statistics, 512–513t, 516
treatment variable, 514–515
Gross domestic product (GDP) growth
additive regression models, 151–152
Cobb-Douglas model and, 151f, 168–170, 170f
data-driven model evaluation, application of, 333–335
average square prediction error (ASPE), 334f
empirical distribution function (EDF), 334f
nonparametric additive models, empirical applications, 144–146
Hermite polynomials, 15, 15f
Hilbert scales, 70–72
Hilbert spaces, 9–13
convergence of Cauchy sequences, 10
inner products, 9–10
non-Euclidean Hilbert spaces, 11–13
orthogonal representations of stochastic processes in, 377–378, 380–382, 417n
orthonormal polynomials and, 13–15
spanned by sequence, 10–11
spurious regression and, 387, 389
unit root asymptotics with deterministic trends and, 392–393
Horowitz-Mammen two-step estimator, 139–140
Housing data
data-driven model evaluation, application of, 330–333
average square prediction error (ASPE), 332f
empirical distribution function (EDF), 332f
linearity test, 158f
SBK method, application of
in additive models, 157–159
in PLAMs, 162
Huang-Horowitz-Wei adaptive group LASSO, 257–258
Independence
convolution equations in models with, 100–111
Berkson error model, 103t, 104, 108t
classical error measurement, 103t, 104–106, 108t
conditional independence conditions, in models with, 110–111
measurement error and related models, 102–107
regression models and, 107–110
space of generalized functions S*, 100–102
Integrated mean square error (IMSE), 220–222
selection and averaging estimators, 231f
spline regression estimators, 223f
Jackknife model averaging (JMA) criteria
overview, 233
asymptotic optimality of, 235–236
computation, 234–235
feasible series estimators, 237f
numerical simulation and, 236
selected proofs, 245–247
weights, 237t
Kai-Li-Zou composite quantile regression, 297–299
Kato-Shiohama PLMs, 265–266
Koenker additive models, 296–297
Kolmogorov-Smirnov statistics, 79, 505
Kotlarski lemma, 97
LABAVS (Locally adaptive bandwidth and variable selection) in local polynomial regression, 293–295
Lafferty-Wasserman rodeo procedure, 291–293
Laguerre polynomials, 15–16, 16f
Least absolute shrinkage and selection (LASSO) penalty function
Bunea consistent selection in nonparametric models, 288
Huang-Horowitz-Wei adaptive group LASSO, 257–258
Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271
variable selection via, 251–255
LASSO estimator, 251–252
other penalty functions, 254–255
variants of LASSO, 252–254
Wang-Xia kernel estimation with adaptive group LASSO penalty in varying coefficient models, 269–270
Zeng-He-Zhu LASSO-type approach in single index models, 276–278
Least squares estimation
Ni-Zhang-Zhang double-penalized least squares regression, 264–265
nonparametric set of regression equations
local linear least squares (LLLS) estimator, 487–488
local linear weighted least squares (LLWLS) estimator, 488–489
ordinary least square (OLS) method, 347
Peng-Huang penalized least squares method, 275–276
smooth least squares estimator in additive models, 179–190 (See also Smooth least squares estimator in additive models)
Legendre polynomials, 16–17, 17f
Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271
Liang-Li penalized quantile regression for PLMs with measurement error, 295–296
Liang-Liu-Li-Tsai partially linear single index models, 278–279
Liang-Li variable selection with measurement errors, 267
Li-Liang variable selection in generalized varying coefficient partially linear models, 272
Linear inverse problems
asymptotic normal inference in, 65–93 (See also Asymptotic normal inference)
convolution equations, 97–125 (See also Convolution equations)
Lin-Zhang-Bondell-Zou sparse nonparametric quantile regression, 299–301
Lin-Zhang component selection and smoothing operator (COSSO), 284–287
Liu-Wang-Liang additive PLMs, 267–268
Local linear least squares (LLLS) estimator, 487–488
Local linear weighted least squares (LLWLS) estimator, 488–489
Locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295
Long-run variance
estimation of, 402–407
selected proofs, 415–417
Markov chains, 439–440, 459–461
Maximum likelihood estimation (MLE) method, 347
Mean-squared forecast error (MFSE), 224
Measurement error
convolution equations in models with independence, 102–107
Liang-Li penalized quantile regression for PLMs with measurement error, 295–296
Liang-Li variable selection with measurement errors, 267
Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272
Meier-Geer-Bühlmann sparsity-smoothness penalty, 258–260
Mercer’s Theorem
SVM method and, 359
time series and, 378, 382
Merton model, 346–347
Methodology
semi-nonparametric models, 3–34 (See also Semi-nonparametric models)
special regressor method, 38–58 (See also Special regressor method)
MFSE (Mean-squared forecast error), 224
Miller-Hall locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295
Missing observations, 200–201
Mixed proportional hazard (MPH) competing risks model, 6–8
density and distribution functions, application of, 31–32
MLE (Maximum likelihood estimation) method, 347
Monte Carlo simulations
data-driven model evaluation, 319–324
cross-sectional data, 319–320
optimal block length, 340n
time series data, 320–324, 321–324t
nonparametric additive models and, 143
nonstationary time series and, 457, 463, 467
Nadaraya-Watson estimators
nonparametric additive models and, 135, 143–145
SBK method and, 153, 155, 160–161
smooth least squares estimator in additive models and, 177, 180, 183–187
Ni-Zhang-Zhang double-penalized least squares regression, 264–265
Noisy integral equations
Fredholm equations of second kind, 205–207
smooth backfitting as solution of, 187
Nonlinear models with nonstationary data, 449–458
cointegrating models, 452–458
error correction models, 452–458
univariate nonlinear modeling, 450–452
Nonlinear nonstationary data, 445–449
Nonparametric additive models, 129–146
empirical applications, 144–146
estimation methods, 131–142
bandwidth selection for Horowitz-Mammen two-step estimator, 139–140
conditional quantile functions, 136–137
known link functions, 137–139
unknown link functions, 140–142
Monte Carlo simulations and, 143
nonparametric regression with repeated measurements, 192–193
nonparametric regression with time series errors, 191–192
overview, 129–131, 146
simultaneous nonparametric equation models, 201–202
tests of additivity, 143–144
Nonparametric models
nonparametric additive models, 129–146 (See also Nonparametric additive models)
with nonstationary data, 458–463
autoregressive models, 459–460
cointegrating models, 460–463
variable selection in, 283–295
Bertin-Lecué model, 290–291
Bunea consistent selection via LASSO, 288
Comminges-Dalalyan consistent selection in high-dimensional nonparametric regression, 288–289
Lafferty-Wasserman rodeo procedure, 291–293
Lin-Zhang component selection and smoothing operator (COSSO), 284–287
Miller-Hall locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295
Storlie-Bondell-Reich-Zhang adaptive component selection and smoothing operator (ACOSSO), 287
Non-polynomial complete orthonormal sequences, 21–24
derived from polynomials, 22–23
trigonometric sequences, 23–24
Nonstationary time series, 444–476
overview, 444–445, 473–474
cointegration of nonlinear processes, 471–473
co-summability, 471–473
kernel estimators with I(1) data, 474–476
model specification tests with nonstationary data, 469–471
Monte Carlo simulations and, 457, 463, 467
nonlinear models with nonstationary data, 449–458
cointegrating models, 452–458
error correction models, 452–458
univariate nonlinear modeling, 450–452
nonlinear nonstationary data, 445–449
nonparametric models with nonstationary data, 458–463
autoregressive models, 459–460
cointegrating models, 460–463
semilinear time series and, 428–431
semiparametric models with nonstationary data, 463–469
autoregressive models, 463–464
binary choice models, 467–468
time trend varying coefficient models, 468–469
varying coefficient cointegrating models, 464–466
varying coefficient models with correlated but not cointegrated data, 466–467
Oracally efficient two-step estimation
SBS (Spline-backfitted spline) method, 170–172
Ordinary least square (OLS) method, 347
Orthogonal representations of stochastic processes, 380–386
selected proofs, 411–413
Orthonormal polynomials, 13–21
Chebyshev polynomials, 17–19, 19f
completeness, 19–20
examples, 15–19
Hermite polynomials, 15, 15f
Hilbert spaces and, 13–15
Laguerre polynomials, 15–16, 16f
Legendre polynomials, 16–17, 17f
SNP index regression model, application to, 20–21
Panels
with individual effects, 193–196
of time series and factor models, 197–198
Partially linear additive models (PLAMs)
SBK method in, 159–162
housing data, application to, 162
Partially linear models (PLMs)
variable selection in, 263–268
Chen-Yu-Zou-Liang adaptive elastic-net estimator, 266
Kato-Shiohama PLMs, 265–266
Liang-Li penalized quantile regression for PLMs with measurement error, 295–296
Liang-Li variable selection with measurement errors, 267
Liu-Wang-Liang additive PLMs, 267–268
Ni-Zhang-Zhang double-penalized least squares regression, 264–265
Xie-Huang SCAD-penalized regression in high-dimension PLMs, 263–264
Peng-Huang penalized least squares method, 275–276
Quantile regression
variable selection in, 295–301
Kai-Li-Zou composite quantile regression, 297–299
Koenker additive models, 296–297
Liang-Li penalized quantile regression for PLMs with measurement error, 295–296
Lin-Zhang-Bondell-Zou sparse nonparametric quantile regression, 299–301
Ravikumar-Liu-Lafferty-Wasserman sparse adaptive models, 260–262
Receiver operating characteristic (ROC) curve, 350–352, 351f
Regression analysis
additive regression, 149–153
averaging regression, 230–232 (See also Averaging regression)
sieve regression, 215–247 (See also Sieve regression)
spurious regression, tools for understanding, 387–391, 417n
Regression equations, 485–498
overview, 485–486, 498
alternative specifications of nonparametric/semiparametric set of regression equations, 490–497
additive nonparametric models, 493–494
nonparametric regressions with nonparametric autocorrelated errors, 492–493
partially linear semi-parametric models, 490–492
varying coefficient models with endogenous variables, 495–497
varying coefficient nonparametric models, 494–495
estimation with conditional error variance-covariance Ω(x), 497–498
nonparametric set of regression equations, 486–490
estimation with unconditional error variance-covariance Ω, 487–490
local linear least squares (LLLS) estimator, 487–488
local linear weighted least squares (LLWLS) estimator, 488–489
two-step estimator, 489–490
RESET test results
data-driven model evaluation, 327t
Revealed performance (RP) test results
data-driven model evaluation, 329t
ROC (Receiver operating characteristic) curve, 350–352, 351f
Rosenblatt-Parzen kernel density estimator, 46–47
SBS (Spline-backfitted spline) method in additive models, 170–172
Selection of models
data-driven model evaluation, 308–338 (See also Data-driven model evaluation)
sieve regression, 215–247 (See also Sieve regression)
variable selection, 249–302 (See also Variable selection)
Semilinear time series, 421–441
overview, 421–423, 439
CPI and, 435, 436f
examples of implementation, 434–438
exchange rates and, 438, 438f
extensions, 434–438
government bonds and, 435–438, 437f
nonlinear autoregressive models, 431–434
selected proofs, 441
nonstationary models, 428–431
selected proofs, 439–441
stationary models, 423–427
d ≥ 3, 426–427
1 ≤ d ≤ 2, 423–426
Semi-nonparametric (SNP) index regression model, 5–6
orthonormal polynomials, application of, 20–21
Semi-nonparametric models, 3–34
density and distribution functions, 24–32 (See also Density and distribution functions)
first-price auctions, 8–9
density and distribution functions, application of, 32
Hilbert spaces, 9–13 (See also Hilbert spaces)
MPH competing risks model, 6–8
density and distribution functions, application of, 31–32
non-polynomial complete orthonormal sequences, 21–24
derived from polynomials, 22–23
trigonometric sequences, 23–24
orthonormal polynomials, 13–21 (See also Orthonormal polynomials)
overview, 3–5, 34
sieve estimation, 33–34
SNP index regression model, 5–6
orthonormal polynomials, application of, 20–21
Semiparametric models
GARCH models, 198–199
with nonstationary data, 463–469
autoregressive models, 463–464
binary choice models, 467–468
time trend varying coefficient models, 468–469
varying coefficient cointegrating models, 464–466
varying coefficient models with correlated but not cointegrated data, 466–467
Sequential minimal optimization (SMO) algorithm, 361
Short memory in mean (SMM), 448, 454
Sieve regression, 215–247
overview, 215–218, 237–238
alternative method selection criteria, 228–230
averaging regression, 230–232
asymptotic optimality of, 235–236
computation, 234–235
feasible series estimators, 237f
jackknife model averaging (JMA) criteria, 233
numerical simulation, 236
selected proofs, 245–247
weights, 237t
cross-validation, 224–226
asymptotic optimality of, 226–227
feasible series estimators, 237f
number of models, preselection of, 227–228
selected proofs, 239–245
typical function, 226f, 237t
integrated mean square error (IMSE), 220–222
selection and averaging estimators, 231f
spline regression estimators, 223f
mean-squared forecast error (MFSE), 224
model of, 219–220
numerical simulation, 230
with JMA criteria, 236
order of approximation, importance of, 221–222
regularity conditions, 238–239
selected proofs, 239–247
sieve approximation, 218–219
sieve estimation, 33–34, 219–220
sieve estimation-based variable selection in varying coefficient models with longitudinal data, 273–274
Simultaneous nonparametric equation models, 201–202
Single index models
variable selection in, 274–283
generalized single index models, 281–283
Liang-Liu-Li-Tsai partially linear single index models, 278–279
Peng-Huang penalized least squares method, 275–276
Yang variable selection for functional index coefficient models, 280–281
Zeng-He-Zhu LASSO-type approach, 276–278
SMM (Short memory in mean), 448, 454
SMO (Sequential minimal optimization) algorithm, 361
Smooth least squares estimator in additive models, 179–190
asymptotics of smooth backfitting estimator, 182–186
backfitting algorithm, 179–182
bandwidth choice and, 189
classical backfitting, relation to, 187–189
generalized additive models, 189–190
noisy integral equations, smooth backfitting as solution of, 187
SBK method, relation to, 187–189
smooth backfitting local linear estimator, 186–187
Smoothly clipped absolute deviation (SCAD) penalty function
variable selection via, 254–255
Xie-Huang SCAD-penalized regression in high-dimension PLMs, 263–264
Xue SCAD procedure for additive models, 262
Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272
Special regressor method, 38–58
alternative derivation, 45
conditional independence assumption, relaxation of, 52–53
covariates, identification with, 47–48
discrete special regressors, 55–56
extensions, 56–57
latent linear index models, 48–49
with endogenous or mismeasured regressors, 51–52
latent marginal distributions, identification of, 40–42
latent nonparametric instrumental variables, 50–51
latent partly linear regression, 50
latent random coefficients, 49
overview, 38–39, 57–58
Tji, construction of, 53–55
unconditional moments, 42–44
estimation of, 46
Spline-backfitted kernel smoothing (SBK) method
in ACMs, 162–170
Cobb-Douglas model, application to, 168–170
in additive models, 153–159
housing data, application to, 157–159
SBK estimator, 154–157
in PLAMs, 159–162
housing data, application to, 162
smooth least squares estimator in additive models, relation to, 187–189
Spline-backfitted spline (SBS) method in additive models, 170–172
Spurious regression
tools for understanding, 387–391, 417n
SRM (Structural risk minimization) method, 347
Stochastic processes
cointegrated systems
automated efficient estimation of, 398–402
efficient estimation of, 395–398, 417n
selected proofs, 413–415
dominance procedure, 501–518
overview, 501–503, 517–518
for gradient estimates, 503–507
Hilbert spaces and, 417n
long-run variance, estimation of, 402–407
selected proofs, 415–417
orthogonal representations of, 380–386, 417n
selected proofs, 411–413
selected proofs, 411–417
spurious regression, tools for understanding, 387–391, 417n
unit root asymptotics with deterministic trends, 391–395
Storlie-Bondell-Reich-Zhang adaptive component selection and smoothing operator (ACOSSO), 287
variable selection, 287
Structural risk minimization (SRM) method, 347
Support vector machine (SVM) method, 352–369
overview, 346–348
CreditReform database, application to, 366–368
data, 366–367t
testing error, 368t
training error, 368t
evolutionary model selection, 361–365, 363–366f, 365t
formulation, 352–361
in linearly nonseparable case, 356–358
in linearly separable case, 352–356, 353f
in nonlinear classification, 358–361, 359f
sequential minimal optimization (SMO) algorithm, 361
TFP (Total factor productivity) growth rate, 152f, 168–170, 170f
Tikhonov regularization, 66, 69, 72–73, 75, 81, 83
Time series, 377–476
overview, 377–379, 407–411
cointegrated systems
automated efficient estimation of, 398–402
cointegration of nonlinear processes, 471–473
efficient estimation of, 395–398, 417n
selected proofs, 413–415
data-driven model evaluation, Monte Carlo simulations, 320–324, 321–324t
long-run variance, estimation of, 402–407
selected proofs, 415–417
nonstationary time series, 444–476 (See also Nonstationary time series)
orthogonal representations of stochastic processes, 380–386, 417n
selected proofs, 411–413
panels of time series and factor models, 197–198
selected proofs, 411–417
semilinear time series, 421–441
spurious regression, tools for understanding, 387–391, 417n
unit root asymptotics with deterministic trends, 391–395
Total factor productivity (TFP) growth rate, 152f, 168–170, 170f
Two-step estimator
in nonparametric set of regression equations, 489–490
SBS (Spline-backfitted spline) method, 170–172
Unit root asymptotics with deterministic trends, 391–395
Univariate nonlinear modeling, 450–452
Variable selection, 249–302
overview, 249–251, 301–302
in additive models, 255–262
Huang-Horowitz-Wei adaptive group LASSO, 257–258
Meier-Geer-Bühlmann sparsity-smoothness penalty, 258–260
Ravikumar-Liu-Lafferty-Wasserman sparse adaptive models, 260–262
Xue SCAD procedure, 262
in nonparametric models, 283–295
Bertin-Lecué model, 290–291
Bunea consistent selection via LASSO, 288
Comminges-Dalalyan consistent selection in high-dimensional nonparametric regression, 288–289
Lafferty-Wasserman rodeo procedure, 291–293
Lin-Zhang component selection and smoothing operator (COSSO), 284–287
Miller-Hall locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295
Storlie-Bondell-Reich-Zhang adaptive component selection and smoothing operator (ACOSSO), 287
in PLMs, 263–268
Chen-Yu-Zou-Liang adaptive elastic-net estimator, 266
Kato-Shiohama PLMs, 265–266
Liang-Li variable selection with measurement errors, 267
Liu-Wang-Liang additive PLMs, 267–268
Ni-Zhang-Zhang double-penalized least squares regression, 264–265
Xie-Huang SCAD-penalized regression in high-dimension PLMs, 263–264
in quantile regression, 295–301
Kai-Li-Zou composite quantile regression, 297–299
Koenker additive models, 296–297
Liang-Li penalized quantile regression for PLMs with measurement error, 295–296
Lin-Zhang-Bondell-Zou sparse nonparametric quantile regression, 299–301
in single index models, 274–283
generalized single index models, 281–283
Liang-Liu-Li-Tsai partially linear single index models, 278–279
Peng-Huang penalized least squares method, 275–276
Yang variable selection for functional index coefficient models, 280–281
Zeng-He-Zhu LASSO-type approach, 276–278
in varying coefficient models, 268–274
Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271
Li-Liang variable selection in generalized varying coefficient partially linear models, 272
sieve estimation-based variable selection in varying coefficient models with longitudinal data, 273–274
Wang-Xia kernel estimation with adaptive group LASSO penalty, 269–270
Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272
via LASSO or SCAD-type penalties in parametric models, 251–255
LASSO estimator, 251–252
other penalty functions, 254–255
variants of LASSO, 252–254
Varying coefficient models, 199–200
alternative specifications of nonparametric/semiparametric set of regression equations
with endogenous variables, 495–497
nonparametric models, 494–495
semiparametric models with nonstationary data
cointegrating models, 464–466
models with correlated but not cointegrated data, 466–467
time trend varying coefficient models, 468–469
variable selection in, 268–274
Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271
Li-Liang variable selection in generalized varying coefficient partially linear models, 272
sieve estimation-based variable selection in varying coefficient models with longitudinal data, 273–274
Wang-Xia kernel estimation with adaptive group LASSO penalty, 269–270
Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272
Wages
data-driven model evaluation, application of, 325–327
average square prediction error (ASPE), 326f
empirical distribution function (EDF), 326f
nonoptimal smoothing, implications of, 325–327
Wang-Xia kernel estimation with adaptive group LASSO penalty, 269–270
White’s Theorem, 317–318
Xie-Huang SCAD-penalized regression in high-dimension PLMs, 263–264
Xue SCAD procedure, 262
Yang variable selection for functional index coefficient models, 280–281
Zeng-He-Zhu LASSO-type approach, 276–278
Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272