
# Subject Index


Accuracy ratio (AR), 350–352

Adaptive component selection and smoothing operator (ACOSSO), 287

Additive coefficient models (ACMs)

Additive models, 129–207

additive diffusion models, 201

future research, 173

GDP growth forecasting and, 151–152

noisy Fredholm integral equations of second kind, 205–207

nonstationary observations, 202–205

related models, 190–202

additive diffusion models, 201

missing observations, 200–201

nonparametric regression with repeated measurements, 192–193

nonparametric regression with time series errors, 191–192

panels of time series and factor models, 197–198

panels with individual effects, 193–196

semiparametric GARCH models, 198–199

simultaneous nonparametric equation models, 201–202

varying coefficient models, 199–200

SBS method in, 170–172

smooth least squares estimator in additive models, 179–190 (*See also* Smooth least squares estimator in additive models)

Additive regression, 149–153

Aid to Families with Dependent Children (AFDC)

AR (Accuracy ratio), 350–352

Artificial neural network (ANN) method, 347

Asymptotic normal inference, 65–93

assumptions, 70–77

estimation methods, 70–77

examples, 67–69

for fixed *α*, 77–79

Hilbert scales, 70–72

ill-posedness, 70

mean square error, rate of convergence of, 73–77

model, 67–69

Autoregressive models

nonparametric models with nonstationary data, 459–460

semiparametric models with nonstationary data, 463–464

Average square prediction error (ASPE)

Averaging regression, 230–232

asymptotic optimality of, 235–236

computation, 234–235

jackknife model averaging (JMA) criteria, 233

numerical simulation, 236

selected proofs, 245–247

Basel Committee on Banking Supervision, 346

Bertin-Lecué model, 290–291

Binary choice models, 467–468

Black-Scholes model, 346

Brownian motion

Bunea consistent selection via LASSO, 288

Canadian Census Public Use data

Cauchy sequences, 10

Chen-Yu-Zou-Liang adaptive elastic-net estimator, 266

Classical backfitting

relation to smooth least squares estimator in additive models, 187–189

Cobb-Douglas model

SBK method, application of, 168–170

Cointegrated systems

automated efficient estimation of, 398–402

cointegration of nonlinear processes, 471–473

selected proofs, 413–415

Cointegrating models

nonlinear models with nonstationary data, 452–458

nonparametric models with nonstationary data, 460–463

Comminges-Dalayan consistent selection in high-dimensional nonparametric regression, 288–289

Component selection and smoothing operator (COSSO), 284–287

Conditional independence

convolution equations in models with, 110–111

Convolution equations, 97–125

estimation and, 122–124

plug-in nonparametric estimation, 122–123

regularization in plug-in estimation, 123–124

independence or conditional independence, in models with, 100–111

conditional independence conditions, in models with, 110–111

measurement error and related models, 102–107

regression models and, 107–110

space of generalized functions *S*^{*}, 100–102

partial identification, 120–121

solutions for models, 112–119

classes of, 115–119

existence of, 112–115

identified solutions, 119–120

support and multiplicity of, 115–119

well-posedness in *S*^{*}, 121–122

COSSO (Component selection and smoothing operator), 284–287

Co-summability, 471–473

CreditReform database

Cross-validation, 224–226

asymptotic optimality of, 226–227

numerical simulation and, 236

selected proofs, 239–245

Data-driven model evaluation, 308–338

Canadian Census Public Use data, application to, 327–329

empirical illustrations, 325–336

federal funds interest rate, application to, 335–336

GDP growth, application to, 333–335

housing data, application to, 330–333

Default prediction methods

overview, 346–348

accuracy ratio (AR), 350–352

artificial neural network (ANN) method, 347

maximum likelihood estimation (MLE) method, 347

ordinary least square (OLS) method, 347

quality of, 348–352

structural risk minimization (SRM) method, 347

Density and distribution functions, 24–32

bivariate SNP density functions, 27–28

first-price auction model, application to, 32

general univariate SNP density functions, 24–27

MPH competing risks model, application to, 31–32

SNP functions on [0,1], 28–29

uniqueness of series representation, 29–31

Empirical distribution function (EDF)

Error

average square prediction error (ASPE)

integrated mean square error (IMSE), 220–222

mean-squared forecast error (MFSE), 224

measurement error

convolution equations in models with independence, 102–107

Liang-Li penalized quantile regression for PLMs with measurement error, 295–296

Liang-Li variable selection with measurement errors, 267

Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272

Estimation

asymptotic normal inference, 70–77

convolution equations and, 122–124

plug-in nonparametric estimation, 122–123

regularization in plug-in estimation, 123–124

nonparametric additive models, 131–142

bandwidth selection for Horowitz-Mammen two-step estimator, 139–140

conditional quantile functions, 136–137

known link functions, 137–139

unknown link functions, 140–142

SBS method in additive models, 170–172

sieve estimation, 33–34

smooth least squares estimator in additive models, 179–190 (*See also* Smooth least squares estimator in additive models)

of unconditional moments, 46

Federal funds interest rate

Fourier analysis

non-polynomial complete orthonormal sequences and, 23

orthonormal polynomials and, 20

German CreditReform database

Greater Avenues for Independence (GAIN program)

overview, 507–509

Gross domestic product (GDP) growth

additive regression models, 151–152

data-driven model evaluation, application of, 333–335

nonparametric additive models, empirical applications, 144–146

Hilbert scales, 70–72

Hilbert spaces, 9–13

convergence of Cauchy sequences, 10

inner products, 9–10

non-Euclidean Hilbert spaces, 11–13

orthonormal polynomials and, 13–15

spanned by sequence, 10–11

unit root asymptotics with deterministic trends and, 392–393

Horowitz-Mammen two-step estimator, 139–140

Housing data

data-driven model evaluation, application of, 330–333

Huang-Horowitz-Wei adaptive group LASSO, 257–258

Independence

Integrated mean square error (IMSE), 220–222

Jackknife model averaging (JMA) criteria

overview, 233

computation, 234–235

numerical simulation and, 236

selected proofs, 245–247

Kai-Li-Zou composite quantile regression, 297–299

Kato-Shiohama PLMs, 265–266

Koenker additive models, 296–297

Kotlyarski lemma, 97

LABAVS (Locally adaptive bandwidth and variable selection) in local polynomial regression, 293–295

Lafferty-Wasserman rodeo procedure, 291–293

Least absolute shrinkage and selection operator (LASSO) penalty function

Bunea consistent selection in nonparametric models, 288

Huang-Horowitz-Wei adaptive group LASSO, 257–258

Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271

variable selection via, 251–255

LASSO estimator, 251–252

other penalty functions, 254–255

variants of LASSO, 252–254

Wang-Xia kernel estimation with adaptive group LASSO penalty in varying coefficient models, 269–270

Zeng-He-Zhu LASSO-type approach in single index models, 276–278

Least squares estimation

Ni-Zhang-Zhang double-penalized least squares regression, 264–265

nonparametric set of regression equations

local linear least squares (LLLS) estimator, 487–488

local linear weighted least squares (LLWLS) estimator, 488–489

ordinary least square (OLS) method, 347

Peng-Huang penalized least squares method, 275–276

smooth least squares estimator in additive models, 179–190 (*See also* Smooth least squares estimator in additive models)

Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271

Liang-Li penalized quantile regression for PLMs with measurement error, 295–296

Liang-Liu-Li-Tsai partially linear single index models, 278–279

Liang-Li variable selection with measurement errors, 267

Li-Liang variable selection in generalized varying coefficient partially linear models, 272

Linear inverse problems

Lin-Zhang-Bondell-Zou sparse nonparametric quantile regression, 299–301

Lin-Zhang component selection and smoothing operator (COSSO), 284–287

Liu-Wang-Liang additive PLMs, 267–268

Local linear least squares (LLLS) estimator, 487–488

Local linear weighted least squares (LLWLS) estimator, 488–489

Locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295

Maximum likelihood estimation (MLE) method, 347

Mean-squared forecast error (MFSE), 224

Measurement error

convolution equations in models with independence, 102–107

Liang-Li penalized quantile regression for PLMs with measurement error, 295–296

Liang-Li variable selection with measurement errors, 267

Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272

Meier-Geer-Bühlmann sparsity-smoothness penalty, 258–260

Merton model, 346–347

Methodology

MFSE (Mean-squared forecast error), 224

Miller-Hall locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295

Missing observations, 200–201

Mixed proportional hazard (MPH) competing risks model, 6–8

density and distribution functions, application of, 31–32

Monte Carlo simulations

nonparametric additive models and, 143

Nadaraya-Watson estimators

Ni-Zhang-Zhang double-penalized least squares regression, 264–265

Noisy integral equations

Fredholm equations of second kind, 205–207

smooth backfitting as solution of, 187

Nonlinear models with nonstationary data, 449–458

cointegrating models, 452–458

error correction models, 452–458

univariate nonlinear modeling, 450–452

Nonlinear nonstationary data, 445–449

Nonparametric additive models, 129–146

empirical applications, 144–146

estimation methods, 131–142

bandwidth selection for Horowitz-Mammen two-step estimator, 139–140

conditional quantile functions, 136–137

known link functions, 137–139

unknown link functions, 140–142

Monte Carlo simulations and, 143

nonparametric regression with repeated measurements, 192–193

nonparametric regression with time series errors, 191–192

simultaneous nonparametric equation models, 201–202

tests of additivity, 143–144

Nonparametric models

variable selection in, 283–295

Bertin-Lecué model, 290–291

Bunea consistent selection via LASSO, 288

Comminges-Dalayan consistent selection in high-dimensional nonparametric regression, 288–289

Lafferty-Wasserman rodeo procedure, 291–293

Lin-Zhang component selection and smoothing operator (COSSO), 284–287

Miller-Hall locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295

Storlie-Bondell-Reich-Zhang adaptive component selection and smoothing operator (ACOSSO), 287

Non-polynomial complete orthonormal sequences, 21–24

derived from polynomials, 22–23

trigonometric sequences, 23–24

Nonstationary time series, 444–476

cointegration of nonlinear processes, 471–473

co-summability, 471–473

kernel estimators with I(1) data, 474–476

model specification tests with nonstationary data, 469–471

nonlinear models with nonstationary data, 449–458

cointegrating models, 452–458

error correction models, 452–458

univariate nonlinear modeling, 450–452

nonlinear nonstationary data, 445–449

nonparametric models with nonstationary data, 458–463

autoregressive models, 459–460

cointegrating models, 460–463

semilinear time series and, 428–431

Oracally efficient two-step estimation

SBS (Spline-backfitted spline) method, 170–172

Ordinary least square (OLS) method, 347

Orthonormal polynomials, 13–21

completeness, 19–20

examples, 15–19

Hilbert spaces and, 13–15

SNP index regression model, application to, 20–21

Partially linear models (PLMs)

variable selection in, 263–268

Chen-Yu-Zou-Liang adaptive elastic-net estimator, 266

Kato-Shiohama PLMs, 265–266

Liang-Li penalized quantile regression for PLMs with measurement error, 295–296

Liang-Li variable selection with measurement errors, 267

Liu-Wang-Liang additive PLMs, 267–268

Ni-Zhang-Zhang double-penalized least squares regression, 264–265

Xie-Huang SCAD-penalized regression in high-dimension PLMs, 263–264

Peng-Huang penalized least squares method, 275–276

Quantile regression

Ravikumar-Liu-Lafferty-Wasserman sparse adaptive models, 260–262

Regression analysis

additive regression, 149–153

Regression equations, 485–498

alternative specifications of nonparametric/semiparametric set of regression equations, 490–497

additive nonparametric models, 493–494

nonparametric regressions with nonparametric autocorrelated errors, 492–493

partially linear semi-parametric models, 490–492

varying coefficient models with endogenous variables, 495–497

varying coefficient nonparametric models, 494–495

estimation with conditional error variance-covariance Ω(*x*), 497–498

RESET test results

Revealed performance (RP) test results

Rosenblatt-Parzen kernel density estimator, 46–47

SBS (Spline-backfitted spline) method in additive models, 170–172

Selection of models

Semilinear time series, 421–441

examples of implementation, 434–438

extensions, 434–438

nonstationary models, 428–431

selected proofs, 439–441

Semi-nonparametric (SNP) index regression model, 5–6

orthonormal polynomials, application of, 20–21

Semi-nonparametric models, 3–34

non-polynomial complete orthonormal sequences, 21–24

derived from polynomials, 22–23

trigonometric sequences, 23–24

sieve estimation, 33–34

Semiparametric models

GARCH models, 198–199

Sequential minimal optimization (SMO) algorithm, 361

Sieve regression, 215–247

alternative method selection criteria, 228–230

averaging regression, 230–232

asymptotic optimality of, 235–236

computation, 234–235

jackknife model averaging (JMA) criteria, 233

numerical simulation, 236

selected proofs, 245–247

cross-validation, 224–226

asymptotic optimality of, 226–227

number of models, preselection of, 227–228

selected proofs, 239–245

integrated mean square error (IMSE), 220–222

mean-squared forecast error (MFSE), 224

model of, 219–220

order of approximation, importance of, 221–222

regularity conditions, 238–239

selected proofs, 239–247

sieve approximation, 218–219

Simultaneous nonparametric equation models, 201–202

Single index models

variable selection in, 274–283

generalized single index models, 281–283

Liang-Liu-Li-Tsai partially linear single index models, 278–279

Peng-Huang penalized least squares method, 275–276

Yang variable selection for functional index coefficient models, 280–281

Zeng-He-Zhu LASSO-type approach, 276–278

SMO (Sequential minimal optimization) algorithm, 361

Smooth least squares estimator in additive models, 179–190

asymptotics of smooth backfitting estimator, 182–186

backfitting algorithm, 179–182

bandwidth choice and, 189

classical backfitting, relation to, 187–189

generalized additive models, 189–190

noisy integral equations, smooth backfitting as solution of, 187

SBK method, relation to, 187–189

smooth backfitting local linear estimator, 186–187

Smoothly clipped absolute deviation (SCAD) penalty function

Special regressor method, 38–58

alternative derivation, 45

conditional independence assumption, relaxation of, 52–53

covariates, identification with, 47–48

extensions, 56–57

latent marginal distributions, identification of, 40–42

latent nonparametric instrumental variables, 50–51

latent partly linear regression, 50

latent random coefficients, 49

*T*_{ji}, construction of, 53–55

Spline-backfitted kernel smoothing (SBK) method

smooth least squares estimator in additive models, relation to, 187–189

Spline-backfitted spline (SBS) method in additive models, 170–172

SRM (Structural risk minimization) method, 347

Stochastic processes

selected proofs, 411–417

unit root asymptotics with deterministic trends, 391–395

Storlie-Bondell-Reich-Zhang adaptive component selection and smoothing operator (ACOSSO), 287

variable selection, 287

Structural risk minimization (SRM) method, 347

Support vector machine (SVM) method, 352–369

overview, 346–348

CreditReform database, application to, 366–368

formulation, 352–361

in linearly nonseparable case, 356–358

sequential minimal optimization (SMO) algorithm, 361

Time series, 377–476

cointegrated systems

automated efficient estimation of, 398–402

cointegration of nonlinear processes, 471–473

selected proofs, 413–415

panels of time series and factor models, 197–198

selected proofs, 411–417

semilinear time series, 421–441

unit root asymptotics with deterministic trends, 391–395

Two-step estimator

in nonparametric set of regression equations, 489–490

SBS (Spline-backfitted spline) method, 170–172

Unit root asymptotics with deterministic trends, 391–395

Univariate nonlinear modeling, 450–452

Variable selection, 249–302

in additive models, 255–262

Meier-Geer-Bühlmann sparsity-smoothness penalty, 258–260

Ravikumar-Liu-Lafferty-Wasserman sparse adaptive models, 260–262

Xue SCAD procedure, 262

in nonparametric models, 283–295

Bertin-Lecué model, 290–291

Bunea consistent selection via LASSO, 288

Comminges-Dalayan consistent selection in high-dimensional nonparametric regression, 288–289

Lafferty-Wasserman rodeo procedure, 291–293

Lin-Zhang component selection and smoothing operator (COSSO), 284–287

Miller-Hall locally adaptive bandwidth and variable selection (LABAVS) in local polynomial regression, 293–295

Storlie-Bondell-Reich-Zhang adaptive component selection and smoothing operator (ACOSSO), 287

in PLMs, 263–268

Chen-Yu-Zou-Liang adaptive elastic-net estimator, 266

Kato-Shiohama PLMs, 265–266

Liang-Li variable selection with measurement errors, 267

Liu-Wang-Liang additive PLMs, 267–268

Ni-Zhang-Zhang double-penalized least squares regression, 264–265

Xie-Huang SCAD-penalized regression in high-dimension PLMs, 263–264

in quantile regression, 295–301

Kai-Li-Zou composite quantile regression, 297–299

Koenker additive models, 296–297

Liang-Li penalized quantile regression for PLMs with measurement error, 295–296

Lin-Zhang-Bondell-Zou sparse nonparametric quantile regression, 299–301

in single index models, 274–283

generalized single index models, 281–283

Liang-Liu-Li-Tsai partially linear single index models, 278–279

Peng-Huang penalized least squares method, 275–276

Yang variable selection for functional index coefficient models, 280–281

Zeng-He-Zhu LASSO-type approach, 276–278

in varying coefficient models, 268–274

Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271

Li-Liang variable selection in generalized varying coefficient partially linear models, 272

sieve estimation-based variable selection in varying coefficient models with longitudinal data, 273–274

Wang-Xia kernel estimation with adaptive group LASSO penalty, 269–270

Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272

Varying coefficient models, 199–200

alternative specifications of nonparametric/semiparametric set of regression equations

with endogenous variables, 495–497

nonparametric models, 494–495

semiparametric models with nonstationary data

cointegrating models, 464–466

models with correlated but not cointegrated data, 466–467

time trend varying coefficient models, 468–469

variable selection in, 268–274

Lian double adaptive group LASSO in high-dimensional varying coefficient models, 270–271

Li-Liang variable selection in generalized varying coefficient partially linear models, 272

sieve estimation-based variable selection in varying coefficient models with longitudinal data, 273–274

Wang-Xia kernel estimation with adaptive group LASSO penalty, 269–270

Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272

Wages

Wang-Xia kernel estimation with adaptive group LASSO penalty, 269–270

White’s Theorem, 317–318

Xie-Huang SCAD-penalized regression in high-dimension PLMs, 263–264

Xue SCAD procedure, 262

Yang variable selection for functional index coefficient models, 280–281

Zeng-He-Zhu LASSO-type approach, 276–278

Zhao-Xue SCAD variable selection for varying coefficient models with measurement errors, 271–272