
Introduction

Abstract and Keywords

This introductory article sets out the themes that run through the Handbook. Almost every chapter has a substantial computation component: increasingly powerful computers and simulation algorithms have gone hand in hand with model development, and for some models Bayesian econometric methods are now predominant. Further themes include flexible and nonparametric modelling, heterogeneity, and the problems caused by the proliferation of parameters. The models discussed have grown greatly in range and level of complication, and many are strongly infused with economic theory and decision-theoretic considerations, reflecting the way econometrics is distinguished from statistics by its combination of economic theory with statistics. The article then outlines the Handbook's three parts, covering principles, methods, and applications, which together span a broad range of the methods and models used by Bayesian econometricians in a wide variety of fields.

Keywords: computation component, simulation algorithms, nonparametric models, economic theory, Bayesian econometricians

1 Introduction

Bayesian econometrics has expanded enormously in recent years. This expansion has occurred not only in econometric theory, but also in empirical work. Many applied fields have seen a large increase in the use of Bayesian econometric methods. Researchers interested in learning the basics of Bayesian econometrics have available a wide range of textbooks, from the classic textbook of Zellner (1971) through the influential contributions of Poirier (1995) and Bauwens, Lubrano, and Richard (1999) to the recent burgeoning of graduate textbooks such as Geweke (2005), Koop (2003), Koop, Poirier, and Tobias (2007), Lancaster (2004), and Rossi, Allenby, and McCulloch (2005). However, there is no single source for researchers and policymakers wanting to learn about Bayesian methods in specialized fields, or for graduate students seeking to make the final step from textbook learning to the research frontier. The purpose of this Handbook is to fill this gap.

Although each chapter in this Handbook deals with a set of issues distinct to its topic, there are some unifying themes that run through many of the chapters. The first of these is the use of computationally intensive posterior simulation algorithms. In Bayesian econometrics, the simulation revolution has been so overwhelming that almost all of the chapters in this book have a substantial computation component. More powerful computers and more sophisticated simulation algorithms have developed hand in hand, leading to a virtuous cycle in which Bayesian econometric methods and model development have become complementary. Indeed, with some models (e.g. dynamic stochastic general equilibrium (DSGE) models), Bayesian methods have become predominant. In the case of DSGE models, there are several factors which account for this Bayesian predominance. But one of these factors is the availability of powerful simulation tools which allow the researcher to uncover features of the high-dimensional, irregular posterior distributions that arise.

A second theme that runs through many of the chapters is heterogeneity. In cross-sectional and panel data sets this is manifest at the individual level. That is, even after controlling for observable characteristics, individuals may still differ to such an extent that use of standard regression-based methods assuming slope coefficients which are common across individuals is inappropriate. In marketing, different groups of consumers may respond to a change in the price of a product differently. In labor economics, different individuals may have different returns to schooling. In time series econometrics, heterogeneity takes the form of time-varying parameters. For instance, in macroeconomics, parameters may follow stochastic processes permitting their gradual evolution, or they may follow Markov switching models permitting different values in expansions and contractions. Although the precise treatment of heterogeneity differs between macroeconomic time series applications and microeconomic panel data applications, the general issues which arise are the same. The researcher must seek to model this heterogeneity in some manner, and this is most conveniently done in a Bayesian framework using hierarchical priors. For instance, in Chapter 8 ("Bayesian Applications in Marketing"), heterogeneity relates to the issue of clustering of related groups of consumers using a mixture of normals model. In macroeconomic applications, the mixture of normals and Markov switching components can both be interpreted as hierarchical priors, suggesting a Bayesian treatment.
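To make the parallel concrete (a minimal sketch in generic notation, not taken from any particular chapter), cross-sectional heterogeneity is often captured by letting regression coefficients vary across individuals, while time series heterogeneity lets them vary over time:

\[
y_i = x_i'\beta_i + \varepsilon_i \quad \text{(individual-specific coefficients)}, \qquad y_t = x_t'\beta_t + \varepsilon_t \quad \text{(time-varying coefficients)},
\]

with, for example, \(\beta_t = \beta_{t-1} + \eta_t\) capturing gradual evolution, or \(\beta_t = \beta_{s_t}\) for a latent Markov chain \(s_t\) capturing switches between expansions and contractions. In each case the varying coefficients are treated as draws from a distribution whose parameters are themselves estimated, which is precisely the hierarchical prior formulation discussed below.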

A third, related, theme of much modern Bayesian econometrics arises from problems caused by proliferation of parameters. Many modern models either directly have high-dimensional parameter spaces (e.g. vector autoregressive (VAR) models and the nonlinear extensions of VARs used in macroeconomics and finance) or depend on latent variables of high dimension (e.g. the states in state space models are unobserved latent variables and are often of high dimension). In such models, hierarchical priors address concerns about over-fitting in the context of high-dimensional parameter spaces. That is, if the researcher simply allows a parameter, θi, to vary with i, then the parameter space proliferates regardless of whether i indexes individuals in a panel data exercise or indexes time in a time series exercise. By assuming θi for i = 1, ..., N to be drawn from a common distribution (i.e. by using a hierarchical prior), we retain a model that allows for individual heterogeneity, but in a much more parsimonious manner. The precise choice of a hierarchical prior (allowing for an adequate degree of heterogeneity, but not so much as to make the model over-parameterized) is crucial and several chapters in this Handbook describe the various choices that are coming to be seen as empirically sensible. For the Bayesian, the treatment of such issues is simple and straightforward: recent advances in Bayesian computation, in particular Markov chain Monte Carlo (MCMC) methods, allow the researcher to integrate out latent variables or nuisance parameters.
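As a minimal illustration of a hierarchical prior (generic notation, not the specification of any particular chapter), instead of leaving each θi unrestricted one might specify

\[
\theta_i \mid \mu, \tau^2 \sim N(\mu, \tau^2), \quad i = 1, \ldots, N, \qquad \mu \sim N(\mu_0, V_0), \qquad \tau^2 \sim \text{Inverse-Gamma}(a, b),
\]

so that heterogeneity across the θi is retained, but is disciplined by the low-dimensional hyperparameters (μ, τ²): the posterior shrinks each θi towards the common mean μ, with the degree of shrinkage determined by τ². Setting τ² = 0 recovers the pooled (homogeneous) model, while letting τ² grow without bound recovers the unrestricted one.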

Another theme of the Handbook will be immediately apparent to anyone who studied Bayesian econometrics 25 years ago. Before the availability of abundant computer power, Bayesian econometrics was mainly limited to models, such as the normal linear regression model with natural conjugate prior, for which analytical results are available. Now, the range and level of complication of models have greatly increased. Of particular note is the development of flexible parametric and nonparametric Bayesian approaches. For instance, in many decision problems in economics, the importance of allowing for asymmetric risk functions has led to econometric models where the use of symmetric distributions such as the normal is inappropriate. In the normal linear regression model, Bayesians are no longer wedded to either normality or linearity, but have developed methods that can relax either or both of these assumptions. Many of the chapters in this Handbook include flexible or nonparametric models appropriate to the specific chapter topic. In addition, we think this topic of such importance that we have devoted an entire chapter to it.

Finally, econometrics is distinguished from statistics by its combination of economic theory with statistics. From the analysis of DSGE models through marketing models of consumer choice, we often have models which are strongly infused with economic theory and in which decision-theoretic issues are important. Bayesian methods have enjoyed an increasing popularity in such cases. Such models are often parameterized in terms of structural parameters with an economic interpretation and attendant prior information. Many of the chapters in this Handbook are characterized by their careful linking of statistical with economic theory and their close attention to prior elicitation. Furthermore, features of interest to policymakers facing decision problems can be directly calculated using output from an MCMC algorithm.

This book is organized in three parts addressing principles, methods, and applications, respectively. In the following sections of this Introduction, we offer brief summaries of the contributions of this Handbook in each of these areas.

2 Principles

This section contains two chapters on principles of Bayesian analysis especially relevant in econometrics.

Chapter 1 on "Bayesian Aspects of Treatment Choice" by Gary Chamberlain offers a Bayesian approach to decision theory, focusing on the case of an individual deciding between treatments. An important focus of this chapter is the role of information that is available about other individuals through a propensity score. The chapter shows how the propensity score does not appear in the likelihood function, but does appear in the prior. The chapter discusses various priors in this context. It takes up the extension to the case of treatment selection based on unobservables (including a case where an instrumental variable is available) and provides a comparison with the related literature.
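For readers unfamiliar with the term, the propensity score is, in standard notation (not the chapter's own), the conditional probability of receiving treatment given observed covariates,

\[
e(x) = \Pr(D_i = 1 \mid X_i = x),
\]

where \(D_i\) indicates treatment status. The chapter's observation is that, although e(x) drops out of the likelihood for the outcome data, beliefs about it can still enter the analysis through the prior.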

Dale Poirier, in Chapter 2 on "Exchangeability, Representation Theorems, and Subjectivity", turns to the foundations of statistical inference rooted in the representation theorems of Bruno de Finetti. He refers to his chapter as a "subjectivist primer" and shows how different assumptions about the joint distribution of the observable data lead to different parametric models defined by prior and likelihood function. Thus, parametric models arise as an implication of the assumptions the researcher makes about observables. Parameters are merely convenient mathematical fictions which serve, in his words, as "lubricants for fruitful communication and thinking". The chapter presents many extensions and offers a clear exposition of the subjectivist attitude which underlies much of Bayesian econometrics.
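The classic example (stated here for the binary case only, as a reminder rather than as the chapter's own exposition) is de Finetti's theorem: if y1, y2, ... is an infinite exchangeable sequence of 0-1 outcomes, then there exist a parameter θ and a prior distribution Π such that

\[
\Pr(y_1, \ldots, y_n) = \int_0^1 \theta^{\sum_{i=1}^n y_i} (1 - \theta)^{\,n - \sum_{i=1}^n y_i} \, d\Pi(\theta)
\]

for every n. In other words, the judgement of exchangeability about observables alone delivers both a likelihood (i.i.d. Bernoulli(θ) draws) and a prior Π over θ.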

3 Methods

The second part of the Handbook contains three chapters about Bayesian methods that are important in their own right and are used in most of the remaining chapters.

Chapter 3 on "Time Series State Space Models" by Paolo Giordani, Michael Pitt, and Robert Kohn provides a description of the time series methods that underpin much of modern macroeconomics and finance. In these fields, state space methods are commonly used. For instance, various regime switching and change-point models (e.g. Markov switching or time-varying parameter VARs) are state space models, as is the popular dynamic factor model. Stochastic volatility models are state space models. Various treatments of outliers, breaks, and jumps in time series also involve state space models. This chapter discusses a variety of posterior simulation algorithms and illustrates their use in a range of models such as those just listed. It is worth noting the extensive discussion of particle filtering methods in this chapter. The particle filter is a very useful tool in the Bayesian analysis of the kinds of complicated nonlinear state space models which are increasingly being used in macroeconomics and finance. The very practical discussion of the advantages and disadvantages of each algorithm provided in this chapter will be of use to the reader wanting to use these methods in empirical work.
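In generic notation (a sketch, not the chapter's own), a state space model pairs a measurement equation for the observed series \(y_t\) with a transition equation for an unobserved state \(\alpha_t\); in the linear Gaussian case,

\[
y_t = Z_t \alpha_t + \varepsilon_t, \qquad \alpha_t = T_t \alpha_{t-1} + \eta_t, \qquad \varepsilon_t \sim N(0, H_t), \quad \eta_t \sim N(0, Q_t).
\]

The stochastic volatility model is a simple nonlinear example, for instance \(y_t = \exp(h_t/2)\,\varepsilon_t\) with \(h_t = \mu + \phi(h_{t-1} - \mu) + \eta_t\), in which the log-volatility \(h_t\) plays the role of the state; it is for nonlinear and non-Gaussian models of this kind that particle filtering methods are especially useful.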

The burgeoning use by Bayesians of models far more flexible than the simple parametric models of the past was noted above. Chapter 4 on "Flexible and Nonparametric Modelling" by Jim Griffin, Fernando Quintana, and Mark Steel serves to take the reader to the research frontier in the use of these models. The chapter divides into two parts: the first considers flexible parametric models, while the second is purely nonparametric. For the Bayesian, nonparametric models are those where the dimension of the parameter space is unfixed and unbounded. Within the class of flexible parametric models, the authors discuss ways of making distributions more flexible than the normal, first in terms of fat tails and then in terms of skewness. A brief discussion of finite mixture models opens the way to the nonparametric part of the chapter. The most popular Bayesian nonparametric approach involves the use of Dirichlet processes and results in an infinite mixture representation for the data. The chapter discusses Dirichlet processes in detail, describes various posterior simulation algorithms for Bayesian nonparametric models, and illustrates their usefulness in empirical illustrations. The chapter also contains a discussion of methods for flexibly estimating the conditional mean in a regression model, including splines, Gaussian processes, and smoothing priors. The concluding part of the chapter ties the previous parts together in a discussion of fully nonparametric (or flexible parametric) regression modelling. That is, it considers the case where both the conditional mean of the regression is given a nonparametric (or flexible parametric) treatment and the p.d.f. (probability density function) of the regression error is estimated nonparametrically (or with a flexible parametric finite mixture model). In short, this chapter provides a detailed discussion of both theory and computation for the reader interested in flexible treatment of distributions or functional forms or both.
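As a sketch of the infinite mixture representation referred to here (standard notation, not the chapter's own), a Dirichlet process prior with concentration parameter α and base distribution G0 implies the stick-breaking form

\[
G = \sum_{k=1}^{\infty} \pi_k \, \delta_{\theta_k}, \qquad \pi_k = v_k \prod_{j<k} (1 - v_j), \qquad v_k \sim \text{Beta}(1, \alpha), \qquad \theta_k \sim G_0,
\]

so that a Dirichlet process mixture model, in which \(y_i \mid \theta_i \sim f(y_i \mid \theta_i)\) with \(\theta_i \sim G\), is an infinite mixture of the kernel \(f\); the data determine how many of the components \((\pi_k, \theta_k)\) are effectively used, which is what gives the approach its nonparametric character.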

Chapter 5 is an "Introduction to Simulation and MCMC Methods" by Siddhartha Chib. As discussed above, posterior simulation methods have revolutionized Bayesian econometrics. This chapter begins with an intuitive exposition of the ideas and concepts which underlie popular algorithms such as importance sampling and the Metropolis-Hastings algorithm, before moving on to multi-block algorithms (e.g. the Gibbs sampler). These algorithms are used in almost every other chapter of this book, so the reader unfamiliar with posterior simulation should first read Chapter 5. This chapter also discusses state-of-the-art algorithms that are not yet in the textbooks. For instance, in Bayesian analysis of DSGE models, the posterior is potentially of a complicated form and no natural blocking of the parameters for an MCMC algorithm suggests itself. An empirical illustration using a DSGE model shows the usefulness of tailored randomized block Metropolis-Hastings algorithms. Finally, the chapter offers extensive discussion of marginal likelihood calculation using posterior simulator output.
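As a reminder of the basic building block (stated in standard notation rather than the chapter's own), the Metropolis-Hastings algorithm draws a candidate θ′ from a proposal density q(θ′ | θ) and accepts it with probability

\[
\alpha(\theta, \theta') = \min\left\{ 1, \; \frac{p(y \mid \theta')\, p(\theta')\, q(\theta \mid \theta')}{p(y \mid \theta)\, p(\theta)\, q(\theta' \mid \theta)} \right\},
\]

retaining the current value θ otherwise; the resulting Markov chain has the posterior as its stationary distribution. The Gibbs sampler is the special case in which each block of parameters is drawn directly from its full conditional distribution, so that every candidate is accepted.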

4 Applications

The chapters in the third part of the book show how the computational methods and modelling ideas of the earlier chapters are being used by Bayesian econometricians. The aims of these chapters are twofold. First, each chapter aims to provide as broad an overview of its field as possible. Second, each chapter aims to familiarize the reader with the most recent research in that field.

Chapter 6, on "Bayesian Methods in Microeconometrics" by Mingliang Li and Justin Tobias, surveys a broad range of models used by microeconometricians. Beginning with the regression model, this chapter considers extensions such as heteroskedasticity and the hierarchical linear model (both of which draw on ideas from Chapter 4) and provides a discussion of Bayesian treatments of endogeneity problems. A large number of models can be expressed as linear regression models in which the dependent variable is a suitably defined latent variable. Examples include probit and logit. Such nonlinear hierarchical models form the basis of much of this chapter which, after a general treatment, provides several examples and extensions of multivariate models. As part of the latter, multinomial and multivariate probit models are discussed extensively, as are treatment effects models, which have played an important role in recent policy debates (see also Chapter 1). Bayesian methods have proved popular with nonlinear hierarchical models since MCMC methods involving data augmentation (discussed in Chapter 6) can typically be used. Furthermore, Bayesian methods allow the researcher to go beyond a study of model parameters and uncover the posterior of any feature of interest; for example, in a policy study such features might be the effects of treatment on the treated or a local average treatment effect, both of which are much more informative than simply presenting parameter estimates. The chapter concludes with a discussion of duration models. It is replete with many empirical examples and advice for the reader interested in using Bayesian methods in practice.
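A standard example of this latent variable formulation and the associated data augmentation (a generic sketch, not the chapter's own notation) is the probit model,

\[
y_i^* = x_i'\beta + \varepsilon_i, \qquad \varepsilon_i \sim N(0, 1), \qquad y_i = \mathbf{1}(y_i^* > 0),
\]

where an MCMC sampler alternates between drawing β from its normal conditional posterior given the latent \(y_i^*\) (as in a linear regression with known error variance) and drawing each \(y_i^*\) from a normal distribution truncated to be positive if \(y_i = 1\) and negative if \(y_i = 0\).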

In recent years, Bayesian methods have enjoyed particular success in macroeconomics. Two main reasons for this are: (i) the fact that macroeconomic models often involve high-dimensional parameter spaces (e.g. as in VARs) and (ii) macroeconomists often desire to incorporate economic theory (e.g. as in the Bayesian estimation of DSGE models). As discussed previously, these are two of the themes that run throughout this Handbook, but they run particularly strongly in macroeconomics. Chapter 7 on "Bayesian Macroeconometrics" by Marco Del Negro and Frank Schorfheide emphasizes these points repeatedly. The introduction to their chapter, with its discussion of the "challenges for inference and decisionmaking" in macroeconomics and the Bayesian response to them, offers a succinct justification for the use of Bayesian methods in macroeconomics. Much of the chapter deals with multivariate time series models such as VARs and vector error correction models which are so popular in the field. But many macroeconomists may find the clear exposition of Bayesian DSGE modelling of greatest interest. DSGE modelling is a field where Bayesian methods are enjoying great popularity, but little in the way of textbook exposition exists. This chapter takes the reader through both linear and nonlinear DSGE models, describing the practical issues that arise when implementing these methods.
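To see why the parameter spaces referred to in (i) become high-dimensional so quickly, consider a generic VAR with K variables and p lags (a sketch, not the chapter's notation):

\[
y_t = c + B_1 y_{t-1} + \cdots + B_p y_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma),
\]

which involves \(K + pK^2\) conditional mean coefficients plus the \(K(K+1)/2\) free elements of Σ; with, say, K = 20 variables and p = 4 lags, that is already more than 1,600 coefficients, which is why prior information and shrinkage play such a prominent role in Bayesian macroeconometrics.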

Macroeconomists often find that parameters change. Models that fit well in the 1970s may not fit well now. Models that fit well in recessions may not fit well in expansions. This challenge of building models that allow for the right amount and sort of change (but not too much change or change of the wrong sort, since then the model can be over‐parameterized) lies at the heart of much current macroeconomic research. This chapter takes the reader through many of the approaches (e.g. time‐varying parameter VARs, Markov switching models, and even DSGE models with Markov switching) that are currently being used by macroeconometricians.

Chapter 7 also has a discussion of the challenges that arise since macroeconomists are often working in data‐rich environments. It shows how Bayesian methods have been empirically successful in responding to these challenges. The chapter concludes with a discussion of model uncertainty (often an important issue in macroeconomics) and decision‐making with multiple models.
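A brief sketch of the standard Bayesian treatment of multiple models (generic notation): given models M1, ..., MR, posterior model probabilities and model-averaged inference about a quantity of interest Δ take the form

\[
p(M_r \mid y) \propto p(y \mid M_r)\, p(M_r), \qquad p(\Delta \mid y) = \sum_{r=1}^{R} p(\Delta \mid y, M_r)\, p(M_r \mid y),
\]

where \(p(y \mid M_r)\) is the marginal likelihood of model r, the quantity whose computation from posterior simulator output is discussed in Chapter 5; decision-making with multiple models then amounts to minimizing expected loss under this model-averaged posterior.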

Bayesian methods have been used increasingly in marketing, as emphasized in Chapter 8 by Peter Rossi and Greg Allenby on "Bayesian Applications in Marketing". This chapter describes various discrete choice models of consumers who may be heterogeneous both in their preferences and in their sensitivities to marketing variables such as price. This poses a distinct set of challenges that are addressed in this chapter, often through the use of hierarchical priors. Nonparametric and flexible parametric models involving Dirichlet processes and other mixtures have also become widely used in marketing. Building on Chapter 4, Chapter 8 describes how and why such methods are used in marketing. Computational issues are important when dealing with large marketing data sets and this chapter is full of useful advice to the practitioner on how to implement posterior simulation methods in marketing models. Of particular interest is the authors' package of R computer code (bayesm) which can be used to implement the various models described in the chapter (Rossi and McCulloch, 2008).
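As an illustrative sketch (generic notation, not taken from the chapter), a hierarchical multinomial logit of the kind used in these applications lets household h choose alternative j with probability

\[
\Pr(y_h = j \mid x_h, \beta_h) = \frac{\exp(x_{hj}'\beta_h)}{\sum_{k} \exp(x_{hk}'\beta_h)}, \qquad \beta_h \sim \sum_{m=1}^{M} \pi_m\, N(\mu_m, \Sigma_m),
\]

so that household-level price and attribute sensitivities \(\beta_h\) are drawn from a mixture-of-normals first-stage prior that clusters related consumers; letting the number of components be determined flexibly (for example via a Dirichlet process) gives the nonparametric versions mentioned above. Hierarchical models of this general type are among those implemented in the bayesm package.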

Chapter 9, on "Bayesian Methods in Finance" by Eric Jacquier and Nicholas Polson, offers a thorough survey of the usefulness of Bayesian methods in finance. It covers all the major topics in finance (from asset allocation and portfolio management to option pricing and everything in between). Building on earlier chapters in the Handbook (especially Chapters 3 and 5), it describes the MCMC and particle filtering algorithms that lie at the heart of modern Bayesian financial econometrics. Many of the other themes in modern Bayesian econometrics, including the use of shrinkage (e.g. with hierarchical priors), the interaction between theory and econometrics, and the desire to obtain posteriors for complicated nonlinear functions of model parameters, run throughout this chapter.
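One canonical example of how parameter uncertainty enters such problems (a sketch under generic assumptions, not the chapter's own treatment) is Bayesian portfolio choice based on the predictive distribution of future returns,

\[
p(r_{T+1} \mid r_{1:T}) = \int p(r_{T+1} \mid \theta)\, p(\theta \mid r_{1:T})\, d\theta,
\]

with portfolio weights chosen to maximize expected utility under this predictive distribution rather than at plugged-in parameter estimates, so that estimation risk is incorporated into the allocation automatically.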

5 Conclusion

This Handbook is intended as a resource for readers with a textbook-level knowledge of Bayesian econometrics who wish to move to the research frontier. With this goal in mind, we have striven to produce a volume that is characterized by both breadth and accessibility. That is, one of our aims has been to cover a broad range of the methods and models used by Bayesian econometricians in a wide variety of fields. A second aim has been to have chapters written in a manner that allows a student to use the methods and models described in each chapter to do empirical work. As editors, we are grateful to the authors of the chapters in this book. We feel that they have admirably achieved the goals we set out for them, and we hope the readers of this book will feel the same.

References

Bauwens, L., Lubrano, M., and Richard, J.-F. (1999). Bayesian Inference in Dynamic Econometric Models. Oxford: Oxford University Press.

Geweke, J. (2005). Contemporary Bayesian Econometrics and Statistics. New York: John Wiley and Sons.

Koop, G. (2003). Bayesian Econometrics. Chichester: John Wiley and Sons.

Koop, G., Poirier, D.J., and Tobias, J.L. (2007). Bayesian Econometric Methods. Cambridge: Cambridge University Press.

Lancaster, T. (2004). An Introduction to Modern Bayesian Econometrics. Oxford: Blackwell.

Poirier, D. (1995). Intermediate Statistics and Econometrics: A Comparative Approach. Cambridge, Mass.: MIT Press.

Rossi, P.E., Allenby, G., and McCulloch, R. (2005). Bayesian Statistics and Marketing. Chichester: John Wiley and Sons.

Rossi, P.E. and McCulloch, R. (2008). bayesm: Bayesian Inference for Marketing/Micro-econometrics. R package version 2.2.2. 〈http://faculty.chicogogsb.edu/peter.rossi/research/bsm.html〉

Zellner, A. (1971). An Introduction to Bayesian Inference in Econometrics. New York: John Wiley and Sons.