Adaptation in an Uncertain World—Detection and Attribution of Climate Change Trends and Extreme Possibilities
Xiyue Li and Gary Yohe
This chapter offers results from an artificial simulation exercise that was designed to answer three fundamental questions that lie at the heart of anticipatory adaptation. First, how can confidence in projected vulnerabilities and impacts be greater than the confidence in attributing what has heretofore been observed? Second, are there characteristics of recent historical data series that do or do not portend our achieving high confidence in attribution to climate change in support of framing adaptation decisions in an uncertain future? And finally, what can analysis of confidence in attribution tell us about ranges of “not-implausible” extreme futures vis-à-vis projections based at least implicitly on an assumption that the climate system is static? An extension of the IPCC method of assessing our confidence in attribution to anthropogenic sources of detected warming presents an answer to the first question. It is also possible to identify characteristics that support an affirmative answer to the second. Finally, this chapter offers some insight into the significance of our attribution methodology in informing attempts to frame considerations of potential extremes and how to respond.
Jubayer Chowdhury and Teng Wu
This chapter gives an overview of the aerodynamic loading on structures due to non-synoptic wind events, mainly tornadoes and downbursts. A brief description of the provisions in building codes and standards for non-synoptic wind loads is presented. The current state of the art in simulating non-synoptic wind systems to obtain wind loads on structures is also discussed. The discussion of aerodynamic loading focuses primarily on buildings, bridges, and transmission lines. Finally, some insights are given on how future research in evaluating non-synoptic wind loads on structures might unfold.
Algebraic generality versus arithmetic generality in the 1874 controversy between C. Jordan and L. Kronecker
This article revisits the 1874 controversy between Camille Jordan and Leopold Kronecker over two theorems, namely Jordan’s canonical forms and Karl Weierstrass’s elementary divisors theorem. In particular, it compares the perspectives of Jordan and Kronecker on generality and how their debate turned into an opposition over the algebraic or arithmetic nature of the ‘theory of forms’. It also examines the ways in which the various actors used the categories of algebraic generality and arithmetic generality. After providing a background on the Jordan-Kronecker controversy, the article explains Jordan’s canonical reduction and Kronecker’s invariant computations in greater detail. It argues that Jordan and Kronecker aimed to ground the ‘theory of forms’ on new forms of generality, but could not agree on the types of generality and on the treatments of the general they were advocating.
This article discusses the connection between the matrix models and algebraic geometry. In particular, it considers three specific applications of matrix models to algebraic geometry, namely: the Kontsevich matrix model that describes intersection indices on moduli spaces of curves with marked points; the Hermitian matrix model free energy at the leading expansion order as the prepotential of the Seiberg-Witten-Whitham-Krichever hierarchy; and the other orders of free energy and resolvent expansions as symplectic invariants and possibly amplitudes of open/closed strings. The article first describes the moduli space of algebraic curves and its parameterization via the Jenkins-Strebel differentials before analysing the relation between the so-called formal matrix models (solutions of the loop equation) and algebraic hierarchies of Dijkgraaf-Witten-Whitham-Krichever type. It also presents the WDVV (Witten-Dijkgraaf-Verlinde-Verlinde) equations, along with higher expansion terms and symplectic invariants.
Since about 1970 the broadly accepted theory of the universe has been the standard hot big-bang model. However, there are, and have always been, alternative theories that challenge one or more features of the standard model or, more radically, question the scientific nature of cosmology. Is the universe governed by Einstein’s field equations? Is it really in a state of expansion? Did it begin with a big bang? The chapter discusses various alternative or heterodox theories in the period from about 1930 to 1980, among them the idea of a static universe and the conception that our universe evolves cyclically in infinite cosmic time. While some of these theories were abandoned long ago, others still live on and are cultivated by a minority of cosmologists and other scientists.
Hedibert Lopes and Nicholas Polson
This article discusses the use of Bayesian multiscale spatio-temporal models for the analysis of economic data. It demonstrates the utility of a general modelling approach for multiscale analysis of spatio-temporal processes with areal data observations in an economic study of agricultural production in the Brazilian state of Espírito Santo during the period 1990–2005. The article first describes multiscale factorizations for spatial processes before presenting an exploratory multiscale data analysis and explaining the motivation for multiscale spatio-temporal models. It then examines the temporal evolution of the underlying latent multiscale coefficients and goes on to introduce a Bayesian analysis based on the multiscale decomposition of the likelihood function along with Markov chain Monte Carlo (MCMC) methods. The results from the agricultural production analysis show that the spatio-temporal framework can effectively analyse massive economic data sets.
Djordje Romanic and Horia Hangan
Analytical and semi-empirical models are inexpensive to run and can complement experimental and numerical simulations for risk analysis-related applications. Some models are developed by employing simplifying assumptions in the Navier-Stokes equations and searching for exact, though often inviscid, solutions, occasionally complemented by boundary-layer equations to take surface effects into account. Others use simple superpositions of generic, canonical flows for which the individual solutions are known. These solutions are then assembled by empirical or semi-empirical fitting procedures. Few models address turbulent or fluctuating flow fields, and all models have a series of constants that are fitted against experiments or numerical simulations. This chapter presents the main models used to provide primarily mean flow solutions for tornadoes and downbursts. The models are organized based on the adopted solution techniques, with an emphasis on their assumptions and validity.
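As a point of reference for the superposition approach, one widely used canonical building block for tornado-like flows is the Rankine vortex (the notation here is generic, not necessarily that of the specific models in this chapter):

$$V_\theta(r) = \begin{cases} V_{\max}\, r/r_c, & r \le r_c,\\[2pt] V_{\max}\, r_c/r, & r > r_c, \end{cases}$$

where $r_c$ is the core radius and $V_{\max}$ the maximum tangential velocity; translation, radial, and vertical velocity components are then typically added by superposition and tuned through empirical fitting.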
D. Daghero, G.A. Ummarino, and R.S. Gonnelli
This article investigates the potential of the point contact Andreev reflection spectroscopy (PCARS) technique for measuring the symmetry of the energy gap and other key parameters of various 0-, 1-, and 2-dimensional superconducting systems. It begins with a brief description of PCARS, explaining what a point contact is and how it can be made and the conditions under which a PC is ballistic, as well as why and to what extent a PC between normal metals is spectroscopic. It then discusses the basics of Andreev reflection and the length scales in mesoscopic systems before considering the limits of applicability of PCARS for spectroscopy of ‘small’ superconductors. Finally, it reviews some examples of PCARS in quasi-0D, quasi-1D and quasi-2D superconductors.
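For orientation, the ballistic regime mentioned above requires the contact radius $a$ to be much smaller than the electron mean free path $\ell$; in that limit the contact resistance is well approximated by the Sharvin expression

$$R_{\mathrm{Sharvin}} \simeq \frac{4\rho\ell}{3\pi a^{2}},$$

where $\rho$ is the resistivity. Because the product $\rho\ell$ is roughly a material constant, this relation lets the contact size be estimated from the measured resistance; the symbols are generic and are introduced here only to illustrate the criterion discussed in the article.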
Pontus Lurcock and Fabio Florindo
Antarctic climate changes have been reconstructed from ice and sediment cores and numerical models (which also predict future changes). Major ice sheets first appeared 34 million years ago (Ma) and fluctuated throughout the Oligocene, with an overall cooling trend. Ice volume more than doubled at the Oligocene-Miocene boundary. Fluctuating Miocene temperatures peaked at 17–14 Ma, followed by dramatic cooling. Cooling continued through the Pliocene and Pleistocene, with another major glacial expansion at 3–2 Ma. Several interacting drivers control Antarctic climate. On timescales of 10,000–100,000 years, insolation varies with orbital cycles, causing periodic climate variations. Opening of Southern Ocean gateways produced a circumpolar current that thermally isolated Antarctica. Declining atmospheric CO2 triggered Cenozoic glaciation. Antarctic glaciations affect global climate by lowering sea level, intensifying atmospheric circulation, and increasing planetary albedo. Ice sheets interact with ocean water, forming water masses that play a key role in global ocean circulation.
M.J. Rob Nout, Bei-Zhong Han, and Cherl-Ho Lee
This article discusses the fermentation process for Asian foods, with particular emphasis on fermented food products made from major primary produce such as soybeans, cereals, and meat. A selection of representative fermentations in Asian countries or subregions is provided, and fermentations dominated by different types of microorganisms (bacteria, yeasts, and filamentous fungi) are described. Furthermore, fermentations are distinguished according to the inclusion of salt. The article first considers the scientific knowledge relating to food production, microbiological and chemical composition, and the (bio)chemical changes taking place during fermentation, before giving an overview of fermented soybean products such as soybean sauce and soybean paste. It then examines fermented meat products and cereal products and concludes with an assessment of prospects for research and development relating to the fermentation of Asian products.
James S. Clark, Dave Bell, Michael Dietze, Michelle Hersh, Ines Ibanez, Shannon LaDeau, Sean McMahon, Jessica Metcalf, Emily Moran, Luke Pangle, and Mike Wolosin
This article focuses on the use of Bayesian methods in assessing the probability of rare climate events, and more specifically the potential collapse of the meridional overturning circulation (MOC) in the Atlantic Ocean. It first provides an overview of climate models and their use to perform climate simulations, drawing attention to uncertainty in climate simulators and the role of data in climate prediction, before describing an experiment that simulates the evolution of the MOC through the twenty-first century. MOC collapse is predicted by the GENIE-1 (Grid Enabled Integrated Earth system model) for some values of the model inputs, and Bayesian emulation is used for collapse probability analysis. Data comprising a sparse time series of five measurements of the MOC from 1957 to 2004 are analysed. The results demonstrate the utility of Bayesian analysis in dealing with uncertainty in complex models, and in particular in quantifying the risk of extreme outcomes.
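Schematically, Bayesian emulation treats the simulator output as an unknown function of its inputs and places a Gaussian process prior on it, $f(\cdot) \sim \mathrm{GP}(m(\cdot), k(\cdot,\cdot))$; conditioning on a modest ensemble of GENIE-1 runs yields a fast probabilistic surrogate, and the probability of MOC collapse is then obtained by integrating the surrogate's predictions over the uncertainty in the model inputs. This is a generic description of the emulation idea rather than the specific formulation used in the study.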
Joan Martí Molist
Volcanoes represent complex geological systems capable of generating many dangerous phenomena. To evaluate and manage volcanic risk, we need first to assess volcanic hazard (i.e., identify past volcanic system behavior to infer future behavior). This requires acquisition of all relevant geological and geophysical information, such as stratigraphic studies, geological mapping, sedimentological studies, petrologic studies, and structural studies. All this information is then used to elaborate eruption scenarios and hazard maps. Stratigraphic studies represent the main tool for the reconstruction of past activity of volcanoes over time periods exceeding their historical record. This review presents a systematic approach to volcanic hazard assessment, paying special attention to reconstruction of past eruptive history. It reviews concepts and methods most commonly used in long- and short-term hazard assessment and analyzes how they help address the various serious consequences derived from the occurrence (and nonoccurrence in some crisis alerts) of volcanic eruptions and related phenomena.
Timothy P. Marshall and J. Arn Womble
Most building damage occurs at relatively low wind speeds, at or below 50 m s⁻¹ (112 mph), as certain components fail, such as doors, windows, chimneys, and roof coverings. Rainwater then enters these openings, leading to interior damage. Structural failures usually begin with the removal of gable end walls, roof decking, and poorly attached roof structures as wind speeds increase; the greatest damage occurs at roof level because wind speed increases with height above the ground. Internal wind pressure effects can lead to additional, more catastrophic damage, such as the removal of walls and ceilings. It is difficult to measure wind speeds directly on buildings, as they would have to be instrumented well in advance of the storm, and there is no guarantee the storm would strike them. Furthermore, flying debris can damage pressure sensors on instrumented buildings. Thus, damage evaluators must infer failure wind speeds indirectly by studying damage left behind in the wake of windstorms, and it is important that they know how buildings are constructed to better understand how they fail. This chapter identifies similar failure modes in residential structures regardless of wind type, drawing on information from more than four decades of storm damage surveys. The information presented herein highlights some of the lessons learned in evaluating storm damage to wood-framed residential structures.
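As general background rather than a result of the chapter, wind loads scale with the square of the wind speed through the dynamic pressure, and the net load on a component combines external and internal pressures:

$$q = \tfrac{1}{2}\,\rho_{\mathrm{air}} V^{2}, \qquad p = q\,(C_{pe} - C_{pi}),$$

where $\rho_{\mathrm{air}}$ is the air density and $C_{pe}$ and $C_{pi}$ are generic external and internal pressure coefficients. This quadratic dependence, together with breaches that raise the internal pressure, helps explain why modest increases in wind speed can produce disproportionate damage.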
Antonia Tulino and Sergio Verdu
This article examines asymptotic singular value distributions in information theory, with particular emphasis on some of the main applications of random matrices to the capacity of communication channels. Results on the spectrum of random matrices have been adopted in information theory. Furthermore, information theorists, motivated by certain channel models, have obtained a number of new results in random matrix theory (RMT). Most of those results are related to the asymptotic distribution of the squared singular values of certain random matrices that model data communication channels. The article first provides an overview of three transforms that are useful in expressing the asymptotic spectrum results — the Stieltjes transform, the η-transform, and the Shannon transform — before discussing the main results on the limit of the empirical distributions of the eigenvalues of various random matrices of interest in information theory.
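For reference, the three transforms may be written for a nonnegative random variable $X$ distributed according to the asymptotic eigenvalue (squared singular value) distribution of interest, following common usage in the random-matrix literature:

$$S_X(z) = \mathbb{E}\!\left[\frac{1}{X - z}\right], \qquad \eta_X(\gamma) = \mathbb{E}\!\left[\frac{1}{1 + \gamma X}\right], \qquad \mathcal{V}_X(\gamma) = \mathbb{E}\!\left[\log(1 + \gamma X)\right].$$

Evaluated at the signal-to-noise ratio, the Shannon transform gives, for many linear channel models, the asymptotic capacity per dimension.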
M. Stamenova and S. Sanvito
This article reviews recent advances towards the development of a truly atomistic time-dependent theory for spin-dynamics. The focus is on the s-d tight-binding model [where conduction electrons (s) are exchange-coupled to a number of classical spins (d)], including electrostatic corrections at the Hartree level, as the underlying electronic structure theory. In particular, the article considers one-dimensional (1D) magnetic atomic wires and their electronic structure, described by means of the s-d model. The discussion begins with an overview of the model spin Hamiltonian, followed by molecular-dynamics simulations of spin-wave dispersion in an s-d monoatomic chain and of spin impurities in a non-magnetic chain. The current-induced motion of a magnetic domain wall (DW) is also explored, along with how an electric current can affect the magnetization landscape of a magnetic nano-object. The article concludes with an assessment of the spin-motive force, and especially whether a driven magnetization dynamics can generate an electrical signal.
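Schematically, and leaving details such as sign conventions and the Hartree electrostatic corrections to the article itself, the s-d model couples a tight-binding electronic Hamiltonian $\hat H_{e}$ to classical local spins $\vec S_i$ through a local exchange term,

$$\hat H = \hat H_{e} - J \sum_i \vec S_i \cdot \hat{\vec s}_i,$$

where $\hat{\vec s}_i$ is the conduction-electron spin density at site $i$ and $J$ the exchange constant; the classical spins precess in the effective field $-\partial \langle \hat H \rangle / \partial \vec S_i$ while the electrons are propagated quantum mechanically.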
This article considers Josephson junction barriers, focusing on barriers made from insulators, metals, semiconductors, magnets, and nanowires. The main characteristic of Josephson junctions is the local reduction or even suppression of the critical current in the barrier. These barriers affect the static and dynamic properties of Josephson junctions, including coupling strength, ground state, phase damping, and tunability of the critical current. The article first provides an overview of the fundamental physics of Josephson junctions, with particular emphasis on the Josephson effect, before describing the properties of two coupled superconductors. It then discusses tunnel barriers, metallic barriers, semiconducting barriers, and magnetic barriers.
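For orientation, an ideal junction obeys the two Josephson relations

$$I = I_c \sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2eV}{\hbar},$$

where $\varphi$ is the gauge-invariant phase difference across the barrier, $I_c$ the critical current, and $V$ the voltage. The barriers reviewed in the article can modify the current-phase relation, the magnitude of $I_c$, and the damping of the phase dynamics.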
Antonio Pievatolo and Fabrizio Ruggeri
This article discusses the results of a Bayes linear uncertainty analysis for oil reservoirs based on multiscale computer experiments. Using the Gullfaks oil and gas reservoir located in the North Sea as a case study, the article demonstrates the applicability of Bayes linear methods to address highly complex problems for which a full Bayesian analysis may be computationally intractable. A reservoir simulation model, run at two different levels of complexity, is used; the simulator represents properties of the hydrocarbon reservoir on a three-dimensional grid. The article also describes a general formulation for the approach to uncertainty analysis for complex physical systems given a computer model for that system. Finally, it presents the results of simulations and forecasting for the Gullfaks reservoir.
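The core of the Bayes linear approach is the adjustment of prior means and variances by the data, stated here generically with $B$ the collection of uncertain quantities and $D$ the observed data:

$$\mathrm{E}_D(B) = \mathrm{E}(B) + \mathrm{Cov}(B, D)\,\mathrm{Var}(D)^{-1}\bigl(D - \mathrm{E}(D)\bigr), \qquad \mathrm{Var}_D(B) = \mathrm{Var}(B) - \mathrm{Cov}(B, D)\,\mathrm{Var}(D)^{-1}\,\mathrm{Cov}(D, B).$$

Because only first- and second-order prior specifications are required, rather than full probability distributions, the approach remains tractable for the multiscale simulator runs described above.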
Jonathan A. Cumming and Michael Goldstein
This article discusses the results of a study in Bayesian analysis and decision making in the maintenance and reliability of nuclear power plants. It demonstrates the use of Bayesian parametric and semiparametric methodology to analyse the failure times of components that belong to an auxiliary feedwater system in a nuclear power plant at the South Texas Project (STP) Electric Generation Station. The parametric models produce estimates of the hazard functions that are compared to the output from a mixture of Polya trees model. The statistical output is used as the most critical input in a stochastic optimization model which finds the optimal replacement time for a system that randomly fails over a finite horizon. The article first introduces the model for maintenance and reliability analysis before presenting the optimization results. It also examines the nuclear power plant data to be used in the Bayesian models.
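As an illustration of the parametric ingredients (the specific families used in the chapter are not identified in this summary), a common choice in reliability work is the Weibull hazard

$$h(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta - 1},$$

with shape $\beta$ and scale $\eta$; values $\beta > 1$ correspond to an increasing (wear-out) hazard, and estimates of $h(t)$ of this kind are the inputs to the downstream replacement-time optimization.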
Dani Gamerman, Tufi M. Soares, and Flávio Gonçalves
This article discusses the use of a Bayesian model that incorporates differential item functioning (DIF) in analysing whether cultural differences may affect the performance of students from different countries in the various test items which make up the OECD’s Programme for International Student Assessment (PISA) test of mathematics ability. The PISA tests in mathematics and other subjects are used to compare the educational attainment of fifteen-year-old students in different countries. The article first provides a background on PISA, DIF, and item response theory (IRT) before describing a hierarchical three-parameter logistic model for the probability of a correct response on an individual item to determine the extent of DIF remaining in the mathematics test of 2003. The results of the Bayesian analysis illustrate the importance of appropriately accounting for all sources of heterogeneity present in educational testing and highlight the advantages of the Bayesian paradigm when applied to large-scale educational assessment.
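Schematically, a three-parameter logistic model with a DIF term added to the item difficulty can be written as

$$P(Y_{ijk} = 1 \mid \theta_{ik}) = c_j + (1 - c_j)\,\frac{1}{1 + \exp\{-a_j(\theta_{ik} - b_j - d_{jk})\}},$$

where $\theta_{ik}$ is the ability of student $i$ in country $k$; $a_j$, $b_j$, and $c_j$ are the discrimination, difficulty, and guessing parameters of item $j$; and $d_{jk}$ is a country-specific shift capturing DIF. This generic parameterization is given for illustration and may differ in detail from the hierarchical model used in the article.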
Bayesian approaches to aspects of the Vioxx trials: Non-ignorable dropout and sequential meta-analysis
Jerry Cheng and David Madigan
This article discusses Bayesian approaches to aspects of the Vioxx trials study, with a focus on non-ignorable dropout and sequential meta-analysis. It first provides a background on Vioxx, a COX-2 selective, non-steroidal anti-inflammatory drug (NSAID) approved by the FDA in May 1999 for the relief of the signs and symptoms of osteoarthritis, the management of acute pain in adults, and for the treatment of menstrual symptoms. However, Vioxx was found to cause an array of cardiovascular side-effects such as myocardial infarction, stroke, and unstable angina. As a result, Vioxx was withdrawn from the market. The article describes an approach to sequential meta-analysis in the context of Vioxx before considering dropouts in the key APPROVe study. It also presents a Bayesian approach to handling dropout and showcases the utility of Bayesian analysis in addressing multiple, challenging statistical issues and questions arising from clinical trials.
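As a generic illustration of the meta-analytic building block (the article's own model may differ), a Bayesian random-effects meta-analysis treats the summary estimate $y_i$ from trial $i$ as $y_i \sim N(\theta_i, s_i^2)$ with $\theta_i \sim N(\mu, \tau^2)$ and priors on $\mu$ and $\tau$; in a sequential setting, the posterior for the overall effect $\mu$ is simply updated as each new trial summary becomes available.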