

PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).

date: 20 July 2019

# Earthquake Risk Assessment

## Abstract and Keywords

This article discusses the importance of assessing and estimating the risk of earthquakes. It begins with an overview of earthquake prediction and relevant terms, namely: earthquake hazard, maximum credible earthquake magnitude, exposure time, earthquake risk, and return time. It then considers data sources for estimating seismic hazard, including catalogs of historic earthquakes, measurements of crustal deformation, and world population data. It also examines ways of estimating seismic risk, such as the use of probabilistic estimates, deterministic estimates, and the concepts of characteristic earthquake, seismic gap, and maximum rupture length. A loss scenario for a possible future earthquake is presented, and the notion of imminent seismic risk is explained. Finally, the chapter addresses errors in seismic risk estimates and how to reduce seismic risk, ethical and moral aspects of seismic risk assessment, and the outlook concerning seismic risk assessment.

# Summary

Seismic risk assessment is on shaky ground in more ways than one. No seismic risk map of the world exists at present, and the value of the world seismic hazard map, from which risk estimates are presently made, is questioned in the expert community. The residents of L’Aquila, Italy, felt so poorly informed about their earthquake hazard and risk that they took an Italian Civil Protection official and the country’s leading seismologists to court. Given this background, this chapter not only reviews the basic data and methods needed for estimating seismic hazard and risk but also brings to light some ethical aspects of risk assessment. In some cases, government officials and industrial leaders may not want to know about the seismic hazard and risk because they wish to build cheaply. In the absence of a reliable method to calculate the seismic risk probabilistically, worst case scenarios should be estimated for population centers in strongly seismogenic regions, as is the current practice for critical structures. Engineered building practices and education are the traditional ways to reduce seismic risk. New approaches to reduce losses include early warning for industrial activities and near-real-time loss alerts for immediate adequate rescue response. Advances in understanding worldwide seismic risk are hampered by a lack of accurate information on population distribution and the properties of the built environment. For the much-needed, but never-ending, task of compiling world data on population and building properties, techniques using satellite images and crowd sourcing (e.g., OpenStreetMap) give hope that progress may be accelerated.

# Introduction

The risk arising from losses in earthquakes would be much reduced if only one could predict them. Predicting events is generally a dream of humans; so much so that quack predictors continue to pop up with unfounded claims. As a consequence, earthquake prediction research has been abandoned in most countries for fear of getting a bad reputation and for lack of funding. Nevertheless, efforts to forecast earthquakes continue with different approaches (e.g., Kossobokov, 2014; Sobolev & Chebrov, 2014; Schorlemmer & Gerstenberger, 2014; Wu, 2014; Zechar et al., 2014).

Trying to better understand the physics of the processes leading to earthquakes, constructing databases on all relevant aspects, and improving engineering skills for construction are currently the preferred ways to advance the understanding of earthquake risk and reduce potential loss.

Various terms, commonly used in seismic risk analysis, are defined as follows: Earthquake hazard is the probability that a certain level of strong ground motion may occur at a particular location due to earthquakes. It depends chiefly on the distance of the chosen location from an active fault. Our planet is crisscrossed by faults, surfaces that separate geologic units or offset structural features, but only those that currently slip in occasional earthquakes (active faults) are of interest. The maximum credible earthquake magnitude, Mmax, of which a fault may be capable is estimated from the probable maximum length and width of a future earthquake rupture. The exposure time is another variable in earthquake hazard estimates: with longer assumed exposure, the probability of encountering large accelerations increases. Earthquake risk designates the losses that may be expected to occur. Losses include fatalities, injuries, and damage to infrastructure as well as direct and indirect economic losses. Estimates of future losses, that is, the risk, may be large even in areas of only moderate seismicity, if the region is heavily populated and/or contains expensive built environments. In an unpopulated area without infrastructure the risk can be nil even if large earthquakes are likely. The return time (or recurrence interval) is an imprecise but important estimate of the average time between earthquakes of similar size on a specific fault: where strain accumulates slowly, large earthquakes cannot be expected frequently, whereas in regions of fast strain rates frequent major events must be expected. A characteristic earthquake is one that recurs with approximately the same magnitude along the same fault segment more or less periodically.
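The dependence of hazard on exposure time can be made concrete with a simple occurrence model. The sketch below assumes earthquakes occur as a Poisson process, a common simplification in hazard work that is not established in this chapter; the 475-year return time is an arbitrary illustrative value:

```python
import math

def exceedance_probability(exposure_years, return_period_years):
    """Probability of at least one event during the exposure time,
    assuming earthquake occurrence follows a Poisson process."""
    return 1.0 - math.exp(-exposure_years / return_period_years)

# For a fault with an assumed 475-year average return time, the chance
# of encountering an event grows with the assumed exposure time:
for t in (50, 100, 500):
    print(t, round(exceedance_probability(t, 475.0), 2))
```

The longer the assumed exposure, the closer the probability climbs toward certainty, which is why the chosen exposure time strongly shapes any hazard statement.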

Different users will focus on different types of losses. Insurers of infrastructure will estimate the probability and extent of losses due to damage to the built environment (e.g., Michel, 2014), while humanitarian institutions are more interested in fatalities and numbers of injured (e.g., Wyss, 2005, 2006). In this article the focus is on fatalities because the numbers of injured are usually poorly known or unknown.

The basis for estimating earthquake risk is a quantitative evaluation of seismic hazard. The information concerning hazard comes from historical seismicity, the geology of active faults, and crustal deformation rates and their spatial gradients. To understand seismic risk, we first discuss these basic data in the next section.

# Data Sources for Estimating Seismic Hazard

Estimating future likely accelerations requires, first of all, the history of seismic activity. The length and completeness of this information vary greatly from country to country. Next, the rate of deformation due to tectonic forces is known in detail in developed nations, but poorly in some developing countries. Finally, since soil conditions amplify accelerations, it would be desirable to know them in detail for all districts of important cities exposed to earthquakes.

## Learning from the Past

Catalogs of historical earthquakes are the first resource one looks to when estimating seismic hazard for a particular location. Developed nations maintain organizations with databases of the countries’ seismicity, frequently easily accessed via web-based portals. The US Geological Survey (USGS) offers extensive information on global seismicity: its earthquake pages allow searches of a homogeneous catalog dating back to 1973, and other USGS pages list recent as well as major historic events. The most complete worldwide earthquake catalog can be searched on the website of the International Seismological Centre (ISC). Finally, the team of the Global Earthquake Model (GEM) is assembling the newest generation of a world catalog of historical earthquakes. These catalogs must be used with care because they sometimes include erroneous locations and magnitudes derived from secondary sources. The detail of historical information varies from country to country, and for some large areas it is absent.

Evidence for large prehistoric earthquakes is much needed because, in most countries, the period covered by instrumentally recorded and historically documented events is shorter than the return time of large earthquakes. Paleoseismologists trench active faults and have developed sophisticated methods to identify, date, and estimate the size of prehistoric major ruptures (e.g., Meghraoui & Atakan, 2014). Their work is similar to that of archeologists. This way of extending the record of large earthquakes to thousands of years is helpful for estimating the seismic hazard where surface faults exist. Paleoseismology is less successful where faults do not cut the surface, where the return time of damaging earthquakes exceeds tens of thousands of years, or where erosional processes have erased clues about fault slip.

Measurements of crustal deformation, the strain that accumulates and ultimately leads to fault ruptures, furnish important clues to the potential readiness of a fault segment to rupture. This is the only way available to estimate the slip that may result in a future earthquake, because the level of stress in the Earth’s crust cannot be measured directly. Rocks are both elastic and brittle, like twigs, except that it takes the enormous forces of moving tectonic plates to bend them until they break in earthquakes. The energy to move the plates is generated by radioactive decay in the Earth’s interior, in the form of heat that ascends to the surface by convection currents.

The phenomenon of fault creep (unloading of accumulated strain energy by slow, long-duration slip along a fault without generation of seismic waves) was discovered based on measurements of changes of angles in a network of nails hastily pounded into oak trees in Central California (Smith & Wyss, 1968). Such primitive methods are no longer necessary because satellite techniques (GPS, the Global Positioning System, and InSAR, interferometric synthetic aperture radar) furnish copious data on crustal deformations on all scales (e.g., Taubenböck et al., 2014). Along some plate margins, creep releases some of the strain accumulated by plate motions. This complicates the estimation of the elastic energy available for release in earthquakes.

Introduced in the 1980s, GPS stations now cover the surface of the Earth by the thousands, forming a dense, but discrete, network of displacement measurement points. These networks are best suited for measuring ongoing deformation from local to continental scales. The principle is the same as in determining the position of a modern mobile telephone, but the accuracy ranges from a centimeter down to millimeters, depending on the amount of processing applied.

InSAR images of the Earth’s surface, taken at different times from satellites, are superposed for deducing the deformation that occurred in the interexposure time. In this method, no stations are deployed; the surface of the Earth serves as a continuous (nondiscrete) set of reflection points where the surface is not heavily vegetated. This is particularly useful for locations where it is difficult or dangerous to measure on the ground (active volcanoes or war zones) or for mapping the deformations caused by an earthquake in greater detail than is possible by a sparse GPS station network.

The model of global plate tectonics serves as a framework for interpreting the aforementioned deformation data because most large earthquakes are generated along defined plate boundaries, although intraplate earthquake surprises do exist (e.g., Johnston, 1989). Some plate boundaries are relatively simple, consisting of one well-known fault (e.g., sections of the San Andreas and the North Anatolian Faults), whereas the impact of India on Eurasia leads to complex faulting systems at far distances from the actual contact surface. Earthquake hazard and risk assessment can therefore vary from relatively straightforward to highly complex.

## Local Amplification of Shaking

Soil conditions and topography vary locally and can amplify strong ground motions. In poorly consolidated soils, seismic wave amplitudes can easily be four times larger than on nearby rocks. Therefore seismic hazard and risk can vary locally over short distances and microzonation maps of these differences are necessary for detailed estimates of the risk in large cities (e.g., Parvez & Rosset, 2014).

There are three types of specialists who contribute to gathering the data concerning the contribution of nature to the problem of seismic hazard and risk discussed in the above section: seismologists, geologists, and geodesists. In a later section, the data concerning the contribution of humanity to the problem, gathered by sociologists and engineers, are presented.

# Estimating Seismic Hazard

It would be desirable to estimate the probability that a given level of acceleration will be exceeded. However, what level is appropriate for urban areas? For designing critical structures, rules have been defined for how to calculate the hazard. Estimating the seismic hazard for the worst case is physically sound but yields no probability. The standard probabilistic method is questioned on numerous grounds.

## Probabilistic Seismic Hazard Estimates

The method of estimating seismic hazard is currently subject to controversy. There is no point in presenting the whole discussion here, because relevant arguments are put forward in recent summaries (e.g., Panza et al., 2014; Stirling, 2014, and references cited therein; Wyss & Rosset, 2012). The most salient aspects of the problem are briefly mentioned below.

The current standard method is termed probabilistic seismic hazard assessment (PSHA). The basic assumption is that the occurrence frequency of large earthquakes may be estimated by extrapolation from the record of smaller earthquakes. The observation in which this idea is rooted is that most earthquake ensembles along a fault follow the Gutenberg-Richter law of size distribution (Equation 1)

log10 N = a - bM

(1)

where N is the cumulative number of events with magnitude greater than or equal to M, and a and b are parameters to be determined (b is typically close to 1). These parameters are derived from a few decades of recorded earthquakes. The maximum observed M rarely reaches 6, but Mmax may be in the range of 7 to 9. Thus one usually needs to extrapolate beyond the range of observations by about 3 orders of magnitude. The peak ground acceleration (PGA), and its probability of occurrence, resulting from all possible sources affecting a site of interest is then estimated.

This method is so well established that it is defined in engineering handbooks and required as the basis for the safe design of critical structures in some countries. Yet proponents of more reliable seismic hazard assessments point out that the results depend on an arbitrary choice of the exposure time and of the probability that the estimated shaking will not be exceeded.
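One widely used engineering convention, not spelled out in this chapter, is to design for shaking with a 10% chance of exceedance in 50 years; under the usual Poisson assumption this corresponds to an event with a return period near 475 years. A minimal sketch of that conversion:

```python
import math

def return_period(p_exceedance, exposure_years):
    """Return period implied by a chosen exceedance probability and
    exposure time, assuming Poisson occurrence of earthquakes."""
    return -exposure_years / math.log(1.0 - p_exceedance)

print(round(return_period(0.10, 50.0)))  # ~475 years
```

The point of the critics stands out here: both the 10% and the 50 years are choices, and different choices yield very different design levels.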

More problematically, along fault segments where the return period of large earthquakes is known from paleoseismology with a resolution of a few decades, it can be shown that the relationship above does not hold. According to Equation (1), a single M8 event should be preceded by ca. 32,000 earthquakes of M ≥ 3.5 over an interval of 130 years. However, the San Andreas Fault segment near the site of the 1857 rupture experienced a 49-year period, within the interval 1932–2015, with no event ≥ M3.5 (Wyss, 2015b).
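The count in this comparison follows directly from Equation (1); a minimal illustration assuming b = 1, as is typical:

```python
# Gutenberg-Richter: log10 N(>=M) = a - b*M, so the ratio of event
# counts at two magnitudes depends only on b; the a-value cancels.
def count_ratio(m_small, m_large, b=1.0):
    """Expected number of events >= m_small per event >= m_large."""
    return 10 ** (b * (m_large - m_small))

print(round(count_ratio(3.5, 8.0)))  # ~31,600, i.e. "ca. 32,000"
```

The observed 49-year quiescence thus falls short of the extrapolated small-event rate by orders of magnitude, which is the crux of the objection.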

A priori estimates of probabilities of strong shaking cannot be verified, until Mother Nature does it herself. In Southern California, a line of precariously balanced rocks is found equidistant and parallel to the Elsinore and San Jacinto Faults, but not closer to either fault. They are survivors of earthquakes during the last approximately 10,000 years, attesting to the fact that shaking never exceeded 0.25 g at these locations (Brune, Anooshehpoor, Purvance, & Brune, 2006; Anderson et al., 2014, and references therein). The seismic hazard map for Southern California proposes that 0.5 g acceleration is the level relevant for safe construction. Clearly, Californians are asked in some circumstances to be more conservative and to spend more money on buildings than may be necessary.

The seismic hazard map of the world (GSHAP map, Giardini, 1999) shows the opposite departure from Equation 1 in many regions: The accelerations produced by many recent large earthquakes exceed by far the values recommended by the map for safe construction (e.g., Geller, 2011; Wyss, Nekrasova, & Kossobokov, 2012; Zuccolo et al., 2011). This means that the safeguards in place give the population of these regions a false sense of security.

## Deterministic Seismic Hazard Estimates

In a double issue of Pure and Applied Geophysics, Panza et al. (2011) assembled articles critical of the PSHA method and proposed more deterministic approaches. A deterministic, or scenario, approach for estimating expected ground accelerations, as opposed to PSHA, is used in the design of high dams. Here, the maximum credible earthquake (MCE) that would generate the maximum credible acceleration (PGAMCE) must be estimated.

The magnitude of the MCE (Mmax) can be estimated from the maximum length of a nearby major fault that could rupture in a single event. This is not easy, because ruptures sometimes involve only a single segment of a fault, while at other times the propagating rupture jumps across discontinuities and connects several segments. It can also happen that an unbiased estimate of Mmax is corrupted by societal pressures acting on individuals or groups charged with risk assessment (e.g., Bilham, 2014; Gaur, 2015).
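One common way to turn a rupture length into an Mmax estimate is an empirical scaling relation. The coefficients below are the all-slip-type surface-rupture-length regression of Wells and Coppersmith (1994), a source not cited in this chapter, so treat this as an illustrative assumption rather than the author's method:

```python
import math

def mmax_from_rupture_length(length_km):
    """Moment magnitude from surface rupture length L (km), using
    M = 5.08 + 1.16 * log10(L) (Wells & Coppersmith, 1994,
    all-slip-type regression; an assumed choice for illustration)."""
    return 5.08 + 1.16 * math.log10(length_km)

# A fault segment judged capable of rupturing over 100 km:
print(round(mmax_from_rupture_length(100.0), 1))  # ~7.4
```

The segmentation problem described above enters here directly: whether one inserts the length of a single segment or of several connected segments changes the estimated Mmax by large fractions of a magnitude unit.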

I have argued elsewhere that people should receive the same consideration as dams and that, therefore, in strongly seismically active areas the consequences of PGAMCE should be presented to authorities and the population as the seismic risk for which they should prepare (Wyss, 2015a). However, this approach yields no probability of occurrence, and it is a weak means of assessment in regions of low seismicity. In contrast, for regions of high seismicity, where the likelihood of Mmax occurring during the next three to four generations is more than 50%, assessments intended to be useful to future generations should treat the Mmax earthquake as possibly occurring tomorrow. Mitigating measures will be useful for present or future generations because the occurrence of a large earthquake is certain; only its occurrence time is unknown.

## The Concepts of Characteristic Earthquake, Seismic Gap, and Maximum Rupture Length

To learn from the past and assume that the same, or at least something similar, will happen in the future is generally accepted. In the case of earthquakes one speaks therefore of recurrence time, the time one has to wait until the same earthquake happens again. Based on this thinking, earthquakes are given names, like the Parkfield M6 earthquake that has reruptured the same approximately 27-km segment of the San Andreas Fault at 35.93°N/120.50°W several times (Bakun & McEvilly, 1979, 1984; Bakun & Lindh, 1985).

In June of 1966, I prepared a field campaign to measure crustal deformations that appeared to take place at Parkfield according to observations by C. R. Allen. I was too late by a day and missed the earthquake on June 28, 1966. Mr. Carr, a rancher in the Parkfield valley, told me that in 1934 an earthquake produced a surface rupture that passed between his farmhouse and the barn in the exact same place as the rupture in 1966. Subsequently it was discovered that waveforms of quakes in 1934 and 1922 were astonishingly similar to those in 1966. Adding historical reports, the conclusion was that the 1966 Parkfield earthquake was number 6 in a relatively regular sequence of almost identical earthquakes with an average return time of 22 years with the earliest one assumed to have taken place in 1857 (Bakun & McEvilly, 1979).

Had this regular sequence been discovered in the early 1960s, the 1966 earthquake could have been predicted with an acceptably short time-window uncertainty. As it was, Mother Earth nixed the reasoned prediction that another M6 earthquake would happen between 1985 and 1993 (Bakun & Lindh, 1985) and delayed the next characteristic rupture until 2004, beyond the previously established 22-year interval. My interpretation of this delay is that the Coalinga M6.5, 1983, earthquake (Toda & Stein, 2002) and possibly also the Kettleman Hills M6.1, 1985, thrust earthquake (Lin & Stein, 2006) clamped the San Andreas Fault at Parkfield to some degree, requiring more shear stress than usual to initiate the next rupture.


Figure 1 Maps depicting the history of large earthquake ruptures along the North Anatolian fault in the 20th (top, after Stein et al., 1996, 1997) and the 18th (bottom, after Barka et al., 2002) centuries, respectively. In both cases, the sequences propagated from east to west, each rupture abutting the previous one. The last rupture, in 1999, stopped 40 km east of Istanbul. Historical reports indicate that in earlier centuries the ruptures along this fault did not occur in a propagating manner (Stein et al., 1996).

Another example of a well-organized sequence of earthquakes, as simple-minded thinking would hope for, is that along the North Anatolian Fault in Turkey. This sequence of 11 M≥6.7 ruptures (Figure 1, top) began in 1939 at the extreme east of the North Anatolian fault and progressed, abutting one another, more than 1,000 km to the west (e.g., Stein, Barka, & Dieterich, 1997), ending in 1999 only about 40 km from Istanbul in the Izmit M7.6 earthquake (Barka et al., 2002) that killed about 17,000 people. The next rupture in this spatially regular sequence is expected to occur beneath the Marmara Sea, south of Istanbul. This expectation does not qualify as a prediction because the occurrence time is unknown.

An earthquake similar to the one in 1999 occurred in 1719, followed 35 years later, in 1754, by a smaller abutting rupture to the west (south of Istanbul) (Figure 1, bottom). This quake in turn was followed by two additional abutting M7+ earthquakes in May and August 1766, forming another spatially well-ordered sequence rupturing from east to west along this northernmost strand of the North Anatolian Fault (Barka et al., 2002). The potential slip accumulation rate along this strand, called the Karamuersel branch, is 1.6 cm/y (Straub & Kahle, 1994). Even given this information, it is not straightforward to estimate the occurrence time of the next earthquake south of Istanbul, because two fault strands exist beneath the Marmara Sea and major earthquakes happened there in 1894 and 1963 (Barka et al., 2002). In addition, earlier historic earthquakes along the North Anatolian Fault have not occurred in as orderly a sequence as those in the 20th century (Stein et al., 1997).

Extension of fault ruptures at their ends, following within seconds to decades, is expected because the stress redistribution caused by fault slip concentrates stresses at the ends of the ruptures (e.g., Harris, 1998). Reorienting the stress tensor, or changing the scalar values of its components, can either favor or hinder slip along faults (surfaces of weakness) existing in the Earth’s crust, depending on the orientation of these surfaces. After large earthquakes, volumes of the Earth’s crust beyond the ends of the rupture are often activated, producing small earthquakes at a rate higher than normal, while in other, neighboring volumes the seismicity is turned off, a phenomenon termed a “stress shadow” (e.g., Harris, 1998). Stein et al. (1997) estimate that in this way the strain loading of a fault segment can be advanced by up to 30 years over the normal period required to accumulate the elastic energy for an M7-class rupture. If the recurrence time is about 150 years, this advance amounts to 20%.

A section like the one along the North Anatolian Fault south of Istanbul is termed a seismic gap, meaning that fault segments to one or both sides have broken relatively recently, setting up the gap for rupture in the not too distant future. Mapping the world’s seismic gaps along the tectonic plate boundaries was vigorously pursued by the Lamont-Doherty group of seismologists at Columbia University, New York, during the 1970s and 1980s (e.g., Kelleher et al., 1973; McCann, Nishenko, Sykes, & Krause, 1979; Nishenko, 1991). The ideas that elastic energy accumulated by plate motions is required for earthquakes to happen, that additional events are unlikely where the energy was recently reduced in a large rupture, that the Earth’s crust is a continuum, and that therefore seismic gaps are locations of temporarily increased rupture probability are eminently reasonable. However, the large earthquakes following the gap definitions of the 1970s and 1980s did not preferentially occur in gaps (Kagan & Jackson, 1991, 1995; Nishenko & Sykes, 1993), demonstrating that the jigsaw puzzle of the Earth’s crust, with its many plates, platelets, and complex stress regions, is more complicated than one would wish, but wishing has no place in science.

If the Earth produced earthquakes at intervals and in sequences as regular as the aforementioned examples, assessing the hazard and risk would be much easier. Worse than the relatively small deviation from the regular recurrence interval at Parkfield, the Sumatra 2004 and the Tohoku 2011 M9-class ruptures, about 1,600 km and 600 km long, respectively, combined segments of the plate boundaries involved that previously had ruptured separately in smaller earthquakes. Thus, in the world’s most intensively studied seismic area, Japan, the Mmax of the Pacific plate boundary was significantly underestimated.

The fact that earthquakes can be multiple ruptures has been known since 1968, when it was noticed that the great Alaska earthquake of 1964 radiated increasingly larger pulses of energy with time (Wyss & Brune, 1968). With modern inversion techniques, all large earthquakes are modeled in detail, and invariably strong variations of the slip on the rupture surface are discovered. This means that the stress drop, and thus the energy released as a function of position on the fault surface, varies strongly. It can happen that the propagating rupture stops for a few seconds, as is clearly seen in the strong motion record of the Kalapana M7 earthquake of 1975, recorded at 40 km distance. The Tohoku M9 earthquake has also been modeled as a multiple rupture (Maercklin, Festa, Colombelli, & Zollo, 2012). Presumably, the multiplicity reflects the fact that several previously independently rupturing segments combined to produce this giant earthquake.

The Landers, M7.3, 1992 earthquake is an example in which a rupture jumped across offsets of the geologically mapped surface trace. In seismic hazard analyses for critical facilities, seismologists may take the length of the nearest segment as the basis for estimating the magnitude of the Mmax and the possible worst case acceleration (PGAMCE), but Nature may ignore the apparent limits of the fault segments and generate a much larger Mmax. In cases where the seismic hazard is underestimated, the risk, as measured in fatalities, may be 200 times larger than expected (Wyss et al., 2012).

# Additional Data Necessary for Estimating Seismic Risk

The problem of seismic risk exists only because of the presence of humans and their built environment. Thus, for calculating the risk, one needs to know the population distribution and the resistance to strong shaking of dwellings, factories, and office buildings, as well as of schools, hospitals, infrastructure, and critical facilities. Ideally, one would like to know the exact number of people in each village and in each section of a city, as well as the construction details of each building. However, these data are unavailable in most countries, and approximate models for both the population and the built environment have to be constructed, especially for the countries that need them most: the ones with poorly constructed buildings.

## World Population Data

The distribution of the population in many countries is not well known. Surprisingly, many employees of agencies that own such data are unaware of the gaps in, and the approximate nature of, some of the population data. Census data are the most important source. However, they are often outdated, and in some cases they are available only by administrative district, instead of by individual towns and villages. Other sources accessible via the Internet, such as Geonames, give coordinates and names of settlements, but in many countries the population of thousands of these is not known. A different approach to constructing population distribution models is based on data gathered by satellite sensors: the total population of a country, assumed to be known internationally, is distributed into pixels of the planet’s surface judged by one or several methods to be populated. A third method is to define census tracts with an ID number, coordinates, and population.


Figure 2 The distribution of buildings and population in these buildings as a function of vulnerability class (A is most fragile, and F is earthquake resistant) in three size categories of settlements: village, town, and city. In Greece, the origin of the examples shown, separate models for all settlements exist, whereas in most countries we are forced to construct models for three size classes assuming all settlements in each class have identical distributions, and further assuming that the boundaries of the classes are at 2,000 and 20,000 inhabitants.

For calculating earthquake risk, the pixel approach is the least well suited. Four disadvantages are associated with it: (1) There is no name attached to a pixel, so first responders in case of a disaster will not readily know where to go unless they depend on handheld GPS devices. (2) The properties of the built environment cannot readily be estimated, because they depend on settlement size (Figure 2). (3) Assumptions are necessary for distributing the “known” population of the entire country (or of provinces) into the populated pixels. (4) The smaller the number of buildings in a unit for which the losses are estimated, the greater the uncertainty. Because of the approximations (or unknowns) regarding the built environment and the soil conditions, one needs a large number of buildings in a unit for damage calculations to average out local variations. The typical pixel size of 1 km² is too small for this purpose.

Census tract data for the population are only available for the United States, as far as I know. Their advantage is that they are highly accurate. Also, the size of a tract unit (typically 8,000 people) is sufficient to average uncertainties in loss calculations. The disadvantage is that they are identified by numbers, not by settlement names, so first responders in case of disasters may be confused or misled when using maps showing tract numbers only.

Using Geonames for settlements without known population could be called the fill-in approach to constructing population models by settlement. For many countries three types of input exist: the total population of the country (for example, from the World Fact Book, https://www.cia.gov/library/publications/the-world-factbook/), the population of the medium to large cities, and settlement names with coordinates but no population. Thus, one needs to distribute the part of the total population that is not reported to be in the large cities evenly over the settlements with names but unknown population. In Iceland this leads to five people per “fill-in settlement,” typically a single remote farm. For countries like India, the number associated with a fill-in settlement is closer to 5,000. The advantages of this approach are that the settlements can be found by their names and that the properties of the built environment of each settlement can be derived approximately from its size and the overall properties of the country in question.
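The fill-in arithmetic can be sketched as follows; the numbers are hypothetical, chosen only to mimic the Iceland example of a handful of people per remote settlement:

```python
def fill_in_population(total_population, known_city_pops, n_unnamed):
    """Distribute the population not accounted for by cities with known
    census figures evenly over settlements whose names and coordinates
    are known but whose population is not (the fill-in approach)."""
    residual = total_population - sum(known_city_pops)
    return residual / n_unnamed

# Hypothetical country: 330,000 people in total, 300,000 of them in two
# cities with census figures, and 6,000 named settlements without data:
print(fill_in_population(330_000, [250_000, 50_000], 6_000))  # 5.0
```

The same routine applied to a populous country with sparse city data yields thousands of people per fill-in settlement, which is why the approach is only an approximation.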

## Resistance to Shaking by Buildings

Issues associated with the built environment include the following. (1) One has to know what the typical building types are in a country and its regions. The best source for this information is the World Housing Encyclopedia (http://www.world-housing.net/), which, together with the USGS PAGER group, has accumulated much information. (2) Next, one has to construct separate models of building type distributions as a function of settlement size (e.g., Figure 2). (3) It would be desirable to assign different models to buildings in settlements with different primary functions (farming, administrative, or industrial, for example); however, this information is rarely available. (4) The separation into dwellings and industrial (including office) buildings is also poorly known in most countries. (5) The distribution of people among the different building types is likewise difficult to estimate in some cases. In villages the percentage of people living in the various building types may equal the percentage of the building types present; in cities, however, large apartment buildings house a larger percentage of the population than do villas. A comparison of the distribution of buildings with the distribution of people is shown in Figure 2.

The standard approach to classifying a building is for a civil engineer to examine its structure and quality of construction. Better still would be to have the blueprints and to know whether they have been followed and the correct materials used. In most of the world, none of these pieces of information are available. Therefore, attempts to characterize the built environment employ a number of methods based on satellite images. Using such images, the three-dimensional shape of most large cities has been mapped by telecommunication companies in order to place their antennas such that mobile phones receive signals in the canyons of downtown. Unfortunately, these models of cities, containing three-dimensional representations of all buildings, are not available to the public. The information one gains from satellite images is (1) footprint (size and shape), (2) height (calculated from the shadows cast by the buildings), and (3) properties of the roof. Based on general knowledge about the building stock of the country in question, one can then attempt to assign each building to a type using inputs 1 through 3.

Using crowd sourcing via OpenStreetMap (OSM) may develop into a powerful means of augmenting information regarding the built environment. The approximately 130 million building footprints currently on OSM are growing by about 100,000 per day, added by volunteers interested in their neighborhoods. Efforts are underway to form a community of mappers interested in furnishing features important for improving risk assessments, loss estimates, and disaster alerts, including the location and size of schools and hospitals, the locations of critical facilities (e.g., bridges, power plants, police and fire stations), and attributes of buildings visible from the street. Information about land use (parks, industrial plants) is another area where OSM crowd sourcing can be useful.

An important procedure is to assign buildings of a certain type (Jaiswal et al., 2010) to a vulnerability class on the EMS98 scale (Grünthal, 1998). The type is defined on the basis of construction, wall material, roof material, age, and shape using a set taxonomy.


Figure 3 Vulnerability curves for buildings as a function of intensity of shaking (EMS98 scale), showing the probability for buildings of different fragility to collapse in a developing (a) and an industrialized (b) country. The weakest buildings are not present in the industrialized country, and the curve for category C represents a stronger version of this type than in the developing world. This contrast underlies the fact that an M6.5 earthquake typically leads to approximately 20,000 fatalities in Iran but about 2 in California.

The vulnerability classes (A weakest to F strongest) are assigned different functions giving the probability of a particular damage grade (e.g., collapse; Figure 3) as a function of the intensity of shaking experienced. Depending on details of construction and materials used, the same building type in different regions may belong to different vulnerability classes, and different building types can belong to the same vulnerability class. Sorting out these relationships in different countries is a major engineering challenge (e.g., Tolis, 2014).
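The general shape of such vulnerability curves can be sketched as a logistic function of intensity. The class midpoints and steepness below are invented placeholders chosen only to reproduce the qualitative contrast of Figure 3; they are not calibrated engineering values.

```python
import math

def p_collapse(intensity, vul_class):
    """Illustrative vulnerability curve: probability of collapse as a
    function of intensity for EMS98 vulnerability classes A (weakest)
    through F (strongest). Logistic shape and parameters are placeholders."""
    # Hypothetical intensity at which half the buildings of a class collapse.
    midpoint = {"A": 8.0, "B": 8.7, "C": 9.4,
                "D": 10.1, "E": 10.8, "F": 11.5}[vul_class]
    steepness = 1.5
    return 1.0 / (1.0 + math.exp(-steepness * (intensity - midpoint)))

# At the same intensity IX, weak buildings (class A) collapse with high
# probability while resistant ones (class F) rarely do -- the contrast
# between panels (a) and (b) of Figure 3.
print(round(p_collapse(9.0, "A"), 2))  # 0.82
print(round(p_collapse(9.0, "F"), 2))  # 0.02
```

Real curves are derived from damage surveys and engineering judgment per country; only the monotone ordering of classes is generic.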

An ideal world dataset for the built environment to calculate risk and the extent of earthquake disasters would include the following features: (1) Quantification of the distribution of buildings into the EMS98 vulnerability classes for each settlement. (2) Neighborhoods with relatively homogeneous building stock are mapped for large cities and modeled separately. (3) The properties of dwellings, office buildings, and the industrial plant are known separately. (4) The locations and attributes of critical facilities (including schools, fire stations, and health facilities) are known.

The current data for the world building stock do not contain all of the above listed features. Approximations include the following: (1) In developing countries the building stock is often assumed to be uniform throughout, even though climatic differences lead to drastically different building types in different regions (e.g., tropical coast versus elevations over 2,000 m); (2) large cities are modeled with a single portfolio of buildings, and the entire population may be located at one coordinate point; (3) the nature of a city’s function (administrative, agricultural, industrial) is not modeled; (4) the dependence of the building stock on settlement size is crudely modeled without knowledge of the construction habits of the country in question.

An example of the contrast between the buildings present in a developing and in an industrialized country is shown in Figure 3. Given the relatively low probability of collapse for the buildings in an industrialized country, the chances of surviving strong shaking there are clearly far better than in a developing country, where most people live in buildings of classes A, B, and C.

# Estimating Seismic Risk

For insurance companies, seismic risk must be assessed as the annual probability of dollar losses due to damage to the built environment, so that they can calculate the premiums necessary to cover eventual losses (e.g., Michel, 2014). The interest of insurance companies is also restricted to their existing and potential portfolios. My focus, in contrast, has been on estimating losses in those parts of the world where little expertise for estimating the risk exists and where the necessary data are poorly known—the seismically active parts of the world that most need help.

## The Choice of Unit to Measure Seismic Risk

The best unit for communicating seismic risk to the population at risk is, in my opinion, the number of fatalities expected for the maximum credible earthquake, Fat(MCE). The first scientific reason for advocating this unit is that the numbers of fatalities in past large earthquakes are approximately known; thus estimates of future Fat(MCE) are to some degree calibrated. The second scientific reason is that the total number of fatalities (Fattot) is a relatively robust value, because the various factors introducing uncertainties due to local differences are averaged out somewhat. A psychological reason to give Fattot, rather than the estimated percentage of the population expected to die (the mortality), is that people at risk are more likely to reinforce schools if they are told that 20,000 schoolchildren may die than if they are told the mortality is only 2% of a school population of 1 million, as is typical for large cities like Lima, Peru, located in the immediate vicinity of a fault capable of Mmax > 8.

The data on population and the built environment necessary for estimating earthquake risk are complex and poorly known. In addition, the regional attenuation of seismic waves must be considered. An example of differing attenuation is the contrast between the western and eastern United States: the New Madrid earthquakes of 1811/1812 were felt over an area about eight times larger than that of the 1906 San Francisco earthquake, although their magnitudes were smaller.

For calculating intensities, Equation 2 is convenient, because by adjusting the constant C3 one can increase or decrease the transmission properties relative to an average value:

$$I = C_1 M - C_2 \log_{10}\sqrt{r^2 + h^2} + C_3$$

(2)

where M designates magnitude, r epicentral distance, and h focal depth (Shebalin, 1968).
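A minimal sketch of such an intensity calculation follows, assuming the Shebalin-type form I = C1·M − C2·log10 √(r² + h²) + C3. The default constants here are illustrative placeholders, and C3 is the knob described above for tuning regional transmission up or down.

```python
import math

def intensity(M, r, h, C1=1.5, C2=3.5, C3=3.0):
    """Intensity of shaking from a Shebalin-type attenuation relation.

    M -- magnitude; r -- epicentral distance (km); h -- focal depth (km).
    C1, C2, C3 are region-dependent constants (defaults are placeholders);
    raising C3 mimics a low-attenuation region such as the eastern US,
    where shaking is felt over a far larger area than in the western US.
    """
    return C1 * M - C2 * math.log10(math.hypot(r, h)) + C3

print(round(intensity(7.0, 100.0, 10.0), 1))          # 6.5
print(round(intensity(7.0, 100.0, 10.0, C3=4.0), 1))  # 7.5
```

Intensity decreases with the logarithm of hypocentral distance, so a unit change in C3 shifts the whole intensity field by one degree.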

We assume that we have a model for the built environment for a seismically active region that approximates the truth reasonably well as a function of the size of settlements, and that the population distribution is known from a relatively recent census. We further assume that the major active faults transecting the region have been mapped and there exists some information on historic and prehistoric large earthquakes, as well as on attenuation of seismic waves. With this information in hand, we can proceed to calculate a loss scenario. In the following I describe the calculation of risk in units of fatalities in a scenario approach because the principles involved are clear and easily understood.

## Calculating a Loss Scenario for a Possible Future Earthquake


Figure 4 Maps of estimated intensity (MMI scale from IX, red, to V, yellow) of strong ground motion at top, and mean damage at bottom (4 red, heavy, to 1 blue, minor) in a hypothetical Mw = 9 earthquake with epicenter at 18.15°N/103.0°W, with rupture length of 560 km, along the Pacific coast of Mexico. (Calculations restricted to a radius of 400 km.)

As a first step, one defines the segment of the fault that may rupture in the Mmax earthquake of the region of interest, if it is to be a worst case scenario. In my example, I select the Pacific plate boundary of Mexico, where the greatest historic earthquake had a rupture length of almost 500 km (Suárez & Albini, 2009). A possible position of such a rupture, adjacent to the one modeled by Zúñiga, Merlo, & Wyss (2015), is shown in Figure 4. The rupture length chosen is 560 km and its width 60 km; with 35 m of slip, this would rupture in an Mw9 earthquake.

In the next step, one calculates the accelerations expected at every affected settlement in the database. The preferred scientific units of acceleration are m/s² or g (the fraction of the Earth’s gravitational acceleration at the surface). However, intensity of shaking, I (on the Modified Mercalli scale, which ranges from I to XII; see Wood & Neumann, 1931), can also be used. Intensity is derived from observations of how people felt an earthquake, or from the damage that resulted. We often use intensity in our calculations because this type of information is known from historic events. For the purpose of this review, we consider the two measures of shaking equivalent; the relative merits of the various equations proposed over the years for converting one into the other are not discussed. The expected I at all settlements in the database is mapped in Figure 4, top, for our example of an Mmax = 9 earthquake in Mexico’s coastal states of Jalisco and Guerrero.

The response of a particular building type to strong shaking is defined by vulnerability (sometimes called fragility) curves. The EMS98 classification (Grünthal, 1998) contains six classes, from A, the weakest, to F, the most resistant buildings. These curves give the probability for each class of sustaining a certain level of damage (for example, collapse; Figure 3) at a given intensity of strong ground motion. In every settlement, the damage grade (on a scale of 0 to 5, total collapse) is calculated for each building class; the average damage grade for each settlement, weighted by the percentages of buildings in each vulnerability class, is presented in Figure 4, bottom.
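The per-settlement averaging just described amounts to a weighted sum. In the sketch below, the damage-grade model passed in is a hypothetical placeholder, not an EMS98-calibrated function.

```python
def mean_damage_grade(intensity, class_fractions, damage_grade):
    """Average damage grade (scale 0 to 5) in one settlement.

    class_fractions -- fraction of the building stock in each EMS98
                       vulnerability class, e.g. {"A": 0.3, "B": 0.5, "C": 0.2}
    damage_grade    -- hypothetical callable (vul_class, intensity) ->
                       expected damage grade for that class at that intensity
    """
    return sum(frac * damage_grade(vc, intensity)
               for vc, frac in class_fractions.items())

# Placeholder damage model: weaker classes suffer higher grades at MMI IX.
toy_grades = {"A": 4.0, "B": 3.0, "C": 2.0}
mean = mean_damage_grade(9, {"A": 0.3, "B": 0.5, "C": 0.2},
                         lambda vc, i: toy_grades[vc])
print(round(mean, 1))  # 3.1
```

Repeating this for every settlement in the database yields the map of mean damage in Figure 4, bottom.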

Finally, the impact of the damaged buildings on their occupants is calculated. The first issue here is, What percentage of the population lives in each building class (Figure 2)? The second question is, What percentage is indoors as a function of the time of day? Yet another question is, What is the occupancy rate of factories and office buildings? In some parts of the world one could, or should, also ask, How does the population vary with the season, including tourists and transient workers? Currently, some of the answers to these questions are not well known, or not known at all, for many parts of the world; consequently, we must make assumptions. In the example presented here, I assume that the earthquake takes place at 3 a.m., when the occupancy rate in dwellings is at a maximum and near nil in factories and office buildings. This assumption tends to maximize the fatalities. On the other hand, no tourists are assumed to be present, diminishing the result from what it might be otherwise.

To estimate the fatalities and injured, a casualty matrix is used. It contains the probabilities that an occupant of a building is dead, injured, or unscathed as a function of the level of damage and the class of building. In well-constructed homes (California, Japan), few fatalities but some injuries are expected in major earthquakes. In the least well-constructed buildings (Iran, Pakistan, India, China), almost all occupants are killed; few survive. In this way, a range of fatalities at given probability levels is calculated for each settlement. The final result, the losses in units of fatalities and injured, is the sum over all settlements. In our case, the Jalisco-Guerrero hypothetical M9 earthquake, the range of expected fatalities due to building damage is 7,000 to 30,000, and the number of injured may range from 30,000 to 150,000. These numbers do not include tsunami victims or those in Mexico City, a location of anomalous amplification of seismic waves in unconsolidated soils. There are two reasons why the expected casualties due to shaking are more than an order of magnitude higher than in the case of the Tohoku earthquake: the plate boundary is much closer to shore, and the building stock is weaker.
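The final summation over settlements using a casualty matrix can be sketched as follows; the rate table and all numbers are invented for illustration and do not come from any published casualty matrix.

```python
def expected_casualties(settlements, casualty_rate):
    """Sum expected fatalities and injured over all settlements.

    settlements   -- list of dicts, each with occupants per building class
                     and the damage grade reached in that settlement
    casualty_rate -- hypothetical table (vul_class, damage_grade) ->
                     (P(death), P(injury)) for an occupant
    """
    dead = injured = 0.0
    for s in settlements:
        for vul_class, occupants in s["occupants"].items():
            p_dead, p_injured = casualty_rate(vul_class, s["damage_grade"])
            dead += occupants * p_dead
            injured += occupants * p_injured
    return dead, injured

# Toy casualty matrix: at full collapse (grade 5), occupants of weak
# class-A buildings are mostly killed; strong class-D buildings mostly injure.
rates = {("A", 5): (0.7, 0.2), ("D", 5): (0.01, 0.1)}
dead, injured = expected_casualties(
    [{"occupants": {"A": 1000, "D": 1000}, "damage_grade": 5}],
    lambda vc, g: rates[(vc, g)],
)
print(round(dead), round(injured))  # 710 300
```

In practice the per-settlement results carry probability ranges, which is why the scenario above yields a fatality range (7,000 to 30,000) rather than a single number.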

## Imminent Seismic Risk

The seismic hazard may be a function of time if one has information, for example from crustal deformation measurements, on whether little or much strain energy is stored in a given region. If one uses the concept of a worst case risk scenario, however, the risk is not a function of time (except for the growth of the population and the built environment). Nevertheless, one may reasonably consider the notion of imminent seismic risk.

In the eyes of the Italian court, the seismic risk at L’Aquila in March/April 2009 should have been judged imminent, although the statistics of Italian seismicity yielded a probability of only about 2% to 5% that a damaging earthquake was imminent.

In the eyes of a regional Red Guard leader in Haicheng, China, the earthquake swarm of January/February 1975 indicated imminent risk, and he ordered the population to evacuate their dwellings. As a consequence, the numbers of fatalities and injuries avoided are estimated at approximately 8,000 and 27,000, respectively, based on a scenario calculation (Wyss & Wu, 2014).

Locations of temporarily increased probability (TIPs) of great earthquakes have been defined for 25 years (Kossobokov, 2014). These forecasts are based on an algorithm using statistical properties of the seismicity as a function of time and space. The locations of TIPs and the temporal windows (typically a few years) for which they are open are posted semiannually on the Internet with password access for experts. Given the track record presented by Kossobokov (2014), it seems worthwhile to take the next step and ask, What will the human losses be if an M8 earthquake occurs in a TIP (e.g., Wyss, 2008)?

# Errors in Seismic Risk Estimates

In the deterministic approach, errors are somewhat tractable, whereas they are elusive in probabilistic calculations. I will focus on the tractable ones.

Uncertainty in the distance of the energy release from the population is, and will remain, the largest source of error in real-time as well as in scenario loss estimates. There are two problems: the inaccuracy of the epicenter and the unknown direction and extent of the rupture. In the first hours after an earthquake in the developing world, where no local networks exist to constrain the epicenter to 1 to 2 km, its inaccuracy is about 25 km (Wyss et al., 2011). This error can translate into a difference of about 20,000 fatalities (an order of magnitude) in cases where a single city with poorly constructed buildings is nearby (Wyss et al., 2011). In regions where many settlements of similar size are distributed relatively uniformly, the uncertainty of the total estimate is negligible, because shifts in epicenter produce only shifts in the distribution of the casualties, not in the total number.

The unknown direction and extent of the rupture has the same effect: In a uniformly populated area, the casualties are shifted, but the total remains the same. In the case of an M6.3 earthquake on May 27, 2006 in Indonesia, I underestimated the casualties by an order of magnitude in real time because the initial epicenter was too far south, namely offshore, and the rupture, which propagated north, closely approached Yogyakarta, a city of more than 700,000 inhabitants.

Incorrect information regarding the quality of the built environment can also cause an order of magnitude error. In the first hour after the Haiti M7 earthquake of January 12, 2010, I classified it as a major disaster but estimated there would be about 10,000 fatalities. In reality, the death toll was approximately 100,000 (some estimates larger than this are exaggerated). Before this test by an earthquake, the resistance to shaking of Haiti’s buildings was evidently much overestimated by all sources known to us. Cases like this help us correct the data set on building quality in QLARM, the tool I use for calculating damage and human losses.

It may come as a surprise that the migration of large numbers of people from the countryside into great cities causes negligible errors. When 300,000 people move into a city of 15 million, a large new community is added; but because they represent only 2% of the total, they cause only a 2% increase in the total estimate of casualties. If, however, most of this new community constructs weak dwellings at the edge of a megalopolis right on top of an active fault, then estimates of casualties may be in error by an order of magnitude, as in the case of an incorrectly assumed location of the energy release.


Figure 5 News reports of fatalities following the earthquakes of Wenchuan M8 (China) and L’Aquila M6.3 (Italy) as a function of time. In the case of Wenchuan, the Chinese News Agency issued the numbers of fatalities and missing separately; in the end, the sum of the two (triangles in A) equaled the total fatality count. For L’Aquila, the time of issue of each news report that increased the death toll is plotted. Because of the extent of the disaster in China, it took longer to realize how extensive it was than in the Italian case of a relatively small earthquake. My near-real-time estimates were correct, issued after 100 and 22 minutes, respectively (diamonds). The vertical bars indicate the stated uncertainty of the real-time alert.

If the wrong attenuation function is assumed, the resulting overall loss estimates typically have an error of 10% to 20%. Locally, the error can be very large if differences in soil conditions are not known, as was the case in L’Aquila (e.g., Mucciarelli, 2015), where, however, my overall real-time estimate of fatalities was correct (Figure 5).

An interesting psychological observation is that seismologists tend to believe that the greatest influence on losses comes from the phenomenon that is their own special subject of study. This belief is generally not underpinned by quantitative analysis but communicated as a basic feeling. Among other similar claims, I have often been told that the direction of rupture propagation is a most important error source. However, only the detailed distribution of losses changes; the overall losses remain the same because the energy radiated does not change. In addition, the amplification in the forward quadrant is typically about a factor of two, smaller than or equal to the amplification factors due to soil conditions.

# Communicating Seismic Risk Estimates to the Public

The vocabularies of the scientific community and the public overlap, but the key words are often different, or absent from the world of daily use. This hinders effective communication. Also, scientists are too often wrapped up in communicating infinite details about their work, ignoring the need of the public to be told the important facts in a nutshell. Marone et al. (2015) translated a scientifically written text, addressing the problem of sea level rise due to global warming affecting coastal communities, into a communiqué. The rule they set was to use only the 1,000 most common words in English. The result reads a little awkwardly and is strange in parts, so they tried a different version, allowing some additional relatively well-known words. This kind of thoughtfulness should be required in discourse with the public, and scientists should think more carefully about the language, the way of thinking, and the needs of the population when addressing them.

Conveying risks to the public by means of maps also needs to be considered carefully. Heesen et al. (2015) argue that the parameters presented on maps may be so incomplete that they are misleading. They also point out that consumers in the developing world may read maps in a different way than intended by authors working at a university in a developed country. The parameters depicted on maps and their scientific units may be so foreign to the public that the map is useless (e.g., Wyss, 2015a).

The worst case of miscommunicating earthquake hazard and risk occurred in L’Aquila in March/April 2009, as discussed above. In this case, the scientists remained silent and were convicted for this in the first court decision, although it was a civil protection official without scientific expertise who mishandled the communication. Seismologists may be able to learn something about communicating with the public in a crisis mode from volcanologists, who face critical situations more frequently and have developed procedures to follow (e.g., Neuberg, 2015).

# Seismic Risk Due to Induced Earthquakes

In some way, all seismic risk is man-made, because without humans and their built environment there would be no risk. What I discuss here, however, is the risk posed by earthquakes induced by human activities.

Industrial activities that can induce earthquakes include mining, extraction of petroleum and gas, and geothermal harvesting of energy. These activities perturb components of the stress tensor (for a recent review, see Ellsworth, 2013). The most important means of inducing earthquakes is elevating the pore pressure of underground fluids. Elevating the pore pressure within a fault zone with its cracks is similar to hydroplaning of vehicles on a drenched road: The two sides of a fault are lifted off each other to some extent, allowing the existing shear stress to overcome the remaining friction.

The original case where the pumping of fluids into porous strata was first suspected and then proven to have triggered earthquakes was at the Rocky Mountain Arsenal, where the US Army disposed of waste fluids (Evans, 1966). The Arsenal is located in Colorado, where earthquakes are rare; the occurrence of magnitude 5 class earthquakes there attests to the fact that many parts, if not all, of the continental crust are under relatively high stress at all times. Also in Colorado, at the Rangely oil field, Healy, Rubey, Griggs, and Raleigh (1968) and Raleigh, Healy, and Bredehoeft (1972) showed how microseismic activity could be turned on and off at will, by increasing or lowering the pressure in an injection well, respectively.

Since then it has become evident that some deep reservoirs induce seismicity. In 1967, an earthquake of M6.3 at the Koyna, India, reservoir killed about 200 people. The seismic activity at Koyna continues to the present, spreading several kilometers downstream. The danger that pore pressure, or the load from a reservoir, may trigger earthquakes larger than microearthquakes exists only if an active fault with dimensions of 10 or more kilometers, capable of rupturing as one event, is located nearby. Consequently, the triggering of earthquakes is more likely in tectonically active regions than in stable continental regions. Identifying a reservoir as the culprit in triggering a rupture is difficult in tectonically active areas, because major ruptures happen there also without the help of a pore pressure increase. In the case of the Wenchuan, China, M7.9 earthquake of 2008, some are of the opinion that a nearby reservoir may have triggered it (Ge et al., 2009); others doubt this (Deng et al., 2010).

Fracking is an activity in which cracks are opened in rock formations at depth for the purpose of allowing fluids to pass through these formations. Opening a crack is in itself a small earthquake. Although these occur by the tens of thousands near a fracking well, they are far too small to cause damage; most are too small to be felt. Nevertheless, some have been large enough to frighten residents and to cause damage. This has raised concern in the United States to the level that laws were formulated to govern fracking activity (e.g., Ellsworth, 2013).

In Switzerland, a case of fracking has caused a loss of more than $100 million. In a misguided effort to develop geothermal power, a well was drilled to 3 km depth within the city of Basel, at a location surrounded by apartment buildings. In my opinion, the probability that this fracking would cause earthquakes with M > 2 and seriously frighten the residents of Basel was about 99.9% before the experiment started. Several earthquakes with M > 3 occurred; the residents were not only frightened but outraged. The drilling was stopped, and the taxpayers of Switzerland had lost $100 million, which had gone to the organizations that drilled the well and supported the activity.

# Reducing Seismic Risk

Everyone agrees that reducing seismic risk would be desirable, and it is generally understood that $1 spent on mitigation before a disaster can save $10 in recovery costs. However, the costs of retrofitting shoddy construction are high, the will to enforce building codes is eroded by corruption, developing countries have other pressing problems to deal with, and the probability that an earthquake will happen during the term of the government in power is small. Nevertheless, there are a number of possibilities for reducing earthquake risk that are being pursued successfully.

The engineering solution is doubtless the most effective way to reduce casualties due to earthquakes (e.g., Tolis, 2014). If one is aware of the correct level of the seismic hazard, has the required funds, and assigns a high priority to risk reduction, one can construct new buildings in an earthquake-resistant way and retrofit existing weak buildings. Building codes are supposed to achieve this. Where they have been implemented and followed, a shift of victims from the fatality to the injured category is documented (Wyss & Trendafiloski, 2009). Problems that prevent implementation or make it difficult are discussed in the following.

First of all, the correct level of the accelerations to be expected may not be known. If one designs a building to resist 0.5 g acceleration, as the seismic hazard map of Southern California suggests for locations where 0.25 g has never been attained during the last 10,000 years (e.g., Brune et al., 2006; Anderson et al., 2014), one spends money needlessly; with this overdesign, people are at least well protected. On the other hand, if one designs for 0.25 g, as suggested by the world seismic hazard map (Giardini, 1999), in regions where accelerations of 0.3 g and higher occur (Zuccolo et al., 2011; Wyss et al., 2012), then the population is misled and left at risk.

Second, the funds or the best-suited materials may not be available, or local builders may not be aware of simple techniques of constructing relatively safe buildings. Much progress has been and is being made in helping local builders to select appropriate and available materials and to learn the skills of constructing buildings that resist strong shaking (e.g., Schacher, 2015; Dixit et al., 2014).

Taking shortcuts by legal and illegal means to build cheaply, that is, below required quality standards, is an everlasting problem rooted in greed (e.g., Bilham, 2014, 2015; Gaur, 2015; Limaye, 2015). The balancing act between designing a structure to acceptable safety standards and doing so economically is particularly critical in the case of nuclear power plants (NPPs). The catastrophe at the Dai-Ichi NPP, where a tsunami breached the defense wall in 2011, moved this problem into worldwide focus. Experts and the public were equally astonished to learn that 28 tsunamis higher than the 6 m wall at Dai-Ichi have hit Japan in the past (e.g., Wyss, 2015c; Geller, 2015).

The recent Napa Valley, California, earthquake (August 24, 2014, M6), in which only one person died, prompted renewed interest in the controversy over the Diablo Canyon NPP in central California. A report by the Associated Press revealed that the safety of this power plant is still not guaranteed, although some shortcuts had been detected and corrected over the years (Chapman et al., 2014; Gawthrop, 2015; Wyss, 2015c). The lack of transparency in the design of some NPPs in India has been pointed out in detail (Bilham & Gaur, 2013; Gaur & Bilham, 2012). In other countries this problem may also exist but has hitherto gone undetected.

## Early Earthquake Warning

Early warning is a novel method for reducing the disastrous consequences of earthquakes. The idea is simple: detect the initiation of an earthquake rupture immediately and try to estimate the final magnitude (possibly attained only after more than a minute) from the very limited information available during the first few seconds. Whereas the development of the scientific methods is exciting, their potential usefulness to the population is questionable. The problem is that at the distances where fatalities occur, the warning comes too late for a person in an apartment building to take effective protective action, unless he or she has an earthquake protection unit (a strong closet) in the apartment (Wyss, 2012). On the positive side, trains can be stopped and some dangerous processes in factories may be interrupted in time. The performance of the most advanced early warning system, that of Japan, has been described by Hoshiba (2014).

## Fast and Adequate Response After the Disaster

Real-time estimates of the extent of earthquake disasters are a way of belatedly mitigating the impact on the population. Communication does not function at a normal level in regions hit by a devastating earthquake; therefore the usual sequence of events is as follows. (1) News reports sweep the world that an earthquake has hit, with no or few casualties reported. The response by rescue organizations is designed on the basis of these reports. (2) The initial reports of casualties come from the periphery of the affected area. As time passes, the count of casualties mounts (Figure 5). (3) Days later, when the full extent of the disaster has become evident, an appropriate rescue response is launched, too late to be of much use to the injured.

To alleviate this problem, the International Centre for Earth Simulation (ICES), Geneva, and the PAGER group of the USGS distribute estimates of the extent of earthquake disasters, including expected casualty numbers, within about half an hour, median, of strong earthquakes worldwide (e.g., Wyss, 2014).

During the days after an earthquake disaster, remote sensing furnishes important information for rescuers on where damage occurred and what happened (e.g., Huyck, Verrucci, & Bevington, 2014). The socioeconomic impacts of earthquake disasters last for years and need to be addressed by specialists other than engineers and seismologists (e.g., Daniell, 2014).

Educating the population about the earthquake hazard and risk in their region, and about how to protect themselves in case of strong shaking, is widely recognized as important. Various organizations, including United Nations offices, distribute cartoon-like material advising what to do in an earthquake. I wonder whether such brochures are effective. A mainstay of such material is the advice to crawl under a table when shaking begins. This advice may work in Japan and California, where few people die in M6.5 earthquakes but some are injured by falling objects. In countries ranging from Iran to China, where collapsing buildings in earthquakes of similar size kill more people than they injure, only a strong earthquake closet can save lives (Wyss, 2012).

Although the Japanese government is very active in trying to educate its population about earthquake and tsunami danger, parts of the population are not well informed, and even counterproductive responses were noted in some people during the 2011 Tohoku tsunami (Ishida & Ando, 2014). To get people really thinking about the seismic risk problem, activities like the California ShakeOut drills are probably necessary.

# Ethical and Moral Aspects of Seismic Risk Assessment

Not all philosophers draw the same distinction between the terms ethical and moral. As I understand the distinction, ethical refers to behavior the community considers correct, whereas moral refers to an individual's understanding of right and wrong. By these definitions, it is ethical to estimate the earthquake hazard using a method that yields wrong results, as long as it is the method accepted by the scientific and engineering community at large; to me personally, however, this is immoral, because people are misled.

The trial of the L’Aquila seven brings the question of ethics into stark focus. Much has been published about this trial (e.g., Hall, 2011; Mucciarelli, 2015; Wyss, 2013), thus here I summarize it only briefly.

An earthquake swarm frightened the population of L’Aquila, central Italy, during March 2009. The Italian Civil Defense (ICD) responded by organizing a scientific review of the seismic hazard by the Italian Commission of Great Risks (CGR), convened at L’Aquila. The head of the ICD was taped in a telephone conversation describing the purpose of this review as appeasement of the population. Before the review took place, the ICD official in charge of the meeting gave a TV interview in which he made two literally fatal statements: (1) he advised that there was nothing to worry about, and that one should go home and drink a glass of wine; (2) he claimed that the more small earthquakes occur, the smaller the seismic hazard, because energy was being released (a technically incorrect statement, because the elastic energy released by small earthquakes is insignificant compared with that available for large ones).

The minutes of the CGR meeting that followed the aforementioned announcements contained no incorrect statements. The seismologists present simply said that earthquake swarms are frequent in Italy and that the probability that a damaging main shock would follow was very small. After the scientific deliberation, the ICD official held a second news conference, to which the seismologists of the CGR were not invited and which was designed to calm the population. Six days later, an M6.3 earthquake killed 309 people. The relatives of the dead and injured felt betrayed and initiated court proceedings against the seven men who had attended the deliberations of the CGR.

All seven were convicted of manslaughter and handed the same sentence in the initial trial in October 2012: six years in prison. Strangely, the court did not recognize that some of the men who attended the CGR meeting had no expertise regarding earthquake swarms (e.g., a civil engineer), and that one person convicted was not even a member of the CGR; he had simply been asked to bring the newest data on seismicity to the meeting. In addition, the scientists did not have the right to inform the public, a job claimed exclusively by the ICD (e.g., Dolce & Di Bucci, 2015). Although some have tried to analyze the reasoning of the court (e.g., Mucciarelli, 2015), I find it difficult to follow. On November 10, 2014, two years after the original trial, the appeals court acquitted the six scientists but found the deputy head of the civil defense, who had conducted the interviews, guilty. He was sentenced to two years in jail. This ruling can be appealed one more time and brought before the Court of Cassation in Rome.

What should the population have been told in the face of a rumbling Earth, with an approximately 2% to 5% probability of a deadly earthquake following during the next month? I have argued that they should have been told to strengthen their dwellings, for example by constructing an earthquake closet (similar to a tornado shelter, costing around \$3,000 to \$5,000) in their houses, which could have protected their families not only during the following months but down the generations (Wyss, 2012), because it was general knowledge among seismologists that the eventual occurrence of an M6.5-class earthquake near L’Aquila was virtually certain.

The tricks used to reduce the construction cost of critical facilities have so far not been the subject of legal investigations, although they put the public at significant risk. The Fukushima Dai-ichi NPP, Japan, was destroyed by a tsunami that overtopped a 6-m-high defense wall, although Japan has experienced 28 tsunamis higher than 6 m (ITDB/WRL, 2005). The Diablo Canyon NPP was built to withstand about half of the acceleration later defined as necessary by the USGS and is still not safe, according to a report by the Associated Press (for the history of evasion, see Gawthrop, 2015; Wyss, 2015c). The seismic hazard assessment for the design of the Jaitapur NPP, India, is not being carried out openly and appears to be underestimated (Gaur, 2015). The 250-m-high Tehri dam and hydroelectric facility, India, has been constructed to withstand only a fraction of the accelerations most seismological experts deem possible at that location (Gaur, 2015).

The above list of cases in which the public has been placed at risk by a lack of due care in assessing the seismic hazard at critical facilities is possibly only the tip of the iceberg: the few cases that happen to be known to me. The methods used by companies and governments to obtain the desired answers from experts are described well by Bilham (2015) and Richards (2015), respectively.

Whether the rural poor and city dwellers are treated equally in attempts to mitigate seismic risk is a more subtle question of ethics. Mitigation efforts center on large cities, with good reason: the largest numbers of potential victims are concentrated there. However, the following facts suggest that the rural population is somewhat unfairly neglected: (1) for many earthquake disasters, the sum of the casualties in the villages is a multiple of those in the largest city affected; (2) the dwellings of the rural population are weaker than those in the cities, which can give rise to a mortality rate several times larger in the rural population than in the city population (Zúñiga et al., 2015).

# Outlook

The outlook concerning earthquake risk globally is grim: It rises with the rising population of the planet (Bilham, 2009, 2014; Holzer & Savage, 2013; Tucker, 2013). However, the ratio of injured to fatalities in earthquakes shows that in some countries high-quality buildings are effective in reducing the human losses (Wyss & Trendafiloski, 2011). In buildings that resist strong shaking, occupants are more likely to be injured than killed, whereas in the weakest buildings most occupants are killed and few survive with injuries.

Reducing the errors in seismic hazard assessments that lead to errors in the risk estimates will not be easy. First, it is a difficult scientific problem, and second, the standard, inadequate method is so strongly entrenched in general practice that it will take decades to move on to more reliable approaches.

To accumulate adequate global data sets on population, building stock, and critical facilities is a herculean task. Many agencies, institutions, and individuals are working on it, but progress has hitherto been slow. Crowdsourcing through OpenStreetMap to accumulate data useful for risk assessment may be the most promising way to accelerate the gathering of the needed information.

# Acknowledgment

I thank Roger Bilham for a very thorough review.

## References

Anderson, J. G., Biasi, G. P., & Brune, J. N. (2014). Precarious rocks: Providing upper limits on past ground shaking from earthquakes. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 377–403). Waltham, MA: Elsevier.

Bakun, W. H., & Lindh, A. G. (1985). The Parkfield, California, earthquake prediction experiment. Science, 229, 619–624.

Bakun, W. H., & McEvilly, T. V. (1979). Earthquakes near Parkfield, California: Comparing the 1934 and 1966 sequences. Science, 205, 1375–1377.

Bakun, W. H., & McEvilly, T. V. (1984). Recurrence models and Parkfield, California, earthquakes. Journal of Geophysical Research, 89, 3051–3058.

Barka, A., Akyüz, S., Altunel, E., Sunal, G., Çakir, Z., Dikbas, A., … Page, W. (2002). The surface rupture and slip distribution of the 17 August 1999 Izmit Earthquake (M 7.4), North Anatolian Fault. Bulletin of the Seismological Society of America, 92(1), 43–60.

Bilham, R. (2009). The seismic future of cities. Bulletin of Earthquake Engineering, 7, 839–887. doi:10.1007/s10518-009-9147-0

Bilham, R. (2014). Aggravated earthquake risk in South Asia: Engineering vs. human nature. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 103–141). Waltham, MA: Elsevier.

Bilham, R. (2015). Mmax: Ethics of the maximum credible earthquake. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 119–140). Waltham, MA: Elsevier.

Bilham, R., & Gaur, V. K. (2013). Buildings as weapons of mass destruction: Earthquake risk in South Asia. Science, 341, 618–619.

Brune, J. N., Anooshehpoor, A., Purvance, M. D., & Brune, R. J. (2006). Band of precariously balanced rocks between the Elsinore and San Jacinto, California, fault zones: Constraints on ground motion for large earthquakes. Geology, 34, 137–140. doi:10.1130/G22127.1

Chapman, N., Berryman, K., Villamor, P., Epstein, W., Cluff, L., & Kawamura, H. (2014). Active faults and nuclear power plants. EOS Transactions American Geophysical Union, 95(4), 33–40.

Daniell, J. E. (2014). The socioeconomic impact of earthquake disasters. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 203–236). Waltham, MA: Elsevier.

Deng, K., Zhou, S., Wang, R., Robinson, R., Zhao, C., & Cheng, W. (2010). Evidence that the 2008 Mw 7.9 Wenchuan earthquake could not have been induced by the Zipingpu reservoir. Bulletin of the Seismological Society of America, 100(5B), 2805–2814. doi:10.1785/0120090222

Dixit, A. M., Acharya, S. P., Shrestha, S. N., & Dhungel, R. (2014). How to render schools safe in developing countries. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 183–202). Waltham, MA: Elsevier.

Dolce, M., & Di Bucci, D. (2015). Risk management: Roles and responsibilities in the decision-making process. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 211–221). Waltham, MA: Elsevier.

Ellsworth, W. L. (2013). Injection-induced earthquakes. Science, 341(6142). doi:10.1126/science.1225942

Evans, M. D. (1966). Man made earthquakes in Denver. Geotimes, 10, 11–18.

Gaur, V. K. (2015). Geoethics: Tenets and praxis: Two examples from India. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 141–160). Waltham, MA: Elsevier.

Gaur, V. K., & Bilham, R. (2012). Discussion of seismicity near Jaitapur. Current Science, 103, 1273–1278.

Gawthrop, B. (2015). Corporate money trumps science. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 161–168). Waltham, MA: Elsevier.

Ge, S., Liu, M., Lu, N., Godt, J. W., & Luo, G. (2009). Did the Zipingpu Reservoir trigger the 2008 Wenchuan earthquake? Geophysical Research Letters, 36(L20315). doi:10.1029/2009GL040349

Geller, R. (2015). Geoethics, risk communication, and scientific issues in earthquake science. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 263–272). Waltham, MA: Elsevier.

Geller, R. J. (2011). Shake-up time for Japanese seismology. Nature, 472, 407–409. doi:10.1038/nature10105

Giardini, D. (1999). The Global Seismic Hazard Assessment Program (GSHAP)–1992/1999. Annali di Geofisica, 42, 957–974.

Grünthal, G. (1998). European Macroseismic Scale 1992 (up-dated MSK-scale). Rep. 7, 79 pp. Luxembourg: Conseil de l’Europe.

Hall, S. S. (2011). Scientists on trial: At fault? Nature, 477, 264–269. doi:10.1038/477264a

Harris, R. (1998). Introduction to special section: Stress triggers, stress shadows, and implications for seismic hazard. Journal of Geophysical Research, 103, 24347–24358.

Healy, J. H., Rubey, W. W., Griggs, D. T., & Raleigh, C. B. (1968). The Denver earthquakes. Science, 161, 1301–1310.

Heesen, J., Lorenz, D., Voss, M., & Wenzel, B. (2015). Reflections on ethics in mapping as an instrument and result of disaster research. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 251–262). Waltham, MA: Elsevier.

Holzer, T. L., & Savage, J. C. (2013). Global earthquake fatalities and population. Earthquake Spectra, 29, 155–175.

Hoshiba, M. (2014). Review of the nationwide earthquake early warning in Japan during its first five years. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 505–529). Waltham, MA: Elsevier.

Huyck, C., Verrucci, E., & Bevington, J. (2014). Remote sensing for disaster response: A rapid, image-based perspective. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 1–24). Waltham, MA: Elsevier.

Ishida, M., & Ando, M. (2014). The most useful countermeasure against giant earthquakes and tsunamis—what we learned from interviews of 164 tsunami survivors. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 72–101). Waltham, MA: Elsevier.

ITDB/WRL. (2005). Integrated Tsunami Database for the World Ocean. TSLI SD. Novosibirsk: Russian Academy of Sciences.

Jaiswal, K., Wald, D. J., & Porter, K. (2010). Creating global building inventory for earthquake loss estimation and risk management. Earthquake Spectra, 26, 731–748. doi:10.1193/1.3450316

Johnston, A. C. (1989). The seismicity of “stable continental interiors.” In S. Gregersen & P. W. Basham (Eds.), Earthquakes at North-Atlantic passive margins: Neotectonics and postglacial rebound (pp. 299–327). Dordrecht, Netherlands: Kluwer.

Kagan, Y. Y., & Jackson, D. D. (1991). Seismic gap hypothesis: Ten years after. Journal of Geophysical Research, 96, 21419–21431.

Kagan, Y. Y., & Jackson, D. D. (1995). New seismic gap hypothesis: Five years after. Journal of Geophysical Research, 100, 3943–3960.

Kelleher, J., Sykes, L. R., & Oliver, J. (1973). Possible criteria for predicting earthquake locations and their application to major plate boundaries of the Pacific and the Caribbean. Journal of Geophysical Research, 78, 2577–2585.

Kossobokov, V. G. (2014). Times of increased probabilities for the occurrence of catastrophic earthquakes: 25 years of the hypothesis testing in real time. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 477–504). Waltham, MA: Elsevier.

Limaye, S. (2015). Geoethics and geohazards—A perspective from low-income countries, an Indian experience. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 409–417). Waltham, MA: Elsevier.

Lin, J., & Stein, R. (2006). Seismic constraints and Coulomb stress changes of a blind thrust fault system: 1. Coalinga and Kettleman Hills, California. Menlo Park, CA: US Geological Survey.

Maercklin, N., Festa, G., Colombelli, S., & Zollo, A. (2012). Twin ruptures grew to build up the giant 2011 Tohoku, Japan, earthquake. Scientific Reports, 2. doi:10.1038/srep00709

Marone, E., Castro Carneiro, J., Cintra, J., Ribeiro, A., Cardoso, D., & Stellfeld, C. (2015). Extreme sea level events, coastal risks and climate changes: Informing the players. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 274–302). Waltham, MA: Elsevier.

McCann, W. E., Nishenko, S. P., Sykes, L. R., & Krause, J. (1979). Seismic gaps and plate tectonics: Seismic potential for major plate boundaries. Pageoph, 117, 1082–1147.

Meghraoui, M., & Atakan, K. (2014). The contribution of paleoseismology to earthquake hazard evaluations. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 237–271). Waltham, MA: Elsevier.

Michel, G. (2014). Decision making under uncertainty: Insuring and reinsuring earthquake risk. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 543–568). Waltham, MA: Elsevier.

Mucciarelli, M. (2015). Some comments on the first degree sentence of the “L’Aquila trial.” In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 205–210). Waltham, MA: Elsevier.

Neuberg, J. (2015). Thoughts on ethics in volcanic hazard research. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 305–312). Waltham, MA: Elsevier.

Nishenko, S. P. (1991). Circum-Pacific seismic potential: 1989–1999. Pageoph, 135, 169–259.

Nishenko, S. P., & Sykes, L. R. (1993). Comment on “Seismic gap hypothesis: Ten years after” by Y. Y. Kagan and D. D. Jackson. Journal of Geophysical Research, 98, 9909–9916.

Panza, G., Kossobokov, V. G., Peresan, A., & Nekrasova, A. (2014). Why are the standard probabilistic methods of estimating seismic hazard and risks too often wrong? In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 309–357). Waltham, MA: Elsevier.

Panza, G. F., Irikura, K., Kouteva-Guentcheva, M., Peresan, A., Wang, Z., & Saragoni, R. (Eds.). (2011). Advanced seismic hazard assessment. Basel: Birkhäuser Verlag.

Parvez, I. A., & Rosset, P. (2014). The role of microzonation in estimating earthquake risk. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 273–308). Waltham, MA: Elsevier.

Raleigh, C. B., Healy, J. H., & Bredeheoft, J. D. (1972). Faulting and crustal stress at Rangely, Colorado. In H. C. Heard (Ed.), Flow and fracture of rocks (p. 275). Washington, DC: American Geophysical Union.

Richards, P. (2015). When scientific evidence is not welcome. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 109–118). Waltham, MA: Elsevier.

Schacher, T. (2015). Disaster risk reduction through the training of masons and public information campaigns: Experience of SDC’s Competence Centre for Reconstruction in Haiti. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 55–69). Waltham, MA: Elsevier.

Schorlemmer, D., & Gerstenberger, M. C. (2014). Quantifying improvements in earthquake rupture forecasts through testable models. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 405–429). Waltham, MA: Elsevier.

Shebalin, N. V. (1968). Methods of engineering seismic data application for seismic zoning. In S. V. Medvedev (Ed.), Seismic zoning of the USSR (pp. 95–111). Moscow: Science.

Smith, W. S., & Wyss, M. (1968). Displacement on the San Andreas fault subsequent to the 1966 Parkfield earthquake. Bulletin of the Seismological Society of America, 58, 1955–1973.

Sobolev, G., & Chebrov, V. (2014). The experience of real time earthquake predictions in Kamchatka. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 449–475). Waltham, MA: Elsevier.

Stein, R. S., Dieterich, J. H., & Barka, A. A. (1996). Role of stress triggering in earthquake migration on the North Anatolian Fault. Physics and Chemistry of the Earth, 21(4), 225–230.

Stein, R., Barka, A. A., & Dieterich, J. H. (1997). Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering. Geophysical Journal International, 128, 594–604.

Stirling, M. W. (2014). The continued utility of probabilistic seismic hazard assessment. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 359–376). Waltham, MA: Elsevier.

Straub, C., & Kahle, H. G. (1994). Global positioning system (GPS) estimates of crustal deformation in the Marmara Sea region, Northwestern Anatolia. Earth and Planetary Science Letters, 121, 495–502.

Suárez, G., & Albini, P. (2009). Evidence for great tsunamigenic earthquakes (M 8.6) along the Mexican subduction zone. Bulletin of the Seismological Society of America, 99, 892–896. doi:10.1785/0120080201

Taubenböck, H., Geiß, C., Wieland, M., Pittore, M., Saito, K., So, E., & Eineder, M. (2014). Remote sensing for earthquake research: From pre-event risk analysis to post-event damage assessment and recovery monitoring. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 25–53). Waltham, MA: Elsevier.

Toda, S., & Stein, R. (2002). Response of the San Andreas fault to the 1983 Coalinga-Nunez earthquakes: An application of interaction-based probabilities for Parkfield. Journal of Geophysical Research, 107, 101029.

Tolis, S. V. (2014). To what extent can engineering reduce seismic risk? In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 531–541). Waltham, MA: Elsevier.

Tucker, B. E. (2013). Reducing earthquake risk. Science, 341, 1070–1072.

Wood, H. O., & Neumann, F. (1931). Modified Mercalli Intensity Scale of 1931. Bulletin of the Seismological Society of America, 21, 277–283.

Wu, Z. L. (2014). Duties of earthquake forecast: Cases and lessons in China. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 431–448). Waltham, MA: Elsevier.

Wyss, M. (2005). Human losses expected in Himalayan earthquakes. Natural Hazards, 34, 305–314.

Wyss, M. (2006). The Kashmir M7.6 shock of 8 October 2005 calibrates estimates of losses in future Himalayan earthquakes. Paper presented at Proceedings of the Conference of the International Community on Information Systems for Crisis Response and Management, Newark, New Jersey.

Wyss, M. (2008). Expected human losses, if TIPs in Sumatra or Chile should produce earthquakes. Paper presented at 31st General Assembly of the European Seismological Commission, Hersonissos, Crete, Greece, September 7–12.

Wyss, M. (2012). The earthquake closet: Rendering early-warning useful. Natural Hazards, 62, 927–935. doi:10.1007/s11069-012-0177-6

Wyss, M. (2013). Judged unfairly in L’Aquila: Roles and responsibilities should have been considered. Earth, 8–9.

Wyss, M. (2014). Ten years of real-time earthquake loss alerts. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 143–165). Waltham, MA: Elsevier.

Wyss, M. (2015a). Do probabilistic seismic hazard maps address the need of the population? In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 239–249). Waltham, MA: Elsevier.

Wyss, M. (2015b). Shortcuts in seismic hazard assessments for nuclear power plants are not acceptable. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 169–174). Waltham, MA: Elsevier.

Wyss, M. (2015c). Testing the basic assumption for probabilistic seismic hazard assessment: Eleven failures. Seismological Research Letters, in press.

Wyss, M., & Brune, J. N. (1968). Seismic moment, stress, and source dimensions for earthquakes in the California-Nevada region. Journal of Geophysical Research, 73, 4681–4694.

Wyss, M., Elashvili, M., Jorjiashvili, N., & Javakhishvili, Z. (2011). Uncertainties in teleseismic epicenter estimates: Implications for real-time loss estimate. Bulletin of the Seismological Society of America, 101, 1152–1161.

Wyss, M., Nekrasova, A., & Kossobokov, V. G. (2012). Errors in expected human losses due to incorrect seismic hazards estimates. Natural Hazards, 62, 927–935. doi:10.1007/s11069-012-0125-5

Wyss, M., & Rosset, P. (2012). Mapping seismic risk: The current crisis. Natural Hazards. doi:10.1007/s11069-012-0256-8

Wyss, M., & Trendafiloski, G. (2009). The ratio of injured to fatalities in earthquakes, estimated from intensity and building properties. Paper presented at European Geosciences Union General Assembly, Vienna.

Wyss, M., & Trendafiloski, G. (2011). Trends in the casualty ratio of injured to fatalities in earthquakes. In R. Spence, E. So, & C. Scawthorn (Eds.), Human casualties in natural disasters: Progress in modeling and mitigation (pp. 267–274). London: Springer.

Wyss, M., & Wu, Z. L. (2014). How many lives were saved by the evacuation before the M7.3 Haicheng earthquake of 1975? Seismological Research Letters, 85(1), 126–129. doi:10.1785/02201.30089

Zechar, J. D., Herrmann, M., van Stiphout, T., & Wiemer, S. (2014). Forecasting seismic risk as an earthquake sequence happens. In M. Wyss (Ed.), Earthquake hazard, risk and disasters (pp. 167–182). Waltham, MA: Elsevier.

Zuccolo, E., Vaccari, F., Peresan, A., & Panza, G. F. (2011). Neo-deterministic and probabilistic seismic hazard assessments: A comparison over the Italian territory. Pure and Applied Geophysics, 168(1–2), 69–83.

Zúñiga, F. R., Merlo, J., & Wyss, M. (2015). On the vulnerability of the indigenous and low income population of Mexico to natural hazards. A case study: The state of Guerrero. In M. Wyss & S. Peppoloni (Eds.), Geoethics: Ethical challenges and case studies in Earth sciences (pp. 381–391). Waltham, MA: Elsevier.