
Bayes linear uncertainty analysis for oil reservoirs based on multiscale computer experiments

Abstract and Keywords

This article discusses the results of a Bayes linear uncertainty analysis for oil reservoirs based on multiscale computer experiments. Using the Gullfaks oil and gas reservoir located in the North Sea as a case study, the article demonstrates the applicability of Bayes linear methods to highly complex problems for which a full Bayesian analysis may be computationally intractable. A reservoir simulation model, run at two different levels of complexity, is used; the simulator represents properties of the hydrocarbon reservoir on a three-dimensional grid. The article also describes a general formulation for the approach to uncertainty analysis for complex physical systems given a computer model for that system. Finally, it presents the results of simulations and forecasting for the Gullfaks reservoir.

Keywords: uncertainty analysis, oil reservoirs, linear methods, simulations, multiscale computer experiments, forecasting

10.1 Introduction

Reservoir simulators are important and widely used tools for oil reservoir management. These simulators are computer implementations of high-dimensional mathematical models for reservoirs, where the model inputs are physical parameters, such as the permeability and porosity of various regions of the reservoir, the extent of potential faults, aquifer strengths and so forth. The outputs of the model, for a given choice of inputs, are observable characteristics, such as pressure readings and oil and gas production levels, for the various wells in the reservoir.

Usually, we are largely uncertain as to the physical state of the reservoir, and thus we are unsure about appropriate choices of the input parameters for a reservoir model. Therefore, an uncertainty analysis for the model often proceeds by first calibrating the simulator against observed production history at the wells and then using the calibrated model to forecast future well production and to act as an information tool for the efficient management of the reservoir.

In a Bayesian analysis, all of our uncertainties are incorporated into the system forecasts. In addition to the uncertainty about the input values, there are three other basic sources of uncertainty. First, although the simulator is deterministic, an evaluation of the simulator for a single choice of parameter values can take hours or days, so that the function is unknown to us, except at the subset of values which we have chosen to evaluate. Secondly, the reservoir simulator, even at the best choice of input values, is only a model for the reservoir, and we must therefore take into account the discrepancy between the model and the reservoir. Finally, the historical data which we are calibrating against is observed with error.

This problem is typical of a very wide and important class of problems, each of which may be broadly described as an uncertainty analysis for a complex physical system based on a model for the system (Sacks, Welch, Mitchell, and Wynn 1989; Craig, Goldstein, Seheult, and Smith 1996; O’Hagan 2006). Such problems arise in almost all areas of scientific enquiry; for example, climate models to study climate change, or models to explore the origins and generating principles of the universe. In all such applications, we must deal with the same four basic types of uncertainty: input uncertainty, function uncertainty, model discrepancy and observational error. A general methodology has been developed to deal with this class of problems. Our aim, in this chapter, is to provide an introduction to this methodology and to show how it may be applied to a reservoir model of realistic size and complexity. We shall therefore analyse a particular problem in reservoir description, based upon our general approach to uncertainty analysis for complex models. In particular, we will highlight the value of fast approximate versions of the computer simulator for making informed prior judgements relating to the form of the full simulator. Our account is based on the use of Bayes linear methodology for simplifying the specification and analysis of complex high-dimensional problems, and so this chapter also serves as an introduction to the general principles of this approach.

10.2 Preliminaries

10.2.1 Model description

The focus of our application is a simulation of a hydrocarbon reservoir provided to us by Energy SciTech Ltd. The model is a representation of the Gullfaks oil and gas reservoir located in the North Sea, and the model is based around a three-dimensional grid of size 38 × 87 × 25 where each grid cell represents a cuboid region of subterranean rock within the reservoir. Each grid cell has different specified geological properties and contains varying proportions of oil, water and gas. The reservoir also features a number of wells which, during the course of the simulation, either extract fluids from or inject fluids into the reservoir. The overall purpose of the simulation is to model changes in pressure, and the flows and changes in distribution of the different fluids throughout the reservoir, thereby giving information on the pressures and production levels at each of the wells. A simple map of the reservoir is shown in Figure 10.1.

The inputs to the computer model are a collection of scalar multipliers which adjust the magnitudes of the geological properties of each grid cell uniformly across the entire reservoir. This results in four field multipliers – one each for porosity ($\phi$), x-permeability ($k_x$), z-permeability ($k_z$), and critical saturation ($cr_w$). There is no multiplier for y-permeability, as the (x, y) permeabilities are treated as isotropic. In addition to these four inputs, we have multipliers for aquifer permeability ($A_p$) and aquifer height ($A_h$), giving a total of six input parameters. The input parameters and their ranges are summarised in Table 10.1.


Fig. 10.1 Map of the Gullfaks reservoir grid. Production wells are marked Δ, injection wells are marked ▽, wells considered in our analysis are labelled, and the boundaries of different structural regions of the reservoir are indicated by dotted lines.

The outputs of the model are collections of time series of monthly values of various production quantities obtained for each well in the reservoir. The output quantities comprise monthly values of oil, water and gas production rates, oil, water and gas cumulative production totals, water-cut, gas-oil ratio, bottom-hole pressure and tubing-head pressure. For the purposes of our analysis, we focus exclusively on oil production rate, since this is the quantity of greatest practical interest and has corresponding historical observations. In terms of the time series aspect of the output, we focus on a three-year window in the operation of the reservoir beginning at the start of the third year of production. We smooth these 36 monthly observations by taking four-month averages. By making these restrictions, we focus our attention on the 10 production wells which were operational throughout this period, and so our outputs now consist of a collection of 10 time series, each with 12 time points.

Table 10.1 The six input parameters to the hydrocarbon reservoir model.

| Description | Symbol | Initial range |
| --- | --- | --- |
| Porosity | $\phi$ | [0.5, 1.5] |
| x-permeability | $k_x$ | [0.25, 6] |
| z-permeability | $k_z$ | [0.1, 0.75] |
| Critical saturation | $cr_w$ | [0.4, 1.6] |
| Aquifer height | $A_h$ | [50, 500] |
| Aquifer permeability | $A_p$ | [300, 3000] |

For the purposes of our analysis, it will be necessary to have access to a fast approximate version of the simulator. To obtain such an approximation, we coarsened the vertical gridding of the model by a factor of 10. The evaluation time for this coarse model is between 1 and 2 minutes, compared to 1.5–3 hours for the full reservoir model.

10.2.2 Uncertainty analysis for complex physical systems

We now describe a general formulation for our approach to uncertainty analysis for complex physical systems given a computer model for that system, which is appropriate for the analysis of the reservoir model. There is a collection, x+, of system properties. These properties influence system behaviour, as represented by a vector of system attributes, y = (yh, yp), where yh is a collection of historical values and yp is a collection of values that we may wish to predict. We have an observation vector, zh, on yh. We write

(10.1)

$$z_h = y_h + e$$

and suppose that the observational error, e, is independent of y with E[e] = 0.

Ideally, we would like to construct a deterministic computer model, F(x) = (Fh(x), Fp(x)), embodying the laws of nature, which satisfies y = F(x+). In practice, however, our actual model F usually simplifies the physics and approximates the solution of the resulting equations. Therefore, our uncertainty description must allow both for the possible differences between the physical value of x+ and the best choice of inputs to the simulator, and also for the discrepancy between the model outputs, evaluated at this best choice, and the true values of the system attributes, y.

Therefore, we must make explicit our assumptions relating the computer model $F(x)$ and the physical system, $y$. In general, this will be problem dependent. The simplest and most common way to relate the simulator and the system is the so-called ‘Best Input Approach’. We proceed as though there exists a value $x^*$, independent of the function $F$, such that the value of $F^* = F(x^*)$ summarizes all of the information that the simulator conveys about the system, in the following sense. If we define the model discrepancy as the difference between $y$ and $F^*$, so that

(10.2)

$$y = F^* + \epsilon$$

then our assumption is that $\epsilon$ is independent of both $F$ and $x^*$. (Here, and onwards, all probabilistic statements relate to the uncertainty judgements of the analyst.) For some models, this assumption will be justified, as we can identify $x^*$ with the true system values $x^+$. In other cases, this should be viewed more as a convenient simplifying assumption which we consider to be approximately true because of an approximate identification of this type. For many problems, whether this formulation is appropriate is, itself, the question of interest; for a careful discussion of the status of the best input approach, and a more general formulation of the nature of the relationship between simulators and physical systems, see Goldstein and Rougier (2008).

Given this general framework, our overall aim is to tackle previously intractable problems arising from the uncertainties inherent in imperfect computer models of highly complex physical systems using a Bayesian formulation. This involves a specification for (i) the prior probability distribution for the best input $x^*$, (ii) a probability distribution for the computer function $F$, (iii) a probabilistic discrepancy measure relating $F(x^*)$ to the system $y$, and (iv) a likelihood function relating the historical data $z$ to $y$. This full probabilistic description provides a formal framework to synthesise expert elicitation, historical data and a careful choice of simulator runs. From this synthesis, we aim to learn about appropriate choices for the simulator inputs and to assess, and possibly to control, the future behaviour of the system. For problems of moderate size, this approach is appropriate, practical and highly effective (Kennedy and O’Hagan, 2001; Santner, Williams, and Notz, 2003). As the scale of the problem increases, however, the full Bayes analysis becomes increasingly difficult because (i) it is difficult to give a meaningful full prior probability specification over high-dimensional spaces; (ii) the computations for learning from data (observations and computer runs), particularly in the context of choosing informative sets of input values at which to evaluate the simulator, become technically difficult and extremely computer intensive; and (iii) the likelihood surface tends to be very complicated, so that full Bayes calculations may become highly non-robust.

However, the idea of the Bayesian approach, namely capturing our expert prior judgements in stochastic form and modifying them by appropriate rules given observations, is conceptually appropriate, and indeed there is no obvious alternative. In this chapter, we therefore describe the Bayes linear approach to uncertainty analysis for complex models. The Bayes linear approach is (relatively) simple in terms of belief specification and analysis, as it is based only on mean, variance and covariance specifications. These specifications are made directly as, following de Finetti (1974, 1975), we take expectation as our primitive quantification of uncertainty. The adjusted expectation and variance for a random vector y, given random vector z, are as follows.

(10.3)

$$\mathrm{E}_z[y] = \mathrm{E}[y] + \mathrm{Cov}[y, z]\,\mathrm{Var}[z]^{-1}(z - \mathrm{E}[z]),$$

(10.4)

$$\mathrm{Var}_z[y] = \mathrm{Var}[y] - \mathrm{Cov}[y, z]\,\mathrm{Var}[z]^{-1}\mathrm{Cov}[z, y].$$

(If Var[z] is not invertible, then an appropriate generalized inverse is used in the above forms.)
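The adjustment in (10.3) and (10.4) is straightforward to compute numerically. Below is a minimal sketch in Python, using a pseudo-inverse for the generalized inverse mentioned above; the numerical values are invented purely for illustration.

```python
import numpy as np

def bayes_linear_adjust(E_y, E_z, var_y, cov_yz, var_z, z_obs):
    """Adjusted expectation and variance of y given z, eqs. (10.3)-(10.4)."""
    var_z_inv = np.linalg.pinv(var_z)   # generalized inverse, as in the text
    E_adj = E_y + cov_yz @ var_z_inv @ (z_obs - E_z)
    var_adj = var_y - cov_yz @ var_z_inv @ cov_yz.T
    return E_adj, var_adj

# Illustration with invented numbers:
E_y, E_z = np.array([0.0]), np.array([0.0, 0.0])
var_y = np.array([[1.0]])
cov_yz = np.array([[0.8, 0.5]])
var_z = np.array([[1.2, 0.4], [0.4, 1.2]])
print(bayes_linear_adjust(E_y, E_z, var_y, cov_yz, var_z, np.array([0.7, 1.1])))
```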

For the purpose of this account, we may either view Bayes linear analysis as a simple approximation to a full Bayes analysis, or as the appropriate analysis given a partial specification based on expectation. We give more details of the rationale and practicalities of the Bayes linear approach in the appendix; for a detailed treatment, see Goldstein and Wooff (2007).

10.2.3 Overview of the analysis

The evaluation of complex computer models, such as the hydrocarbon reservoir simulation, at a given choice of input parameters is often a highly expensive undertaking, both in terms of the time and the computation required. This expense typically precludes a large-scale investigation of the behaviour of the simulation with respect to its input parameters on the basis of model evaluations alone. Therefore, since the number of available model evaluations is limited by available resources, there remains a substantial amount of uncertainty about the function and its behaviour, which we represent by means of an emulator (see Section 10.3.2.1).

For some problems, an approximate version of the original simulation may also be available. This coarse simulator, denoted $F^c$, can be evaluated in substantially less time and at substantially lower cost, albeit with a consequent lower degree of accuracy. Since both the coarse simulator and the original accurate simulator, $F^a$, are models of the same physical system, it is reasonable to expect strong qualitative and quantitative similarities between the two. Therefore, with an appropriate belief framework to link the two simulators, we can use a large batch of evaluations of $F^c$ to construct a detailed emulator of the coarse simulator, which we can then use to inform our beliefs about $F^a$ and supplement the sparse collection of available full model evaluations. This is the essence of the multiscale emulation approach.

Our multiscale analysis of the hydrocarbon reservoir model proceeds in the following stages:

  1. Initial model runs and screening – we perform a large batch of evaluations of $F^c(x)$ and then identify which wells are most informative and therefore most important to emulate.

  2. Emulation of the coarse simulator – given the large batch of evaluations of $F^c(x)$, we now emulate each of the outputs remaining after the screening process.

  3. Linking the coarse and accurate emulators – we use our emulators for $F^c(x)$ to construct an informed prior specification for the emulators of $F^a(x)$, which we then update by a small number of evaluations of $F^a(x)$.

  4. History matching – using our updated emulators of $F^a(x)$, we apply the history matching techniques of Section 10.3.4.1 to identify a set X of possible values for the best model input $x^*$.

  5. Re-focusing – we now focus on the reduced space, X, identified by our history matching process. In addition to the previous outputs, we now consider an additional time point 12 months after the end of our original time series, which is to be the object of our forecast. We then build our emulators in the reduced space for $F^c(x)$ and $F^a(x)$, over the original time series and the additional forecast point.

  6. Forecasting – using our emulators of $F^a(x)$ within the reduced region, we forecast the ‘future’ time point using the methods from Section 10.3.5.1.

10.3 Uncertainty analysis for the Gullfaks reservoir

10.3.1 Initial model runs and screening

We begin by evaluating the coarse model $F^c(x)$ over a 1000-point Latin hypercube design (McKay, Beckman, and Conover, 1979; Santner et al., 2003) in the input parameters. Since emulation, history matching and forecasting are computationally demanding processes, we choose to screen the collection of 120 outputs and determine an appropriate subset which will serve as the focus of our analysis. In order to identify this reduced collection, we apply the principal variable selection methods of Cumming and Wooff (2007) to the 120 × 120 correlation matrix of the output vectors $\{F^c(x_i)\}$, $i = 1, \ldots, 1000$.
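The chapter does not say how its design was generated in software. As one possible sketch, the following Python uses scipy's quasi-Monte Carlo module to draw a 1000-point Latin hypercube over the parameter ranges of Table 10.1; the seed is an illustrative assumption.

```python
from scipy.stats import qmc  # requires scipy >= 1.7

# Ranges from Table 10.1: phi, kx, kz, crw, Ah, Ap
lower = [0.5, 0.25, 0.10, 0.4,  50,  300]
upper = [1.5, 6.00, 0.75, 1.6, 500, 3000]

sampler = qmc.LatinHypercube(d=6, seed=1)                  # seed is illustrative
design = qmc.scale(sampler.random(n=1000), lower, upper)   # 1000 x 6 design matrix
```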

10.3.1.1 Methodology – Principal variables

Given a collection of outputs, $y_{1:q}$, with correlation matrix $R$, the principal variable selection procedure operates by assigning a score $h_i = \sum_{j=1}^{q} \mathrm{Corr}[y_i, y_j]^2$ to each output $y_i$. The first principal variable is then identified as the output which maximises this score. Subsequent outputs are then selected using the partial correlation given the set of identified principal variables. This allows the choice of additional principal variables to be made having removed any effects of those variables already selected. To calculate this partial correlation, we first partition the correlation matrix into block form

$$R = \begin{pmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{pmatrix},$$

where $R_{11}$ corresponds to the correlation matrix of the identified principal variables, $R_{22}$ is the correlation matrix of the remaining variables, and $R_{12}$ and $R_{21}$ are the matrices of correlations between the two groups. We then determine the partial correlation matrix $R_{22 \cdot 1}$ as

$$R_{22 \cdot 1} = R_{22} - R_{21} R_{11}^{-1} R_{12}.$$

The process continues until sufficiently many variables are chosen that the partial variance of each remaining output is small, or until a sufficient proportion of the overall variability of the collection has been explained. In general, outputs with large values of $h_i$ have, on average, large loadings on important principal components of the correlation matrix and thus correspond to structurally important variables.
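A sketch of this greedy selection in Python follows. It conditions the matrix one chosen variable at a time (successive Schur complements, which is equivalent to conditioning on the whole chosen set) and scores the remaining variables by the squared entries of the resulting partial matrix; the well-blocked variant described below would remove 12 columns at a time rather than one.

```python
import numpy as np

def principal_variables(R, n_select):
    """Greedy principal-variable selection on a correlation matrix R."""
    remaining = list(range(R.shape[0]))
    chosen = []
    R_part = np.asarray(R, dtype=float).copy()
    for _ in range(n_select):
        h = (R_part ** 2).sum(axis=1)          # h_i = sum_j (partial corr)^2
        k = int(np.argmax(h))                  # position within `remaining`
        chosen.append(remaining[k])
        keep = [j for j in range(len(remaining)) if j != k]
        R11 = R_part[np.ix_([k], [k])]
        R12 = R_part[np.ix_([k], keep)]
        R22 = R_part[np.ix_(keep, keep)]
        R_part = R22 - R12.T @ np.linalg.inv(R11) @ R12   # partial matrix
        remaining = [remaining[j] for j in keep]
    return chosen
```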

10.3.1.2 Application and results

The outputs from the hydrocarbon reservoir model have a group structure, with groups formed by the different wells and the different time points. We intend to retain all time points at a given well, to allow a multivariate temporal treatment of the emulation. Therefore, we reduce the number of wells in the model output by applying a modified version of the above procedure where, rather than selecting a single output $y_i$ at each stage, we instead select all 12 outputs corresponding to the well with the highest total $h_i$ score. We then continue as before, selecting a block of 12 outputs at each stage. The results from applying this procedure to the 10 wells in the model are given in Table 10.2.

We can see from the results that there is a substantial amount of correlation among the outputs at each of the wells, as the first identified principal well accounts for 77.7% of the variation of the collection. Introducing additional wells into the collection of principal outputs only increases the amount of variation explained by the principal variables by a small amount.

Table 10.2 Table of summary statistics for the selection of principal wells.

| Well name | Well $h_i$ | Cumulative % of variation |
| --- | --- | --- |
| B2 | 4526.0 | 77.7 |
| A3H | 26.5 | 81.8 |
| B1 | 19.5 | 84.6 |
| B5 | 14.1 | 87.1 |
| B10A | 10.5 | 94.7 |
| B7 | 7.0 | 95.2 |
| A2AH | 7.3 | 99.2 |
| A17 | 1.1 | 99.7 |
| B4 | 1.0 | 99.7 |
| A1H | 1.1 | 100.0 |

On the basis of this information, one could choose to retain the first four or five principal wells and capture between 87% and 95% of the variation in the collection. For simplicity, we choose to retain four of the ten wells, namely B2, A3H, B1 and B5.

10.3.2 Representing beliefs about F using emulators

10.3.2.1 Methodology – Coarse model emulation

We express our beliefs about the uncertainty in the simulator output by constructing a stochastic belief specification for the deterministic simulator, which is often referred to as an emulator. Our emulator for component i of the coarse simulator, Fc(x), takes the following form:

(10.5)

$$F^c_i(x) = \sum_j \beta^c_{ij}\, g_{ij}(x) + u^c_i(x).$$

In this formulation, $\beta^c_i = (\beta^c_{i1}, \ldots, \beta^c_{ip_i})$ are unknown scalars, $g_i(x) = (g_{i1}(x), \ldots, g_{ip_i}(x))$ are known deterministic functions of $x$ (typically monomials), and $u^c_i(x)$ is a stochastic residual process. The component $g_i(x)^T \beta^c_i$ is a regression term which expresses the global variation in $F^c_i$, namely that portion of the variation in $F^c_i(x)$ which we can resolve without having to make evaluations of $F^c_i$ at input choices near to $x$. The residual $u^c_i(x)$ expresses local variation, which we take to be a weakly stationary stochastic process with constant variance.

Often, we discover that most of the global variation for output component $F^c_i$ is accounted for by a relatively small subset, $x_{[i]}$ say, of the input quantities, called the active variables. In such cases, we may further simplify our emulator as

(10.6)

$$F^c_i(x) = \sum_j \beta^c_{ij}\, g_{ij}(x_{[i]}) + u^c_i(x_{[i]}) + \upsilon^c_i(x),$$

where $u^c_i(x_{[i]})$ is now a stationary process in the $x_{[i]}$ only, and $\upsilon^c_i(x)$ is an uncorrelated ‘nugget’ term expressing all of the residual variation which is attributable to the inactive inputs. When variation in these residual terms is small, and the number of inactive inputs is large, this simplification enormously reduces the dimension of the computations that we must make, while usually having only a small impact on the accuracy of our results.

The emulator expresses prior uncertainty judgements about the function. In order to fit the emulator, we must choose the functions $g_{ij}(x)$, specify prior uncertainties for the coefficients $\beta^c_i$ and update these by carefully chosen evaluations of the simulator, and choose an appropriate form for the local variation $u^c_i(x)$. For a full Bayesian analysis, we must make a full prior specification for each of the key uncertain quantities, $\{\beta^c_i, u^c_i(x_{[i]}), \upsilon^c_i(x)\}$, often choosing a Gaussian form. Within the Bayes linear formulation, we need only specify the mean, variance and covariance across each of the elements and at each input value $x$. From the prior form (10.6), we obtain the prior mean and variance of the coarse emulator as

(10.7)

$$\mathrm{E}[F^c_i(x)] = g_i(x_{[i]})^T \mathrm{E}[\beta^c_i] + \mathrm{E}[u^c_i(x_{[i]})] + \mathrm{E}[\upsilon^c_i(x)],$$

(10.8)

$$\mathrm{Var}[F^c_i(x)] = g_i(x_{[i]})^T \mathrm{Var}[\beta^c_i]\, g_i(x_{[i]}) + \mathrm{Var}[u^c_i(x_{[i]})] + \mathrm{Var}[\upsilon^c_i(x)],$$

where a priori we consider $\beta^c_i$, $u^c_i(x_{[i]})$ and $\upsilon^c_i(x)$ to be independent. There is an extensive literature on functional emulation (Sacks et al., 1989; Currin, Mitchell, Morris, and Ylvisaker, 1991; Craig, Goldstein, Rougier, and Seheult, 2001; Santner et al., 2003; O’Hagan, 2006).

As the coarse simulator is quick to evaluate, emulator choice may be made solely on the basis of a very large collection of simulator evaluations. If coarse simulator evaluations had been more costly, then we would need to rely on prior information to direct the choice of evaluations and the form of the collection $G_i = \cup_j \{g_{ij}(\cdot)\}$ (Craig, Goldstein, Seheult, and Smith, 1998). We may make many runs of the fast simulator, which allows us to develop a preliminary view of the form of the function, to make a preliminary choice of the function collection $G_i$, and thus to suggest an informed prior specification for the random quantities that determine the emulator for $F^a$. We treat the coarse simulator as our only source of prior information about $F^a(x)$. This prior specification will be updated by a careful choice of evaluations of the full simulator, supported by a diagnostic analysis, for example based on looking for systematic structure in the emulator residuals.

With such a large number of evaluations of the coarse model, the emulator (10.6) can be identified and well-estimated from the data alone. For a Bayesian treatment at this stage, our prior judgements would be dominated by the large number of model evaluations. In contrast, our prior judgements will play a central role in our emulation of Fa(x), as in that case the data are far more scarce.

In general, our prior beliefs about the emulator components are structured as follows. First, we must identify, for each $F^c_i(x)$, the collection of active inputs which describe the majority of global variation, their associated basis functions $G_i$ and the coefficients $\beta^c$. Having such an ample data set allows model selection and parameter estimation to be carried out independently for each component of $F^c$, driven solely by the information from the model runs. The residual process $u^c_i(x_{[i]})$ is a weakly stationary process in $x_{[i]}$ which represents the residual variation in the emulator that is not captured by our trend in the active variables. As such, residual values will be strongly correlated for neighbouring values of $x_{[i]}$. We therefore specify a prior covariance structure over values of $u^c_i(x_{[i]})$ which is a function of the separation of the active variables. The prior form we use is the Gaussian covariance function

(10.9)

$$\mathrm{Cov}[u^c_i(x_{[i]}), u^c_i(x'_{[i]})] = \sigma^2_{u_i} \exp\left(-\theta^c_i \left\| x_{[i]} - x'_{[i]} \right\|^2\right),$$

where $\sigma^2_{u_i}$ is the point variance at any given $x$, $\theta^c_i$ is a correlation length parameter which controls the strength of correlation between two separated points in the input space, and $\|\cdot\|$ is the Euclidean norm.

The nugget process $\upsilon^c_i(x)$ expresses all the remaining variation in the emulator attributable to the inactive inputs. The magnitude of the nugget process is often small, and so it is treated as uncorrelated random noise with $\mathrm{Var}[\upsilon^c_i(x)] = \sigma^2_{\upsilon_i}$. We consider the point variances of these two processes to be proportions of the overall residual variance of the computer model given the emulator trend, $\sigma^2_i$, so that $\sigma^2_{u_i} = (1 - \delta_i)\sigma^2_i$ and $\sigma^2_{\upsilon_i} = \delta_i \sigma^2_i$ for some $\sigma^2_i$ and some typically small value of $\delta_i$.
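A sketch of this residual covariance in Python, combining the correlated term (10.9) with the nugget; the split of $\sigma^2_i$ by $\delta_i$ follows the text, while treating the nugget as contributing only at coincident points is an assumption of the sketch.

```python
import numpy as np

def residual_cov(XA, XB, sigma2, theta, delta):
    """Covariance of u_i^c + v_i^c between active-input points XA (n x d)
    and XB (m x d): the Gaussian form (10.9) with variance (1-delta)*sigma2,
    plus a nugget delta*sigma2 at coincident points (a sketch assumption)."""
    XA, XB = np.atleast_2d(XA), np.atleast_2d(XB)
    sq_dist = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(axis=-1)
    K = (1.0 - delta) * sigma2 * np.exp(-theta * sq_dist)
    K += delta * sigma2 * (sq_dist == 0.0)   # nugget term
    return K
```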

10.3.2.2 Application and results

We have accumulated 1000 simulator runs and identified which production wells in the reservoir are of particular interest. Prior to emulation, the design was scaled so that all inputs took the range [−1, 1], and all outputs from $F^c$ were scaled by the model runs to have mean 0 and variance 1. We now describe the emulator of component $i = (w, t)$ of the coarse simulator $F^c$, where $w$ denotes the well and $t$ denotes the time associated with the $i$th output component of the computer model.

The first step in constructing the emulator is to identify, for each output component $F^c_i$, the subset of active inputs $x_{[i]}$ which drive the majority of global variation in $F^c_i$. Using the large batch of coarse model runs, we make this determination via a stepwise model search using simple linear regression. We begin by fitting each $F^c_i$ on all linear terms in $x$ using ordinary least squares. We then perform a stepwise delete on each regression, progressively pruning away inactive inputs until we are left with a reduced collection $x_{[i]}$ of between three and five of the original inputs. The chosen active variables for a subset of the wells are presented in the third column of Table 10.3. We can see from these results that the inputs $\phi$ and $cr_w$ are active on almost all emulators for those two wells, a pattern that continues on the remaining two wells. Clearly $\phi$ and $cr_w$ are important in explaining the global variation of $F^c$ across the input space. Conversely, the input variable $A_h$ appears to have no notable effect on model output.
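A crude Python sketch of such a backward search follows; the stopping rule and the 0.01 R² tolerance are illustrative assumptions, not the chapter's actual procedure.

```python
import numpy as np

def backward_select(X, y, min_keep=3, max_keep=5, tol=0.01):
    """Backward elimination of inactive inputs by least-squares R^2 loss."""
    def r2(cols):
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        return 1.0 - resid.var() / y.var()
    cols = list(range(X.shape[1]))
    while len(cols) > min_keep:
        base = r2(cols)
        # cheapest variable to drop, measured by loss of R^2
        loss, drop = min((base - r2([c for c in cols if c != d]), d) for d in cols)
        if len(cols) <= max_keep and loss > tol:
            break                          # all remaining inputs matter
        cols.remove(drop)
    return cols
```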

The next stage in emulator construction is to choose the functions $g_{ij}(x_{[i]})$ for each $F^c_i(x)$ which form the basis of the emulator trend. Again, since we have an ample supply of computer evaluations, we determine this collection by stepwise fitting. For each $F^c_i(x)$, we define the maximal set of basis functions to include an intercept with linear, quadratic, cubic and pairwise interaction terms in $x_{[i]}$.


Table 10.3 Emulation summary for wells B2 and A3H.

| Well | Time | $x_{[i]}$ | No. model terms | Coarse simulator $R^2$ | Accurate simulator $\tilde{R}^2$ |
| --- | --- | --- | --- | --- | --- |
| B2 | 4 | $\phi$, $cr_w$, $A_p$ | 9 | 0.886 | 0.951 |
| B2 | 8 | $\phi$, $cr_w$, $A_p$ | 7 | 0.959 | 0.958 |
| B2 | 12 | $\phi$, $cr_w$, $A_p$ | 10 | 0.978 | 0.995 |
| B2 | 16 | $\phi$, $cr_w$, $k_z$ | 7 | 0.970 | 0.995 |
| B2 | 20 | $\phi$, $cr_w$, $k_x$ | 11 | 0.967 | 0.986 |
| B2 | 24 | $\phi$, $cr_w$, $k_x$ | 10 | 0.970 | 0.970 |
| B2 | 28 | $\phi$, $cr_w$, $k_x$ | 10 | 0.975 | 0.981 |
| B2 | 32 | $\phi$, $cr_w$, $k_x$ | 11 | 0.980 | 0.951 |
| B2 | 36 | $\phi$, $cr_w$, $k_x$ | 11 | 0.983 | 0.967 |
| A3H | 4 | $\phi$, $cr_w$, $A_p$ | 9 | 0.962 | 0.824 |
| A3H | 8 | $\phi$, $cr_w$, $k_x$ | 7 | 0.937 | 0.960 |
| A3H | 12 | $\phi$, $cr_w$, $k_z$ | 10 | 0.952 | 0.939 |
| A3H | 16 | $\phi$, $cr_w$, $k_z$ | 7 | 0.976 | 0.828 |
| A3H | 20 | $\phi$, $cr_w$, $k_x$ | 11 | 0.971 | 0.993 |
| A3H | 24 | $\phi$, $cr_w$, $k_x$ | 10 | 0.964 | 0.899 |
| A3H | 28 | $\phi$, $k_z$, $A_p$ | 10 | 0.961 | 0.450 |
| A3H | 32 | $\phi$, $cr_w$, $k_z$ | 11 | 0.968 | 0.217 |
| A3H | 36 | $\phi$, $cr_w$, $k_x$ | 11 | 0.979 | 0.278 |

The saturated linear regression over these terms is then fitted using the coarse model runs, and we again prune away any unnecessary terms via stepwise selection. For illustration, the trend and coefficients of the coarse emulator for well B1 oil production rate at time t = 28 are given in the first row of Table 10.4.

For each component $F^c_i$, we have now identified a subset of active inputs $x_{[i]}$ and a collection of $p_i$ basis functions $g_i(x_{[i]})$ which adequately capture the majority of the global behaviour of $F^c_i$. The next stage is to quantify beliefs about the emulator coefficients $\beta^c_i$.

Table 10.4 Table of mean coefficients from the emulator of oil production rate at well B1 and time 28.

| Model | Int | $\phi$ | $\phi^2$ | $\phi^3$ | $cr_w$ | $cr_w^2$ | $cr_w^3$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Coarse | 0.663 | −0.326 | −1.858 | 2.313 | −0.219 | 0.064 | 0.079 |
| Accurate | 0.612 | −0.349 | −0.599 | 0.811 | 0.313 | −0.331 | −0.822 |
| Refocused coarse | −0.149 | 2.204 | −0.614 | −0.858 | −0.586 | 0.386 | 0.119 |
| Refocused accurate | 0.678 | 0.402 | −0.456 | −0.098 | −0.057 | −0.055 | 0.053 |

| Model | $\phi \times cr_w$ | $A_p$ | $k_z$ | $\omega^c(x)$ | $R^2$ | $\tilde{R}^2$ |
| --- | --- | --- | --- | --- | --- | --- |
| Coarse | 0.407 | −0.008 | — | — | 0.904 | — |
| Accurate | 0.072 | 0.111 | — | 0.112 | — | 0.952 |
| Refocused coarse | 0.206 | 0.044 | −0.037 | — | 0.905 | — |
| Refocused accurate | −0.045 | −0.034 | −0.045 | −0.109 | — | 0.945 |

We fit our linear description in the selected active variables using ordinary least squares, assuming uncorrelated errors, to obtain appropriate estimates for these coefficients. The value of $\mathrm{E}[\beta^c_{ij}]$ is then taken to be the estimate $\hat{\beta}^c_{ij}$ from the linear regression, and $\mathrm{Var}[\beta^c_i]$ is taken to be the estimated variance of the corresponding estimates. As we have 1000 evaluations in an approximately orthogonal design, the estimation error is negligible.

The results of the stepwise selection and model fitting are given in the first five columns of Table 10.3. We can see from the $R^2$ values that the emulator trends account for a very high proportion of the variation in the model output. We observe similar performance for the emulators of the remaining wells, with the exception of the emulators of well B5 at times t = 4 and t = 8, which could not be well represented using any number or combination of basis functions. These two model outputs were therefore omitted from our subsequent analysis, leaving us with a total of 34 model outputs.

The final stage is to make assessments for the values of the hyperparameters in our covariance specifications for $u_i(x_{[i]})$ and $\upsilon_i(x)$. We estimate $\sigma^2_i$ by the residual variance of the fitted emulator trend, and then obtain estimates for $\theta_i$ and $\delta_i$ by applying the robust variogram methods of Cressie (1991). We then use these estimates as plug-in values for $\theta_i$, $\delta_i$ and $\sigma^2_i$.

For diagnostic purposes, we then performed a further 100 evaluations of Fc(x) at points reasonably well separated from the original design. For each of these 100 runs, we compared the actual model output, F i c ( x ) with the predictions obtained from our coarse emulators. For all emulators, the variation of the prediction errors of the 100 new points was comparable to the residual variation of the original emulator trend, indicating that the emulators are interpolating well and are not over-fitted to the original coarse model runs. Investigation of residual plots also corroborated this result.

10.3.3 Linking the coarse and accurate emulators

10.3.3.1 Methodology – Multiscale emulation

We now develop an emulator for the accurate version of the computer model $F^a(x)$. We consider that $F^c(x)$ is sufficiently informative for $F^a(x)$ that it serves as the basis for an appropriate prior specification for this emulator. We initially restrict our emulator for component $i$ of $F^a(x)$ to share the same set of active variables and the same basis functions as its coarse counterpart $F^c_i(x)$. Since the coarse model $F^c(x)$ is well understood, due to the considerable number of model evaluations, we consider the coarse emulator structure as known and fixed, and use this as a structural basis for building the emulator of $F^a(x)$. Thus we specify a prior accurate emulator of the form

(10.10)

$$F^a_i(x) = g_i(x_{[i]})^T \beta^a_i + \beta^a_{\omega_i} \omega^c_i(x) + \omega^a_i(x),$$

where $\omega^c_i(x) = u^c_i(x_{[i]}) + \upsilon^c_i(x)$ and $\omega^a_i(x) = u^a_i(x_{[i]}) + \upsilon^a_i(x)$, and we have an identical global trend structure over the inputs, albeit with different coefficients. On this accurate emulator, we also introduce an unknown multiple of the coarse emulator residuals, $\beta^a_{\omega_i} \omega^c_i(x)$, and include a new residual process $\omega^a_i(x)$ which will absorb any structure of the accurate computer model that cannot be explained by our existing set of active variables and basis functions. Alternative methods for constructing such a multiscale emulator can be found in Kennedy and O’Hagan (2000) and Qian and Wu (2008).

As we have performed a large number of evaluations of $F^c(x)$ over a roughly orthogonal design, our estimation error from the model fitting is negligible, and so we consider the $\beta^c_i$ as known for each component $i$; further, for any $x$ at which we have evaluated $F^c(x)$, the residuals $\omega^c_i(x)$ are also known. Thus we incorporate the $\omega^c_i(x)$ into our collection of basis functions with associated coefficient $\beta^a_{\omega_i}$. Absorbing $\omega^c_i(x)$ into the basis functions and $\beta^a_{\omega_i}$ into the coefficient vector $\beta^a_i$, we write the prior expectation and variance for the accurate simulator as

(10.11)

$$\mathrm{E}[F^a_i(x)] = g_i(x_{[i]})^T \mathrm{E}[\beta^a_i] + \mathrm{E}[\omega^a_i(x)],$$

(10.12)

$$\mathrm{Var}[F^a_i(x)] = g_i(x_{[i]})^T \mathrm{Var}[\beta^a_i]\, g_i(x_{[i]}) + \mathrm{Var}[\omega^a_i(x)],$$

where now $g_i(x_{[i]}) = (g_{i1}(x_{[i]}), \ldots, g_{ip_i}(x_{[i]}), \omega^c_i(x))$ and $\beta^a_i = (\beta^a_{i1}, \ldots, \beta^a_{ip_i}, \beta^a_{\omega_i})$. We also specify the expectation and variance of the residual process $\omega^a_i(x)$ to be

(10.13)

$$\mathrm{E}[\omega^a_i(x)] = 0,$$

(10.14)

$$\mathrm{Cov}[\omega^a_i(x), \omega^a_i(x')] = \sigma^2_{u_i} \exp\left(-\theta^c_i \left\| x_{[i]} - x'_{[i]} \right\|^2\right) + \sigma^2_{\upsilon_i} I_{x = x'},$$

where the covariance function between any pair of residuals on the accurate emulator has the same prior form and hyperparameter values as that used for $u^c_i(x_{[i]})$ in (10.9).

We now consider the prior form of $\beta^a_i$ in more detail. If we believe that each of the terms in the emulator trend corresponds to a particular qualitative physical effect, then we may expect that the magnitudes of these effects will change differentially as we move from the coarse to the accurate simulator. This suggests allowing the contribution of each $g_{ij}(x_{[i]})$ to the trend of $F^a(x)$ to change individually. One prior form which allows for such changes is

(10.15)

$$\beta^a_{ij} = \rho_{ij}\, \beta^c_{ij} + \gamma_{ij},$$

where $\rho_{ij}$ is an unknown multiplier which scales the contribution of $\beta^c_{ij}$ to $\beta^a_{ij}$, and $\gamma_{ij}$ is a shift that can accommodate potential changes in location. We consider $\rho_{ij}$ to be independent of $\gamma_{ij}$. In order to construct our prior form for $F^a_i(x)$, we must specify prior means, variances and covariances for $\rho_{ij}$ and $\gamma_{ij}$. We develop choices appropriate to the hydrocarbon reservoir model in Section 10.3.3.2.
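Under (10.15), with $\beta^c$ treated as known, the prior mean and variance of $\beta^a$ follow directly. A minimal sketch in Python, assuming given covariance matrices for the $\rho$ and $\gamma$ terms (for instance from (10.19) below):

```python
import numpy as np

def beta_a_prior(beta_c, cov_rho, cov_gamma, E_rho=1.0, E_gamma=0.0):
    """Prior mean and variance of beta^a = rho * beta^c + gamma (eq. 10.15),
    treating the coarse coefficients beta^c as known and rho, gamma as
    independent of each other."""
    b = np.asarray(beta_c, dtype=float)
    E_beta_a = E_rho * b + E_gamma
    # Cov[rho_i b_i + gamma_i, rho_j b_j + gamma_j]
    #   = b_i b_j Cov[rho_i, rho_j] + Cov[gamma_i, gamma_j]
    var_beta_a = np.outer(b, b) * cov_rho + cov_gamma
    return E_beta_a, var_beta_a
```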

As our prior statements about $F^a(x)$ describe our beliefs about the uncertain value of the simulator output, we can use observational data, namely the matrix $F^a(X^a)$ of evaluations of the accurate simulator over the elements of the chosen design $X^a$, to compare our prior expectations to what actually occurs. A simple such comparison is achieved by the discrepancy ratio for $F^a_i(X^a)$, the vector containing the accurate simulator evaluations over $X^a$ for the $i$th output component, defined as follows:

(10.16)

$$Dr(F^a_i(X^a)) = \frac{\{F^a_i(X^a) - \mathrm{E}[F^a_i(X^a)]\}^T\, \mathrm{Var}[F^a_i(X^a)]^{-1}\, \{F^a_i(X^a) - \mathrm{E}[F^a_i(X^a)]\}}{\mathrm{rk}\{\mathrm{Var}[F^a_i(X^a)]\}},$$

which has prior expectation 1, and where $\mathrm{rk}\{\mathrm{Var}[F^a_i(X^a)]\}$ denotes the rank of the matrix $\mathrm{Var}[F^a_i(X^a)]$. Very large values of $Dr(F^a_i(X^a))$ may suggest a mis-specification of the prior expectation or a substantial underestimation of the prior variance. Conversely, very small values of $Dr(F^a_i(X^a))$ may suggest an overestimation of the variability of $F^a_i(x)$.
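A direct Python transcription of (10.16); the pseudo-inverse and numerical rank are implementation conveniences rather than part of the definition.

```python
import numpy as np

def discrepancy_ratio(f_obs, E_f, var_f):
    """Discrepancy ratio Dr of eq. (10.16); prior expectation 1."""
    d = np.asarray(f_obs) - np.asarray(E_f)
    return float(d @ np.linalg.pinv(var_f) @ d) / np.linalg.matrix_rank(var_f)
```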

Given the prior emulator for $F^a_i(x)$ and the simulator evaluations $F^a_i(X^a)$, we now update our prior beliefs about $F^a_i(x)$ by the model runs via the Bayes linear adjustment formulae (10.3) and (10.4). Thus we obtain an adjusted expectation and variance for $F^a_i(x)$ given $F^a_i(X^a)$:

(10.17)

$$\mathrm{E}_{F^a_i(X^a)}[F^a_i(x)] = \mathrm{E}[F^a_i(x)] + \mathrm{Cov}[F^a_i(x), F^a_i(X^a)]\, \mathrm{Var}[F^a_i(X^a)]^{-1}\, \{F^a_i(X^a) - \mathrm{E}[F^a_i(X^a)]\},$$

(10.18)

$$\mathrm{Var}_{F^a_i(X^a)}[F^a_i(x)] = \mathrm{Var}[F^a_i(x)] - \mathrm{Cov}[F^a_i(x), F^a_i(X^a)]\, \mathrm{Var}[F^a_i(X^a)]^{-1}\, \mathrm{Cov}[F^a_i(X^a), F^a_i(x)].$$

The constituent elements of this update can be derived from our prior specifications for Fa(x) from (10.11) and (10.12), and our belief statements made above.

10.3.3.2 Application and results

We first consider that the prior judgement that the expected values of the fine emulator coefficients are the same as those of the coarse emulator is appropriate, and so we specify expectations $\mathrm{E}[\rho_{ij}] = 1$ and $\mathrm{E}[\gamma_{ij}] = 0$. We now describe the covariance structure for the $\rho_{ij}$ and $\gamma_{ij}$ parameters. Every $\rho_{ij}$ (and similarly $\gamma_{ij}$) is associated with a unique well $w$ and time point $t$ via the simulator output component $F^a_i(x)$. Additionally, every $\rho_{ij}$ is associated with a unique regression basis function $g_{ij}(\cdot)$. Given these associations, we consider there to be two sources of correlation between the $\rho_{ij}$ at a given well. First, for a given well $w$, we consider there to be temporal effects correlating all $(\rho_{ij}, \rho_{i'j'})$ pairs to a degree governed by their separation in time. Secondly, we consider that there are model term effects which introduce additional correlation when $\rho_{ij}$ and $\rho_{i'j'}$ are multipliers for coefficients of the same basis function, i.e. $g_{ij}(\cdot) \equiv g_{i'j'}(\cdot)$.

To express this covariance structure concisely, we extend the previous notation and write $\rho_{ij}$ as $\rho_{(w,t,k)}$, where $w$ and $t$ correspond to the well and time associated with $F^a_i(x)$, and where $k$ indexes the unique regression basis function associated with $\rho_{ij}$, namely a single element of the set of all basis functions $G = \cup_{i,j}\{g_{ij}(\cdot)\}$. Under this notation, for a pair of multipliers $(\rho_{(w,t,k)}, \rho_{(w,t',k')})$, we have $k = k'$ if and only if both are multipliers for coefficients of the same basis function, say $\phi^2$, albeit on different emulators. On this basis, we write the covariance function for $\rho_{(w,t,k)}$ as

(10.19)

$$\mathrm{Cov}[\rho_{(w,t,k)}, \rho_{(w,t',k')}] = \left(\sigma^2_{\rho_1} + \sigma^2_{\rho_2}\, I_{k = k'}\right) R_T(t, t'),$$

where $\sigma^2_{\rho_1}$ governs the contribution of the overall temporal effect to the covariance, $\sigma^2_{\rho_2}$ controls the magnitude of the additional model term effect, $R_T(t, t') = \exp\{-\theta_T (t - t')^2\}$ is a Gaussian correlation function over time, and $I_p$ is the indicator function of the proposition $p$. Our covariance specification for the $\gamma_{(w,t,k)}$ takes the same form as (10.19), albeit with variances $\sigma^2_{\gamma_1}$ and $\sigma^2_{\gamma_2}$.

To complete our prior specification over $F^a(x)$, we assign $\sigma^2_{\rho_1} = \sigma^2_{\gamma_1} = 0.1$ and $\sigma^2_{\rho_2} = \sigma^2_{\gamma_2} = 0.1$ for all output components, which corresponds to the belief that coefficients are weakly correlated with other coefficients on the same emulator, and that the model term effect contributes to the covariance to a similar degree as the temporal effect. We also assigned $\theta_T = 1/12^2$ to allow for a moderate amount of correlation across time.
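A sketch of (10.19) in Python, using the parameter values assigned above; the representation of basis-function labels as strings is an assumption of the sketch.

```python
import numpy as np

def rho_cov(times, terms, s2_1=0.1, s2_2=0.1, theta_T=1.0 / 12**2):
    """Covariance over the rho_(w,t,k) for one well, following eq. (10.19)."""
    t = np.asarray(times, dtype=float)
    k = np.asarray(terms)
    R_T = np.exp(-theta_T * (t[:, None] - t[None, :]) ** 2)   # temporal part
    same_term = (k[:, None] == k[None, :])                    # I_{k = k'}
    return (s2_1 + s2_2 * same_term) * R_T

# e.g. three coefficients at times 4, 8, 8 with hypothetical basis labels:
C = rho_cov([4, 8, 8], ["phi^2", "phi^2", "crw"])
```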

We now evaluate a small batch of 20 runs of the accurate simulator. The runs were chosen by generating a large number of Latin hypercube designs and selecting the design which would be most effective at reducing our uncertainty about $\beta^a$, by minimising $\mathrm{tr}\{\mathrm{Var}[\hat{\beta}^a]\}$ under least squares. Considering the simulator output for each well individually, since information on $F^a_w(x) = (F^a_{(w,4)}(x), F^a_{(w,8)}(x), \ldots, F^a_{(w,36)}(x))$ for each well $w$ is now available in the form of the model runs $F^a_w(X^a)$ over the design $X^a$, we can make a diagnostic assessment of the choices made in specifying prior beliefs about $F^a_w$. In the case of our prior specifications for the multivariate emulators for each well, we obtain discrepancy ratio values of 0.86, 1.14, 0.67, and 1.07, suggesting our prior beliefs are broadly consistent with the behaviour observed in the data. For more detailed diagnostic methods for evaluating our prior and adjusted beliefs, see Goldstein and Wooff (2007).

Using the prior emulator for $F^a_w(x)$ and the simulator evaluations $F^a_w(X^a)$, we update our beliefs about $F^a_w(x)$ by the model runs via the Bayes linear adjustment formulae (10.17) and (10.18). To assess the adequacy of fit of the accurate emulators updated by the 20 runs of $F^a(x)$, we calculate a version of the $R^2$ statistic using the residuals obtained from the adjusted emulator trend $g_i(x_{[i]})^T \mathrm{E}_{F^a_i(X^a)}[\beta^a_i]$, denoted $\tilde{R}^2$. These are given in the final column of Table 10.3. It is clear that the majority of the accurate emulators perform well and accurately represent the fine simulator, except for the emulators at well A3H and times t = 28, 32, 36, which display poor fits to the fine model due to the behaviour of $F^c(x)$ at those locations being uninformative for the corresponding accurate model. For additional illustration, the coefficients for the coarse emulator and the adjusted expected coefficients of the accurate emulator of well B1 oil production rate at time t = 28 are given in the first two rows of Table 10.4. We can see from these values that, in this case, both emulators fit the simulator well despite the different coefficients.

10.3.4 History matching and calibration

10.3.4.1 Methodology – History matching via implausibility

Most models go through a series of iterations before they are judged to give an adequate representation of the physical system. This is the case for reservoir simulators, where a key stage in assessing simulator quality is termed history matching, namely identifying the set X of possible choices for $x^*$ for the reservoir model (i.e. those choices of input geology which give a sufficiently good fit to historical observations, relative to model discrepancy and observational error). If our search reveals no possible choices for $x^*$, this is usually taken to indicate structural problems with the underlying model, provided that we can be reasonably confident that the set X is indeed empty. This can be difficult to determine, as the input space over which we must search may be very high dimensional, the collection of outputs over which we may need to match may be very large, and each single function evaluation may take a very long time. We now describe the method followed in Craig, Goldstein, Seheult, and Smith (1997).

History matching is based on the comparison of simulator output with historical observations. If we evaluate the simulator at a value, x, then we can judge whether x is a member of X by comparing F(x) with data z. We do not expect an exact match, due to observational error and model discrepancy, and so we only require a match at some specified tolerance, often expressed in terms of the number of standard deviations between the function evaluation and the data. In practice, we cannot usually make a sufficient number of function evaluations to determine X in this way. Therefore, using the emulator, we obtain, for each x, the values E[F(x)] and Var [F(x)]. We seek to rule out regions of x space for which we expect that the evaluation F(x) is likely to be a very poor match to observed z.

For a particular choice of $x$, we may assess the potential match quality, for a single output $F_i$, by evaluating

(10.20)

$$I_{(i)}(x) = \frac{\left| \mathrm{E}[F_i(x)] - z_i \right|^2}{\mathrm{Var}[\mathrm{E}[F_i(x)] - z_i]},$$

which we term the implausibility that $F_i(x)$ would give an acceptable match to $z_i$. For given $x$, implausibility may be evaluated over the vector of outputs, over selected subvectors, or over a collection of individual components. In the latter case, the individual component implausibilities must be combined, for example by using

(10.21)

$$I_M(x) = \max_i I_{(i)}(x).$$

We may identify regions of $x$ with large $I_M(x)$ as implausible, i.e. unlikely to be a good choice for $x^*$. These values are eliminated from our set of potential history matches, X.

If we wish to assess the potential match of a collection of q outputs F, we use a multivariate implausibility measure analogous to (10.20) given by

(10.22)

$$I(x) = \frac{(\mathrm{E}[F(x)] - z)^T\, \mathrm{Var}[\mathrm{E}[F(x)] - z]^{-1}\, (\mathrm{E}[F(x)] - z)}{q},$$

where $I(x)$ is scaled to have expectation 1 if we set $x = x^*$. Unlike (10.20), the calculation of $I(x)$ from (10.22) requires the specification of the full covariance structure between all components of $z$ and $F$, for any pair of $x$ values.
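A minimal sketch of (10.20)–(10.22) in Python; here `var_diff` stands for the variance of $\mathrm{E}[F_i(x)] - z_i$ (emulator variance plus discrepancy and observational error contributions), which the caller must supply.

```python
import numpy as np

def implausibility_components(E_f, var_diff, z):
    """Univariate implausibilities (10.20) for each output component.
    var_diff[i] is Var[E[F_i(x)] - z_i]."""
    return (E_f - z) ** 2 / var_diff

def implausibility_max(E_f, var_diff, z):
    """Maximum implausibility I_M(x) of eq. (10.21)."""
    return implausibility_components(E_f, var_diff, z).max()

def implausibility_multivariate(E_f, var_diff_matrix, z):
    """Multivariate implausibility (10.22) over q outputs."""
    d = E_f - z
    return float(d @ np.linalg.pinv(var_diff_matrix) @ d) / len(z)
```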

For comparison, a direct Bayesian approach to model calibration is described in Kennedy and O’Hagan (2001). The Bayesian calibration approach involves placing a posterior probability distribution on the ‘true value’ of $x^*$. This is meaningful to the extent that the notion of a true value for $x^*$ is meaningful. In such cases, we may make a direct Bayesian evaluation over the reduced region X, based on careful sampling and emulation within this region. If our history matching has been successful, the space over which we must calibrate will have been sufficiently reduced that calibration should be tractable and effective, provided our prior specification is sufficiently careful. As a simple approximation to this calculation, we may re-weight the values in this region by some function of our implausibility measure. The Bayes linear approach to prediction that we will describe in Section 10.3.5.1 does not need a calibration stage, and so may be used directly following a successful history matching stage.

10.3.4.2 Application and results

We now use our updated emulator of $F^a(x)$ to history match the reservoir simulator. At a given well, we consider outputs corresponding to different times to be temporally correlated. Thus we apply the multivariate implausibility measure (10.22) to obtain an assessment of the potential match quality of a given input $x$ at each well. Incorporating the definitions of $z$ and $y$ from (10.1) and (10.2) into the implausibility formulation, we can write the implausibility function as

(10.23)

$$I(x) = \{\mathrm{E}_{F^a(X^a)}[F^a(x)] - z\}^T\, \{\mathrm{Var}_{F^a(X^a)}[F^a(x)] + \mathrm{Var}[e] + \mathrm{Var}[\epsilon]\}^{-1}\, \{\mathrm{E}_{F^a(X^a)}[F^a(x)] - z\},$$

which is a function of the adjusted expectations and variances of our emulator for Fa(x) given the model evaluations, combined with the corresponding observational data, z, and the covariances for the observational error, e, and the model discrepancy, ϵ.

We now specify our prior expectation and variance for the observational error $e$ and the model discrepancy $\epsilon$. We do not have any prior knowledge of biases of the simulator or the data; therefore we assign $\mathrm{E}[e] = 0$ and $\mathrm{E}[\epsilon] = 0$. It is believed that our available well production history has an associated error of approximately ±10%; we therefore assign $2 \times \mathrm{sd}(e_i) = 0.1 \times z_i$ for each emulator component $F^a_i(x)$, and we assume that there is no prior correlation between observational errors. Assessing model discrepancy is a more conceptually challenging task, requiring assessment of the difference between the model evaluated at the best, but unknown, input, $x^*$, and the true, also unknown, value of the system. For simplicity, we assign the variance of the discrepancy to be twice that of the observational error, reflecting a belief that the discrepancy has a potentially important and proportional effect. In contrast to the observational errors, we introduce a relatively strong temporal correlation over the $\epsilon_i = \epsilon_{(w,t)}$, such that $\mathrm{Corr}[\epsilon_{(w,t)}, \epsilon_{(w,t')}] = \exp\{-\theta_T (t - t')^2\}$, where we assign $\theta_T = 1/36^2$ to allow the correlation to persist across all 12 time points spanning the 36-month period. We specify such a correlation over the model discrepancy since we believe that if the simulator is, for example, substantially under-predicting the system at time $t$, then it is highly likely that it will also under-predict at time $t + 1$.
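This specification translates directly into covariance matrices. A sketch for a single well's series, under the stated choices ($2\,\mathrm{sd}(e_i) = 0.1 z_i$, $\mathrm{Var}[\epsilon] = 2\,\mathrm{Var}[e]$, Gaussian temporal correlation with $\theta_T = 1/36^2$):

```python
import numpy as np

def error_and_discrepancy_cov(z_well, times, theta_T=1.0 / 36**2):
    """Var[e] and Var[eps] for one well's production series."""
    z_well = np.asarray(z_well, dtype=float)
    sd_e = 0.05 * np.abs(z_well)          # from 2 x sd(e_i) = 0.1 x z_i
    var_e = np.diag(sd_e ** 2)            # observational errors uncorrelated
    t = np.asarray(times, dtype=float)
    corr = np.exp(-theta_T * (t[:, None] - t[None, :]) ** 2)
    var_eps = 2.0 * np.outer(sd_e, sd_e) * corr   # Var[eps] = 2 Var[e], correlated in time
    return var_e, var_eps
```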

To assess how the implausibility of input parameter choices changes with $x$, we construct a grid over the collection of active inputs spanning their feasible ranges, and we evaluate (10.23) for each of the four selected wells at every point $x$ in that grid. We then have a vector of four implausibilities for every input parameter combination in the grid. To collapse these vectors into a scalar for each $x$, we use the maximum projection (10.21), maximising over the different wells to obtain a single measure $I_M(x)$. This gives a conservative measure of the implausibility of a parameter choice $x$, since if $x$ is judged implausible on any one of the wells then it is deemed implausible for the collection. Thus the implausibility scores are combined in such a way that a particular input point $x$ is only ever considered a potential match to the simulator if it is an acceptable match across all wells.

Thus we obtain a quantification of the match quality for a representative number of points throughout the possible input space. The domain of the implausibility measure is a five-dimensional cube and, as such, it is hard to visualize the implausibility structure within that space. To address this problem, we project this hypercube of implausibility values down to two-dimensional spaces in every pair of active inputs, using the method described in Craig et al. (1997). If we partition $x$ such that $x = (x', x'')$, then we obtain a projection $\hat{I}(x')$ of $I_M(x)$ onto the subspace $x'$ of $x$ by calculating

$$\hat{I}(x') = \min_{x''} I_M(x),$$

which is a function only of x′.

$I_M(x)$ is a Mahalanobis distance over the four time points for each well. We produce the projections of the implausibility surface in Figure 10.2, colouring by appropriate $\chi^2_4$ quantiles for comparison. The first plot, in Figure 10.2(a), shows the approximate proportion of the implausibility space that would be excluded if we were to eliminate all points $x$ with $I_M(x)$ greater than a given number of standard deviations from the re-scaled $\chi^2$ distribution. For example, thresholding at three standard deviations, corresponding to $I_M(x) \geq 4$, would exclude approximately 90% of the input space. The subsequent plots, in Figure 10.2(b) to Figure 10.2(f), are a subset of the 2D projections of the implausibility surface onto pairs of active variables. It is clear from these plots that there are regions of low implausibility corresponding to values of $\phi$ less than approximately 0.8, which indicates a clear region of potential matches to our reservoir history. Higher values of $\phi$ are much more implausible and so would be unlikely history matches. Aside from $\phi$, there appears to be little obvious structure on the remaining active variables. This is reinforced by Figure 10.2(f), which is representative of all the implausibility projections in the remaining active inputs. This plot clearly shows that there is no particular set of choices for $k_x$ or $k_z$ that could reasonably be excluded from consideration without making very severe restrictions of the input space. Therefore, we define our region of potential matches, X, by the set $\{x : I_M(x) \leq 4\}$. Closer investigation revealed that this set can be well approximated by the restriction that $\phi$ be constrained to the sub-interval [0.5, 0.79].
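A sketch of this minimisation-projection in Python, assuming the implausibilities have been evaluated on a full tensor grid with one axis per active input:

```python
import numpy as np

def project_implausibility(I_grid, keep_axes):
    """Project a grid of I_M values onto a pair of inputs by minimising
    over all other axes, as in the Craig et al. (1997) projections."""
    drop = tuple(a for a in range(I_grid.ndim) if a not in keep_axes)
    return I_grid.min(axis=drop)

# Hypothetical usage: for a 5-D grid with axes (phi, crw, kx, kz, Ap),
# the (phi, crw) panel is project_implausibility(I_grid, keep_axes=(0, 1)).
```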

10.3.4.3 Re-emulation of the model

Given the reduced space of input parameters, we now re-focus our analysis on this subregion, with a view towards our next major task – forecasting. Our intention for the forecasting stage is, for each well, to use the last four time points in our existing series to forecast an additional time point located 12 months beyond the end of our original time series, at t = 48 months.


Fig. 10.2 Implausibility summary and projections for the hydrocarbon reservoir simulator.

Therefore, we no longer continue investigating the behaviour of well B2, since it ceases production shortly after our original three-year emulation period.

To forecast, we require emulators for each of the four historical time points, as well as the additional predictive point, which we now construct over the reduced space X. To build the emulators, we follow the same process as described in Section 10.3.2.2 and Section 10.3.3.1, requiring a batch of additional model runs. Of our previous batch of 1000 evaluations of $F^c(x)$, 262 were evaluated within X and so can be used at this stage. Similarly, of the 20 runs of $F^a(x)$, a total of six remain valid. These runs will be supplemented by an additional 100 evaluations of $F^c(x)$, and then by an additional 20 evaluations of $F^a(x)$.

Adopting the same strategy as Section 10.3.2.2, we construct our coarse emulators from the information contained within the large pool of model evaluations, albeit with two changes to the process. Since we have already emulated these output components in the original input space (with the exception of our predictive output), we already have some structural information, in the form of the $x_{[i]}$ and the $g_i(x_{[i]})$ for each $F^c_i(x)$, obtained from the original emulation. Rather than completely re-executing the search for active variables and basis functions, we begin our searches using the original $x_{[i]}$ and $g_i(x_{[i]})$ as the starting point. We allow the emulators to pick up any additional active variables, but not to exclude previously active inputs; and we allow basis functions in the new $x_{[i]}$ to be both added and deleted to refine the structure of the emulator.

An emulation summary for $F^c(x)$ within the reduced region X is given in Table 10.5 for wells A3H and B1. We can see that the emulator performance is still good, with high $R^2$ values indicating that the emulators still explain a large proportion of the variation in model output. Observe that many of the emulators have picked up an additional active input variable when we re-focus on the reduced input space.

Considering the emulator for $F^a(x)$, we make a similar belief specification as before to link our emulator of $F^c(x)$ to that of $F^a(x)$. We make the same choices of parameters $(\sigma_{\rho_1}, \sigma_{\rho_2}, \sigma_{\gamma_1}, \sigma_{\gamma_2}, \theta_T)$, reflecting a prior belief that the relationship between the two emulators in the reduced space is similar to that over the original space. Comparing this prior with the data via the discrepancy ratio (10.16) showed that it was again reasonably consistent, with $Dr(F^a(X^a))$ taking values of 2.14, 0.98, and 2.02, although perhaps we may be slightly understating our prior variance. The prior emulator was then updated by the runs of the accurate model. Looking at the final column of Table 10.5, we see that the emulator trend fits the data well, although the $\tilde{R}^2$ values appear to decrease over time.

Table 10.5 Re-focused emulation summary for wells A3H and B1.

Well | Time | x[i]                | No. model terms | Coarse trend R² | Accurate trend R̃²
A3H  | 24   | ϕ, crw, kx, kz      | 8               | 0.981           | 0.974
A3H  | 28   | ϕ, crw, kx, kz, Ap  | 11              | 0.971           | 0.989
A3H  | 32   | ϕ, crw, kx, kz      | 11              | 0.973           | 0.958
A3H  | 36   | ϕ, crw, kx          | 10              | 0.958           | 0.917
A3H  | 48   | ϕ, crw, kx          | 10              | 0.981           | 0.888
B1   | 24   | ϕ, crw, kx, Ap      | 11              | 0.894           | 0.982
B1   | 28   | ϕ, crw, kz, Ap      | 9               | 0.905           | 0.945
B1   | 32   | ϕ, crw, kx          | 11              | 0.946           | 0.953
B1   | 36   | ϕ, crw, kx          | 11              | 0.953           | 0.927
B1   | 48   | ϕ, crw, kx, Ap      | 11              | 0.941           | 0.880

the re-focused emulator are also given in the bottom two rows of Table 10.4, which show more variation in the coefficients as we move from coarse to accurate in the reduced space, and also show the presence of an additional active input.
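Returning to the prior-data consistency check mentioned above: assuming the standard Bayes linear form of the discrepancy ratio (Goldstein and Wooff, 2007) for equation (10.16), its computation can be sketched as follows. The numerical values here are illustrative, not the chapter's.

```python
import numpy as np

def discrepancy_ratio(d, prior_mean, prior_var):
    """Discrepancy ratio Dr(d): the squared Mahalanobis distance of the
    observed data from its prior mean, divided by the rank of the prior
    variance matrix. A priori Dr has expectation 1, so values well above
    (below) 1 suggest the prior variance is under- (over-) stated."""
    r = d - prior_mean
    dis = r @ np.linalg.pinv(prior_var) @ r
    return dis / np.linalg.matrix_rank(prior_var)

# Illustrative check of a prior emulator for Fa against 20 accurate runs:
rng = np.random.default_rng(0)
prior_mean = np.zeros(20)                 # prior emulator means at the runs
prior_var = 4.0 * np.eye(20)              # prior emulator variances
d = rng.normal(0.0, 2.8, size=20)         # observed accurate-run residuals
print(discrepancy_ratio(d, prior_mean, prior_var))  # around (2.8/2)^2 = 2
```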

10.3.5 Forecasting

10.3.5.1 Methodology – Bayes linear prediction

We wish to predict the collection yp of future well production using the observed well history zh. This is achieved by making an appropriate specification for the joint mean and variance of the collection (yp, zh); our prediction for yp using the history zh is then the adjusted expectation and variance of yp given zh. This Bayes linear approach to forecasting is discussed extensively in Craig et al. (2001).

The Bayes linear forecast equations for yp given zh are given by

(10.24)
\mathrm{E}_{z_h}[y_p] = \mathrm{E}[y_p] + \mathrm{Cov}[y_p, z_h]\,\mathrm{Var}[z_h]^{-1}\,(z_h - \mathrm{E}[z_h]),

(10.25)
\mathrm{Var}_{z_h}[y_p] = \mathrm{Var}[y_p] - \mathrm{Cov}[y_p, z_h]\,\mathrm{Var}[z_h]^{-1}\,\mathrm{Cov}[z_h, y_p].

Given the relations (10.1) and (10.2), we can express this forecast in terms of the 'best' simulator run $F^* = F^a(x^*)$, the model discrepancy ϵ, the observed history zh, and the observational error e. From (10.2) we write the expectation and variance of y as $\mathrm{E}[y] = \mathrm{E}_{F^a(X_a)}[F^*] + \mathrm{E}[\epsilon]$ and $\mathrm{Var}[y] = \mathrm{Var}_{F^a(X_a)}[F^*] + \mathrm{Var}[\epsilon]$, namely the adjusted expectation and variance of the best accurate simulator run F*, given the collection of available simulator evaluations Fa(Xa), plus the model discrepancy. For simplicity of presentation, we introduce the shorthand notation $\mu^* = \mathrm{E}_{F^a(X_a)}[F^*]$, $\Sigma^* = \mathrm{Var}_{F^a(X_a)}[F^*]$, $\Sigma^\epsilon = \mathrm{Var}[\epsilon]$ and $\Sigma^e = \mathrm{Var}[e]$, and we again use the subscripts h, p to indicate the relevant subvectors and submatrices of these quantities corresponding to the historical and predictive components. We also assume E[ϵ] = 0 to reflect the belief that there are no systematic biases in the model known a priori. The Bayes linear forecast equations are now fully expressed as follows

(10.26)
\mathrm{E}_{z_h}[y_p] = \mu^*_p + (\Sigma^*_{ph} + \Sigma^\epsilon_{ph})\,(\Sigma^*_h + \Sigma^\epsilon_h + \Sigma^e_h)^{-1}\,(z_h - \mu^*_h),

(10.27)
\mathrm{Var}_{z_h}[y_p] = (\Sigma^*_p + \Sigma^\epsilon_p) - (\Sigma^*_{ph} + \Sigma^\epsilon_{ph})\,(\Sigma^*_h + \Sigma^\epsilon_h + \Sigma^e_h)^{-1}\,(\Sigma^*_{hp} + \Sigma^\epsilon_{hp}).

Given a specification for ϵ and e, we can assess the first- and second-order specifications $\mathrm{E}[F^a_i(x)]$ and $\mathrm{Cov}[F^a_i(x), F^a_{i'}(x')]$ from our emulator of Fa for every x, x′ ∈ X. We may therefore obtain the mean and variance of F* = Fa(x*) by first conditioning on x* and then integrating out with respect to an appropriate prior specification over X for x*. Hence $\mathrm{E}_{F^a(X_a)}[F^*]$ and $\mathrm{Var}_{F^a(X_a)}[F^*]$ are calculated as the expectation and variance (with respect to our prior belief specification about x*) of our adjusted beliefs about Fa(x) at x = x*, given the model evaluations Fa(Xa). Specifically, this calculation requires the computation of the expectations, variances and covariances of all $g_{ij}(x^*_{[i]})$ and $\omega^a_i(x^*)$, which, in general, may require substantial numerical integration.

This analysis makes predictions without a preliminary calibration stage. The approach is therefore tractable even for large systems, as are search strategies to identify collections of simulator evaluations chosen to minimize adjusted forecast variance. The approach is likely to be effective when global variation outweighs local variation and when the collections of global functional forms g(x) for $F^a_h$ and $F^a_p$ are similar. It does not exploit the local information relevant to the predictive quantities, as represented by the residual terms $\omega^a_i(x)$ in the component emulators. If some quantities that we wish to predict have substantial local variation, then we may introduce a Bayes linear calibration stage before forecasting, whilst retaining tractability (Goldstein and Rougier, 2006).

10.3.5.2 Application and results

We now apply the forecasting methodology, as described in Section 10.3.5.1, to the three wells under consideration from the hydrocarbon reservoir model. The goal of this stage is to predict the collection of future system output, yp, using the available historical observations zh. For a given well in the hydrocarbon reservoir, we consider the vector of four average oil production rates at times t = 24, 28, 32 and 36 as historical values; the quantity to be predicted is the corresponding production rate observed 12 months later, at time t = 48. As we actually have observations zp on yp, these act as a check on the quality of our assessments.

By history matching the hydrocarbon reservoir, we have determined a region X in which we believe it is feasible that an acceptable match x* should lie. For our forecast, we consider x* equally likely to be any input point contained within the region X; that is, we take our prior for x* to be uniform over X. As we take the gij(x) to be polynomials in x, the expectations, variances and covariances of the gij(x*) can be found analytically from the moments of a multivariate uniform random variable, which greatly simplifies the calculation of μ* and Σ*, the adjusted mean and variance for F* = Fa(x*). We now refine our previously generous specification for Var[e]. Since each output quantity is an average of four monthly values, we now take Var[e] to be 1/4 of its previous value, reflecting the reduced uncertainty associated with the mean value.
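For instance, if the prior for x* is uniform over a product of intervals (so that coordinates are independent), the required moments of polynomial trend terms reduce to products of one-dimensional uniform moments, as in the sketch below. For a general region X, independence is an assumption, and the moments would instead be estimated numerically, for example by Monte Carlo over points retained in X.

```python
import numpy as np

def uniform_moment(k, a=-1.0, b=1.0):
    """E[x^k] for x ~ Uniform(a, b)."""
    return (b**(k + 1) - a**(k + 1)) / ((k + 1) * (b - a))

def monomial_mean(powers, a=-1.0, b=1.0):
    """E[prod_j x_j^{k_j}] for independent Uniform(a, b) coordinates."""
    return np.prod([uniform_moment(k, a, b) for k in powers])

def monomial_cov(p1, p2, a=-1.0, b=1.0):
    """Cov between two monomial trend terms with power vectors p1, p2."""
    return (monomial_mean(np.add(p1, p2), a, b)
            - monomial_mean(p1, a, b) * monomial_mean(p2, a, b))

# e.g. Var[x1^2] under Uniform(-1, 1): E[x^4] - E[x^2]^2 = 1/5 - 1/9 = 4/45
print(monomial_cov([2, 0], [2, 0]))
```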

Before we can obtain a prediction for yp given zh, we require an appropriate belief specification for the model discrepancy ϵ, both at the past time points and at the future time point to be predicted. The role of the model discrepancy is important in forecasting, as it quantifies the amount by which we allow the prediction to differ from our mean simulator prediction, μ* = E[F*], in order to move closer to the true system value yp. If the specified discrepancy variance is too small, then we will obtain over-confident and potentially inaccurate forecasts located in the neighbourhood of μ*. If the discrepancy variance is too large, then the forecast variances could be unfavourably large, or we could over-correct and move too far from yp. We now briefly consider the specification of Var[ϵ].

Consider the plots in Figure 10.3, depicting the outputs from the reservoir model over the time period of our predictions. Observe that for time points 24 to 36 at wells B1 and B5, the mean values μ* (indicated by the solid black circles) underestimate the observational data (indicated by the thick solid line), whereas at well A3H the simulator overestimates observed production. Furthermore, for well A3H the size of |μ* − z| is a decreasing function of time, for well B1 this distance increases over time, and for well B5 |μ* − z| appears to be roughly constant.

Given the specification for the observational error used in Section 10.3.4.2 and the observed history, we can compare Var[eh] to the observed values of $(\mu^* - z_h)^2$ at each well and historical time point to obtain a simple order-of-magnitude data assessment for the discrepancy variance at the historical time points. To obtain our specification for Var[ϵ], we took a weighted combination of prior information, in the form of our belief specification for Var[ϵ] from Section 10.3.4.2, and these order-of-magnitude numerical assessments. As the value of zp is unknown at the prediction time t = 48, we must make a specification for Var[ϵp] in the absence of any sample information. To make this assessment, we performed simple curve fitting to extrapolate the historical discrepancy variances to the forecast point t = 48. The resulting specification for Var[ϵ] is given in Table 10.6; the correlation structure over the discrepancies is the same as in Section 10.3.4.2.
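The order-of-magnitude assessment and extrapolation might be carried out as in the following sketch. The blending weight, the linear-in-time fit and all data values are illustrative assumptions standing in for choices the chapter leaves unspecified.

```python
import numpy as np

t_hist = np.array([24., 28., 32., 36.])

# Illustrative stand-ins for one well's quantities.
mu_star_h = np.array([2200., 2150., 2050., 1900.])   # adjusted means
z_h = np.array([2000., 1980., 1900., 1800.])         # observed history
var_e_h = np.full(4, 100.0**2)                       # observational error
var_eps_prior = np.full(4, 150.0**2)                 # prior Var[eps]

# Whatever part of (mu* - z)^2 the observational error cannot explain
# is attributed, order-of-magnitude, to the model discrepancy.
var_eps_data = np.maximum((mu_star_h - z_h)**2 - var_e_h, 0.0)

# Weighted blend of prior specification and data assessment.
w = 0.5
var_eps_h = w * var_eps_prior + (1 - w) * var_eps_data

# Simple curve fit (here linear in t) to extrapolate to t = 48.
coef = np.polyfit(t_hist, var_eps_h, deg=1)
var_eps_p = max(np.polyval(coef, 48.0), 0.0)
```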

Given all these specifications, we evaluate the forecast equations (10.26) and (10.27). The results of the forecasts are presented in Table 10.7 alongside the corresponding prior values and the actual observed production zp at t = 48.

Fig. 10.3 Simulator outputs, observational data and forecasts for each well. The solid lines indicate z with error bounds of 2 × sd(e). The dotted and dashed lines represent the maximum and minimum values of the runs of Fc(x) and Fa(x), respectively, in X. The solid black dots correspond to μ*. The forecast is indicated by a hollow circle with attached error bars.

Table 10.6 Specified values of 2 × sd ϵ(w, t).

Well | t = 24 | t = 28 | t = 32 | t = 36 | t = 48
A3H  | 504.9  | 390.4  | 124.5  | 71.5   | 14.7
B1   | 130.5  | 142.0  | 260.4  | 245.1  | 408.0
B5   | 284.3  | 239.3  | 305.8  | 214.7  | 260.8

The forecasts and their errors are also shown in Figure 10.3 by a hollow circle with attached error bars.

We can interpret the prediction Ezh[yp] as the forecast from the simulator for yp, modified in the light of the discrepancy between the prior expectation E[yh] and the observed history zh. In the case of the wells tabulated, the simulator for well A3H over-estimates the value of zh during the period t = 24, …, 36, resulting in a negative discrepancy and a consequent downward correction to our forecast. Interestingly, however, in the intervening period the simulator changes from over-estimating to under-estimating observed well production, and so on the basis of our observed history alone we under-predict the oil production rate for this well. The wells B1 and B5 behaved in a more consistent manner: persistent under-prediction of the observed history is reflected by an upward correction to our forecast. Note that whilst some of the best runs of our computer model differ substantially from zp, the corrections made by the model discrepancy result in all of our forecast intervals lying within the measurement error of the data.

In practice, the uncertainty analysis for a reservoir model is an iterative process based on monitoring the sizes of our final forecast intervals and investigating the sensitivity of our predictions to the magnitude of the discrepancy variance and correlation, for example by repeating the forecasts using the discrepancy variance values from Table 10.6 scaled by some constant α, for various values of α, and varying the degree of temporal correlation across the discrepancy terms (a sketch of such a sensitivity loop is given after Table 10.7). If we obtain forecast intervals which are sufficiently narrow to be useful to reservoir engineers, then we can end our analysis at this

Table 10.7 Forecasts of yp at t = 48 using zh for three wells in the hydrocarbon reservoir.

                          | A3H   | B1     | B5
Observation: zp           | 190.0 | 1598.6 | 227.0
Observation: 2 × sd(e)    | 19.0  | 159.9  | 22.7
Prior: E[yp]              | 170.2 | 1027.1 | 69.4
Prior: 2 × sd(yp)         | 62.9  | 595.1  | 270.3
Forecast: Ezh[yp]         | 170.3 | 1207.1 | 299.8
Forecast: 2 × sdzh(yp)    | 39.1  | 348.9  | 167.4

stage. If, however, our prediction intervals are unhelpfully large, then we return to and repeat earlier stages of the analysis. For example, introducing additional wells and other aspects of the historical data into our analysis could further reduce the size of X, allowing us to re-focus again, reduce the uncertainties attached to our emulators, and so narrow the forecast intervals.
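The sensitivity loop referred to above might generate its candidate discrepancy covariances as in the sketch below, each candidate then being passed through the forecast equations (10.26) and (10.27). The exponential temporal correlation and the grids of values are our assumptions.

```python
import numpy as np

# Half the "2 x sd" entries of Table 10.6 for well B1, at t = 24,...,48.
sd_eps = np.array([130.5, 142.0, 260.4, 245.1, 408.0]) / 2
t = np.array([24., 28., 32., 36., 48.])

def disc_cov(alpha, corr_length):
    """Scaled discrepancy covariance with exponential temporal
    correlation (an assumed stand-in for the structure specified in
    Section 10.3.4.2)."""
    R = np.exp(-np.abs(np.subtract.outer(t, t)) / corr_length)
    return alpha * np.outer(sd_eps, sd_eps) * R

candidates = {(alpha, ell): disc_cov(alpha, ell)
              for alpha in (0.5, 1.0, 2.0, 4.0)
              for ell in (6.0, 12.0, 24.0)}
# Each candidate would replace the discrepancy variance in the forecast
# equations, and the resulting 2-sd forecast interval widths compared.
```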

Appendix

A. Broader context and background

The Bayes linear approach is similar in spirit to conventional Bayes analysis, but derives from a simpler system for prior specification and analysis, and so offers a practical methodology for analysing partially specified beliefs for large problems. The approach uses expectation rather than probability as the primitive for quantifying uncertainty; see De Finetti (1974, 1975). In the Bayes linear approach, we make direct prior specifications for that collection of means, variances and covariances which we are both willing and able to assess. Given two random vectors B, D, the adjusted expectation for element Bi, given D, is the linear combination $a_0 + a^T D$ minimising $\mathrm{E}[(B_i - a_0 - a^T D)^2]$ over choices of $a_0$ and $a$. The adjusted expectation vector $\mathrm{E}_D[B]$ is evaluated as

\mathrm{E}_D[B] = \mathrm{E}[B] + \mathrm{Cov}[B, D]\,(\mathrm{Var}[D])^{-1}\,(D - \mathrm{E}[D]).

(If Var[D] is not invertible, then we use an appropriate generalised inverse.) The adjusted variance matrix for B given D is

\mathrm{Var}_D[B] = \mathrm{Var}[B - \mathrm{E}_D[B]] = \mathrm{Var}[B] - \mathrm{Cov}[B, D]\,(\mathrm{Var}[D])^{-1}\,\mathrm{Cov}[D, B].
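In code, the adjustment is a single piece of linear algebra; a minimal sketch, using a generalised inverse as noted above, might be:

```python
import numpy as np

def adjust(E_B, var_B, E_D, var_D, cov_BD, d):
    """Adjusted expectation and variance of B given the observation D = d.
    np.linalg.pinv supplies the generalised inverse if Var[D] is singular."""
    W = cov_BD @ np.linalg.pinv(var_D)
    return E_B + W @ (d - E_D), var_B - W @ cov_BD.T

# Two quantities adjusted by two correlated observations (toy numbers):
E_B, var_B = np.zeros(2), np.eye(2)
E_D, var_D = np.zeros(2), np.array([[1.0, 0.5], [0.5, 1.0]])
cov_BD = np.array([[0.6, 0.3], [0.2, 0.4]])
print(adjust(E_B, var_B, E_D, var_D, cov_BD, np.array([1.0, -0.5])))
```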

Stone (1963) and Hartigan (1969) were among the first to discuss the role of such assessments in partial Bayes analysis. A detailed account of Bayes linear methodology is given in Goldstein and Wooff (2007), emphasizing the interpretive and diagnostic cycle of subjectivist belief analysis.

The basic approach to statistical modelling within this formalism is through second order exchangeability. An infinite sequence of vectors is second-order exchangeable if the mean, variance and covariance specification is invariant under permutation. Such sequences satisfy the second order representation theorem which states that each element of such a sequence may be decomposed as the uncorrelated sum of an underlying ‘population mean’ quantity and an individual residual, where the residual quantities are themselves uncorrelated with zero mean and equal variances. This is similar in spirit to de Finetti’s representation theorem for fully exchangeable sequences but is sufficiently weak, in the requirements for prior specification, that it allows us to construct (p. 269) statistical models directly from simple collections of judgements over observable quantities.
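In symbols, and following the standard form of this representation in Goldstein and Wooff (2007), each element of a second-order exchangeable sequence X1, X2, … may be written as

X_i = \mathcal{M}(X) + \mathcal{R}_i(X), \qquad \mathrm{E}[\mathcal{R}_i(X)] = 0, \quad \mathrm{Var}[\mathcal{R}_i(X)] = \Sigma,

with $\mathrm{Cov}[\mathcal{R}_i(X), \mathcal{R}_j(X)] = 0$ for i ≠ j and $\mathrm{Cov}[\mathcal{M}(X), \mathcal{R}_i(X)] = 0$.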

Within the usual Bayesian view, adjusted expectation offers a simple, tractable approximation to conditional expectation, and adjusted variance is a strict upper bound for expected posterior variance, over all prior specifications consistent with the given moment structure. The approximations are exact in certain special cases, and in particular if the joint probability distribution of B, D is multivariate normal. Adjusted expectation is numerically equivalent to conditional expectation when D comprises the indicator functions for the elements of a partition, i.e. each Di takes value one or zero and precisely one element Di will equal one. We may therefore view adjusted expectation as a generalization of de Finetti’s approach to conditional expectation based on ‘called-off’ quadratic penalties, where we remove the restriction that we may only condition on the indicator functions for a partition. Geometrically, we may view each individual random quantity as a vector, and construct the natural inner product space based on covariance. In this construction, the adjusted expectation of a random quantity Y, by a further collection of random quantities D, is the orthogonal projection of Y into the linear subspace spanned by the elements of D and the adjusted variance is the squared distance between Y and that subspace. This formalism extends naturally to handle infinite collections of expectation statements, for example those associated with a standard Bayesian analysis.
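Concretely, and assuming the usual construction of this inner product space, one takes $\langle U, V \rangle = \mathrm{E}[UV]$ on the span of the unit constant and the elements of D, so that

\mathrm{E}_D[Y] = \text{the orthogonal projection of } Y \text{ onto } \mathrm{span}\{1, D_1, \ldots, D_k\}, \qquad \mathrm{Var}_D[Y] = \|\,Y - \mathrm{E}_D[Y]\,\|^2.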

A more fundamental interpretation of the Bayes linear approach derives from the temporal sure preference principle, which says, informally, that if it is necessary that you will prefer a certain small random penalty A to C at some given future time, then you should not now have a strict preference for penalty C over A. A consequence of this principle is that you must judge now that your actual posterior expectation, $\mathrm{E}_T[B]$, at time T when you have observed D, satisfies the relation $\mathrm{E}_T[B] = \mathrm{E}_D[B] + R$, where R has, a priori, zero expectation and is uncorrelated with D. If D represents a partition, then $\mathrm{E}_D[B]$ is equal to the conditional expectation given D, and R has conditional expectation zero for each member of the partition. In this view, the correspondence between actual belief revisions and formal analysis based on partial prior specifications is entirely derived through stochastic relationships of this type.

Acknowledgements

This study was produced with the support of the Basic Technology initiative as part of the Managing Uncertainty for Complex Models project. We are grateful to Energy SciTech Limited for the use of the reservoir simulator software and for providing the Gullfaks reservoir model.

References

Craig, P. S., Goldstein, M., Rougier, J. C. and Seheult, A. H. (2001). Bayesian forecasting for complex systems using computer simulators. Journal of the American Statistical Association, 96, 717–729.

Craig, P. S., Goldstein, M., Seheult, A. H. and Smith, J. A. (1996). Bayes linear strategies for history matching of hydrocarbon reservoirs. In Bayesian Statistics 5 (eds J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith), Clarendon Press, Oxford, pp. 69–95.

Craig, P. S., Goldstein, M., Seheult, A. H. and Smith, J. A. (1997). Pressure matching for hydrocarbon reservoirs: a case study in the use of Bayes linear strategies for large computer experiments. In Case Studies in Bayesian Statistics (eds C. Gatsonis, J. S. Hodges, R. E. Kass, R. McCulloch, P. Rossi and N. D. Singpurwalla), Springer-Verlag, New York, vol. 3, pp. 36–93.

Craig, P. S., Goldstein, M., Seheult, A. H. and Smith, J. A. (1998). Constructing partial prior specifications for models of complex physical systems. Applied Statistics, 47, 37–53.

Cressie, N. (1991). Statistics for Spatial Data. John Wiley, New York.

Cumming, J. A. and Wooff, D. A. (2007). Dimension reduction via principal variables. Computational Statistics & Data Analysis, 52, 550–565.

Currin, C., Mitchell, T., Morris, M. and Ylvisaker, D. (1991). Bayesian prediction of deterministic functions with applications to the design and analysis of computer experiments. Journal of the American Statistical Association, 86, 953–963.

De Finetti, B. (1974). Theory of Probability, Vol. 1. John Wiley, New York.

De Finetti, B. (1975). Theory of Probability, Vol. 2. John Wiley, New York.

Goldstein, M. and Rougier, J. C. (2006). Bayes linear calibrated prediction for complex systems. Journal of the American Statistical Association, 101, 1132–1143.

Goldstein, M. and Rougier, J. C. (2008). Reified Bayesian modelling and inference for physical systems. Journal of Statistical Planning and Inference, 139, 1221–1239.

Goldstein, M. and Wooff, D. A. (2007). Bayes Linear Statistics: Theory and Methods. John Wiley, New York.

Hartigan, J. A. (1969). Linear Bayes methods. Journal of the Royal Statistical Society, Series B, 31, 446–454.

Kennedy, M. C. and O’Hagan, A. (2000). Predicting the output from a complex computer code when fast approximations are available. Biometrika, 87, 1–13.

Kennedy, M. C. and O’Hagan, A. (2001). Bayesian calibration of computer models. Journal of the Royal Statistical Society, Series B, 63, 425–464.

McKay, M. D., Beckman, R. J. and Conover, W. J. (1979). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21, 239–245.

O’Hagan, A. (2006). Bayesian analysis of computer code outputs: a tutorial. Reliability Engineering and System Safety, 91, 1290–1300.

Qian, Z. and Wu, C. F. J. (2008). Bayesian hierarchical modeling for integrating low-accuracy and high-accuracy experiments. Technometrics, 50, 192–204.

Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P. (1989). Design and analysis of computer experiments. Statistical Science, 4, 409–435.

Santner, T. J., Williams, B. J. and Notz, W. I. (2003). The Design and Analysis of Computer Experiments. Springer-Verlag, New York.

Stone, M. (1963). Robustness of non-ideal decision procedures. Journal of the American Statistical Association, 58, 480–486.