## Abstract

This article presents an integrated workflow to model the evolution of ancient turbidity currents on a 3D structurally reconstructed palaeo-seafloor, allowing ancient turbidite sediment distributions to be estimated. Effective use of such approaches requires efficient model-inversion procedures so that model parameters (e.g. flow dimensions, densities etc.) can be estimated from any available data. It is shown that a directed Monte Carlo approach (i.e. a simple genetic search algorithm) is very effective. A case study of a Mesozoic prospect in the UK North Sea shows the power of these methods to discriminate between potentially attractive sediment-source locations. The main power of this approach lies in its ability to exclude many, otherwise attractive, sedimentation scenarios.

## INTRODUCTION

The increasing importance of turbidite reservoirs over the last 30 years (Weimer *et al*. 2000) has resulted in the development of seismic-based techniques for the evaluation of sand distribution in turbidity current deposits. Seismic attribute maps have proved to be particularly effective, as have isopach maps formed between closely spaced horizons (see Fonessu (2003) for excellent examples of both these techniques). This paper proposes an alternative approach which does not require the exceptionally high data quality necessary for attribute/isopach-based techniques. In outline, the workflow involves:

- 3D seismic interpretation of major horizons and faults;
- structural reconstruction of the palaeo-seafloor at the time of turbidite deposition;
- modelling of turbidity current flow across the palaeo-seafloor to determine areas of sand deposition;
- re-imposition of the structural deformation to move the predicted turbidite deposits back to their present-day locations.

This paper will describe this methodology in more detail and demonstrate its application to a specific North Sea prospect.

Three-dimensional structural reconstruction of palaeosurfaces is now a well-established technique (Barr 1985; Buddin *et al*. 1997; Williams *et al*. 1997; Rouby *et al*. 2000; Yin & Groshong 2006), which has been developed from 2D ‘section balancing’ (see Buchanan (1996) for a review). The objective is to reconstruct the palaeogeometry of a volume by successively stripping off each top layer and removing the effects of faulting, folding and compaction on all remaining horizons.

Forward modelling of particulate gravity currents is equally well established in the geological literature, where it has been used for modelling pyroclastic flows and debris flows as well as for turbidity currents (Parker *et al*. 1986; Dade & Huppert 1995; Mulder *et al*. 1997; Huppert 1998; Iverson & Denlinger 2001; Pratson *et al*. 2001; Waltham & Davison 2001; Bursik *et al*. 2005; Sheridan *et al*. 2005). However, stratigraphic modelling in general has not yet established itself as a widely used technique in hydrocarbon exploration. It should be noted that in forward modelling the distribution of deposits is predicted from supplied parameters, such as flow volume and density. In inverse modelling, on the other hand, parameters are determined from knowledge of the sediment distribution. In practice, both forward and inverse modelling are needed since modelling parameters are first estimated from any limited data that may be available and then forward modelling allows the consequences of these parameters to be determined for areas lacking data.

The aim of this paper is to present a general methodology for using stratigraphic models to reduce risk. This paper concentrates upon an inverse modelling strategy which is quite general and could be adapted for use with many other forward models. The details of this particular stratigraphic model are therefore not central to the argument but, for completeness, are outlined in the Appendices. The next section describes a specific Moray Firth (UK North Sea) exploration problem and outlines the structural reconstruction used as input for subsequent stratigraphic modelling. The paper then discusses the inverse modelling approach and shows how it resolves Moray Firth exploration issues.

## STRUCTURAL RECONSTRUCTION AND THE EXPLORATION PROBLEM

Client confidentiality prevents specification of the precise location and age of the dataset, which covers a Mesozoic prospect lying near the Moray Firth region of the North Sea off northeast Scotland. The 3D seismic data cover an area of *c*. 20×25 km, within which ten key reflectors were picked and depth-converted by the client company.

Structural reconstruction was undertaken by removing the top interval and decompacting the remaining beds (Sclater & Christie 1980). The effect of thermal subsidence and isostasy was removed by bulk rotation of the entire model. The deformation at every new upper surface was restored using vertical shear and fault block translation and rotation. This sequence of operations was repeated for each successive upper surface until the formation of interest reached the seafloor. Figure 1 shows the resulting accommodation space estimate. Note that this represents the maximum bathymetric variation that could have existed at this point in time assuming that space had been created rapidly and slowly filled in. Figure 1 also shows the locations of five wells for which there are net-to-gross (NTG) estimates (defined as fractional thickness of clean sand) in the overlying formation, based upon interpretation of wireline logs. The NTG values themselves are given in Table 1. Note that wells C and D have good sand content but that the remaining wells contain no clean sand at all. The exploration issue was to determine a sediment transport path to the high NTG wells (C, D) that did not place sand in the remaining wells (A, B, E). The ultimate objective was to improve understanding of the exploration risk associated with further drilling in the area.
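The decompaction step can be illustrated with a short numerical sketch. This assumes the exponential porosity–depth relation of Sclater & Christie (1980), φ(*z*)=φ_{0}e^{−*z*/λ}; the function names and the shale-like constants (φ_{0}=0.6, λ=2000 m) are illustrative assumptions, not values from the study.

```python
import math

def solid_thickness(z1, z2, phi0, lam):
    """Grain (solid) thickness of a layer between depths z1 < z2 (metres),
    assuming exponential porosity phi(z) = phi0*exp(-z/lam)."""
    return (z2 - z1) - phi0 * lam * (math.exp(-z1 / lam) - math.exp(-z2 / lam))

def decompact(z1, z2, new_z1, phi0=0.6, lam=2000.0, tol=1e-9):
    """New base depth when the layer [z1, z2] is restored so that its top
    sits at new_z1, conserving solid thickness (Sclater & Christie 1980).
    Solved by fixed-point iteration (contraction since phi0 < 1)."""
    target = solid_thickness(z1, z2, phi0, lam)
    new_z2 = new_z1 + (z2 - z1)  # initial guess: no porosity change
    for _ in range(200):
        nxt = new_z1 + target + phi0 * lam * (
            math.exp(-new_z1 / lam) - math.exp(-new_z2 / lam))
        if abs(nxt - new_z2) < tol:
            break
        new_z2 = nxt
    return new_z2
```

For example, a 500 m layer restored from 3–3.5 km burial depth to the palaeo-seafloor thickens to roughly 860 m while its grain content is unchanged.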

The key features of the resulting surface are a general deepening towards the north and the existence of a possible north–south channel supplying sand to wells C and D. A flow starting at the south-entry point in Figure 1 and moving along this channel towards wells C and D could result in high-quality sand deposits within this channel and so it is important to investigate whether such a scenario is plausible. Figure 2 shows a forward model of such a flow. However, note that there are no significant barriers to this flow reaching wells A, B and E. Hence, the question becomes: does the absence of sand in wells A, B and E preclude the possibility that sediment was supplied to wells C and D by a flow along the channel? If this sediment path is not viable then an alternate sediment supply for locations C and D is needed. The stratigraphic modelling therefore needs to investigate two alternatives: (1) sediment supply is from the south and deposits good quality sand in the channel; (2) sediment supply is from close to the sand-rich wells and never enters the channel. Flow-entry points for both scenarios are indicated in Figure 1.

## INVERSE MODELLING OF TURBIDITY CURRENTS

The forward model is controlled by the seafloor bathymetry together with the 14 parameters described in Table 2. If these parameters are given, then the forward model can be used to predict the coarse-fraction distribution across the modelled area for comparison with the known NTG in the wells. Note that this comparison is valid only if coarse deposits are predominantly clean (i.e. not contaminated by simultaneous fine deposition) and if the multiple, successive flows responsible for a particular turbidite unit are fairly similar to one another (i.e. the properties of a single deposit are reasonably representative of the bulk properties of stacked multiple deposits). The first assumption could be avoided by using a more sophisticated measure of forward-model NTG but the second assumption is fundamental since modelling of (possibly) hundreds of very different flows would require the specification of thousands of parameters (i.e. 14 for each flow).

For the specific case of the Moray Firth problem investigated in this paper, it might be thought that finding 14 parameters given five constraints (i.e. the NTG values at the well locations) is a severely under-constrained problem. However, all 14 parameters are constrained to a greater or lesser extent by physical reasonableness. For example, a flow with an initial thickness of 10 km would not be plausible. Given loose constraints, it is perfectly possible for a nonlinear problem to have no solutions at all (e.g. the parabola *y=x*^{2} has no solution for real *x* given the loose constraint that *y* must be negative). On the other hand, if a solution does exist it is likely to be highly non-unique (e.g. consider the foregoing parabola example but with the constraint that *y* be positive). Hence, the objective with modelling is not to determine the precise set of parameters that produced a given deposit but, rather, to establish whether *any* set of parameters could produce the observed deposit given a specified sediment-input location.

Inverse modelling starts by defining an objective function, i.e. a measure of misfit between model and data. In the Moray Firth example the constraints were in the form of NTG estimates and so an objective function given by the root mean square (rms.) error (rmse) was defined:

rmse = [(1/*N*)Σ_{i}(*w*_{i} − *m*_{i})^{2}]^{1/2} (1)

where *w*_{i} is the NTG at the *i*th well, *m*_{i} is the modelled coarse-fraction at the same location and *N* is the number of wells. This, in turn, was calculated using

*m*_{i} = *C*_{i}/(*C*_{i} + *F*_{i} + *P*) (2)

where *C*_{i} is the thickness of coarse sediment modelled at the location of the *i*th well, *F*_{i} is the corresponding fine thickness and *P* is the thickness of pelagic background sediment deposited between flows. Note that pelagic sedimentation is therefore assumed to have a spatially uniform thickness and to be entirely fine grained. The grain diameter defining the threshold between coarse and fine sediments is user defined (see Table 2) and is assumed to be known.

Hence, given a particular model output (controlled by all the parameters in Table 2 except *P*), the value for *P* which minimizes the rms. error can be found and this is the rms. error for that model run. In practice, equations (1) and (2) can be evaluated very rapidly for a large number of different values for *P* and so a simple exhaustive search approach is viable. This has the advantages of being very robust, being unaffected by local minima and being easy to constrain within geologically plausible limits (e.g. 0–10 m thick pelagic deposits).
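Equations (1) and (2), and the exhaustive search over *P*, are cheap enough to sketch directly. The following is a minimal illustration; the function name, grid resolution and the tiny epsilon guarding sediment-free wells are assumptions, not part of the published code.

```python
import numpy as np

def rmse_for_model(C, F, w, P_range=(0.0, 10.0), n=1001):
    """Exhaustively scan pelagic thickness P and return (best_P, best_rmse)
    for one forward-model run.
    C, F: modelled coarse/fine thicknesses at each well; w: observed NTG."""
    C, F, w = (np.asarray(a, float) for a in (C, F, w))
    P = np.linspace(P_range[0], P_range[1], n)      # candidate pelagic thicknesses
    # eq. (2), broadcast over all wells and all candidate P values at once
    m = C[:, None] / (C[:, None] + F[:, None] + P[None, :] + 1e-12)
    # eq. (1): rms misfit for each candidate P
    err = np.sqrt(np.mean((w[:, None] - m) ** 2, axis=0))
    i = int(np.argmin(err))
    return float(P[i]), float(err[i])
```

Because the scan is vectorized over the whole *P* grid, this inner search costs almost nothing relative to a forward-model run.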

This leaves the problem of minimizing rms. error with respect to the remaining 13 parameters in Table 2 (i.e. excluding pelagic thickness). Of these, three are assumed to be known (the sand cut-off, the grain density and the ambient water density) and so this leaves ten parameters to be found. Exhaustive search is not feasible since searching over just five values for each parameter would require 5^{10} ≈ 10 million simulations. A typical forward-model simulation takes of the order of one minute (on a mid-range PC) and so searching would take approximately 20 years even at this very crude parameter resolution. Gradient-descent techniques (Press *et al*. 2002) are likely to become trapped in shallow, local minima given the highly nonlinear nature of the problem and so more robust approaches are needed. Here, a modified Monte Carlo approach was adopted.

Starting with a user-defined range for all parameters, the forward model was run repeatedly using sets of parameters obtained by randomly choosing values within the allowed range. The power of this approach was increased greatly by making a modification that changes the Monte Carlo approach into a very simple genetic search algorithm. The search range within which parameters were chosen was decreased, as the simulations proceeded, with the reduced range centred upon the best set of parameters found so far. Hence, an initially large parameter search-volume was gradually collapsed onto a good solution.
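This directed Monte Carlo (shrinking-range) search can be sketched in a few lines. The function name and the per-iteration contraction factor are illustrative assumptions; the published inversion wraps the full forward model, whereas here any objective function can be supplied.

```python
import random

def directed_monte_carlo(objective, bounds, n_iter=200, shrink=0.985, seed=0):
    """Simple genetic search: sample each parameter uniformly within a range
    that gradually collapses around the best solution found so far.
    bounds: list of (lo, hi) tuples, one per parameter."""
    rng = random.Random(seed)
    centre = [(lo + hi) / 2.0 for lo, hi in bounds]
    half = [(hi - lo) / 2.0 for lo, hi in bounds]
    best_x, best_err = None, float("inf")
    for _ in range(n_iter):
        # sample within the current range, clipped to the global bounds
        x = [min(hi, max(lo, rng.uniform(c - h, c + h)))
             for (lo, hi), c, h in zip(bounds, centre, half)]
        err = objective(x)
        if err < best_err:
            best_x, best_err = x, err
            centre = x                       # recentre on the best solution so far
        half = [h * shrink for h in half]    # collapse the search volume
    return best_x, best_err
```

The recentring step is what turns plain Monte Carlo sampling into a (very simple) genetic algorithm: the population is of size one, and "mutation" amplitude decays geometrically.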

The upper curve in Figure 3 illustrates this process for the case of the sorting index, which controls the width of the grain-size distribution for the suspended sediments in the flow as they enter the model. This shows the progress of inversion for an inflow location in the northwest of the model (see Fig. 1). In the early part of the search, the sorting index fluctuates over the entire user-supplied range. However, as the simulations proceed, the fluctuations become smaller and slowly converge. The lower curve in Figure 3 shows how the rms. error changes during the same set of simulations. The rms. error is initially relatively large but, as improved solutions are found, all model parameters are adjusted to concentrate searching in regions of good solutions and there is, overall, a very gentle decrease in the lowermost rms. error values. After simulation of 200 flows (typically taking 1–10 hours on a laptop PC, depending upon the exact parameter ranges chosen) an excellent match between model and wells is found, as can be seen from the very small rms. error (0.005) and also by direct comparison of the modelled sand fractions to the well values (Table 1). Note that Figure 3 shows the rms. error for each of the 200 simulations but that the final model chosen is the single simulation that happens to have the smallest rms. error. This will not generally be the final iteration (i.e. iteration 200 here) but, because of the gradually narrowing search-range, will tend to be one of the later simulations (in fact, the minimum does occur for the very last simulation here, but that is a coincidence).

The excellent match of model to data is emphasized by Figure 4, which shows the coarse-fraction distribution across the entire modelled area. The superimposed wells are colour-coded using the same scheme as the underlying map and so any data/model discrepancies would show up very clearly. Figure 5 shows a similar map but for the case where parameter-searching was carried out for an inflow assumed to be from the south and along the channel. Note that the match between model and wells is very much poorer, a point emphasized by the much higher final rms. error (0.29). Observation of the 200 individual flows that were tested during the production of Figure 5 shows that the key problem is in getting sand into well C without also supplying sand to well E. Flows large enough to reach C invariably also reach E and, in fact, many of them also flood wells A and B. Moreover, even when sand does get to well C, well D always has the higher coarse-fraction and this contrasts sharply with the true situation in the wells. However, when the inflow comes from the north (i.e. Fig. 4) it is very easy to supply sand to wells C and D without the flows reaching wells A, B or E and, provided the inflow location is chosen appropriately, it is also quite easy to obtain a higher coarse-fraction in well C than in well D. Indeed, Figure 3 shows that there are several different northerly-sourced solutions that produce very low rms. errors (e.g. iteration 182 has rmse=0.0056).

## DISCUSSION

The foregoing modelling confirms the expectation that, for some parameter constraints (e.g. a flow from the south), there are no parameter sets which produce a good match of model to data whilst, for others (e.g. a flow from the north), there are multiple possible solutions. This suggests that a good way to apply such models is to use them to *exclude* possibilities. In addition, even though no single good solution can be extracted, the modelling can be used to draw conclusions about the likely properties of the true solution by looking at features that all good solutions have in common. The unknown, correct solution is likely to share those common properties. Finding properties common to all solutions is a powerful method that has been applied to the modelling of shallow-marine clastic sequence stratigraphy (Waltham *et al.* 2003) and to the very different problem of unravelling the causes of oceanic strontium isotope fluctuations (Waltham & Gröcke 2006). Applying this methodology to turbidite deposition is a much more complex problem, however, and remains the topic of ongoing research.

The specific problem addressed in this paper was unusually poorly constrained. However, the software has been used in a similar fashion for investigating datasets containing more than thirty wells and still produced useful outputs that matched most wells extremely closely. It should be noted that adjacent wells tend to have similar NTG values and so it is not correct to state that 30 wells give 30 independent constraints. As a result, the foregoing analysis holds and parameter searches still separate into those having multiple solutions and those having none. Hence, it is not believed that the algorithm discussed above is useful only in very under-constrained problems.

The algorithm presented here for parameter searching has therefore proved to be extremely robust. It also has the significant advantages of being easy to program (and hence more likely to be error-free) and of being ideal for distributed processing. This search algorithm has been run simultaneously on 20 different PCs and produced excellent results in a small fraction of the time needed by a single PC. Hence, using a PC cluster, it is quite possible to produce good fits of model to data within a few minutes. The search algorithm also appears to be very efficient in that it can find reasonable solutions (when they exist) in as few as 20 iterations. However, this may be a consequence of the high degeneracy in solutions and remains the subject of further research.

In conclusion, a novel combination of structural reconstruction and turbidity current simulation has been presented, together with an effective inversion procedure. It has been demonstrated that these tools can be used to eliminate potential sediment-path scenarios and hence reduce the risk associated with possible further drilling.

## A GENERAL GRAVITY FLOW EQUATIONS

The starting equations are (Acheson 1990):

Conservation of mass for an incompressible fluid:

∂*u*_{i}/∂*x*_{i} = 0 (A1)

where *u*_{i} is velocity in the *x*_{i}-direction and summation over repeated indices is implied.

Conservation of momentum (Cauchy's equations of motion):

∂*u*_{j}/∂*t* + *u*_{i}∂*u*_{j}/∂*x*_{i} = *g*_{j} + *F*_{j}/*ρ* (A2)

where *j* is 1, 2 or 3; *t* is time; *g*_{j} is gravity; *ρ* is density and *F*_{j} is the net stress force per unit volume,

*F*_{j} = ∂*T*_{ij}/∂*x*_{i} (A3)

where *T*_{ij} is stress in the *i*-direction on the plane perpendicular to the *j*-direction.

With depth-averaged equations, only *j*=1 or 2 is of interest, taking *j*=3 to correspond to the vertical direction. This removes the gravity term from the right-hand side of equation (A2). Integrating equations (A1) and (A2) over the flow depth, *H*, and using Leibniz's theorem (Abramowitz & Stegun 1965) then produces

∂*H*/∂*t* + ∂(*H*ū_{i})/∂*x*_{i} = 0 (A4)

and

∂(*H*ū_{j})/∂*t* + ∂(*H*ū_{i}ū_{j})/∂*x*_{i} = *H*F̄_{j}/*ρ* (A5)

where an overbar indicates a depth-averaged quantity (and the depth average of a product is approximated by the product of the averages). Finally, combining equations (A4) and (A5) gives

∂ū_{j}/∂*t* + ū_{i}∂ū_{j}/∂*x*_{i} = F̄_{j}/*ρ* (A6)

which is simply a 2D version of equation (A2). Equations (A4) and (A6) are used to model the evolution of flow thickness and flow velocity, respectively. These expressions are quite general as they make no assumptions concerning rheological properties (e.g. non-zero viscosity or yield strength), flow style (e.g. laminar, turbulent or granular) or about how flow density varies in time and space. These factors enter only via the stress term on the right-hand side of equation (A6).
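As an illustration of how equations (A4) and (A6) advance flow thickness and velocity in time, here is a deliberately simple 1D, centred-difference sketch. It is a toy, not the scheme used in the paper: a production code would need upwinding and a stability-limited time step.

```python
import numpy as np

def step_flow(H, u, F, rho, dx, dt):
    """One explicit update of the 1D depth-averaged equations:
    eq. (A4): dH/dt = -d(Hu)/dx          (flow thickness)
    eq. (A6): du/dt = -u du/dx + F/rho   (flow velocity; F is the stress force)."""
    dH = -np.gradient(H * u, dx)                # flux divergence, eq. (A4)
    du = -u * np.gradient(u, dx) + F / rho      # advection + stress, eq. (A6)
    return H + dt * dH, u + dt * du
```

With a symmetric thickness bump and zero net stress force, one such step conserves total flow volume, which is a useful sanity check on any discretization of (A4).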

## B STRESSES IN A TURBULENT NEWTONIAN FLOW

A Newtonian fluid is defined by the stress relationship (Acheson 1990)

*T*_{ij} = −*p*δ_{ij} + *μ*(∂*u*_{i}/∂*x*_{j} + ∂*u*_{j}/∂*x*_{i}) (B1)

where *p* is pressure, δ_{ij} is the Kronecker delta and *μ* is viscosity. For the turbulent flow case considered in this paper, velocities should be understood as being ensemble averages (i.e. averages over a large number of identical flows) whilst *μ* is the eddy viscosity. For the low-concentration turbidity current case the flow density is nearly constant (i.e. equal to water density, *ρ*_{w}) so that equations (A3) and (B1) yield

F̄_{j} = −∂*p*/∂*x*_{j} + *μ*(∂^{2}ū_{j}/∂*x*_{1}^{2} + ∂^{2}ū_{j}/∂*x*_{2}^{2}) + (τ_{j}^{(t)} − τ_{j}^{(b)})/*H* (B2)

where τ_{j} is horizontal shear stress in the *j*-direction, evaluated at the flow top (t) and base (b).

For a thin flow, in which characteristic horizontal scales are generally much greater than flow thickness, a hydrostatic approximation is appropriate for calculation of the pressure gradient in equation (B2). In addition, for simplicity, it is assumed that flow concentration is constant so that

∂*p*/∂*x*_{j} = Δ*ρ* *g* ∂*h*/∂*x*_{j} (B3)

where Δ*ρ* is the density contrast between the flow and the ambient fluid and *h* is the height of the flow top.

The eddy-viscosity term in equation (B2) is formally equivalent to a velocity-diffusion term and therefore simply smoothes the resulting velocity field. The diffusion coefficient is given by *μ*/*ρ*_{w}, which depends upon the flow speed and could be anisotropic. However, for simplicity, this is treated as a constant modelling parameter chosen to ensure numerical stability.

The horizontal shear stress terms in equation (B2) can be estimated using mixing-length theories (Duncan *et al*. 1960). Full details are given in Waltham (2008), where it is shown that the basal shear stress is given by

τ^{(b)} = *b* *ρ*_{w}[*k*ū/ln(*fH*/*z*_{0})]^{2} (B4)

where *k* is von Kármán's constant (note: the clear-water value of ∼0.4 is unlikely to be greatly in error for the low-concentration flows considered in this paper); *b* is a constant of order one; *f* is the turbulent boundary-layer fractional-thickness and is around 0.05 (Kneller *et al*. 1999); and *z*_{0} is the laminar sub-layer thickness, which can be related directly to the observed roughness of the channel floor since the size of the elements generating channel floor roughness is approximately 30*z*_{0} (Raudkivi 1998). For the 2D (after depth-averaging) flows used here, equation (B4) may be generalized to

τ_{j}^{(b)} = *b* *ρ*_{w}*k*^{2}*V*ū_{j}/[ln(*fH*/*z*_{0})]^{2} (B5)

where *V* is the depth-averaged speed (i.e. *V*^{2}=ū_{1}^{2}+ū_{2}^{2}). Thus, the basal friction is controlled by two constants: *b*, which is assumed to be unity, and *z*_{0}/*f* ∼ 2*r*/3, where *r* is the scale of the seafloor roughness. Note that this approach to calculating basal shear stress is formally equivalent to using a Chezy-type friction law except that the resulting Chezy coefficient has a weak dependency on flow thickness and, more importantly, it is related directly to a quantifiable parameter (i.e. the seafloor roughness).
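A small sketch of this basal-friction closure follows, assuming the mixing-length form described above. The function name and default constants are illustrative; the ∼0.4 von Kármán constant, *f* ≈ 0.05 and *z*_{0}/*f* ∼ 2*r*/3 follow the text.

```python
import math

def basal_shear_stress(u1, u2, H, r, rho_w=1000.0, b=1.0, k=0.4, f=0.05):
    """Basal shear stress components (tau_1, tau_2) for a depth-averaged
    flow of thickness H (m) over a seafloor with roughness scale r (m),
    using the Chezy-like mixing-length form sketched in Appendix B."""
    V = math.hypot(u1, u2)        # depth-averaged speed, V^2 = u1^2 + u2^2
    z0_over_f = 2.0 * r / 3.0     # z0/f ~ 2r/3 since roughness ~ 30*z0
    log_term = math.log(H / z0_over_f)
    coeff = b * rho_w * (k / log_term) ** 2 * V
    return coeff * u1, coeff * u2
```

Doubling the flow speed quadruples the basal stress, recovering Chezy-type behaviour, while the flow thickness and seafloor roughness enter only through the logarithm.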

## C PARTICULATE CURRENT MODIFICATIONS

The preceding algorithm applies to any turbulent, thin, gravity underflow and so, to complete the description, one needs to add processes specifically related to sediment suspension and deposition. The modelling is concerned with deposition from the distal parts of the flow, thus it is assumed that the flow is fully developed in terms of sediment and ambient-fluid entrainment so that sediment deposition is the dominant process.

For sediment to remain in suspension, the turbulence-generated rms. fluctuations in vertical velocity should exceed the fall velocity, *v*_{k} (where *k* runs over all grain diameters) (Raudkivi (1998), but see Leeder *et al*. (2005) for a critique), and laboratory studies (e.g. Bagnold 1966; Kneller *et al*. 1999) show that the rms. fluctuations are of similar magnitude to the shearing velocity, *u*_{*}, given by

*u*_{*} = (τ^{(b)}/*ρ*_{w})^{1/2} (C1)

An entirely equivalent suspension criterion is that the Rouse number should be less than 2.5 (Rouse 1937; Allen 1997) since this also leads to the expectation of a suspension threshold when fall velocity approximately equals the shear velocity. The sedimentation rate, *s*_{k}, is therefore zero for *u*_{*} > *v*_{k} but equal to *c*_{k}*v*_{k} when the flow is stationary (where *c*_{k} is the volumetric concentration of grain-size *k*). The simplest mathematical model consistent with these end-members is

*s*_{k} = *c*_{k}(*v*_{k} − *u*_{*}) for *u*_{*} < *v*_{k} (C2)

*s*_{k} = 0 for *u*_{*} ≥ *v*_{k} (C3)

The still-water fall velocity can be calculated using a large number of different formulae but, for the first-order model described in this paper, Stokes's Law is adequate. Each grain diameter is independently modelled using a modified form of equation (A4) which incorporates sediment loss:

∂*L*_{k}/∂*t* + ∂(*L*_{k}ū_{i})/∂*x*_{i} = −*s*_{k} (C4)

where *L*_{k} is the sediment load associated with grain-size *k*, i.e.

*L*_{k} = *c*_{k}*H* (C5)

Flow thickness is then recalculated using

*H* = (Σ_{k}*L*_{k})/*c* (C6)

where the overall flow concentration, *c*, is assumed to be fixed for consistency with earlier model assumptions.
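The local part of this deposition update, equations (C1)–(C6), can be sketched per grain-size class as follows. The function name, the Stokes's Law fall velocity and all default constants are illustrative assumptions, and the advection term of (C4) is omitted so that only the sediment-loss step is shown.

```python
import numpy as np

def deposition_step(loads, conc, diam, tau_b, dt, c_total=0.02,
                    rho_w=1000.0, rho_s=2650.0, mu=1.0e-3, g=9.81):
    """One local deposition update for the multi-grain-size model.
    loads: L_k, sediment load per grain size (eq. C5);
    conc:  c_k, volumetric concentration per grain size;
    diam:  grain diameters (m); tau_b: basal shear stress (Pa).
    Returns (new loads, deposited thicknesses, new flow thickness)."""
    loads, conc, diam = (np.asarray(a, float) for a in (loads, conc, diam))
    u_star = np.sqrt(tau_b / rho_w)                           # eq. (C1)
    v_k = (rho_s - rho_w) * g * diam ** 2 / (18.0 * mu)       # Stokes's Law
    s_k = np.where(v_k > u_star, conc * (v_k - u_star), 0.0)  # eqs (C2)/(C3)
    new_loads = np.maximum(loads - s_k * dt, 0.0)             # loss term of eq. (C4)
    new_H = new_loads.sum() / c_total                         # eq. (C6)
    return new_loads, loads - new_loads, new_H
```

Under vigorous shear only the coarser classes deposit; once the flow stalls (τ^{(b)} = 0) every class rains out at *c*_{k}*v*_{k}, matching the end-members quoted above.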

## Acknowledgements

Noble Energy (Europe) Ltd is acknowledged for generously allowing publication of the data used in this paper.

© 2008 EAGE/Geological Society of London