Publication History: This article is based on Chapter 9 of "The Log Analysis Handbook" by E. R. Crain, P.Eng., published by Pennwell Books 1986  Updated 2004. This webpage version is the copyrighted intellectual property of the author.

Do not copy or distribute in any form without explicit permission.

PRACTICAL Statistical Models

This page reviews the probabilistic methods used in some commercial software packages. In each case, the analyst must supply analysis parameters appropriate to the lithology and fluid properties in the interval. Logging tool uncertainties are also supplied to the program. Different models may be supplied for different depth intervals. The programs attempt to find a solution, from the input log suite and the user-supplied parameters, that minimizes the error between the reconstructed logs and the original recorded raw logs.


The precise mathematical methods used in these programs are proprietary to the software developers. The most readable description of how they operate is given in "A Practical Approach To Statistical Log Analysis" by W. K. Mitchell and R. J. Nelson, SPWLA, 1988. This reference gives details of the solution equations and FORTRAN code for the difficult parts.

Statistical methods use error minimization or probabilities to solve a set of over-determined equations for the “best” answer. To achieve an over-determined case, constraint equations are often imposed. Computer products such as Schlumberger's GLOBAL and ELAN, Dresser's OPTIMA, and Gearhart's ULTRA are programs of this type.

The steps taken are as follows:
  1. Apply environmental corrections to raw log data.
  2. Calculate shale volume from any appropriate method.
  3. Calculate porosity from any appropriate method.
  4. Calculate invaded zone and un-invaded zone water saturations.
  5. Calculate lithology and mineral fractions.
  6. Calculate theoretical log response based on above answers from appropriate logging tool response equations and their respective uncertainties.
  7. Compare calculated response with actual logs and calculate error statistics (coherence).
  8. Sum these errors and compare to previous iterations.
  9. If the error is at a minimum, go to Step 11.
  10. If the error is not at a minimum, adjust the analysis parameters, re-run the analysis model (Steps 1 through 6), then repeat Steps 7 through 10.
  11. Calculate other results desired with the final shale, porosity, mineral mixture, and saturation values.
  12. Compare all results to ground truth and re-compute with new assumptions if necessary.
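The iteration in Steps 6 through 10 can be sketched as a generic loop. This is only an illustration of the control flow, not any vendor's proprietary solver: `forward_model`, `error_fn`, and `adjust` are hypothetical stand-ins for the tool response equations, the error statistic, and the parameter-update logic.

```python
# Generic sketch of the error-minimization loop (Steps 6-10).
# forward_model, error_fn, and adjust are hypothetical stand-ins for
# the proprietary response equations, error statistic, and
# parameter-update logic of a commercial program.

def minimize(measured, params, forward_model, error_fn, adjust,
             max_iter=100, tol=1e-9):
    best_err = float("inf")
    for _ in range(max_iter):
        reconstructed = forward_model(params)             # Step 6
        err = error_fn(measured, reconstructed)           # Step 7
        if abs(best_err - err) < tol:                     # Step 9: converged
            break
        best_err = min(best_err, err)                     # Step 8
        params = adjust(params, measured, reconstructed)  # Step 10
    return params, err
```

In an interactive program, the `adjust` step is where the analyst intervenes; in a batch program it is automated, which is why a well-chosen starting model matters so much.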

If this method is used in an interactive program, the analyst can be involved in the decision process of Steps 10 and 12. If the computer program contains the decision-making logic, it works best when the analyst has described an appropriate model, and it may fail to give reasonable results when an inappropriate model has been presented.

A sample analysis from such a program for a shale-feldspar-mica-quartz model is shown at the right.



The ELAN model is a popular probabilistic model. The following description is from a Schlumberger petrophysical analysis report.

The ELANPlus analysis is a statistical method designed for quantitative formation evaluation of open-hole logs. The evaluation is done by solving simultaneous equations described by one or more interpretation models. Log measurements, or tools, and response parameters are used together in response equations to compute volumetric results for formation components (minerals and fluids). The following system of equations is built to conduct a volumetric analysis:

   A, B, C, D, E, and F = mineral and fluid volumes to be evaluated
   1, 2, 3, 4, 5, 6, and 7 = tool responses
   α(i,j) = response parameters

Tool Responses: Neutron Porosity, Compressional Slowness (μs/ft), Gamma Ray (gAPI), Bulk Density (g/cc), and Clay CEC.

Typical response parameters for an ELAN analysis.
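The volumetric system can be illustrated with a toy two-component version: each tool response is linear in the component volumes, plus the unity constraint that volumes sum to one. The response parameters below (quartz and water end points for density and neutron) are common textbook values used for illustration only, not ELAN's parameter set, and the solver is a plain least-squares fit through the normal equations.

```python
# Toy version of the ELAN volumetric system: tool responses are linear
# in component volumes, t_i = sum_j alpha(i,j) * V_j, plus the unity
# constraint sum_j V_j = 1.  Two components: quartz matrix and water.
# Response parameters are illustrative textbook values, not ELAN's.

ALPHA = [
    [2.65, 1.00],   # bulk density (g/cc): quartz, water
    [-0.02, 1.00],  # neutron porosity (v/v): quartz, water
    [1.00, 1.00],   # unity constraint: volumes sum to 1
]

def solve_volumes(rhob, nphi):
    """Least-squares solution of the 3x2 system via the normal equations."""
    b = [rhob, nphi, 1.0]
    # Form A^T A (2x2) and A^T b (2x1)
    ata = [[sum(ALPHA[k][i] * ALPHA[k][j] for k in range(3))
            for j in range(2)] for i in range(2)]
    atb = [sum(ALPHA[k][i] * b[k] for k in range(3)) for i in range(2)]
    # Cramer's rule on the 2x2 normal equations
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    v_quartz = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    v_water = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return v_quartz, v_water
```

For a consistent input (rhob = 2.32 g/cc and nphi = 0.184 v/v, an 80/20 quartz-water mix) the solver recovers the volumes exactly; a real program adds more tools, more components, and inequality constraints.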

The above system of equations is solved at every depth level for mineral and fluid volumes. To help optimize the solution, a set of constraints can fix the upper and lower limits on the output volumes. Once the volumes are calculated, the tool responses are reconstructed using the same system of equations. The reconstructed logs are compared against the input data to determine the quality of the volumetric results. The deviation of the reconstructed tools from the true log readings, taking into consideration the uncertainty of each tool, is called the incoherence function. It is this function that the solver tries to minimize to achieve the most probable answer.
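The exact ELAN formulation of the incoherence function is proprietary; a weighted sum of squared deviations is assumed in the sketch below for illustration. The `clamp_volumes` helper is likewise a hypothetical example of how upper/lower bound constraints on output volumes might be applied.

```python
# Sketch of the incoherence test described above.  A sum of squared
# deviations, each scaled by its tool's measurement uncertainty, is
# assumed here; the exact ELAN formulation is proprietary.

def clamp_volumes(volumes, lower, upper):
    """Apply upper/lower bound constraints, then renormalize to unity."""
    clipped = [min(max(v, lo), hi) for v, lo, hi in zip(volumes, lower, upper)]
    total = sum(clipped)
    return [v / total for v in clipped]

def incoherence(measured, reconstructed, uncertainty):
    """Deviation of reconstructed logs from measured logs, scaled by
    each tool's uncertainty; zero means a perfect reconstruction."""
    return sum(((m - r) / u) ** 2
               for m, r, u in zip(measured, reconstructed, uncertainty))
```

Scaling by uncertainty means a noisy tool (say, neutron in bad hole) is allowed a larger mismatch than a precise one before it dominates the objective.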

Tool Responses, with their Uncertainty (error) and Uncertainty Weight: Neutron Porosity (cm3/cm3), Compressional Slowness (μs/ft), Gamma Ray (gAPI), Bulk Density (g/cc), Flushed Zone Resistivity (Ω.m), and Formation True Resistivity (Ω.m).

Typical tool measurement uncertainties used in ELAN to minimize the error in the reconstructed logs.
* = Computed based on water salinity, temperature, and borehole pressure.
** = Depending on hole conditions.

The response parameters (α values) for fluids are calculated at every depth level based on pressure, temperature, and formation water salinity. Oil and gas are assigned constant densities that do not vary with depth.

The fluid volumes are computed based on the Dual Water saturation equation. This equation is based on parallel resistivity modeling of current flow in a porous medium.
      1: Y = Vclay * (1 - Vcbw) * (CECclay * DENSclay) / PHIt

      2: M = Mdw + Kdw * (0.258 * Y + 0.2 * (1 - e^(-16.4 * Y)))
Then solve for SWt from:
      3: COND = (1 / A) * PHIt^M * SWt^N * (((SWt - SWb) / SWt) * CONDw + (CONDbw * SWb / SWt))

  A = Archie fluid factor
  N = saturation exponent
  PHIt = total porosity
  SWt = total water saturation
  SWb = bound water saturation
  CONDw = formation water conductivity (mS/m) - based on temperature and salinity
  CONDbw = clay bound water conductivity (mS/m) - based on temperature, salinity, CEC, and clay density
  M = cementation factor
  Mdw = dual water cementation factor
  Kdw = dual water cementation factor constant
  CECclay = clay cation exchange capacity (meq/cm3)
  DENSclay = clay density (g/cc)
  Vcbw = clay bound water volume

These calculations are performed using the ELANPlus software on the GeoFrame system. The computed total porosity (PHIt) is the summation of oil, water and clay bound water volumes. The computed effective porosity (PHIe) is PHIt without clay bound water.
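Equations 1 through 3 can be coded directly. The sketch below uses simple bisection to invert Equation 3 for SWt, since conductivity rises monotonically with SWt; the default clay and cementation parameters are illustrative assumptions, not recommended values.

```python
import math

def dual_water_m(phit, vclay=0.2, vcbw=0.3, cec=0.5, densclay=2.7,
                 mdw=2.0, kdw=1.0):
    """Eqs. 1 and 2: cementation factor M from the clay term Y.
    Default clay parameters are illustrative assumptions only."""
    y = vclay * (1 - vcbw) * (cec * densclay) / phit
    return mdw + kdw * (0.258 * y + 0.2 * (1 - math.exp(-16.4 * y)))

def dual_water_cond(swt, phit, swb, condw, condbw, a=1.0, n=2.0, **kw):
    """Eq. 3: formation conductivity predicted from total water saturation."""
    m = dual_water_m(phit, **kw)
    return ((1 / a) * phit ** m * swt ** n *
            (((swt - swb) / swt) * condw + condbw * swb / swt))

def dual_water_swt(cond, phit, swb, condw, condbw, **kw):
    """Invert Eq. 3 for SWt by bisection (conductivity rises with SWt)."""
    lo, hi = swb, 1.0
    if dual_water_cond(hi, phit, swb, condw, condbw, **kw) <= cond:
        return 1.0  # measured conductivity implies fully water-saturated
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if dual_water_cond(mid, phit, swb, condw, condbw, **kw) < cond:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A commercial solver would solve Equation 3 jointly with the volumetric system rather than in isolation, but the round trip (forward conductivity, then inversion) shows the mechanics.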


Geologic Analysis via Maximum Likelihood System (GAMLS) is another error-minimizing software package, using multivariate clustering of log data properties to define lithologic flow units. Combined with Matrix and Fluid Balancing with GAMLS (MFBG), the output includes the typical petrophysical properties versus depth: clay volume, porosity, saturation, permeability, and mineral composition. Unlike GLOBAL, GAMLS uses core and XRD data to control the porosity and mineral calculations. The following description was provided by Bob Everett and Eric Eslinger, who jointly developed and use this software in their consulting practice. A more detailed description and a shale gas example can be found in "Petrophysics in Gas Shales" by E. Eslinger and R. V. Everett, 2012, in AAPG Memoir 97, p. 419 - 451.

The model parameters from cored wells are stored for use in un-cored wells.

The fluid balancing guarantees that bound water plus irreducible water plus free porosity equals total porosity. Also, SW is tested and parameters are changed if SW is higher than 1.0. Since porosity and water saturation come from independent sources, this is a good test of both. The mineral balancing is accomplished by reconstructing the neutron, density, sonic, and PE logs from the results. If they do not match the original logs (in good hole condition), the minerals or mineral properties are adjusted and another iteration is run. If core grain density is available, the computed matrix density is also tested to guide the next iteration.
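The fluid-balance and saturation tests can be expressed as a simple pass/fail check. The variable names and the tolerance below are illustrative, not GAMLS conventions.

```python
def fluid_balance_ok(vwb, virr, phi_free, phit, sw, tol=0.01):
    """Consistency tests in the spirit of the fluid balancing above:
    bound water (VWB) + irreducible water (VIRR) + free porosity must
    sum to total porosity (PHIt), and Sw must not exceed 1.0.
    Names and tolerance are illustrative, not GAMLS conventions."""
    return abs((vwb + virr + phi_free) - phit) <= tol and sw <= 1.0
```

A failed check is the trigger to adjust mineral properties or fluid parameters before running the next iteration.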

A number of other software packages, such as Saraband, Coriband, and Elan, also use reconstructed logs as a test in the error-minimizing iteration, but they do not use clustering to aid in deciding what to change for the next iteration.

Here is the processing sequence:

  1.  Cluster
  2.  Assign lithologies to each cluster
  3.  Model to obtain (main goals):
    a.  Porosity
    b.  Sw
    c.  Perm
    d.  Net Pay
  4.  Assign mineralogy to each mode  (= cluster = rock type)
  5.  Compute matrix density and matrix response for NPHI, PEF, GR, DT;  Compute matrix CEC and So
  6.  Determine porosity  (TPOR, VIRR, VWB) & Sw
  7.  Compute porosity response for NPHI, PEF, DT
  8.  Compute bulk rock RHOB, NPHI, PEF, DT, GR for each mode, using the clustering probability
          assignments along with the mean values of RHOB, NPHI to compute the RHOB, NPHI profiles:
          core sample RHOB = Sum (Pi * mean RHOBi)
  9.  Check “balances” (objective functions)
     a.  Sw <=1.0    
     b.  In “tight” rocks  TPOR ~ (VIRR + VWB) and  Ro ~ Rt
     c.  M_GD ~ G_GD
     d.  M_RHOB ~ G_RHOB
     e.  M_NPHI ~ G_NPHI
Add (if whole core data is available):
     f.  Modeled core por, perm, grain density, surface area = Core values
Also, add (if logs are available):
     g.   M_PEF ~ G_PEF,  and M_DT ~ G_DT
  10.  Adjust mineralogy as needed to obtain balances
  11.  Iterate on porosity, permeability, Sw, net pay ….
  12.  Calibrate against core data  (Por, Perm, GD)
        (i.e., adjust inputs as needed to “match”)
  13.  Iterate on porosity, permeability, Sw, net pay ….
  14.  Compute complete profiles for RHOB, NPHI ...
            Model RHOB = (matrix density)(1 - TPOR) + (TPOR)(Sw)(RHOwater)(1 - Xmf)
                       + (TPOR)(1 - Sw)(GOR)(RHOgas)(1 - Xmf)
                       + (TPOR)(1 - Sw)(1 - GOR)(RHOoil)(1 - Xmf)
                       + (TPOR)(RHOfiltrate)(Xmf)
              where Xmf = fractional mud filtrate invasion
  15.  Rw can vary with rock type
  16.  Adjust “invasion factor”  (for NPHI balance)
  17.  Set probability “target”  (eliminates “fuzzy” data points)
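The bulk-density reconstruction in Step 14 can be written directly as a function. The default fluid densities below are illustrative assumptions, and GOR is used as it is in the Step 14 equation, as the gas fraction of the hydrocarbon volume.

```python
def model_rhob(rho_matrix, tpor, sw, gor, xmf,
               rho_water=1.0, rho_gas=0.2, rho_oil=0.8, rho_filtrate=1.0):
    """Bulk-density reconstruction from Step 14 of the GAMLS sequence.
    xmf = fractional mud filtrate invasion; gor = gas fraction of the
    hydrocarbon volume.  Fluid densities are illustrative defaults."""
    return (rho_matrix * (1 - tpor)                              # matrix
            + tpor * sw * rho_water * (1 - xmf)                  # water
            + tpor * (1 - sw) * gor * rho_gas * (1 - xmf)        # gas
            + tpor * (1 - sw) * (1 - gor) * rho_oil * (1 - xmf)  # oil
            + tpor * rho_filtrate * xmf)                         # filtrate
```

Comparing this modeled RHOB against the recorded density log (the M_RHOB ~ G_RHOB balance in Step 9) is what drives the mineralogy and invasion-factor adjustments in Steps 10 and 16.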
Copyright 2022 E. R. Crain, P.Eng. All Rights Reserved