User Guide
CMOST Enhance & Accelerate: Sensitivity Analysis, History Matching, Optimization & Uncertainty Analysis, Version 2013
Computer Modelling Group Ltd.
This publication and the application described in it are furnished under license exclusively to the licensee, for internal use only, and are subject to a confidentiality agreement. They may be used only in accordance with the terms and conditions of that agreement. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic, mechanical, or otherwise, including photocopying, recording, or by any information storage/retrieval system, to any party other than the licensee, without the written permission of Computer Modelling Group.

The information in this publication is believed to be accurate in all respects. However, Computer Modelling Group makes no warranty as to accuracy or suitability, and does not assume responsibility for any consequences resulting from the use thereof. The information contained herein is subject to change without notice.

Copyright 1987-2014 Computer Modelling Group Ltd. All rights reserved.

The license management portion of this program is based on: Reprise License Manager (RLM), Copyright 2006-2014, Reprise Software, Inc. All rights reserved.

™ Trademark of Computer Modelling Group Ltd.
† Other company, product and service names are the properties of their respective owners.

Computer Modelling Group Ltd.
200, 1824 Crowchild Trail N.W. Calgary, Alberta Canada T2M 3Y7
Tel: (403) 531-1300
Fax: (403) 289-8502
E-mail: [email protected]
Contents

1 What's New in CMOST
  1.1 What's New in CMOST 2013.12
    1.1.1 Differential Evolution
    1.1.2 Copy Parameter Data to Other Parameters
    1.1.3 Resolve Reuse Pending for Multiple Experiments
    1.1.4 Input Section Head Nodes
    1.1.5 Editing Master Dataset Using Builder
  1.2 What's New in CMOST 2013.11
    1.2.1 Basic Concepts
    1.2.2 User Interface
    1.2.3 Study Type and Engine
    1.2.4 Creating and Editing Input Data
    1.2.5 Managing Experiments
    1.2.6 Reusing and Restarting
    1.2.7 Proxy Dashboard
    1.2.8 Viewing and Analyzing Results
    1.2.9 Converting Old CMOST Files to New CMOST Files

2 Welcome
  2.1 Introduction
  2.2 What You Need to Use CMOST
    2.2.1 General
    2.2.2 Configuring Launcher and CMOST
    2.2.3 Computers and Licenses
  2.3 About this Manual
  2.4 Getting Help

3 CMOST Overview
  3.1 Introduction
  3.2 What is CMOST?
    3.2.1 Sensitivity Analysis (SA)
    3.2.2 History Matching (HM)
    3.2.3 Optimization (OP)
    3.2.4 Uncertainty Assessment (UA)
  3.3 Generalized CMOST Study Process
  3.4 CMOST Components and Concepts
    3.4.1 Project Components
    3.4.2 Base Files
    3.4.3 File System
    3.4.4 Study Types and Engines
    3.4.5 Study Workflow
  3.5 CMOST Master Dataset (.cmm)
  3.6 CMOST User Interface
  3.7 Best Practices for Using CMOST

4 Getting Started
  4.1 Introduction
  4.2 Opening and Navigating CMOST
  4.3 Opening a CMOST Project
  4.4 Creating a CMOST Project
  4.5 Using the Study Manager
    4.5.1 To Create a New Study
    4.5.2 To View a Study
    4.5.3 To Change the Display Name of a Study
    4.5.4 To Add an Existing Study to the Current Project Session
    4.5.5 To Load/Unload a Study
    4.5.6 To Exclude a Study
    4.5.7 To Import Data from a Study
    4.5.8 To Copy a Study
  4.6 Common Screen Operations and Conventions
    4.6.1 Buttons and Icons
    4.6.2 Plots
    4.6.3 Names
    4.6.4 Required Fields
    4.6.5 Default Field Values
    4.6.6 Tab Display
    4.6.7 Tables
    4.6.8 Validation tab
  4.7 Closing CMOST

5 Creating and Editing Input Data
  5.1 Introduction
  5.2 General Properties
    5.2.1 General Information Area
    5.2.2 Base SR2 Information Area
    5.2.3 Field Data Information Area
    5.2.4 Advanced Settings
  5.3 Fundamental Data
    5.3.1 Original Time Series
    5.3.2 User-Defined Time Series
    5.3.3 Property vs. Distance Series
    5.3.4 Fluid Contact Depth Series
  5.4 Parameterization
    5.4.1 Parameters
    5.4.2 Parameter Correlations
    5.4.3 Hard Constraints
    5.4.4 Pre-Simulation Commands
  5.5 Objective Functions
    5.5.1 Characteristic Date Times
    5.5.2 Basic Simulation Results
    5.5.3 History Match Quality
    5.5.4 Net Present Values
    5.5.5 Advanced Objective Functions
    5.5.6 Global Objective Function Candidates
    5.5.7 Soft Constraints

6 Running and Controlling CMOST
  6.1 Introduction
  6.2 Control Centre
  6.3 Engine Settings
    6.3.1 Introduction
    6.3.2 General Settings
    6.3.3 Engine-Specific Settings
  6.4 Simulation Settings
    6.4.1 Schedulers
    6.4.2 Simulator Settings
    6.4.3 Job Record and File Management
  6.5 Experiments Table
    6.5.1 Navigating the Experiments Table
    6.5.2 Creating Experiments
    6.5.3 Configuring the Experiments Table
    6.5.4 Checking Experiment Quality
    6.5.5 Exporting the Experiment Table to Excel
    6.5.6 Viewing the Simulation Log
    6.5.7 Reprocessing Experiments
  6.6 Proxy Dashboard
    6.6.1 Opening the Proxy Dashboard
    6.6.2 Building a Proxy Model through the Proxy Dashboard
    6.6.3 Interacting with the Proxy Model
    6.6.4 Changing the Proxy Role
  6.7 Simulation Jobs

7 Viewing and Analyzing Results
  7.1 General Information
    7.1.1 Display of Multiple Plots
    7.1.2 Screen Operations
    7.1.3 Navigating the Tree View
  7.2 Parameters
    7.2.1 Run Progress
    7.2.2 Histograms
    7.2.3 Parameter Cross Plots
  7.3 Time Series
    7.3.1 Observers
  7.4 Property vs. Distance
    7.4.1 Observers
  7.5 Objective Functions
    7.5.1 Run Progress
    7.5.2 Histogram
    7.5.3 Objective Function Cross Plots
    7.5.4 OPAAT Analysis
    7.5.5 Proxy Analysis

8 General and Advanced Operations
  8.1 CMM File Editor
    8.1.1 Introduction
    8.1.2 Working with Comments
    8.1.3 Working with Include Files
    8.1.4 Navigation Tools
    8.1.5 Other Functions
    8.1.6 Keyboard Shortcuts
  8.2 Handling Large Files
  8.3 Formula Editor
    8.3.1 Parts of a Formula
    8.3.2 Constants in Formulas
    8.3.3 Functions in Formulas
    8.3.4 Variables in Formulas
    8.3.5 Operators in Formulas
    8.3.6 Formula Calculation Order
    8.3.7 List of Built-in Functions in CMOST
  8.4 Using JScript Expressions in CMOST
    8.4.1 Transferring Data from CMOST to User JScript Code
    8.4.2 Accessing Simulation Job Input and Output Files
    8.4.3 Transferring Data from JScript Code to CMOST
    8.4.4 Starting a New Line in the Dataset

9 Configuring Launcher and CMOST to Work Together
  9.1 Introduction
  9.2 Configuring Launcher
    9.2.1 Launcher
    9.2.2 CMG Job Service
    9.2.3 Use Launcher Embedded Mode for Submitting Jobs
    9.2.4 Use CMG Job Service for Submitting Jobs
    9.2.5 Submitting Jobs to a Remote Computer

10 Troubleshooting
  10.1 Introduction
  10.2 Failed and Abnormal Termination Jobs
  10.3 Exception Reports

11 Theoretical Background
  11.1 Probability Distribution Functions
    11.1.1 Uniform Distribution
    11.1.2 Triangle Distribution
    11.1.3 Truncated Normal Distribution
    11.1.4 Truncated Log Normal Distribution
    11.1.5 Deterministic Distributions
    11.1.6 Custom Distribution
    11.1.7 Discrete Probability Distribution
  11.2 Objective Functions
    11.2.1 History Match Error
    11.2.2 Net Present Value
  11.3 Sampling Methods
    11.3.1 One-Parameter-at-a-Time Sampling
    11.3.2 Latin Hypercube Design
    11.3.3 Classical Experimental Design
    11.3.4 Parameter Correlation
  11.4 Proxy Modeling
    11.4.1 Response Surface Methodology
    11.4.2 Types of Response Surface Models
    11.4.3 Normalized Parameters (Variables)
    11.4.4 Response Surface Model Verification Plot
    11.4.5 Summary of Fit Table
    11.4.6 Analysis of Variance Table
    11.4.7 Effect Screening Using Normalized Parameters
    11.4.8 Linear Model Effect Estimates
    11.4.9 Quadratic Model Effect Estimates
    11.4.10 Reduced Model Effect Estimates
  11.5 Optimizers
    11.5.1 CMG DECE
    11.5.2 Latin Hypercube plus Proxy Optimization
    11.5.3 Particle Swarm Optimization
    11.5.4 Differential Evolution
    11.5.5 Random Brute Force Search

12 Glossary

13 Index
1 What’s New in CMOST
1.1 What's New in CMOST 2013.12

The differences between CMOST 2013.12 and 2013.11 are outlined below:
1.1.1 Differential Evolution

The Differential Evolution (DE) optimization algorithm has been introduced as a new engine for minimizing/maximizing objective functions during history matching and optimization tasks. DE is a powerful global optimization algorithm that was introduced by Storn and Price (1995). It has three control parameters: Scaling Factor (F), Crossover Rate (Cr), and Population Size (NP). See Differential Evolution (DE) for further detail.

1.1.2 Copy Parameter Data to Other Parameters

This feature allows users to copy data from a parameter with a specific source type to another parameter with the same source type, as shown below:

Copy Data From      Copy Data To        Copied Data
Continuous Real     Continuous Real     Data Range Settings, Discrete Sampling, and Prior Distribution Settings
Discrete Real       Discrete Real       Real Value and Prior Probability
Discrete Integer    Discrete Integer    Integer Value and Prior Probability
Discrete Text       Discrete Text       Text Value, Numerical Value, and Prior Probability
Formula             Formula             JScript Code

Refer to Copying Parameter Data for further information.
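The three DE control parameters (F, Cr, and NP) map directly onto the classic DE/rand/1/bin loop. The following minimal Python sketch illustrates the algorithm itself; it is not CMOST code, and the sphere test function, bounds, and control-parameter values are illustrative assumptions:

```python
import numpy as np

def differential_evolution(objective, bounds, NP=20, F=0.8, Cr=0.9,
                           generations=100, seed=42):
    """Minimal DE/rand/1/bin sketch: minimize `objective` within `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T         # per-parameter bounds
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(NP, dim))        # initial population of NP members
    cost = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(NP):
            # Mutation: combine three distinct members using scaling factor F
            a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3,
                                     replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Crossover: mix mutant and target according to crossover rate Cr
            mask = rng.random(dim) < Cr
            mask[rng.integers(dim)] = True           # at least one gene from the mutant
            trial = np.where(mask, mutant, pop[i])
            # Selection: keep the better of trial and target
            f = objective(trial)
            if f < cost[i]:
                pop[i], cost[i] = trial, f
    best = np.argmin(cost)
    return pop[best], cost[best]

# Example: minimize the sphere function over [-5, 5]^3
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        [(-5, 5)] * 3)
```

In a CMOST-style setting, each call to `objective` would correspond to one simulation experiment, so NP and the number of generations together bound the experiment count.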
1.1.3 Resolve Reuse Pending for Multiple Experiments

In the previous version, if new parameters were added, you needed to resolve "reuse pending" experiments by providing parameter values for each selected experiment. In this release, you can select multiple experiments to be resolved and then provide parameter values only once. Refer to Status | Reuse Pending in Experiments Table Columns for further information.
1.1.4 Input Section Head Nodes

Descriptive information has been added to the input section head nodes: Fundamental Data, Parameterization, and Objective Functions. Users can find explanations of the subnodes on these pages. Refer to CMOST User Interface for further information.

1.1.5 Editing Master Dataset Using Builder

The Builder button on the Parameters page can be used to open the master dataset (.cmm) in Builder for parameterizing the dataset.
1.2 What's New in CMOST 2013.11

The differences between CMOST 2013.11 and earlier versions are highlighted in the following sections:
1.2.1 Basic Concepts

A CMOST 2013.11 project consists of a number of studies, which can, for example, be a mix of sensitivity analyses, history matches, optimizations, uncertainty assessments, and user-defined studies, each with its own engine settings. A study contains all of the input information that CMOST needs to run a particular kind of task. Information can be copied between studies. Study types can easily be switched; the new study type will use as much information from the previous study as possible. Studies consist of experiments, and each experiment is based on a distinct set of input parameters and objective functions. Experiment details are stored and tracked in the study's Experiments table, described in Experiments Table.

The resulting hierarchy is:

Project
  Study 1
    Experiment 1.1
    Experiment 1.2
    ... Experiment 1.n
  Study 2
    Experiment 2.1 ...
  ... Study m
    Experiment m.1 ...

NOTE: Users have flexibility in the naming of projects and studies.
1.2.1.1 File System and Folder Structure

At the highest level, a CMOST project is organized as shown in the following example:

Project name: SAGD_2D_UA
Project folder: SAGD_2D_UA.cmpd
Project file: SAGD_2D_UA.cmp
Best practice: All files related to the project should be stored in the project folder.

The files in the project folder are as shown in the following example:

Study name: BoxBen
Study folder: BoxBen.cmsd
Study file: BoxBen.cms
Study file auto-backup: BoxBen.bak
Warning: Do not modify or delete files in the study folder unless you understand the ramifications.

The new CMOST master dataset (CMM), base dataset, and base SR2 files are stored in the project folder.

If there is an error during the run, CMOST will try to save the study file to a .bak file. The .bak file is the last valid file, and it has the same format as a study file.
The files in a study folder are as shown in the following example:

Vector Data Repository files: *.vdr
Note: VDR files store compressed simulation data required for objective function calculations.
Warning: Do not modify or delete VDR files manually.

VDR files contain compressed simulation data that is used to calculate objective functions. The files are compressed to reduce disk space and runtime.

1.2.1.2 Study Data Model

CMOST information is hierarchically organized into different pages, with related information grouped together on each page. Pages are accessed through the tree view. Note the following:

- While the engine is running, input data is "read only".
- Some data can be changed during the run; for example, experiments can be added.
- Once available, results can be viewed while runs are in progress.
1.2.2 User Interface

The CMOST 2013 user interface is significantly different from previous versions. The main screen has a Study Manager tab through which you can create, add, load and unload, exclude, import, and copy studies. For further information, refer to Using the Study Manager.
In addition to the Study Manager tab, the main CMOST project screen contains study tabs, each of which has a tree view and, based on the type of node selected, a configuration, status, or results page. Tree view nodes are tagged to indicate errors and warnings. The main screen can be organized to accommodate the workflow and for presentation purposes. For further information, refer to Getting Started.

As mentioned above, if study-setting errors or issues are identified, they are highlighted in the associated tree node, and information about the error or warning is presented in color-coded messages in the Validation tab at the bottom of the study tabs. For further information, refer to Validation tab.
1.2.3 Study Type and Engine

The study types and engines available for CMOST 2013.11 are:

  SA:           Response Surface Methodology; One Parameter At A Time
  UA:           Monte Carlo Simulation using Proxy; Monte Carlo using Reservoir Simulator
  HM & OP:      DE (2013.12); DECE; LHD Plus Proxy; PSO; Random Brute Force
  User Defined: Manual Engine; External Engine

The study type and engine are specified through the New Study dialog box or through the Engine Settings page. The engine settings are configured through the Engine Settings page.

1.2.3.1 Features Available for All Engines

The following Experiments Management settings are available for all engines:

- Number of failed jobs to exclude an experiment.
- Number of perturbation experiments for each abnormal experiment. These experiments will appear in the Experiments table as Perturbed experiment.
Using any engine (SA, HM, OP, UA and User Defined), you can use both continuous and discrete parameters in the same study.
1.2.3.2 New Sensitivity Analysis (SA) Workflow

The SA workflow has changed. It now comprises three steps: define input; select engine (Response Surface Methodology (RSM) or One Parameter At A Time (OPAAT)); then results and analysis. When redefining SA inputs, the previous settings are used as the starting point.

The advantages of the new SA workflow are as follows:

- Flexibility in the order in which the steps are carried out.
- Parameters and objective functions can be added and the SA study rerun.
- Problematic job submissions are handled rationally and constructively, to obtain reliable results.

Using the RSM engine, you can specify:

Desired accuracy, based on which the engine will create and run the necessary experiments.
1.2.3.3 New Uncertainty Assessment (UA) Workflow

The UA workflow has changed. It now comprises three steps: define input; select engine (Monte Carlo using Proxy or Monte Carlo using Reservoir Simulator); then results and analysis. When redefining UA inputs, the previous settings are used as the starting point.

The advantages of the new UA workflow are as follows:

- Flexibility in the order in which the steps are carried out.
- Parameters and objective functions can be added and the UA study rerun.
- Problematic jobs are handled rationally and constructively to obtain reliable results.
- Parameter correlations can be defined. Some parameters, based on their petrophysical meaning, are correlated with each other; for example, permeability and porosity. This correlation can be measured through other means, such as lab experiments. The user can enter the desired parameter rank correlations through the Parameter Correlations page. CMOST algorithmically adjusts the rank correlation of the Monte Carlo-generated sets of parameters so they honour the desired rank correlation settings. For further information, refer to Parameter Correlation.

When using the MCS-Proxy engine to perform a UA, you can specify:
Desired accuracy, based on which the engine will create and run the necessary experiments.
When using an MCS-Simulator engine to perform a UA, note the following: Engine performs a predefined number, set by the user, of Monte Carlo simulations, using the simulator.
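The rank-correlation adjustment described above targets the rank correlation between sampled parameter values. As a point of reference only (CMOST's internal adjustment algorithm is not shown here), the following Python sketch computes the Spearman rank correlation that the adjusted samples are meant to honour; the sample values are illustrative:

```python
# Illustrative sketch: measuring rank correlation between two sampled
# parameters (e.g. porosity and permeability). The sample values below
# are assumptions, not CMOST output.

def ranks(values):
    """Return the 1-based rank of each value (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation for samples without ties."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

porosity = [0.10, 0.15, 0.20, 0.25, 0.30]
permeability = [50, 120, 300, 800, 2000]   # monotone with porosity
print(spearman(porosity, permeability))    # prints 1.0 (perfect rank correlation)
```

A rank correlation of 1.0 means the two parameters always move together in rank order; CMOST's adjustment reorders Monte Carlo samples so that the measured value matches the user's setting.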
● Use the MCS-Simulator method if:
  - You want to validate an MCS-Proxy result.
  - Building a proxy is not feasible; for example, when multiple geostatistical realizations or history-matched models are used.

1.2.3.4 Changes to HM and OP Algorithms
HM and OP support the following engines: DE (2013.12), DECE, LHD Plus Proxy, PSO, and Random Brute Force.
HM and OP engines support the following optimization settings:
● The global objective function that the user wants to optimize, as defined in the Global Objective Functions page.
● Whether to maximize or minimize the objective function.
All HM/OP engines use the same stop criterion.
The following important change has been made to the DECE optimizer: for continuous parameters, DECE now uses the user-specified Total Number of Experiments as the stop criterion. For discrete parameters, the total number of parameter combinations takes priority.
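The stop-criterion behaviour described above can be sketched as follows. This is an assumed simplification, not CMOST source code; the function name and signature are illustrative:

```python
# Assumed sketch of the stop criterion: for continuous parameters the
# user-specified total number of experiments stops the run; for discrete
# parameters the number of parameter combinations takes priority when smaller.

def max_experiments(total_requested, discrete_levels):
    """discrete_levels: level counts per discrete parameter
    (empty list if all parameters are continuous)."""
    if not discrete_levels:
        return total_requested
    combos = 1
    for n in discrete_levels:
        combos *= n
    return min(total_requested, combos)

print(max_experiments(200, []))      # continuous only: prints 200
print(max_experiments(200, [3, 4]))  # 3*4 = 12 combinations: prints 12
```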
The following important changes have been made to the proxy optimizer:
● Ability to handle continuous parameters together with discrete parameters.
● For continuous parameters, the optimizer uses the user-specified Total Number of Experiments as the stop criterion. For discrete parameters, the total number of parameter combinations takes priority.
The following important changes have been made to the PSO optimizer:
● Ability to handle continuous parameters together with discrete parameters.
● For continuous parameters, the optimizer uses the user-specified Total Number of Experiments as the stop criterion. For discrete parameters, the total number of parameter combinations takes priority.
● PSO can make use of the results of all previous experiments, which helps PSO converge to the optimal solution more quickly.
The following options are available with the DE optimizer (2013.12):
● Ability to handle continuous parameters together with discrete parameters.
● For continuous parameters, DE uses the user-specified Total Number of Experiments as the stop criterion. For discrete parameters, the total number of parameter combinations takes priority.
● DE can make use of the results of all previous experiments, which helps DE converge to the optimal solution more quickly.

1.2.3.5 User-Defined Study Type
CMOST 2013 supports user-defined study types with two engines:
● Manual Engine: No automatic creation of experiments. All experiments are created explicitly by the user through classical experimental design, Latin hypercube design, or manual creation.
● External Engine: Allows use of the user's own optimization algorithm.
You may use the manual engine:
● To use classical experimental design for SA and UA.
● To have precise control of the number of Latin hypercube experiments.
● To run additional experiments after an SA/UA/HM/OP run is complete.
Use the external engine to implement your own optimization algorithms. Refer to External Engine for further information.
1.2.4 Creating and Editing Input Data

1.2.4.1 Field Data Management
Field history file (FHF) and well log data need to be imported before they can be used by CMOST. Once imported, the data is stored internally, and its use in defining the HM Error objective function is handled automatically; you do not need to track which file contains which type of data. If further changes are made to the original FHF or well log files, you will need to click the Reload button in the General Properties tab to merge the changes. This allows weights to be preserved during the reload. If you want to reset the weights, clear all imported data and then reload.

1.2.4.2 Change in Use of Special Property
A property name is now required when the origin type is SPECIALS. This change is needed to support synthetic (SPECIALS) properties.

1.2.4.3 Parameter Definition
Continuous Parameters
In the case of continuous parameters, you can define:
● Parameter lower and upper limits, which set the sampling range used by study engines to create experiments.
● Number of discrete levels, which are used by some engines to generate initial screening experiments.
● Parameter prior distribution, used only by Monte Carlo simulation (either proxy or simulator).
● Auto synchronization between prior distribution and data range settings. If set to True, changes in prior distribution settings will automatically be reflected in the data range settings, and vice versa. If set to False, changes in one will not be reflected in the other.
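As an illustration of how the sampling range and the number of discrete levels relate, the following sketch derives equally spaced screening levels from a parameter's lower and upper limits. The actual level-placement scheme used by each engine may differ; the function and values below are illustrative:

```python
# Illustrative only: equally spaced screening levels between a parameter's
# lower and upper limits. Engines may place levels differently.

def screening_levels(low, high, n_levels):
    if n_levels < 2:
        return [low]
    step = (high - low) / (n_levels - 1)
    return [low + i * step for i in range(n_levels)]

# Five porosity levels across the sampling range [0.10, 0.30]:
print(screening_levels(0.10, 0.30, 5))  # five equally spaced values from 0.10 to 0.30
```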
Discrete Parameters
In the case of discrete parameters, you can:
● Define whether the discrete parameters are real, integer, or text.
● Insert the desired number of discrete values in the parameter values table.
● Enter prior probabilities for each parameter value (required only for UA).
● For each discrete text value, enter a corresponding numerical value. This is needed because all of the algorithms used by the CMOST optimizers work only with numerical values.
For further information, refer to Parameters.
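The text-to-number requirement can be illustrated as follows: the optimizer proposes numerical values, while the corresponding text is what gets substituted into the dataset. The parameter name and values below are hypothetical:

```python
# Hypothetical mapping for a text-valued discrete parameter: the optimizer
# works with the numerical values, while the text values are what end up
# substituted into the dataset. Names and values are illustrative.

geo_realization = {      # text value -> companion numerical value
    "porLow.inc": 1,
    "porMid.inc": 2,
    "porHigh.inc": 3,
}
numeric_to_text = {v: k for k, v in geo_realization.items()}

chosen = 2                       # numerical value proposed by an optimizer
print(numeric_to_text[chosen])   # prints porMid.inc (text substituted into dataset)
```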
1.2.4.4 Characteristic Date Times
The use of characteristic date times makes defining objective functions easier and less prone to error. There are three types of characteristic date times:
● Built-in fixed date times, which are automatically input from the base dataset; for example, base case start and stop.
● Fixed date times, which are dates defined by the user.
● Dynamic date times, which are dates based on the value of the data in an original or user-defined time series; for example, the date on which the cumulative oil produced by a certain well, or a group of wells, exceeds a certain value.
Refer to Characteristic Date Times for further information.
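The dynamic date time in the example above can be sketched as follows. This is an illustration only; the function name and the data are assumptions, not CMOST output:

```python
# Illustrative sketch of a "dynamic date time": the first date on which a
# cumulative time series exceeds a threshold. Data values are assumptions.

from datetime import date

def first_date_exceeding(series, threshold):
    """series: list of (date, cumulative value) pairs in time order."""
    for d, value in series:
        if value > threshold:
            return d
    return None  # threshold never reached

cum_oil = [(date(2013, 1, 1), 0.0),
           (date(2013, 6, 1), 4.0e4),
           (date(2014, 1, 1), 1.2e5),
           (date(2014, 6, 1), 2.5e5)]
print(first_date_exceeding(cum_oil, 1.0e5))  # prints 2014-01-01
```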
1.2.4.5 User-Defined Time Series
You can define a time series, calculated from available SR2 time series, and use it to calculate the value of an objective function. Once you have defined the time series, you can compare it graphically with field data (if available). Refer to User-Defined Time Series for further information.
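As an illustration of a user-defined time series, the following sketch derives a cumulative steam-oil ratio from two assumed cumulative series. The series names and values are illustrative, not actual SR2 output:

```python
# Illustrative sketch: deriving a user-defined time series (a cumulative
# steam-oil ratio) from two available cumulative series. Names and values
# are assumptions, not SR2 data.

def derived_sor(cum_steam, cum_oil):
    """Element-wise ratio; points with zero oil are returned as None."""
    return [s / o if o else None for s, o in zip(cum_steam, cum_oil)]

cum_steam = [0.0, 900.0, 2100.0, 3600.0]
cum_oil = [0.0, 300.0, 700.0, 1200.0]
print(derived_sor(cum_steam, cum_oil))  # prints [None, 3.0, 3.0, 3.0]
```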
1.2.5 Managing Experiments
You can manage experiments through Control Centre | Experiments Table. Experiment settings and status are displayed in the Experiments Table. For further information, refer to Experiments Table.

1.2.5.1 Experiment Status and Result Status
Once the engine is started, the status of the experiments will change, as described in Experiment Status.

1.2.5.2 Experiment Filter
If you have a large number of experiments, you can use the Experiment Filter to view or export experiments of interest. Filters affect only the Experiments Table view; they do not affect the data contained in the table, i.e., experiments that are filtered and hidden from view still exist in the table. Refer to Experiment Filter for further information.

1.2.5.3 Base Case Experiment
By default, the base case experiment, defined by ID=0, is listed first in the table. It uses default parameter values, but these can be changed at any time: select the base case experiment, then select Edit from the context menu to open and edit the Experiment Parameter Values dialog box.
1.2.6 Reusing and Restarting
After you finish running an engine, you can go back and change the input data. Experiments inside the study will automatically be reused.
If new parameters are added, you will need to resolve "reuse pending" experiments as outlined in Resolve Reuse Pending. You can reuse data from other studies, as long as they are in the same project. Refer to To Import Data from a Study for further information. After you finish the changes, click the Start button in the Control Centre page to restart the engine.
1.2.7 Proxy Dashboard
A Proxy Dashboard is provided so that users can more easily assess the fit of generated proxy models with simulation results. Through the Proxy Dashboard, you can:
● Use preliminary proxy models to begin predicting reservoir behavior.
● Investigate the effect of varying input parameter values (by entering a what-if scenario), thereby improving your understanding of the reservoir and of how proxy modeling works.
● Define and add training or verification experiments to the study.
● Switch between and compare different proxy models.
1.2.8 Viewing and Analyzing Results
Results objects are created dynamically using results data stored in the Experiments Table. Results objects vary with study type; for example, HM and OP studies will automatically have sensitivity and proxy results if enough experiments have been completed.

1.2.9 Converting Old CMOST Files to New CMOST Files
You can convert CMOST task (CMT) and results (CMR) files to a CMOST .cmp project file. To convert a CMT file, note the following:
● The base case and corresponding SR2 files must exist; if not, you will need to open the old CMT file and select a base case for it. In old CMOST, the base case was an optional field and may not have been entered; new CMOST requires this file, so you will have to provide it.
● Objective functions that use raw simulation time series objective terms cannot be converted.
● Formulas need to be checked manually to make sure they are correct after the conversion.
To convert a CMR file, note the following:
● CMOST will first look for a CMT file with the same name within the same folder as the CMR file and convert the CMT file first.
● If a CMT file is not found, CMOST will create a CMT file based on the CMR file and then convert it to a project file. CMOST will then import the results from the CMR file into the newly created CMOST study file.
● Note that imported experiments can be used to build proxies in the Proxy Dashboard only if the corresponding time series data was stored in the CMR file, i.e., if observers were defined.
The procedure for converting old CMOST task files to new CMOST project and study files is outlined in the following example, where we convert SAGD_2D_HM.cmt (and related files) to the new CMOST files:
1. Your starting CMOST files will be organized as shown in the example.
2. Open the CMOST application. The main CMOST screen is displayed.
3. In the menu bar, select File | Convert CMT/CMR File. A Windows Explorer dialog box opens.
4. Browse to and select the CMOST task (CMT) file and then click Open. The task and results files will be converted to the new CMOST files and folders: a new CMOST project folder, a new CMOST project file, and a copy of the new CMOST master file.
The CMOST project folder contains copies of the base dataset files, and the following CMOST files: a new CMOST study folder (initially empty; as experiments are run, VDR files will be saved to this folder), a new CMOST back-up file, and a new CMOST study file.
5. The converted project file will open in CMOST. You can browse the tree view to verify that the following data and settings have been imported:
   ● General Properties: Master dataset, base dataset, base session file, and field history file have been copied into the project and their details recorded in this page. No changes necessary.
   ● Fundamental Data | Original Time Series: In our example, there is an error on this node because data with the SPECIALS origin type now requires the property to be named. To clear this error, select Steam-oil ratio: SOR (Injector)/(PRODUCER) CUM in the Property column.
   ● Parameterization | Parameters: Parameters POR, PERMH, PERMV, HTSORW, and HTSORG have been imported from the old CMOST files. No changes necessary.
   ● Objective Functions | Characteristic Date Times: BaseCaseStart and BaseCaseStop have been generated automatically. No changes necessary.
   ● Objective Functions | History Match Quality: HM errors and original time series terms have been imported from the old CMOST files. No changes necessary.
   ● Objective Functions | Global Objective Function Candidates: Global objective function candidate GlobalHmError has been imported from the old CMOST files. No changes necessary.
   ● Control Centre | Engine Settings: Settings have been imported from the old CMOST files. No changes necessary.
   ● Control Centre | Simulation Settings: There is a warning. Click the active check box beside the Local scheduler name to clear this warning.
   ● Control Centre | Experiments Table: No experiments have been defined. In our example, a set of CMG DECE experiments will automatically be defined when you start the CMOST engine.
6. In the Control Centre node, click the Start button to start the history match. You can monitor the progress of the run in the Experiments Table. Refer to Experiments Table for more information.
What’s New in CMOST
13
7. Once the run is complete, you can view the results through the Results & Analysis node. For further information, refer to Viewing and Analyzing Results.

Important Information about the CMT Convertor
Some elements of previous CMOST task or result files cannot be converted:
1. If an objective function contains a time series type Raw Simulation Result Objective Term, consider using a user-defined time series objective function.
2. If multiple types (categories) of local objective functions are used to calculate the global objective function, the weight factor of each local objective function may not be converted automatically. We recommend you check the definition of the converted global objective function.
3. If a local objective function uses objective terms with a Conversion Factor, you will need to revise the formula of the converted local objective function because conversion factors for the terms are not automatically converted.
2 Welcome
2.1 Introduction
This user guide provides information on how to use CMOST. A basic knowledge of other CMG simulation and visualization products is recommended.
2.2 What You Need to Use CMOST
2.2.1 General
To make best use of CMOST, you should develop a good understanding of the CMG reservoir model that you are working with; in particular, you should understand the parameters that need to be adjusted, and the impact of making these adjustments. You should also have a clearly defined project goal.
2.2.2 Configuring Launcher and CMOST
CMOST relies on either CMG Launcher or the CMG Job Service to run jobs. Before CMOST can be used, you will need to configure Launcher and the CMG Job Service to work properly. Refer to Configuring Launcher and CMOST to Work Together for further information.
2.2.3 Computers and Licenses
CMOST can make full use of available computers and licenses. Once a job has been created by CMOST, it will automatically submit simulation jobs and check their status periodically. Once simulations have completed, CMOST will automatically process the results required for the CMOST study.
2.3 About this Manual
This manual is designed to quickly get existing CMOST users up to speed, and to help new users get started. The route you take through the manual will depend on your familiarity with CMOST. The key features and elements of the manual are summarized below:
● Important Information for Existing Users provides information to help existing CMOST users quickly get up to speed with the new version. New users should skip this chapter.
● CMOST Overview is intended to provide new users with a high-level overview of CMOST.
● Getting Started provides an introduction to the new version of the CMOST application; in particular, it shows users how to:
  - Open the CMOST application and navigate the user interface.
  - Open an existing project and create a new one.
  - Use the CMOST Study Manager to create, view, rename, add, load/unload, exclude, import, and copy studies.
  - Use common user-interface operations.
  - Close the application.
  As well, the chapter provides an overview of CMOST task processes and links to more detailed information.
● The organization of the central chapters parallels the organization of the CMOST user interface tree view which, in turn, parallels the order in which users will configure, run, and analyze CMOST studies:
  - Creating and Editing Input Data
  - Running and Controlling CMOST
  - Viewing and Analyzing Results
● General operations, applicable across CMOST processes, are described in General Operations.
● Information about getting CMOST and Launcher to work together at your facility is described in Configuring Launcher and CMOST to Work Together.
● Troubleshooting provides directions for resolving common CMOST problems that you may experience. If you are having problems, try to resolve them or get as much information as you can before contacting CMG Support.
● Theoretical information is provided in Theoretical Background. Where appropriate, links to this information are provided from other chapters.
● Hyperlinks are provided throughout the manual to help you navigate the content.
● A table of contents and index are provided to help you quickly find and link to information.
● A glossary of terms is provided to help new users understand CMOST terminology.
To report errors in the manual or to provide suggestions for improvement, please contact CMG Support, as outlined in Getting Help.
2.4 Getting Help
If you need help with the CMOST application or with this manual, please contact CMG Support using the contact information provided on our Web site at www.cmgl.ca.
3 CMOST Overview
3.1 Introduction
This chapter provides introductory information about the following:
● What CMOST is, and what it is used for
● General CMOST process and workflow
● CMOST inputs
● CMOST concepts and components
● CMOST user interface
● Recommended practices for using CMOST effectively
The level of material presented in this chapter assumes the reader is familiar with CMG simulators and datasets but may not be familiar with CMOST.
3.2 What is CMOST?
CMOST is a CMG application that works in conjunction with CMG reservoir simulators to perform sensitivity analyses, history matches, optimizations, and uncertainty assessments.
3.2.1 Sensitivity Analysis (SA)
Sensitivity analyses are used to determine the variation of simulation results under different values of the input parameters (reservoir properties, for example) and to identify which parameters have the greatest effect on user-defined objective functions, such as history match error. Sensitivity analyses use a limited number of simulation runs to determine the parameters that should be varied in subsequent studies, and over what range. This information is then used to design history matching or optimization studies, for example, which require a greater number of simulation runs.
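The one-parameter-at-a-time idea behind the OPAAT engine mentioned elsewhere in this guide can be sketched as follows. This is an assumed simplification, not CMOST source code; the parameter names and levels are illustrative:

```python
# Assumed sketch of one-parameter-at-a-time experiment generation: each
# experiment varies a single parameter away from the base case while all
# other parameters stay at their base values.

def opaat_experiments(base, levels):
    """base: {name: base value}; levels: {name: [values to try]}."""
    experiments = []
    for name, values in levels.items():
        for v in values:
            if v == base[name]:
                continue  # the base case itself is run once, separately
            exp = dict(base)
            exp[name] = v
            experiments.append(exp)
    return experiments

base = {"POR": 0.2, "PERMH": 5000}
levels = {"POR": [0.1, 0.2, 0.3], "PERMH": [2000, 5000, 8000]}
print(len(opaat_experiments(base, levels)))  # prints 4
```

Each of the four generated experiments differs from the base case in exactly one parameter, which is what makes the resulting effect on an objective function attributable to that parameter.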
3.2.2 History Matching (HM)
History matching provides an effective way to match simulation results with production history data. Using CMOST, users create and run experiments using a version of the base dataset which has embedded instructions that tell CMOST where to substitute parameter values. As simulation jobs complete, CMOST analyzes the results to determine how well they match production history. An optimizer then determines parameter values for new simulation jobs. As more simulation jobs complete, the results converge to an optimal solution, which should provide a satisfactory history match if user-specified parameters and parameter ranges have been appropriately defined.

3.2.3 Optimization (OP)
Optimization studies are used to produce optimal field development plans and operating conditions that will produce either a maximum or minimum value for objective functions. These objective functions can be physical quantities such as cumulative oil produced, recovery factor, and cumulative steam-oil ratio. CMOST also allows monetary values to be assigned to these physical quantities, so optimizations can be carried out using net present value calculations.

3.2.4 Uncertainty Assessment (UA)
Uncertainty assessments are used to determine the variation in simulation results due to residual uncertainty: the uncertainty that remains after history matching and optimization, usually about the value of some reservoir variables. Uncertainty assessment involves the following:
1. Use available simulation results to develop a response surface (RS) for each objective function of interest (such as NPV, CSOR, and cumulative oil production) with respect to each of the uncertain variables (e.g. porosity, permeability, endpoint saturations, and oil viscosity).
2. Using the response surface, conduct a Monte Carlo simulation to select large numbers (tens of thousands) of variable value combinations and determine the value of the objective functions for each combination.
The results of uncertainty assessments are probability and cumulative density functions for each objective function. In addition to Monte Carlo simulation results, effect estimates and response surface results are available from uncertainty assessment studies so that sensitivity information can be obtained. Detailed response surface statistics provide valuable information about the suitability of using the response surface model as the proxy for the reservoir model.
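Step 2 above can be sketched as follows. The response surface here is a toy stand-in for a fitted proxy, and all names, coefficients, and sampling ranges are illustrative, not CMOST's actual proxy:

```python
# Hedged sketch of Monte Carlo simulation on a response-surface proxy:
# draw many parameter combinations, evaluate the proxy for each, and read
# percentiles off the resulting distribution. The quadratic "proxy" below
# is a toy stand-in, not a real fitted model.

import random

def proxy_npv(porosity, perm):
    # toy quadratic response surface standing in for a fitted proxy
    return 10.0 * porosity + 0.001 * perm - 5.0 * (porosity - 0.2) ** 2

random.seed(42)  # reproducible illustration
samples = []
for _ in range(10000):
    por = random.uniform(0.1, 0.3)         # stand-in for sampling the priors
    perm = random.uniform(1000.0, 9000.0)
    samples.append(proxy_npv(por, perm))

samples.sort()
p10 = samples[int(0.10 * len(samples))]
p50 = samples[int(0.50 * len(samples))]
p90 = samples[int(0.90 * len(samples))]
print(f"P10={p10:.2f}  P50={p50:.2f}  P90={p90:.2f}")
```

Because each proxy evaluation is cheap, tens of thousands of combinations can be evaluated in seconds, which is what makes the probability and cumulative density functions described above practical to compute.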
3.3 Generalized CMOST Study Process
The generalized CMOST study process is illustrated in the following diagram:

Start (or enter from another study) → Define and Select Parameter Values → Substitute Parameter Values into Simulation Dataset → Run Simulation → Analyze Results → Use results as the basis for decisions or as input for other studies
The above diagram shows how CMOST supports the definition of parameters and parameter ranges which are, in turn, used to define study experiments. The experiments are run through the simulator. The simulation results, which will vary depending on the study type, are analyzed and used, as necessary, to:
● Define further experiments
● Form the basis for subsequent studies
● Formulate production plans
3.4 CMOST Components and Concepts
3.4.1 Project Components
The hierarchy of CMOST project components is as follows: a project contains studies (Study 1, Study 2, …, Study m), and each study contains experiments (Experiment 1.1, Experiment 1.2, …, Experiment 1.n, and so on).
A CMOST project consists of a set of studies, which can, in turn, be a mix of sensitivity analyses, history matches, optimizations, uncertainty assessments, and user-defined studies. Studies are defined by the configuration of:
● Type and source of the input data required for the study.
● CMOST engine used to process the input data.
● Type of output data produced by the study.
Configurations can be copied from one study to another. Study type can be changed, in which case the new study type will reuse as much of the information from the original study as possible. Studies consist of experiments, each of which is defined by a distinct set of input parameters and output objective functions.
3.4.2 Base Files
Each CMOST study starts with a previously completed (base) simulation dataset. CMOST needs access to files from this base simulation dataset. It may also require or make use of other files, such as production history files in the case of history matching studies. The input files used by CMOST are described below.
3.4.2.1 Base Dataset
A base dataset must be available before you configure and then run CMOST experiments. The base dataset can be any valid dataset for any CMG simulator. The base dataset is used to create the base SR2 (simulation results) files. The base dataset is also used to create the CMOST master dataset.

3.4.2.2 Base SR2 Files
The base IRF (.irf, Indexed Results File) provides CMOST with basic information about the dataset, such as the simulator type used, well lists, and simulator start and end dates. The base MRF (.mrf, Main Results File) is the binary file that contains the simulation data. The base SR2 files are required components. From the base SR2 files, CMOST can obtain and display observers (simulation outputs) and calculate base case objective functions (expressions or quantities that the user wants to minimize or maximize). By CMOST convention, the base case is defined as the experiment with ID=0.

3.4.2.3 Base Session File
A base session file can be used by CMOST but is not required. The base session file is created by CMG Results™ Graph using the base SR2 files. CMOST uses the base session file to quickly display plots in Results Graph using results of simulation runs generated by CMOST experiments.

3.4.2.4 Production History Files
Production history files, such as field history or well log files, are required for history matching. The files that are needed will depend on the type of history match being performed.
3.4.3 File System
At the highest level, a CMOST project folder is organized as shown in the following example, for project SAGD_2D_UA:
● Project name: SAGD_2D_UA
● Project folder: SAGD_2D_UA.cmpd
● Project file: SAGD_2D_UA.cmp
Best practice: All files related to the project should be stored in the project folder.
The files in the project folder are as shown in the following example:
● Study name: BoxBen
● Study folder: BoxBen.cmsd
● Study file: BoxBen.cms
● Study file auto-backup: BoxBen.bak
The new CMOST master dataset (CMM), base dataset, and base SR2 files are stored in the project folder.
Warning: Do not modify or delete files in the study folder unless you understand the ramifications.

NOTE: If there is an error during a run, CMOST will try to save the study file to a .bak file. The .bak file is the last valid file, and it has the same format as a study file.

The study folder stores Vector Data Repository files (*.vdr).

NOTE: VDR files store compressed simulation data that is used to calculate objective functions. The files are compressed to reduce disk space and runtime. Do not modify or delete VDR files manually.
3.4.4 Study Types and Engines
The study types and engines available with CMOST are as follows:
● SA: Response Surface Methodology; One Parameter At A Time (OPAAT)
● UA: Monte Carlo Simulation using Proxy; Monte Carlo using Reservoir Simulator
● HM & OP: DE; DECE; LHD Plus Proxy; PSO; Random Brute Force
● User Defined: Manual Engine; External Engine
Information about the above engines can be found in Theoretical Background and directions for configuring them can be found in Engine Settings.
3.4.5 Study Workflow
The study workflow varies depending on study type and the selected engine. For example, the simplified sensitivity analysis workflow is:

Define Input → Select Engine (Response Surface Methodology (RSM) or One Parameter At A Time (OPAAT)) → Run Multiple Simulations → Results and Analysis

NOTE: When redefining SA inputs, the previous settings will be used as the starting point.
and the simplified uncertainty assessment workflow is:

Define Input → Select Engine (Monte Carlo Using Proxy or Monte Carlo Using Reservoir Simulator) → Run Multiple Simulations → Results and Analysis

NOTE: When redefining UA inputs, the previous settings will be used as the starting point.
A more detailed general workflow can be found in CMOST User Interface.
3.5 CMOST Master Dataset (.cmm)
The master dataset, which is a required component, is a version of the base dataset that has been modified with embedded instructions that tell CMOST where to substitute different input parameter values at runtime. You can create the master dataset in the following ways:
● CMOST CMM File Editor
● Builder™ (refer to the chapter "Setting Up Datasets for CMOST" in the Builder User's Guide for more information)
● Text editor, such as Notepad
This section provides an overview and examples of inserting CMOST parameters into the master dataset using the CMM File Editor. CMM File Editor provides more information and detailed instructions for doing this.
To illustrate the embedding of formulas in the master dataset, consider the following example, where formulas for parameters Porosity, PERMH_L1, PERMH_L2, and KvKhRatio have been inserted directly into the master dataset text.
Anywhere CMOST is required to substitute a value or text into the master dataset, a CMOST formula should be entered. CMOST formulas can appear anywhere in the master dataset; however, each CMOST formula must be completed in a single line.

NOTE: The first date/time keyword for a master dataset must be *DATE. Errors will be encountered if *TIME keywords are used.

The following diagram shows how CMOST produces and submits experiment datasets to simulators, then receives and processes the results (for a single processor):

Master Dataset (CMM) → CMOST substitutes experiment parameter values from the Study Experiments Table into the CMM to produce the experiment dataset → CMOST submits the experiment dataset to the scheduler/simulator → Scheduler/Simulator → CMOST receives and processes the results of the experiment → Next experiment
3.5.1.1 Master Dataset Syntax
The master dataset syntax is shown in the following examples:

Example 1: In the original dataset, POR is defined as follows:
POR CON 0.20
In the master dataset, we want to vary the value of porosity across a number of experiments, so a formula is inserted into this line in the dataset, as follows:
POR CON <cmost>this[0.20]=Porosity</cmost>
The line consists of the simulator keywords, the CMOST start tag, the original (default) value in the dataset, the variable name, and the CMOST end tag.

NOTE: Spaces are not allowed in the CMOST portion, and variable names are case sensitive.
Example 2: Formulas can also be used with one or more variables, as follows:
POR CON <cmost>PorosityMultiplier*0.2</cmost>
Again, the line consists of the simulator keywords, the CMOST start tag, a formula, and the CMOST end tag. In the above example, parameter PorosityMultiplier will be multiplied by 0.2 before the simulation is run.

NOTE: Default values, equal to the original value entered in the base dataset, are optional.

Example 3: Values in regions of the reservoir can be modified using the MOD simulator keyword, which takes block ranges (I:I J:J K:K) followed by the modification to apply; a CMOST formula can be inserted as the modification value.
3.5.1.2 CMOST Formulas
All CMOST formulas that are entered into the master dataset must be nested within a start tag <cmost> and an end tag </cmost>. It is optional to use this[OriginalValue]= to start a formula. The original value indicates the value that was used by the base dataset. Also, if the syntax this[OriginalValue]= is used, the variable this can be used in the formula to reference the original value that was entered in the dataset. If the original value is a string (text), it should be enclosed in double quotation marks. For example, CMOST can correctly handle the following lines:
INCLUDE '<cmost>this["por50.inc"]=PORINC</cmost>'
PERMI CON <cmost>this[5000]=PERMH</cmost>
If the CMOST original value will not be used in a formula, the above two lines could be simplified by omitting this[]= as follows:
INCLUDE '<cmost>PORINC</cmost>'
PERMI CON <cmost>PERMH</cmost>
CMOST cannot handle the following line because the original text is not enclosed in double quotation marks:
INCLUDE '<cmost>this[por50.inc]=PORINC</cmost>'
CMOST formula syntax and a description of available built-in functions are provided in Formula Editor.

3.5.1.3 CMOST Formula Examples
The following examples are included to illustrate the insertion of formulas into the master dataset for different purposes. Example 1: To substitute the value of parameter varA directly into the dataset: t hi s[ 1. 0] =var A cmost > var A cmost >
Example 2: To substitute the result of adding parameter varB to the original value: t hi s[ 1] = t hi s + var B cmost >
Example 3: To substitute the result of varA-varB+5:
<cmost>this[26]=varA-varB+5</cmost>
Example 4: To substitute the result of 179.79 × (varA/varB)^0.248:
<cmost>this[203.9]=179.79*POWER(varA/varB,0.248)</cmost>
Example 5: If the result of varA multiplied by varB is greater than 1200, the multiplied parameter will be substituted; otherwise, 1200 will be substituted:
<cmost>this[1800.0]=MAX(varA*varB,1200)</cmost>
Example 6: If varA is greater than or equal to 600, OPEN will be substituted; otherwise, CLOSED will be substituted:
<cmost>this["OPEN"]=IF(varA>=600,"OPEN","CLOSED")</cmost>
Example 7: If varB matches any of the values in the first set of values, the corresponding value in the second set of values will be substituted; for example, if varB was set to 7.3, the value 188.75 would be substituted:
<cmost>this[402.57]=LOOKUP(varB,{3.0,5.0,7.3},{524.62,402.57,188.75})</cmost>
Example 8: If varA matches any of the entries in the first set, the corresponding entry in the second set will be substituted; for example, if varA was set to "porMid.inc", permMid.inc would be substituted into the dataset:
<cmost>this["permMid.inc"]=LOOKUP(varA,{"porLow.inc","porMid.inc","porHigh.inc"},{"permLow.inc","permMid.inc","permHigh.inc"})</cmost>
Example 9: A range of acceptable values is set for varA. If varA is less than 0, the value 0 will be substituted. If varA is greater than 1, the value 1 will be substituted. If varA is between 0 and 1, the value of varA will be substituted:
<cmost>this[0.68]=MAX(MIN(varA,1),0)</cmost>
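The built-in functions used in Examples 4 through 9 behave like the following Python stand-ins (an illustrative sketch; CMOST's formula evaluator is not Python, and these definitions are simplified assumptions):

```python
# Python stand-ins for the built-in functions used in the examples above.
def POWER(base, exponent):
    return base ** exponent

def IF(condition, value_if_true, value_if_false):
    return value_if_true if condition else value_if_false

def LOOKUP(key, keys, values):
    # Return the entry in `values` paired with the matching entry in `keys`.
    return values[keys.index(key)]

varA, varB = 600.0, 7.3
print(179.79 * POWER(varA / varB, 0.248))                       # Example 4
print(IF(varA >= 600, "OPEN", "CLOSED"))                        # Example 6: OPEN
print(LOOKUP(varB, [3.0, 5.0, 7.3], [524.62, 402.57, 188.75]))  # Example 7: 188.75
print(max(min(varA, 1), 0))                                     # Example 9: clamps to 1
```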
3.5.1.4 Inserting Include Files into the Master Dataset
If large arrays of data need to be substituted, it may be easier to use include files; for example, porosity may have a value for each grid block in a reservoir. It would be unrealistic to create a parameter for each grid block's porosity. Instead, multiple include files can be used, where each one contains a different geostatistical realization for porosity. Include files can be used anywhere in a dataset. The syntax that is used to enter an include file into a master dataset is:
*INCLUDE '<cmost>ArrayIncFile</cmost>'
The parameter ArrayIncFile would then be defined as a Text parameter through the Parameters page, with the include files listed as candidate values. The include files contain the block of text that is to be substituted into the dataset. For example, porosity could be used in the following include file for a reservoir model with dimensions ni = 10, nj = 3, nk = 2:
*POR *ALL
.08  .15  .074 .095 .11  .08
.08  .134 .12  .13  .12  .09
.081 .08  .12  .12  .134 .144
.09  .087 .154 .157 .157 .143
.12  .157 .167 .17  .157 .123
.15  .145 .187 .18  .18  .16
.09  .12  .121 .184 .18  .165
.097 .135 .122 .122 .098 .102
.087 .18  .08  .084 .09  .10
.011 .092 .08  .09  .09  .10
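An include file like the one above can be generated programmatically. The sketch below is illustrative only: the file name porMid.inc, the porosity range, and the six-values-per-line layout are assumptions, not CMOST requirements:

```python
import random

def porosity_include(ni=10, nj=3, nk=2, low=0.05, high=0.20, per_line=6):
    """Build the text of a *POR *ALL include file, one value per grid block."""
    values = [random.uniform(low, high) for _ in range(ni * nj * nk)]
    lines = ["*POR *ALL"]
    for i in range(0, len(values), per_line):
        lines.append(" ".join(f"{v:.3f}" for v in values[i:i + per_line]))
    return "\n".join(lines)

# Write one realization; the file name would then be listed as a candidate
# value of the Text parameter (e.g. ArrayIncFile) in CMOST.
with open("porMid.inc", "w") as handle:
    handle.write(porosity_include())
```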
Other include files could be created with similar syntax but with different values entered.

3.5.1.5 Handling Files Referenced by Master Dataset
CMOST uses the following logic to handle a path and its associated files for the INCLUDE, BINARY_DATA, and FILENAMES INDEX-IN keywords:

If the path is an absolute path: CMOST will not copy the associated files to the corresponding study folder, and will not change the path. For example, CMOST will keep the following line unchanged:
FILENAMES INDEX-IN '\\computer8\d\Test\punq-history.irf'
Since it is an absolute path, CMOST will not copy its referenced files (.irf and .mrf).

If the path contains a file name only (no directory information): CMOST will copy the associated files to the corresponding study folder, and will not change the path. For example, CMOST will keep the following line unchanged:
INCLUDE 'PorLow.inc'
CMOST will, however, copy file 'PorLow.inc' from the original directory to the study folder.

If the path is a relative path: CMOST will not copy the associated files to the corresponding study folder, and will re-base the relative path to the study folder when creating datasets. For example, CMOST will re-base the following relative path:
INCLUDE 'incfiles\PorLow.inc'
to the study folder by modifying the line to:
INCLUDE '..\incfiles\PorLow.inc'
Since it is a relative path, CMOST will not copy the referenced include file.

NOTE: Even though a BINARY_DATA keyword not followed by a path name is acceptable to the simulators, CMOST does not allow this in the master dataset because it would require making a copy of the binary data file for each dataset that is created by CMOST. This defeats the purpose of creating the .cmgbin file to save space.
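The three path rules can be summarized in a short sketch (a simplified illustration of the logic described above, not CMOST's actual code; the function name and the one-level re-basing assumption are hypothetical):

```python
import ntpath  # master datasets use Windows-style paths

def handle_path(path):
    """Return (copy_files, path_in_generated_dataset) per the rules above.

    The re-basing assumes the study folder sits one level below the folder
    holding the master dataset, as in the example in the text.
    """
    if ntpath.isabs(path):
        # Absolute (including UNC) path: left unchanged, files not copied.
        return (False, path)
    head, tail = ntpath.split(path)
    if not head:
        # Bare file name: file copied to the study folder, path unchanged.
        return (True, path)
    # Relative path with directories: not copied, re-based for the study folder.
    return (False, ntpath.join("..", path))

print(handle_path(r"incfiles\PorLow.inc"))  # (False, '..\\incfiles\\PorLow.inc')
```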
3.6 CMOST User Interface

An example of the CMOST user interface is shown below:
The CMOST user interface has been designed to make the understanding of and access to the complex, powerful functionality of CMOST as intuitive and simple as possible. Commonly used operations are available from the menu bar and toolbar. For basic information about navigating the CMOST user interface, refer to Getting Started . The CMOST user interface is hierarchically organized through the tree view on the left in a way that parallels the general study workflow. Configuration and results pages are accessed through tree view nodes, as shown below:
[Screenshot callouts: While the engine is running, this data is "read only". Some data can be changed during the run; for example, experiments can be added. Once available, results can be viewed while runs are in progress.]
Each major node in the Input section provides links to subnodes as well as information about the purpose of the subnode, as shown in the following example:
[Screenshot callouts: High-level information about the Characteristic Date Times page. You can double-click to open the History Match Quality page.]
Through the Input node, you will:
Through the Input node, you will:
● Specify input files, including the base and master datasets, base session file, and field data files. Refer to General Properties for more information.
● Specify the fundamental data (time, distance or depth data) that is obtained or calculated from SR2 files and used for sensitivity analysis, history matching and optimization. Refer to Fundamental Data.
● Define and specify the input parameters that will be varied by CMOST in study experiments, including their insertion into the master dataset. Refer to Parameterization.
● Define the objective functions that you want to minimize or maximize. In the case of history matching, for example, you may want to minimize the error between field data and simulation results. In the case of optimization, you may want to maximize net present value. Refer to Objective Functions.

Through the Control Centre node, you will:
● Define and configure study and engine types. Refer to Engine Settings.
● Specify and configure simulation settings. Refer to Simulation Settings.
● Specify the study experiments and, once the study engine has started, monitor their progress and, if necessary, make adjustments. Refer to Experiments Table.
● Start, stop, pause, and monitor the CMOST engine. Refer to Control Centre.
● Monitor the status of the proxy model development and, if necessary, make adjustments to the experiments. Refer to Proxy Dashboard.
● Monitor the progress of simulation jobs. Refer to Simulation Jobs.

Through the Results & Analyses node, you will:
● View and interpret the results of your study. As they are produced, results can be viewed "on-the-fly". The types of results that will be displayed will vary with the study type; for example, if you are running a sensitivity analysis using the One-Parameter-At-A-Time engine, an OPAAT plot will be produced. Refer to Viewing and Analyzing Results for further information.
3.7 Best Practices for Using CMOST

Configuring the I/O Control Section of the Master File
Configure the Input/Output Control section of the .cmm file properly to keep the simulation output files (.irf, .mrf, .rst, .out) as small as possible. Large simulation output files slow down concurrent simulation runs and often trigger intermittent file I/O problems that cause jobs to fail. For details, refer to the WRST, OUTSRF, WSRF, OUTPRN, and WPRN keywords in the appropriate simulator manual. Usually, it is unnecessary to write restart records at each DATE/TIME, GRID quantities (value per grid block), or the .out file for jobs submitted by CMOST.

Running Multiple Concurrent Remote Jobs
If you want to run multiple (>=5) concurrent remote jobs, CMG recommends using a Windows 2003 or 2008 File Server to store CMOST input/output files (.cms, .vdr, and simulation input/output files). This is because, for Workstation-type Windows operating systems (Windows XP, Windows Vista, and Windows 7), the maximum number of allowed remote sessions from remote computers is 10. This limit includes all transports and resource sharing protocols combined. Therefore, for more than four concurrent remote jobs, a Windows workstation may not be adequate for use as a CMOST file server. For more information about the inbound connections limit in Windows XP, Windows Vista, and Windows 7, view Microsoft Knowledge Base article Q314882. Checking Available Disk Space
Occasionally, check the available disk space in the C: drive of all Windows compute nodes. If the C: drive of the compute node is low in available disk space, remove unwanted files in the C:\ProgramData\CMG\CopyLocalJobs (Windows 2008, Windows Vista, and Windows 7) or C:\Documents and Settings\All Users\Application Data\cmg\CopyLocalJobs (Windows 2003 and Windows XP) folder. For Linux compute nodes, check the available disk space in the /tmp folder. Unwanted simulation output files (.irf, .mrf, .rst, .out) can be removed from the /tmp folder.
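A periodic check can be scripted; the sketch below is a hypothetical helper (the 10 GB threshold is an arbitrary choice, not a CMG recommendation):

```python
import shutil

def free_gb(path):
    """Free disk space at `path`, in GB."""
    return shutil.disk_usage(path).free / 1024**3

def warn_if_low(path, min_gb=10.0):
    """Print a warning when free space at `path` drops below `min_gb`."""
    free = free_gb(path)
    if free < min_gb:
        print(f"WARNING: only {free:.1f} GB free at {path}")
    return free

# Example targets (paths from the text):
#   warn_if_low(r"C:\ProgramData\CMG\CopyLocalJobs")  # Windows compute node
#   warn_if_low("/tmp")                               # Linux compute node
```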
Using Cumulative Rate Data for History Matching Type Objective Functions
Rates (e.g. oil rate, water rate, or gas rate) are not recommended for use in constructing history matching error type objective functions because they are usually discontinuous step functions. The nature of a step function means that the rate value is not well defined at the boundary of each interval (small time disparities). The step function disparities could cause inaccurate calculation of the history matching error. Since cumulative quantities (cumulative oil, cumulative water, and so on) are continuous in time, they are recommended as a replacement for rates in history match error calculations. If the entire curve of a cumulative quantity is matched perfectly, this guarantees the corresponding rate curve will be matched as well. Similar recommendations can be made for other ‘instantaneous’ quantities, such as water cut, or instantaneous GOR or SOR. Use of History Matching Error for Sensitivity Analyses
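The effect can be demonstrated numerically. The sketch below is an illustrative toy model (the rates, step time, and 0.1-day shift are invented numbers): a small timing disparity produces a large pointwise rate error but only a small cumulative error:

```python
def rate(t, shift=0.0):
    """A step-function rate: 100 before day 10 (+shift), 50 after."""
    return 100.0 if t < 10.0 + shift else 50.0

def cumulative(t, shift=0.0, dt=0.01):
    """Cumulative volume by a simple left-endpoint Riemann sum."""
    return sum(rate(i * dt, shift) * dt for i in range(int(t / dt)))

# Compare the "field" (no shift) against a "simulation" whose step arrives
# 0.1 days late, evaluated just after the true step time.
t = 10.05
rate_error = abs(rate(t) - rate(t, shift=0.1))
cum_error = abs(cumulative(t) - cumulative(t, shift=0.1))
print(rate_error, cum_error)  # the rate error (50.0) dwarfs the cumulative error (about 2)
```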
In sensitivity analysis, the history matching error is not recommended as an objective function type; instead, direct physical quantities such as cumulative oil, pressure, or temperature are recommended. There are two reasons for this. First, history matching error type objective functions transform linear functions into non-linear functions. Second, for direct physical quantities, understanding and applying sensitivity analysis results can often be supported by the fundamental theory in reservoir simulation.
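The first point can be illustrated with a toy model (hypothetical numbers, not a CMOST calculation): a simulated quantity that responds linearly to a parameter yields a squared history-matching error that is non-linear in that parameter:

```python
def simulated(x):
    return 2.0 * x + 1.0                   # a quantity that is linear in x

def hm_error(x, observed=7.0):
    return (simulated(x) - observed) ** 2  # squared error: non-linear in x

# Equal parameter steps give equal changes in the linear quantity...
assert simulated(2.0) - simulated(1.0) == simulated(3.0) - simulated(2.0)
# ...but unequal changes in the history-matching error.
print(hm_error(1.0), hm_error(2.0), hm_error(3.0))  # 16.0 4.0 0.0
```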
4 Getting Started
4.1 Introduction

This chapter provides basic information about the following:
● Opening and Navigating CMOST
● Creating a CMOST Project
● Using the Study Manager
● Common Screen Operations
● Closing CMOST
4.2 Opening and Navigating CMOST

To open CMOST:
Open Launcher then double-click the CMOST icon. The CMOST splash screen appears while the application is loading, then the CMOST main screen is displayed:
[Screenshot: CMOST main screen with the Title bar, Menu bar, Toolbar, and Status bar labeled]
The elements of the CMOST main screen are as follows:
● Menu bar:
  - Through the File menu, you can initiate the following commands:

    Command                   Description
    New | Project             Create a new CMOST project. Refer to Creating a CMOST Project.
    New | Study               This selection, available once you have a project open, will add a new study to the project. Refer to To Create a New Study.
    Open | Project            Open an existing CMOST project. Refer to Opening a CMOST Project.
    Open | Existing Study     This selection, available once you have a project open, will open an existing study.
    Save | Current Study      Save the currently selected CMOST study.
    Save | Save All           Save changes to all studies.
    Convert CMT/CMR File      Convert CMOST task (CMT) and results (CMR) files to a CMOST CMP project file. Refer to Converting Old CMOST Files to New CMOST Files.
    Exclude Existing Studies  Exclude an existing study from the project. Refer to To Exclude a Study.
    Close Project             Close the current CMOST project. You will be prompted to save your changes.
    Recent Files              Open one of the last (up to 5) projects that you have opened.
    Exit                      Close the CMOST session. You will be prompted to save your changes.

  - Through the Help menu, you can initiate the following:

    Command                   Description
    Index                     Open CMOST help information with the index showing.
    Search                    Open CMOST help information with the search window showing.
    Contents                  Open CMOST help information with the table of contents showing.
    About                     View information about this version of CMOST, a link to the CMG website, and an email address for CMG support.
● Status bar: Displays CMOST status information, which will vary according to the page selected.
● Toolbar: The following buttons are provided in the toolbar. Some of these buttons will not appear until you have opened or created a project, and opened or created a study:

    Button              Description
    New Project         Open a dialog box through which you can create a new project. If you have another project open, you will be prompted, as necessary, to stop the study engines and save.
    Open Project        Open an existing CMOST project. If you have another project open, you will be prompted, as necessary, to stop the study engines and save.
    New Study           Once a project is open, you can click this button to open a dialog box through which you can create a new study.
    Save Current Study  Save the selected study settings and results, but do not close the project.
    Save All            Save the settings and results for all studies, but do not close the project.
    Start Engine        Start the CMOST engine for the selected study. This button will not be available if the study engine is already running or is not ready to start.
    Pause Engine        Pause the CMOST engine for the selected study. This button is only available if the study engine is already running.
    Stop Engine         Stop the CMOST engine for the selected study. This button is only available if the study engine is already running.
    Help                Open CMOST help information to the front page, with the table of contents displayed.
4.3 Opening a CMOST Project

To open an existing CMOST project:
1. Click File | Open Project.
NOTE: If you have another project open, you will be prompted to save it first.
2. Browse to the project file and then click Open. The project will open to the Study Manager folder.
NOTE: If you open a project that is already opened by you or someone else, you will be restricted from making edits or saving the project file, as indicated below.
[Screenshot callouts: Lock symbols on tabs indicate the project is locked by another active session, and edits and saves are not permitted. A message in the Status Bar indicates edits and saves are not permitted.]
NOTE: You can also open a CMOST project file in the following ways:
1) From Explorer, drag a .cmt or .cmr file onto the CMOST desktop icon. In this case, the file will only open if a .dat file is available.
2) From Explorer, drag a .cmp file onto the CMOST desktop icon.
3) Right-click a .cmp file then select Open with | CMG.CmostPlus.Studio.View.
4.4 Creating a CMOST Project

To create a new CMOST project:
1. In the menu bar, click File | New | Project. The Create New Project dialog box is displayed:
NOTE: As shown above, required fields are outlined in red. As well, move the pointer over the icon to view tips and information about the field setting.
2. Enter the fields in the Create New Project dialog box, as follows:
● Project name (required): Enter the desired project name, any combination of keyboard characters, including spaces.
● Base Dataset (required): Click Browse to the right of the field, browse to and select the base dataset, and then click Open. The location of the base dataset will be displayed. The SR2 files should also be in this location.
● Copy base case to project folder: If you select this check box, the base case files will be copied into the project .cmpd folder. If you do not select it, then the base case files will be used from their original location.
● Project Location: Click Browse, then browse to and select the folder, and then click Open.
● Project file: The project .cmp file will automatically be entered based on Project name and Location.
● Project folder: The project .cmpd folder will automatically be entered based on Project name and Location.
● Comments: Enter project comments as necessary. These comments will be shown in the Study Manager tab.
Once you have filled in the required fields, the OK button is enabled as shown in the following example:
3. Click OK to create and save the CMOST project file and project folder. The main screen will now appear as shown in the following example:
[Screenshot callouts: Project comments copied in from the Create New Project dialog box. No studies created or imported. Study information.]
As shown above, the comments you entered in the Create New Project dialog box are carried forward to the Project comments area in the Study Manager tab.
The base dataset files are copied into the project folder, which is designated with the extension .cmpd for CMOST project directory. The following shows an example of the folders and files that are created after you create a project:
[Screenshot callouts: Project Folder, Project File, and the Base Dataset Files that were copied in]
4.5 Using the Study Manager

Through the Study Manager tab, you can:
● Create a new study
● View study details
● Change the name of a study
● Add an existing study to the project
● Load/unload a study
● Exclude a study
● Import data from a study
● Copy a study
NOTE: Some Study Manager functions, such as creating a new study or adding an existing
study, are accessible through the File menu and by right-clicking in the study view; however, the procedures in this section are based on access to these functions through the buttons and icons in the Study Manager tab.
4.5.1 To Create a New Study

1. Click the New Study button on the right side of the Study Manager tab, the New Study button in the toolbar, or click File | New | Study in the menu bar. The New Study dialog box is displayed:
NOTE: As shown above, required fields are outlined in red. As well, move the pointer over the icon to view tips and information about the field setting.
Fill in the fields as follows:
● Name: Enter a name for the study. Once you click OK, a study icon is added to the Study Manager tab, and a study tab is created, both based on this name.
● Base dataset: Browse to and select the base dataset; for example, if you have copied in the dataset, you should browse to and select the copy.
● Automatically create master dataset (.CMM) using the base dataset: If you select this check box, a master dataset will automatically be created from the base dataset. The master dataset will be saved in the same folder as the base case. If you do not select this check box, you will need to specify the master dataset through the General Properties subnode.
● Type: From the drop-down list, select the study type.
NOTE: You can later change the study type through the Engine Settings node.
If you click the button to the right of Advanced Settings, you can access the following study settings:
● Special dictionary file: CMOST supports special simulator versions, such as STARS-ME™. If a special dictionary file is required to process SR2 files produced by a special simulator, you will need to select Special dictionary file required for the study and then browse to and select the dictionary file.
● SR2 processing stack size: Stack size (MB) used by the SR2 reader to read SR2 files. The default stack size is 40 MB.
2. Click OK. The new study is created; in particular:
● New study files and folders are created, as shown in the following example:
[Screenshot callouts: Study folder, initially empty, will contain study VDR, IRF, MRF and LOG files, as specified in the Simulation Settings node. If there is an error during the run, CMOST will try to save the study file to a .bak file. Study master dataset, if "Automatically create master dataset (.CMM) using the base dataset" was selected. CMOST study file.]
NOTE: As shown above, if there is an error during the run, CMOST will try to save the study
file to a .bak file. The .bak file is the last valid file and it has the same format as a study file.
● A study tab is added to the project view as shown in the following example, which has the General Properties page open:
[Screenshot callouts: Study tree view, Input node icons, Study notes]

The study tree view contains the following primary nodes:
● Input: Through this node and its subnodes, you create and edit input data for the study.
● Control Centre: Through this node and its subnodes, you configure, run, monitor and control the CMOST study.
● Results and Analyses: Through this node and its subnodes, you can view and analyze the results of the CMOST study.
To open study node pages, you can click the node or subnode in the study tree view or, in the case of the Input node page, shown above, click the subnode. In the Input page, numerical information may be superimposed on an icon. The following example indicates that two date times have been defined in the Characteristic Date Times node:
Status icons may also be superimposed on the icons that precede each node and subnode name in the study tree view:
Icon                   Meaning
Error                  Errors have been identified with the node or subnode. Click the Validation tab at the bottom of the node pane for further information.
Warning                Warning about the settings or configuration of certain nodes. In the case of the Simulation Settings node, this icon is displayed if you have not selected a scheduler. Click the Validation tab at the bottom of the node pane for further information.
Information            The page contains information that may be useful for the user. Click the Validation tab at the bottom of the node pane for further information.
No errors or warnings  There are no errors or warnings associated with the current node or subnode.
● A study icon is added to the Study Manager tab, as shown in the following sample project, which has five studies: one sensitivity analysis (SA) study [shown selected], two history match (HM) studies, one optimization (OP) study, and one uncertainty assessment (UA) study.
[Screenshot callouts: Study tabs, study icons, and information about the selected study, OPAAT]
4.5.2 To View a Study

Click the study icon to view study details at the bottom of the page.
NOTE: If Auto load is checked, then when you open the project, the study will automatically be loaded, regardless of its prior status. If Auto load is not checked, the study will revert to the Load/Unload status that it had when you exited the last session.
Double-click the study icon, or right-click the icon and then select Go to Study, to view the study tab.
4.5.3 To Change the Display Name of a Study

Click and then edit the name of the study in the associated icon to change its display name. This changes the name on the study icon and study tab, but does not change the names of the associated study files and folders.

4.5.4 To Add an Existing Study to the Current Project Session

Click the Add Existing button or click File | Open | Existing Study in the menu bar to open a Windows Explorer window, browse to the existing study, which has to be in the project folder, and then click Open.

4.5.5 To Load/Unload a Study

You can unload or remove a study from the current project session when, for example:
● You need to reduce the amount of memory used by the project.
● The project contains studies that have been run and do not need to be run again.
● The project contains studies that are out of date or no longer valid.
Once a study is unloaded:
● The study icon in the Study Manager will change to indicate that the study has been unloaded, as shown below.
● The study tab is removed from the session and the study details are not accessible.
● The study cannot be copied to a new study.
● Data cannot be imported from the study.
To unload a study, right-click its icon and then select Unload, or click the study and then click the Unload button. The study icon will change to indicate that it is now unloaded:
[Screenshot: study icons labeled Study Loaded and Study Unloaded]
To load the study, right-click the study icon and then click Load.
4.5.6 To Exclude a Study

You can exclude a study from the project, in which case it will not be viewable in the project screen and will not use any computer memory or processing. You may, for example, have created and run a study and no longer need to view it:
1. Right-click the study then select Exclude, or click the study icon and then click the Exclude button on the right side of the Study Manager tab. In either case, you will be asked if you want to remove the selected record(s).
2. Click Yes. The study icon is removed from the Study Manager tab, and the study tab is removed. The study folder and files are not deleted, so the study can be added back into the project later.
You can also exclude multiple studies through the menu bar, as follows:
1. Click File | Exclude Existing Studies. The Exclude Existing Studies dialog box is displayed:
2. In the To Exclude column, select the study (or studies) you want to exclude and then click OK. The study is removed from the project view but the study folder and files are not deleted. You can add the study back into the project at any time through the Add Existing button in the Study Manager tab, as outlined above.
4.5.7 To Import Data from a Study

You can import data from one (loaded in the same project) study into another:
1. Right-click the study that you want to import data to, then select Import. The Import Study Data dialog box is displayed, showing the studies that are loaded and the fields available for import:
[Screenshot callouts: Loaded studies; fields available for import into the selected study]
2. Select the source study.
3. Set the fields you want to import to True and the ones you do not want to import to False, as shown in the following example:
[Screenshot callout: Parameterization and Experiment Table will be imported from the OPAAT study]
4. Click OK. In the above example, Parameterization data and the Experiments Table are imported from the OPAAT study into the selected study.
4.5.8 To Copy a Study

You can copy a (loaded) study to a new study in the project, as follows:
1. Right-click the study that you want to copy and then select Copy to New Study. The Change Name dialog box is displayed, with New study name set to the original study name appended with a copy number, as shown in the following example:
[Screenshot callout: Change New study name as necessary]
2. As desired, change New study name to a name that is not already being used and then click OK. A new study will be created that is identical to the original study, i.e., all data is preserved. A new study icon is added to the Study Manager tab and a study tab is added.
4.6 Common Screen Operations and Conventions

The information displayed varies, depending on the study node; however, a number of operations are common across all pages:
4.6.1 Buttons and Icons

When you click a button with a drop-down arrow beside the label, it will open a drop-down list of options, as shown below:
4.6.2 Plots

4.6.2.1 To Copy an Image to the Clipboard

To copy the image of a plot to the clipboard:
1. Right-click anywhere in the plot then select Copy Image to Clipboard.
2. In the target application (Word, for example), click Paste (or CTRL+V).

4.6.2.2 To Save an Image
To save a plot to an image file:
1. Right-click anywhere in the plot then select Save Image.
2. In the dialog box, select the desired file type, browse to the folder, and then click Save.
4.6.2.3 About Data Points and Curves

Data curves and points are displayed in graphs using the following conventions:
[Plot legend: Base Case, Field History, General Solution, Highlighted Experiment, Optimal Solution, Training Experiment, Verification Test]
If you move the pointer over a data point, it will be changed to red and the point's data values will be displayed. If you click the data point, it will be changed to have a black border.

4.6.2.4 To Highlight an Experiment
To highlight an experiment, right-click the associated data point or curve then select Highlight the Experiment. If you highlight an experiment on one graph, then it will be highlighted where the experiment appears in other graphs. This allows you to view the results of an experiment across multiple graphs. To cancel, right-click the highlighted point or curve, then clear Highlight the Experiment. In some cases, you may need to click the refresh button to refresh the graph. You can also highlight or unhighlight experiments through the Experiments Table.

4.6.2.5 To Zoom In and Out of Plots
When viewing plots of CMOST Result Observers, you can zoom into any area of the plot, as shown below. To zoom in:
You can zoom in in two ways. First, you can define the area that you want to magnify:
1. In the plot, click the lower left corner of the area you want to zoom in on, then drag the cursor to the upper right of the area, for example:
2. Release the mouse button. The zoomed-in section will be displayed, for example:
If you have a mouse with a wheel button, you can zoom into an area, while maintaining the position of the x and y coordinates of the cursor. Move the cursor to the place in the plot that you want to maintain as fixed then rotate the wheel to zoom in. To zoom out to full size:
Right-click the plot and then select Un-zoom to 100%.
4.6.3 Names

Names must be entered for each defined parameter, objective function, objective function term, time-series observer, and fixed date observer. The following guidelines must be followed:
● The first character of a name must be a letter. Remaining characters in the name can be letters, numbers, and underscore characters.
● Names are case sensitive.
● Spaces are not allowed. Underscore characters may be used as word separators, for example, perm_h and perm_v.
● Simulator keywords can be used as parameter names. This has advantages for users who are familiar with simulator keywords because it will clarify the meaning of such defined parameters.
● Do not use the following names because they are internal keywords used by CMOST: this, Status, Dataset, Scheduler, Computer, Pattern, Source, Average, Maximum, Minimum, and Target.
● Names must be unique; that is, if a name is already used for a parameter, it should not be used for an objective function or a result observer.

4.6.4 Required Fields

In dialog boxes, required fields have red borders, for example:
[Screenshot callouts: Required fields have red borders; move the pointer over a field for a tip]
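The naming guidelines in 4.6.3 can be expressed as a small validator (an illustrative sketch; CMOST performs its own validation, and this function is hypothetical):

```python
import re

# Internal CMOST keywords that must not be used as names (from the text).
RESERVED = {"this", "Status", "Dataset", "Scheduler", "Computer", "Pattern",
            "Source", "Average", "Maximum", "Minimum", "Target"}
# Letter first, then letters, digits, or underscores; no spaces.
NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def is_valid_name(name, used=()):
    """True if `name` follows the guidelines and is unique among `used`."""
    return bool(NAME.match(name)) and name not in RESERVED and name not in used

assert is_valid_name("perm_h")
assert not is_valid_name("2perm")    # must start with a letter
assert not is_valid_name("perm h")   # spaces are not allowed
assert not is_valid_name("this")     # internal CMOST keyword
assert not is_valid_name("perm_h", used={"perm_h"})  # names must be unique
```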
4.6.5 Default Field Values

In default tables, when data is set to its default value, the field name will be bolded, as shown in the following example:
[Screenshot callouts: These fields are not set to their default values; these fields are set to their default values]
4.6.6 Tab Display

If you right-click the Study Manager or a study tab, the following commands are available:
● Close: You cannot close the Study Manager tab, but you can close study tabs. By closing a study tab, you are closing the display of the study tab; i.e., you are not deleting the study. To re-open a closed study tab, double-click its icon in the Study Manager tab, or right-click the icon and then select Go to Study. You can also close a study tab by clicking Close in the tab.
● Float: A tab can be floated from the main screen, and it can then be dragged to any location within the main CMOST screen. You can float a tab in the following ways:
  - Drag the study tab up or down. As you do this, the study tab will change to a dialog box.
  - Right-click the tab and then select Float. Drag the study dialog box to the desired location.
● Dock: To dock a floating dialog box, right-click the study dialog box and then select Dock.
● New Horizontal Tab Group, New Vertical Tab Group: You can organize tabs into different groups, in either horizontal or vertical groupings, as shown in the following example:
You can have a combination of horizontal and vertical tab groups, and you can move tabs between groups. Click the Active Files button at the end of a tab group to view a list of tabs in the group and to select one of them. Regardless of which tab group the Study Manager is in, it displays all studies in the project.
4.6.7 Tables
4.6.7.1 To Insert, Delete and Repeat Rows
In general, click Insert to enter a row in a table below a selected row. Click Delete to delete the selected row from the table. Click Repeat to repeat the selected row but, for example, with a different Origin Type, if available.
4.6.7.2 To Enter Cell Data
Depending on the table cell, you will enter cell data in one of several ways:
● In some cases, the contents of the cell cannot be directly modified, for example, the specifications for a file that you have entered.
● In some cases, clicking the cell opens a drop-down list of options; double-clicking a cell with a drop-down list will also open the list. Select the desired option.
● In some cases, you type in the cell contents directly.
4.6.7.3 To Adjust Table Columns
In tables, you can drag the sides of columns to adjust their widths.
4.6.7.4 To Order Table Headings
You can click table headings to order the rows in the table:
- Unsorted: rows are displayed in the order in which they were originally entered.
- Ascending: rows are displayed in ascending order of the items in the selected column.
- Descending: rows are displayed in descending order of the items in the selected column.
4.6.7.5 To Organize Table Rows and Columns
Some tables support sorting of rows on the basis of one or more columns, as illustrated in the following example:
Some tables support sorting of rows on the basis of one or more columns, as illustrated in the following example: 1. Open a study to the Experiments Table:
2. You can reorder the columns by dragging the column headings to the left or right. In the following example, we have moved the parameter columns (ModKH1, ModKH13, and so on) to the left:
NOTE: If you export the Experiments Table to Excel, the column ordering will be maintained.
3. You can drag column headings to the area above the table to hierarchically organize the experiments. In the following example, we are sorting the experiments on the basis of first ModKH1 and then ModKH13:
4. You can close groupings of rows to focus on certain groups of experiments. In the following example, we have only opened rows with experiments where ModKH1=0.25 and ModKH13=0.25:
4.6.8 Validation tab
The Validation tab provides details of outstanding errors and warnings. For example, it may show an error that a value is needed for a Global Objective Function Name, a warning that no scheduler is set to Active in the Simulation Settings page, and an informational message that, because a parameter has been added, there are experiments that will need to be reprocessed.
NOTE: Errors must be resolved before you can start the CMOST engine. Warnings are optional.
4.7 Closing CMOST
1. You can click Save at any time to save your changes without closing the CMOST session.
2. When you want to end the session, click the Close button at the top right of the main screen or click File | Exit in the menu bar. You will be asked if you want to save your changes. Click Yes.
5 Creating and Editing Input Data
5.1 Introduction
The following sections describe the configuration of the CMOST pages used to create and edit input data and prepare it for CMOST runs.
5.2 General Properties
Through the General Properties page, you enter the general data and files that will be used in the project studies. This page needs to be filled in for all study types. The page provides fields for the master dataset (with buttons to browse to it, edit it in the CMM Editor, and view or edit advanced study settings), the base dataset, the base session and 3tp files, read-only data from the base SR2 file (with a button to reload the SR2 files), and a table of imported field data files (with buttons to import field history and well log files, reload all field data files, and remove all field data files from the study).
5.2.1 General Information Area
● Unit system for reading SR2: Specifies the SR2 output data units. It does not affect the input and output data units used by the simulators; the CMOST unit system only affects the data units of objective functions and observers defined in the study file. For example, if the unit system is SI, the unit for oil rate will be m3/day and the oil price for NPV calculation will be $/m3. Similarly, if the unit system is Field, the oil price will be $/bbl.
● Master dataset relative path: The master dataset can be created automatically when you create the new study, or you can choose to use an existing master dataset file. The study master dataset (.cmm) is a version of the base dataset that has been modified to instruct CMOST where to enter parameter values into the dataset. The Master dataset relative path can be entered manually or by using the Browse button. If the file is not in the study folder, CMOST will copy the file into the folder automatically. The master dataset is a required component. Use the Edit button to open the master dataset in the CMM Editor for editing. Refer to the Master Dataset section for more information about the master dataset file.
● Base dataset relative path: The base dataset is entered when you create the new study. The base dataset is a required component. The Base dataset relative path can be entered manually, or by clicking the Browse button. See the Base Dataset section for more information about this file.
NOTE: Changes made in the base dataset will not be reflected in the master dataset, or vice versa. Each must be edited separately.
● Base session file relative path: Base session and base 3tp files are not required but are useful for analyzing simulation results, since they can be used as the basis for displaying plots in Results Graph and Results 3D of simulation runs created by CMOST. See the Base Session File section for details.
5.2.2 Base SR2 Information Area
The information in this area, which is read-only, provides CMOST with basic information about the dataset, including the type of simulator that was used, and the simulation start and end dates. It also points to the base SR2 files, which can be displayed in CMOST plots for comparison purposes. If changes are made in the master dataset that were not included in the SR2 files when the study file was created, new SR2 files must be imported to update the corresponding sections of the study file. For example, if a new well is added to the master dataset after the study file was created, new SR2 files must be imported if you want CMOST to use results from the new well. To do this, update the base dataset to contain the new well, run it through a simulator, and then import the new SR2 files by clicking the Reload SR2 button.
5.2.3 Field Data Information Area
In this area, you can import field history and well log files into the study folder. These files are most often required for history-matching objective functions, but they can also be used during sensitivity analysis. You can reload these files if they have been changed, or remove them from the study.
5.2.4 Advanced Settings
You can click the Advanced button to open the Advanced Settings dialog box.
NOTE: Select fields in the Advanced Settings table to display information about the settings. These settings are entered when a new study is created.
● Stack Size (MB) for SR2 Reader: Stack size (MB) used by the SR2 reader to read SR2 files. The default stack size is 40 MB.
● Special Dictionary File Full Path: This needs to be specified only if a special dictionary file is required to read the SR2 files; otherwise, leave this field blank.
● Formula Coding Language: This setting applies to all formulas used in the study. Only JScript is supported in the current version of CMOST.
● Data Compression Algorithm: Data compression algorithm used by CMOST object serialization and deserialization, either Deflate or NoCompression.
● SR2 Data Filtering: If checked, data filtering will be carried out on the SR2 data to reduce data redundancy.
● Validate User-Defined JScript Formulas: If checked, CMOST will validate user-defined script formulas when starting the engine. If a validation error is found, the engine will not be started.
5.3 Fundamental Data
Through the Fundamental Data pages, you specify the simulator output data series that you are using in your study.
5.3.1 Original Time Series
Original time series are time series data produced directly from simulator SR2 files. To enter an original time series in a study, select the Fundamental Data | Original Time Series node. The Original Time Series page displays a table of the original time series used in the study and, for the selected series, a plot comparing the data produced by the simulator with field data (if available), together with the plot legend and source files. A Field Data tab gives access to the field data for the selected series.
Original Time Series Table:
- To add an original time series to the table, first click the Insert button to insert a row and then select values for Origin Type, Origin Name and Property from the drop-down lists in the corresponding cells. If there are already rows in the table, clicking Insert will insert a row below the selected row.
- To delete an original time series, select the row and then click Delete to remove it from the table. You can use the SHIFT and CTRL keys to select multiple rows for deletion. When you are prompted to confirm the deletion, click Yes to proceed or No to cancel.
- To repeat a row, select the row and then click Repeat. This only works if there is more than one origin name for the origin type. You will be prompted with a list of origins of the same type, with the same property defined, that do not already appear in the table.
Base Case and Field Data Plot:
- For the selected origin, the plot compares data from the simulator output file (base SR2 files) with field data, if available. This plot shows how close the simulator output is to the field data for the selected original time series.
- If you click a point on a field history curve (if one is available), the point will turn red and its date, data value, and HM weight will be displayed.
Field Data tab: Click the Field Data tab at the left of the plot to open a table of field data for the selected origin. The table contains an HM Weight for each data point, which is initially set to the default value, 1. HM Weight is used in the calculation of the History Match Error. If you select a point (row) in the Field Data table, the corresponding point will be highlighted in the plot in red with a black border. If you reduce the value of the HM Weight for the point, the size of the data point on the plot will also be reduced. This provides a visual indication of the importance of each of the field history data points. Through the Field Data table, you can edit the HM Weight for any field data points by one of the following methods:
- Type the new weight directly into the cell and then click another cell.
- Select and then right-click a row or multiple rows in the table. The Modify weight to dialog box is displayed.
Enter the new HM Weight and then press Enter. The HM Weight will be updated in the table and in the plot.
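The role of HM Weight can be illustrated with a small sketch. This is a hedged illustration of how per-point weights typically enter a history match error (a weighted average of relative errors), not CMOST's exact History Match Error formula:

```javascript
// Hedged sketch: weighted average of relative errors between simulated
// and measured values. CMOST's actual History Match Error calculation
// may differ in detail; this only illustrates how HM Weight scales the
// contribution of each field history point.
function weightedHmError(simulated, measured, weights) {
  var num = 0, den = 0;
  for (var i = 0; i < measured.length; i++) {
    var relErr = Math.abs(simulated[i] - measured[i]) / Math.abs(measured[i]);
    num += weights[i] * relErr; // a weight of 0 removes the point entirely
    den += weights[i];
  }
  return num / den;
}
```

Setting a point's weight to 0 excludes it from the error; intermediate weights between 0 and 1 de-emphasize less reliable measurements.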
5.3.2 User-Defined Time Series
Through the User-Defined Time Series page, you can define a time series that is not directly available from the SR2 files, but which can be derived from available SR2 data. To illustrate, consider the following example, where we define a cumulative GORSC time series:
1. Select Input | Fundamental Data | User-Defined Time Series.
2. In the User-Defined Time Series table at the top of the page, click to insert a new row in the table.
3. Select the new row and then define the new time series:
- Name: A name for the new time series, CumGORSC in the above example.
- Calculation Start, Calculation End: The start and end dates for the new time series.
- Calculation Frequency: The times at which you want the time series to be defined, one of:
- Every Common Data Point: Time series data will be calculated at times where the data required for the calculation is available, between Calculation Start and Calculation End.
- Every Minute, Every Hour, and so on: Time series data will be calculated at the specified times between Calculation Start and Calculation End. Since original time series data may not be available at all of these times, the calculation may be based on the setting of Transformation.
- Transformation: If data is not available at all of the desired points, it will be derived using the transformation method, one of:
- None: Time series data will only be calculated if data is available for the date.
- Numerical Integration: User-defined time series data is determined using the numerical integration method, in which case you will need to specify the Numerical Integration Option (one of Backward Rectangle, Forward Rectangle, or Trapezoidal Rule) and Time Interval Unit.
- Numerical Differentiation: User-defined time series data is determined using the numerical differentiation method, in which case you will also need to specify the Numerical Integration Option (one of Backward Difference, Forward Difference, or Central Difference) and Time Interval Unit.
- Moving Average: User-defined time series data is determined using the moving average method, in which case you will need to specify the Moving Average Window.
- Unit Label: Units for the user-defined time series. These units will be displayed in the Base Case and Field Data Plot Preview. In our example, the units are ft3/bbl.
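Two of the transformation options above can be sketched as follows. These are conventional implementations of the trapezoidal rule and a centered moving average; CMOST's internal implementations may differ in detail:

```javascript
// Hedged sketch of the Trapezoidal Rule option: cumulative integral of
// a rate series over irregular time steps.
function trapezoidalCumulative(times, rates) {
  var cum = [0];
  for (var i = 1; i < times.length; i++) {
    cum.push(cum[i - 1] + 0.5 * (rates[i] + rates[i - 1]) * (times[i] - times[i - 1]));
  }
  return cum;
}

// Hedged sketch of the Moving Average option: centered average over a
// window of +/- halfWindow points, clipped at the series ends.
function movingAverage(values, halfWindow) {
  var out = [];
  for (var i = 0; i < values.length; i++) {
    var lo = Math.max(0, i - halfWindow);
    var hi = Math.min(values.length - 1, i + halfWindow);
    var sum = 0;
    for (var j = lo; j <= hi; j++) sum += values[j];
    out.push(sum / (hi - lo + 1));
  }
  return out;
}
```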
4. Specify the original time series data that will be used to calculate the user-defined time series, using the drop-down lists for Origin Type, Origin Name, and Property, and by typing in the VarName. The VarName is the name that will be used for the variable in the formula. In the example, we have inserted two time series, Cumulative Gas SC (VarName CumGas) and Cumulative Oil SC (VarName CumOil).
5. In the Formula pane, enter the formula for the user-defined time series data where indicated; the variables available for the formula to use are listed beside it. Refer to Formula Editor for information about entering CMOST formulas.
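Using the VarNames from this example, the cumulative GOR formula amounts to dividing the two inserted series. The sketch below is plain JavaScript standing in for the CMOST JScript formula, and the zero-oil guard is an assumption added for illustration, not taken from the guide:

```javascript
// Hedged sketch of the CumGORSC formula: cumulative gas divided by
// cumulative oil, with an assumed guard against division by zero
// before any oil has been produced.
function cumGORSC(CumGas, CumOil) {
  return CumOil > 0 ? CumGas / CumOil : 0;
}
```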
6. Using the drop-down lists in the field data table at the lower left, specify the field data Origin Type, Origin Name, and Property Name associated with the user-defined time series.
7. To view the fit between the user-defined time series calculated from the base case and the field data, click the Base Case and Field Data Plot Preview tab. This plot shows, for the user-defined time series, a comparison of data obtained from the SR2 files and field history data.
8. If you click the Field Data tab on the left, you will open a table of the field history data you are comparing with the user-defined time series.
Through this table, you can set the HM Weight for each data point to values between 0 and 1, for use in the calculation of the history match error, as described in History Match Error.
5.3.3 Property vs. Distance Series
A property vs. distance series can be used to calculate a history matching error. Property vs. distance data is retrieved from the SR2 files and compared with data from a well log file. The relative error between the simulated data (SR2 file) and field data (well log file) is calculated. To create a property vs. distance series:
1. Click Fundamental Data | Property vs. Distance Series.
2. In the Property vs. Distance Series page, click to insert one or more property vs. distance series in the table. The Insert Property vs. Distance Definitions dialog box is displayed.
3. Select the desired option, one of:
- Insert new property vs. distance definitions: Select the number of items you want to insert then click OK. The rows will be entered with default settings. You will have to enter the Well Name and Property and then adjust the other settings as necessary.
- If you select Insert multiple property vs. distance definitions using existing selected well log records…, available well log records will be displayed for selection. As noted in the dialog box, only well log records with standard property names (as defined in the CMG dictionary file) are supported by CMOST. Select the well log records then click OK.
4. Configure the settings in the table, as follows:
- Name: Enter a name for the property vs. distance data, for example, GasRateRC_2001_12_17, which incorporates both the property and the date.
- Well Name: In the drop-down list of available wells (available origin names for the WELLS origin type), select the one that you want to use for the property vs. distance series.
- Property: Select the property. Not all properties in the drop-down list may have property vs. distance or well log data available, in which case they cannot be used for this analysis. If the data does not exist, it will not be displayed in the plot.
- Log Date Time: Specify the date time on which you want to define the property vs. distance series. Again, the data will be available for certain date times only.
- Data Path: Specify the method used to retrieve property vs. distance data from the SR2 files, one of Well Log, Linear Path, Well Path, or Trajectory. If Trajectory is selected, the source of the trajectory for the well must be specified by clicking the Trajectory button. See To import a well trajectory file for information on how to import a well trajectory file. If a trajectory file is not available, the default option Well Path will be shown.
- TVD or MD: Specify either TVD (true vertical depth) or MD (measured depth) as the distance coordinate.
- Use Block Center: Indicate whether the spatial property is to be read at block center only. If Use Block Center is not selected, spatial properties are read at block entry and exit points.
- Use Accumulation Flow: Specify whether fluid volumes should be accumulated as you travel upwards from the deepest point in the well path.
- Use Normalized Flow: Specify whether fluid volumes are accumulated and then normalized with the total value as you travel upwards from the deepest point in the well path.
- Smoothing Method: Select a method for smoothing the property vs. distance data, one of None, Moving Average, Linear Aitken, Akima, or Cubic Spline. Refer to Smoothing Methods for further information about these smoothing algorithms.
To import a well trajectory file:
1. Click the Trajectory button. The Well Trajectory Files dialog box is displayed.
2. Click Insert to insert a row in the table.
3. Enter the trajectory file details as shown below, and then click OK.
- File Type: Select the trajectory file type from the drop-down list.
- File Path (Relative to Project Folder): Enter the full or relative path and name of the trajectory file, relative to the project folder.
- 2nd File Path (Relative to Project Folder): Some trajectory file formats (Production Analyst Format, for example) have two files, an XY file and a Deviated file. Enter the full or relative path and name of the second file in this cell.
- XY Unit: Select the units used in the trajectory file to specify the x and y coordinates.
- MD/Z Unit: Select the units used in the trajectory file to specify MD (measured depth) and the z coordinates.
- Auto Traj Cleanup: Select to eliminate surplus trajectory nodes while at the same time preserving deviation data.
An example of a populated Property vs. Distance Series page is shown below:
5.3.4 Fluid Contact Depth Series
If the SR2 files contain fluid saturation data, CMOST can calculate gas-oil, water-oil, and water-gas contact depths at defined well locations. These depths, which are calculated for each time step at which fluid saturation data is available, are used as time series data for history matching. To set up a fluid contact depth series:
1. Select the Fundamental Data | Fluid Contact Depth Series page. The Fluid Contact Property table contains, for user selection, the fluid contact types that are available for CMOST studies.
2. In the Calculate column, select the fluid contact property or properties that you want to calculate.
3. Configure the fluid contact property as follows:
- Calculate: If selected, CMOST will calculate the contact depths at the well location(s).
- Fluid Contact Property: The read-only name of the fluid contact property, used in preview and observer plots.
- Saturation Property: Define the saturation type, one of SG, SO, or SW. The saturation type will be used by the algorithm to determine if there is a phase transition along the length of the well.
- Method: Select the calculation method, one of Predefined Threshold, Maximum Derivative over n-Point Moving Average, or Maximum Difference Between Previous and Next n-Point Averages (where n is defined by N Smoothing Points). For a description of these calculations, refer to the Results Graph User Guide, in the section Using Results Graph | Working with Curves | Adding Curves | To create a fluid contact depth parameter.
- N Smoothing Points: This parameter is used in the calculation of Maximum Derivative over n-Point Moving Average and Maximum Difference Between Previous and Next n-Point Averages.
- Threshold: This parameter is used in the calculation of Predefined Threshold. It can be set to any value between 0 and 1, inclusive.
- First/Last: If this field is set to First, the first point that meets the criterion is used as the contact depth. If the field is set to Last, the last point that meets the criterion is used as the contact depth.
- Porosity Type: Define the porosity type, one of Matrix or Fracture.
- MD/TVD: Select true vertical depth (TVD) or measured depth (MD).
- Data Path: Defines the path through the grid, one of:
- Along Perforation: This option can be used if the data file contains well perforations. If LAYERXYZ well data is available, a well path is defined by joining perforation block entry and exit points. Alternatively, if LAYERXYZ is not available, a well path is defined by joining each perforation from block center to block center. Distances will be relative to the depth of the first perforation.
- Along Trajectory: This option can be used if a trajectory file is available. Click the button to open the Well Trajectory Files dialog box and import this file. Refer to To import a well trajectory file for further information.
4. Beside Preview Fluid Contact Data For, select the well for which you want to view the fluid contact depth series.
As shown above, the predicted fluid contact data is shown compared with field history data. If you click the Field Data tab on the left, you can adjust the HM Weight setting for each data point.
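The Predefined Threshold method combined with the First/Last option can be sketched as follows. This is a hedged illustration of the general idea; CMOST's actual algorithm may differ, for example in how it interpolates between grid blocks:

```javascript
// Hedged sketch of Predefined Threshold contact detection: scan
// saturations along the well path and return the first or last depth
// at which the saturation meets the threshold.
function contactDepth(depths, saturations, threshold, useFirst) {
  var found = null;
  for (var i = 0; i < depths.length; i++) {
    if (saturations[i] >= threshold) {
      if (useFirst) return depths[i]; // First: first point meeting the criterion
      found = depths[i];              // Last: keep the most recent match
    }
  }
  return found; // null if no point meets the criterion
}
```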
5.4 Parameterization
5.4.1 Parameters
Through the Parameters page, you enter parameters and specify their properties. Parameters are generally first entered into the master dataset, then imported into CMOST (refer to To import a parameter from the master dataset); however, in some cases, you may have reason to define the relationships between parameters using intermediate parameters that are not entered in the CMM file (refer to To add an intermediate parameter). The page provides buttons to insert, delete, and reorder rows in the table of study parameters, to import parameters from the CMM file, to create and edit study parameters in the CMM Editor, and to open the CMM file in Builder. It also shows the parameter candidate values and a graph of the selected parameter's prior probability.
NOTE: For further information about opening and editing the CMM file in Builder, refer to
chapter “Setting up CMOST Master Datasets” in the Builder User Guide.
5.4.1.1 Adding New Parameters
To import a parameter from the master dataset:
1. Enter the parameters and their default values in the master dataset using the CMM File Editor . See Names for more information about using names in CMOST. 2. Save the CMM file. 3. Import the parameters from the master dataset into the Parameters table by clicking the Import button. If the master dataset is large, it may take several seconds for CMOST to read and find all of the parameters. For further information refer to Importing Parameters from the Master Dataset.
In the Parameters table, the imported parameter names and default values will be displayed, consistent with those in the master dataset. You should only change these through the master dataset. 4. Edit the parameter fields in the Parameters table as outlined below:
- Name: The name of the parameter is imported from the master dataset. You should not need to change the parameter name.
- Comment: As necessary, record any pertinent information about settings, assumptions and rationale.
- Active: When you import a parameter from the master dataset, Active will be checked by default. The Active check box determines whether CMOST will vary the value of the parameter when substituting it into the master dataset. If Active is checked, CMOST will assign candidate values to the parameter. If Active is not checked, CMOST will assign the default value to the parameter for every experiment that is run.
- Default Value: The parameter default value is imported from the master dataset. The default value should produce the original value that is entered in the base dataset. This value is only used when Active is not checked. If you need to edit a default value, click the Edit button, edit the default value in the master dataset, and then import the new parameter default value. If a more complicated CMOST formula is used, the default value may not be equal to the original value in the dataset. For example, if the following CMOST formula was entered into the master dataset:
this[1] = LOG10(Keq)
then the default value for the parameter Keq will be 10, since this value produces the original value of 1 when the CMOST formula is evaluated.
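The default-value evaluation described above can be checked with a small sketch. Plain JavaScript is used here in place of the CMOST JScript formula:

```javascript
// Sketch of the example above: CMOST substitutes LOG10(Keq) into the
// dataset, so a parameter default of Keq = 10 reproduces the original
// dataset value of 1.
function substitutedValue(Keq) {
  return Math.log10(Keq); // corresponds to this[1] = LOG10(Keq)
}
```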
- Source: This column tells CMOST how to assign values to a parameter. Source can be set to one of:
Continuous Real
In this case, you will need to configure the settings in the Candidate Values area at the lower left of the page (refer to the guidelines in step 5). The Prior Distribution Function settings are used in Uncertainty Assessment studies to generate Monte Carlo statistics; a True/False field controls whether changes made to the Data Range Settings are synchronized with the Prior Distribution Function settings, and vice versa. As you make changes, these will be reflected in the Prior PDF graph that is displayed to the right of the settings area.
Discrete Real
In this case, you will need to enter a table of candidate discrete real values, and will be given the option of assigning a prior probability to each. See the note below.
Discrete Integer
You will need to enter a table of candidate integer values and a prior probability for each. See the note below.
Discrete Text
You will need to enter a table of text values, a unique numerical value for each text value, and optionally, a prior probability for each value. See the note below.
Formula
You will be able to enter a formula for the variable using the Formula Editor .
NOTE: The prior distribution is used to generate values for uncertainty assessments; i.e., the parameter values that are generated for the uncertainty analysis will follow this distribution. If you enter prior probability values, a Prior PDF graph will be displayed to the right.
When deciding what source type to use, consider the following guidelines: Continuous or Discrete Real
Use for parameters that can have decimal values such as those representing porosity or permeability. This is the default source type.
Discrete Integer
Use for parameters that cannot have decimal values such as those representing rock types or a block location.
Discrete Text
Use for parameters that have text values. The values should always be enclosed in double quotation marks (for example “OPEN”). The Import option will automatically assign the Discrete Text type to any parameter that has a default value enclosed in double quotation marks in the master dataset.
Formula
Formulas can be entered for a parameter, in which case a formula edit session will appear. Any of the other parameters can be used in the formula, as well as any CMOST function. Refer to Formula Editor for more details on entering formulas for parameters.
5. In the case of real, integer and text values, a Candidate Values table is displayed at the lower left of the Parameters page. Through the Candidate Values table, enter parameter values that you want CMOST to substitute into the master dataset. The candidate values that you choose will depend on the study type, as outlined below:
Sensitivity Analysis
For sensitivity analysis studies, to investigate main (linear) effects, only two values need to be entered for each parameter. If interaction effects and non-linear (quadratic) effects are also to be studied, at least three different values will need to be entered for each parameter. All values should be within the reasonable range for the property represented by the parameter, and all sample values should be different.
History Matching and Optimization
An unlimited number of discrete entries, or two values (lower and upper limits) for continuous entries, can be added to the Candidate Values table for history matching and optimization studies; however, it will take longer for the optimizer to converge on a solution as more candidate values are added.
Uncertainty Assessment
For an uncertainty assessment study, to capture interaction effects and non-linear (quadratic) effects, three different values are required for each parameter. The low value should represent a value near the lower limit for that parameter, the high value should be near the upper limit, and the middle value should be somewhere in the middle of the range.
NOTE: If the parameter type is Discrete Text, an equivalent numerical value will have to be added with the candidate value. If there is a number that fits with the text value, that number should be entered.
To add an intermediate parameter:
NOTE: Starting with CMOST 2013, a parameter has to be Active to be used as an intermediate parameter.
1. Click Insert to enter a new row in the Parameters table.
2. Enter the name of the intermediate parameter and set the Source to Formula.
3. Define the formula for the intermediate parameter.
4. Define the formula for the parameter or parameters that depend on the value of the intermediate parameter.
To illustrate, assume Parameter_A and Parameter_B are already defined in the master dataset. Let us also assume that Parameter_B is a function of an intermediate variable C, as follows:
Parameter_B = (Intermediate_Parameter_C)^2

Intermediate_Parameter_C is, in turn, a function of Parameter_A, as follows:

Intermediate_Parameter_C = 4 * log(Parameter_A)
The Parameters table for the above would appear as follows:
where Intermediate_Parameter_C is defined through the formula editor, and Parameter_B is defined in terms of it. The benefit of this feature is more evident when the relationships are more complex.
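The dependency between the example parameters can be sketched in JScript-style code (CMOST formulas are written in JScript, but the exact editor syntax may differ; the relationships C = 4·log(Parameter_A) and Parameter_B = C² are taken from the reconstructed example above, with a natural logarithm assumed):

```javascript
// Illustrative sketch only; not the CMOST formula editor's exact syntax.
function intermediateParameterC(parameterA) {
  return 4 * Math.log(parameterA); // natural logarithm assumed
}

function parameterB(parameterA) {
  var c = intermediateParameterC(parameterA);
  return c * c; // Parameter_B depends only on the intermediate parameter
}
```

Because Parameter_B references only the intermediate parameter, changing the relationship between C and Parameter_A in one place updates every dependent parameter.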
5.4.1.2 Prior Probability Distribution Functions
Prior probability distribution functions are only available for continuous real numbers. The distribution given in this section should represent the probability that the specified values can occur. Several distribution types are available in CMOST:

• Unspecified
• Deterministic
• Uniform
• Triangle
• Normal
• Log Normal
• Custom

All of the associated prior probability distribution function configuration items must be filled in for each parameter. More information on each of the prior probability distribution function types can be found in the Probability Distribution Functions section.

5.4.1.3
Deleting a Parameter
In the Parameters table, select any cell in the row of the parameter to be deleted, then click the Delete button. If you delete a parameter, the corresponding column will be deleted in the Experiments Table; however, you will need to manually delete the parameter in the CMM file (if the parameter is used in the file). Multiple parameters can be removed from the table by clicking the cell to the left of the parameter Name and then dragging the cursor up or down. SHIFT and CTRL selection is also supported. The Delete button or DELETE key can then be used to delete the rows.
76
Creating and Editing Input Data
CMOST User Guide
5.4.1.4
Moving Parameters in Table
To move a parameter row up or down in the Parameters table, select any cell in the row that you want to move, then click the Up or Down button. Parameters can be listed in any order.

NOTE: Click the table headers to order the parameters alphabetically in ascending or descending order.

5.4.1.5
Copying Parameter Data
You can copy data from one parameter to another, as follows:

1. In the Parameters table, right-click the number of the parameter whose data you want to copy.
2. Select Copy Parameter Data to Other Parameters to open the Copy Data to Parameters dialog box, shown in the following example:
3. Select the parameters to which you want to copy the data and then click OK.

The data that will be copied is summarized in the following table:

Copy Data From      Copy Data To        Copied Data
Continuous Real     Continuous Real     Data Range Settings, Discrete Sampling and Prior Distribution Settings
Discrete Real       Discrete Real       Real Value and Prior Probability
Discrete Integer    Discrete Integer    Integer Value and Prior Probability
Discrete Text       Discrete Text       Text Value, Numerical Value and Prior Probability
Formula             Formula             JScript Code
5.4.1.6
Editing a Master Dataset
The master dataset can be opened from the Parameters page to view or edit it. To open the master dataset in the CMM File Editor, click the Edit button.
5.4.1.7
Importing Parameters from the Master Dataset
CMOST can automatically copy all parameters that are present in the master dataset to the study file. To do this, click the Import button (it may take a few seconds for CMOST to read large master dataset files). If the this[OriginalValue] syntax is used in the master dataset, the default value will usually be copied in from the file. Refer to Adding New Parameters for further information. CMOST will assume that the default value is equal to the original value when importing parameters from the master dataset. The imported default values should be checked, since this may not always be the case. The parameter source type should also be checked for errors. When importing, CMOST will set any parameters whose original values are enclosed in quotation marks to Discrete Text. All other parameters will be set to Continuous Real, so parameter types may need to be changed manually after importing. The Active check box is automatically checked when you import a parameter; however, Prior Probability Distribution Functions will have to be entered.
5.4.2 Parameter Correlations

Using the Monte Carlo method, CMOST generates experiment sample sets for use in uncertainty assessments, for parameters which have been set, through the Parameters page, to Continuous Real and which have defined prior probability distributions. When proxy models are used to calculate objective functions (i.e., simulators are not used), the number of experiments can be very high. In CMOST, the number of UA experiments for Monte Carlo Simulation Using Proxy is set to 65,000 (not configurable). Through the Parameter Correlations page, CMOST can algorithmically adjust the rank correlation of the Monte Carlo-generated sets of parameters so they honour the desired rank correlation settings. This requires that all parameters entered in the table of rank correlations have prior probability distributions. For further information, refer to Parameter Correlation.
If you have not yet specified rank correlation values, the Parameter Correlation page will be similar to the following:

[Screenshot: the Parameter Correlation page, showing the table of desired rank correlations (none entered), the table of rank correlations of the Monte Carlo-generated samples, and plots showing no R² correlation and no rank correlation for the selected sample pair (POR, PERMV). Click Apply Changes if you have made changes to the desired rank correlation matrix.]
In the above example, the (POR, PERMV) cell in the Realized Rank Correlation table on the right has been selected. The realized rank correlations are very close to the desired rank correlations. Click the Apply Changes button whenever you make changes to the desired rank correlation; when you do, the table of realized rank correlations, and the associated plots, will be refreshed. In our example, the prior probability of POR is normally distributed, with a mean of 0.28 and a standard deviation of 0.03. The prior probability of PERMV is also normally distributed, with a mean of 2400 and a standard deviation of 387. These prior probabilities are consistent with the Actual Monte Carlo Samples plot. No desired rank correlation for (POR, PERMV) has been specified, and this is reflected in the Rank Monte Carlo Samples plot.
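The rank correlation that CMOST reports can be sketched as a Spearman coefficient: replace each sample value by its rank, then compute the Pearson correlation of the ranks. This is an illustrative calculation only; the manual does not document CMOST's internal adjustment algorithm, and tie handling is omitted for simplicity:

```javascript
// Replace each value by its 1-based rank within the sample (ties ignored).
function ranks(values) {
  const order = values.map((v, i) => [v, i]).sort((a, b) => a[0] - b[0]);
  const r = new Array(values.length);
  order.forEach((pair, rank) => { r[pair[1]] = rank + 1; });
  return r;
}

// Standard Pearson correlation coefficient.
function pearson(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Spearman rank correlation = Pearson correlation of the ranks.
function rankCorrelation(xs, ys) {
  return pearson(ranks(xs), ranks(ys));
}
```

Applied to the (POR, PERMV) sample pairs, a value near 0 corresponds to the uncorrelated plots shown; after Apply Changes reorders the samples, the value moves toward the desired setting (0.9 in the next example).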
If you now define a desired rank correlation for (POR, PERMV), the UA samples will be algorithmically ranked accordingly. This is illustrated in the following example:

[Screenshot: the desired (POR, PERMV) rank correlation has been changed from 0 to 0.9. Once Apply Changes is clicked, the sample sets are algorithmically changed to realize the desired rank correlation, and the plots of sample (POR, PERMV) pairs now show the R² and desired rank correlations.]
In the above example: The relative ranking of the samples has been algorithmically adjusted to be consistent with the desired rank correlation.
Sample sets are now ready for uncertainty assessment.
5.4.3 Hard Constraints

You can define hard and soft constraints for history matching and optimization studies. The purpose of hard constraints is to prevent unnecessary simulation runs when defined constraints are violated. Refer to Soft Constraints for information about soft constraints. Hard constraints may be appropriate if you have many simulations to run. If a hard constraint is violated, the simulation run will not take place, because hard constraints are checked by CMOST before starting each run.

For example, if a CMOST optimization study is set up to work with a SAGD case, it may be known beforehand that the production wells involved should not be less than a certain distance apart from each other:

W1_I - W2_I > 40

For this particular case, W1_I and W2_I are parameters that refer to the block address of wells 'W1' and 'W2' in the 'I' direction, respectively. The constraint formula shown above indicates that there should always be at least 40 grid blocks between W1 and W2 in the I direction. If the condition 'W1_I - W2_I <= 40' is encountered, the simulation should not be run. Both 'W1_I' and 'W2_I' were specified as parameters to be modified by CMOST in the master dataset. This example is illustrated in the following procedure:
To specify a hard constraint:
1. Open the Parameterization | Hard Constraints page.
2. Click the button. A hard constraint is entered in the Constraints table. You can edit the name and enter a comment, if necessary. Selecting Active instructs CMOST to check for the constraint violation before each simulation run is started.
3. When you create a hard constraint, a CMOST Formula Editor session is also opened. Enter the formula for the hard constraint where indicated. The definition of a constraint may require more than one line. Formulas can be entered using any of the functions and variables available by clicking the button, for example:
The list of variables includes previously defined parameters. An alternate method of entering the constraint formula is to manually type it in. For our example:
[Screenshot: the Formula Editor, showing the variables used in the formula for the hard constraint, and the formula itself.]
As entered, HardConstraint001: W1_I – W2_I > 40 must be met or the simulation will not be run.
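The pre-run check CMOST performs for this example can be sketched as follows. The parameter names W1_I and W2_I come from the example above; the function names and the object passed in are illustrative, not CMOST's API:

```javascript
// HardConstraint001 from the example: W1_I - W2_I > 40.
// params holds the candidate parameter values for one experiment.
function hardConstraintSatisfied(params) {
  return params.W1_I - params.W2_I > 40;
}

// Hard constraints are checked before each run; a violated constraint
// means the simulation for that experiment is skipped.
function shouldRunSimulation(params) {
  return hardConstraintSatisfied(params);
}
```

Note that a spacing of exactly 40 blocks violates the constraint, since the formula requires strictly more than 40.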
5.4.4 Pre-Simulation Commands

Occasionally, users may need to modify the datasets created by CMOST before they are submitted to a scheduler. For example, users may want to adjust variogram parameters in history matching. In this case, an external geological modeling package, such as GOCAD, is used to generate porosity and/or permeability arrays for each dataset created by CMOST. After that, CMOST will submit a simulation job using the modified dataset. The process is illustrated in the following diagram:
Pre-simulation dataset processing commands are used to modify datasets before they are submitted to a simulator:

1. Commands will be executed sequentially once the original dataset is created by CMOST.
2. CMOST will wait for each command to exit before starting the next command.
3. CMOST will submit a simulation job using the final dataset, once all commands are executed.
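The sequential behaviour above can be sketched as follows. The command objects, the simulated run() return value (elapsed minutes), and submitJob are all illustrative, not CMOST's actual API; the Maximum Execution Time setting and skipping of inactive commands are described later in this section:

```javascript
// Run each active command in order; stop with an error if a command
// exceeds its maximum execution time; submit the job only after all
// commands have finished.
function runPreSimulationCommands(commands, submitJob) {
  for (const cmd of commands) {
    if (!cmd.active) continue;            // inactive commands are skipped
    const elapsedMinutes = cmd.run();     // run() simulates the command
    if (elapsedMinutes > cmd.maxExecutionTime) {
      throw new Error("Command '" + cmd.name + "' exceeded its maximum execution time");
    }
  }
  return submitJob();                     // all commands done: submit the final dataset
}
```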
Some important information regarding pre-simulation commands:

• If a relative path is used to specify the path to the executable, it should be based on the directory of the project file.
• The working directory will be set to the study directory when CMOST runs the executable.
• Command execution sequences are determined by their order in the Pre-Simulation Dataset Processing Commands (Commands) table.

5.4.4.1 Adding a New Pre-Simulation Dataset Processing Command
Click Insert to insert a new pre-simulation dataset processing command in the Commands table. Three types of commands can be added: Run CMG Builder Silently, Run GOCAD Silently and Run User Defined Command.

Name

The command name is used to identify each command. Command names should be unique if multiple commands are used. Command names are case sensitive.

Type
There are three types of pre-simulation commands:

• Run CMG Builder Silently: Run Builder to perform formula calculations specified in the dataset, or update relative permeability tables.
• Run User Defined Command: Execute a user-defined application to generate a job name tagged dataset by modifying the source dataset file. The source dataset can be generated by CMOST or by the previous dataset processing command.
• Run GOCAD Silently: Trigger the Paradigm GOCAD program to carry out calculations using the GOCAD script and workflow (optional) file. GOCAD output files are exported and used as include files in datasets.
Active
If the Active check box is selected, CMOST will execute the pre-simulation dataset processing command; otherwise, the command will not be executed.

Maximum Execution Time

CMOST will wait for a command to exit before running the next command (or submitting a job after the last command is executed). A maximum execution (waiting) time, in minutes, can be set for each command. If a command has not finished executing within its maximum execution time, the CMOST engine will stop and an error message will be displayed in the Engine Events table on the Control Centre page.
5.4.4.2
Moving Commands in Table

To move a command up or down in the Pre-Simulation Dataset Processing Commands table, select any cell in the row of the command that needs to be moved, and then click the Up or Down buttons. Commands may be listed in any order, as required.

NOTE: Command execution sequences are determined by their order in the Pre-Simulation Dataset Processing Commands table.

5.4.4.3
Deleting a Command
To delete a command, the entire command row must be selected. To do this, click the grey cell to the left of the command name. The entire row will become highlighted and the Delete button will be enabled. Click the Delete button to delete the row. The DELETE key can also be used. Multiple commands can be removed by clicking on the grey cell to the left of a command’s name and dragging the cursor up or down. SHIFT and CTRL functionality is also available. The Delete button or DELETE key can then be used to delete the rows. 5.4.4.4
Run Builder Silently Command Configuration
Builder can perform formula calculations specified in the dataset, or update relative permeability tables. Once it is added, no further configuration needs to be done for the Run Builder Silently command.
5.4.4.5
Run User Defined Command Configuration
Users can write their own program (executable) to modify the dataset file before it is sent to a simulator. CMOST provides user-defined commands with input information, such as Experiment Name Tagged Parameter File (.par), Experiment Name Tagged Temporary Dataset File (.tmp), or Experiment Name Tagged Dataset File (.dat). The user’s program can use these files as needed to modify the original dataset ( Experiment Name Tagged Dataset File or Experiment Name Tagged Temporary Dataset File) to generate a new version of the experiment name tagged dataset file to be submitted to a simulator. The Experiment Name Tagged Dataset File must be used as the output file for a Run User Defined command.
5.4.4.5.1 Executable Path
Set the path of the executable (.exe) file. Type in the path or click the Browse button to locate the executable. 5.4.4.5.2
Command Line Switches for the Executable
Enter any command line switches for the executable if needed. 5.4.4.5.3
Command Line Argument Switches
Select an Argument Switch cell to input argument switches required by an argument file. 5.4.4.5.4
Command Line Argument File Type
Experiment Name Tagged Dataset File (.dat)
The CMOST dataset file can be used as the input or output file for a generic command. When it is used as the input file, it refers to the CMOST-generated original dataset file or the modified dataset as a result of the previous command. NOTE: A job name tagged dataset file must be used as the output file for a Run User Defined command. Experiment Name Tagged Temporary Dataset File (.tmp)
The temporary dataset file has the same content as JobNameTaggedDatFile, but with a different extension. For example, file MyWork_00008.tmp has the same content as MyWork_00008.dat .
Experiment Name Tagged Parameter File (.par)
This file contains parameter values for a specific job, for example:

File name: MyWork_00008.par

Porosity        0.09
Kv_kh_ratio     0.25
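A user-defined command that needs these values might read the .par file as follows. The whitespace-separated "name value" layout is inferred from the example above; the function name is illustrative:

```javascript
// Parse an experiment-name-tagged .par file (e.g. MyWork_00008.par)
// into a map of parameter name -> numeric value.
function parseParFile(text) {
  const params = {};
  for (const line of text.split(/\r?\n/)) {
    const parts = line.trim().split(/\s+/);
    if (parts.length >= 2) params[parts[0]] = parseFloat(parts[1]);
  }
  return params;
}
```

The user's program can then use these values when rewriting the experiment-name-tagged dataset file.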
5.4.4.6 Run GOCAD Silently Command Configuration
Paradigm GOCAD software can export SGrid object data to CMG include files (.inc) using GOCAD scripts. Both geometric and property information can be exported.
CMOST uses the Run GOCAD Silently command to link with GOCAD. CMOST will trigger GOCAD to run a specific script and (optional) workflow file to generate CMG include files (.inc); CMOST will then use the generated include files in the simulation dataset file.

Procedure for setting up the link:

1. Prepare the GOCAD files for the Run GOCAD Silently command, including the GOCAD project (.gprj) file, GOCAD Master Script (.script) file and (optional) GOCAD workflow (.xml) file. For more information on preparing the GOCAD Master Script file, please refer to Preparing GOCAD Master Script File for Run GOCAD Silently Command.
2. Edit the CMOST master dataset (.cmm) file. Insert the GOCAD-generated include files at the desired place in the dataset file, for example:

   outputPoro       **Porosity
   outputKi         **Permeability I
   PERMJ EQUALSI    **Permeability J
3. Set up the Run GOCAD Silently command.
   3.1 Add a Run GOCAD Silently command.
   3.2 Set the GOCAD .gprj, .script and .xml files as shown above.
   3.3 Extract the CMOST parameters within the .script and .xml files by clicking the Extract button. New CMOST parameters will be added to the parameter list in the Parameters page.

5.4.4.6.1 Preparing GOCAD Master Script File for Run GOCAD Silently Command
1. There are different property export commands supported in GOCAD; however, for the CMOST-GOCAD link, only the "write_sgrid_as_CMG_ascii_file" export command line can be used in the master script file to export reservoir geometry or properties to CMG include file (.inc) format.

2. At least one "write_sgrid_as_CMG_ascii_file" export command line must be used in the GOCAD Master Script file. In addition, CMOST keywords must be used as the output file name in at least one export command line, for example:

   Gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name "outputPoro" origin 0 switchIJ 0 vertical_scaling 1 horizontal_scaling 1 save_geometry 0 use_deadcell 0 properties "POR+Porosity1" lgr_scenario "";
3. Only one property can be output in each export command line. If more than one property needs to be exported, multiple export command lines can be used, for example:

   Gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name "outputPoro" origin 0 switchIJ 0 vertical_scaling 1 horizontal_scaling 1 save_geometry 0 use_deadcell 0 properties "POR+Porosity1" lgr_scenario "";
   gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name "outputKi" origin 0 switchIJ 0 vertical_scaling 1 horizontal_scaling 1 save_geometry 0 use_deadcell 0 properties "PERMI+PermH1" lgr_scenario "";
4. CMOST keywords can be used in other parts of the GOCAD Master Script file as required.

5. A workflow file is optional. Use the following command to load the master GOCAD workflow file if needed:

   gocad load_xml_parameters name "Property_study_loaded" file "Example_GOCAD_Master_XML_File.xml"
6. Use the following command at the end of the Master Script file to quit GOCAD:

   gocad quit really true
5.4.4.6.2 Extract Parameters from GOCAD Master Files
After you click the Extract button, CMOST will copy all parameters that are present in the GOCAD script and XML files to the task file. If the this[OriginalValue] syntax is used, the default value will also be copied from the file.
The following is an example of extracting a CMOST parameter from a GOCAD export command line: Gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name "This[“poro.inc”]=outputPoro" origin 0 switchIJ 0 vertical_scaling 1 horizontal_scaling 1 save_geometry 0 use_deadcell 0 properties "POR+Porosity1" lgr_scenario "";
After extraction, the source of the parameter outputPoro is automatically set to FORMULA, the default value is "poro.inc", and the formula value is set to JobName + "outputPoro.inc". We recommend that this formula value not be changed.
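The extraction step can be sketched as a text scan for the This["default"]=Name pattern shown in the example (the manual's example uses curly quotes around the default value, which the sketch accepts; the function name and returned shape are illustrative, not CMOST's implementation):

```javascript
// Scan GOCAD script/XML text for CMOST parameter declarations of the form
// This["default"]=Name (straight or curly quotes) and collect them.
function extractCmostParameters(text) {
  const re = /This\[["\u201C]([^"\u201D\]]*)["\u201D]\]=(\w+)/g;
  const found = [];
  let m;
  while ((m = re.exec(text)) !== null) {
    found.push({ name: m[2], defaultValue: m[1] });
  }
  return found;
}
```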
5.5 Objective Functions

Through the Objective Functions node, you define the expressions or quantities that you want to minimize or maximize. In the case of history matching, for example, you will likely want to minimize the error between field data and simulation results. In the case of optimization, you may want to maximize net present value. For further information, refer to Objective Functions. The requirement to configure subnode pages in the Objective Functions node depends on the purpose of the study. Regardless, at least one of the following must be configured:

• Basic Simulation Results
• History Match Quality
• Net Present Values
• Advanced Objective Functions

For example, if you want to perform a sensitivity analysis on NPV, the Basic Simulation Results page need not be filled in; however, you will need to configure the Net Present Values page. Likewise, if you want to examine the effect of input parameters on Cumulative Oil, you would need to configure the Basic Simulation Results page. Finally, you may want to define and analyze an advanced objective function if you want to calculate an objective function using Excel, user-defined code, or a user-defined executable.
5.5.1 Characteristic Date Times

Through the Characteristic Date Times page, shown below, you can specify dates on which you want to calculate the values of objective functions and terms. Defined dynamic date times can be used as objective functions:
[Screenshot: the Characteristic Date Times page, showing date times automatically populated from the base SR2 files, fixed date times manually entered by the user, and dynamic date times entered by the user, based on an original or a user-defined time series.]
As shown above, four types of dates can be defined through the Characteristic Date Times page:

Built-in fixed date times: Date times that are automatically populated from the base SR2 files; BaseCaseStart and BaseCaseStop in the above example.

Fixed date times: Specific dates named and entered by the user, by clicking Insert to the right of the table and then entering the Name and Date Time Value, as shown in the following example:
You can enter the Date Time Value in one of two ways:

1. Click the current Date Time Value then, using the TAB key to move between the entries, type in the year, then the month, and so on.
2. Use the calendar drop-down to select the date, as shown below:
Dynamic date times from original time series: Dates that are based on the value of the data in an original time series; for example, the date on which the cumulative oil produced by a certain well exceeds a certain value. In the following example, dynamic date Date_1 is the first date after BaseCaseStart on which the cumulative oil produced by well PRO-1 exceeds 1000 m3.

Dynamic date times from user-defined time series: Dates that are based on the value of the data in a user-defined time series. In the following example, DynamicDateTime001 is defined as the date when CumGOR, a user-defined time series, was greater than 5 for the last time.
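The first kind of dynamic date (e.g. Date_1 above, the first date after BaseCaseStart on which PRO-1's cumulative oil exceeds 1000 m3) amounts to a simple scan over the time series. The data layout and function name below are illustrative:

```javascript
// series: [{date: Date, value: number}, ...] in ascending time order.
// Returns the first date strictly after startDate whose value exceeds
// the threshold, or null if the condition is never met.
function firstDateExceeding(series, startDate, threshold) {
  for (const point of series) {
    if (point.date > startDate && point.value > threshold) return point.date;
  }
  return null;
}
```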
5.5.2 Basic Simulation Results

Refer to the general discussion at the beginning of Objective Functions for a discussion of the requirement to configure the subnode pages in the Objective Functions node. Through the Basic Simulation Results page, you can define objective functions that read directly from original time series or user-defined time series. These results can then be used as objective functions or to define global objective function candidates and soft constraints. You can define the following types of objective functions:
Basic Simulation Result from Original Time Series: Through this table, enter results that are derived directly from an original time series in the SR2 files. In the following example, we have defined basic simulation results ProducerCumOil, ProducerCumWater and InjectorCumWater on Characteristic Time YearEnd2011 for the origin and properties noted:
Basic Simulation Result from User-Defined Time Series: Through this table, you can define results that are derived from a user-defined time series. In the following example, we have defined CumGORSC_YearEnd2011 as the value of the user-defined time series CumGORSC on characteristic date YearEnd2011:
Characteristic Time Durations: The calculated duration between two characteristic times. In the following example, Duration001 is the duration, in days, between BaseCaseStart and YearEnd2011:
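A characteristic time duration such as Duration001 is simply the elapsed number of days between two characteristic date times. A minimal sketch (function name illustrative):

```javascript
// Number of days between two characteristic date times,
// e.g. BaseCaseStart and YearEnd2011.
function durationInDays(start, end) {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  return (end.getTime() - start.getTime()) / MS_PER_DAY;
}
```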
5.5.3 History Match Quality

Refer to the beginning of Objective Functions for a discussion of the requirements for configuring the subnode pages in the Objective Functions node. History match quality indicates the fit of simulation results with historical data, such as field history files. For further information, refer to History Match Error. The procedure for defining a history match quality function is as follows:

1. Open the Objective Functions | History Match Quality page:
[Screenshot: the History Match Quality page. In the top area, define the global history match error; in the table, define and specify the local history match errors used to calculate the global history match error; in the tabs below, define and specify the terms used to calculate the selected local history match error.]
2. In the Global HM Error Definitions area, define the global HM error:
Global HM Error Name: Enter any acceptable name that provides a clear description of the global objective function.

Unit Label: Units displayed with the global HM error. This setting, which defaults to "%", is optional and does not affect the calculated value of the global HM error.

Calculation Method: Select one of Weighted Average or Get Maximum. If set to Weighted Average, CMOST will average all of the local HM errors used to calculate the global HM error, using the Weight for each local HM error:

Global HM Error = Σ_i (w_i × LHME_i) / Σ_i w_i

where LHME_i is the value of local HM error i and w_i is its weight. If Get Maximum is selected, the global HM error will be equal to the largest of the local HM errors.

3. In the Local History Match Error Definitions table, define the local history match errors that will be used to calculate the global history match error, as follows:

Name: Names of local HM errors must be unique.

Unit Label: The units that should be displayed with the local HM error function. This setting, which defaults to "%", is optional and does not affect the calculated value of the global HM error.
92
Creating and Editing Input Data
CMOST User Gui de
Active: The Active check box determines whether or not CMOST will use the local HM error function to calculate the global HM error. If Active is checked, CMOST will use the error; otherwise, the error will be calculated but its result will not be used to calculate the global HM error. All inactive local HM errors act as observers.

Weight: Weight gives the local HM error more or less emphasis in the global HM error if Weighted Average is selected for the Calculation Method. The higher the Weight relative to the weight of other local HM errors, the more emphasis the local HM error will have on the global HM error.

HM Error Calculation Method: Select the HM Error Calculation Method, one of Weighted Average or Get Maximum. If set to Weighted Average, CMOST will average all of the terms used to calculate the local HM error, using the Term Weight for each term:

Local HM Error = Σ_i (w_i × t_i) / Σ_i w_i

where t_i is the value of term i and w_i is its weight. If Get Maximum is selected, the local HM error will be equal to the largest of the term errors.

4. In the Local History Match Error Definitions table, select the first local history match error. As appropriate, select the Original Time Series Terms used in the calculation of the local history match error, as shown in the following example:
Insert a row for each original time series term and enter the fields, as follows: Origin Type: From the drop-down list, select the type of data that will be retrieved for the local HM error term, one of WELLS , GROUPS , SPECIALS , SECTORS , LAYERS , or LEASES . All Origin Types come from the simulation results files.
Origin Name: The choices in the drop-down list are based on your selection of Origin Type. If there are no items corresponding to the Origin Type, the Origin Name list will be empty; if this is the case, that Origin Type cannot be used. For example, if Origin Type is WELLS, Origin Name will contain a list of all the wells that are present in the dataset.

Property: The choices in the drop-down list are based on your selection of Origin Name. For example, if Origin Type is WELLS, the Property cell would have a list of well properties such as Cumulative Oil SC or Gas Rate SC.
Start Time: Select the characteristic date from which the data analysis should start. This date must be between the simulation start date and End Time. The default date that is entered is the simulation start date.

End Time: Select the characteristic date on which CMOST will stop analyzing data. This date must be between Start Time and the simulation stop time. The default date that is entered is the simulation stop time.
Reset Cumulative: In some history matching problems, users may find that parts of the field data records are unreliable, because measurements have either not been made or have been estimated. The result is that reliable parts of the record are intermingled with unreliable records. For example, in the following figure, the black portions of the cumulative water curve are considered reliable and the red portion is not; in other words, we do not know what really happened to the water produced over the red period.

[Figure: Cumulative Water SC (m3) versus Time (days), from 0 to 500 days. Reliable (black) segments run from BaseCaseStart to FixedDateTime001 and from FixedDateTime002 to BaseCaseStop; the unreliable (red) segment lies between FixedDateTime001 and FixedDateTime002.]
To match the cumulative water curve in the above plot, we need to define history matching error terms for the two time periods of the black portion, and ignore the time period for the red portion, which is deemed unreliable. To use the valid cumulative data correctly we also need to be able to calculate and use the individual cumulative for each time period, which means ‘starting’ each time period at zero cumulative and working with the delta of cumulative for each point within that segment. This correction is needed even if the cumulative quantities (e.g. cumulative oil/water/gas) are derived from rate quantities (e.g. oil/water/gas rate). To enable this correction, users can select Reset Cumulative for history matching error terms that contain erroneous historical data.
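Restarting a segment at zero cumulative, as described above, amounts to subtracting the segment's starting value from every point in the segment. A minimal sketch (the function name and data layout are illustrative, not CMOST's implementation):

```javascript
// Reset Cumulative correction: work with the delta of cumulative within a
// reliable segment, so the segment effectively starts at zero.
// segmentValues: cumulative values at each point of the segment, in time order.
function resetCumulative(segmentValues) {
  const base = segmentValues[0]; // cumulative at the segment's start date
  return segmentValues.map(v => v - base);
}
```

With this correction, a segment that begins after an unreliable period (at a nonzero cumulative) can still be matched against the simulated delta over the same period.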
94
Creating and Editing Input Data
CMOST User Gui de
For the above cumulative water curve, history matching error terms Water1 and Water3 can be defined for the two reliable time periods. From BaseCaseStart to FixedDateTime001, history matching error term Water1 is defined. It is optional to set Reset Cumulative for this term, because the first four months of cumulative water data are deemed reliable. For the next five months (from FixedDateTime001 to FixedDateTime002), which is the red portion of the above cumulative water curve, no history matching error term is needed, because the data in this time period is unreliable and should be ignored.
Absolute Measurement Error: Used to indicate the accuracy of the production data. The value is considered to be half of the absolute error range, which means that if the simulated result is between (historical value - ME) and (historical value + ME), the match is considered to be satisfactory (or perfect, because it is within the range of measurement accuracy). More information about how Measurement Error is used to calculate the history match error can be found in the History Match Error section.
Term Weight: The Term Weight gives the local HM error terms more or less emphasis. The higher the Term Weight relative to the other terms' weights, the more emphasis the term will have on the local HM error. Normally, a higher term weight should be given to wells that are important (good production, long history, near-future development wells) and difficult to match.
Normalization: For information about this parameter, refer to History Match Error.
5. As appropriate, select the User Defined Time Series Terms used in the calculation of the local HM error, as shown in the following example:
Insert a row for each user-defined time series term in the table and populate the cells, as follows:
User-Defined Time Series: Select the series, which you have already entered through Fundamental Data | User-Defined Time Series.
Start Time: As described above for Original Time Series Terms.
End Time: As described above for Original Time Series Terms.
Absolute Measurement Error: As described above for Original Time Series Terms.
Term Weight: As described above for Original Time Series Terms.
Normalization: As described above for Original Time Series Terms.
6. Select the Property vs. Distance Terms tab and, as appropriate, select and specify the property vs. distance terms used in the calculation of the local HM error, as shown in the following example:
Insert a row for each property vs. distance term in the table and populate the cells, as follows:
Property vs. Distance Name: Select the series, which must already have been entered through Fundamental Data | Property vs. Distance Series.
Absolute Measurement Error: As described above for Original Time Series Terms.
Term Weight: As described above for Original Time Series Terms.
Normalization: As described above for Original Time Series Terms.
7. Repeat steps 4-6 for the other local HM errors.
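The roles of Absolute Measurement Error and Term Weight described above can be illustrated with a small sketch. This is not CMOST's actual formula (refer to the History Match Error section for that); it only shows the general idea that deviations inside the measurement-error band count as a perfect match, and that weights shift emphasis between terms.

```python
def term_error(simulated, historical, measurement_error):
    """Deviation beyond the measurement-error band; a simulated value inside
    (historical - ME, historical + ME) contributes zero error.
    Illustrative only -- see the History Match Error section for the real formula."""
    deviation = abs(simulated - historical)
    return max(0.0, deviation - measurement_error)

def local_hm_error(terms):
    """Weighted average of term errors; a higher Term Weight = more emphasis."""
    total_weight = sum(w for _, w in terms)
    return sum(e * w for e, w in terms) / total_weight

# Two terms: the heavily weighted well dominates the local error.
e1 = term_error(simulated=105.0, historical=100.0, measurement_error=2.0)  # 3.0
e2 = term_error(simulated=101.0, historical=100.0, measurement_error=2.0)  # 0.0 (within band)
print(local_hm_error([(e1, 3.0), (e2, 1.0)]))  # 2.25
```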
5.5.4 Net Present Values
Refer to the beginning of Objective Functions for a discussion of the requirements for configuring the subnode pages in the Objective Functions node. Through the Net Present Values page, you can define a Field NPV global objective function, as shown below:
In this area, define the Field NPV.
In this table, define and specify the Local NPVs that you will use to calculate the Field NPV.
In these tabs, define and specify the terms used to calculate the selected Local NPV.
To illustrate the configuration of the Net Present Values page, consider the following example, in which, between BaseCaseStart and BaseCaseStop (with a Prediction Start in between), continuous monthly income from NewWELL1 Oil Rate SC, continuous monthly cash outlay for NewWELL1 Water Rate SC, and discrete cash outlays newW1_L3_UBAK and newW1_nLayers are combined into NPV_newW1:
1. Cash flow terms are discounted back to BaseCaseStart at a monthly rate calculated from yearly discount rates (all equal to 0.1 in this example).
2. Present values for cash flows are summed to determine NPV_newW1.
3. Field NPV is determined from the sum of local NPVs: NPV_newW1 + NPV_newW2 + NPV_oldWells.
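The three numbered steps above can be sketched as follows. How CMOST derives the monthly rate from the yearly discount rate is not spelled out here, so this sketch assumes the compounding relationship (1 + r_year) = (1 + r_month)^12; the cash flow amounts are invented for illustration.

```python
# Sketch of the three NPV steps above (illustrative, not CMOST's internal code).

def monthly_rate(yearly_rate):
    # Assumption: monthly rate derived from the yearly rate by compounding.
    return (1.0 + yearly_rate) ** (1.0 / 12.0) - 1.0

def present_value(cash_flow, months_from_present, yearly_rate):
    """Step 1: discount one cash flow term back to the NPV present date."""
    return cash_flow / (1.0 + monthly_rate(yearly_rate)) ** months_from_present

def local_npv(cash_flows, yearly_rate):
    """Step 2: sum the present values of one well's cash flow terms."""
    return sum(present_value(cf, m, yearly_rate) for m, cf in cash_flows)

# (month, cash flow) pairs; negative = expense, positive = revenue.
npv_newW1 = local_npv([(0, -6.0), (1, 1.2), (2, 1.1)], yearly_rate=0.1)
npv_newW2 = local_npv([(0, -6.0), (1, 0.9), (2, 1.0)], yearly_rate=0.1)

# Step 3: Field NPV is the sum of the local NPVs.
field_npv = npv_newW1 + npv_newW2
```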
The following steps illustrate the procedure for implementing the above global NPV objective function:
1. In the Field NPV Definition area, shown in the example below, define the Field NPV:
Field NPV Name: Enter a unique, descriptive name for the Field NPV.
Unit Label: Enter the units you want to display with the Field NPV; for
example, you may want to use M$ to indicate that net present values are in millions of dollars. This setting is optional and does not affect the calculated value of the Field NPV.
Calculation Method: This field is set to Sum of Active Net Present Values and cannot be changed. The Field NPV is the arithmetical sum of the active Local NPVs.
2. In the Local NPV Definitions table, enter and configure the Local NPVs that you will use to calculate the Field NPV. For our example:
3. Select the first Local NPV by clicking the grey cell to the left of the row and fill in the fields, as follows: Name: Names of Local NPVs must be unique within the study.
Unit Label: The units that should be displayed with the Local NPV. This
setting does not affect the calculated value of the Field NPV.
Active: The Active check box determines whether or not CMOST will use the Local NPV to calculate the Field NPV. If Active is checked, CMOST will use the Local NPV; otherwise, the Local NPV will be calculated but its result will not be used to calculate the Field NPV. All inactive Local NPVs act as observers.
NPV Present Date: Select the characteristic date time to which the future cash flow terms are to be discounted to reflect, for example, the time value of money.
Property Filter: This works as a filter to help users set the cash flow terms belonging to this Local NPV; for example, if Daily Rate is selected, only daily rate properties will be listed in the drop-down box in the property column of the cash flow terms.
Calculation Method: This field is set to Sum of Net Present Value Terms and cannot be changed. The Local NPV is the arithmetical sum of
the net present values of the continuous and discrete cash flow terms. 4. Select the Continuous Cash Flow Terms tab and enter all of the continuous cash flow terms needed to calculate the Local NPV, as shown below for NPV_newW1 in our example:
Origin Type: From the drop-down list, select the type of data that will be retrieved for the continuous cash flow term, one of WELLS , GROUPS , SPECIALS , SECTORS , LAYERS , or LEASES . All Origin Types come
from the simulation results files. Origin Name: The choices in the drop-down list are based on your selection of Origin Type. If there are no items corresponding to the Origin Type, the Origin Name list will be empty. If this is the case, that Origin Type cannot be used. For example, if Origin Type is WELLS , Origin Name will contain a list of all the wells that are present in the dataset. Property: The choices in the drop-down list are based on your selection of Origin Name. For example, if Origin Type is WELLS , the Property cell would have a list of well properties such as Oil Rate SC – Monthly or Water Rate SC - Monthly. Start Time: Select the characteristic date time of the first cash flow term.
End Time: Select the characteristic date time of the last cash flow term.
Yearly Discount Rate: Enter the discount rate in decimal form; i.e., 0.1,
not 10%.
Unit Value: Enter the value of a unit of the property. By convention, revenues are positive, expenses are negative. In the above example, Oil Rate SC – Monthly is in units of barrels, which have a unit value of $60. Conversion Factor: Enter the factor needed to convert the cash flow
terms into common units. In our example, we are converting cash flows and discount values into millions of dollars. The unit value for Oil Rate SC – Monthly is $60 per barrel, so we need to multiply it by 0.000001 to convert it into millions of dollars. 5. Select the Discrete Cash Flow Terms tab and enter all discrete cash flow items needed to calculate the selected Local NPV into the table, as shown in the following example:
Parameter: Enter a unique, descriptive name for the discrete cash flow
item.
Cash Flow Time: Select the characteristic date time on which the
discrete cash flow will take place. Yearly Discount Rate: Enter the discount rate in decimal form; i.e., 0.1, not 10%. Unit Value: Enter the value of the discrete cash flow. By convention, revenues are positive, expenses are negative. In the above example, newW1_L3_UBAK has a unit value of -6000 (expense of 6000). Conversion Factor: Enter the factor needed to convert the discrete cash
flow term into common units. In our example, we are converting cash flows and discount values into millions of dollars. The unit values for our examples are in dollars, so we need to multiply them by 0.000001 to convert them into millions of dollars. 6. Repeat steps 3-5 for the remaining Local NPVs. Once done, you will have fully defined the Field NPV objective function.
5.5.5 Advanced Objective Functions
Through the Advanced Objective Functions node, you can define advanced objective functions using:
Excel spreadsheet calculation: You can configure CMOST to write parameter values and simulation results to a specific worksheet and cell in an Excel spreadsheet, then read calculated objective function results from that spreadsheet.
User-defined source code: Use Jscript code to calculate the objective function. Refer to Using Jscript Expressions in CMOST.
User-defined executable calculation: You can use a third-party application, such as MATLAB, to calculate and return the value of an objective function.
NOTE: For all three types of advanced objective functions, a Test button is provided which will use the base dataset and parameter default values to calculate the advanced objective functions.
5.5.5.1 To enter an Excel spreadsheet calculation
Before starting this procedure, create the Excel spreadsheet, so you know which cells CMOST will write to and read from.
1. In the Advanced Objective Functions node, click Insert and then select Use Excel Spreadsheet Calculation. A new advanced objective function is added to the Advanced Objective Functions table using a unique default name. Tabs for defining the interface to the Excel spreadsheet are presented below the table, as shown in the following example:
2. In the Advanced Objective Functions table, enter the following:
- Name: Type in a name that describes the advanced objective function. The name will be displayed in results plots, and will be used in global objective function calculations.
- Unit Label: Enter the units of the advanced objective function. These units will be displayed in results plots.
- Advanced Objective Function Type: This is read-only.
- Max. Execution Time (min): Enter the maximum execution time in minutes that you want CMOST to use. If the operation has not finished executing within the maximum execution time, the CMOST engine will stop and an error message will be displayed in the Engine Events table on the Control Centre page.
3. In the Objective Function From Excel tab, CMOST will read the value of the objective function from the Excel spreadsheet as defined in the following example:
In the example, we have browsed to the Excel spreadsheet, and specified the worksheet, column and row number of the cell from which CMOST will read the objective function. The calculation will be performed for each experiment, and the value of the advanced objective function will be displayed in the Experiments Table. 4. In the Write Parameter Values to Excel tab, specify the parameters that you want to submit to the Excel spreadsheet, and to which worksheet, column, and row, as shown in the following example:
In the above example, CMOST will write the value of INTOI used in the experiment to worksheet Sheet1 of the Excel spreadsheet, to cell $A$1. 5. In the Write Simulation Results to Excel tab, specify the simulator results that you want to submit to the Excel spreadsheet, and to which cell (worksheet, column, and row), as shown in the following example:
In the above example, CMOST will write the value of Cumulative Oil SC at the date time shown to the defined cell.
6. Click the Test button to test the Excel calculation. A Test.xlsx spreadsheet will be created. You can open this spreadsheet to view and confirm the correctness of the Excel calculation before starting the CMOST engine.
5.5.5.2 To enter a user-defined source-code calculation
1. In the Advanced Objective Functions node, click Insert and then select Use User Defined Source Code Calculation. A new advanced objective function is added to the Advanced Objective Functions table using a unique default name. In the Advanced Objective Functions table, enter the following:
- Name: Type in a name that describes the advanced objective function. The name will be displayed in results plots, and will be used in global objective function calculations.
- Unit Label: Enter the units of the advanced objective function. These units will be displayed in results plots.
- Advanced Objective Function Type: This is read-only.
- Max. Execution Time (min): Enter the maximum execution time in minutes that you want CMOST to use. If the operation has not finished executing within the maximum execution time, the CMOST engine will stop and an error message will be displayed in the Engine Events table on the Control Centre page.
A table is provided for entering any fixed-time variables that you may need for your Jscript calculation, and a screen through which you will enter the Jscript code. Refer to Using Jscript Expressions in CMOST for more information:
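A user-defined source-code objective function of this kind is just a short calculation over fixed-time variables. CMOST itself expects Jscript here; purely to illustrate the shape of such a calculation, the following is an equivalent Python sketch with hypothetical variable names (CumulativeOilSC and CumulativeWaterSC are assumed fixed-time variables, and the weighting is invented).

```python
# Hypothetical sketch, in Python, of the kind of calculation a user-defined
# source-code objective function performs (CMOST itself expects Jscript).

def objective_function(CumulativeOilSC, CumulativeWaterSC):
    # Penalize water production: every unit of water offsets half a unit of oil.
    return CumulativeOilSC - 0.5 * CumulativeWaterSC

print(objective_function(CumulativeOilSC=1.0e6, CumulativeWaterSC=4.0e5))  # 800000.0
```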
2. As necessary, insert and configure the fixed-time variables you will need in your Jscript calculation. 3. Where noted in the Source Code area, enter the Jscript code needed to perform the calculation. 4. Click the Test button to test the Jscript code before starting the CMOST engine. The calculation is working correctly if a value is returned in the Test Calculation Results dialog box. 5.5.5.3
To enter a user-d efin ed executabl e calcu latio n
You can use a third-party application to read information from the simulation results files, calculate the value of an objective function, and write this value to a file. CMOST can be configured to read the value from the file then store it in the Experiments Table. To illustrate, consider the following example, in which CMOST uses an executable, GetMatBalanceError.exe, to calculate an advanced objective function, UserExeCal001 (the workflow diagram shows CMOST, the .dat and .log files, GetMatBalanceError, and the result file):
1. CMOST creates the experiment dataset, for example, SAGD_2D_DynaGrid_Optimization_4_00001.dat . 2. The simulator runs the dataset, and generates the .log, .out, and SR2 files. 3. CMOST calculates the objective functions for the experiment. To calculate objective function UserExeCal001, CMOST triggers executable GetMatBalanceError.exe (located in folder D:/ResearchProjects/GetMatBalanceError ). CMOST passes the executable two arguments—the file path to the .log file, SAGD_2D_DynaGrid_Optimization_4_00001.log, and the file path for the result file, SAGD_2D_DynaGrid_Optimization_4_00001.UserExeCal001 . 4. GetMatBalanceError.exe reads the material balance error from the .log file.
5. GetMatBalanceError.exe writes the value of the material balance error to the result file.
6. CMOST reads the value from the result file and sets it to UserExeCal001. CMOST uses this value in algorithms, as appropriate, and inserts it into the Experiments Table in the UserExeCal001 column.
To configure the above, the Advanced Global Objective page would appear as follows. The page provides fields for: the objective function to be calculated by the external executable, the file path to the external executable, command line switches as necessary, command line arguments in the required order, and a preview of the command line.
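An external calculation executable like the GetMatBalanceError example above receives the .log file path and the result file path as command line arguments, extracts a value, and writes it for CMOST to read back. The following Python sketch shows that pattern; the log-line format matched here is invented for illustration, since the real .log layout is simulator-specific.

```python
# Hypothetical sketch of an external calculation executable such as the
# GetMatBalanceError example above: argv[1] is the path to the simulator .log
# file, argv[2] is the path of the result file CMOST will read back.
import re
import sys

def extract_material_balance_error(log_text):
    # The log-line format matched here is invented for illustration.
    match = re.search(r"Material balance error\s*=\s*([-+0-9.eE]+)", log_text)
    if match is None:
        raise ValueError("material balance error not found in log")
    return float(match.group(1))

def main(log_path, result_path):
    with open(log_path) as f:
        value = extract_material_balance_error(f.read())
    with open(result_path, "w") as f:
        f.write(str(value))          # CMOST reads this single value back

if __name__ == "__main__" and len(sys.argv) >= 3:
    main(sys.argv[1], sys.argv[2])
```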
NOTE: You can select command line arguments then use the arrow buttons to arrange them in the correct order.
In the Advanced Objective Functions table at the top of the page, select the user executable calculation then click the Test button to verify that the calculation engine works correctly and returns a value in the Test Calculations Results dialog box.
5.5.6 Global Objective Function Candidates
Through the Global Objective Function Candidates node, you can view built-in nominal global objective function candidates, if you have defined them, as well as basic simulation results:
History Match Quality
Net Present Values
Advanced Objective Functions
Basic Simulation Results
As well, through the Global Objective Function Candidates node, you can define additional nominal global objective function candidates using the built-in nominal global objective function candidates as variables. In the following example, we have defined CJGOF as a function of several nominal global objective function candidates:
CJGOF = (first candidate) + 2 × (second candidate) + 0.4 × (third candidate)
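A user-defined candidate such as CJGOF is simply a weighted combination of built-in candidates. In the sketch below, the candidate names are illustrative (the actual operands in the example are shown in the study's screenshot); only the weighting pattern with factors such as 2 and 0.4 comes from the example.

```python
# Sketch: a user-defined global objective function candidate is a weighted sum
# of built-in candidates. The candidate names below are illustrative.

def combine(candidates, weights):
    return sum(weights[name] * value for name, value in candidates.items())

candidates = {"HM_Quality": 10.0, "FieldNPV": 3.0, "UserExeCal001": 5.0}
weights = {"HM_Quality": 1.0, "FieldNPV": 2.0, "UserExeCal001": 0.4}
print(combine(candidates, weights))  # 1*10 + 2*3 + 0.4*5 = 18.0
```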
As with built-in nominal objective function candidates, you can, for example, specify the direction (minimization or maximization) of user-defined nominal global objective function candidates as the goal of an optimization study, through the Engine Settings node, as shown below:
5.5.7 Soft Constraints
You can define hard and soft constraints for history matching and optimization studies. The purpose of soft constraints is to allow users to change objective function values when constraints are violated. Refer to Hard Constraints for information about hard constraints. Through the Soft Constraints page, you can define soft constraint violations which, if they occur, will override objective function values. Checking for soft constraint violations is performed while simulations are being run and, if a soft constraint is violated, penalties will be applied to the objective function immediately, while the simulation is running.
To specify a soft constraint:
1. Select the Constraint Formula tab.
2. Click the button to the right of the Constraints table. A soft constraint is entered. You can edit the name and enter a comment, if necessary. Selecting Active instructs CMOST to check for the constraint violation.
3. When you create a soft constraint, a CMOST Formula Editor session is opened in the Constraint Formula area. Enter the formula for the soft constraint violation, as shown in the following example:
Formulas can be entered using any of the functions and variables available by clicking the button, for example:
The list of variables includes previously defined parameters, objective functions, and basic simulation results. An alternate method of entering the formula is to type it in manually. In the above example, Soft_Constraint_1 is violated if variable CSOR2019 is not less than or equal to 4.
To define a penalty:
The other part of specifying a soft constraint is to define the penalties that will be applied if the constraint is violated:
1. In the Constraints table, click the soft constraint to which you want to assign a violation penalty. For each of the soft constraints defined in the Constraints table, you should define one or more penalties.
2. Select the Constraint Penalties tab:
3. Click beside the Penalties for the constraints table to add a constraint violation penalty.
4. Select the Objective Function from the drop-down list in the table then, as appropriate, select the Active check box.
5. In the Formula Editor pane, enter the penalty, starting on the line indicated. Penalty definitions may require more than one line.
In the above example, if Soft_Constraint_1 is violated, then NPV will be adjusted as follows: if objective function NPV is greater than or equal to 0, the value of NPV will be replaced by the value of NPV / (CSOR2019 − 3); if NPV is less than 0, its value will not be changed.
If multiple soft constraints which affect the same objective function are violated, the value calculated from the last violated soft constraint in the list will be used for the objective function. Any constraint violations that occur will be mentioned in the messages displayed in the Engine Events table of the Control Centre page when the job is running.
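The soft-constraint logic described above can be sketched as follows. The penalty formula used here is hypothetical (a violated CSOR constraint scaling down a positive NPV); substitute whatever penalty you enter in the Formula Editor.

```python
# Sketch of the soft-constraint check-and-penalize logic described above.
# The penalty formula NPV / (CSOR2019 - 3) is hypothetical.

def apply_soft_constraint(npv, csor2019):
    violated = not (csor2019 <= 4.0)        # Soft_Constraint_1: CSOR2019 <= 4
    if violated and npv >= 0.0:
        return npv / (csor2019 - 3.0)       # penalty applied to non-negative NPV
    return npv                              # negative NPV is left unchanged

print(apply_soft_constraint(npv=100.0, csor2019=5.0))  # 50.0 (penalized)
print(apply_soft_constraint(npv=100.0, csor2019=3.5))  # 100.0 (not violated)
print(apply_soft_constraint(npv=-10.0, csor2019=5.0))  # -10.0 (NPV < 0 unchanged)
```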
6 Running and Controlling CMOST
6.1 Introduction
The Control Centre node and its subnodes are used to set up the simulators and experiments and then run the CMOST jobs.
6.2 Control Centre
Through the Control Centre page, you start, monitor the progress of, pause, and stop your CMOST jobs. Before you start the jobs, review and configure the settings in the following pages:
Engine Settings
Simulation Settings
Experiments Table
Once you have configured the above settings, you can run your CMOST jobs by clicking the Start Engine button in the Control Centre node or the one in the toolbar. If there are outstanding validation errors, the Validation Error Summary dialog box will be displayed:
Validation errors must be resolved before you can start the engine. The resolution of warnings is optional. You can click the button to copy the contents of the Validation Error Summary table into the Windows clipboard for pasting into a Word or Excel file, or into the body of an email.
Before the engine has started, depending on the engine type, the Control Centre node will be similar to the following:
In the Engine events area at the bottom of the screen, select the engine event types you want to display in the table. Click the Run button to start the CMOST engine. While jobs are running: Engine events are displayed in the table in accordance with your selection. In the following example, we have selected Show all (engine events):
NOTE: If you move the pointer over one of the progress bars, the number of experiments in that category will be displayed.
NOTE: If the study is user-defined, a sensitivity analysis, or an uncertainty assessment, the Experiments progress area will contain progress bars similar to those shown above. If the study is a history matching or optimization study, the Experiments progress area will consist of a run progress plot similar to that shown below, which will show base case and optimal solutions. If you move the pointer over one of the data points, the experiment number and data-point value will be displayed.
You can open the Experiments Table to monitor the progress of experiments and their outputs. You can open the Simulation Jobs page to monitor the status of jobs being sent to the selected simulator.
You can pause the CMOST engine at any time by clicking the Pause button. While the engine is paused, no new jobs will be submitted to the scheduler. All unfinished jobs will continue to run until the end and CMOST will process their results.
You can stop the CMOST engine at any time by clicking the Stop button. No new jobs will be submitted to schedulers. All unfinished jobs will continue to run until the end; however, CMOST will not process their results. After you have clicked the Stop button, you can click the Run button to restart the CMOST engine.
You can refresh the CMOST engine status at any time by clicking the Refresh button. If you do not click this button, engine status will be updated automatically at an (internally calculated) updating frequency.
You can clear the contents of the Engine Events table by clicking the Delete All Events button.
You can copy the contents of the Engine Events table by clicking the Copy All Events to Clipboard button then pasting the contents of the clipboard into an application such as Microsoft Excel or Word.
If the run completes successfully, the Control Centre page will appear as follows:
In the above example, all experiments completed normally. If some of the experiments failed or completed abnormally, you can obtain further information through the Experiments Table or Simulation Jobs pages. When appropriate, you can view available results through the Proxy Dashboard and through the Results & Analyses node to review the calculation and analysis of results. Some of the results will be available during the run.
6.3 Engine Settings
Through this node, you define and specify the CMOST engine that will be used for your study.
6.3.1 Introduction
Click the Control Centre | Engine Settings subnode and set the study type and engine name:
NOTE: Estimated no. of new experiments is read-only and is an estimate of the number of
new experiments that will be required based on the study type and engine name.
The following engine choices are available (HM = history matching, OP = optimization, SA = sensitivity analysis, UA = uncertainty assessment):

Engine                                           | User Defined | HM | OP | SA | UA
CMG DECE                                         |              | √  | √  |    |
Particle Swarm Optimization                      |              | √  | √  |    |
Latin Hypercube Plus Proxy Optimization          |              | √  | √  |    |
Differential Evolution                           |              | √  | √  |    |
Random Brute Force                               |              | √  | √  |    |
Manual Engine                                    | √            |    |    |    |
External Engine                                  | √            |    |    |    |
Response Surface Methodology                     |              |    |    | √  |
One Parameter At A Time                          |              |    |    | √  |
Monte Carlo Simulation Using Proxy               |              |    |    |    | √
Monte Carlo Simulation Using Reservoir Simulator |              |    |    |    | √
Manual Engine: In the case of a user-defined study type, use of this engine means no automatic creation of experiments; all experiments are created explicitly by the user through classical experimental design, Latin hypercube design, or manual creation.
External Engine: In the case of a user-defined study type, use of this engine allows the use of the user's own optimization algorithm. For further information, refer to External Engine and User-defined Executable.
Response Surface Methodology: For Sensitivity Analysis and Uncertainty Assessment using classical experimental design or Latin hypercube design, a response surface methodology is applied. Response surface methodology (RSM) explores the relationships between input variables (parameters) and responses (objective functions). A set of designed experiments is used to build a proxy model (approximation) of the reservoir objective function. The most common proxy models take either a linear or quadratic form. After a proxy model is built, Tornado plots displaying a sequence of parameter estimates are used to assess parameter sensitivity. Refer to Response Surface Methodology for further information.
One Parameter At A Time (OPAAT): Traditional method for performing
sensitivity studies, in which information about the effect of a parameter is determined by varying only that parameter. The procedure is repeated, in turn, for all parameters to be studied. Refer to One-Parameter-At-A-Time Sampling for more information.
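The OPAAT idea of varying only one parameter while holding the rest at the base case can be sketched as follows. The parameter names and candidate values below are invented for illustration; refer to One-Parameter-At-A-Time Sampling for CMOST's actual scheme.

```python
# Sketch of One-Parameter-At-A-Time sampling: starting from a base case, each
# parameter is varied over its candidate values while all others stay at base.
# Parameter names and values below are illustrative.

def opaat_experiments(base, candidate_values):
    experiments = []
    for name, values in candidate_values.items():
        for value in values:
            if value == base[name]:
                continue                     # the base case itself is run once, separately
            exp = dict(base)
            exp[name] = value
            experiments.append(exp)
    return experiments

base = {"PERMH": 100, "POR": 0.25}
candidates = {"PERMH": [50, 100, 200], "POR": [0.20, 0.25, 0.30]}
runs = opaat_experiments(base, candidates)
# 4 experiments: PERMH varied to 50 and 200, POR varied to 0.20 and 0.30
```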
CMG DECE: CMG-proprietary history matching and optimization method in
which parameter values are intelligently selected to achieve optimal solutions. For further information, see CMG DECE. Particle Swarm Optimization: History matching and optimization method in which the run is initialized with a population of random solutions. Navigation through the search space is guided by the best success so far, which usually results in a convergence towards the best solution. Refer to Particle Swarm Optimization for more information. Latin Hypercube Plus Proxy Optimization: Latin hypercube design is used to
construct experiments then an empirical proxy model is built using the training data obtained from the Latin hypercube design runs. The proxy model is then used to determine the optimal solution. See Latin Hypercube plus Proxy for further information. Differential Evolution (DE): History matching and optimization method in which the run is initialized with a population of random solutions or pre-defined known ones. DE attempts to find parameter values in an intelligent manner to get optimal solutions. Refer to Differential Evolution (DE) for more information. Random Brute Force: History matching and optimization method in which all combinations of parameter values are tested, with the starting point and path through the parameter values different for each run. See Random Brute Force Search for further information. Monte Carlo Simulation Using Proxy : Using Monte Carlo simulation, inputs are randomly generated from probability distributions to simulate the process of sampling from an actual population. These inputs are then fed into the response surface model, which is used to determine the uncertainty in the reservoir model. Monte Carlo Simulation Using Reservoir Simulator: In this case, the inputs
selected from the Monte Carlo simulation are run through the simulator to determine the uncertainty in the reservoir model.
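The Monte Carlo-with-proxy idea described above can be sketched as follows: parameter values are sampled from probability distributions and fed into a cheap proxy model rather than the reservoir simulator. The distributions and the quadratic proxy below are invented for illustration.

```python
# Sketch of Monte Carlo simulation using a proxy: sample parameters from their
# probability distributions and evaluate a cheap proxy model instead of the
# reservoir simulator. Distributions and the proxy polynomial are illustrative.
import random

def monte_carlo(proxy, n_samples, seed=42):
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        por = rng.uniform(0.20, 0.30)             # uniform prior for porosity
        permh = rng.normalvariate(100.0, 20.0)    # normal prior for permeability
        results.append(proxy(por, permh))
    return results

def proxy(por, permh):                            # hypothetical quadratic proxy
    return 500.0 * por + 2.0 * permh - 0.004 * permh ** 2

values = monte_carlo(proxy, n_samples=1000)
p50 = sorted(values)[len(values) // 2]            # median of the output distribution
```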
6.3.2 General Settings
Regardless of the engine type you select, the following general settings will be presented:
Engine General:
- Auto Save Result Interval: Specify the frequency with which results are to be saved to the CMOST study file while the CMOST engine is running.
Experiments Management:
- Default Keep SR2 Option for New Experiments: In the case of sensitivity analyses and uncertainty assessments, if this is set to Yes, CMOST will keep SR2 files for all experiments.
- Number of Failed Jobs to Exclude an Experiment: If an experiment has failed this many times, then the experiment will be excluded.
- Number of Optimum Experiments to Keep Simulation Files: In the case of history matching and optimization studies, the number of optimum experiments for which CMOST will keep simulation input and output files. The engines will delete the simulation files for all other experiments.
- Number of Perturbation Experiments for Each Abnormal Experiment: A perturbation experiment is an experiment generated by CMOST by slightly modifying the abnormally terminated experiment. This is helpful in cases such as numerical tuning, where a minor change in the input can change an experiment from an abnormal termination to a normal termination.
Optimization Settings: In the case of an optimization study:
- Global Objective Function Name: Name of the global objective function that is being optimized.
- Search Direction: If set to Minimize, the goal of the optimization will be to minimize the global objective function. If set to Maximize, the goal will be to maximize it.
- Total Number of Experiments: This is the maximum number of experiments that will be carried out to determine the optimum solution. Once this number is reached, the engine will stop. This setting can be changed during the run.
In the case of a user-defined study using an external engine, several configuration items will be presented. Refer to External Engine for further information.
Random Seed: For engine functions that require a seed to generate random values, specify whether the seed is to be user-specified or not. If it is not specified by the user, a random seed, generated based on computer clock time, will be used. If it is user-specified, using the same seed for the same set of parameters will always lead to the same set of experiments. This means the result is repeatable.
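The repeatability property described above can be illustrated with a minimal sketch: seeding a random number generator with the same user-specified value always reproduces the same sequence of sampled values.

```python
# Illustration of the repeatability property described above: the same
# user-specified seed always produces the same sequence of sampled values.
import random

def sample_experiments(seed, n):
    rng = random.Random(seed)
    return [round(rng.uniform(0.0, 1.0), 6) for _ in range(n)]

run1 = sample_experiments(seed=1234, n=5)
run2 = sample_experiments(seed=1234, n=5)
run3 = sample_experiments(seed=9999, n=5)
assert run1 == run2          # same seed -> identical experiments (repeatable)
assert run1 != run3          # different seed -> different experiments
```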
6.3.3 Engine-Specific Settings
Depending on the engine type you select, the following settings will be presented:
6.3.3.1 CMG DECE Optimization
Honour Parameter Constraints: If set to True, then the engine will honour hard constraints when creating new experiments. If set to False, then experiments violating hard constraints may be created by the engine; however, they will not be run.
Continuous Parameters Sampling: This field is not selectable by the user. The DECE engine will always assume continuous parameters have uniform distributions.
Discrete Parameters Sampling: This is not selectable by the user. The DECE engine will always treat discrete candidate values as equally probable during history matching or optimization.
Number of Initial Effect Screening Experiments: This is the number of initial experiments used by the DECE engine to test parameter effects. It is calculated internally and the user cannot change it.
6.3.3.2 Latin Hypercube Plus Proxy Optimization
Continuous Parameters Sampling: In the case of a history match or optimization,
the method by which continuous parameters are to be sampled, one of the following: -
-
Discrete Sampling Using Pre-defined Levels: If this option is used, the
data range will be divided equally into pre-defined levels, and experiments will only choose discrete values. Continuous Uniform Sampling within the Data Range: Parameters are sampled uniformly within the data range. Continuous Sampling Using Prior Distribution: The user-defined
prior distribution is considered while sampling the search domain.
Discrete Parameters Sampling: In the case of a history match or optimization, the method by which discrete parameters are to be sampled, one of the following:
- Treat Discrete Values Equally Probable: All candidate values are treated as if they have the same probability.
- Honour Prior Distribution of Discrete Values: User-defined prior distributions are honoured during the sampling process.
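The idea behind Latin hypercube sampling of a continuous parameter's data range can be sketched as follows (a one-parameter simplification for illustration only; CMOST's actual design generator is not shown here):

```python
import random

def latin_hypercube(n_experiments, low, high, seed=None):
    """One-parameter Latin hypercube sample: divide the data range into
    n_experiments equal strata and draw one uniform value from each."""
    rng = random.Random(seed)
    width = (high - low) / n_experiments
    samples = [low + (i + rng.random()) * width for i in range(n_experiments)]
    rng.shuffle(samples)               # randomize the run order of the strata
    return samples

# Every sub-interval of the data range contributes exactly one experiment,
# which is what spreads the design evenly across the search domain.
values = latin_hypercube(10, 0.1, 0.6, seed=1)
```

In a multi-parameter design, each parameter's strata are permuted independently so that the combined experiments cover all dimensions evenly.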
Proxy Model Type: Set to Ordinary Kriging or Polynomial Regression.
Maximum time (minutes) allowed for proxy calculations in each iteration: The maximum time allowed for the proxy to calculate and propose the next iteration of experiments. If it is taking too long to generate experiments between iterations, users can reduce this time; otherwise, we recommend keeping the maximum time at its default value.
Number of Initial Proxy Training Experiments: The number of training experiments used to build the initial proxy. This is calculated by CMOST and cannot be changed by the user.
6.3.3.3 Monte Carlo Simulation Using Proxy
This uncertainty assessment method can be used when model uncertainty can be represented by a list of independent/dependent uncertain parameters.
Proxy Model Type: You can choose either Polynomial Regression or Ordinary Kriging.
If Polynomial Regression is chosen, the following results will be available:
- Effect Estimates (tornado plots)
- Proxy Model Statistics (summary of fit, analysis of variance, effect screening, and polynomial equations)
- Model Quality Check (Monte Carlo results)
If Ordinary Kriging is selected, the following results will be available:
- Proxy Models (variogram base, exponent, and weights, if the number of experiments is less than 100)
- Monte Carlo Results
If the Polynomial Regression proxy model is chosen, the following terms can be used as engine stop criteria:
Interested Terms [not applicable for ordinary kriging]: If set to Linear, the CMOST engine will check that the acceptance criteria for a linear polynomial proxy model are met. Once the criteria are met, the CMOST engine will stop. If set to Linear + Quadratic, a Linear + Quadratic polynomial proxy model will be checked against the stop criteria. If set to Linear + Quadratic + Interaction, a Linear + Quadratic + Interaction polynomial proxy model will be checked against the stop criteria.
Acceptable R-Square [not applicable for ordinary kriging]: R-square (R2) indicates how well a proxy model fits observed data. An R2 of 1 occurs when there is a perfect fit (the errors are all zero). An R2 of 0 means that the proxy model predicts the response no better than the overall response mean. This field specifies the value that would be deemed acceptable. If this value is not reached and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
Acceptable R-Square Adjusted [not applicable for ordinary kriging]: R-square adjusted is a modification of R2 that adjusts for the number of explanatory terms in a model. Unlike R2, the adjusted R2 increases only if a new term improves the proxy model more than would be expected by chance. The adjusted R2 can be negative, and it will always be less than or equal to R2. This field specifies the value that would be deemed acceptable. If this value is not reached and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
Acceptable R-Square Prediction [not applicable for ordinary kriging]: R-square prediction indicates how well a proxy model predicts responses for new observations. Ranging between 0 and 1, larger values suggest models of greater predictive ability. This field specifies the value of R-square prediction that would be deemed acceptable. If this value is not reached and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
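The R-square statistics referred to above follow the standard definitions, which can be sketched as follows (generic formulas for illustration, not CMOST's internal code):

```python
def r_square(observed, predicted):
    """Coefficient of determination: 1 - SSE/SST."""
    mean = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean) ** 2 for o in observed)
    return 1.0 - sse / sst

def r_square_adjusted(observed, predicted, n_terms):
    """Adjusts R2 for the number of explanatory terms; can be negative."""
    n = len(observed)
    r2 = r_square(observed, predicted)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_terms - 1)

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
assert r_square(obs, obs) == 1.0            # perfect fit: errors are all zero
assert r_square(obs, [3.0] * 5) == 0.0      # no better than the response mean
```

R-square prediction is computed analogously but from prediction (leave-one-out style) residuals rather than fitting residuals, which is why it measures predictive rather than descriptive quality.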
Acceptable Relative Error of Proxy Verifications (%): This field specifies the maximum acceptable error for the verification experiments. If this criterion is not met and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
Percentage Limit of Extra Experiments for Improving Proxy (%): This stop criterion determines the maximum number of experiments that can be generated to try to meet all of the above criteria. For example, suppose the expected number of experiments for a study is 100. If the percentage limit of extra experiments for improving proxy is set to 25%, then a maximum of 25 experiments will be added to try to improve the quality of the proxy to meet the specified stop criteria. If 125 experiments have been run and there are still criteria that are not met, the engine will stop anyway.
If the Ordinary Kriging proxy model is chosen, Acceptable Relative Error of Proxy Verifications (%) and Percentage Limit of Extra Experiments for Improving Proxy (%) can be used as engine stop criteria.
6.3.3.4 Monte Carlo Simulation Using Simulator
This uncertainty assessment method should be used when the model uncertainty is represented by discrete realizations (e.g., geostatistical realizations or history-matched models). In this case, it is inappropriate to use a proxy model for the uncertainty assessment because discrete realizations cannot be characterized by continuous numbers.
6.3.3.5 One-Parameter-At-A-Time
Reference Case Parameter Values: If set to Use Parameter Median Values, the parameter's median value is used in the reference case. If set to Use Parameter Default Values, the parameter's default value is used in the reference case.
Continuous Parameter Testing: If set to Test All Discrete Levels, each discrete level of the parameter value will be used to generate experiments. If set to Test Lower and Upper Limit Only, only the minimum and maximum values will be used to generate experiments.
Discrete Parameter Testing: If set to Test All Candidate Values, each candidate value will be used to generate experiments. If set to Test Lower and Upper Bound Only, only the minimum and maximum values will be used to generate experiments.
6.3.3.6 Particle Swarm Optimization (PSO)
Refer to Particle Swarm Optimization for further information:
Inertia Weight: This setting must be between 0.4 and 0.9. A large inertia weight facilitates a global search, while a small inertia weight facilitates a local search.
C1, C2: Cognition and social components of the PSO. These settings control the exploration/exploitation capability of the algorithm. In history matching, exploration refers to a wide search of the possible combinations of unknown parameters that give a good match, while exploitation is a deeper search of previously found promising regions. Lower values for these parameters support exploration of the search space, but this can have an adverse impact on convergence. It is highly recommended that you keep C1 and C2 between 1 and 2.
Population Size: Set to 20 by default. A larger population size can be used when the total number of simulations is high. It is suggested that the number of PSO iterations (the total number of simulations divided by the population size) be greater than 20.
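The roles of the inertia weight and C1/C2 can be illustrated with the standard PSO update rule (a one-dimensional sketch for intuition; CMOST's exact implementation may differ):

```python
import random

def pso_step(positions, velocities, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5, seed=None):
    """One particle-swarm update: inertia term w*v plus a cognitive pull (c1)
    toward each particle's personal best and a social pull (c2) toward the
    swarm's global best."""
    rng = random.Random(seed)
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, personal_best):
        r1, r2 = rng.random(), rng.random()
        v_next = w * v + c1 * r1 * (pb - x) + c2 * r2 * (global_best - x)
        new_vel.append(v_next)
        new_pos.append(x + v_next)
    return new_pos, new_vel

# Particles already sitting at the global best with zero velocity stay put.
pos, vel = pso_step([2.0, 2.0], [0.0, 0.0], [2.0, 2.0], 2.0, seed=0)
```

A larger w keeps particles moving on their previous headings (global search); larger c1/c2 pull them harder toward known good regions (local refinement), which is why the guide bounds them.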
6.3.3.7 Differential Evolution (DE)
Refer to Differential Evolution (DE) for further information:
F: The scaling factor F ∈ [0, 4] scales the vector differences used to perturb the population, which controls the rate at which the population evolves. A large value facilitates exploration, while a small value promotes exploitation. Set to 0.5 by default.
Cr: The crossover probability Cr ∈ [0, 1] controls the diversity of the population. Set to 0.8 by default.
Np: The population size Np ∈ [4, 200] is set to 30 by default.
6.3.3.8 Random Brute Force Search
Refer to Random Brute Force Search for further information:
Continuous Parameters Sampling: In the case of a history match or optimization, the method by which continuous parameters are to be sampled, one of the following:
- Discrete Sampling Using Pre-defined Levels: If this option is used, the data range will be divided equally into pre-defined levels, and experiments will only choose discrete values.
- Continuous Uniform Sampling within the Data Range: Parameters are sampled uniformly within the data range.
- Continuous Sampling Using Prior Distribution: The user-defined prior distribution is considered while sampling the search domain.
Discrete Parameters Sampling: In the case of a history match or optimization, the method by which discrete parameters are to be sampled, one of the following:
- Treat Discrete Variables Equally Probable: All candidate values are treated as if they have the same probability.
- Honour Prior Distribution of Discrete Values: User-defined prior distributions are honoured during the sampling process.
6.3.3.9 Response Surface Methodology
Interested Terms: If set to Linear, the CMOST engine will check that the acceptance criteria for a linear polynomial proxy model are met. If the criteria are met, the CMOST engine will stop. If set to Linear + Quadratic, a Linear + Quadratic polynomial proxy model will be checked against the stop criteria. If set to Linear + Quadratic + Interaction, a Linear + Quadratic + Interaction polynomial proxy model will be checked against the stop criteria.
Acceptable R-Square: R-square (R2) indicates how well a proxy model fits observed data. An R2 of 1 occurs when there is a perfect fit (the errors are all zero). An R2 of 0 means that the proxy model predicts the response no better than the overall response mean. This field specifies the value that would be deemed acceptable. If this value is not reached and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
Acceptable R-Square Adjusted: R-square adjusted is a modification of R2 that adjusts for the number of explanatory terms in a model. Unlike R2, the adjusted R2 increases only if a new term improves the proxy model more than would be expected by chance. The adjusted R2 can be negative, and it will always be less than or equal to R2. This field specifies the value that would be deemed acceptable. If this value is not reached and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
Acceptable R-Square Prediction: R-square prediction indicates how well a proxy model predicts responses for new observations. Ranging between 0 and 1, larger values suggest models of greater predictive ability. This field specifies the value of R-square prediction that would be deemed acceptable. If this value is not reached and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
Acceptable Relative Error of Proxy Verifications (%): This field specifies the maximum acceptable error for the verification experiments. If this criterion is not met and the total number of extra experiments has not exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the CMOST engine will try to generate more experiments to improve proxy quality.
Percentage Limit of Extra Experiments for Improving Proxy (%): This stop criterion determines the maximum number of experiments that can be generated to try to meet all of the above criteria. For example, suppose the expected number of experiments for a study is 100. If the percentage limit of extra experiments for improving proxy is set to 25%, then a maximum of 25 experiments will be added to try to improve the quality of the proxy to meet the specified stop criteria. If 125 experiments have been run and there are still criteria that are not met, the engine will stop anyway.
6.3.3.10 External Engine and User-defined Executable
In the case of a user-defined optimizer, you will need to define the interface between CMOST and your optimizer, in particular the Engine Settings and Input/Output Tables. These settings define, for instance, the location of the table files used when an external engine is used in a user-defined study. The workflow is as follows:
1. CMOST calls the simulator and calculates the objective functions for a set of experiments.
2. Once all existing experiments are complete, CMOST outputs the experiments table in CSV (comma-separated values) file format and calls the user-defined optimizer. The iteration number, starting from 0, is passed as the only argument of the command call.
3. The user-defined optimizer reads the file and performs its optimization calculation.
4. The user-defined optimizer proposes a new set of experiments. The proposed new experiments are written in CSV format for CMOST to read. The user-defined executable exits.
5. CMOST reads the new experiments table file and, if the number of generations does not exceed the maximum number to be generated, adds the experiments to the Experiments Table and submits them to the simulator. Go to step 2.
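The workflow above can be sketched as a minimal user-defined optimizer in Python (the file names, parameter ranges, and the random "optimization" step are placeholders; a real optimizer would substitute its own algorithm and the table locations configured in the study):

```python
# Sketch of a user-defined optimizer executable. CMOST passes only the
# iteration number (starting from 0) on the command line.
import csv
import random
import sys

POPULATION_SIZE = 3                                     # must stay constant per generation
PARAMETER_RANGES = {"Para1": (0.1, 1.9), "Para2": (100, 300)}  # assumed for illustration

def read_previous_generation(path):
    """Read the Previous Generation Experiments Table written by CMOST."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def propose_new_generation(previous):
    """Placeholder optimization step: random draws from the ranges.
    A real optimizer would analyze `previous` (parameters and objectives)."""
    rng = random.Random(len(previous))
    names = list(PARAMETER_RANGES)
    rows = []
    for _ in range(POPULATION_SIZE):
        rows.append([round(rng.uniform(*PARAMETER_RANGES[n]), 3) for n in names])
    return names, rows

def write_new_experiments(path, names, rows):
    """First row: comma-separated parameter names; then one row per experiment."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(names)
        writer.writerows(rows)

if __name__ == "__main__" and len(sys.argv) > 1:
    iteration = int(sys.argv[1])        # iteration number supplied by CMOST
    previous = read_previous_generation("PreviousGeneration.csv")
    names, rows = propose_new_generation(previous)
    write_new_experiments("NewExperiments.csv", names, rows)
    # Exit silently once the calculation is complete.
```

The same skeleton works in any language, provided it reads and writes the CSV tables described in the following subsections and keeps the population size constant.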
The workflow for the user-defined optimizer forms a loop: CMOST sets up the study (parameters, objective functions, and so on) and starts the engine, optionally writing a Parameter Table file. In each generation, CMOST writes the Previous Generation Experiments Table file and calls the user-defined optimizer (executable), which writes a New Experiments Table file. CMOST verifies the proposed experiments, then adds and runs them. The loop repeats until the maximum number of generations is reached, at which point the engine stops.
NOTE: Before adding experiments, CMOST checks whether the total number of experiments matches the population size. For a user-defined engine, the maximum number of generations is used as the stop criterion.
Previous Generation Experiments Table File
Once all of the experiments in the experiments table are complete, CMOST outputs the previous generation's experiment information into this file. The table is output in CSV format. All the columns on the CMOST Experiments Table page are output, as shown in the following example:
ID,Generator,Status,Result Status,Proxy Role,Keep SR2,Has SR2,Highlight,Para1,Para2,Para3,Para4,Obj1,Obj2…
18,ExternalEngine,5,4,0,0,False,False,0.5,200,1,0.45,3450.3,54.4,…
19,ExternalEngine,5,4,0,0,False,False,1.9,300,1,0.21,2980.3,125.8,…
20,ExternalEngine,5,4,0,0,False,False,1.0,200,3,0.45,1480.3,50.2,…
User-defined Executable File
This executable file contains the user's optimization algorithm. It is called when all the existing experiments in the experiments table are complete. The main tasks of the user-defined executable file include, but are not limited to:
1. Read in the previous experiments' results from the Previous Generation Experiments Table file.
2. Analyze the results and, as necessary, propose a new set of CMOST experiments.
3. Write the proposed experiments to the New Experiments Table file, which will be read by CMOST.
Notes about creating user-defined executables:
1. User-defined executables can be written in any programming language.
2. User-defined executables must use a generation-based algorithm; i.e., in each generation, the population size (number of experiments) must be kept constant.
3. When CMOST calls a user-defined executable, the iteration number is passed to the executable as the only command argument.
4. A user-defined executable may save information used between generations, or simply output this information to a debug file.
5. The maximum run time for a user-defined executable is 60 minutes.
6. User-defined executables must exit silently once they have completed their calculations.
New Experiments Table File
The new experiments table file is the CSV table file output by the user-defined executable. The number of proposed experiments must equal the population size. The first row of the table file contains the comma-separated parameter names, and the remaining rows are the proposed experiments. Note that any illegal or repeated experiments will be rejected and automatically replaced with a random experiment defined by CMOST. An example new experiments table:
Para1,Para2,Para3,Para4
1.9,200,0.56,2
1.5,300,0.32,1
0.5,100,0.21,3
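A writer-side check of the rules above (header row of parameter names, exactly population-size rows, no repeats) might look like this (the helper name is hypothetical; CMOST itself performs the authoritative validation):

```python
import csv
import io

def validate_new_experiments(csv_text, population_size):
    """Check a proposed New Experiments Table: a header row of parameter
    names followed by exactly `population_size` unique experiment rows."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    names, experiments = rows[0], rows[1:]
    if len(experiments) != population_size:
        raise ValueError("number of proposed experiments must equal the population size")
    if len({tuple(e) for e in experiments}) != len(experiments):
        raise ValueError("repeated experiments would be rejected by CMOST")
    return names, experiments

table = "Para1,Para2,Para3,Para4\n1.9,200,0.56,2\n1.5,300,0.32,1\n0.5,100,0.21,3\n"
names, experiments = validate_new_experiments(table, population_size=3)
```

Catching these problems in the executable avoids having CMOST silently substitute random experiments for rejected rows.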
Parameter Table File
After the external engine has started, CMOST will write a parameter table file. The user-defined executable can read parameter information from this file. Users may alternatively hard-code parameter information into the executable, in which case the parameter table file will be ignored. Each parameter's information is output as one line in the file. Different formats are used, depending on the parameter type:
For discrete integer and discrete real (double) parameters, the parameter information format is:
ParameterName,Source,candidateValue1,candidateValue2,…,candidateValueN
For discrete text parameters, the numerical values will be output in the following format:
ParameterName,DiscreteText,candidateNumericalValue1,candidateNumericalValue2,…,candidateNumericalValueN
For continuous real (double) parameters, the format is:
ParameterName,ContinuousDouble,minValue,maxValue
The following is an example of the contents of a parameter table file:
Para1,DiscreteDouble,0.5,1.0,1.5,1.9
Para2,DiscreteInteger,100,200,300
Para3,ContinuousDouble,0.1,0.6
Para4,DiscreteText,1,2,3
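Parsing the formats above into a usable structure might look like this (a sketch; the DiscreteText branch assumes the numerical candidate values shown in the example):

```python
def parse_parameter_table(lines):
    """Parse parameter-table lines into {name: spec}, following the formats
    above: ContinuousDouble -> (min, max); discrete types -> candidate list."""
    params = {}
    for line in lines:
        fields = [f.strip() for f in line.split(",")]
        name, source, values = fields[0], fields[1], fields[2:]
        if source == "ContinuousDouble":
            params[name] = (float(values[0]), float(values[1]))
        elif source == "DiscreteInteger":
            params[name] = [int(v) for v in values]
        else:   # DiscreteDouble, or DiscreteText via its numerical values
            params[name] = [float(v) for v in values]
    return params

params = parse_parameter_table([
    "Para1,DiscreteDouble,0.5,1.0,1.5,1.9",
    "Para2,DiscreteInteger,100,200,300",
    "Para3,ContinuousDouble,0.1,0.6",
    "Para4,DiscreteText,1,2,3",
])
```

A user-defined executable can use such a structure to keep its proposed experiments inside the data ranges and candidate lists, so that no rows are rejected as illegal.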
6.4 Simulation Settings
Before you can run CMOST simulation jobs, you will need to configure the Simulation Settings page, in particular:
● Schedulers
● Simulator version
● Number of CPUs per job
● Maximum simulation run time
● Job record and file management
The Simulation Settings page has three areas: Schedulers, Simulator Settings, and Job Record and File Management.
6.4.1 Schedulers
Through the Schedulers area, you can adjust the settings for each scheduler separately. The Schedulers table also lists basic information about the schedulers. Configure the table in the Schedulers area as follows:
● Active: The Active check box determines whether or not CMOST will use a specific scheduler. If the Active check box is selected, CMOST will use the scheduler; otherwise, the scheduler will not be used.
NOTE: If schedulers other than Local are used, all study files must be located in a UNC (Universal Naming Convention) directory.
● Scheduler Name: The names of the schedulers are listed in the Scheduler Name column. 'Local' is the computer that is currently being used to open the project. This information cannot be edited in CMOST; if it needs to be changed, it should be done through Launcher.
● Type: The scheduler type is one of the following:
- Local (local computer running CMG Job Service)
- CMG Drone Scheduler (a remote computer running CMG Job Service)
- MSCC Scheduler (Microsoft Windows Compute Cluster)
- LSF Scheduler (Platform Computing LSF)
- SGE Scheduler (Sun Grid Engine)
● Max Concurrent Jobs: Different numbers of jobs can be run on different schedulers. Max Concurrent Jobs can be edited so that CMOST will send a certain number of jobs to each scheduler. The default value is 1. When changing this value, you should consider:
- The number of processors for the scheduler: N
- The number of required processors for each job: nj
- Whether the scheduler is shared by other users
Max Concurrent Jobs should NOT be greater than N/nj. If a scheduler is shared by other users, you should limit Max Concurrent Jobs so that you do not claim all of the processors for yourself.
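The N/nj guideline can be expressed as a small helper (a hypothetical function; the halving for shared schedulers is only an illustrative courtesy margin, not a CMOST rule):

```python
def suggested_max_concurrent_jobs(scheduler_processors, processors_per_job, shared=False):
    """Upper bound from the guideline above: no more than N/nj jobs,
    with extra headroom left on shared schedulers."""
    limit = scheduler_processors // processors_per_job   # N / nj, rounded down
    # Keep at least 1 so the scheduler remains usable even when N < nj.
    return max(1, limit // 2) if shared else max(1, limit)

# A 12-processor scheduler running 4-CPU jobs supports at most 3 concurrent jobs.
assert suggested_max_concurrent_jobs(12, 4) == 3
```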
NOTES:
1. Jobs that are running as well as jobs that are queued are both considered pending jobs.
2. If the CMG DECE Optimizer is used for a history matching or optimization study, it is suggested that the total Max Concurrent Jobs across all schedulers be less than 10, to improve performance. If there are too many simultaneously running jobs, the optimizer will not be able to learn fast enough to reduce the total number of jobs required to find an optimum solution. On the other hand, if the objective is simply to reduce total elapsed time, this suggestion can be ignored.
3. If Max Concurrent Jobs is set to 0, no jobs will be run on that scheduler; however, to avoid confusion, it is recommended that the Active check box be cleared instead if jobs should not be run on a specific scheduler.
● Max Failed Jobs: CMOST can be set to stop sending jobs to a scheduler if a certain number of jobs have failed. Failed jobs are jobs that could not be started due to hardware or software problems, such as:
- Network cable unplugged
- Computer shut down
- CMG Job Service not running
- No simulator installed
- No license available
Jobs that are terminated abnormally by the simulator due to numerical problems are NOT failed jobs. CMOST will stop sending jobs if the hardware/software problem persists for a scheduler. The default maximum number of failed jobs is 25.
● Work Plan: CMOST can be set to send jobs to different schedulers at different times. There are three options available for work plans:
- All Time
- Evenings and Weekends
- Weekends
If Work Plan is set to All Time, there are no limits on when jobs can be scheduled. This is the default. If Work Plan is set to Evenings and Weekends, jobs will be scheduled from 8 PM to 6 AM Monday through Friday and at all hours on Saturday and Sunday. This option is useful if a computer is normally used during the day during the workweek but is left idle during evenings and weekends. If Work Plan is set to Weekends, jobs will only be scheduled from 12:00 AM Saturday to 12:00 PM Sunday. This option is useful if a computer is used during the workweek but left idle during the weekends.
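The work-plan windows described above can be expressed as a small check (a hypothetical helper, not part of CMOST; weekday numbering follows Python's datetime, with Monday = 0):

```python
from datetime import datetime

def allowed_by_work_plan(plan, when):
    """Return True if a job may be sent at `when` under the stated work plans."""
    if plan == "All Time":
        return True
    weekday, hour = when.weekday(), when.hour
    if plan == "Evenings and Weekends":
        if weekday >= 5:                      # all of Saturday and Sunday
            return True
        return hour >= 20 or hour < 6         # 8 PM to 6 AM on weekdays
    if plan == "Weekends":
        # 12:00 AM Saturday to 12:00 PM Sunday
        return weekday == 5 or (weekday == 6 and hour < 12)
    raise ValueError("unknown work plan")

# A Wednesday-afternoon job is blocked under Evenings and Weekends.
assert not allowed_by_work_plan("Evenings and Weekends", datetime(2014, 1, 8, 14, 0))
```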
NOTE: Jobs that take a while to run may run outside of the range set in the Work Plan. The Work Plan only guarantees that jobs will not be sent to a scheduler outside of the times set. If necessary, it is OK to kill jobs in Launcher. Once a job is killed, CMOST will be notified and proper action will be taken. For sensitivity analysis and uncertainty assessment, a new job with the same parameter values will be scheduled. For history matching and optimization, the killed job will be ignored and a new job may be scheduled depending on the progress of the optimization process.
● Job Priority: Different priorities can be set for different schedulers. If there are multiple jobs queued for a scheduler, the jobs with higher priority will be run first. The default value for Job Priority is Low.
● Additional Switches: If a scheduler switch is required, it should be entered into the Additional Switches column. See the Launcher User's Guide for more information on scheduler switches.
● Host Computer: The Host Computer column lists the computers the schedulers run from. There is no Host Computer listed for the Local scheduler. This information cannot be edited in CMOST; if it needs to be changed, it should be done through Launcher.
● Refresh: If a new scheduler has been added or removed via Launcher, the Schedulers table can be updated to reflect these changes. Schedulers may also need to be updated if a study file is copied in from another computer and the schedulers differ on the two machines. Click the Refresh button to update the table. If new schedulers have been added, they will be added to the table with default values.
6.4.2 Simulator Settings
Through the Simulator Settings area, configure the settings as follows:
● Simulator: The simulator that CMOST should use. This is read-only.
● Simulator Version: If there is more than one simulator version installed on the local computer, more than one version will be listed. It is recommended that the same simulator version be installed on all compute nodes of all schedulers.
● Number of CPUs per job: The number of processors to use for each job.
NOTE: This value should not be greater than the number of processors of the scheduler with the lowest number of processors. If you are running on your local machine, right-click the Windows taskbar, then select Start Task Manager; the number of available CPUs is displayed on the Performance tab. Enter that number in this field.
● Method to Find Executable: The method to find executable defines which simulator will be used on remote computers. By default, CMOST will attempt to use the simulator version defined in the Simulator field. If this version does not exist on the remote computer, CMOST will use a method to find an alternative simulator version. Three options are available:
- Find Closest Version
- Find Latest Version
- Find Exact Version
The Find Closest Version option will try to locate the closest match to the simulator defined. The Find Latest Version option will try to find the newest version located on the remote computer. If Find Exact Version is used, only the simulator defined will be used; i.e., no jobs will be sent to the scheduler if that version does not exist.
● Max Run Time per Job (hours): A limit can be set on the amount of time taken by a single job. If a job has not completed by the time specified, the job will be killed by the engine. The default maximum is 720 hours (30 days).
● Additional Simulator Switches: If simulator switches are required, they can be entered in the Additional Simulator Switches text box. More information on simulator switches can be found in the simulator and Launcher user guides.
● Apply Simulator License Multiplier: If set to True, simulation jobs submitted by CMOST will take fewer license tokens than jobs submitted outside of CMOST. For example, for the same license, IMEX™ jobs submitted from CMOST take ¼ of the tokens of an IMEX job submitted outside of CMOST. GEM™ and STARS™ jobs take ½ of the tokens when using the CMOST License Multiplier.
● Write SR2 Files on Execution Host: If set to True, SR2 files will be written to a temporary folder on the execution host computer during simulation, then copied to the study folder when the simulation is done. If set to False, SR2 files will be written to the study folder directly.
● Write Log File on Execution Host: If set to True, simulator log files will be written to a temporary folder on the execution host computer during simulation, then copied to the study folder when the simulation is done. If set to False, simulation log files will be written to the study folder directly.
6.4.3 Job Record and File Management
Through the Job Record and File Management area, you can configure how CMOST manages job records in Launcher and simulation output files on disk. For example, you can instruct CMOST to keep or delete simulation output files (.irf, .mrf, .out) for abnormally terminated jobs.
6.5 Experiments Table
Based on your engine settings, CMOST will automatically generate a set of experiments, except when you choose to use a manual or external engine. You can also generate additional experiments using available design algorithms and by defining them manually. Through the Experiments Table, you manage the experiments that will be run as part of a study. This section provides information about the following:
● Navigating the Experiments Table
● Creating and Importing Experiments, in addition to those that are automatically created by CMOST based on engine settings
● Configuring the Experiments Table: you can configure the appearance of the Experiments Table, in particular the order of the columns and the grouping of rows on the basis of one or more of the columns
● Checking Experiment Quality: you can review the orthogonality of the experiments
● Exporting the Experiments Table to Excel
● Viewing the Simulation Log
● Clearing the SR2 Files
● Reprocessing Experiments
6.5.1 Navigating the Experiments Table
The Experiments Table will contain experiments that have been:
● Automatically generated by CMOST based on engine settings
● Generated by the user using CMOST experiment design tools (such as Latin hypercube design)
● Manually entered by the user (user-defined experiments)
6.5.1.1 Experiments Table Columns
NOTE: Whether cells in the Experiments Table are editable or not depends on the status of the experiment. For New experiments, cells in the Proxy Role, Keep SR2, Highlight, and Comments columns are editable. For Completed experiments, only the cells in the Highlight, Proxy Role, and Comments columns are editable. The Proxy Role and Keep SR2 columns can be edited by right-clicking the experiment and then selecting the desired option in the context menu.
● ID: Unique experiment ID number, assigned by CMOST when the experiment is created. If the experiment is subsequently deleted, the ID number will not be reused.
● Generator: The generator that was used to create the experiment; for example, LatinHyperCube, Response Surface Methodology, or User.
● Status: Status of an experiment while it is running, one of:
- Expecting: The Engine has determined that a certain number of experiments need to be run. They have not been created yet, but are expected to be created.
- Reuse Pending: This status is displayed if you have added new parameters after creating an experiment. Experiment values for the new parameters are therefore unknown, so you will need to resolve the Reuse Pending status by providing the unknown parameter values for each experiment, as follows:
a. Right-click an experiment, or multiple experiments, with Reuse Pending status, using the CTRL and SHIFT keys as necessary.
b. Select Resolve Reuse Pending and then click Resolve Selected Experiment(s).
c. In the Experiment Parameter Values dialog box, enter the value for the new parameter that you want to use in the selected experiment(s).
d. Click OK to apply.
If you select Resolve Reuse Pending and then click Resolve All Experiments, you can set the value of the new parameter(s) for all experiments that still have a Reuse Pending status.
- New: A new experiment has been added, but the dataset has not yet been created.
- Creating dataset: CMOST is creating the dataset for the experiment.
- Dataset created: CMOST has created the dataset for the experiment.
- Running: CMOST has submitted the dataset to Launcher.
- Complete: CMOST has received the SR2 files from Launcher.
- Reused: A previously completed experiment has been reused in the current study.
- Aborted: At any time before they are Complete, users can abort experiments manually. Aborted experiments are ignored by CMOST; for example, the results of an experiment that has been aborted will not be used to determine a proxy model or as a candidate for an optimal solution.
● Result Status: Status of the simulation results of an experiment:
- Unknown: CMOST has not yet checked the .log, .out, and .irf files.
- Incomplete: Simulation is not complete.
- Exceed max run time: Experiment has exceeded the Max Run Time per Job setting defined in Simulator Settings.
- Abnormal termination: Job was terminated by the simulator before the stop time was reached.
- Normal termination: The simulation has run to the last time or date specified in the dataset and has produced the output files. CMOST will clear the job record in Launcher, recover the necessary information from the output files, and process them as specified in the Job Record and File Management table in the Simulation Settings node and in the Keep SR2 column in the Experiments Table.
- Violate hard constraints: A hard constraint has been violated, and the simulation run has not been allowed to proceed. Refer to Hard Constraints for further information.
- Waiting to be re-processed: This status is displayed if, after running an experiment, you change the unit system, field data, fundamental data, or objective functions. In this case, CMOST will need to recalculate the objective functions. Starting the engine will automatically reprocess any experiments that require it, or you can force one or more experiments to be reprocessed, as follows:
1. Click the experiment you want to reprocess, or use the CTRL key to select multiple experiments.
2. Right-click the selected experiment(s), then select Reprocess | Reprocess Selected Experiment(s), or click the Reprocess Experiment(s) button and then select Reprocess Selected Experiment(s).
3. CMOST will try to recalculate all the objective functions for the experiment.
- Re-process failed: CMOST cannot find enough data to recalculate the objective functions for this experiment. For example, a new objective function is added, so all experiments need to be reprocessed; a re-process failure can happen if the SR2 files are deleted and the VDR files do not contain the data needed to calculate the newly added objective function.
● Proxy Role: If set to Training, the results of the experiment will be used to formulate the proxy model. If set to Verification, the results will be used to verify the accuracy of the resulting proxy model. If set to Ignore, the results will be used for neither training nor verification. Refer to Proxy Dashboard and Proxy Analysis for further information about the CMOST proxy model.
● Keep SR2: If set to Auto, CMOST will adhere to the simulation settings. If set to Yes, CMOST will keep the SR2 files after the experiment simulation has run, regardless of the simulation settings. If set to No, CMOST will delete the SR2 files after the experiment has run, regardless of the simulation settings. This field is editable as long as the experiment has not completed.
● Has SR2: If simulation results (SR2) files have been produced and CMOST has not deleted them, this box will be checked. This check box is read-only and cannot be edited.
● Highlight: Specific experiments can be highlighted in CMOST plots. If Highlight is checked, the results of that experiment will be displayed as a purple curve or data point in plots.
● Parameter Value: The specific candidate value (numeric, text, or other) that was entered or generated for each parameter is listed in the Experiments Table, with the name of the parameter displayed in the column heading. If the parameter has its source specified as Formula, the formula calculation result is displayed. If a parameter is not Active, the default value of the parameter is used. All parameters are listed, whether or not they are Active.
● Objective Function Values: The calculated value for each objective function is listed in the Experiments Table, with the name of the objective function displayed in the column heading.
● Execution Node: Host computer on which the experiment was run.
● Dataset Path: The name and path of the dataset that was simulated. In most cases, the .dat, .log, and output files are deleted for normally terminated jobs, since this is the default setting in the Job Record and File Management area of the Simulation Settings node. The VDR files are for CMOST use only.
● Optimal: There is generally only one optimal experiment. For example, if the global objective function to be optimized is NPV and the optimization direction is Maximize (these fields are chosen in the Engine Settings node), the experiment with the largest NPV is the optimal experiment. The optimal experiment may change during the run. The Optimal check boxes are read-only and cannot be edited. By default, the optimal experiment is shown in a different color in all plots so that it is easily distinguishable. In some cases, multiple experiments may have identical global objective function values; a typical situation that may lead to multiple optimal solutions is when some of the parameters have little or no effect on the global objective function.
● Comment: Users can enter information about a particular experiment in this field.

6.5.1.2 Context Menu

Once you have populated the Experiments Table, you can select an experiment (the experiment row will change to blue shading once selected), right-click the experiment row to display the context menu, then select a menu item to perform one of the following operations:
● Edit: Available if you have selected a user-defined experiment. It will not be available if you select a generated experiment.
If you have selected a user-defined experiment, the Experiment Parameter Values dialog box will be displayed, through which you can change the experiment's parameter values. The new values will appear in the table after you click OK.
● Copy to new: Copy the selected experiment to a new user-defined experiment, then change the parameters in the new experiment to ensure it is unique.
● Copy to Clipboard: Copy the experiment table headings and selected experiment rows to the Windows clipboard, which you can then paste into Excel, for example.
● Resolve Reuse Pending: If the status of an experiment is Reuse Pending, this option will be enabled. By clicking this option, users will be prompted to enter the unknown parameter values for the selected experiment, so that the experiment can be reused by CMOST.
● Delete: If you have selected a generated experiment, you will be given the option to Delete all Experiments in Generators. If you proceed, the Choose Generators to Delete dialog box will be displayed, as shown in the following example:
In the above example, the Generator Name LatinHyperCube is selected. If you click OK, all of the experiments created by the LatinHyperCube generator will be deleted from the table. If you select a user-defined experiment and then right-click Delete, you will have the option of deleting only that experiment or, through the Choose Generators to Delete dialog box, deleting all of the user-defined experiments. You can select multiple user-defined experiments to delete using the CTRL and SHIFT keys.
● Set Proxy Role: Change the proxy role of the experiment; for example, if it was set to Training, you will be able to change it to Verification or Ignore. When set to Training, the results of the experiment will be used to calculate the proxy model. If set to Verification, the results will be used to verify the accuracy of the resulting proxy model. If set to Ignore, the results will not be used for training or verification. Refer to Proxy Dashboard and Proxy Analysis for further information about the CMOST proxy model.
● Set Keep SR2: If set to Auto, CMOST will honour the settings in the Job Record and File Management table in the Simulation Settings page. If set to Yes, the SR2 files will be kept after the experiment has run, regardless of the simulation settings. If set to No, the SR2 files will not be kept after the experiment has run, regardless of the simulation settings.
● Clean SR2 Files: Deletes the related SR2 files to save disk space.
● Set Highlight: If set to False (default), the results of the experiment will not be highlighted in plots. If set to True, the status and results of the experiment will be highlighted in plots. The status and results of multiple experiments can be highlighted. This setting can also be entered directly into the table by selecting the Highlight check box.
● Abort: Regardless of the status of the experiment, the status will be changed to Aborted. If the experiment has not yet been run, it will not be run. If it has already been submitted to the scheduler, the job will be killed.
● Reprocess: Recalculate all objective function values for the experiment.
● Create Dataset: A dataset will be created using the experiment parameters. If you select an experiment with a dataset path defined, you can click the Launch Builder button to open the dataset in Builder. This is for viewing only, since CMOST will not know about any changes you apply through Builder.
● Restore to New: This will restore a completed experiment's status to New. Parameter values are kept the same; however, the values of all objective functions are erased.

6.5.1.3 Operation Buttons

The following buttons are provided on the right side of the Experiments Table:
● Create Experiments: Manually create experiments, in addition to those automatically generated by the CMOST engine.
● Export to Excel: Export the table contents to an Excel file. Refer to Exporting the Experiment Table to Excel.
● Configure the Table: Configure the columns or apply filters, as outlined in Configuring the Experiments Table.
● Reprocess Experiments: Recalculate all objective function values for all or selected experiments.
● Check Quality: Open the Experiments Quality window to view information about the orthogonality of the experiments. Refer to Checking Experiment Quality for further information.
● View Simulation Log: Open the selected experiment's simulation log file, if available, in your default text editor. You specify whether the experiment log files are to be saved through the Simulation Settings node.
● Launch Builder: Open the selected experiment's dataset, if available, in Builder. You specify whether the experiment datasets are to be saved through the Simulation Settings node.
● Launch Results Graph: Open the selected experiment's SR2 files, if available, in Results Graph. The Has SR2 box will be checked if they are available. You specify whether experiment SR2 files are to be saved through the Simulation Settings node.
● Launch Results 3D: Open the selected experiment's SR2 files, if available, in Results 3D. The Has SR2 box will be checked if they are available. You specify whether experiment SR2 files are to be saved through the Simulation Settings node.
6.5.2 Creating Experiments

In addition to experiments that CMOST will automatically generate, you can create your own experiments, as follows:
1. Click Create Experiments in the upper right. The Create New Experiments dialog box is displayed.
2. Select the method you want to use to create the new experiments, then click Next.
3. If you choose Using classic design, the Choose classic experiment design dialog box will be displayed:
   a. Select the Levels of Experimental Design, 2 or 3.
   b. Select the Sampling Method. For Levels of Experimental Design equal to 2, the Sampling Method can be set to one of Plackett-Burman, Fractional Factorial, or Full Factorial. For Levels of Experimental Design equal to 3, the Sampling Method can be set to one of Box-Behnken or CCD Uniform Precision. For further information about these sampling methods, refer to Classical Experimental Design.
   c. Number of Experiments will be set based on the above settings.
4. If you choose Using Latin Hypercube design, the following Create New Experiments dialog box will be displayed (for further information about Latin hypercube design, refer to Latin Hypercube Design):
   a. Select the sampling option for Continuous Parameters Sampling, one of:
      Discrete Sampling Using Pre-defined Levels: The data range is divided equally into pre-defined levels, and experiments will only choose discrete values.
      Continuous Uniform Sampling within the Data Range: Parameters are sampled uniformly within the data range.
      Continuous Sampling Using Prior Distribution: The user-defined prior distribution is considered while sampling the search domain.
   b. Select the sampling option for Discrete Parameters Sampling, one of:
      Treat Discrete Values Equally Probable: All candidate values are treated as if they have the same probability.
      Honour Prior Distribution for Discrete Values: User-defined prior distributions are honoured during the sampling process.
c. Select the Number of Experiments. This will initially be set by CMOST. The available selections are based on the number of parameters and candidate values. d. If you select Design quality optimization, CMOST will try to optimize the quality of the design (minimizing the maximum pairwise correlation and maximizing the minimum sample distance) by generating multiple designs and selecting the best. If not selected, only one design is generated and used. e. Set Design quality optimization iterations, the maximum number of iterations allowed for optimizing the design quality. It is only used when Design quality optimization is selected. f.
If desired, select Use user-specified random seed to generate the experiments and then enter the User-specified random seed. If the same random seed is used, the sequence of randomly generated experiments can be repeated. If you do not select Use user-specified random seed, a default random seed will be used.
   g. Click the button. CMOST creates experiments based on the sampling options and design configuration. The progress bar will indicate when the experiment design is complete.
5. The Using all parameter combination design option will only be enabled if the parameters are discrete real, integer, or text. Formula-based parameters are also allowed, since these are dependent parameters. CMOST supports up to 65000 combinations of parameter values, so the Using all parameter combination design option will not be enabled if you meet or exceed this number of combinations.
6. If you select Manual (user defined), you can define and add your own experiments to the Experiments Table:
   a. As shown above, select candidate values for discrete variables (HTSORW, for example) and use the slider to set the value of a continuous variable (PERMH, in the example).
   b. Click Next to add the experiment to Experiments to be added.
   c. Click Finish to add the experiment to the Experiments Table.
7. If all of the parameters are discrete, you will have the option of selecting Using all parameter combination design, which will generate a set of experiments testing all parameter combinations.
8. Regardless of the method you use to add experiments, they will be added to the Experiments Table when you click Finish, for example:
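The design quality optimization described in step 4 (generating multiple Latin hypercube designs and keeping the best) can be illustrated with a small sketch. This is illustrative only, not CMOST's internal algorithm; all function names are hypothetical, and the sketch scores designs by maximum pairwise correlation alone.

```python
import random
import math
from itertools import combinations

def latin_hypercube(n_exp, n_par, rng):
    """One Latin hypercube design: each parameter's range is split into
    n_exp equal levels, and each level is used exactly once per parameter."""
    cols = []
    for _ in range(n_par):
        levels = [(i + 0.5) / n_exp for i in range(n_exp)]  # level midpoints in [0, 1]
        rng.shuffle(levels)
        cols.append(levels)
    return [[cols[p][e] for p in range(n_par)] for e in range(n_exp)]

def max_pairwise_correlation(design):
    """Largest |correlation| between any two parameter columns (0 = orthogonal)."""
    n = len(design)
    worst = 0.0
    for a, b in combinations(range(len(design[0])), 2):
        xa = [row[a] for row in design]
        xb = [row[b] for row in design]
        ma, mb = sum(xa) / n, sum(xb) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(xa, xb))
        var = math.sqrt(sum((x - ma) ** 2 for x in xa) * sum((y - mb) ** 2 for y in xb))
        worst = max(worst, abs(cov / var))
    return worst

def optimized_design(n_exp, n_par, iterations, seed=12345):
    """Generate `iterations` candidate designs and keep the one with the
    smallest maximum pairwise correlation. A fixed seed makes the sequence
    of generated designs repeatable, as with a user-specified random seed."""
    rng = random.Random(seed)
    return min((latin_hypercube(n_exp, n_par, rng) for _ in range(iterations)),
               key=max_pairwise_correlation)

design = optimized_design(n_exp=10, n_par=3, iterations=50)
print(len(design), max_pairwise_correlation(design))
```

Increasing the iteration count generally lowers the maximum pairwise correlation of the selected design, at the cost of more generation time.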
6.5.3 Configuring the Experiments Table

6.5.3.1 To Re-order the Columns

Drag column headings left or right to reorder the columns.
6.5.3.2 To Group the Experiments

You can drag column headings to the area above the table to group the experiments by those columns. In the following example, the experiments are grouped first by POR and then by PERMV:
6.5.3.3 To Disable Column Display

You can restrict the display of information in the Experiments Table by clicking the Configure Table button and then selecting Column Configure. The Column Configure dialog box is displayed. Through this dialog box, you can select general, parameter, and objective function columns for display:
6.5.3.4 To Filter Experiments

If you have a large number of experiments, you can define a filter to display only those experiments that meet the filter properties, as follows:
1. Click the Configure Table button, then select Filter Configure. The following dialog box is displayed:
2. Configure the table filter as required. In the following example, the filter will display only those experiments where Generator contains the string "LatinHyperCube", POR is equal to 0.22, ID is greater than 5, and HTSORG is not equal to 0.04:
3. Once you have configured the filter, click OK. The Filter info label at the bottom of the Experiments Table will now display the filter specification to the right of the check box.
4. If you select the Filter info check box, the Experiments Table will only display experiments meeting the filter specification, as shown below for our example:
NOTE: To display all experiments again, clear the Filter info check box.
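The filter combines its conditions with a logical AND: an experiment is displayed only if every condition holds. The example filter above could be expressed outside CMOST, for instance in Python against rows exported from the Experiments Table (the rows and column names below are made up for illustration):

```python
# Hypothetical exported rows; column names follow the filter example above.
rows = [
    {"ID": 3, "Generator": "User",           "POR": 0.22, "HTSORG": 0.04},
    {"ID": 7, "Generator": "LatinHyperCube", "POR": 0.22, "HTSORG": 0.05},
    {"ID": 9, "Generator": "LatinHyperCube", "POR": 0.25, "HTSORG": 0.05},
]

def passes_filter(row):
    # Generator contains "LatinHyperCube" AND POR == 0.22 AND ID > 5 AND HTSORG != 0.04
    return ("LatinHyperCube" in row["Generator"]
            and row["POR"] == 0.22
            and row["ID"] > 5
            and row["HTSORG"] != 0.04)

filtered = [row["ID"] for row in rows if passes_filter(row)]
print(filtered)  # only experiment 7 meets all four conditions
```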
6.5.4 Checking Experiment Quality

Click the Check Quality button to open the Experiments Quality window, which provides information about the quality of the experiment design. Refer to Sampling Methods for a discussion of experiment orthogonality and the distribution of sampling points about the parameter space. In the following example, the experiment orthogonality is about 3.1E-17, which is in the Perfectly orthogonal range:
In the following example, the orthogonality is 0.47, which is in the Not Orthogonal range:
NOTE: If the experimental design quality is in the red area, you can try additional classical experimental design or Latin hypercube design to improve the design quality.
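An orthogonality measure of this kind can be read as the maximum absolute pairwise correlation between the parameter columns of the design matrix: a value near zero (3.1E-17 is numerical noise) means the columns are effectively uncorrelated, while values approaching 1 mean parameters vary together and their effects are hard to separate. A minimal sketch of such a measure (illustrative only, not CMOST's internal calculation):

```python
import math
from itertools import combinations

def orthogonality(design):
    """Maximum absolute pairwise correlation between parameter columns.
    0 means perfectly orthogonal; values near 1 mean strongly correlated."""
    n = len(design)
    worst = 0.0
    for a, b in combinations(range(len(design[0])), 2):
        xa = [row[a] for row in design]
        xb = [row[b] for row in design]
        ma, mb = sum(xa) / n, sum(xb) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(xa, xb))
        var = math.sqrt(sum((x - ma) ** 2 for x in xa) * sum((y - mb) ** 2 for y in xb))
        worst = max(worst, abs(cov / var))
    return worst

# A full two-level factorial in two parameters is perfectly orthogonal:
full_factorial = [[-1, -1], [-1, 1], [1, -1], [1, 1]]
print(orthogonality(full_factorial))  # 0.0

# Columns that mostly move together are far from orthogonal:
poor = [[-1, -1], [1, 1], [1, 1], [-1, 1]]
print(orthogonality(poor))
```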
6.5.5 Exporting the Experiment Table to Excel

Click the Export to Excel button to export the table contents to an Excel file. This command will honour the ordering of the columns that you have specified, and will save only the rows that pass any active filter.
6.5.6 Viewing the Simulation Log

Click the View Simulation Log button to open the simulation log file in your default text editor. If no text editor is set to open .log files by default, a dialog box will appear asking you to select a program with which to view the file. The .log file contains information useful to reservoir engineers. Through the Job Record and File Management area on the Simulation Settings node, you can specify when to keep or delete the log file.
6.5.7 Reprocessing Experiments

If the study objective functions, unit system, or history data change, the experiment objective functions will need to be recalculated, after which the experiment can be reused. Click the Reprocess Experiments button to reprocess (that is, recalculate all objective functions for) selected experiments or all experiments.
6.6 Proxy Dashboard

The CMOST Proxy Dashboard provides an efficient way of assessing the effects of parameter values on results, even while simulation jobs are in progress. You can see the results as simulations are completed, without having to wait until all of the simulations are complete. This is particularly useful in history matching and optimization, where you may be running a large number of simulations. Through the Proxy Dashboard, before all of the experiments are completed, you can:
● Use preliminary proxy models to begin predicting reservoir behavior.
● Investigate the effect of varying input parameter values, thereby improving your understanding of the reservoir and how proxy modeling works.
● Define and add training or verification experiments to the study.
● Compare different proxy models.
6.6.1 Opening the Proxy Dashboard

Once the CMOST engine has started, you can view and interpret the results of completed experiments through the Proxy Dashboard. On the left side of the dialog box, you can configure the Proxy Dashboard display, as follows:
● Select the time series that you want to display.
● Select the training or validation experiment for which you want to compare simulation, proxy model, and field history results, and what-if scenarios.
● Click to copy the selected experiment parameter values to the What-if scenario table.
● Define the parameter values for the what-if scenario.
● Add an experiment that uses the what-if scenario parameter values to the Experiments Table or, if the selected experiment is not yet running, update its parameter values using the what-if scenario parameter values.
● Select series area: In this area, select the series for which you want to build a proxy model based on completed training experiments, view results calculated by the proxy model, and compare the proxy model prediction with simulator outputs and field history measurements.
● Reference experiment area: Select the experiment for which you want to compare results of field history measurements, simulator predictions (from the experiment's .irf file), and proxy model predictions (using the experiment's parameter settings). To do this:
  1. Select the generator, CMG DECE in the above example.
  2. Select the Proxy Role, one of Training or Validation. The experiments that meet the criteria will be available through the Experiment ID drop-down list.
  3. Select the Experiment ID. The experiment parameter values will be displayed in the Selected Experiment table. If a proxy model has been built, the proxy model prediction will be displayed in the plot.
● What-if scenario area: If you click the copy button, the Selected Experiment parameter values will be copied into the What-if scenario table. You can then adjust the values in the What-if scenario table using drop-down selections for discrete parameters, and sliders (or directly typed values) for continuous variables. The proxy-model-predicted time series for the what-if scenario will be displayed in the plot. This technique illustrates the effect of varying parameter values on the proxy model prediction, and can be used to define new experiments and adjust existing experiments if they have not yet started.
● If you click Update Experiment, the parameters in the selected experiment will be updated with the values in the What-if scenario table, as long as the experiment has not yet started running and it is unique.
● If you click Add Experiment, a new experiment, using the parameter settings defined in the What-if scenario area, will be added to the Experiments Table, as long as it is unique. As well, the experiment will become the reference (selected) experiment.
6.6.2 Building a Proxy Model through the Proxy Dashboard

The proxy model is built internally by CMOST using the results of completed Training experiments. The number of experiments needed to build a proxy model depends on the number of parameters and the proxy type. To build a proxy model, click the Build Proxy Model button. The Select and Build Proxy Model dialog box is displayed:
Select the desired proxy model type, one of Classic Polynomial Proxy, Ordinary Kriging Proxy, Advanced Polynomial Proxy, or Kernel Proxy, and then click the Build Proxy button. Refer to Proxy Modeling for more information. The progress bar will change to green once the proxy model is built:
Click OK. The proxy model is built, and the model's prediction for the selected time series using the experiment's parameter values is displayed, as shown in the following example. The plot curves are: the simulator prediction using the parameter settings for Experiment 34; the proxy model prediction using the parameter settings for Experiment 34; the field history data; and the proxy model prediction using the parameter settings for the "what-if scenario".
In the above example:
● The plot shows the proxy dashboard display for experiment 34 of a history matching task that uses an ordinary kriging proxy model built after 50 experiments were run.
● The plot shows the fit of the simulator and proxy model predictions for experiment 34 with the field history data.
● The "what-if scenario" has been generated using the parameter settings of experiment 34 as a starting point, then adjusting PERMH to move its proxy model prediction down and below the field data, for the purpose of illustration. A user would more likely try to align the proxy model prediction for the what-if scenario as closely as possible with the field history data by adjusting one or more reservoir parameters, then add an experiment to the study with those values or, if the reference experiment has not yet run, update its parameters with those of the what-if scenario.
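The Training/Verification split behind proxy building can be sketched in miniature: fit a model on the Training experiments, then measure its error on held-out Verification experiments. This is an illustrative sketch with a simple one-parameter linear proxy and made-up values, not CMOST's actual polynomial or kriging implementation.

```python
def fit_linear_proxy(train):
    """Least-squares fit of y ~ c0 + c1 * x on (parameter, objective) pairs."""
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    c1 = (sum((x - mx) * (y - my) for x, y in train)
          / sum((x - mx) ** 2 for x, _ in train))
    c0 = my - c1 * mx
    return lambda x: c0 + c1 * x

# (parameter value, objective function value) pairs; numbers are made up.
training     = [(0.1, 1.2), (0.2, 2.1), (0.3, 2.9), (0.4, 4.1)]
verification = [(0.25, 2.5), (0.35, 3.5)]

proxy = fit_linear_proxy(training)

# Verification experiments are never used in the fit; their error indicates
# how well the proxy generalizes to unseen parameter combinations.
errors = [abs(proxy(x) - y) for x, y in verification]
print(max(errors))
```

A large verification error relative to the training fit is the signal to add more Training experiments and rebuild, which mirrors the rebuild workflow described above.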
6.6.3 Interacting with the Proxy Model

6.6.3.1 To assess proxy model prediction

As illustrated above, once you have built the proxy model, you can select any completed experiment to see how closely the proxy model prediction matches the experiment simulation results and field data. As more Training experiments are completed, you can click the Build Proxy Model button to rebuild the same type of proxy model, or to specify and build a new proxy model type.

6.6.3.2 To view the effect of parameter value variation

As illustrated above, through the What-if scenario area, you can investigate the effect that varying parameter values has on the prediction of the proxy model. In our previous example, the What-if scenario area, starting with the parameter settings for experiment 34, appears as follows:
● You can adjust the value of a continuous parameter, such as PERMH, by typing over the value or moving the slider.
● You can adjust the value of a discrete parameter, such as HTSORW, by selecting the desired value in the drop-down list.
● Add an experiment using the what-if scenario parameters to the Experiments Table or, if the reference experiment is not yet running, update its parameters using the what-if scenario parameter values.
6.6.3.3 To zoom in to Proxy Dashboard plots

As with other plots, you can define and zoom into an area to view the proxy dashboard plot in more detail, then right-click the plot and select Un-zoom to 100% to zoom back out.

6.6.3.4 To copy the Proxy Dashboard plot

You can save the proxy dashboard plot to one of several image file formats, or copy it to the Windows clipboard and then paste it into another application, such as Excel or Word.

6.6.3.5 To add experiments through the Proxy Dashboard

You can add an experiment using the parameter values for the what-if scenario by clicking the Add Experiment button or, if the selected experiment is not yet running, update its parameter values with those of the what-if scenario by clicking the Update Experiment button. Through the Experiments Table, you can make further adjustments; for example, you can change parameter values or the experiment's proxy role.
You can then run these experiments and, through the Proxy Dashboard, see how well the simulation results for the added or modified experiment compare with the field history data and the proxy model prediction.

6.6.3.6 To reload the proxy dashboard display

The Proxy Dashboard page does not automatically update while you remain on it; for instance, a selected experiment may complete, but its *.irf curve will not show up in the Proxy Dashboard plot unless you click Reload. If the CMOST engine has added experiments in the background (experiments added by the DECE engine, for example), the added experiments will be available for selection after you click Reload.
6.6.4 Changing the Proxy Role

At any time, you can open the Experiments Table, change the proxy role, and then return to the Proxy Dashboard. You may want to do this, for example, if you feel an experiment is an outlier and you do not want to use it for training purposes.
6.7 Simulation Jobs

You can view the status of the simulation jobs by clicking the Control Centre | Simulation Jobs node. As experiments are completed, details are recorded in the Simulation Jobs table, as shown in the following example:
NOTE: As with the Experiments Table, you can group the rows in the Simulation Jobs table by dragging the headings to the area above the table. You cannot, however, reorder the columns by dragging their headings right or left.

The columns in the table are as follows:
● Experiment: Unique (within the study) experiment ID number.
● Launcher ID: Each experiment in a study will have a unique experiment ID. Experiments are assigned a unique Launcher ID when they are submitted to the scheduler. The Launcher ID will differ from the experiment ID, since the scheduler may process, within a single session, experiments from multiple studies.
● Scheduler: The scheduler to which Launcher submitted the job.
● Execution Node: Computer executing the job.
● Launcher Status: Launcher job status, one of Waiting to Start, Running, or Complete.
● Submitted At: Time Launcher submitted the job to the simulator.
● Started At: Time the simulator started the simulation.
● Finished At: Time the simulator completed the simulation.
● Dataset: Dataset used for the simulation. If you selected Delete .dat on Normal Termination, the dataset will be deleted if the simulation job terminated normally. If you did not select Delete .dat on Normal Termination, the datasets will be stored in the study folder.
● Status: Launcher job status, one of Pending or Finished.
● Results Status:
  - Abnormal Termination: Job terminated by the simulator before reaching the stop time.
  - Incomplete: Job cannot run to completion as a result of hardware or software issues.
  - Normal Termination: The simulator ran the job through to completion without any problems.
  - Killed: Job killed by the user.
  - Failed: Job failed to complete due to a hardware, software, or license problem.
  - Unknown: While the job is running, its result status is unknown.
● Results Status Info: Information to clarify the reason for a particular Results Status. If the Results Status for an experiment is Abnormal Termination, Results Status Info may, in the case of an optimization study, be Convergence not achieved.
On the right side of the table, the following buttons are provided:
● Refresh Simulation Jobs: Click to force CMOST to obtain an update of the status of the simulation jobs.
● View Simulation Log: Open the selected job's simulation log file, if available, in your default text editor. The log file may not be available after the job is Complete, depending on your Simulation Settings.
● Launch Builder: Open the selected job's dataset in Builder. The .dat file for the job may not be available after the job is Complete, depending on your Simulation Settings.
● Launch Results Graph: Open the selected job's SR2 files, if available, in Results Graph. The SR2 files for the job may not be available after the job is Complete, depending on your Simulation Settings.
● Launch Results 3D: Open the selected job's SR2 files, if available, in Results 3D. The SR2 files for the job may not be available after the job is Complete, depending on your Simulation Settings.
7 Viewing and Analyzing Results
7.1 General Information

Through the Results & Analyses node, you can view the results of the CMOST runs. As soon as the study runs start, CMOST will begin to display the results.
7.1.1 Display of Multiple Plots

In the main page of each Results & Analyses node that has subnodes, you can display a mix of plots from any of the subnodes, as shown in the following example:
1. Click Select in the Operations area. The Select Plots dialog box is displayed.
2. As shown in the above example, immediate subnodes appear as tabs; in each tab, there is a table of lower-level nodes. Select the plots you want to display in the main node. You can select plots from different nodes and subnodes.
3. In the Operations area of the main node, configure the plot display by selecting the number of plots on the horizontal and the vertical. In the following Results & Analyses | Objective Functions node, we are displaying a run progress plot, a histogram, a cross plot, and a proxy analysis plot:
If the number of plots exceeds the number allowed per page, multiple pages are used and you can step through them. Alternatively, you can change the number of plots allowed horizontally and vertically, as shown above.
4. If you display the plots while the simulations are running, the plots are refreshed regularly; however, you can click the Refresh button to update the plots immediately if new data is available.
7.1.2 Screen Operations

Refer to Plots for information about general operations, such as saving plots to image files and the Windows clipboard, and zooming in and out.

7.1.3 Navigating the Tree View

Once you select a node in the tree view, you can navigate up and down open nodes using the UP and DOWN ARROW keys. You can also use the LEFT and RIGHT ARROW keys to open and close tree nodes. If there are no nodes to open or close, the LEFT and RIGHT ARROW keys move the cursor up and down the tree hierarchy.
7.2 Parameters

Through the Parameters node, you can display results information on the basis of study parameters.
7.2.1 Run Progress

Through the Parameters | Run Progress node, you can view plots that show the progress of the simulation runs in terms of study parameters:
The above example shows run progress based on parameter INJP2007 and the experiment ID. Each blue data point represents one experiment. As the run progresses, more experiments (data points) are added. For history matching and optimization studies, the current optimal solution is also indicated by a red diamond-shaped data point; it is updated whenever a better solution is found. As shown above, if you move the pointer over a data point, the experiment ID and the value of the parameter (INJP2007) are displayed. A Run Progress plot is produced for each study parameter, which you can view by selecting the plot node in the tree view.
If you right-click the plot and then select Data, you can open the Run Progress Data Table dialog box:
Using the buttons on the right, you can export the contents of the Run Progress Data Table to Excel, or copy it to the Windows clipboard and paste it into another application.
7.2.2 Histograms

Through the Parameters | Histograms node, you can view the frequency with which a parameter value has been used in study experiments, for example:
As with other plots, you can save the image in various formats, or copy it to the Windows clipboard then paste it into other applications. You can also zoom in and zoom out.
If you right-click the plot and then select Data, you can view and save the data in tabular format:
Using the buttons on the right, you can export the contents of the Results Histogram Data Table to Excel, or copy it to the Windows clipboard then paste it into another application.
7.2.3 Parameter Cross Plots

Through the Parameters | Cross Plots node, you can view and assess trends and relationships between parameters, and between parameters and objective functions. A typical cross plot is shown below:
To generate a cross plot:

1. In the tree view, under the Parameters | Cross Plots node, select the parameter you want to analyze.
2. In the Cross Plot Settings table to the left, select the cross plots you want to display (TotalOilProduced in the above example). As outlined previously, you can display multiple cross plots in one view using the Plots on horizontal and vertical settings. The cross plot automatically includes points from the base dataset if it is available, and optimal and user-highlighted solutions are identified.
7.3 Time Series

7.3.1 Observers

An example of a Time Series Results Observer plot is shown below:
The optimal solution in the above Time Series Results Observers plot is colored red. Other runs are shown in blue. The thicker the lines appear, the more similar runs exist to the one shown. If a field history file is specified for the Result Observer, data points from this file will be displayed in the plot as dark blue circles. For a history matching or an optimization minimization task, the optimal solution is the run with the lowest global objective function. For an optimization maximization task, the optimal solution is the run with the highest global objective function.
To export time-series data to a .txt file:

1. Right-click anywhere in the time-series plot and then select Data. The Export Series Data – Time Settings dialog box is displayed.
2. Set the time you want to export with the data, one of:
   - Export time for each experiment: The data will be exported with the time associated with each experiment.
   - Export common time for all experiments: The data will be adjusted to a common time before it is exported. In this case you define a start time, an end time, and the number of points between them.
3. Click Next. The Export Series Data dialog box is displayed.
4. Select the series that you want to export, one of the following:
   - Export all experiments
   - Export user-highlighted experiments
   - Export optimal solution experiment
   - Export user selected experiments: If you select this option, you can click User Selections to open the Select experiments to export table. You can then select multiple experiments, using the CTRL and SHIFT keys as necessary, and click Check. You can also select experiments and click Uncheck to cancel a selection. Finally, you can click the Export check box directly.
5. Click Next. The Export Series Data – File Information dialog box is displayed.
6. Enter the specification and configuration information for the exported text file.
7. Click Finish to export the time-series data.
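The common-time option in step 2 amounts to interpolating each experiment's series onto one shared, evenly spaced time grid defined by the start time, end time, and number of points. A minimal sketch of that idea in Python (our own illustration, not CMOST's implementation):

```python
def resample_to_common_time(times, values, start, end, num_points):
    """Linearly interpolate one experiment's series onto a shared, evenly
    spaced time grid so all experiments can be exported on common times."""
    step = (end - start) / (num_points - 1)
    grid = [start + i * step for i in range(num_points)]
    out = []
    for t in grid:
        # Clamp to the series range, then interpolate between neighbours.
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            j = next(i for i in range(1, len(times)) if times[i] >= t)
            frac = (t - times[j - 1]) / (times[j] - times[j - 1])
            out.append(values[j - 1] + frac * (values[j] - values[j - 1]))
    return grid, out

# One experiment reported at irregular times, resampled onto 5 common points.
grid, vals = resample_to_common_time([0.0, 2.0, 10.0], [0.0, 4.0, 20.0], 0.0, 10.0, 5)
print(grid)  # [0.0, 2.5, 5.0, 7.5, 10.0]
print(vals)  # [0.0, 5.0, 10.0, 15.0, 20.0]
```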
7.4 Property vs. Distance

7.4.1 Observers

An example of a property vs. distance results observer plot is shown below:
As with the time series results observers, the optimal solution is colored red. User-highlighted solutions are shown in purple. The remaining runs are shown in blue. If a field history file is specified for the Result Observer, it is displayed as circular dark blue data points. For a history matching or an optimization minimization task, the optimal solution is the run with the lowest global objective function. For an optimization maximization task, the optimal solution is the run with the highest global objective function. You can export property vs. distance observer data to a text file in a manner similar to the procedure for exporting time series observer data, described in Exporting Time Series Data. In the case of property vs. distance data, the time setting step is not necessary.
7.5 Objective Functions

7.5.1 Run Progress

In the Objective Function Run Progress plots, the optimal solution is shown in red. In the following example, we have moved the pointer over the optimal solution data point to display the experiment ID and the value of GlobalObj:
As with other plots, you can save the image in various formats, or copy it to the Windows clipboard and paste it into other applications. If you right-click the plot and then select Data, you can view and save the data in tabular format.
7.5.2 Histogram

The Objective Function Histogram plot shows the distribution of objective function values calculated from the experiments:
As with other plots, you can save the image in various formats, or copy it to the Windows clipboard and paste it into other applications. You can also zoom in and zoom out. If you right-click the plot and then select Data, you can view and save the data in tabular format:
Using the buttons on the right, you can export the contents of the Results Histogram Data Table to Excel, or copy it to the Windows clipboard then paste it into another application.
7.5.3 Objective Function Cross Plots

Using objective function cross plots, you can identify trends and relationships between, for example, objective functions and parameters, as shown below:
7.5.4 OPAAT Analysis

If you are performing a sensitivity analysis using the OPAAT engine, OPAAT plots are produced for each objective function, using different experiments as the reference, as shown in the following example, where we have used the Select button to select multiple OPAAT displays.
In the above example, the OPAAT plot uses Experiment ID 1 as the reference. The darker vertical line indicates the value of the objective function produced by the reference experiment. The values of the objective function obtained by varying a parameter are shown by the side bars. If there is a positive relationship between the objective function and the parameter, the Parameter Upside (blue bar) is to the right and the Parameter Downside (red bar) is to the left of the reference experiment line. If the relationship between the objective function and the parameter is negative, the bars are reversed; i.e., the blue bar is on the left and the red bar is on the right. The overall length of the bar indicates the degree to which changing the value of the parameter affects the value of the objective function.

In Experiment ID 1, PERMH had a value of 4500. If we decrease PERMH to 3000, holding all other parameters constant, ProducerCumOil is reduced to 951. If we increase PERMH to 6000, ProducerCumOil increases to 1209. There is a strong positive relationship between PERMH and ProducerCumOil, indicated by the red bar to the left and the blue bar to the right. For comparison, the relationship between HTSORG and ProducerCumOil is a negative one; i.e., starting with Experiment ID 1 and increasing HTSORG to 0.06 while holding all other parameters constant will decrease ProducerCumOil to 1063.

NOTE: When a horizontal bar in the OPAAT diagram ends on the reference experiment line, this means the experiment was conducted with the parameter at a minimum or maximum value.
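The bars are computed one parameter at a time: each parameter is moved to its low and high value while all other parameters are held at the reference experiment's values. A minimal sketch of this idea in Python, using a made-up objective function in place of a reservoir simulation (the function, parameter names, and values are illustrative only, not CMOST's internal code):

```python
def opaat_effects(objective, reference, ranges):
    """One-parameter-at-a-time sensitivity: evaluate the objective at each
    parameter's low and high value, holding all other parameters at their
    reference values, and report the change from the reference result."""
    base = objective(reference)
    effects = {}
    for name, (low, high) in ranges.items():
        down = dict(reference, **{name: low})   # only this parameter moves
        up = dict(reference, **{name: high})
        effects[name] = (objective(down) - base, objective(up) - base)
    return base, effects

# Made-up objective: grows with permh, shrinks with htsorg (illustration only).
obj = lambda p: 0.2 * p["permh"] - 3000.0 * p["htsorg"]
base, eff = opaat_effects(obj, {"permh": 4500.0, "htsorg": 0.04},
                          {"permh": (3000.0, 6000.0), "htsorg": (0.02, 0.06)})
print(eff["permh"])   # downside < 0 < upside: positive relationship
print(eff["htsorg"])  # downside > 0 > upside: negative relationship
```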
7.5.5 Proxy Analysis

Information is displayed in this node if there are enough experiments to build the proxy model. Once enough runs are available, one or more types of proxy models (linear, quadratic, reduced linear, reduced quadratic, and ordinary kriging) will be created. For sensitivity analysis studies, the proxy models are then used to determine the main (linear) effects, interaction effects, and quadratic (nonlinear) effects. For uncertainty assessment studies, proxy models are used to perform Monte Carlo simulations using the distributions from the Prior Probability Distribution Functions defined in the Parameters page of the study file.

NOTE: It is important to check the verification plot of each response surface model for obvious outliers. If there are outliers, their cause should be investigated. Sometimes the jobs that correspond to the outliers may need to be re-run or excluded from the analysis.

7.5.5.1 Proxy Model Verification Plot
In the Proxy Analysis node, if you select an objective function subnode, information about the proxy models for that objective function will be displayed, as shown in the following example, in which the Model Quality Check (QC) tab is selected:
The Model QC plot shows how closely the proxy model predictions match actual values from the simulations. The 45 degree line represents a perfect match between the proxy model and actual simulation results. The closer the points are to the 45 degree line, the better the match between the predicted and actual data. Points that fall on the 45 degree line are perfectly predicted. Points that are far from the 45 degree line are outliers. If there are outliers, their cause needs to be investigated before making use of the proxy model.
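The visual check against the 45 degree line can also be done numerically. Below is a minimal sketch (our own illustration, not CMOST's algorithm) that scores how well proxy predictions match actual simulation values and flags outliers by residual size:

```python
def proxy_qc(actual, predicted, outlier_factor=3.0):
    """Score predicted-vs-actual agreement with R^2, and flag points whose
    absolute residual exceeds outlier_factor times the mean absolute
    residual (such points sit far from the 45 degree line)."""
    n = len(actual)
    mean = sum(actual) / n
    ss_tot = sum((a - mean) ** 2 for a in actual)
    residuals = [p - a for a, p in zip(actual, predicted)]
    r2 = 1.0 - sum(r ** 2 for r in residuals) / ss_tot
    mad = sum(abs(r) for r in residuals) / n
    outliers = [i for i, r in enumerate(residuals) if abs(r) > outlier_factor * mad]
    return r2, outliers

actual = [10.0, 20.0, 30.0, 40.0, 50.0]
predicted = [10.5, 19.5, 30.2, 39.8, 75.0]  # last point sits far off the 45 degree line
r2, outliers = proxy_qc(actual, predicted)
print(outliers)      # [4]
print(round(r2, 3))  # 0.374
```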
For polynomial regression models (Linear, Quadratic, Reduced Linear, and Reduced Quadratic), the lower and upper 95% confidence curves are superimposed on the actual-by-predicted plot. These confidence curves are useful for determining whether the regression model is statistically significant. The lower and upper 95% confidence curves are determined using the equations given in “Leverage Plots for General Linear Hypotheses”, John Sall, The American Statistician, November 1990, Vol. 44, No. 4. The dark blue points are the training experiments used by CMOST to create the proxy model. The green points are verification experiments used by CMOST to check whether the created proxy model is a good proxy for the actual simulation results. For ordinary kriging models, all of the points for the training jobs fall exactly on the 45 degree line; therefore, only the verification jobs can be used to estimate the predictability of an ordinary kriging model.

7.5.5.2 Proxy Model Data Table
The proxy model can be viewed in tabular form by right-clicking the verification plot then selecting Data to open the Proxy Model Qc Results table:
As with other results tables, you can save the contents of the Proxy Model Qc Results table to Excel or to the Windows clipboard, from which it can be pasted into another application.
7.5.5.3 Proxy Model Statistics

A detailed report of the response surface statistics is available by selecting the Statistics tab, shown in the following example (the Effect Screening Using Normalized Parameters and Coefficients in Terms of Actual Parameters tables in the example have been cropped to fit on the page):
For polynomial regression models (Linear, Quadratic, Reduced Linear, and Reduced Quadratic), see Proxy Modeling for an explanation of the statistical terms used in CMOST. As shown above, the Statistics tab contains the following five sections:
• Summary of Fit
• Analysis of Variance
• Effect Screening Using Normalized Parameters
• Coefficients in Terms of Actual Parameters
• Equation in Terms of Actual Parameters
For ordinary kriging models, the variogram base, exponent, and weights are shown if the number of experiments is not more than 100. You can copy the contents of the Statistics tab to Excel: select the desired portion of the tab with your mouse, or press CTRL+A to select everything; right-click in the selected area and then select Copy; in Excel, click Paste.

7.5.5.4 Proxy Model Effective Estimate
For information on interpreting the information in this tab, refer to Linear Model Effect Estimates, Quadratic Model Effect Estimates, and Reduced Model Effect Estimates, as appropriate.

7.5.5.5 Monte Carlo Simulations
In the case of uncertainty analyses, the Monte Carlo Simulation tab shows the distribution of values for each objective function with all uncertain parameters sampled from the prior probability density functions, as illustrated in the following example:
The plot shows a histogram of objective function values, to illustrate the shape of the probability density function, as well as the cumulative probability. P10, P50, and P90 values are highlighted.
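The P10, P50, and P90 values come from sampling the parameters from their priors, evaluating the proxy for each sample, and reading percentiles off the sorted results. A rough sketch with a made-up proxy and uniform priors (CMOST's actual sampling and percentile conventions may differ):

```python
import random

def monte_carlo_percentiles(proxy, priors, n=10000, seed=42):
    """Sample each parameter from its prior, evaluate the proxy for every
    sample, and read P10/P50/P90 off the sorted results."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        sample = {name: rng.uniform(lo, hi) for name, (lo, hi) in priors.items()}
        results.append(proxy(sample))
    results.sort()
    pick = lambda q: results[int(q * (n - 1))]  # simple percentile convention
    return pick(0.10), pick(0.50), pick(0.90)

# Made-up proxy for an objective function of two uncertain parameters.
proxy = lambda p: 100.0 * p["por"] + 0.01 * p["permh"]
p10, p50, p90 = monte_carlo_percentiles(proxy, {"por": (0.2, 0.3),
                                                "permh": (1000.0, 5000.0)})
print(p10 <= p50 <= p90)  # True
```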
Data can be viewed in tabular form by right-clicking the plot and then selecting Data. The Monte Carlo Unconditional Simulation Results dialog box is displayed, as shown in the following example:
The table shows the distribution data for the selected Monte Carlo simulations, sorted by cumulative probability. In this data table, you can find the objective function value and the corresponding parameter values for typical cumulative probabilities, such as P10, P50, and P90. Click the corresponding buttons to copy the data to the Windows clipboard, or to export it to an Excel spreadsheet. If you select rows of the table using the CTRL and SHIFT keys, only those rows will be copied to the clipboard or spreadsheet. Similarly, you can select one or more data rows and then click the corresponding button to create new experiments with those parameter settings. You can also right-click the plot and then select commands to save the image in one of several formats, or copy the image to the Windows clipboard. If you right-click the plot and then select All Generated Monte Carlo Simulation Data, a table opens that shows the results of all the Monte Carlo simulations. Again, you can copy the content of the table to the Windows clipboard or to a spreadsheet, as outlined above.
7.5.5.6 Proxy Settings

Through the Proxy Settings tab, you can select a proxy type and then configure the available settings. After you change your settings, click the Build Proxy button.
8 General and Advanced Operations
8.1 CMM File Editor

8.1.1 Introduction

The CMOST CMM File Editor is a tool for viewing and editing the CMOST master dataset (.cmm) and related include (.inc) files. This editor can be used to:
• Create, insert, and delete CMOST parameters.
• Comment out and uncomment multiple lines in a file.
• Create, edit, and extract include files.
• Navigate quickly through the file being edited.
8.1.1.1 To Start the CMOST CMM Editor

To start the CMM Editor, click the Edit button on the General Properties page or the Parameters page.

CMOST CMM Editor Toolbar and Status Bar

Function names are shown as tooltips as you move the pointer over the toolbar buttons. The toolbar provides the following functions: Save, Undo, Redo, Cut, Copy, Paste, Find & Replace, Go To Line Number, Line Number, Create CMOST Parameter, Delete CMOST Parameter, Previous CMOST Parameter, Next CMOST Parameter, Search CMOST Parameter, CMOST Parameters, Add Comment, Comment Lines, Uncomment Lines, Open Include File, Create Include File, Extract Include File, Reservoir Sections, Toggle Syntax Highlighting, Toggle Outlining, Zoom In, and Zoom Out.
8.1.1.2 CMOST CMM Editor Context Menu

Most functions can be accessed through the context menu. To display the context menu, right-click in the text editor:
NOTE: Refer to Keyboard Shortcuts for information about shortcuts.

8.1.1.3 To Create/Insert a CMOST Parameter

Select the part that needs to be replaced with a CMOST parameter, and then click the Create CMOST Parameter button. Enter a parameter name and, optionally, a default value to create a new parameter. To insert an existing parameter, select the parameter name in the drop-down list. If a default value was defined for the selected parameter, that default value will be used.

Click OK. The selected text will be replaced by a CMOST parameter.
8.1.1.4 To Delete a CMOST Parameter

Move the cursor into the parameter definition, and then click the Delete CMOST Parameter button. The CMOST parameter will be deleted. If a default value is defined, the parameter will be replaced by that value.
8.1.2 Working with Comments

8.1.2.1 To Add a Comment

Move the cursor to the place where you want to add a comment, and then click the Add Comment button to add a new comment.

8.1.2.2 To Comment Out Selected Lines

Select one or more lines, then click the Comment Lines button to comment out the selected lines.

8.1.2.3 To Uncomment

Select one or more comment lines, then click the Uncomment Lines button to delete the comment indicator in each selected line.

NOTE: Only the CMG default comment indicator (**) is supported.
8.1.3 Working with Include Files

8.1.3.1 To Create an Include File

Select a portion of the .cmm file, and then click the Create Include File button. Enter the include file name in the Create Include File dialog box.

NOTE: The include file will be saved to the same folder as the master dataset file.
Click OK. The include file will be created, and the selected text will be replaced by an include line. In the Create/Extract Include File dialog box, if the Create CMOST parameter check box is selected, a CMOST parameter will be created using the new include file as its default value.

8.1.3.2 To Edit an Include File

Move the cursor to an include file line, then click the Open Include File button to open the include file in a new CMM Editor session.

8.1.3.3 To Extract an Include File

Move the cursor to an include file line, and then click the Extract Include File button. The Extract Include File dialog box will be displayed. Click OK. The include file line in the master dataset will be replaced by the content of the include file. If Delete include file is selected, the include file will be deleted after extraction.
8.1.4 Navigation Tools

8.1.4.1 To Go to a Line

Enter the line number you want to go to in the Line Number box in the toolbar, and then press Enter. At any time, click the Go To Line Number button to go to the line number entered in the box.

8.1.4.2 To Go to a Section

Click the Reservoir Sections button and select any item in the drop-down list to go to the specified section.

8.1.4.3 To Go to a CMOST Parameter

Click the CMOST Parameters drop-down list in the upper-right of the screen and select any item in the list to go to the line that contains the parameter. If a CMOST parameter appears multiple times in the file, click the Search CMOST Parameter button to go to its next appearance. CMOST parameters in the list are sorted alphanumerically.

8.1.5 Other Functions

8.1.5.1 Toggle Outlining

Click the Toggle Outlining button to collapse or expand multi-line comments and data blocks. You can collapse sections of the CMM file by clicking the collapse icon at the head node, or expand them by clicking the expand icon. Move the cursor over any collapsed head node to preview part of the collapsed lines.
8.1.5.2 Toggle Syntax Highlighting

Click the Toggle Syntax Highlighting button to turn syntax highlighting on or off. For large CMM files, turning highlighting off speeds up navigation of the file.

8.1.5.3 Find/Replace Text

Click the Find and Replace button on the toolbar to open the Find and Replace dialog box. Regular expression searching is supported.
8.1.5.4 Block Selection

Block selection is used to select a rectangular portion of text.
• To make a block selection with the mouse, hold the ALT key, click in the text area, and then drag to define the selection.
• To make a block selection with the keyboard, hold the SHIFT+ALT keys, and then press any arrow key.
8.1.5.5 Multiple Views of a File

Select and drag the handle located at the top right corner of the editor to enable multiple views of a file. A maximum of two vertically split views is supported. Horizontally split views are not available.
8.1.6 Keyboard Shortcuts

Command               Shortcut Keys
Undo                  CTRL+Z
Redo                  CTRL+Y
Cut                   CTRL+X
Copy                  CTRL+C
Paste                 CTRL+V
Find                  CTRL+F
Save File             CTRL+S
Block Selection       ALT+Mouse Selection
Create Parameter      CTRL+P
Delete Parameter      CTRL+SHIFT+P
Open Include File     CTRL+G
Create Include File   CTRL+E
Extract Include File  CTRL+SHIFT+E
Add Comment           CTRL+K
Comment Out           CTRL+SHIFT+C
Uncomment             CTRL+SHIFT+U
8.2 Handling Large Files

When handling large dataset files, if the CMM File Editor seems slow, try saving grid block properties such as POR and PERMI into include files. This can be done using Builder; refer to the “Organizing Include Files” section in the Builder User Guide for more information. Segments related to these keywords often account for more than 90% of the size of a large file, so the size of the main CMM file can be greatly reduced by using include files.
8.3 Formula Editor

Formulas are equations that perform calculations on values during a CMOST run. Formulas can appear anywhere in a CMOST master dataset, provided that each formula is enclosed within a start tag (<cmost>) and an end tag (</cmost>). As an advanced feature, CMOST also allows using JScript code to create formulas (see Using JScript Expressions in CMOST for details).
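Conceptually, when CMOST creates a simulation job it replaces each tagged formula with its evaluated value. A rough Python sketch of that substitution, assuming <cmost>...</cmost> tags as in the examples later in this chapter, and using Python's own eval in place of the CMOST formula evaluator (illustration only, not CMOST code):

```python
import re

def substitute_formulas(dataset_text, parameters):
    """Replace each <cmost>...</cmost> span with its evaluated result,
    using the job's chosen parameter values as variables. The non-greedy
    pattern keeps each tag pair on one line, matching the same-line rule."""
    def evaluate(match):
        # eval() stands in for the CMOST formula evaluator (illustration only).
        return str(eval(match.group(1), {"__builtins__": {}}, dict(parameters)))
    return re.sub(r"<cmost>(.*?)</cmost>", evaluate, dataset_text)

# Hypothetical dataset line; SteamPress is the job's chosen parameter value.
line = "INJECTOR 'INJ1' BHP <cmost>SteamPress * 2</cmost> **kPa"
print(substitute_formulas(line, {"SteamPress": 2500}))
# INJECTOR 'INJ1' BHP 5000 **kPa
```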
8.3.1 Parts of a Formula

A formula can contain any or all of the following: functions, variables (parameter or objective function term names), constants, and operators.
• Constants: Numbers or text values entered directly into a formula, such as 0.001 or "OPEN".
• Functions: For example, the POWER(30, 0.25) function returns the result of 30 raised to the power 0.25.
• Variables: For example, SteamPress returns the chosen value of parameter SteamPress for the particular simulation job created by CMOST.
• Operators: The + (plus) operator adds, and the * (asterisk) operator multiplies.
8.3.2 Constants in Formulas

A constant is a value that is not calculated. For example, the number 210 and the text "OPEN" are constants. Text must be enclosed in double quotation marks. An expression, or a value resulting from an expression, is not a constant.

8.3.3 Functions in Formulas

Functions are predefined formulas that perform calculations by using specific values, called arguments, in a particular order or structure. Functions can be used to perform simple or complex calculations.

Structure of a function: POWER(SteamPress * 0.001, 0.2388057)
• Structure: A function begins with the function name, an opening parenthesis, the arguments for the function separated by commas, and a closing parenthesis.
• Function name: Refer to List of Built-in Functions in CMOST.
CMOST User Gui de
General and Advanced Operations
181
• Arguments: Arguments can be constants, parameter names, formulas, or other functions. The argument you designate must produce a valid value for that argument. For the LOOKUP function, arrays of constants or formulas can be arguments as well. In certain cases, you may need to use a function as one of the arguments of another function; the function used as an argument is then a nested function. A nested function used as an argument must return the same type of value that the argument uses.

8.3.4 Variables in Formulas

For formulas in the Master Dataset and Parameters page, a parameter name can be used as a variable name. For formulas in the Objective Functions page, an objective function term name can be used as a variable name. A parameter name identifies a parameter defined in your CMOST study file. For a particular job created by CMOST, each parameter has one chosen value, which CMOST uses when evaluating formulas. An objective function term name identifies a term within the objective function. For a completed job, CMOST extracts the value of the term from the SR2 files and uses it in calculating formulas.

For example, the following formula contains the parameter name SteamPress. For a specific simulation job created by CMOST in which the chosen value of SteamPress is 2500, the formula evaluates to 223.78.

179.7989 * POWER(SteamPress * 0.001, 0.2388057)

As another example, the following formula contains two objective function terms: CumOil and CSOR. CumOil is defined as the Cumulative Oil SC at 2008-12-01 and CSOR is defined as the cumulative steam oil ratio at 2008-12-01. For a specific finished simulation job in which CumOil is 6.528e5 m3 and CSOR is 3.15, the formula evaluates to 0.207.

1e-6 * CumOil/CSOR

8.3.5 Operators in Formulas

Operators specify the type of calculation that you want to perform on the elements of a formula. CMOST includes two types of calculation operators: arithmetic and comparison.

Arithmetic operators: To perform basic mathematical operations such as addition, subtraction, or multiplication, and to produce numeric results, use the following arithmetic operators.

Arithmetic operator   Meaning (Example)
+ (plus sign)         Addition (3+3)
– (minus sign)        Subtraction (3–1), Negation (–1)
* (asterisk)          Multiplication (3*3)
/ (forward slash)     Division (3/3)
Comparison operators: You can compare two values with the following operators. When two values are compared by using these operators, the result is a logical value, either TRUE or FALSE.

Comparison operator                 Meaning (Example)
== (double equal sign)              Equal to (A1==B1)
> (greater than sign)               Greater than (A1>B1)
< (less than sign)                  Less than (A1<B1)
>= (greater than or equal to sign)  Greater than or equal to (A1>=B1)
<= (less than or equal to sign)     Less than or equal to (A1<=B1)
!= (not equal to sign)              Not equal to (A1!=B1)
8.3.6 Formula Calculation Order

Formulas calculate values in a specific order. CMOST evaluates a formula from left to right, according to the precedence of each operator in the formula.

Operator precedence: If you combine several operators in a single formula, CMOST performs the operations in the order shown in the following table. If a formula contains operators with the same precedence (for example, both a multiplication and a division operator), CMOST evaluates the operators from left to right.

Operator           Description
–                  Negation (as in –1)
* and /            Multiplication and division
+ and –            Addition and subtraction
== < > <= >= !=    Comparison

Use of parentheses: To change the order of evaluation, enclose in parentheses the part of the formula to be calculated first. For example, the following formula produces 11 because CMOST calculates multiplication before addition. The formula multiplies 2 by 3 and then adds 5 to the result.

=5+2*3

In contrast, if you use parentheses to change the syntax, CMOST adds 5 and 2 together and then multiplies the result by 3 to produce 21.

=(5+2)*3
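Because these precedence rules match ordinary programming-language conventions, the examples above can be checked directly; for instance, in Python:

```python
# Multiplication binds tighter than addition, mirroring the CMOST examples.
print(5 + 2 * 3)    # 11
print((5 + 2) * 3)  # 21

# Same-precedence operators evaluate left to right.
print(8 / 4 * 2)    # 4.0, i.e. (8 / 4) * 2, not 8 / (4 * 2)

# Comparison operators bind loosest, yielding a TRUE/FALSE-style result.
print(5 + 2 * 3 == 11)  # True
```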
8.3.7 List of Built-in Functions in CMOST

NOTE: This list includes only the built-in functions in CMOST. As an advanced feature, CMOST allows the use of JScript code to write formulas, so all functions supported by JScript are also supported by CMOST. See Using JScript Expressions in CMOST for more details.
8.3.7.1 IF

Returns one value if a condition you specify evaluates to TRUE, and another value if it evaluates to FALSE. Use IF to conduct conditional tests on values and formulas.

Syntax

IF(logical_test, value_if_true, value_if_false)

logical_test is any value or expression that can be evaluated to TRUE or FALSE. For example, WellLen>=800 is a logical expression; if the value of WellLen is greater than or equal to 800, the expression evaluates to TRUE. Otherwise, the expression evaluates to FALSE. This argument can use any comparison operator.

value_if_true is the value that is returned if logical_test is TRUE. Value_if_true can be a constant or a formula.

value_if_false is the value that is returned if logical_test is FALSE. Value_if_false can be a constant or a formula.

Example 1

Use the following IF functions to open or close certain perforations of a well according to the length of the well in a CMOST master dataset:

PERF GEO 'Horizontal'
**$ UBA   ff  Status                                        Connection
9 3 12    1.  OPEN                                          FLOW-TO 'SURFACE'
9 4 12    1.  OPEN                                          FLOW-TO 1
9 5 12    1.  OPEN                                          FLOW-TO 2
9 6 12    1.  OPEN                                          FLOW-TO 3
9 7 12    1.  <cmost>IF(WL>=500, "OPEN", "CLOSED")</cmost>  FLOW-TO 4
9 8 12    1.  <cmost>IF(WL>=600, "OPEN", "CLOSED")</cmost>  FLOW-TO 5
9 9 12    1.  <cmost>IF(WL>=700, "OPEN", "CLOSED")</cmost>  FLOW-TO 6

For a simulation job created by CMOST with WL=600, the perforations for well 'Horizontal' in the dataset will be:

PERF GEO 'Horizontal'
**$ UBA   ff  Status  Connection
9 3 12    1.  OPEN    FLOW-TO 'SURFACE'
9 4 12    1.  OPEN    FLOW-TO 1
9 5 12    1.  OPEN    FLOW-TO 2
9 6 12    1.  OPEN    FLOW-TO 3
9 7 12    1.  OPEN    FLOW-TO 4
9 8 12    1.  OPEN    FLOW-TO 5
9 9 12    1.  CLOSED  FLOW-TO 6
The LOOKUP function looks in array_constants for the specified value and returns a value from the same position in array_formulas. Use LOOKUP to define relationships between parameters.
Syntax

LOOKUP(lookup_value, array_constants, array_formulas)
lookup_value is a value that LOOKUP searches for in an array. Lookup_value must be a parameter name.

array_constants is a group of constants that you want to compare with lookup_value.

array_formulas is a group of formulas, one of which will be the return value of the LOOKUP function. The number of formulas must be equal to the number of constants in array_constants.

Remarks

array_constants format
• Constants are enclosed in braces { } and separated with commas (,).
• Array constants can contain numbers and text. Text is case-sensitive.
• Numbers in array constants can be in integer, decimal, or scientific format.
• Text must be enclosed in double quotation marks, for example, "CLOSED".
• Array constants cannot contain formulas or parameter names.

array_formulas format
• The format of array_formulas is the same as array_constants, except that array_formulas can contain constants, formulas, and variable names.

Important
• The LOOKUP function finds only the value that exactly matches the lookup_value. If LOOKUP cannot find the lookup_value, the result is undefined.
Example 2

Use the LOOKUP function to set the mole fraction of solution gas according to the gas-oil ratio in a CMOST master dataset.

NOTE: <cmost> and </cmost> must be in the same line.

**$ Property: Oil Mole Fraction(Soln_Gas)
MFRAC_OIL 'Soln_Gas' CON <cmost>LOOKUP(ogor, {7, 5, 4, 3}, {0.10, 0.08, 0.06, 0.04})</cmost>

For a simulation job created by CMOST with ogor=4, the mole fraction of solution gas will be set to 0.06:

**$ Property: Oil Mole Fraction(Soln_Gas)
MFRAC_OIL 'Soln_Gas' CON 0.06
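CMOST formulas are evaluated as JScript (a JavaScript dialect), so the exact-match behaviour of LOOKUP can be prototyped outside CMOST in plain JavaScript. The sketch below is illustrative only, not CMOST's internal implementation; the parameter name ogor and the values are taken from Example 2.

```javascript
// Illustrative JavaScript sketch of CMOST's LOOKUP semantics: exact match only,
// with an undefined result when no constant matches (not CMOST's internal code).
function LOOKUP(lookupValue, arrayConstants, arrayFormulas) {
  for (var i = 0; i < arrayConstants.length; i++) {
    if (arrayConstants[i] === lookupValue) {
      return arrayFormulas[i];
    }
  }
  return undefined; // no exact match: the result is undefined
}

var ogor = 4; // gas-oil ratio parameter from Example 2
var mfrac = LOOKUP(ogor, [7, 5, 4, 3], [0.10, 0.08, 0.06, 0.04]); // 0.06
```

Note that a lookup_value of 6 would return undefined here, matching the remark above that LOOKUP requires an exact match.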
8.3.7.3 ABS

Returns the absolute value of a number. The absolute value of a number is the number without its sign.

Syntax

ABS(number)

number is the number of which you want the absolute value.

Example

ABS(-12.5) returns 12.5
8.3.7.4 COS

Returns the cosine of the given angle.

Syntax

COS(number)

number is the angle in radians for which you want the cosine.

Remark

If the angle is in degrees, multiply it by 3.14159/180 to convert it to radians.

Example

COS(1.047) returns 0.500171
8.3.7.5 EXP

Returns e raised to the power of number. The constant e equals 2.71828182845904, the base of the natural logarithm.

Syntax

EXP(number)

number is the exponent applied to the base e.

Remark

To calculate powers of other bases, use the POWER function.

Example

EXP(2) returns 7.389056
8.3.7.6 LN

Returns the natural logarithm of a number. Natural logarithms are based on the constant e (2.71828182845904).
Syntax

LN(number)

number is the positive real number for which you want the natural logarithm.

Remark

LN is the inverse of the EXP function.

Example

LN(2.718281828) returns 1
8.3.7.7 LOG10

Returns the base-10 logarithm of a number.

Syntax

LOG10(number)

number is the positive real number for which you want the base-10 logarithm.

Example

LOG10(100) returns 2
8.3.7.8 MAX

Returns the largest value in a set of values.

Syntax

MAX(number1, number2, ...)

number1, number2, ... are numbers for which you want to find the maximum value.

Remarks
• You can specify arguments that are numbers, variable names, and formulas.
• The arguments must contain at least two values.

Example

MAX(var1, 2.54) returns 2.54 if var1=1.06
8.3.7.9 MIN

Returns the smallest number in a set of values.

Syntax

MIN(number1, number2, ...)

number1, number2, ... are values for which you want to find the minimum value.

Remarks
• You can specify arguments that are numbers, variable names, and formulas.
• The arguments must contain at least two values.
Example

For a SAGD well pair, you want to change the total liquid rate of the producer according to the steam rate of the injector. You can use the MIN function in your master dataset to achieve this:

INJECTOR MOBWEIGHT EXPLICIT 'WEST-I'
INCOMP WATER 1. 0. 0.
TINJW 253.2
QUAL 0.9
OPERATE MAX BHP 4200. CONT REPEAT
OPERATE MAX STW <cmost>Steam</cmost> CONT REPEAT
PRODUCER 'WEST-P'
OPERATE MIN BHP 2000 CONT REPEAT
OPERATE MAX STL <cmost>MIN(Steam*1.6, 1000)</cmost> CONT REPEAT

For a job with Steam=500, the operating constraints for producer 'WEST-P' will be:

PRODUCER 'WEST-P'
OPERATE MIN BHP 2000 CONT REPEAT
OPERATE MAX STL 800 CONT REPEAT

For a job with Steam=700, the operating constraints for producer 'WEST-P' will be:

PRODUCER 'WEST-P'
OPERATE MIN BHP 2000 CONT REPEAT
OPERATE MAX STL 1000 CONT REPEAT
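The substituted STL values in the two jobs above can be reproduced with ordinary JavaScript, since CMOST's MIN maps directly onto Math.min. This is an illustrative sketch, not CMOST's internal code:

```javascript
// STL limit from the example above: MIN(Steam*1.6, 1000).
function stlLimit(steam) {
  return Math.min(steam * 1.6, 1000);
}

var stl500 = stlLimit(500); // 800, as in the Steam=500 job
var stl700 = stlLimit(700); // 1000, as in the Steam=700 job
```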
8.3.7.10 POWER

Returns the result of a number raised to a power.

Syntax

POWER(number, power)

number is the base number. It can be any number.

power is the exponent to which the base number is raised.

Example

POWER(5, 2) returns 25
8.3.7.11 SIN

Returns the sine of the given angle.

Syntax

SIN(number)

number is the angle in radians for which you want the sine.

Remark

If your argument is in degrees, multiply it by 3.14159/180 to convert it to radians.
Example

SIN(30*3.14159/180) returns 0.5
8.3.7.12 SQRT

Returns a positive square root.

Syntax

SQRT(number)

number is the number for which you want the square root.

Remark

The number must be non-negative.

Example

SQRT(2.0) returns 1.4142
8.3.7.13 TAN

Returns the tangent of the given angle.

Syntax

TAN(number)

number is the angle in radians for which you want the tangent.

Remark

If your argument is in degrees, multiply it by 3.14159/180 to convert it to radians.

Example

TAN(0.785) returns 0.99920
8.4 Using JScript Expressions in CMOST

Dynamic JScript code execution is a powerful feature that allows CMOST to be extended with code that is not compiled into the application. With it, users can customize and extend CMOST via custom JScript code. For example, users can write their own code for calculating objective functions; CMOST then reads the code and executes it on the fly. With dynamic code execution, CMOST gives users advanced control over the CMOST workflow, including simulation dataset generation, simulation output file post-processing, objective function calculations, and applying optimization constraints.

In CMOST, JScript code can appear in the Parameters, User-Defined Time Series, Objective Functions, User-Defined Nominal Global Objective Function Candidates, and Constraints pages to:
• Define complex relationships among parameters
• Define custom objective functions
• Define complex constraints and penalty functions
JScript is the Microsoft dialect of the ECMAScript scripting language specification, with JavaScript being another dialect. JScript is implemented as a Windows Script engine. It allows embedding ActiveX objects to reuse the functionality of many Microsoft Windows applications. For example, users can easily link CMOST to an Excel spreadsheet through JScript code for calculating objective functions.

A complete reference on the JScript language is available on the Microsoft Developer Network website: http://msdn.microsoft.com/en-us/library/x85xxsf4.aspx

The website below also provides reference information on some key JScript topics: http://ns7.webmasters.com/caspdoc/html/jscript_language_reference.htm

The following examples are provided in the CMOST installation directory to help users understand the use of JScript in CMOST:
• RelPermTableHM.cmp: uses JScript code to create a relative permeability table
• SAGD_2D_OP.cmp: uses JScript code to define custom objective functions
• SAGD_2D_Tuning.cmp: uses JScript code to read simulation output files

The following sections provide more details about the use of JScript code in CMOST.
8.4.1 Transferring Data from CMOST to User JScript Code

Because the JScript code written by users is executed dynamically by CMOST, data transfer between CMOST and the user's JScript code occurs through common variables that are "visible" both to CMOST and to the user's JScript code. The following table summarizes the common variables that are visible to the user's JScript code, depending on the location of the JScript code.

JScript Code Location: Common Variables

Parameters:
• All parameters except the parameter defined by the current JScript code
• All input and output files of the simulation job

Advanced Objective Functions:
• All parameters except the parameter defined by the current JScript code
• All input and output files of the simulation job

User-Defined Nominal Global Objective Function Candidates:
• All objective function terms defined for the current objective function
• All input and output files of the simulation job
• All parameters

Hard Constraints:
• All parameters
• All input and output files of the simulation job

Soft Constraints:
• All parameters
• All objective functions
• All input and output files of the simulation job

Soft Constraint Penalties:
• All parameters
• All objective functions
• All input and output files of the simulation job

To use the common variables in your JScript code, use the corresponding names directly. Do not use the same common variable names to define new variables. For example, if InjectionPress is the name of a parameter, you can use InjectionPress as a variable in your JScript code directly. However, you should not define a new variable with the name InjectionPress. Also, do not attempt to modify the value of any common variable declared by CMOST, because CMOST replaces the variable name with its actual value before the JScript is executed. The available variables are predefined in the formula; refer to Formula Editor for more information.
8.4.2 Accessing Simulation Job Input and Output Files

To access the input and output files of a simulation job, use the following common variables defined by CMOST:

Variable Name            Value
ProjectDirectory         Full path of the project directory
StudyDirectory           Full path of the study directory
ExperimentName           Name of the simulation job (the file name without extension)
ExperimentDatFilePath    Full path of the dataset of the simulation job
ExperimentLogFilePath    Full path of the log file of the simulation job
ExperimentOutFilePath    Full path of the out file of the simulation job
ExperimentIrfFilePath    Full path of the irf file of the simulation job
ExperimentMrfFilePath    Full path of the mrf file of the simulation job
For example, the following JScript code opens the log file of a simulation job and reads the material balance error for the job at the last time step.
var fso = new ActiveXObject("Scripting.FileSystemObject");
var ts = fso.OpenTextFile(ExperimentLogFilePath);
var line : String;
var matErrorStr : String;
var matError : double;
while (!ts.AtEndOfStream)
{
    line = ts.ReadLine();
    if (line.length >= 90)
    {
        matErrorStr = line.substring(84, 89);
        try
        {
            matError = double.Parse(matErrorStr);
        }
        catch (e)
        {
            // Do nothing
        }
    }
}
matError;
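The JScript above depends on the ActiveX FileSystemObject, which is only available in the Windows Script engine. The fixed-column parsing logic itself can be sketched in portable JavaScript; the column positions (84 to 89) mirror the JScript example, and the sample log lines below are fabricated for illustration:

```javascript
// Portable sketch of the parsing logic: scan lines and keep the last numeric
// value found in a fixed-width column slice (indices 84..88, as above).
function lastMatError(logText) {
  var lines = logText.split("\n");
  var matError = NaN;
  for (var i = 0; i < lines.length; i++) {
    var line = lines[i];
    if (line.length >= 90) {
      var value = parseFloat(line.substring(84, 89));
      if (!isNaN(value)) {
        matError = value; // keep the most recent value (last time step)
      }
    }
  }
  return matError;
}

// Fabricated 90-character sample lines with a value in columns 85-89.
function sampleLine(value) {
  var pad = new Array(85).join(" "); // 84 spaces
  return pad + value + " ";
}
var log = sampleLine("0.010") + "\n" + sampleLine("0.025");
var err = lastMatError(log); // 0.025
```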
8.4.3 Transferring Data from JScript Code to CMOST

When the user's JScript code is executed inside CMOST, the final result transferred from the JScript code to CMOST is the evaluation result of the last line of the JScript code. For example, the final value obtained by CMOST for the following JScript code is 10:

var a, b, c, d;
a=4; b=3; c=2; d=1;
a+b+c+d;
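The same last-expression convention can be observed with JavaScript's eval, which returns the completion value of the last statement, much as CMOST captures the result of dynamically executed JScript. A minimal sketch:

```javascript
// eval() returns the completion value of the last statement; the var
// declarations contribute nothing, so the final expression is the result.
var script = "var a, b, c, d; a = 4; b = 3; c = 2; d = 1; a + b + c + d;";
var result = eval(script); // 10
```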
8.4.4 Starting a New Line in the Dataset

The keyword data input system of the simulators requires that certain data entries must start on a new line. To start a new line in JScript code, use the string "\r\n". For example:

var kwLines : String;
kwLines  = "\r\n" + "** Group 1 Heaters";
kwLines += "\r\n" + "HEATR I J K " + UBAI + " 1 " + UBAK + " " + QH;
kwLines += "\r\n" + "UHTR I J K " + UBAI + " 1 " + UBAK + " 16351";
kwLines += "\r\n" + "TMPSET I J K " + UBAI + " 1 " + UBAK + " 1129";
kwLines += "\r\n" + "AUTOHEATER ON " + UBAI + " 1 " + UBAK;
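The line-building pattern above can be exercised in plain JavaScript. UBAI, UBAK, and QH are CMOST parameters; the values below are made-up placeholders used only to show that splitting on "\r\n" yields one keyword per line:

```javascript
// Sketch of building keyword lines with "\r\n" separators. UBAI, UBAK, and QH
// stand in for CMOST parameters; the values here are placeholders.
var UBAI = 5, UBAK = 3, QH = 1000000;

var kwLines = "";
kwLines += "\r\n" + "** Group 1 Heaters";
kwLines += "\r\n" + "HEATR I J K " + UBAI + " 1 " + UBAK + " " + QH;
kwLines += "\r\n" + "AUTOHEATER ON " + UBAI + " 1 " + UBAK;

var lines = kwLines.split("\r\n");
// lines[1] = "** Group 1 Heaters", lines[2] starts with "HEATR", ...
```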
9 Configuring Launcher and CMOST to Work Together

9.1 Introduction

CMOST relies on either Launcher or the CMG Job Service for running jobs. Before CMOST can be used, you will need to configure Launcher and the CMG Job Service to work properly. CMOST must also be configured to connect with Launcher or the CMG Job Service so that jobs can be queued and run properly.

9.2 Configuring Launcher

9.2.1 Launcher

Launcher is a Windows GUI application. It can be started at any time using the shortcut on the user's desktop or by running the executable (CMG.exe) directly from its installation folder (CMG_HOME\Launcher\xxxx.xx\Win32\EXE\).

9.2.2 CMG Job Service

CMG Job Service is a Windows Service application. The executable (CMG.JobService.exe) is installed in the CMG_HOME\CMGJobService directory. The service (CMG Job Service) is automatically created and registered with Windows when CMG software is installed. You may access the CMG Job Service via Control Panel | Administrative Tools | Services.
NOTE: Administrator rights are required in order to register, start, or stop a Windows Service.

If for any reason CMG Job Service is not available in the list of Windows Services, it can be registered by a user with Administrator rights using the following command:

sc create CMGJobService binPath= "<path to CMG.JobService.exe>"

Note that a space is required between the equal sign and the path to CMG.JobService.exe. If the path to CMG.JobService.exe contains spaces, it must be enclosed in quotes.

Because CMG Job Service is a Windows Service application, starting or stopping the service must be performed in Control Panel | Administrative Tools | Services. If the startup type is set to Automatic, CMG Job Service will be started automatically when the computer is rebooted. If the startup type is Manual, the user has to start the service manually by pressing the Start button (Administrator rights are required for starting the service manually).
9.2.3 Use Launcher Embedded Mode for Submitting Jobs

If your organization uses smart card technology for user credentials, you may use Launcher in Embedded Mode for submitting jobs. In Embedded Mode, the CMG Job Service is not used and the service may be stopped. In Embedded Mode, Launcher must always be open, and "Use job service to submit any jobs" must be cleared in Launcher | Configuration | Configure Local Job Server.
9.2.4 Use CMG Job Service for Submitting Jobs

If you use a user name and password for Windows login, you may use the CMG Job Service for submitting jobs. In this mode, CMG Job Service must be running; refer to CMG Job Service for how to start and stop the service. If jobs are submitted by CMOST to CMG Job Service, Launcher can be open or closed. However, if you want to see the list of jobs in Launcher, Launcher must be open and "Use job service to submit any jobs" must be selected in Launcher | Configuration | Configure Local Job Server.
9.2.5 Submitting Jobs to a Remote Computer

Jobs can be submitted to a remote computer through a Remote Scheduler. Launcher supports several types of remote schedulers:
• CMG Scheduler
• IBM Platform LSF
• Microsoft HPC
• Oracle Grid Engine
• Portable Batch System (PBS/TORQUE)

Here we show the configuration required for submitting jobs to a remote computer using CMG Scheduler. For other types of schedulers, refer to the Launcher Users Guide.
• Install CMG software on the remote computer. During the installation process, select Use CMG Job Service to submit jobs when prompted by the installation program.
• On the remote computer, go to Control Panel | Administrative Tools | Services and open CMG Job Service. Make sure the startup type is set to Automatic so that the service is started automatically when the computer is rebooted. If the service has not been started, press the Start button to start the service (Administrator rights are required to start the service manually).
• On the remote computer, open Launcher and go to Configuration | Configure Local Job Server. Make sure Use job service to submit any jobs and This computer is a CMG Cluster Compute Node are selected. Launcher can be closed after the setting is complete.
• On the user's local computer, open Launcher and go to Configuration | Configure Remote Schedulers. Follow the wizard to add a remote CMG Scheduler by providing the remote computer name.
• On the user's local computer, add your password to Launcher through Configuration | Password | Add/Modify (this step can be skipped if Launcher already has your password).

The user can now submit jobs to the remote computer from Launcher on the user's local computer.
NOTE: For jobs submitted to a remote computer, the remote computer needs to access the
working directory of job input and output files using a UNC path. If the working directory is a non-UNC path, Launcher will try to convert it to a UNC path. If the conversion is not successful, you will not be able to submit the job to a remote computer.
10 Troubleshooting

10.1 Introduction

This section provides information to help the user understand and resolve problems that may arise during use of the CMOST application.
10.2 Failed and Abnormal Termination Jobs

NOTE: Refer to Number of Perturbation Experiments for Each Abnormal Experiment for information about creating perturbation experiments to help obtain normal termination jobs.

1. Failed/incomplete jobs are jobs that cannot run to completion as a result of hardware or software issues. If a job fails or is incomplete, it should be able to run to completion using the same dataset after the hardware or software issue is resolved. To determine why a job has failed:
• In Launcher, find the job and check whether there are any error messages in the Message column. If the Message column is not shown, click Add/Remove Job Columns to add it.
• Open the .log or .out file of the job in Notepad and scroll down to the end of the file to check whether the simulator has written any termination messages there.
• Find the .dat file of the job and submit it to the scheduler directly in Launcher to see whether the job can run to completion.

Some possible causes for failed/incomplete jobs are:
• Missing or wrong password in Launcher. This can be verified by submitting a job to the scheduler directly in Launcher. If this is the cause, supply the correct password using Launcher | Configuration | Password | Add/Modify.
• Out of disk space. Check available disk space in the simulator working directory. If the remote schedulers have been configured to write simulation output files to a temporary folder on the execution computer and copy all files back to the simulator working directory when the job is done, it is possible that the location (usually the C: drive) for temporary files has been filled. If this is the case, you may remove unwanted temporary simulation output files from the remote execution computers. In Windows Server 2008, the temporary files are in C:\ProgramData\CMG\CopyLocalJobs. In Windows Server 2003, the temporary files are in C:\Documents and Settings\All Users\Application Data\cmg\CopyLocalJobs. In Windows 7, the temporary files are stored in C:\ProgramData\cmg\CopyLocalJobs.
• Intermittent file I/O issue. This is often very difficult to diagnose. To reduce the chance of running into intermittent file I/O issues, it is strongly recommended that the simulation output files (.irf, .mrf, .rst, .out) be kept as small as possible by properly configuring the Input/Output Control section of the .cmm file. For details, refer to the WRST, OUTSRF, WSRF, OUTPRN, and WPRN keywords in the simulator's manual. If you are running more than five concurrent remote jobs, it is also recommended to use a Windows 2003 or 2008 File Server to store CMOST input/output files.
• The remote scheduler does not accept remote jobs. If a job is submitted to a remote CMG Scheduler and the remote computer is configured to run local jobs only, the job will fail. To verify this, open Launcher on the remote computer and check Launcher | Configuration | Configure Local Job Server. To allow remote jobs, "Use job service to submit any jobs" and "This computer is a CMG Cluster Compute Node" must be checked on the remote computer. In addition, CMG Job Service must be running on the remote computer.
• The required simulator executable does not exist on the remote computer.
• Simulator licensing problem.
2. Abnormal termination jobs are jobs terminated by the simulator before reaching the stop time. If a job terminates abnormally, it will usually stop at the same point if the same dataset is run again with the same simulator release. Usually, certain modifications of the dataset, such as numerical tuning, are required to make the job run to completion. To determine the cause of abnormal termination, examine the .log and/or .out file of the job. Some possible causes for abnormal termination jobs are:
• Syntax error in the dataset.
• The simulator runs into numerical problems, such as failure to converge or too many time step cuts.
• The job is killed by CMOST because the maximum job run time is exceeded.
10.3 Exception Reports

When CMOST encounters a problem that it is not able to resolve, it displays a CMOST Unhandled Exception dialog box containing information that can assist CMG development and support staff in investigating the problem.

If you encounter an Unhandled Exception message:
1. Click Copy Exception.
2. Open an email message and paste (CTRL+V) the exception report into the body.
3. Add any other information you feel could assist CMG in resolving the problem.
4. In the subject line, enter "CMOST Unhandled Exception".
5. Send the email to [email protected].
6. If you click Continue, CMOST will ignore the message and attempt to continue. If you click Quit, CMOST will close immediately.
11 Theoretical Background

11.1 Probability Distribution Functions

11.1.1 Uniform Distribution

The uniform distribution assumes that all values in the defined range are equally probable. Its probability density function is:

f(x) = \begin{cases} 0, & x < a \\ \dfrac{1}{b-a}, & a \le x \le b \\ 0, & x > b \end{cases}

where a and b are the lower and upper limits of the variable.
11.1.2 Triangle Distribution

The probability density function for the triangle distribution is:

f(x) = \begin{cases} 0, & x < a \\ \dfrac{2(x-a)}{(b-a)(c-a)}, & a \le x \le c \\ \dfrac{2(b-x)}{(b-a)(b-c)}, & c < x \le b \\ 0, & x > b \end{cases}

where a and b are the lower and upper limits of the variable, and c is the peak (mode).
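One way to draw samples from this density is to invert its cumulative distribution function; the sketch below (an implementation assumption for illustration, not CMOST's code) maps a uniform random number u in [0, 1] to a triangle-distributed value:

```javascript
// Inverse-CDF sampling for the triangle distribution with limits a, b and mode c.
function sampleTriangle(a, b, c, u) {
  var fc = (c - a) / (b - a); // CDF value at the mode
  if (u < fc) {
    return a + Math.sqrt(u * (b - a) * (c - a));
  }
  return b - Math.sqrt((1 - u) * (b - a) * (b - c));
}

var x = sampleTriangle(0, 1, 0.5, 0.25); // sqrt(0.125), about 0.354
```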
11.1.3 Truncated Normal Distribution

The probability density function for the Gaussian normal distribution is:

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)

where \mu and \sigma are the mean and standard deviation of the variable.
In CMOST, the normal distribution is truncated by user-defined minimum and maximum values: Min ≤ x ≤ Max
The default min and max values are -1E+308 and 1E+308 respectively.
11.1.4 Truncated Log Normal Distribution

The probability density function for the log normal distribution is:

f(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\left( -\frac{(\ln x-\mu)^2}{2\sigma^2} \right)

where \mu and \sigma are the mean and standard deviation of the variable's natural logarithm. By definition, in a log normal distribution, the variable's logarithm is normally distributed. The mean and standard deviation of the non-logarithm values are denoted m and s.

To calculate m and s from \mu and \sigma:

m = e^{\mu + 0.5\sigma^2}, \qquad s = \sqrt{\left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2}}

To calculate \mu and \sigma from m and s:

\mu = \ln\left( \frac{m}{\sqrt{1 + s^2/m^2}} \right), \qquad \sigma = \sqrt{\ln\left(1 + \frac{s^2}{m^2}\right)}

In CMOST, the log normal distribution is truncated by user-defined minimum and maximum values: Min ≤ x ≤ Max

The default Min and Max values are 1E-308 and 1E+308 respectively.
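The two conversions above are inverses of each other, which can be checked numerically; this is a plain JavaScript sketch of the formulas, not CMOST code:

```javascript
// Log-normal parameter conversions: (mu, sigma) describe the natural logarithm
// of the variable, (m, s) the mean and standard deviation of the variable itself.
function toArithmetic(mu, sigma) {
  var m = Math.exp(mu + 0.5 * sigma * sigma);
  var s = Math.sqrt((Math.exp(sigma * sigma) - 1) * Math.exp(2 * mu + sigma * sigma));
  return { m: m, s: s };
}

function toLogParams(m, s) {
  var ratio = 1 + (s * s) / (m * m);
  return { mu: Math.log(m / Math.sqrt(ratio)), sigma: Math.sqrt(Math.log(ratio)) };
}

// Round trip: mu = 0, sigma = 1 should be recovered.
var ms = toArithmetic(0, 1);
var back = toLogParams(ms.m, ms.s);
```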
11.1.5 Deterministic Distributions

Unlike probability distributions, where the uncertainty of an input parameter is described by a distribution, deterministic distributions treat the input parameter as a constant. A fixed value is defined for the parameter, since there is no uncertainty about its value.

11.1.6 Custom Distribution

CMOST provides predefined discrete and continuous distributions for input parameters; however, if none of these distributions is appropriate for the uncertainty of an input parameter, users can create a custom distribution. The custom distribution is given as a table of intervals and the corresponding probability values. For example, the following table defines a custom distribution in which the probability is 60% for values between 0.2 and 0.3:

Left Bound   Right Bound   Probability
0.0          0.1           0.05
0.1          0.2           0.15
0.2          0.3           0.60
0.3          0.4           0.15
0.4          0.5           0.05

NOTE: If the probabilities for all defined intervals do not sum to 1, CMOST will normalize the probability values to ensure that the total cumulative probability is equal to 1.
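The normalization described in the note is a simple rescaling by the total; a minimal sketch in JavaScript (an illustration of the stated behaviour, not CMOST's code):

```javascript
// Normalize interval probabilities so they sum to 1.
function normalize(probabilities) {
  var total = 0;
  for (var i = 0; i < probabilities.length; i++) total += probabilities[i];
  return probabilities.map(function (p) { return p / total; });
}

// Relative weights 1:3:12:3:1 normalize to the probabilities in the table above.
var probs = normalize([1, 3, 12, 3, 1]); // [0.05, 0.15, 0.6, 0.15, 0.05]
```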
11.1.7 Discrete Probability Distribution

The discrete distribution is given as a table of x-values and the corresponding probability values. For example, for the discrete distribution defined by the following table, only three values (100, 200, and 300) will be used in Monte Carlo simulation. The probabilities of using 100, 200, and 300 are 25%, 50%, and 25% respectively. If the sum of all probability values is not equal to 1, CMOST will normalize the probability values so that it does equal 1.

X     Probability
100   0.25
200   0.50
300   0.25
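Drawing from a discrete distribution like this in a Monte Carlo loop is usually done with cumulative probabilities; the sketch below illustrates the idea (it is not CMOST's sampler):

```javascript
// Pick the first x whose cumulative probability covers the uniform draw u.
function sampleDiscrete(xs, probs, u) { // u is uniform in [0, 1)
  var cumulative = 0;
  for (var i = 0; i < xs.length; i++) {
    cumulative += probs[i];
    if (u < cumulative) return xs[i];
  }
  return xs[xs.length - 1]; // guard against rounding at u near 1
}

var xs = [100, 200, 300], probs = [0.25, 0.50, 0.25];
var a = sampleDiscrete(xs, probs, 0.10); // 100
var b = sampleDiscrete(xs, probs, 0.50); // 200
var c = sampleDiscrete(xs, probs, 0.90); // 300
```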
11.2 Objective Functions

Two types of objective functions are described in this section:
• History Match Error
• Net Present Value

11.2.1 History Match Error

The History Match Error measures the relative difference between the simulation results and measured production data for each objective function. If a field has multiple wells and each well has multiple types of production data to match, CMG recommends that you define an objective function for each well. The objective function of each well contains multiple objective function terms, each corresponding to a production data type. In practice, the quality and importance of measured data often differ between production data types. In a manual history matching task, the reservoir engineer usually accounts for these variations intuitively and qualitatively. In computer-assisted history matching, a quantitative approach should be used to account for data quality and importance. Therefore, different absolute measurement errors and weights need to be
assigned to different production data types of different wells when calculating objective functions. In CMOST, the following equation is used to calculate the history match error for well i:

Q_i = \frac{1}{\sum_{j=1}^{N(i)} tw_{i,j}} \sum_{j=1}^{N(i)} \left[ tw_{i,j} \times \sqrt{\frac{1}{NT(i,j)} \sum_{t=1}^{NT(i,j)} \left( \frac{Y^{s}_{i,j,t} - Y^{m}_{i,j,t}}{Scale_{i,j}} \right)^{2}} \times 100\% \right]

where:

i, j, t        Subscripts representing well, production data type, and time respectively
N(i)           Total number of production data types for well i
NT(i,j)        Total number of measured data points
Y^s_{i,j,t}    Simulated results
Y^m_{i,j,t}    Measured results
tw_{i,j}       Term weight
Scale_{i,j}    Normalization scale
The normalization scale is calculated using one of the following four methods.

Method #1 applies when the number of measured data points is greater than 5 and the normalization method is set to AUTO. In this method, the normalization scale is the maximum of the following three quantities:

\Delta Y^{m}_{i,j} + 4 \times Merr_{i,j}

0.5 \times \min\left( \left|\max_t(Y^{m}_{i,j,t})\right|, \left|\min_t(Y^{m}_{i,j,t})\right| \right) + 4 \times Merr_{i,j}

0.25 \times \min\left( \left|\max_t(Y^{m}_{i,j,t})\right|, \left|\min_t(Y^{m}_{i,j,t})\right| \right) + 4 \times Merr_{i,j}

where:

\Delta Y^m_{i,j}    Measured maximum change for well i and production data type j
Merr_{i,j}          Measurement error

The value of the measurement error (ME) means that if the simulated result is between (historical value − ME) and (historical value + ME), the match is considered satisfactory (or perfect, because it is within the range of measurement accuracy). ME is therefore half of the absolute error range.

Method #2 applies when the number of measured data points is small (≤ 5) and the normalization method is set to AUTO, in which case the normalization scale is obtained by:

Scale_{i,j} = \max\left( \left|\max_t(Y^{m}_{i,j,t})\right|, \left|\min_t(Y^{m}_{i,j,t})\right| \right) + 4 \times Merr_{i,j}

Method #3 applies when the normalization method is set to OFF, in which case:

Scale_{i,j} = 1

Method #4 applies when the normalization method is set to MeasurementErrorOnly, in which case:

Scale_{i,j} = Merr_{i,j}
As can be seen from the above equations, the calculated history match error is a dimensionless percentage relative error for methods #1, #2, and #4. If the simulation results are exactly the same as the measured data, the calculated history match error is 0%, which indicates a perfect match. Our experience indicates that if the history match error is less than 5%, the match is usually acceptable.

The global history match error is calculated using the weighted average method:

Q_{global} = \frac{\sum_{i=1}^{NW} w_i Q_i}{\sum_{i=1}^{NW} w_i}

where:

Q_{global}    Global objective function
Q_i           Objective function for well i
NW            Total number of wells
w_i           Weight of Q_i in the calculation of Q_{global}
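The per-well and global formulas can be sketched numerically. The code below assumes the RMS form of the per-well equation as reconstructed above and uses made-up numbers; it is an illustration, not CMOST's implementation:

```javascript
// History match error for one data type: RMS of scale-normalized differences,
// expressed as a percentage.
function termError(simulated, measured, scale) {
  var sum = 0;
  for (var t = 0; t < measured.length; t++) {
    var d = (simulated[t] - measured[t]) / scale;
    sum += d * d;
  }
  return Math.sqrt(sum / measured.length) * 100;
}

// Weighted average of term errors for one well (weights tw).
function wellError(terms) {
  var num = 0, den = 0;
  for (var j = 0; j < terms.length; j++) {
    num += terms[j].tw * termError(terms[j].sim, terms[j].meas, terms[j].scale);
    den += terms[j].tw;
  }
  return num / den;
}

// One well, one data type: simulated [10, 12] vs measured [10, 10], scale 10.
var q = wellError([{ sim: [10, 12], meas: [10, 10], scale: 10, tw: 1 }]);
// q = sqrt((0 + 0.2*0.2) / 2) * 100, about 14.14 (percent)
```

The global error Q_global would then be the w_i-weighted average of such per-well values.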
11.2.2 Net Present Value

In finance and economics, discounting is the process of finding the present value of an amount of cash at some future date. The net present value of a cash flow is determined by reducing its value by the appropriate discount rate for each unit of time between the time when the cash flow is to be valued and the time of the cash flow. The time when the cash flow is to be valued is called the NPV Present Date in CMOST. To calculate the present value of a single cash flow, it is divided by one plus the interest rate for each period of time that will pass:

PV = \frac{R_t}{(1 + I)^t}

where:

t     Time of the cash flow
I     Discount rate (interest rate)
R_t   Net cash flow (positive for inflow, negative for outflow) at time t
Net present value (NPV) is defined as the total present value (PV) of a time series of cash flows. It is a standard method for using the time value of money to appraise long-term projects. Each cash inflow/outflow is discounted back to its present value and the results are summed:

NPV = Σ_{t=1}^{T} R_t / (1 + I)^t
The method for calculating cash flow depends on the property. If the selected property is a daily property, such as Oil Rate SC-Daily, cash flow is calculated daily. If the selected property is a monthly property, such as Oil Rate SC-Monthly, cash flow is calculated monthly. If a property does not specify the frequency, such as Oil Rate SC, cash flow is calculated daily. Do not select cumulative properties unless the cash flow is to be calculated for one day only:

NPV = Σ_{j=1}^{NJ} Σ_{t=N1}^{N2} (Quantity × UnitValue × ConversionFactor) / (1 + DailyInterestRate)^t
where:
  t          Time of the cash flow in days (the number of days elapsed from the NPV Present Date to the date when the Property Value is read)
  N1         Number of days from the NPV Present Date to the Start Date Time
  N2         Number of days from the NPV Present Date to the End Date Time
  Quantity   Value read from the SR2 files using the user-specified origin name and property name
  UnitValue  User-specified cash flow value per Quantity (positive for inflow, negative for outflow)
  j          Index of an objective function term
  NJ         Number of objective function terms for the Net Present Value objective function
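The discounted sum for a single objective-function term can be sketched as follows; the function and argument names are illustrative, not CMOST internals:

```python
# Discounted cash flow sum as in the NPV equation above, for one
# objective-function term.
def npv(cash_flows, daily_rate):
    """cash_flows maps day t (counted from the NPV Present Date) to the
    net cash flow on that day (positive inflow, negative outflow)."""
    return sum(r / (1.0 + daily_rate) ** t for t, r in cash_flows.items())

flows = {365: 1000.0, 730: 1000.0}
print(npv(flows, 0.0))       # 2000.0: no discounting
print(npv(flows, 0.00025))   # less than 2000: later flows count for less
```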
The yearly interest rate is input by the user. Monthly, quarterly, and daily interest rates are converted from the yearly rate; for example, the yearly interest (discount) rate is converted to the daily interest rate using the following formula:

DailyInterestRate = e^(ln(1 + YearlyInterestRate)/365) − 1
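Assuming a 365-day year, the conversion can be sketched as:

```python
import math

# Yearly-to-daily discount rate conversion as described above: the daily
# rate that compounds to the yearly rate over a 365-day year.
def daily_rate(yearly_rate, days_per_year=365):
    # exp(ln(1 + yearly)/365) - 1  is equivalent to  (1 + yearly)**(1/365) - 1
    return math.exp(math.log(1.0 + yearly_rate) / days_per_year) - 1.0

# Compounding the daily rate over a full year recovers the yearly rate:
r = daily_rate(0.10)
print((1.0 + r) ** 365)  # ~1.10
```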
CMOST uses the CMOST unit system defined on the General Properties page to read SR2 files, and a proper Unit Value for each objective function term must be entered according to the chosen unit system. For example, if the CMOST unit system is Field, the unit for oil rate will be bbl/day and the unit value should be dollars per barrel. If the unit system is SI, the unit for oil rate will be m3/day and the unit value should be dollars per m3.
11.3 Sampling Methods Parameter space sampling is the most important step in sensitivity analysis and uncertainty assessment. The outcome of parameter space sampling is a design for laying out a detailed simulation plan in advance of performing simulations. A well-chosen design maximizes the amount of “information” that can be obtained for a given amount of simulation effort. Below is an introduction to some basic terminology used in CMOST.
Parameters (variables, factors): Simulation inputs that a researcher manipulates to cause changes in simulation outputs. A parameter can have two or more sample values.
Sample values (levels): The different values of a parameter.
Objective functions (responses): The outputs of a simulation.
Experiment: The combination of one particular sample value for each parameter in the simulation model.
Parameter space (search space): The set of all possible experiments for a given set of parameters and sample values.
Sampling: The process of selecting a set of experiments from all possible experiments.
Design: A set of experiments generated by the sampling process. A good design with desirable characteristics allows you to fit an accurate proxy model and draw reliable conclusions regarding parameter effects.
Effect: How changing the value of a parameter changes the objective function. The effect of a single parameter, as opposed to the effect of an interaction, is also called a main effect.
Interaction: Occurs when the effect of one parameter on an objective function depends on the level of another parameter.
For a given set of parameters and sample values, the parameter space is usually extremely large. For example, the number of all possible experiments for 15 parameters with three sample values each is 3^15, or 14,348,907. If we want to select a set of 600 experiments from the total of 14,348,907 experiments, there is an exceptionally large number of ways to make the selection. According to statistical experimental design theory, to efficiently explore the parameter space, the design (the set of experiments) selected should possess two desirable characteristics:
• Approximate orthogonality of the input parameters.
• Space-filling: the sampling points (experiments) should be evenly distributed in the parameter space. In other words, the collection of experiments should be a representative subset of all possible experiments. This is indicated by the minimum sampling distance; the larger, the better.

The orthogonality of two columns in a design matrix is measured by the correlation between the two column vectors v = (v1, v2, ..., vn) and w = (w1, w2, ..., wn):

corr(v, w) = Σ_{i=1}^{n} (v_i − v_mean)(w_i − w_mean) / sqrt( Σ_{i=1}^{n} (v_i − v_mean)² × Σ_{i=1}^{n} (w_i − w_mean)² )
If two columns have zero correlation, they are orthogonal. If all of the columns in the design matrix are orthogonal, the design is an orthogonal design. An orthogonal design is desirable since it ensures independence among the coefficient estimates in a regression model. In CMOST, the orthogonality of a design is measured by the maximum pair-wise correlation of the columns of the design matrix: the absolute value of the correlation coefficient is calculated for every pair of column vectors, and the maximum of these values is taken. A value of 0 is best (indicating orthogonality), and a value of 1 is worst (indicating that at least one column in the design matrix is a linear combination of the remaining columns). Generally, to ensure the accuracy of sensitivity analysis and uncertainty assessment results, the maximum pair-wise correlation of the design should be less than 0.2.

Another desirable feature for a design is its ability to spread points evenly in the parameter space. For many interpolation methods used to generate proxies for the outputs of simulations, such as kriging, the errors grow as the interpolated point moves away from an observation point (in many cases the errors are zero at the observation points). Having the observation points evenly distributed therefore guarantees a uniform accuracy for the approximation throughout the parameter space. Designs with these characteristics are called "space-filling designs". Another benefit of space-filling designs is the avoidance of undesirable artificial correlations between the parameters. In CMOST, the space-filling of a design is assessed by the Euclidean minimum distance, which is the minimum Euclidean distance over all pairs of design points (experiments). A large Euclidean minimum distance means that no two points are close to each other. Between two designs, the one with the greater minimum distance between any two points (experiments) is considered the better design.
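Both design-quality measures can be sketched in plain Python; the function names are illustrative, not CMOST internals:

```python
import math
from itertools import combinations

def _pearson(v, w):
    n = len(v)
    mv, mw = sum(v) / n, sum(w) / n
    num = sum((a - mv) * (b - mw) for a, b in zip(v, w))
    den = math.sqrt(sum((a - mv) ** 2 for a in v) * sum((b - mw) ** 2 for b in w))
    return num / den

def max_pairwise_correlation(design):
    """Largest absolute correlation over all pairs of design-matrix columns."""
    cols = list(zip(*design))  # design rows -> columns
    return max(abs(_pearson(v, w)) for v, w in combinations(cols, 2))

def min_euclidean_distance(design):
    """Smallest Euclidean distance between any two design points."""
    return min(math.dist(p, q) for p, q in combinations(design, 2))

# A tiny 2-parameter example: the two columns below are orthogonal.
design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
print(max_pairwise_correlation(design))  # 0.0
print(min_euclidean_distance(design))    # 2.0
```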
In CMOST, the following sampling methods are available:
• One parameter at a time (OPAAT)
• Two-level classical experimental designs: fractional factorial and Plackett-Burman designs
• Three-level classical experimental designs: Box-Behnken and Central Composite designs
• Latin hypercube design
11.3.1 One-Parameter-at-a-Time Sampling

One-parameter-at-a-time sampling is a traditional method for sensitivity analysis. In this method, the researcher seeks to gain information about the effect of a parameter by varying only one parameter at a time. This procedure is repeated in turn for all parameters to be studied. For example, let us assume we want to perform a sensitivity analysis for the following five parameters.
The candidate values for these parameters are:

Parameter  Candidate Values
POR        0.22, 0.29, 0.36
PERMH      3000, 4500, 6000
PERMV      2000, 2400, 2800
HTSORW     0.16, 0.21, 0.26
HTSORG     0.02, 0.04, 0.06
Using One-Parameter-at-a-Time sampling, CMOST will generate the following 11 experiments:
In the above example, the parameter default values are shown in Experiment ID 0 (base case). It can be seen from the table that for experiments 1, 2, and 3, all parameters except for POR use their middle candidate value. Therefore, we can determine the conditional main effect of POR by comparing the simulation results for experiments 1, 2, and 3. Similarly, we can determine the effect of PERMH by comparing the simulation results for experiments 3, 4, and 5. The use of one-parameter-at-a-time sampling is generally discouraged by researchers, for the following reasons:
• More runs are required for the same precision in effect estimation
• Interactions between parameters cannot be captured
• Conclusions from the analysis are not general (i.e., only conditional main effects are revealed)
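One possible OPAAT scheme matching the example above can be sketched as follows, assuming the base case uses each parameter's middle candidate value (the function and dictionary names are illustrative, not CMOST internals):

```python
# One-Parameter-at-a-Time sampling: a base case at the middle candidate
# values, then each parameter swept through its remaining values while
# all other parameters stay at their middles.
candidates = {
    "POR":    [0.22, 0.29, 0.36],
    "PERMH":  [3000, 4500, 6000],
    "PERMV":  [2000, 2400, 2800],
    "HTSORW": [0.16, 0.21, 0.26],
    "HTSORG": [0.02, 0.04, 0.06],
}

def opaat_design(candidates):
    base = {p: vals[len(vals) // 2] for p, vals in candidates.items()}
    experiments = [dict(base)]                 # experiment 0: base case
    for p, vals in candidates.items():
        for v in vals:
            if v != base[p]:                   # two extra runs per parameter
                exp = dict(base)
                exp[p] = v
                experiments.append(exp)
    return experiments

design = opaat_design(candidates)
print(len(design))  # 11 experiments: 1 base + 2 extra values x 5 parameters
```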
11.3.2 Latin Hypercube Design

11.3.2.1 Evolution of Latin Hypercube
To explain the fundamentals of Latin hypercube design, this section traces the line of literature from random designs to Latin hypercube sampling to Latin hypercube to orthogonal Latin hypercube (Cioppa, 2002). Random design was proposed by Satterthwaite (1959). In a random design, a random sampling process with replacement is used to choose all or some of the elements of each variable in the design matrix. The principal criticisms of random designs are that the interpretation of the results cannot be justified due to random confounding and the estimators of the coefficients could be biased.
To improve on random design, McKay et al. (1979) proposed Latin hypercube sampling. In Latin hypercube sampling, the input variables are considered to be random variables with known distribution functions. For each input variable, all portions of its distribution are represented: its range is divided into n strata of equal probability, and one input value is sampled from each stratum. For each input variable, the n sampled input values are assigned at random to the n cases.

As an example, let us assume there are four input variables, each having a uniform [0, 1] distribution, and 10 simulation runs are to be made. For each of the four variables, one design value is independently chosen at random from within each of the 10 equally probable intervals [0.0, 0.1), [0.1, 0.2), [0.2, 0.3), [0.3, 0.4), [0.4, 0.5), [0.5, 0.6), [0.6, 0.7), [0.7, 0.8), [0.8, 0.9), and [0.9, 1.0]. For every input variable, the order in which the 10 sampled values appear in the design matrix is randomly determined. The following table shows a design matrix obtained by this procedure. Note that, as this example shows, design matrices generated in this way will likely have correlations between columns.

Run  X1    X2    X3    X4
1    0.32  0.17  0.91  0.71
2    0.53  0.58  0.30  0.93
3    0.92  0.84  0.48  0.12
4    0.17  0.90  0.05  0.22
5    0.29  0.02  0.16  0.30
6    0.45  0.41  0.83  0.87
7    0.63  0.68  0.74  0.04
8    0.75  0.24  0.66  0.61
9    0.87  0.79  0.52  0.48
10   0.01  0.36  0.22  0.53
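The stratified sampling procedure above can be sketched as follows (an illustrative sketch, not CMOST's implementation):

```python
import random

# Latin hypercube sampling in the McKay et al. sense: split each variable's
# [0, 1] range into n equal-probability strata, draw one value per stratum,
# then shuffle each column independently.
def latin_hypercube_sample(n_runs, n_vars, seed=42):
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # one draw from each stratum [k/n, (k+1)/n)
        col = [(k + rng.random()) / n_runs for k in range(n_runs)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # rows are runs, columns are variables

sample = latin_hypercube_sample(10, 4)
```

Each column of the result contains exactly one value per stratum, so every decile of every variable is represented once, just as in the table above.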
A common variant of the design generated by Latin hypercube sampling is simply called a Latin hypercube (Tang, 1993). In a Latin hypercube, the input values for every variable are predetermined and there is no sampling within strata. An n × k Latin hypercube consists of k permutations of the vector {1, 2, ..., n}^T, where each element of the vector represents a sample value (level). Each of the k columns of the design matrix contains the levels 1, 2, ..., n, randomly permuted, with each possible permutation being equally likely to appear in the design matrix.

To enhance the capability of Latin hypercube designs for regression analysis, Ye (1998) constructed orthogonal Latin hypercubes. An orthogonal Latin hypercube is a Latin hypercube in which every pair of columns has zero correlation. Furthermore, in Ye's construction, the element-wise square of each column has zero correlation with all other columns, and the element-wise product of every two columns has zero correlation with all other columns. These properties ensure the independence of the estimates of the linear effects of each variable, and that the estimates of the quadratic and interaction effects are uncorrelated with the estimates of the linear effects.
11.3.2.2 Latin Hypercube Design in CMOST
The Latin hypercube designs generated by CMOST use a more general variant of the above. Specifically, each parameter can have any number of sample values. The sample values can be evenly distributed (uniform distribution) or unevenly distributed, as entered by the user. To combine the sample values into design points (job patterns), draws are made without replacement; i.e., for the first point a value for each parameter is selected randomly from the set of possible values, for the second point the random selection excludes the values already selected, and so on. As an example, the following algorithm describes the steps to generate a basic Latin hypercube design for five parameters.
The sample values for these parameters are:

Parameter  Sample Values
POR        0.22, 0.24, 0.26 (3 values)
PERMH      3000, 3500, 4000, 4500, 5000 (5 values)
PERMV      2000, 2500 (2 values)
HTSORG     0.16, 0.18, 0.20, 0.22 (4 values)
HTSORW     0.02, 0.04, 0.06 (3 values)
1. The number of points (job patterns) must be a common multiple of all the numbers of sample values, so the available numbers of job patterns are 60, 120, 180, 240, and so on. Let's assume we want to generate a design with 120 points (job patterns).
2. For each parameter, generate a vector with 120 sample values. For example, for parameter POR, the vector contains 40 instances each of 0.22, 0.24, and 0.26. The 120 sample values for each parameter are ordered randomly.
3. Assemble the vectors of all parameters to form a basic Latin hypercube design with 120 design points (job patterns).
The basic Latin hypercube design generated above does not guarantee that the points will be evenly distributed and uncorrelated. The figure below shows two examples of valid Latin hypercube designs in which the points are totally correlated. Such designs are obviously not suitable for proxy generation and sensitivity analysis.
To avoid such undesirable artifacts, an iteration (optimization) process is adopted in CMOST to generate Latin hypercube designs with two desirable characteristics:
• Approximate orthogonality of the input parameters.
• Space-filling: the sampling points (experiments) should be evenly distributed in the parameter space.

The iteration (optimization) process is as follows:
1. Start with an initial basic Latin hypercube design (this is the initial best design).
2. Generate a new basic Latin hypercube design.
3. Calculate the maximum pair-wise correlation of the new design.
4. Calculate the Euclidean minimum distance of the new design.
5. Compare the new design with the best design. If the new design outperforms the best design in terms of both maximum pair-wise correlation and Euclidean minimum distance, replace the best design with the new design.
6. Repeat steps 2 to 5 until the iteration limit is reached or an orthogonal design is found.

Note that the above iteration (optimization) process does not aim at finding the optimum Latin hypercube design, but only an improvement over the initial Latin hypercube design, while keeping the time needed to generate the designs within a reasonable range.

11.3.2.3 References
Cioppa, T.M., “Efficient Nearly Orthogonal and Space-Filling Experimental Designs for HighDimensional Complex Models”, Naval Postgraduate School PhD Dissertation, September 2002.
McKay, M.D., Beckman, R.J., and Conover, W.J., “A comparison of three methods for selecting values of input variables in the analysis of output from a computer code”, Technometrics, Vol. 21, No. 2, May 1979. Satterthwaite, F.E., “Random balance experimentation”, Technometrics, Vol. 1, No. 2, May 1959. Tang, B., “Orthogonal array-based Latin hypercubes”, Journal of the American Statistical Association: Theory and Methods, Vol. 88, No. 424, December 1993. Ye, K.Q., “Orthogonal column Latin hypercubes and their application in computer experiments”, Journal of the American Statistical Association: Theory and Methods, Vol. 93, No. 444, December 1998.
11.3.3 Classical Experimental Design

11.3.3.1 Two-Level Classical Experimental Designs
Two-level designs are typically used in sensitivity analysis to identify main (linear) effects. They are ideal for a quick screening study: they are simple and economical, and they give most of the information required to move on to a multilevel response surface experimental design if one is needed. The standard layout for a two-level design uses "-" and "+" notation to denote the low level and the high level, respectively, for each parameter. For example, the matrix below describes an experiment in which four runs were conducted, with each parameter set to high or low during a run according to whether the matrix has a "+" or "-" for that parameter during that run. If the experiment had more than two parameters, there would be an additional column in the matrix for each additional parameter.

Run  Parameter (X1)  Parameter (X2)
1    -               -
2    +               -
3    -               +
4    +               +
The following types of two-level experimental designs are available in CMOST:
• Fractional factorial designs
• Plackett-Burman designs
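For reference, the full 2^k grid underlying such layouts can be enumerated in a few lines; this is only an illustrative sketch of the full factorial, while CMOST's fractional factorial and Plackett-Burman designs use structured subsets and constructions of such runs rather than the full grid:

```python
from itertools import product

# Full two-level factorial: every combination of low (-1) and high (+1).
def full_factorial_two_level(k):
    return list(product([-1, +1], repeat=k))

# Print the 2^2 design in -/+ notation.
for run, levels in enumerate(full_factorial_two_level(2), start=1):
    print(run, " ".join("+" if v > 0 else "-" for v in levels))
```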
11.3.3.2 Three-Level Classical Experimental Designs

In uncertainty assessment, three-level experimental designs can be used. In a three-level design, each parameter's effect on the response is evaluated at three levels (low, median, and high). Three-level designs are also called response surface designs because, in addition to main effects (linear terms), both two-term interactions and quadratic terms can be examined. The standard layout for a three-level design uses "-", "0", and "+" notation to denote the low, median, and high levels, respectively, for each parameter. CMOST provides the following types of response surface designs:
• Box-Behnken designs
• Central Composite designs (Uniform Precision)
11.3.4 Parameter Correlation

The Parameter Correlation table is used to incorporate relationships that may exist between uncertain parameters when performing uncertainty assessment studies. Normally, it is assumed that uncertain parameters are independent, especially when Monte Carlo simulation is used to drive the uncertainty assessment; however, the assumption of independent parameters may not be reasonable for all simulation studies. The technique used by CMOST to account for parameter correlation was introduced by Iman and Conover (1982); refer to the reference below for more information. Correlation between parameters measures the degree to which the parameters are related. The most common measure of linear dependence is the Pearson Product Moment Correlation, or Pearson Correlation for short. However, since the Pearson Correlation cannot capture non-linear trends and can be misleading when there are outliers, CMOST uses the Spearman Rank Correlation to measure parameter correlation.
CMOST calculates algorithmically (Iman and Conover, 1982) the realized Spearman’s rank correlation matrix if the parameter correlation (desired Spearman’s rank correlation matrix) is positive definite, as defined below: An n × n real symmetric matrix P is positive definite if x TPx > 0 for all non-zero vectors x with real entries, where xT denotes the transpose of x. NOTE: If the desired Spearman’s rank correlation matrix is not positive definite, CMOST
will try to find the nearest matrix which meets this condition. 11.3.4.2 Spearman’s rank coeff ici ent
Spearman's rank correlation coefficient is a measure of the statistical dependence of two variables through a monotonic function. Instead of using the actual values directly, the Spearman's rank correlation coefficient ranks the values and then correlates the ranks. It produces a value between -1 and +1. A correlation of +1 means there is a perfect positive relation between the ranks of the two variables, and a correlation of -1 means there is a perfect negative relation between the ranks. To calculate the Spearman's rank correlation coefficient, the data are first ranked from minimum to maximum (or vice versa), and the ranks x_i and y_i are then entered into the following equation:

ρ = Σ_{i=1}^{n} (x_i − x_mean)(y_i − y_mean) / sqrt( Σ_{i=1}^{n} (x_i − x_mean)² × Σ_{i=1}^{n} (y_i − y_mean)² )
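The two-step computation (rank the data, then apply the correlation formula to the ranks) can be sketched as follows; the helper names are illustrative:

```python
import math

def _ranks(values):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation applied to the ranks of x and y."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry))
    return num / den

# Any strictly increasing (monotonic) relationship gives +1:
print(spearman([1, 2, 3, 4], [1, 4, 9, 16]))  # 1.0
```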
11.3.4.3 References
Iman, R. and Conover, W., “A Distribution-Free Approach to Inducing Rank Correlation Among Input Variables”, Communications in Statistics - Simulation and Computation, 1982.
11.4 Proxy Modeling

11.4.1 Response Surface Methodology

Response surface methodology (RSM) explores the relationships between input variables (parameters) and responses (objective functions). The main idea of RSM is to use a set of designed experiments to build a proxy (approximation) model that represents the original, complicated reservoir simulation model. The most common proxy models take either a linear or a quadratic polynomial form. After a proxy model is built, tornado plots displaying a sequence of parameter estimates can be used to assess the sensitivity of parameters.

11.4.2 Types of Response Surface Models

11.4.2.1 Linear Model

The linear proxy model is:

y = a0 + a1 x1 + a2 x2 + ... + ak xk

where:
  y                response (objective function)
  a0, a1, ..., ak  coefficients of the proxy model; in some statistics references these are referred to as the parameter estimates or unknown parameters
  x1, x2, ..., xk  input variables (parameters)
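Fitting such a model by least squares can be sketched as follows; to keep the sketch short it uses an orthogonal two-level design, for which the least-squares coefficients reduce to simple averages (this is an illustrative sketch, not CMOST's solver):

```python
# Least-squares fit of y = a0 + a1*x1 + a2*x2 when the design matrix
# columns are orthogonal -1/+1 vectors: each coefficient is an average.
def fit_linear_orthogonal(X, y):
    n = len(y)
    a0 = sum(y) / n                                   # intercept = mean response
    coeffs = [sum(X[i][j] * y[i] for i in range(n)) / n
              for j in range(len(X[0]))]
    return a0, coeffs

# Responses generated from y = 5 + 2*x1 - 3*x2 (no noise), so the fit
# should recover the coefficients exactly.
X = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
y = [5 + 2 * x1 - 3 * x2 for x1, x2 in X]
a0, (a1, a2) = fit_linear_orthogonal(X, y)
print(a0, a1, a2)  # 5.0 2.0 -3.0
```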
11.4.2.2 Simple Quadratic Model

The simple second-degree (quadratic) polynomial model is:

y = a0 + Σ_{j=1}^{k} a_j x_j + Σ_{j=1}^{k} a_jj x_j²

where:
  a0               intercept
  a1, a2, ..., ak  coefficients of linear terms
  a_jj             coefficients of quadratic terms
11.4.2.3 Quadratic Model

The full second-degree (quadratic) polynomial model is:

y = a0 + Σ_{j=1}^{k} a_j x_j + Σ_{j=1}^{k} a_jj x_j² + Σ_{i<j} a_ij x_i x_j

where:
  a0               intercept
  a1, a2, ..., ak  coefficients of linear terms
  a_jj             coefficients of quadratic terms
  a_ij             coefficients of cross (interaction) terms
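Evaluating such a quadratic proxy for a given point can be sketched as follows (the function and argument names are illustrative):

```python
# Evaluate y = a0 + sum(a_j x_j) + sum(a_jj x_j^2) + sum(a_ij x_i x_j, i<j)
# for one point x. 'aij' maps index pairs (i, j) with i < j to coefficients.
def quadratic_proxy(x, a0, a, aq, aij):
    y = a0
    y += sum(a[j] * x[j] for j in range(len(x)))        # linear terms
    y += sum(aq[j] * x[j] ** 2 for j in range(len(x)))  # quadratic terms
    y += sum(c * x[i] * x[j] for (i, j), c in aij.items())  # interactions
    return y

# y = 1 + 2*x1 + 3*x2 + 0.5*x1^2 - x1*x2, evaluated at (2, 1):
val = quadratic_proxy([2.0, 1.0], 1.0, [2.0, 3.0], [0.5, 0.0], {(0, 1): -1.0})
print(val)  # 1 + 4 + 3 + 2 - 2 = 8.0
```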
11.4.2.4 Reduced Linear Model

For a polynomial proxy model, the statistical significance of each term is characterized by its corresponding Prob > |t| value. If a term has a large Prob > |t| value, the term is statistically insignificant and can be removed from the proxy model to simplify and improve the model (i.e., to maximize R²adjusted and R²prediction; for further information, refer to the Summary of Fit Table). The significance probability (alpha) determines whether a response surface term should be included in the reduced response surface model. If the term's Prob > |t| is less than or equal to alpha, the term is included. The default alpha value is 0.1.

In CMOST, the reduced linear model is built using the following three-step process:
1. Build the linear model.
2. Remove statistically insignificant terms.
3. Build the reduced linear model using the remaining (statistically significant) terms.

11.4.2.5 Reduced Quadratic Model

Similar to the reduced linear model, the reduced quadratic model is built using the following three-step process:
1. Build the quadratic model.
2. Remove statistically insignificant terms.
3. Build the reduced quadratic model using the remaining (statistically significant) terms.
11.4.3 Normalized Parameters (Variables)

The coefficients of a proxy model are highly dependent on the scale of the input variables. For example, if an input variable is converted from millimeters to meters, the coefficient changes by a factor of a thousand. If the same change is applied to a squared (quadratic) term, the coefficient changes by a factor of a million. Since we are interested in the effect size indicated by the coefficients, we need to examine the coefficients in a more scale-invariant fashion. This means converting from an arbitrary scale to a meaningful one so that the magnitudes of the coefficients can be related to the size of the effects on the response. In CMOST, all input variables (parameters) are scaled to have a mean of zero and a range from -1 to 1. This corresponds to the scaling used in traditional experiment design. For a linear term, the coefficient is half the predicted response change as the input variable travels over its full range, from -1 to 1.
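The scaling described above can be sketched as a pair of linear maps (function names are illustrative):

```python
# Map a parameter value from its [low, high] range onto [-1, 1], with the
# midpoint of the range landing at 0, and back again.
def normalize(value, low, high):
    return 2.0 * (value - low) / (high - low) - 1.0

def denormalize(scaled, low, high):
    return low + (scaled + 1.0) * (high - low) / 2.0

print(normalize(3000, 3000, 6000))  # -1.0 (low end)
print(normalize(4500, 3000, 6000))  # 0.0  (midpoint)
print(normalize(6000, 3000, 6000))  # 1.0  (high end)
```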
11.4.4 Response Surface Model Verification Plot

The model verification plot shows how well the data points fit the model by plotting the actual response versus the predicted response for each training and verification job. The distance from each point to the 45-degree line is the error, or residual, for that point. Points that fall on the 45-degree line are perfectly predicted. To visually show whether the model is statistically significant, the lower and upper 95% confidence curves are superimposed on the actual (simulated) vs. proxy-predicted plot. The lower and upper 95% confidence curves are determined using equations given in the paper "Leverage Plots for General Linear Hypotheses" by John Sall, published in The American Statistician, November 1990, Vol. 44, No. 4. If the 95% confidence curves cross the horizontal reference line defined by the Mean of Response, the model is significant. If the curves do not cross, the model is not significant (at the 5% level).

11.4.5 Summary of Fit Table

The Summary of Fit table shows the following numeric summaries of the response surface model:

11.4.5.1 R-Squared (R²)

The coefficient of multiple determination R² is defined as:

R² = Sum of Squares (Model) / Sum of Squares (Total)

R² is a measure of the amount of reduction in the variability of the response obtained by using the regressor variables in the model. An R² of 1 occurs when there is a perfect fit (the errors are all zero). An R² of 0 means that the model predicts the response no better than the overall response mean. Note that a large value of R² does not necessarily imply that the regression model is a good one: adding a variable to the model always increases R², regardless of whether the additional variable is statistically significant. Thus, models with large values of R² can still yield poor predictions of new observations.
11.4.5.2 R-Square Adjusted

R² can be adjusted to make it comparable over models with different numbers of regressors by using the degrees of freedom in its computation. R²adjusted is defined as:

R²adjusted = 1 − [(n − 1) / (n − p)] (1 − R²)

Here n is the number of observations (training experiments) and p is the number of terms in the response model (including the intercept). In general, R²adjusted will not always increase as variables are added to the model. In fact, if unnecessary terms are added, the value of R²adjusted will often decrease. When R² and R²adjusted differ dramatically, there is a good chance that non-significant terms have been included in the model.
11.4.5.3 R-Square Prediction

R²prediction is defined as:

R²prediction = 1 − PRESS / Sum of Squares (Total)

where PRESS is the prediction error sum of squares. To calculate PRESS, select an observation i, fit the regression model to the remaining n − 1 observations, and use the resulting equation to predict the withheld observation y_i. Denoting this predicted value by y_hat_(i), the prediction error for point i is e_(i) = y_i − y_hat_(i). This prediction error is often called the i-th PRESS residual. The procedure is repeated for each observation i = 1, 2, ..., n, producing a set of n PRESS residuals e_(1), e_(2), ..., e_(n). The PRESS statistic is then defined as the sum of squares of the n PRESS residuals:

PRESS = Σ_{i=1}^{n} e_(i)² = Σ_{i=1}^{n} [y_i − y_hat_(i)]²

R²prediction provides an indication of the predictive capability of the regression model. For example, we could expect a model with R²prediction = 0.95 to "explain" about 95% of the variability in predicting new observations.

11.4.5.4 Mean of Response
Mean of Response is the overall mean of the response values. It is important as a base model
for prediction because all other models are compared to it.
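The leave-one-out PRESS procedure described earlier can be sketched as follows; to keep the refit a one-liner, this sketch uses the sample mean as the "model", whereas a real response surface would refit the full least-squares model for each withheld observation:

```python
# PRESS: refit the model without observation i, predict the withheld y_i,
# and accumulate the squared prediction errors.
def press_statistic(y, fit, predict):
    press = 0.0
    for i in range(len(y)):
        rest = y[:i] + y[i + 1:]
        model = fit(rest)                   # refit without observation i
        e_i = y[i] - predict(model, i)      # i-th PRESS residual
        press += e_i ** 2
    return press

def r2_prediction(y, fit, predict):
    mean_y = sum(y) / len(y)
    ss_total = sum((v - mean_y) ** 2 for v in y)
    return 1.0 - press_statistic(y, fit, predict) / ss_total

# Mean-only "model": fitting returns the mean; prediction ignores i.
y = [1.0, 2.0, 3.0, 4.0]
def fit(rest): return sum(rest) / len(rest)
def predict(model, i): return model
print(r2_prediction(y, fit, predict))
```

For the mean-only model the statistic comes out near or below zero; a proxy with real predictive power approaches 1.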
11.4.5.5 Standard Error

Standard Error estimates the standard deviation of the random error. It is the square root of the Error Mean Square in the corresponding Analysis of Variance table. Standard Error is commonly denoted as σ.
11.4.6 Analysis of Variance Table 11.4.6.1 Source
Source lists the three sources of variation: Model, Error, and Total. 11.4.6.2 Degrees of Freedom (DF)
For the Total DF, the simple mean model is used: one degree of freedom is consumed by the estimate of the mean parameter in the calculation of variation, so the degrees of freedom for Total is always one less than the number of observations. Model DF is the number of terms (except for the intercept) used to fit the model. Error DF is the difference between Total DF and Model DF.
The Sum of Squares (SS) column accounts for the variability measured in the response. It is the sum of squares of the differences between the fitted response and the actual response. Total SS is the sum of squared distances of each response from the sample mean which is the
base model (or simple mean model) used for comparison with all other models. Error SS is the sum of squared differences between the fitted values and the actual values.
This sum of squares corresponds to the unexplained Error (residual) after fitting the regression model. Total SS less Error SS gives the sum of squares attributed to the Model. One common set of notations for these is SSR, SSE, and SST for the sums of squares due to Regression (model), Error, and Total, respectively.

11.4.6.4 Mean Square

Mean Square is a sum of squares divided by its associated degrees of freedom. This computation converts the sum of squares to an average (mean square). Error Mean Square estimates the variance of the error term. It is often denoted as s².

11.4.6.5 F Ratio
F Ratio is the Model Mean Square divided by the Error Mean Square. It tests the hypothesis that all of the regression parameters (except the intercept) are zero. Under this whole-model hypothesis, the two mean squares have the same expectation. If the random errors are normal, then under this hypothesis the values reported in the Sum of Squares column are two independent chi-square variables. The ratio of these two chi-squares, each divided by its respective degrees of freedom (reported in the Degrees of Freedom column), has an F-distribution. If there is a significant effect in the model, the F Ratio is higher than expected by chance alone.

11.4.6.6 Prob > F
Prob > F is the probability of obtaining a greater F-value by chance alone if the specified
model fits no better than the overall response mean. Significance probabilities of 0.05 or less are often considered evidence that there is at least one significant regression factor in the model. This significance is also shown graphically in Simulated vs. Proxy Predicted plots, as described in Response Model Verification Plot.
11.4.7 Effect Screening Using Normalized Parameters

11.4.7.1 Term
This column names the estimated terms. The first term is always the intercept. All parameters are normalized from (Low, High) to (-1, +1). 11.4.7.2 Coefficient
This column lists the parameter estimates for each term. These are the coefficients of the response surface model found by least squares. 11.4.7.3 Standard Error
Standard Error is the estimate of the standard deviation of the distribution of the parameter estimate (coefficient). It is used to construct t-tests.

11.4.7.4 t Ratio
t Ratio is a statistic that tests whether the true parameter (coefficient) is zero. It is the ratio of the coefficient to its standard error and has a Student's t-distribution under the hypothesis, given the normal assumptions about the model.

11.4.7.5 Prob > |t|
Prob > |t| is the probability of getting an even greater t-statistic (in absolute value), given the hypothesis that the parameter (coefficient) is zero. This is the two-tailed test against the alternatives in each direction. Probabilities less than 0.05 are often considered significant evidence that the parameter (coefficient) is not zero.

11.4.7.6 VIF
This column shows the variance inflation factor (VIF), which is a useful measure of the multicollinearity problem. Multicollinearity refers to one or more near-linear dependencies among the regressor variables due to poor sampling of the design space. Multicollinearity can have serious effects on the estimates of the model coefficients and on the general applicability of the final model. The larger the variance inflation factor, the more severe the multicollinearity. Variance inflation factors should not exceed 4 or 5. If the design matrix is perfectly orthogonal, the variance inflation factor for all terms will be equal to 1.
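The VIF computation can be sketched generically: each regressor column is regressed on all of the others, and VIF = 1/(1 − R²). This is a standard least-squares implementation for illustration, not CMOST's internal code:

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF for each column of the regressor matrix X.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on all the other columns (with an intercept)."""
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r_sq = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r_sq))
    return np.array(vifs)

# A perfectly orthogonal 2-level design: all VIFs come out close to 1.
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
print(variance_inflation_factors(X))
```

For a poorly sampled design with near-linear dependence between columns, the same function would return values well above 1.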
11.4.8 Linear Model Effect Estimates

The effect estimate indicates how changing the setting of a parameter changes the response (objective function). The effect estimate of a single parameter is also called a main effect or linear effect. To determine the linear (main) effect estimates, the simulation results are fit using a linear proxy model:

y = a0 + a1x1 + a2x2 + ... + akxk

In effect screening, the coefficients a1, a2, ..., ak are called parameter estimates or effect estimates. In the above equation, a large coefficient suggests that the parameter is important because it means that increasing or decreasing the parameter value leads to a significant change in the objective function (response). On the other hand, a small coefficient would imply that the parameter is not important.

Note that parameter estimates are highly dependent on the scale of the parameter. For example, if you convert a parameter from grams to kilograms, the parameter estimates change by a multiple of a thousand. Therefore, the effect estimates should be determined in a scale-invariant fashion. There are many approaches to doing this. In CMOST, all parameters are scaled to have a mean of zero and a range of two; i.e., all parameters are scaled to have a range from -1 to 1. For a simple linear proxy model, the scaled estimate is half the predicted response change as the parameter travels its full range (i.e., from -1 to 1).

To avoid ambiguous interpretation of tornado plots, CMOST reports the actual predicted response change as the parameter travels from the smallest sample value to the largest sample value. As an example, consider the following tornado plot of linear effect estimates:
The above tornado plot shows that the linear effect estimate for PERMV(1000, 2000) is 207.7. This means that if you increase PERMV from 1000 to 2000, the expected increase of cumulative oil production is 207.7. Here the word "expected" is used because a linear proxy model is an approximation of the real reservoir simulation model. The actual increase of the objective function due to the change of PERMV from 1000 to 2000 varies for different combinations of sample values of the other parameters.

To demonstrate the relative importance of different parameters, all of the effect estimates are plotted on the same scale together with "Maximum", "Minimum", and "Target" values. The Maximum is the maximum objective function value of all simulation runs in the design and the Minimum is the minimum objective function value of all simulation runs in the design. The Target is the value in the target field history file, if one is specified.

For example, the above plot shows that the Maximum is less than the Target. This indicates that it is not possible to match the historical value using the given set of parameters and the defined ranges. Based on the effect estimates of PERMV and PERMH, we may need to adjust the ranges to PERMV(2000, 3000) and PERMH(3000, 5000) to match the historical value (target) of cumulative oil.
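The scaling and effect-estimate logic above can be sketched with ordinary least squares. The sample values below are hypothetical, chosen only to illustrate the mechanics; this is not CMOST's implementation:

```python
import numpy as np

# Hypothetical experiments: two parameters and an objective-function value.
permv = np.array([1000., 2000., 1000., 2000., 1500.])
por   = np.array([0.22,  0.22,  0.36,  0.36,  0.29])
y     = np.array([100.,  300.,  150.,  360.,  230.])

def scale(p):
    """Scale a parameter from (low, high) to (-1, +1)."""
    lo, hi = p.min(), p.max()
    return 2.0 * (p - lo) / (hi - lo) - 1.0

# Fit the linear proxy y = a0 + a1*x1 + a2*x2 on the scaled parameters.
A = np.column_stack([np.ones_like(y), scale(permv), scale(por)])
a0, a_permv, a_por = np.linalg.lstsq(A, y, rcond=None)[0]

# The tornado plot reports the predicted response change over the full
# parameter range, i.e. twice the scaled coefficient.
effect_permv = 2.0 * a_permv   # close to 205 for this toy data
```

Because the parameters are scaled before fitting, converting PERMV to different units would leave `effect_permv` unchanged, which is the scale-invariance property described above.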
11.4.9 Quadratic Model Effect Estimates

For second-degree (quadratic) polynomial models, parameter interaction effects (cross terms xi·xj) and quadratic effects (xj²) can be extracted in addition to linear effects (xj):

y = a0 + Σ(j=1..k) aj·xj + Σ(j=1..k) ajj·xj² + Σ(i<j) aij·xi·xj
Similar to linear model effect estimates, quadratic model effect estimates are determined in a scale-invariant fashion. More specifically, all parameters are scaled to have a mean of zero and a range from -1 to 1. For this reason, for the linear and cross terms in the quadratic model, the scaled estimate is half the predicted response change as the parameter travels through its range (from -1 to 1). To avoid ambiguous interpretation of tornado plots, CMOST reports the actual predicted response change as the parameter (or the cross and quadratic terms) travels from the smallest sample value to the largest sample value. A sample tornado of polynomial effect estimates is shown below:
The following table explains the interpretation of the above tornado plot:

Term                  Scaled Term (see note)                        Effect Estimate
PERMH(3000, 6000)     PERMH = 2(PERMH - 3000)/(6000 - 3000) - 1     273
PERMV(2000, 2800)     PERMV = 2(PERMV - 2000)/(2800 - 2000) - 1     223.2
POR*PERMH             POR × PERMH                                   -62.86
POR(0.22, 0.36)       POR = 2(POR - 0.22)/(0.36 - 0.22) - 1         -57.77
POR*PERMV             POR × PERMV                                   -57.35
POR*POR               POR × POR                                     48.52
HTSORG(0.02, 0.06)    HTSORG = 2(HTSORG - 0.02)/(0.06 - 0.02) - 1   -44.63
PERMH*PERMV           PERMH × PERMV                                 44.62

NOTE: Effect Estimate is the expected change of the objective function when the scaled term travels from -1 to +1.

Analysis of this particular tornado plot suggests the following conclusions regarding the sensitivities of the parameters on cumulative oil production:
1. The two most important effects are the main (linear) effects of PERMH and PERMV.
2. The linear effects of POR and HTSORG are relatively important.
3. There are interaction effects between POR and PERMH, POR and PERMV, and PERMH and PERMV.
4. The non-linear (quadratic) effect of POR*POR is also relatively important.
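The quadratic model can be expressed as a least-squares problem by expanding the scaled parameters into linear, quadratic, and cross-term columns. A generic sketch of that expansion (not CMOST's internal code):

```python
import numpy as np

def quadratic_design_matrix(Xs):
    """Columns for y = a0 + sum_j aj*xj + sum_j ajj*xj^2 + sum_{i<j} aij*xi*xj,
    where Xs holds parameters already scaled to [-1, 1] (one column each)."""
    n, k = Xs.shape
    cols = [np.ones(n)]                                   # intercept
    cols += [Xs[:, j] for j in range(k)]                  # linear terms xj
    cols += [Xs[:, j] ** 2 for j in range(k)]             # quadratic terms xj^2
    cols += [Xs[:, i] * Xs[:, j]                          # cross terms xi*xj
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# For k = 3 parameters there are 1 + 3 + 3 + 3 = 10 coefficients to estimate.
Xs = np.random.default_rng(0).uniform(-1, 1, size=(20, 3))
print(quadratic_design_matrix(Xs).shape)  # (20, 10)
```

The coefficient count grows quadratically with the number of parameters, which is why effect screening is useful for trimming the parameter list before fitting.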
11.4.10 Reduced Model Effect Estimates

It is common that some model terms of a linear or quadratic model are not statistically significant. Consider the quadratic model with the following Effect Screening table, where all terms, including those that are statistically insignificant, are included:
Through the Proxy Settings tab, if you set Exclude Statistically Insignificant Terms to True, the proxy model will be built using only those terms that are significant; i.e., those with Prob > |t| values less than the value you set for Significant Probability Alpha. A simple quadratic model can then be built using only significant terms. The Effect Screening table and its corresponding tornado plot for the simple quadratic model are shown below:
11.5 Optimizers

11.5.1 CMG DECE

The CMOST DECE (Designed Exploration and Controlled Evolution) optimizer implements CMG's proprietary optimization method. The DECE optimization method is based on the process that reservoir engineers commonly use to solve history matching or optimization problems. For simplicity, DECE optimization can be described as an iterative optimization process that first applies a designed exploration stage and then a controlled evolution stage.

In the designed exploration stage, the goal is to explore the search space in a designed random manner such that maximum information about the solution space can be obtained. In this stage, experimental design and Tabu search techniques are applied to select parameter values and create representative simulation datasets.

In the controlled evolution stage, statistical analyses are performed on the simulation results obtained in the designed exploration stage. Based on the analyses, the DECE algorithm scrutinizes every candidate value of each parameter to determine whether there is a better chance of improving solution quality if certain candidate values are rejected (banned) from being picked again. These rejected candidate values are remembered by the algorithm and will not be used in the next controlled evolution stage. To minimize the possibility of being trapped in local minima, the DECE algorithm checks rejected candidate values from time to time to make sure previous rejection decisions are still valid. If the algorithm determines that certain rejection decisions are no longer valid, the rejection decisions are recalled and the corresponding candidate values are used again.
The DECE optimization method has been successfully applied in a number of real-world reservoir simulation studies, including:

• History matching for a highly heterogeneous black oil model
• History matching of cold heavy oil production with aquifer
• History matching of a cyclic steam stimulation process
• NPV optimization for a post-primary SAGD model with aquifer
• NPV optimization for a 6 well pair SAGD model

The results demonstrate that the DECE optimization method is reliable and efficient. Therefore, it is one of the recommended optimization methods in CMOST.
11.5.2 Latin Hypercube plus Proxy Optimization

Use of this optimization algorithm involves the following four steps:

1. Latin Hypercube Design: The purpose of Latin hypercube design is to construct combinations of the input parameter values so that the maximum information can be obtained from the minimum number of simulation runs. Latin hypercube design is chosen here because it can handle any number of input parameters with mixed levels. See Latin Hypercube Design for further information.

2. Proxy Modeling: In this step, an empirical proxy model is built using training data obtained from the Latin hypercube design runs. The proxy model options available are polynomial regression and ordinary kriging. Polynomial regression models have been widely used for the analysis of physical and computer experiments due to their ease of understanding, flexibility, and computational efficiency. The cost of the ordinary kriging interpolation estimate is normally significantly higher than the cost of the polynomial regression estimate; however, it is still orders of magnitude faster than actual simulation and it often provides more accurate predictions than polynomial models. Refer to Proxy Modeling for further information.

3. Proxy-based Optimization: Due to the intrinsic limitations of proxy models, it is generally recognized that they usually cannot give accurate predictions for highly non-linear multidimensional problems. Therefore, the optimal solution obtained from the proxy model may not be the true optimum for the actual reservoir model, and certain suboptimal solutions of the proxy model may turn out to be the true optimum for the actual reservoir model. To counteract false optimum predictions, a pre-defined number of possible optimum solutions (i.e., suboptimal solutions of the proxy model) are generated to increase the chance of finding the global optimum solution.
4. Validation and Iteration: For each possible optimum solution found through proxy optimization, a reservoir simulation needs to be conducted to obtain the true objective function value. To further improve the prediction accuracy of the proxy model, the validated solutions can be added to the initial training data set. The updated training data set can then be used to build a new proxy model. With the new proxy model, a new set of possible optimum solutions can be obtained. This iterative procedure can be continued for a given number of iterations or until a satisfactory optimal solution is found.

The following figure illustrates the workflow of the Latin hypercube plus proxy optimization algorithm:

[Workflow figure: generate initial Latin hypercube design → run simulations using the design → get initial set of training data → build a proxy model (polynomial or ordinary kriging) using the training data → find possible optimum solutions using the proxy → run simulations using these possible solutions → if the stop criteria are not satisfied, add the validated solutions to the training data and repeat; otherwise, stop.]
One unique characteristic of Latin Hypercube plus Proxy optimization is a jump in the solution quality after the Latin hypercube design is finished and proxy optimization starts. For example, as shown in the following figure, after the initial 60 Latin Hypercube design runs, the global optimum solution is quickly found within two iterations of proxy optimization (there are 10 experiments in each iteration).
[Figure: objective function value vs. experiment number, showing the initial Latin hypercube design runs followed by a jump in solution quality once optimization using the proxy begins.]
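The four-step loop can be sketched in miniature. The one-parameter "simulation", the stratified sampling standing in for a full Latin hypercube design, and the quadratic polynomial proxy below are all toy stand-ins for illustration, not CMOST's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    # Stand-in for an expensive reservoir simulation (true optimum at x = 0.7).
    return -(x - 0.7) ** 2

# Step 1: stratified initial design - one sample per interval, in the spirit
# of a Latin hypercube (a real LHD stratifies every parameter dimension).
X = (np.arange(8) + rng.random(8)) / 8.0
Y = simulate(X)

for _ in range(3):
    # Step 2: build a quadratic polynomial proxy from the training data.
    proxy = np.polynomial.Polynomial.fit(X, Y, deg=2)
    # Step 3: proxy-based optimization - keep several promising candidates,
    # not just the single proxy optimum.
    cand = rng.random(200)
    best = cand[np.argsort(proxy(cand))[-3:]]
    # Step 4: validate with "simulation" and enlarge the training set.
    X = np.concatenate([X, best])
    Y = np.concatenate([Y, simulate(best)])

x_opt = X[np.argmax(Y)]  # best validated solution, near 0.7
```

Each pass through the loop corresponds to one iteration of the workflow figure: the validated solutions are folded back into the training data before the proxy is rebuilt.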
11.5.3 Particle Swarm Optimization

Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by James Kennedy and Russell C. Eberhart in 1995, inspired by social behavior of bird flocking and fish schooling. Social influence and social learning enable a person to maintain cognitive consistency. People solve problems by talking with other people about them and, as they interact, their beliefs, attitudes, and behaviors change. The changes can be depicted as the individuals moving toward one another in a sociocognitive space. Particle swarm simulates this kind of social optimization.

The system is initialized with a population of random solutions and searches for optima by updating generations. The individuals iteratively evaluate their candidate solutions and remember the location of their best success so far, making this information available to their neighbors. They are also able to see where their neighbors have had success. Movements through the search space are guided by these successes, with the population usually converging towards good solutions.

For information about configuring a CMOST particle swarm optimization, refer to Particle Swarm Optimization (PSO).
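A minimal PSO sketch on a toy two-parameter problem illustrates the update mechanics. The update rules and coefficients (inertia w = 0.7, cognitive and social weights c1 = c2 = 1.5) are generic textbook choices, not CMOST's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Toy function to minimize; optimum at (0.5, 0.5).
    return np.sum((x - 0.5) ** 2, axis=-1)

n_particles, dim = 20, 2
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
pos = rng.random((n_particles, dim))       # population of random solutions
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]        # best success seen by the swarm

for _ in range(60):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # Each particle is pulled toward its own best and the swarm's best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

# gbest converges toward (0.5, 0.5) as the swarm shares its successes
```

The "memory" variables pbest and gbest are the code analogue of individuals remembering their own best success and observing their neighbors' successes.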
11.5.4 Differential Evolution

Differential Evolution (DE) is a powerful global optimization algorithm that was introduced by Storn and Price (1995)¹, based on their solution to the Chebychev polynomial fitting problem. DE uses a fixed number, Np, of vectors (solutions) in each population (or generation), each vector being a combination of parameter values. To create new solutions, DE evolves the population by arithmetically operating on these vectors (solutions). The process involves four steps: initialization, mutation, crossover, and selection. The system is initialized with a population of random solutions or predefined solutions. It then searches for optima by updating the populations/generations. The mutation process involves adding a scaled difference of two solutions, using factor F, to the best solution in each generation, to evolve the population/generation. The crossover operation uses factor Cr to increase the population diversity. Finally, a selection operator is applied to preserve the optimal solutions for the next generation. For information about configuring a CMOST differential evolution, refer to Differential Evolution (DE).

11.5.5 Random Brute Force Search

The brute force search method is a straightforward optimization method that evaluates all possible solutions and decides afterwards which one is the best. It is feasible only for small problems in terms of the dimensionality of the search space, since CMOST requires that the search space (the number of all possible parameter value combinations) be less than 65536. To address the limitations of the brute force search, CMOST has implemented random search methods for optimization, based on exploring the domain in a random manner to find optimum solutions. These are the simplest methods of stochastic optimization and can be quite effective for some problems (small search space and fast-running simulation jobs).
There are many different algorithms for random search such as blind random search, localized random search, and enhanced localized random search. The algorithm implemented in CMOST is blind random search. This is the simplest random search method, where the current sampling does not take into account the previous samples. That is, this blind search approach does not adapt the current sampling strategy to information that has been garnered in the search process. One advantage of blind random search is that it is guaranteed to converge to the optimum solution as the number of function evaluations (simulations) gets large. Realistically, however, this convergence feature may have limited use in practice since the algorithm may take a prohibitively large number of function evaluations (simulations) to reach the optimum solution. For information about random brute force search configuration settings, refer to Random Brute Force Search.
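Blind random search is simple enough to sketch in a few lines. The objective function below is a toy stand-in for a fast-running simulation; the point is that each sample is drawn independently, ignoring everything learned so far, and only the best value seen is retained:

```python
import random

random.seed(42)

def objective(x, y):
    # Toy stand-in for a fast-running simulation; minimum at (3, -1).
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

best_val, best_point = float("inf"), None
for _ in range(5000):
    # Blind: the new sample does not depend on any previous sample.
    x, y = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    val = objective(x, y)
    if val < best_val:
        best_val, best_point = val, (x, y)

# best_val shrinks toward 0 as the number of evaluations grows,
# illustrating the (slow) convergence guarantee discussed above
```

The convergence guarantee is visible here: with enough samples the best point eventually lands near the optimum, but nothing steers the search, so the evaluation budget can become prohibitively large in higher dimensions.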
¹ Storn, R. and Price, K., "Differential Evolution - A Simple Efficient Adaptive Scheme for Global Optimization over Continuous Spaces", Technical Report 95-012, Int. Comp. Sci. Inst., Berkeley, CA, 1995.
12 Glossary
The following terms are:

• CMOST terms, or terms that have specific meaning within the CMOST context.
• Terms needed to describe the application and use of CMOST.
Term
Definition
Base 3tp File
File created by Results 3D using the base IRF. The base 3tp file is used by CMOST as the basis for displaying plots in Results Graph.
Base Dataset
A valid dataset for any CMG simulator that is used as the basis for a CMOST study. The master dataset is derived from the base dataset.
Base IRF
Simulation results file which uses default parameter values.
Base Session File
File created by Results Graph using the base IRF. The base session file is used by CMOST as the basis for displaying plots in Results Graph.
Box-Behnken Design
In Uncertainty Assessment, a set of experiments designed to have more runs at the middle values of the input parameters.
Brute Force Search
History Matching and Optimization method in which all combinations of parameter values are tested.
Candidate Values List
List of values that will be substituted for a discrete type parameter in a Master Dataset.
Central Composite Design
In Uncertainty Assessment, a set of experiments with runs which are evenly distributed at low, middle, and high values of the input parameters.
Characteristic Date Time
Dates used in the calculation of an objective function. Characteristic date times include:
• Built-in fixed date times: the simulation start and end date times derived from the SR2 files.
• Fixed date times, entered by users.
• Dynamic date times from original time series, such as the date the value of an original time series exceeds a certain quantity.
• Dynamic date times from user-defined time series.
CMG DECE
See DECE.
CMM Editor
Tool for viewing, navigating, and editing the CMOST Master Dataset (.cmm) and related include (.inc) files. For further information refer to CMM File Editor.
CMM File
See Master Dataset.
CMM File Editor
See CMM Editor.
CMOST
CMG’s sensitivity assessment (SA), history matching (HM), optimization (OP), and uncertainty assessment (UA) tool.
CMR File
Results file from earlier version of CMOST. Refer to Converting old CMOST Files to new CMOST Files for information about converting CMR files to new CMOST project and study files.
CMT File
Task file from earlier version of CMOST. Refer to Converting old CMOST Files to new CMOST Files for information about converting CMT files to new CMOST project and study files.
Constraint
In History Matching and Optimization, used to prevent unnecessary simulation runs and to allow users to change Objective Function values when constraints are violated. For further information, refer to Hard Constraint and Soft Constraint.
Cross Plot
XY plots which are used to identify trends and relationships. The axes can, for example, be parameters or objective functions. For further information, see Parameter Cross Plots and Objective Function Cross Plots.
Date Time
Refer to Characteristic Date Time for information about date times used in CMOST.
DE (Differential Evolution)
History matching and optimization method in which the run is initialized with a population of random solutions or pre-defined known ones. DE attempts to find parameter values in an intelligent manner to get optimal solutions. Refer to Differential Evolution (DE) for more information.
DECE (Designed Exploration Controlled Evolution)
CMG-proprietary History Matching and Optimization method. For further information, see CMG DECE.
Dictionary File
Text file that contains a repository for descriptions (name, dimensions, data range, and so on) of simulation data items in the SR2 files.
Dynamic Date Time
A date time from an original or a user-defined time series, on which a condition is met for the first or the last time; for example,
the first date time at which a property reaches a critical value.

Experiment
A CMOST experiment is defined by a unique set of input parameters and objective functions.
Experimental Design
Definition of a set of experiments, optimally selected to obtain information about a response.
FHF (Field History File)
A text file containing reservoir production or injection data for one or more wells. Field History Files are required only for History Matching.
Fixed Date Simulation Results Observer
A Results Observer that collects data at one point in time for each simulation.
Fixed Date Times
User-defined fixed date, for example, YearEnd2011, used in the determination of an objective function, such as the cumulative oil produced by the end of 2011. Refer to Characteristic Date Times for information about specifying fixed date times.
Fluid Contact Depth Series
If the SR2 files contain fluid saturation data, CMOST can calculate gas-oil, water-oil, and water-gas contact depths at well locations. These depths are calculated for each time step that fluid saturation data is available. These depths can then be used as time series data for history matching.
Formula
Equation entered in a Master Dataset to perform calculations on values during a CMOST run.
Fractional Factorial Design
In classic experiment design, a sampling method in which a subset of the samples determined from a full factorial design is chosen to determine information about the important aspects of a study.
Full Factorial Design
Study with experiments that take on all possible combinations of parameter values at two or three selected levels.
Fundamental Data
Data (time, time series, distance vs. depth, and fluid contacts) that is obtained or calculated directly from SR2 files, and used for history matching and optimization.
Hard Constraint
If a hard constraint is violated, the simulation run will not take place as these constraints are checked by the CMOST engine prior to the start of the run. See Hard Constraints for information about specifying hard constraints.
Histogram
A graph with values or ranges of values on the x-axis and bars, the height of which represents occurrence, in the y direction.
HM (History Match)
CMOST analysis to match simulation results to production history.
History Matched Model
See Matched Model.
History Match Error
Percentage relative error between simulation results and production history obtained from, for example, a Field History File.
Include File
A data file (an array of porosity data, for example) that is included in a Master Dataset by reference.
Intermediate Parameter
A parameter that is entered in the Parameters table to help define the relationship between two other parameters. For an example, refer to To add an intermediate parameter.
IRF (Indexed Results File)
Text file in the SR2 file system describing the data in the MRF (Main Results File) and how to obtain this data.
Kriging
Estimating the values of geostatistical variables at locations where samples have not been taken, using weighted values of neighboring samples.
LHD (Latin Hypercube Design)
Technique for constructing combinations of input parameter values so that the maximum information can be obtained from the minimum number of simulation runs.
Latin Hypercube Plus Proxy Optimization
Latin hypercube design is used to construct experiments, then an empirical proxy model is built using the training data obtained from the Latin hypercube design runs. The proxy model is then used to determine the optimal solution. See Latin Hypercube plus Proxy for further information.
Local Objective Function
A function that the user wants to minimize (history match error, for example) or maximize (net present value, for example). Refer to Objective Function for additional information.
Master Dataset (CMM)
Version of the Base Dataset that has been modified to tell CMOST where to enter different parameter values, thereby creating new datasets for each experiment.
Match Quality
In history matching, a measure of the match between the results of a CMOST study and a field history file. Refer to History Match Quality for further information.
Matched Model
Model produced by minimizing the history match error. Also referred to as a history matched model.
Monte Carlo Simulation
Simulations that involve repeated generation of outputs using randomly generated inputs which follow defined probability distributions.
Monte Carlo Simulation Using Proxy
Uncertainty Assessment method. Using Monte Carlo simulation, inputs are randomly generated from probability distributions to
simulate the process of sampling from an actual population. These inputs are then fed into the response surface (proxy) model, which is used to generate outputs and determine the uncertainty in the reservoir model. See Monte Carlo Simulation Using Proxy for further information.
Monte Carlo Simulation Using Reservoir Simulation
Uncertainty Assessment method. In this case, the inputs selected from the Monte Carlo simulation are run through the simulator to generate outputs and determine the uncertainty in the reservoir model. See Monte Carlo Simulation Using Simulator for further information.
MRF (Main Results File)
Main Results File, a binary file in the SR2 file system containing simulation data.
NPV (Net Present Value)
Stream of future cash flows discounted to a given date (present date or base date) to reflect the time value of money and other factors, such as investment risk.
Objective Function
An expression or quantity that the user wants to minimize or maximize. In the case of History Matching, for example, the user wants to minimize the error between field data and simulation results. In the case of Optimization, the user may want to maximize net present value.
Observer
See Results Observer.
OPAAT (One Parameter At A Time Sampling)
Traditional method for performing Sensitivity Analysis studies, in which information about the effect of a parameter is determined by varying only that parameter. The procedure is repeated, in turn, for all parameters to be studied. Refer to One-Parameter-at-a-Time Sampling for more information.
OP (Optimization)
Identification of an optimal field development plan, and operating conditions that will produce either a maximum or minimum value for objective functions that the user has specified, in particular the global objective function (OF, for example, the net present value, or NPV) and subsidiary OF’s, which reflect the influence of selected operating parameters.
Optimal Model
Model determined from an optimization study; i.e., the parameter values which, for example, maximize net present value, or NPV.
Optimizer
Algorithm used to find the optimal solution for history matching or optimization studies. In the case of CMOST, these algorithms include CMG DECE Optimizer, Latin Hypercube Plus Proxy Optimization, Particle Swarm Optimization, and Random Brute Force Search.
Ordinary Kriging
Kriging that relies on the spatial correlation structure of the data to determine the weighting values that should be applied for a particular location; for example, the further the data point is from the location, the lower the weighting factor.
Origin
Source of simulation data, for example, a well or a field.
Original Time Series
Time series data obtained directly from a simulator's SR2 files.
Orthogonality
In CMOST, the orthogonality of an experiment design is measured by the maximum pair-wise correlation of the columns of a design matrix. Refer to the information about Orthogonality.
.out File
A text file that echoes the contents of the .dat file, and also includes simulation results. Users are able to read this file.
Parameter
Depending on the experimental design, values are substituted for parameters in the Master Dataset, either from a Candidate Values List or a formula.
PSO (Particle Swarm Optimization)
History Matching and Optimization method in which the run is initialized with a population of random solutions. Navigation through the search space is guided by the best success so far, which usually results in a convergence towards the best solution. Refer to Particle Swarm Optimization for more information.
Plackett-Burman Design
In Sensitivity Analysis, a screening method in which the resulting number of experiments is a multiple of four. This method can be used when you have a large number of potential factors and you want to quickly determine those that will most affect the objective function.
Pre-simulation Commands
Commands that are used to modify the experiment dataset before it is submitted to a simulator; for example, users may want to adjust variogram parameters in History Matching.
Prior Probability Distribution Function
In Uncertainty Assessment, the probability distribution of the input parameter values. The information is used to formulate the parameter values used in uncertainty tests, so that their distribution reflects reality.
Project
A collection of studies defined for the purpose of characterizing the performance of, for example, a field, sector, group, or even a single well. CMOST projects consist of one or more studies, each of which consists of one or more experiments.
Property vs. Distance Series
Type of data series, such as saturation vs. distance along the well, which is retrieved from the SR2 files for one instant in time. Used for history matching, property vs. distance series data can be compared with data obtained from one or more well log files.
Proxy Dashboard
Through the Proxy Dashboard, you can immediately start to inspect and assess the effects of varying parameter values on time series while the study is running. Refer to Proxy Dashboard for further information.
Proxy Model
An empirical model built using data obtained from simulation runs. The proxy model will typically run several orders of magnitude faster than actual simulations. Refer to Proxy Modeling for more information.
Proxy-based Optimization
Optimization method, in which a predefined number of possible optimal solutions, obtained from the proxy model, is run through the simulator to obtain the true optimal solution. See Proxy-based Optimization.
Random Brute Force
History Matching and Optimization method in which all combinations of parameter values are tested, with the starting point and path through the parameter values different for each run. See Random Brute Force Search for further information.
RSM (Response Surface Methodology)
For Sensitivity Analysis and Uncertainty Assessment using classical experimental design or Latin hypercube design, a response surface methodology is applied. Response surface methodology (RSM) explores the relationships between input variables (parameters) and responses (objective functions). A set of designed experiments is used to build a proxy model (approximation) of the reservoir objective function. The most common proxy models take either a linear or quadratic form. After a proxy model is built, tornado plots displaying a sequence of parameter estimates are used to assess parameter sensitivity. Refer to Response Surface Methodology for further information.
Restart (.rst) File
This file contains the information that allows a simulation to continue from a previously halted run.
Results File (CMR)
With CMOST 2012 and earlier, results are saved to a CMR file.
Results Observers
Simulation outputs that CMOST caches in its results file. During CMOST runs, results observers display results specified by the user. As the run progresses, more curves appear on the plots, with the optimal runs highlighted. The user can also highlight the results of specific experiments.
R-Square (R2)
Indicates how well a proxy model fits observed data. An R2 of 1 occurs when there is a perfect fit (the errors are all zero). An R2 of 0 means that the proxy model predicts the response no better than the overall response mean.
R-Square Adjusted
Modification of R2 that adjusts for the number of explanatory terms in a model. Unlike R2, the adjusted R2 increases only if the new term improves the proxy model more than would be expected by chance. The adjusted R2 can be negative, and it will always be less than or equal to R2.
R-Square Predicted
Indicates how well a proxy model predicts responses for new observations. Ranging between 0 and 1, larger values suggest models of greater predictive ability.
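The R-Square diagnostics above follow the standard statistical definitions; a hedged sketch of those formulas (not code taken from CMOST):

```python
def r_square(observed, predicted):
    # R2 = 1 - (residual sum of squares) / (total sum of squares)
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def r_square_adjusted(observed, predicted, n_terms):
    # n_terms: number of explanatory terms in the proxy model.
    n = len(observed)
    r2 = r_square(observed, predicted)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_terms - 1)

observed = [1.0, 2.0, 3.0, 4.0]
print(r_square(observed, observed))              # 1.0 (perfect fit)
print(r_square(observed, [2.5, 2.5, 2.5, 2.5]))  # 0.0 (mean predictor)
```

A perfect proxy gives R2 = 1; a proxy that always predicts the overall mean gives R2 = 0, matching the glossary definitions.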
Run Configuration
Specification of the machines to which CMOST will submit jobs; for example, to the user’s local machine or to a cluster of machines accessible through the network.
Sampling Method
Method by which the parameter space is sampled when performing a Sensitivity Analysis or Uncertainty Assessment. For further information, refer to Sampling Methods.
SA (Sensitivity Analysis)
Analysis carried out to determine which parameters have the greatest effect on simulation results. This information is then useful in suggesting parameters that can be eliminated from consideration in subsequent studies.
Soft Constraint
Allows the user to override objective function values if they violate the constraint. Checking for this violation takes place while the simulation is being run. A penalty for constraint violation can also be defined. See Soft Constraints for information about specifying soft constraints.
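One common way a soft-constraint penalty can work is to subtract a weighted violation from the objective function value rather than rejecting the experiment; the function and names below are purely illustrative, not CMOST's API:

```python
def penalized_objective(objective_value, constrained_value, limit, weight=1000.0):
    # Violation is zero while the constraint is satisfied,
    # and grows linearly once the limit is exceeded.
    violation = max(0.0, constrained_value - limit)
    return objective_value - weight * violation

# Within the limit the objective is unchanged; beyond it, it is penalized.
print(penalized_objective(500.0, 95.0, limit=100.0))   # 500.0
print(penalized_objective(500.0, 104.0, limit=100.0))  # -3500.0
```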
Special Dictionary File
Dictionary file required to process SR2 files produced by a special simulator, such as the STARS-ME simulator.
SR2 Files
Group of files containing the results of a simulation run—an IRF (Indexed Results File) and an MRF (Main Results File). Results Graph and 3D use the SR2 files for post-processing of simulation output.
SR2 Processing Stack Size
Stack size (MB) used by the SR2 reader to read SR2 files. The default stack size is 40 MB.
Study
A set of parameters is varied in a defined way to assess the sensitivity of objective functions to the parameters (SA, Sensitivity Analysis), to match simulator outputs with a history file (HM, History Match), to optimize the value of objective functions by varying operating conditions (OP, Optimization), or to assess the variation of an objective function due to uncertainty in the value of a reservoir parameter (UA, Uncertainty Assessment).
Study File (.cms)
A file that contains all of the configuration data needed to run a CMOST study.
Study Folder (.cmsd)
Folder that contains all of the study .dat, SR2, .log, and .vdr files. The retention of .dat, .log and SR2 files is as specified by the user in the Job Record and File Management area of the Simulation Settings page.
Time Series Simulation Results Observer
Results observer which collects and plots data that changes with time, such as rate and pressure for all times during the simulation runs.
Tornado Plots
A tornado plot is produced for each objective function. Parameters are ordered vertically, from the one with the greatest effect on the objective function (longest bar) to the one with the least effect (shortest bar), giving the plot its tornado-like shape.
Training Data
Data used to build a proxy model by analyzing the relationship between input parameters and output objective functions.
Three-level Classical Experimental Design
Experiments take on all possible combinations of three values or “levels” for each input parameter; i.e., a low, median, and high level.
Two-level Classical Experimental Design
Experiments take on all possible combinations of two values or “levels” for each input parameter; i.e., a low and high level.
UA (Uncertainty Assessment)
Analysis carried out to determine the likely variation in simulation results due to uncertainty in input values, in particular reservoir variables.
User-defined Time Series
Time series that is not directly available from the SR2 files, but which can be derived from available SR2 data. Refer to User-Defined Time Series for further information.
Variogram
Used in kriging, the variogram describes the variance of the difference between the field values at two locations (x and y) across realizations of the field (Cressie, N., 1993, Statistics for Spatial Data, Wiley Interscience).
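For illustration (assumed names, not CMOST code), the classical empirical semivariogram for a 1-D field averages half the squared difference between field values at points separated by a given lag:

```python
def empirical_semivariogram(xs, zs, lag, tol=1e-9):
    # gamma(h) = 0.5 * mean of (z_i - z_j)^2 over pairs whose
    # separation |x_i - x_j| is (approximately) the lag h.
    sq_diffs = [
        (zi - zj) ** 2
        for (xi, zi) in zip(xs, zs)
        for (xj, zj) in zip(xs, zs)
        if xi < xj and abs(abs(xi - xj) - lag) <= tol
    ]
    return 0.5 * sum(sq_diffs) / len(sq_diffs)

# An alternating field varies strongly at lag 1 but repeats at lag 2.
print(empirical_semivariogram([0, 1, 2, 3], [0, 1, 0, 1], lag=1))  # 0.5
print(empirical_semivariogram([0, 1, 2, 3], [0, 1, 0, 1], lag=2))  # 0.0
```

A variogram model fitted to such empirical points is what kriging then uses to weight nearby observations.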
VDR (Vector Data Repository) Files
Files containing compressed simulation data from CMOST runs, which are used to calculate objective functions. The files are compressed to reduce disk space and runtime.
13 Index
A
Advanced objective functions, 100
Advanced settings, 61
B
Base dataset, 21
Base files, 20
Base IRF, 21
Base session file, 21
Base SR2 files, 21
Base SR2 Info area, 60
Basic simulation result, 90
Best practices (for using CMOST), 32
C
Characteristic date times, 88
Classical experimental design, 216
CMG DECE
  optimization, 229
  engine settings, 117
CMM File Editor, 173
  block selection, 178
  comments, 175
  context menu, 174
  creating/inserting CMOST parameters, 174
  deleting parameters, 175
  enable/disable syntax, 178
  find/replace text, 178
  include files, 175
  keyboard shortcuts, 180
  multiple views, 179
  navigation tools, 177
  starting, 173
  syntax enabling and disabling, 178
  toggle outlining, 177
CMOST
  base files, 20
  best practices, 32
  closing, 57
  components, 20
  concepts, 20
  configuring to work with Launcher, 193
  file system, 21
  formulas, 27
  master dataset, 24
  names, 52
  navigating, 35
  opening, 35
  overview, 17
  project components, 20
  required fields, 52
  running and controlling, 111
  tab display, 53
  tables, 54
  user interface, 30
CMOST Formula Editor, 181
  built-in functions, 183
  constants (in formulas), 181
  formula calculation order, 183
  functions (in formulas), 181
  operators (in formulas), 182
  parts (of formulas), 181
  variables (in formulas), 182
CMOST formulas, 27
Constraints
  hard constraints, 80
  soft constraints, 107
Control Centre, 111
Converting files to new CMOST, 11
Creating and editing input data, 59
Curves, 50
D
Data points, 50
Default Field Values, 53
Differential evolution, 233
  engine settings, 121
E
Engine settings, 114
  CMG DECE optimization, 117
  differential evolution, 121
  external engine, 123
  Latin hypercube plus proxy optimization, 118
  Monte Carlo simulation using proxy, 118
  Monte Carlo simulation using simulator, 120
  one-parameter-at-a-time (OPAAT), 120
  particle swarm optimization (PSO), 121
  random brute force search, 121
  response surface methodology, 122
Experiment
  creating, 139
  highlighting, 50
  quality, checking, 146
Experiments Table, 131
  checking experiment quality, 146
  configuring, 143
  creating experiments, 139
  exporting to Excel, 147
  navigating, 132
  reprocessing experiments, 147
  viewing simulation log, 147
Exporting time series data, 161
External engine
  external engine and user-defined executable, 123
F
Field data info area, 60
Field Default Values, 53
Field history file, 21
File Editor (see CMM File Editor)
File system, 21
Fluid contact depth series, 70
Formula Editor (see CMOST Formula Editor)
Formulas
  CMOST, 27
  examples, 27
Fundamental data, 61
G
General information area, 59
General properties, 59
Getting started, 35
Global objective function candidates, 105
Glossary, 235
H
Handling large files, 181
Hard constraints, 80
Head nodes, 30
Help
  obtaining, 16
Highlighting (an experiment), 50
History match (HM)
  overview, 18
History match quality, 91
I
Include files, 28
Intermediate parameter, 75
J
Jscript
  using in CMOST, 189
L
Large files, handling, 181
Latin hypercube design, 212
Latin hypercube plus proxy optimization, 230
  engine settings, 118
Launcher
  configuring, 15
  configuring to work with CMOST, 193
Licenses, 15
M
Manual
  about, 15
Master dataset, 24
  editing parameters, 77
  referenced files, 29
  syntax, 26
Monte Carlo simulation using proxy
  engine settings, 118
Monte Carlo simulation using simulator
  engine settings, 120
Multiple studies, managing, 41
N
Net present value, NPV, 96
O
Objective functions, 88, 205
  advanced, 100
  global, 105
  history match quality, 91
  net present value, NPV, 96
Observer plots
  property vs. distance series, 162
  time series, 160
One-parameter-at-a-time (OPAAT)
  engine settings, 120
Optimization (OP)
  overview, 18
Optimizers
  CMG DECE, 229
  Latin hypercube plus proxy optimization, 230
  particle swarm optimization, 232
  random brute force search, 233
Original time series, 62
Orthogonality, 210
P
Parameter correlation, 78, 217
Parameterization, 72
Parameters
  adding, 72
  copying, 77
  deleting, 76
  editing in master dataset, 77
  importing from master dataset, 78
  intermediate, 75
  moving in table, 77
  prior probability distribution functions, 75
Particle swarm optimization (PSO), 232
  engine settings, 121
Plots
  copying image, 49
  data points and curves, 50
  highlighting, 50
  saving image, 49
  zooming in and out, 50
Pre-simulation commands, 82
Prior probability distribution functions, 75
Probability distribution functions, 203
Production history files, 21
Project
  creating, 39
  folder, 21
Property vs. distance series
  configuring, 67
  observer plot, 162
Proxy dashboard, 147
  adding experiments, 151
  assessing predictions, 151
  building proxy model, 149
  changing proxy role, 152
  opening, 140
  using, 151
Proxy modeling, 218
R
Random brute force search, 233
  engine settings, 121
Requirements
  files, 20
  computers, 15
  licenses, 15
Reprocessing experiments, 147
Response surface methodology
  engine settings, 122
  proxy modeling, 218
  types of response surface models, 218
  verification plot, 220
Reuse pending, 133
Running and controlling CMOST, 111
S
Sampling methods, 209
  classical experimental design, 216
  Latin hypercube design, 212
  one-parameter-at-a-time (OPAAT) sampling, 211
Screen operations and conventions, 49
Sensitivity analysis (SA)
  overview, 17
Simulation
  files, 20
  settings, 126
Simulation jobs, 152
Simulation settings, 126
  job record and file management, 131
  schedulers, 127
  simulator settings, 129
Soft constraints, 107
Study
  adding existing, 46
  changing display name, 46
  copying, 49
  creating, 41
  engines, 23
  excluding, 47
  importing data from, 48