Monte Carlo Modeling Methods

Using quantitative assessments in environmental claims allocation.

October 27, 2015

A particular challenge for insurers is determining policy triggers, attributing responsibility, and allocating costs for environmental contamination when overlapping coverage of primary and excess policies is involved. Ultimately, this reduces to an exercise in quantifying outcomes, either from historical events or from events that are likely or possible in the future. In both cases, there are typically myriad contributing and interacting variables for which complete information is either lacking or outdated and no longer definitive.

The last 30 years have seen the development and expansion of a wide variety of mathematical and statistical techniques for data analysis and quantitative assessment in business, industry, and science, and these techniques also apply to this particular problem facing claims professionals. One of the more significant advances is a family of approaches collectively known as Monte Carlo (MC) methods. These methods essentially model, or simulate, the random variability that occurs in many events and processes of interest.

Random variability is the “noise” that exists in virtually every measurement we use to describe events in our modern world. Many factors obscure our recognition of variability and of how it influences the inferences and decisions we base on quantitative information. Research in decision theory has demonstrated a natural tendency to rely heavily on past experience and intuition, which arises from the need to impart certainty and consistency quickly to numerical quantities. As a result, fixed numerical quantities, such as an “average,” describe events or processes easily and are immediately internalized.

More importantly, however, conceptualizing the possible spread between high and low values is much more difficult. Similarly, the proportion of different values within this spread of high and low data—what statisticians term a “distribution”—may not be easily recognized or interpreted. An example of the influence of data distribution is the grading of school examinations on a curve, where test results are scaled relative to the bell-shaped, or “normal,” distribution. An “A” is awarded to the scores in the upper tail of the curve, as these represent performance that is less frequent and substantially above the average.

This type of distribution closely approximates and quantitatively describes the range and spread of data associated with many phenomena in our common experience, such as human performance on psychological or educational tests, physical height, product manufacturing error, and consumer demand. While an average is easily calculated and convenient, however, the peril of ignoring range and distribution and relying on the average as the basis for decisions brings to mind the story of the man who drowned crossing a stream that was, on average, six inches deep.

Construction projects and environmental remediation involve complex processes composed of a series of interrelated events, each of which can be conceptualized as an individual experiment. Variables such as cost and schedule duration for each event can be described quantitatively by actual data, by data that we can simulate, or by a distribution proposed from past experience and expert judgment. Because of the interrelationships among the events within the process (system), however, the range and distribution of outcomes become very difficult to conceptualize.

To simplify understanding of these complex processes, we can use the idea of “proportion” embedded within the definition of distribution to estimate the probability of an outcome or occurrence of interest. Using the academic grading example, if 80 students out of 1,000 (a proportion of 8/100) historically scored 87 percent or more correct answers on a particular examination, we could say, all things being equal and with no other information, that there is a probability of 8 percent—calculated as a proportion—that our event in question (i.e., scoring 87 percent or above) will occur. This proportion-as-probability concept is where MC methods find their utility.
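As a minimal sketch of the proportion-as-probability idea, the short Python snippet below generates 1,000 hypothetical exam scores (the distribution parameters are illustrative assumptions, not data from any actual examination) and estimates the probability of scoring 87 percent or above simply as the proportion of simulated scores that meet the threshold.

import random

random.seed(1)
# 1,000 hypothetical exam scores drawn from an assumed bell-shaped distribution.
scores = [random.gauss(75, 10) for _ in range(1000)]

# The probability of the event "score 87 or above" is estimated as a proportion.
p_87_or_above = sum(1 for s in scores if s >= 87) / len(scores)
print(f"Estimated probability of scoring 87 or above: {p_87_or_above:.1%}")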

MC methods use computing power to “crunch” a large quantity of data describing interrelated events into a model that helps foresee the aggregate impact on the overall system or process. Rather than face the impractical proposition of conducting hundreds or thousands of actual tests, an MC simulation links the variability of each event in the network (each with its own range and probability) and compiles the frequency of the final outcomes under the laws of randomness.
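To make that concrete, the following minimal sketch assumes a hypothetical network of three interrelated remediation events, each described by its own triangular distribution of durations; the distributions and the 14-week threshold are illustrative assumptions rather than values from any actual project.

import random

random.seed(42)
N = 10_000  # number of simulated runs of the overall process

totals = []
for _ in range(N):
    # Each event has its own range and distribution (low, high, most likely).
    permitting = random.triangular(2, 6, 3)     # weeks
    excavation = random.triangular(4, 12, 6)    # weeks
    restoration = random.triangular(1, 5, 2)    # weeks
    totals.append(permitting + excavation + restoration)

# Compile the frequency of final outcomes: the proportion of simulated
# projects that finish within 14 weeks approximates that probability.
p_within_14 = sum(1 for t in totals if t <= 14) / N
print(f"Probability of finishing within 14 weeks: {p_within_14:.1%}")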

Specifically for environmental projects, a proficient risk management professional knowledgeable in environmental regulations, engineering, testing, environmental conditions, costs, and the like can translate the uncertainty and variability in each event into the ranges and distributions that make up a quantitative model simulated by MC techniques. For example, some variables, such as the cost per ton to transport and dispose of contaminated soil or the cost to install a groundwater monitoring well, may be characterized at a particular site with only slight variability.

On the other hand, the number of monitoring wells needed or the number of contaminated areas requiring soil excavation on a site that is only partially characterized clearly carries more uncertainty, and its quantitative range and distribution show more variability. The MC simulation process generates thousands of data points, or samples, based upon the range and distribution specified for each component variable; the samples are drawn so that, in aggregate, they reproduce each variable’s specified distribution. These sampled variables feed the system model, and the frequency of the various outcomes produces a distribution, or picture, of final outcomes, each matched with a corresponding proportion (equivalently expressed as a probability).
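The sketch below illustrates that contrast under purely hypothetical assumptions: the unit costs for transportation/disposal and well installation are given tight distributions, the number of wells and the excavated tonnage on a partially characterized site are given much wider ones, and thousands of samples are combined into a distribution of total remediation cost.

import random

random.seed(7)
N = 10_000
total_costs = []

for _ in range(N):
    # Well-characterized variables: only slight variability.
    cost_per_ton = random.gauss(85, 3)         # $/ton, transport and disposal
    cost_per_well = random.gauss(12_000, 500)  # $ per monitoring well

    # Partially characterized site: wider, more uncertain ranges.
    n_wells = random.randint(4, 12)                          # wells needed
    tons_excavated = random.triangular(500, 5_000, 1_500)    # tons of soil

    total_costs.append(cost_per_ton * tons_excavated + cost_per_well * n_wells)

mean_cost = sum(total_costs) / N
print(f"Mean simulated remediation cost: ${mean_cost:,.0f}")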

Some basic project questions can be answered directly with MC methods. “What is the probability the project will be completed within budget?” is answered by the proportion of simulated outcomes that are less than or equal to the actual budget. The answer to a related question—“How much contingency is necessary to ensure completion of the project within the risk tolerance of stakeholders?”—is the difference between the budgeted amount and the simulated cost at the percentile corresponding to that risk tolerance.

Some MC simulation programs can represent graphically the distribution of all simulated outcomes (e.g., the proportion projected to be completed at various costs), so a given cost to complete is easily matched to an associated probability. This allows contingency amounts to be compared to the risk tolerance of the decision-maker. Thus, we may have a 90 percent probability of completion within budget (i.e., 90 percent of our simulated outcomes are below the budget) and a 99 percent probability that the project will be completed for the budget plus the agreed contingency, with the contingency identified simply as the difference between the budget and the cost at the 99th percentile of the simulated outcomes.
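A hedged illustration of reading those answers directly off the simulated outcomes follows; the lognormal cost distribution, the $500,000 budget, and the 99 percent criterion are hypothetical stand-ins rather than figures from any actual project.

import random

random.seed(11)
N = 10_000
# Hypothetical simulated project costs, sorted so percentiles are easy to read.
costs = sorted(random.lognormvariate(13.0, 0.25) for _ in range(N))

budget = 500_000  # hypothetical approved budget

# Probability of completion within budget = proportion of outcomes <= budget.
p_within_budget = sum(1 for c in costs if c <= budget) / N

# Contingency at the 99 percent criterion: the 99th-percentile simulated cost
# less the budget (floored at zero).
p99_cost = costs[int(0.99 * N) - 1]
contingency = max(0.0, p99_cost - budget)

print(f"P(within budget): {p_within_budget:.1%}")
print(f"99th-percentile cost: ${p99_cost:,.0f}; contingency: ${contingency:,.0f}")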

MC methods also can be applied to estimate the range of times to specific events or processes, which is directly applicable to determining policy triggers and allocating costs involving environmental contamination and long-tail, overlapping coverage of primary and excess insurance. Allocation is a complex process, with different models applied across jurisdictions. However, a two-step process has emerged as a basic framework in which (1) a loss is assigned to a specified time period and (2) the loss is then covered by the applicable policies within the identified time frames.

In a situation involving environmental claims at a former manufacturing site, for example, an assumption of continuous environmental releases may not accurately reflect the actual contributing events or operations. With sufficient site history (and data from similar operations) of waste stream generation and disposition, risk managers can develop models that tie specific damage (such as the spread of a subsurface plume) to a time frame and thereby determine liability temporally based on the applicable insurance coverages.
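As a heavily simplified sketch of that two-step framework, the snippet below assumes a hypothetical release window, a hypothetical plume travel-time distribution, and a hypothetical set of annual policy periods; it assigns each simulated damage date to a year (step one) and reports the proportion of outcomes falling in each year, which would then be matched to the policies in force (step two). It is not a representation of any particular jurisdiction’s allocation model.

import random

random.seed(3)
N = 10_000
policy_years = list(range(1968, 1990))  # hypothetical annual policy periods
hits = {year: 0 for year in policy_years}

for _ in range(N):
    release_year = random.uniform(1970, 1978)    # uncertain start of releases
    travel_years = random.triangular(1, 10, 4)   # plume migration time (years)
    damage_year = int(release_year + travel_years)
    if damage_year in hits:
        hits[damage_year] += 1

# Step (1): the proportion of simulations placing the damage in each year;
# step (2) would match those years to the policies in force at the time.
for year, count in hits.items():
    if count:
        print(f"{year}: {count / N:.1%} of simulated outcomes")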

This same technique also can be applied to determine a specific trigger for coverage. For example, an insurer issued a solid waste closure/post-closure policy for multiple municipalities that provided extended coverage if a given facility required closure prior to a specific date. Consequently, the insurer could quantify its risk and potential claims exposure only by understanding the probability of a facility closing before the policy end date. Information about each facility was easily obtained: the type of facility, estimated remaining waste-handling capacity, population growth rates for the served municipalities, current waste generation rates for the served population, and facility closure plans. The underlying variables were not fixed and were described in the model with ranges and probability distributions based on professional expertise.
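The sketch below shows how such a closure-date question might be framed. The remaining capacity, population, growth rate, and per-capita waste generation figures are hypothetical placeholders rather than data from the facilities described above.

import random

random.seed(5)
N = 10_000
policy_end_year = 12  # hypothetical policy horizon, in years from now
closures_before_end = 0

for _ in range(N):
    remaining_capacity = random.triangular(1_200_000, 2_500_000, 1_800_000)  # tons
    population = random.gauss(150_000, 5_000)                # people served
    growth_rate = random.triangular(0.005, 0.025, 0.015)     # annual growth
    per_capita_waste = random.triangular(0.9, 1.3, 1.1)      # tons/person/year

    # Step the facility forward year by year until capacity is exhausted.
    years = 0
    while remaining_capacity > 0 and years < 100:
        remaining_capacity -= population * per_capita_waste
        population *= 1 + growth_rate
        years += 1

    if years < policy_end_year:
        closures_before_end += 1

print(f"P(closure before policy end date): {closures_before_end / N:.1%}")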

The base model and subsequent MC simulation produced a distribution, or picture, of the final outcomes: the probability of closure prior to the specified date. An additional analysis of the simulated closure dates enabled a determination of how strongly specific underlying variables influenced the result. Termed “sensitivity analysis,” this is accomplished by fixing a target variable at specific values (rather than using its distribution), executing the simulations, and tabulating the overall results.
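A minimal sketch of that one-at-a-time approach follows, reusing the hypothetical landfill inputs from the previous example and fixing the population growth rate at several specific values while the other variables keep their distributions.

import random

def mean_years_to_closure(growth_rate=None, n=5_000):
    # Average simulated years to closure; optionally fix the growth rate.
    total = 0.0
    for _ in range(n):
        capacity = random.triangular(1_200_000, 2_500_000, 1_800_000)
        population = random.gauss(150_000, 5_000)
        g = growth_rate if growth_rate is not None else random.triangular(0.005, 0.025, 0.015)
        per_capita = random.triangular(0.9, 1.3, 1.1)
        years = 0
        while capacity > 0 and years < 100:
            capacity -= population * per_capita
            population *= 1 + g
            years += 1
        total += years
    return total / n

random.seed(9)
# Sensitivity analysis: hold the target variable at fixed values and re-simulate.
for fixed_g in (0.005, 0.015, 0.025):
    print(f"growth fixed at {fixed_g:.1%}: mean closure in {mean_years_to_closure(fixed_g):.1f} years")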

As with any model, careful delineation and understanding of assumptions and limitations is paramount. Quantitative description of the variables by an experienced practitioner is essential, as the risk management professional in effect directs the tools. A complete understanding of the spread and distribution of the data for the various components of the project, and of the proportion concept, is essential for interpreting the outcomes and making subsequent decision recommendations. This process provides a quantitative basis for decision-making by inherently incorporating what is known and unknown into the models and projections, rather than depending on less precise rules of thumb, gut judgments, or similar intuitive assessments and conclusions.

About The Authors
Christopher Spicer

R. Christopher Spicer, CIH, CHMM, is a principal with WCD Group LLC. He has been a CLM Fellow since 2012 and can be reached at (609) 730-0007, cspicer@wcdgroup.com, www.wcdgroup.com.

Jonathan Hoyle

Jonathan Hoyle, PMP, is a senior remediation project manager with WCD Group LLC. He has been a CLM Fellow since 2012 and can be reached at jmhoyle@wcdgroup.com.
