CASIE
Articles from "Response"

Reexamining The Search Management Function
by
John Bownds, Michael Ebersole, David Lovelock and Daniel O'Connor

Copyright ©1994.
All Rights Reserved By The Authors.

Section I

Search theory has seemingly remained unchanged for several years now, while other developments in search management and search technique have occurred. The authors felt it was time to reexamine certain aspects of modern search theory. It is this search theory which often dictates the search management function.

The authors will examine several aspects of the search management function and search theory in the following pages. Topics include: the Mattson Consensus, search area segmentation, optimizing resources in a search, search terminology, how to treat clues, and information about the CASIE search software.

Before launching into the first topic, let's define or perhaps re-define some commonly used and confused search terminology. We'll discuss the uses and abuses of these terms later.

POA technically means Probability of Area. While often used in many different ways, POA usually means, "What is the likelihood that the subject is in this particular search segment?" POA changes for each segment after any portion of the total search area is searched.

POD, according to strict search theory, means Probability of Detection. As you'll see later, while the term "POD" can have several meanings, it often refers to a measure of a search team's effectiveness after coming in from a period of searching. "How well has this segment been searched?" PODs can be applied to resources or segments.

POS is an abbreviation for Probability of Success, and is the product of a search segment's POA and POD (POS = POA × POD). As discussed later, it is not always clear what this number has to do with success, especially if the subject has yet to be found.
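As a quick worked illustration of the formula (the numbers here are invented, not from any actual search): a segment assigned a POA of 40% and searched with a POD of 50% has POS = 0.40 × 0.50 = 0.20, or a 20% chance that this particular effort finds the subject.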

A Mattson Consensus establishes initial POAs for a new search, allowing search managers to deploy resources as they see fit. The Mattson Consensus is an averaging of opinions, and is actually an educated guess (based on subject behavior, lost person characteristics, and the experience, knowledge and hunches of the search experts) of where the subject is most likely to be.

ROW stands for Rest of the World and refers to all of the region outside the designated search area.

Recall how POAs and PODs should be used in typical large-scale searches. Based on the current POAs, resources are deployed into the various search segments. If the subject is found, the search function stops. If the subject is not found, the resources are evaluated for how well they searched their assigned segments, producing PODs. These PODs and the initial POAs are then used to generate updated POAs. These new POAs become the current POAs, and the entire sequence of events is repeated until either the subject is found or the probability that the subject is in the ROW becomes so large that the search is suspended. Be aware, however, that this method assumes the subject remains in the same search segment throughout the search; that is, the methodology described in this paper assumes a stationary subject. If the subject is mobile, computed values can still be useful to the search manager, but the underlying search theory is not yet clearly defined.
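To make the cycle concrete, here is a minimal sketch (in Python) of the standard update for an unsuccessful operational period with a stationary subject. The function name and numbers are invented for illustration, and CASIE's own formulas may differ in detail; a mathematically equivalent formulation applies cumulative PODs to the initial POAs rather than each period's PODs to the current POAs.

    # A sketch of the POA update after an unsuccessful operational period.
    # Assumes a stationary subject; names and numbers are illustrative only.

    def update_poas(poas, pods):
        """Shift POA out of searched segments in proportion to how well they
        were searched, then renormalize so all segments plus the ROW again
        sum to 1. Segments not searched this period (and the ROW) get POD 0."""
        unfound = {seg: poa * (1.0 - pods.get(seg, 0.0)) for seg, poa in poas.items()}
        total = sum(unfound.values())
        return {seg: value / total for seg, value in unfound.items()}

    poas = {"Segment 1": 0.10, "Segment 2": 0.40, "Segment 3": 0.45, "ROW": 0.05}
    pods = {"Segment 3": 0.60}    # only Segment 3 was searched this period
    print(update_poas(poas, pods))
    # Segment 3 falls to 0.18 / 0.73, about 0.25; every other POA,
    # including the ROW, rises accordingly.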

Rethinking the Mattson Consensus

When an agency or organization makes the decision to conduct a full-scale search, the Incident Commander or Search Manager must immediately do two things before resources are deployed: (1) divide the search area into manageable segments, and (2) coordinate the establishment of a Mattson Consensus. Later in this paper we will present some thoughts on search segmentation, but right now let's turn our attention to the Mattson Consensus.

Establishing a Mattson Consensus requires the input of a team of "experts", personnel knowledgeable in search emergencies and the local terrain.

Each expert must assign a numeric probability to every search segment, estimating the relative chance that the lost subject is in that segment. One additional segment, the Rest of the World (ROW), must also be rated. Each evaluator's percentages for all search segments plus the ROW must total 1.00 (or 100%).

The average of all values assigned to each segment constitutes the so-called Mattson Consensus. This represents the best guess about where the subject might be found, based on the experience and subjective "hunches" of a team of local area experts. These judgments combine knowledge of the area with knowledge of lost person behavioral characteristics. The Mattson Consensus is simply an averaging of these experts' opinions: a starting estimate of the subject's chances of being in each search segment. This Probability of Area (POA) for each segment determines where search and rescue resources will initially be deployed. While the POA is not a true probability but a subjective estimate, it is still a critical step in the evolution of a search.
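As a small sketch of the averaging step (the segment names and values below are invented, and this is not CASIE's actual code):

    # Averaging expert POAs into a Mattson Consensus; values are invented.

    def mattson_consensus(expert_poas):
        """Average each segment's POA across experts.
        expert_poas: a list of dicts, each mapping segment name -> POA and summing to 1."""
        segments = expert_poas[0].keys()
        return {seg: sum(poas[seg] for poas in expert_poas) / len(expert_poas)
                for seg in segments}

    experts = [
        {"Segment 1": 0.10, "Segment 2": 0.40, "Segment 3": 0.45, "ROW": 0.05},
        {"Segment 1": 0.20, "Segment 2": 0.30, "Segment 3": 0.40, "ROW": 0.10},
    ]
    print(mattson_consensus(experts))
    # -> Segment 1: 0.15, Segment 2: 0.35, Segment 3: 0.425, ROW: 0.075
    #    (up to floating-point rounding)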

Besides its primary role of getting the search started by creating a hierarchy of POAs for the entire search area, the Mattson Consensus serves an additional, more subtle role. Each time a segment is searched, the current POA is updated, based on the varying efficiencies of the search resources. The Mattson Consensus calculated at the start of the search provides the weighting variables for updating or redistributing POAs for each segment throughout the entire search. Clearly, the Mattson Consensus is a critical component of the search management function. It establishes the initial distribution of POAs in the search area, and repeatedly affects the redistribution of these POAs after resources are deployed (and Probabilities of Detection or PODs for each operational period become available). To be effective, the Mattson Consensus should be an unbiased reflection of the search evaluators' judgement.

However, four major potential sources of bias have been discovered in actual and simulated calculations of the Mattson Consensus:

1. Mathematical Inaccuracy: Some experts just can't add very well; i.e., they have trouble making their POAs add up to 1 or 100%. In other cases, POAs of zero may be inappropriately assigned to a segment even though every segment carries some likelihood that the subject is there.
2. Numeric Subjectivity: Each evaluator has a different perception of what a POA percentage represents. Numbers mean different things to different people.
3. Remainder Bias: This refers to an evaluator's tendency to under- or overrate the last few segments based on how many POA percentage points remain to be allocated.
4. Underestimating ROW: Some experts assign a value of zero to the ROW POA. This should never be allowed to happen, as there is always some chance that the subject is out of the search area. Once assigned a POA of zero, the ROW stays locked at zero throughout the entire search, and can no longer act as a barometer of how the search is progressing (a brief illustration follows this list).
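To see why a zero ROW stays locked at zero, apply the update_poas() sketch from earlier in this section (again with invented numbers): the ROW's POA is multiplied by (1 - 0) and then renormalized, and zero remains zero no matter what is searched.

    # A zero ROW POA can never recover; values are invented for illustration.
    poas = {"Segment 1": 0.50, "Segment 2": 0.50, "ROW": 0.0}
    for pods in ({"Segment 1": 0.7}, {"Segment 2": 0.7}):
        poas = update_poas(poas, pods)      # the sketch defined earlier
    print(poas["ROW"])                      # 0.0 after every operational period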

We'd like to expand on the first three problems in more detail.

Mathematical Inaccuracies tend to occur under two conditions: first, when there are many search segments to be evaluated, and second, when very strong clues are present while the search is still being planned.

For training purposes, a Mattson Consensus is often simulated with four or five search segments. In this case, it is fairly easy to assign POAs that add up to 100%. In real searches, however, there may be 20 or more segments that need to be evaluated. The greater the number of segments, the greater the potential for a wrong total.

Another problem arises when strong clues are present at the beginning of a search. Large perceived differences between segment POAs have been shown to result in a curious phenomenon. Some evaluators become so convinced that the subject is in a particular segment that they rate one or more of the other segments with a POA of zero. If it is physically possible for the subject to reach such "unlikely" segments, no matter how strong the clues in the more likely segments, some level of chance that the subject is in the search segment should be assigned. A zero POA means there is absolutely no chance the subject is in the segment, and can we ever be sure of that? If this is truly the case, then this particular search segment should have been part of the ROW.

Numerical Subjectivity can bias results because of the preconceived notions each evaluator has about the relative worth of numbers. An accountant may expect a number to be accurate to within two decimal places, while a trained statistician sees numbers as "indicators" surrounded by an error band in which the real value falls.

In the subjective context of a Mattson Consensus, the actual numeric value assigned to a segment is less important than what the number means relative to the values of the other segments. For example, suppose we have a search with three segments. After a Mattson Consensus, the initial POAs are 10%, 40%, and 45% for segments 1, 2, and 3, respectively, and the ROW equals 5%. The segment with the POA of 10% has the lowest search priority and, based on the experts' opinions, is less than one-fourth as likely to contain the subject as the highest-rated segment.

In an 18-segment search area, however, suppose 17 segments had initial POAs of 5% each, one had a POA of 10%, and the ROW was 5%. In this case, the segment with a POA of 10% obviously has the highest search priority. The experts who participated in this Mattson Consensus are indicating that they believe the subject is twice as likely to be in that particular segment as in any other single segment.

In a Mattson Consensus, one's feeling about what a particular number "means" is meaningless except in relation to the values assigned to the other segments. A good Mattson Consensus captures the relative differences across segments and avoids hair-splitting exercises in assigning numeric probabilities.

A mathematically sophisticated search manager realizes that two POAs, one 21% and the other 19%, may be virtually identical, since they are based on a range of hunches, experience and numeric subjectivity of the evaluators. Only as the gap between the two POAs grows can the search manager increase his confidence that there is a real substantive difference between the two.

Finally, a Remainder Bias has occasionally been noted by the authors, in real and simulated search scenarios, when the search area contains a large number of segments. For example, an evaluator faced with 20 segments can easily mismanage the distribution of POA percentages by giving disproportionately high values to the early segments. By the time the 18th, 19th, and 20th segments need to be evaluated, there may not be enough of the starting POA left to allocate; that is, not enough of the original 100% remains to allow the evaluator to indicate how he really feels.

Given this situation, an evaluator must go back and reduce the value of some segments to allow enough of a percentage for the last few, while ensuring that the total equals 1 or 100%. The temptation here is to shave percentages off the closest segments, e.g., the 16th and 17th, rather than go back and (as should be done) reallocate percentages across all 20 segments.

Conversely, an evaluator may have a large amount of the original 100% left by the time he or she arrives at the final segments. The temptation here is to dump this remainder onto these segments, giving them a search priority higher than intended. In either case, having too little or too much remaining POA to allocate across the last few search segments can bias the Mattson Consensus away from the true averages that should have been computed.

To circumvent the above problems, O'Connor has suggested an alternative to the standard Mattson Consensus, based on a scale of relative values. Instead of assigning a numerical value to each segment and the ROW, the expert specifies a letter corresponding to the likelihood that the subject is in a particular segment. Letters are assigned according to the scheme in Table 1.

Table 1
A - very likely in this segment
B 
C - likely in this segment
D 
E - even chance
F 
G - unlikely in this segment
H 
I - very unlikely in this segment

A numerical value is then associated with each letter, and a POA arrived at based on a simple algorithm. This scheme has some obvious and not so obvious advantages:

1. Gone are the days of worrying about adding to 100. Now the expert can concentrate on what has happened to the subject, i.e., where the subject really is, instead of wondering, "Are these values going to add up right?" (a scenario we have observed time and again).
2. An expert does not have to try to put meaning to numbers. "What does a value of 20% in this segment really mean?" In recently conducted experiments, search experts using both a standard numerical Mattson Consensus and O'Connor's relative method generally seemed more comfortable with word labels ("very likely") than with numeric labels (20%).
3. A zero POA cannot be assigned to any segment, including the ROW. In our experiments, if an evaluator thought there was little or no chance a subject was in a particular segment, he or she seemed to feel that assigning "very unlikely" satisfied this opinion.
4. The same relative value can be assigned whether a segment is rated first or last, without worrying about how much of the total POA is left to allocate. Gone too are the days when, for a large number of segments (say 20, where the average POA would be 5%), search experts had to worry about the distinction between 5.5% and 4.5% when performing a Mattson Consensus.

How does one arrive at these POAs? The relative method used to reach a Mattson Consensus is based on 9 choices which have been refined to ensure uniformity across the entire scale. Each letter an expert uses (from Table 1) is assigned a numerical value according to the scheme in Table 2.

Table 2
A = 9, B = 8, ..., I = 1, if the lowest letter used by that expert is an I.
A = 8, B = 7, ..., H = 1, if the lowest letter used by that expert is an H.
A = 7, B = 6, ..., G = 1, if the lowest letter used by that expert is a G.
A = 6, B = 5, ..., F = 1, if the lowest letter used by that expert is an F.
A = 5, B = 4, ..., E = 1, if the lowest letter used by that expert is an E.
A = 4, B = 3, ..., D = 1, if the lowest letter used by that expert is a D.
A = 3, B = 2, C = 1, if the lowest letter used by that expert is a C.
A = 2, B = 1, if the lowest letter used by that expert is a B.
A = 1, if the lowest letter used by that expert is an A.

Next, the expert's total is obtained, and the ratio of the expert's numerically assigned value to the expert's total is that expert's POA for that segment.

A simplified example demonstrates how this rating system works. Imagine (in a search with 2 segments and the ROW) that an expert assigns values as follows:

Segment 1: G
Segment 2: A
ROW: G

The lowest letter selected was G, so we use the third line of Table 2. The expert's total will be 9 (1 + 7 + 1, or G + A + G). This individual expert's POA for Segment 1 is 1/9, for Segment 2 is 7/9, and for the ROW is 1/9. These can easily be converted to the more useful percentages of 11%, 78%, and 11% (rounded). This expert's values are then averaged with those of the other experts, and a Mattson Consensus emerges.
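The conversion can be sketched in a few lines of code. This is only an illustration of the scheme in Table 2 (the function name is ours, not CASIE's):

    # Convert one expert's letter ratings (A = most likely ... I = least likely)
    # into POAs using the Table 2 scheme: the lowest letter used maps to 1,
    # and each step up the scale adds 1.

    def relative_poas(ratings):
        scale = "ABCDEFGHI"
        lowest = max(ratings.values(), key=scale.index)     # e.g. "G"
        values = {seg: scale.index(lowest) - scale.index(letter) + 1
                  for seg, letter in ratings.items()}
        total = sum(values.values())
        return {seg: value / total for seg, value in values.items()}

    print(relative_poas({"Segment 1": "G", "Segment 2": "A", "ROW": "G"}))
    # -> Segment 1: 1/9, Segment 2: 7/9, ROW: 1/9 -- i.e. about 11%, 78%, 11%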

Table 2 has been carefully designed so that choices grouped at the top or bottom of the scale maintain the same relative value. An expert does not always have to use an A. For example, consider a second expert who assigns values as follows:

Segment 1: H
Segment 2: B
ROW: H

The lowest letter was H, so we use the second line of Table 2 to find a total of 9 (1 + 7 + 1), with exactly the same POAs (1/9, 7/9, and 1/9) as the previous expert. Note that the same relative scale has been maintained even though this second set of choices dropped by one level of likelihood.

O'Connor's relative method and the traditional numerical method of arriving at a Mattson Consensus have been tested together several times since Fall 1989. These experiments occurred at Cape Cod National Seashore, Massachusetts and Grand Canyon National Park, Arizona.

At Cape Cod, members of the National Park Service (NPS) staff reviewed common scenarios that simulated searches in various parts of the park. At Grand Canyon, NPS and Sheriff's Office search personnel, experienced in the Mattson method, were given old search scenarios that contained sketchy subject information typical of the first day of a search.

These individuals then gave their initial POAs using both the traditional numerical method and the new relative method. The initial POAs arrived at under either method were so close in value that search resources would have been deployed in virtually the same manner. While larger and smaller scales were tested, the scale of nine values in Table 1 proved to be excellent in mimicking the numerical choices of a traditional Mattson Consensus. Based on the positive feedback we've received, we may soon see the day when the "relative method" becomes the standard for arriving at a Mattson Consensus.

Some Thoughts on Search Segmentation

We all know that at the start of a search the search area is defined and should then be segmented. The resulting segments form the basis for developing a Mattson Consensus. What follows are four thoughts on this segmentation process:

1. Search segments should be realistic in size, so that a "typical" resource can search one in a single operational period. If a resource is unable to do this, then that search segment will have to be split, indicating that the segment should have been made smaller at the start of the search.
2. Realize that your search area is two-dimensional, not three-dimensional. We frequently don't give much thought to this. For example, imagine a subject is buried "in" your search area. Unless you have specifically designated the region under the earth as a search segment, the subject is actually in the ROW. Usually we don't plan to search under the surface, although there are some cases (avalanches and drownings) where we do.
3. Frequently we don't give much thought to caves and mine shafts in the initial segmentation of a search area. If you have many caves and/or mine shafts in your area and plan to search them at some stage, it might be appropriate to lump them together as one search segment, with one initial POA for all of them. Then, as each is searched, split it off from the original "lumped" segment and apply the appropriate POD. However, this requires careful application of search theory. (The Computer-Aided Search Information Exchange (CASIE) search software permits the splitting of search segments.)
4. It has become somewhat standard procedure to create segments out of trails. Such a segment should contain only the trail, not anything on either side of it or under it. A subsequent search of the trail will probably have a large POD, since the search is being conducted only on the trail. It is important that search credit taken for the trail not be extrapolated to include segments or areas adjacent to the trail. For example, we often see search segments that contain a mix of trails and terrain. Many times a search resource will only have time to examine the trail. In this case, the trail should be split out from its original segment and made into its own segment. A high POD for the trail does not imply a high POD for segments or areas near the trail; such areas need separate treatment, with their own POAs and PODs (a sketch of one way to handle such a split follows this list).
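How the split is carried out is a planner's judgment call; the article does not prescribe a formula, so the 30/70 division below is purely an invented illustration of one common-sense approach (dividing the original segment's POA between the trail and the remaining terrain before taking search credit):

    # Splitting a mixed trail-and-terrain segment before applying a trail POD.
    # The 30/70 split is a planner's judgment call, invented for illustration;
    # CASIE's own splitting procedure may differ.

    poas = {"Segment 5": 0.20, "Other segments": 0.75, "ROW": 0.05}
    trail_share = 0.30                                   # assumed share of Segment 5's POA

    segment_5 = poas.pop("Segment 5")                    # 0.20
    poas["Segment 5 trail"] = segment_5 * trail_share            # 0.06
    poas["Segment 5 terrain"] = segment_5 * (1 - trail_share)    # 0.14

    # A high POD on the trail now reduces only the trail's POA when the
    # update_poas() sketch from earlier in this section is applied; the
    # adjacent terrain keeps its own POA until it is actually searched.
    print(update_poas(poas, {"Segment 5 trail": 0.90}))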

