Validation is the process of comparing a model's output with a real system or, lacking one, with an expert's judgment. If the result is consistent with all known information and the expert's opinion, we consider it validated. A positive validation confirms that the model's output represents the most reasonable result, within the limits of uncertainty.
If a result is invalidated, we examine the model's assumptions, revisit the inputs to see whether they were estimated accurately, and/or adjust for any new constraints that were not previously expressed.
It's important to note that some degree of uncertainty surrounds every input, and some inputs -- such as the relative importance of a particular attribute -- can only be estimated by experts, and are likely to have relatively high uncertainty levels. For technologies that do not yet exist, virtually all inputs may have to be estimated amid considerable uncertainty.
If the experts involved in a study's validation process reaffirm the values for each attribute, the decision-maker may reconsider a conflicting opinion and bring it into accord with the study's results. Alternatively, the experts and decision-maker may revise some of the input values, leading to a different outcome.
Sensitivity and Uncertainty
We can calculate which attributes (such as mass, volume, cost, or an aspect of performance) were most influential in producing the study's outcome rather than some other possible outcome. In practice, this is most useful when a study's outcome differs from the outcome preferred or expected by a decision-maker.
If a small change in the value assigned to a particular attribute would produce a large difference in the result, that attribute is said to have high sensitivity. Conversely, low sensitivity indicates that even a big change in the value assigned to a given attribute would have little impact on the study's results.
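The idea above can be sketched numerically: perturb one attribute's value while holding the others fixed, and measure how much the overall score moves. The weighted-sum scoring function, attribute names, weights, and values below are all hypothetical illustrations, not taken from any particular study.

```python
# Sketch of finite-difference sensitivity for a weighted-sum trade-study
# score. All attribute names, weights, and values are hypothetical.

def score(values, weights):
    """Weighted-sum utility: higher is better."""
    return sum(weights[a] * values[a] for a in values)

def sensitivity(values, weights, attribute, delta=0.01):
    """Change in score per unit change in one attribute's value,
    estimated by a central finite difference."""
    up = dict(values, **{attribute: values[attribute] + delta})
    down = dict(values, **{attribute: values[attribute] - delta})
    return (score(up, weights) - score(down, weights)) / (2 * delta)

values = {"mass": 0.7, "cost": 0.4, "performance": 0.9}   # normalized scores
weights = {"mass": 0.2, "cost": 0.3, "performance": 0.5}  # relative importance

for attr in values:
    print(attr, round(sensitivity(values, weights, attr), 3))
```

For a purely linear score like this one, each attribute's sensitivity simply equals its weight; the finite-difference approach matters when the scoring function is nonlinear (e.g., utility curves or thresholds).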
Relative uncertainty in a result is deduced from the product of sensitivity and uncertainty in the data that led to the result. If an attribute's uncertainty is much higher than that of the other attributes, it may be worthwhile to try to reduce that level of uncertainty. If all attributes have about the same level of uncertainty, we focus on sensitivity.
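The product described above can be computed directly: each attribute's contribution to uncertainty in the result is its sensitivity times the uncertainty in its input value, and ranking those contributions shows where effort to reduce uncertainty pays off most. The numbers below are hypothetical.

```python
# Sketch of the sensitivity-times-uncertainty product described in the
# text. All sensitivities and input uncertainties are hypothetical.

sensitivities = {"mass": 0.2, "cost": 0.3, "performance": 0.5}
uncertainties = {"mass": 0.10, "cost": 0.05, "performance": 0.20}  # +/- on inputs

# Each attribute's contribution to uncertainty in the result.
contributions = {a: sensitivities[a] * uncertainties[a] for a in sensitivities}

# The attribute at the top of the ranking is the best target for
# uncertainty-reduction effort.
for attr, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {c:.3f}")
```

Note that an attribute with modest sensitivity but very uncertain inputs can still dominate the ranking, which is exactly the case the text flags as worth targeting first.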
We can use sensitivity information in two ways. First, with the dominant influences on the study's output brought to light, a decision-maker can decide whether these particular influences make sense. If, for example, the cost of testing has high sensitivity in a study of competing technologies, but the decision-maker doesn't think that the cost of testing should be much of a determining factor, that's a signal that we need to reconsider the factors that produced such a high sensitivity for that attribute.
Second, if our results don't agree with the decision-maker's judgment, sensitivity tells us which attributes to target for re-evaluation of the input values. Minor revisions to the values of a few highly sensitive attributes may bring the study's results into conformity with the expert's opinion. (See the Sensor for Hazard Detection and Avoidance case study for an example.)
The goal of this process, however, is not simply to make the study agree with an expert's preconceived ideas. It is to examine the underlying reasons for the difference in outcomes, and to determine whether any of the initial values should be changed on their own merits.
This procedure exposes the implications and ramifications of any given result, whether it is the study's initial output or the expert's preference. Result "A" means that all the values, preferences, and weightings that led to "A" are the best choices. Result "B" means that all the parameters that led to an output of "B" are the best choices. Going through this process leads a decision-maker to examine those values, preferences, and weightings, and to make sure that they are as accurate as they can be.
In doing so, we build a solid foundation for whatever result the study ultimately produces. If, after this re-examination process, the study confirms the decision-maker's original preference, it provides a comprehensive explanation for why that is the best prediction that can be made. On the other hand, if it leads to a change of mind, the decision-maker will know exactly why such a change was warranted.