Quantitative Assessment of Expected Space Mission Return
What is a reasonable, quantifiable method for assessing and comparing the value of potential NASA missions?
This study was a first attempt to quantitatively assess the value of the various missions competing for NASA funding, as a tool to inform funding decisions and to assist in future mission design.
From various NASA mission statements, we derived three top-level goal areas: science, exploration, and inspiration. (Note that the President's plans for manned missions to the Moon and Mars had not yet been announced at the time of this study.) It is extremely difficult to associate quantifiable metrics with these concepts. There is a large subjective component in determining which missions would provide the greatest scientific return, for example, let alone which ones would be the most inspirational. Nevertheless, we have begun an effort to develop quantifiable metrics for each of the goals.
Model structure and metrics
Our approach is based on establishing a hierarchical tree of metrics. We decomposed the top-level goals into categories, which we further decomposed into sets of quantifiable metrics, many of which we decomposed into lower-level sets of metrics, and so on.
For purposes of our study, we identified five categories: destination, science data, infrastructure, human exploration, and impact to Earth. To calculate return on investment (ROI), we also needed to assess cost, which can be considered a sixth category. Risk is also a useful metric, but beyond the scope of this study.
An illustrative sample of the first level of metrics under each category is shown in the above chart. (There are actually 15 metrics beneath Science Data, for example, though only 5 are shown here.) Once the entire tree was established, we assessed the metrics at the lowest level of each branch, and propagated the results upward.
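To make the tree concrete, the following is a minimal sketch of how such a hierarchy might be represented and how leaf assessments propagate upward. This is an illustration, not the study's actual implementation: the node names and all scores except "drilled" are hypothetical, and the weighting scheme described later in this section is omitted for clarity.

```python
# Minimal sketch of a hierarchical metric tree (illustrative only).
class MetricNode:
    def __init__(self, name, children=(), score=0.0):
        self.name = name
        self.children = list(children)  # empty for a leaf metric
        self.score = score              # normalized value, set for leaves

    def value(self):
        """Assess leaves directly; propagate upward by averaging children."""
        if not self.children:
            return self.score
        return sum(c.value() for c in self.children) / len(self.children)

# Hypothetical fragment: a category with two first-level metrics.
# Only the "drilled" score (3.32) comes from the text below; the rest
# are placeholders.
tree = MetricNode("Science Data", children=[
    MetricNode("Depth Explored", children=[
        MetricNode("drilled", score=3.32),
        MetricNode("trenched", score=0.8),
    ]),
    MetricNode("Area Covered", score=1.5),
])
print(round(tree.value(), 2))  # 1.78
```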
Normalizing the metrics
To compare metrics with disparate units, we normalized each metric by taking the ratio of targeted performance to state-of-the-art (SOA) performance. Five SOA missions were selected as baselines for comparison:
- MER (Mars Exploration Rover) for lander/rover missions
- Mars Odyssey for orbiter missions
- Apollo for human exploration missions
- ISS for space-station missions
- The sample-return part of Stardust for sample-return missions
For some metrics, we used a benchmark instead of an SOA mission metric, which allowed us to set a standard across all missions. For example, the benchmark for length of mission was one year.
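As a sketch of how the baseline for a given metric might be chosen, consider the following. The selection logic is our assumption: the mapping simply restates the list above, the metric key is hypothetical, and only the one-year mission-length benchmark comes from the text.

```python
# Baseline selection sketch (assumed logic, not the study's code).
SOA_BASELINES = {
    "lander/rover": "MER",
    "orbiter": "Mars Odyssey",
    "human exploration": "Apollo",
    "space station": "ISS",
    "sample return": "Stardust (return phase)",
}

# Fixed benchmarks override the SOA value for certain metrics.
BENCHMARKS = {"mission_length_years": 1.0}  # one-year benchmark from the text

def normalization_denominator(metric, mission_class, soa_values):
    """Return a fixed benchmark if one exists for this metric,
    otherwise the SOA baseline mission's value for this mission class."""
    if metric in BENCHMARKS:
        return BENCHMARKS[metric]
    return soa_values[SOA_BASELINES[mission_class]][metric]
```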
Calculating expected gain
At the lowest level of each branch, we compared the SOA and target (test) mission metrics via the following binary log ratio:

value = log2(target performance / SOA performance)
Note that the ratio yields a unitless number, enabling comparison across different varieties of metrics. The binary log indicates how many times SOA performance must double to reach the target performance, and helps to keep the scale more manageable and intuitive.
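In code, the ratio is straightforward; the drilling example worked later in this section serves as a check. This is a sketch, assuming only that the target and SOA values are positive and share the same units.

```python
import math

def log2_gain(target, soa):
    """Unitless gain: the number of times SOA performance must
    double to reach the target performance."""
    return math.log2(target / soa)

# Check against the drilling example below: MSR's 5 cm vs MER's 0.5 cm.
print(round(log2_gain(5.0, 0.5), 2))  # 3.32
```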
Some of the metrics readily lent themselves to analysis -- for example, questions of how far, how deep, and how long can be easily answered quantitatively. Other questions have murkier answers: Is a mission to a new destination worth more than a return to a previous destination? Is a mission with a human crew worth more (in terms, perhaps, of public interest) than a robotic mission? Is an in situ mission worth more than an orbiter? How does one measure the value of potential commercial spinoffs? Many of these kinds of questions were not answered in our study and will require further investigation.
We selected a set of 17 future missions to help define the metrics and provide a test set.
Each metric above the lowest level was weighted according to the number of metrics directly below it and the number of metrics at its own level.
For example, "drilled" (which is short for "drilling depth") is one of five metrics under "Depth Explored." MSR (Mars Sample Return) is expected to be capable of drilling to a depth of 5 cm, compared to 0.5 cm for the state-of-the-art MER. Log2 of that ratio (5 cm/0.5 cm) is 3.32, so the "drilled" metric is given the unitless value of 3.32. Through similar procedures, we derived unitless values for the other four metrics under Depth Explored. Averaging the values for all five metrics gave us an unweighted value for Depth Explored of 1.69.
That value of 1.69 was then multiplied by a weighting fraction. The numerator of a weighting fraction is the number of metrics directly below the metric in question. Depth Explored has 5 metrics beneath it, so its numerator is 5. The denominator is the number of metrics (in this case, categories) at the level of the metric in question. Depth Explored is one of 15 metrics under the category "Science Data," so the denominator is 15. The weighting figure is therefore 5/15, which reduces to 1/3. Multiplying the unweighted value of 1.69 by 1/3 yields a weighted value for Depth Explored of 0.56.
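The full calculation for Depth Explored can be reproduced as follows. Only the "drilled" value, the 1.69 average, and the 5/15 weighting fraction come from the text; the other four leaf values are placeholders chosen to match the stated average.

```python
import math

drilled = math.log2(5.0 / 0.5)  # MSR 5 cm vs MER 0.5 cm -> about 3.32

# Placeholder values for the other four Depth Explored metrics,
# chosen so the average reproduces the study's stated 1.69.
others = [1.0, 1.0, 1.0, 2.13]

unweighted = (drilled + sum(others)) / 5         # about 1.69
weighted = unweighted * (5 / 15)                 # weighting fraction = 1/3
print(round(unweighted, 2), round(weighted, 2))  # 1.69 0.56
```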
In a similar manner, we derived a weighted value for Science Data, and ultimately for the MSR mission as a whole. That mission value can be compared with the value similarly calculated for any other mission competing for NASA funding, giving decision-makers an objective basis for making or supporting their choices, and mission designers information that may help them improve their designs.
Note that a large number of metrics in the level beneath the metric in question increases the value of the weighting fraction, while a large number of metrics at the same level as the metric in question decreases the value of the weighting fraction. The rationale for this method is that when a metric is composed of many sub-metrics, there is a reduced risk of one or two erroneous sub-metrics skewing the results for that metric, and so we have greater confidence in the value we derive for it. Similarly, when a metric is one of a large number of metrics at the same level, it constitutes a correspondingly small portion of the total value for that level.
Not all decision-makers have the same sets of priorities, and categories can be omitted if the decision-maker is unconcerned about a particular type of return.
At this time, the best-developed metrics in our study are those derived from the Science goal and the robotic portion of the Exploration goal; too few metrics were available for Human Exploration and for Inspiration to include them in this study. Based on Science and Robotic Exploration, then, our preliminary results gave Titan, Europa, and MSR the highest scores of the 17 test missions.
Europa benefited from its need for a drilling capability of several kilometers. Titan gained from being an aerial (blimp) mission that would cover a large area, while also including sondes to perform in situ experiments. Both of these missions had the advantage of being the first rover missions to their worlds. MSR got points for being the only sample-return mission, but was devalued for being the 14th mission to Mars, and for going to a planet that draws missions every two years.
This analytical model is, of course, not yet fully developed. We have primarily examined metrics related to engineering accomplishments (drilling depth, surface mobility, etc.) and have not yet formulated approaches to assessing technology commercialization, public interest, and the like. Quantification of "benefit" would play a valuable role in all future investment decisions, so further work in developing this kind of analytical tool is clearly warranted.