
Mission to Schrödinger Crater


Virtually every NASA space-exploration mission represents a compromise between the interests of two expert, dedicated, but very different communities: the scientists, who want to go quickly to the places that interest them most and spend as much time there as possible conducting sophisticated experiments, and the engineers and designers charged with maximizing the probability that a given mission will be successful (astronauts kept safe and objectives achieved) and cost-effective. Recent work at JPL seeks to enhance communication between these two groups, and to help them reconcile their interests, by developing advanced modeling capabilities with which they can analyze the achievement of science goals and objectives against engineering design and operational constraints.

The analyses we conducted prior to this study were point-design driven. Each analysis examined a single hypothetical case and addressed the question: Given a set of constraints, how much science can be done? But the constraints imposed by the architecture team -- e.g., rover speed, time allowed for extravehicular activity (EVA), number of sites at which science experiments are to be conducted -- are all in early development and carry a great deal of uncertainty. Variations can be incorporated into the analysis, and indeed that has been done in sensitivity studies designed to see which constraint variations have the greatest impact on results.

But if a very large number of variations can be analyzed all at once, producing a table that includes the majority of the trade space under consideration, then we have a tool that enables scientists and mission architects to ask the inverse question: For a given desired level of science (or any other objective), what are the sets of constraints that would be acceptable? (Note that the solution is not unique.) With this tool, mission architects could determine, for example, what combinations of rover speed, EVA duration, and other constraints produce the desired results. Further, this tool would help them identify which technology-improvement investments would be likely to produce the largest or most important return.

However, the number of variations that must be considered for such an analysis quickly balloons to an unwieldy size. If three variations are considered for each of five constraints -- a very modest example -- there are a total of 243 (3 to the 5th power) cases to consider. If it takes 40 minutes to compute each case (as when our automated optimization system, HURON, computes the complicated mission analysis described below), a full analysis consisting solely of computer runs would take 162 hours, or nearly 7 days of round-the-clock computing. Each additional constraint or variation multiplies the number of cases, so the time required grows exponentially with the number of constraints.
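The arithmetic is simple enough to check directly. The short Python sketch below is purely illustrative; it uses only the figures quoted above, reproduces the 243-case count and the roughly seven days of computing, and shows how a sixth constraint would triple the workload.

    # Back-of-the-envelope sizing of the trade space described above.
    # The 40-minute run time and the 3-variations-by-5-constraints grid
    # come from the text; everything else is simple arithmetic.
    variations_per_constraint = 3
    num_constraints = 5
    minutes_per_run = 40  # approximate HURON run time quoted above

    cases = variations_per_constraint ** num_constraints   # 3^5 = 243
    total_hours = cases * minutes_per_run / 60              # 162 hours
    print(f"{cases} cases, {total_hours:.0f} hours (~{total_hours / 24:.1f} days)")

    # A sixth constraint with three variations triples the workload:
    print(f"6 constraints: {3 ** 6} cases, {3 ** 6 * minutes_per_run / 60:.0f} hours")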

In this study, we explore three methods -- radial basis functions (RBF), kriging, and regression -- for interpolating the bulk of the trade space based on actual computations of less than 20% of the space, which dramatically reduces the time needed to compute results over the full trade space. RBF is found to carry a higher error rate than the other two and to be the least suitable for our purposes. Choosing between kriging and regression, however, is more complicated. Depending on the intended use, one might choose kriging (as we did) for its lower average error rate or regression for its more consistent -- albeit somewhat higher -- error rate.
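To make the comparison concrete, the sketch below applies the same three classes of surrogate, a radial-basis-function interpolator, a Gaussian-process (kriging) regressor, and a quadratic response-surface regression, to a synthetic two-variable surface that stands in for HURON output. It is a minimal illustration only: the response function, sample sizes, and library choices (scipy and scikit-learn) are assumptions for the example, not the study's actual tooling.

    # Illustrative comparison of RBF, kriging, and regression surrogates on a
    # made-up response surface (a stand-in for HURON's science-return output).
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF as RBFKernel, ConstantKernel
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)

    def response(x):
        # Hypothetical smooth "science return" as a function of two constraints.
        return 100.0 * np.sin(x[:, 0]) * np.exp(-0.3 * x[:, 1]) + 5.0 * x[:, 1]

    # "Computed" cases: a sparse sample, under 20% of the evaluation grid.
    X_train = rng.uniform(0.0, 3.0, size=(40, 2))
    y_train = response(X_train)

    grid = np.stack(np.meshgrid(np.linspace(0, 3, 16),
                                np.linspace(0, 3, 16)), axis=-1).reshape(-1, 2)
    y_true = response(grid)

    # 1) Radial basis functions.
    rbf_pred = RBFInterpolator(X_train, y_train)(grid)

    # 2) Kriging, i.e. Gaussian-process regression with an RBF kernel.
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBFKernel(),
                                  normalize_y=True)
    krig_pred = gp.fit(X_train, y_train).predict(grid)

    # 3) Quadratic response-surface regression.
    reg = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    reg_pred = reg.fit(X_train, y_train).predict(grid)

    for name, pred in (("RBF", rbf_pred), ("kriging", krig_pred),
                       ("regression", reg_pred)):
        print(f"{name:>10}: mean abs error = {np.mean(np.abs(pred - y_true)):.2f}")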

Figure 1: Geometry of a hypothetical 90-day excursion from an outpost at Shackleton crater (lower right) to Schrödinger crater and back.

Baseline study

The subject of our study is a hypothetical mission to Schrödinger crater, which is thought to expose underlying stratigraphic material from the South Pole-Aitken Basin, the oldest and largest basin on the Moon. The allotted round-trip time from an assumed base camp at Shackleton crater to Schrödinger and back is 90 days.

Figure 1 lays out the geometry of the mission in the baseline study. Blue dots indicate primary (required) localities for science experiments, while orange dots indicate secondary (optional) localities where scientific experiments would enhance mission results. Each of these localities comprises 6 sites where scientific activities are to be performed. The mission is conducted with two 2-astronaut teams, each of which drives a pressurized rover that is periodically recharged by a separate, slower vehicle operated remotely from Earth. A target list of experiments and other activities is derived from the scientific objectives expressed by sources such as the National Research Council, the Global Exploration Strategy, and the Lunar Exploration Analysis Group. A set of constraints is provided by a mission-architecture team. Each experiment and activity is assigned a relative science value and a cost in terms of EVA time required.

A baseline solution of the Schrödinger excursion problem is computed using HURON. It is found that the 90 days allowed for the mission provide ample time to conduct the desired activities at all primary sites and many secondary sites. Given these conditions, the constraint on EVA time, during which all scientific activities are conducted, is found to be the primary driver of results. A significant amount of IVA time (i.e., intravehicular activity time, when the astronauts are inside the pressurized cabins of their rovers) is included in the mission profile, during which further science activities could potentially be conducted if that were permitted and enabled.
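The essence of the selection problem can be seen in miniature: each candidate activity carries a relative science value and an EVA-time cost, and the chosen activities must fit within the EVA-time budget. The sketch below uses a simple greedy value-per-EVA-hour heuristic on invented activities; it is not HURON's optimization, only an illustration of the value-versus-EVA-cost framing described above.

    # Toy illustration of selecting activities under an EVA-time budget.
    # The activities and numbers are hypothetical; HURON's actual optimization
    # is more sophisticated than this greedy heuristic.
    from dataclasses import dataclass

    @dataclass
    class Activity:
        name: str
        science_value: float  # relative value assigned by the science team
        eva_hours: float      # EVA time required to perform the activity

    def greedy_select(activities, eva_budget_hours):
        # Take activities in order of science value per EVA hour until the
        # budget is exhausted.
        chosen, used = [], 0.0
        ranked = sorted(activities,
                        key=lambda a: a.science_value / a.eva_hours, reverse=True)
        for act in ranked:
            if used + act.eva_hours <= eva_budget_hours:
                chosen.append(act)
                used += act.eva_hours
        return chosen, used

    candidates = [
        Activity("stratigraphy traverse", 9.0, 4.0),
        Activity("sample documentation", 6.0, 1.5),
        Activity("regolith coring", 5.0, 2.5),
        Activity("panoramic imaging", 3.0, 0.5),
    ]
    selected, hours = greedy_select(candidates, eva_budget_hours=6.0)
    print([a.name for a in selected], f"({hours:.1f} EVA hours used)")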

Response surface analysis

In the next phase of the study, the objective is to determine the ranges for a variety of architecture parameters that achieve equivalent levels of science return. With 3 variations for each of 5 constraints, the trade space is an irregular, multidimensional grid of 243 cases. Forty-two of these cases are run on HURON, and the remaining 83% are interpolated using each of the three methods described above.
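The sketch below shows how such a grid can be enumerated and split into computed and interpolated cases. The three levels per constraint are read from the columns of Table 1 below; the random choice of which 42 cases to compute is an assumption for the example, not the study's actual sampling scheme.

    # Enumerate the full 3^5 = 243-case trade space and set aside 42 cases for
    # actual HURON runs, leaving the rest to be estimated by interpolation.
    import itertools
    import random

    constraint_levels = {
        "eva_hours_per_day":   [3, 4, 5],
        "locality_duration_h": [3, 4, 5],
        "rover_speed_kmh":     [5, 10, 15],
        "localities_req_opt":  [(7, 13), (12, 27), (17, 35)],
        "egress_ingress_min":  [10, 20, 30],
    }

    full_grid = [dict(zip(constraint_levels, combo))
                 for combo in itertools.product(*constraint_levels.values())]
    assert len(full_grid) == 3 ** 5  # 243 cases

    random.seed(0)
    computed = random.sample(full_grid, 42)                  # run through HURON
    estimated = [case for case in full_grid if case not in computed]

    print(f"{len(computed)} computed cases, {len(estimated)} to interpolate")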

The analysis produces results for 5 parameters: productivity, cost, mission duration, kilometers traversed, and percentage of targeted experiments conducted (in which each experiment is weighted by the importance assigned by scientists participating in the study). We choose to sort the results by this percentage, but the sorting could just as easily be done by any or all of the other parameters. Results are validated and an error rate is computed for each interpolation method (Figure 2).

Figure 2: Comparison of the error rates of the 3 interpolation methods.

Kriging (the purple curve) is found to have the lowest average error rate and is used to compute a table in which the 243 cases are ranked by the percentage described above (Table 1). The rows near the top of the table thus represent the cases that would likely be selected by mission architects who wish to maximize the return from experiments conducted during this mission. Given this information, they would be able to further winnow the choices according to other constraints that are not included in this analysis -- e.g., public outreach considerations or participation by other countries.
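The ranking metric itself is straightforward. A minimal sketch, with invented weights and completion flags, of the weighted percentage of targeted experiments conducted is given below; interpolated ("Estimated") rows in Table 1 can exceed 100%, presumably because the fitted surface is not constrained to that range.

    # Weighted percentage of targeted experiments conducted, the quantity by
    # which Table 1 is ranked. Weights and completion flags here are invented.
    def weighted_percent_conducted(weights, conducted):
        # weights: importance assigned to each targeted experiment
        # conducted: matching booleans, True if the experiment was performed
        total = sum(weights)
        done = sum(w for w, c in zip(weights, conducted) if c)
        return 100.0 * done / total

    weights = [3.0, 2.0, 2.0, 1.0, 1.0]
    conducted = [True, True, False, True, False]
    print(f"{weighted_percent_conducted(weights, conducted):.1f}%")  # 66.7%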

Input Set ID | EVA Constraint (h/day) | Locality Duration (hours) | Rover Speed (km/h) | Localities (required/optional) | Egress/Ingress Time (min) | % Targeted Experiments Conducted | Source
235 | 5 | 5 | 15 | 17 / 35 | 10 | 104.2 | Estimated
154 | 4 | 5 | 15 | 17 / 35 | 10 | 103.6 | Estimated
226 | 5 | 5 | 10 | 17 / 35 | 10 | 103.2 | Estimated
73 | 3 | 5 | 15 | 17 / 35 | 10 | 103.0 | Estimated
145 | 4 | 5 | 10 | 17 / 35 | 10 | 102.6 | Estimated
...
215 | 5 | 4 | 15 | 12 / 27 | 20 | 77.1 | Computed
134 | 4 | 4 | 15 | 12 / 27 | 20 | 76.9 | Computed
147 | 4 | 5 | 10 | 17 / 35 | 30 | 76.6 | Estimated
53 | 3 | 4 | 15 | 12 / 27 | 20 | 76.2 | Estimated
66 | 3 | 5 | 10 | 17 / 35 | 30 | 76.0 | Estimated
206 | 5 | 4 | 10 | 12 / 27 | 20 | 75.9 | Computed
125 | 4 | 4 | 10 | 12 / 27 | 20 | 75.7 | Computed
182 | 5 | 3 | 15 | 17 / 35 | 20 | 75.1 | Estimated
187 | 5 | 3 | 15 | 12 / 27 | 10 | 75.1 | Estimated
44 | 3 | 4 | 10 | 12 / 27 | 20 | 75.0 | Estimated
...
45 | 3 | 4 | 10 | 12 / 27 | 30 | 56.7 | Estimated
31 | 3 | 4 | 5 | 7 / 13 | 10 | 56.5 | Estimated
239 | 5 | 5 | 15 | 7 / 13 | 20 | 56.0 | Estimated
198 | 5 | 4 | 5 | 12 / 27 | 30 | 55.8 | Estimated
158 | 4 | 5 | 15 | 7 / 13 | 20 | 55.4 | Estimated
230 | 5 | 5 | 10 | 7 / 13 | 20 | 55.3 | Estimated
117 | 4 | 4 | 5 | 12 / 27 | 30 | 55.0 | Estimated
77 | 3 | 5 | 15 | 7 / 13 | 20 | 54.7 | Estimated
149 | 4 | 5 | 10 | 7 / 13 | 20 | 54.7 | Estimated
221 | 5 | 5 | 5 | 7 / 13 | 20 | 54.6 | Estimated
...
168 | 5 | 3 | 5 | 7 / 13 | 30 | 22.7 | Estimated
96 | 4 | 3 | 10 | 7 / 13 | 30 | 22.4 | Estimated
87 | 4 | 3 | 5 | 7 / 13 | 30 | 21.9 | Estimated
15 | 3 | 3 | 10 | 7 / 13 | 30 | 21.8 | Computed
6 | 3 | 3 | 5 | 7 / 13 | 30 | 21.2 | Computed

Table 1. The product of the response surface analysis. Sections are shown from the top, middle, and bottom of the full 243-case table; ellipsis rows mark omitted cases. The five middle columns are the constraints discussed above. The "% Targeted Experiments Conducted" column gives the value by which the cases are ranked: the percentage of targeted experiments that are conducted, with each experiment weighted by the importance assigned by scientists participating in the study.

Conclusions

We have shown that we can survey a very broad range of combinations of architecture parameters through a combination of computer optimization runs and interpolation, and that we can estimate the quality of the results by a number of different methods. Increasing the number of computer runs decreases the interpolation error rate. Of the three interpolation methods employed in this study, kriging (which had the lowest average error rate) was found to be best for our purposes, but the more consistent results of the regression method (the yellow curve in Figure 2) may be preferable in certain circumstances.

For more information, contact Charles Weisbin at Charles.R.Weisbin@jpl.nasa.gov.


