
National Aeronautics and Space Administration

DRAFT

Modeling, Simulation, Information Technology & Processing Roadmap

Technology Area 11

Mike Shafto (Chair), Mike Conroy, Rich Doyle, Ed Glaessgen, Chris Kemp, Jacqueline LeMoigne, Lui Wang

November • 2010

Table of Contents

Foreword
Executive Summary ... TA11-1
1. General Overview ... TA11-2
   1.1. Technical Approach ... TA11-2
   1.2. Benefits ... TA11-2
   1.3. Applicability/Traceability to NASA Strategic Goals ... TA11-2
   1.4. Top Technical Challenges ... TA11-5
2. Detailed Portfolio Discussion ... TA11-7
   2.1. Summary description and Technical Area Breakdown Structure (TABS) ... TA11-7
   2.2. Description of each column in the TABS ... TA11-8
      2.2.1. Computing ... TA11-8
         2.2.1.1. Flight Computing ... TA11-8
         2.2.1.2. Ground Computing ... TA11-11
      2.2.2. Modeling ... TA11-11
         2.2.2.1. Software Modeling and Model-Checking ... TA11-11
         2.2.2.2. Integrated Hardware and Software Modeling ... TA11-12
         2.2.2.3. Human-System Performance Modeling ... TA11-12
         2.2.2.4. Science and Aerospace Engineering Modeling ... TA11-13
         2.2.2.5. Frameworks, Languages, Tools, and Standards ... TA11-16
      2.2.3. Simulation ... TA11-17
         2.2.3.1. Distributed Simulation ... TA11-17
         2.2.3.2. Integrated System Lifecycle Simulation ... TA11-17
         2.2.3.3. Simulation-Based Systems Engineering ... TA11-18
         2.2.3.4. Simulation-Based Training and Decision Support ... TA11-19
      2.2.4. Information Processing ... TA11-19
         2.2.4.1. Science, Engineering, and Mission Data Lifecycle ... TA11-19
         2.2.4.2. Intelligent Data Understanding ... TA11-19
         2.2.4.3. Semantic Technologies ... TA11-21
         2.2.4.4. Collaborative Science and Engineering ... TA11-21
         2.2.4.5. Advanced Mission Systems ... TA11-22
3. Interdependency with Other Technology Areas ... TA11-25
4. Possible Benefits to Other National Needs ... TA11-25
Acronyms ... TA11-26
Acknowledgements ... TA11-27
References ... TA11-27

Foreword

NASA's integrated technology roadmap, including both technology pull and technology push strategies, considers a wide range of pathways to advance the nation's current capabilities. The present state of this effort is documented in NASA's DRAFT Space Technology Roadmap, an integrated set of fourteen technology area roadmaps, recommending the overall technology investment strategy and prioritization of NASA's space technology activities. This document presents the DRAFT Technology Area 11 input: Modeling, Simulation, Information Technology & Processing. NASA developed this DRAFT Space Technology Roadmap for use by the National Research Council (NRC) as an initial point of departure. Through an open process of community engagement, the NRC will gather input, integrate it within the Space Technology Roadmap, and provide NASA with recommendations on potential future technology investments. Because it is difficult to predict the wide range of future advances possible in these areas, NASA plans updates to its integrated technology roadmap on a regular basis.

Executive Summary

Figure 1 is an initial draft strategic roadmap for the Modeling, Simulation, Information Technology and Processing Technology Area. Although highly notional, it gives some idea of how the developments in Computing, Modeling, Simulation, and Information Processing might evolve over the next 10-20 years. In spite of the breadth and diversity of the topic, a small number of recurring themes unifies the requirements of aerospace engineering, remote science, current space operations, and future human exploration missions. There is a surprising consensus regarding the research needed to realize the benefits of simulation-based science and engineering, and of emerging paradigms in information technology.

Figure 1: Modeling, Simulation, Information Technology and Processing Technology Area Strategic Roadmap (TASR).

For our purposes, Computing covers innovative approaches to flight and ground computing that have the potential to increase the robustness of future aerospace systems and the science return on long-duration exploration missions. Flight Computing requires ultra-reliable, radiation-hardened platforms which, until recently, have been costly and limited in performance. Ground Computing requires supercomputing or other high-performance platforms to support petabyte-scale data analysis and multi-scale physical simulations of systems and vehicles in unique space environments. In both multi-core flight computing and high-end ground computing, performance is constrained by complex dependencies among hardware, algorithms, and problem structure. We assume that fundamental advances in the Computing area will benefit modeling, simulation, and information processing.

Modeling, simulation, and decision making are closely coupled and have become core technologies in science and engineering. In the simplest sense, a model represents the characteristics of something, while a simulation represents its behavior. Through the combination of the two, we can make better decisions and communicate those decisions early enough in the design and development process that changes are easy and quick, as opposed to during production when they are extremely costly and practically impossible.

Modeling covers several approaches to the integration of heterogeneous models to meet challenges ranging from Earth-systems and Heliophysics modeling to autonomous aerospace vehicles. The main topics of this section are Software Modeling, Integrated Hardware and Software Modeling, and large-scale, high-fidelity Science and Aerospace Engineering Modeling, for example, system (hardware and software) modeling or modeling the performance of an autonomous Europa lander.

Simulation focuses on the architectural challenges of NASA's distributed, heterogeneous, and long-lived mission systems. Simulation represents behavior. It allows us to better understand how things work on their own and together. Simulation allows us to see and understand how systems might behave very early in their lifecycle, without the risk or expense of actually having to build the system and transport it to the relevant environment. It also allows us to examine existing large, complex, expensive, or dangerous systems without the need to have the physical system in our possession. This is especially attractive when the system in question is on orbit, or on another planet and impossible to see or share with others; or when the system does not yet exist, as in the case of advanced autonomous vehicles.

Information Processing includes the system-level technologies needed to enable large-scale remote science missions, automated and reconfigurable exploration systems, and long-duration human exploration missions. Given its scope, potential impact, and broad set of implementation issues, simulation provides many of the challenges faced in advanced information processing systems. There are software engineering challenges related to simulation reuse, composition, testing, and human interaction. There are systems questions related to innovative computing platforms. There are graphics and HCI questions related to the interface with users of varying disciplines and levels of expertise. In NASA's science and engineering domains, data-analysis, modeling, and simulation requirements quickly saturate the best available COTS or special-purpose platforms. Multi-resolution, multi-physics, adaptive, and hybrid systems modeling raise challenges even for theoretical computer science.

At the most practical level, separation of concerns becomes critical when systems and services are no longer locally provisioned and coupled to local implementations. For example, systems that decouple storage and data processing, as well as data storage and data management, are more likely to take early advantage of elastic technology paradigms like cloud computing: there is more to leverage in the lowest-common-denominator services that cloud provides, such as compute and store, without navigating other large, complex subsystems independent of pay-for-play cloud services.

1. General Overview

A mission team's ability to plan, act, react, and generally accomplish science and exploration objectives resides partly in the minds and skills of engineers, crew, and operators, and partly in the flight and ground software and computing platforms that realize the vision and intent of those engineers, crew, and operators.

Challenges to NASA missions increasingly concern operating in remote and imperfectly understood environments, and ensuring crew safety in the context of unprecedented human spaceflight mission concepts. Success turns on being able to predict the fine details of the remote environment, possibly including interactions among humans and robots. A project team needs to have thought through carefully what may go wrong, and to have a contingency plan ready to go, or a generalized response that will secure the crew, spacecraft, and mission.

Balanced against these engineering realities are always-advancing science and exploration objectives, as each mission returns exciting new results and discoveries, leading naturally to new science and exploration questions that in turn demand greater capabilities from future crews, spacecraft, and missions.

This healthy tension between engineering and system designs on the one hand, and science investigations and exploration objectives on the other, results in increasing demands on the functionality of mission software and the performance of the computers that host the on-board software and analyze the rapidly growing volume of science data. Mission software and computing must become more sophisticated to meet the needs of the missions, while extreme reliability and safety must be preserved. Mission software and computing are also inherently cross-cutting, in that capabilities developed for one mission are often relevant to other missions as well, particularly those within the same class.

1.1. Technical Approach

Computing covers innovative approaches to flight and ground computing with the potential to increase robustness and science-return on long-duration missions. Innovative computing architectures are required for integrated multi-scale data-analysis and modeling in support of both science and engineering. Modeling covers several approaches to the integration of heterogeneous models to meet challenges ranging from Earth-systems modeling to NextGen air traffic management. Simulation focuses on the design, planning, and operational challenges of NASA's distributed, long-lived mission systems. And finally, Information Processing includes the technologies needed to enable large-scale remote science missions, automated and reconfigurable air-traffic management systems, and long-duration human exploration missions.

In spite of the breadth and diversity of the topic, a small number of recurring themes unifies the requirements of aerospace engineering, remote science, current space operations, and future human exploration missions. Modeling, simulation, and decision making are closely coupled and have become core technologies in science and engineering (McCafferty, 2010; Merali, 2010; NRC, 2010). In the simplest sense, a model represents the characteristics of something, while a simulation represents its behavior. Through the combination of the two, we can make better decisions and communicate those decisions early enough in the design and development process that changes are easy and quick, as opposed to during production when they are extremely costly and practically impossible. We can also discover complex causal relations among natural processes occurring across vast spatiotemporal scales, involving previously unknown causal relations.

1.2. Benefits

Some of the major benefits of high-fidelity modeling and simulation, supported by high-performance computing and information processing, include creativity (creative problem solving); experimentation; what-if analysis; insight into constraints and causality; large-scale data analysis and integration, enabling new scientific discoveries; training and decision-support systems that amortize costs by using modeling and simulation directly in mission systems; simplification and ease of operation; testing and evaluation of complex systems early in the lifecycle; high-reliability real-time control; mission design optimization, trade studies, and prediction; analysis of complex systems; risk reduction; reduced mission design-cycle time; and lower lifecycle cost.

1.3. Applicability/Traceability to NASA Strategic Goals

Aerospace Engineering
NASA's expansive and long-term vision for aerospace engineering is heavily dependent on numerous developments and breakthroughs in MSITP, including high-fidelity multi-scale physical modeling, data access and analysis, and improved HEC

resources. These developments support goals such as developing conflict alert capability for terminal operations (2013); integration of high-fidelity hypersonic design tools (2015); improving overall vehicle safety (2016); developing technologies to enable fuel-burn reductions of 33% (2016), 50% (2020), and 70% (2026) for subsonic fixed-wing vehicles; and demonstrating a 71 dB cumulative noise level below Stage 4 for subsonic vehicles (2028).

Space and Earth Science
Future large-scale deep space missions following the Mars Science Laboratory (MSL; 2011) will require model-based systems engineering to control cost and schedule variations (MAX-C Rover, 2018; Outer Planet Flagship Mission, OPFM, 2020; Mars Sample Return, MSR, 2020+). These same model-based techniques may find additional uses throughout the lifecycle to achieve system reliability in challenging operational environments. Other concepts for future planetary missions, e.g., a Venus Lander, a Titan Aerobot, or a Europa Lander, can be expected to push further on needs for greater onboard autonomy and flight computing. Future missions like the James Webb Space Telescope (JWST; 2014) will require high-fidelity integrated multi-scale physical modeling to anticipate and mitigate design issues. NASA's increasingly integrative and productive Heliophysics and Earth Science missions [Aquarius (2010), NPP (2010), SDO (2010), LDCM (2011), SMAP (2012), NPOESS (2012), GPM (2013), HyspIRI (2013), MMS (2014), ICESat II (2015), DESDynI (2017), ASCENDS (2018)] will require major innovations in data management and high-performance computing, including onboard data analysis.

Space Operations
ISS operations are already incorporating advanced near-real-time data mining and semantically oriented technologies for operational data integration and intelligent search. Model-based approaches to operations automation have also been demonstrated and are operational today in the ISS MCC. These advanced technologies can be extended to support Commercial Cargo and Commercial Crew Demo Flight (2014+). They will reduce staffing requirements and costs, as well as enhancing software maintainability, extension, and reuse. Technical challenges remain, however, in scaling up agent-based architectures and ensuring their flexibility and reliability.

Human Exploration
Agent-based simulation-to-implementation concepts have been successfully demonstrated in studies and in ISS operations, showing how such technologies can be systematically implemented to support Human Research such as: Biomed Tech Demo (2012+); Performance Health Tech Demo (2014); and Medical Suite Demo (2018). Integrated software-hardware modeling will benefit Closed-Loop ECLSS (2014); EVA Demo (2018); and Nuclear Thermal Propulsion (2020). Simulation-based systems engineering is needed in planning and developing Flagship Technology Demonstrations, such as Advanced In-Space Propulsion Storage & Transfer (2015) and Advanced ECLSS on ISS (2017). Modeling and simulation, as well as high-performance computing, have been identified as key elements in planning and developing Exploration Precursor Robotic Missions (2014-2019).

1.4. Top Technical Challenges

Technical Challenges in Priority Order
1. Advanced Mission Systems (TABS 4.5): Adaptive Systems
2. Integrated System Lifecycle Simulation (TABS 3.2): Full Mission Simulation
3. Simulation-Based Systems Engineering (TABS 3.3): NASA Digital Twin
4. Software Modeling (TABS 2.1): Formal analysis and traceability of requirements and design
5. Integrated Hardware and Software Modeling (TABS 2.2): Advanced Integrated Model V&V
6. Science Modeling (TABS 2.4): Cross-scale and inter-regional coupling
7. Flight Computing (TABS 1.1): System Software for Multi-Core Computing
8. Integrated Hardware and Software Modeling (TABS 2.2): Complexity Analysis Tools
9. Flight and Ground Computing (TABS 1.1 and 1.2): Eliminate the Multi-core "Programmability Gap"
10. Software Modeling (TABS 2.1): Software Verification Algorithms

Flight and Ground Computing (TABS 1.1 and 1.2): Eliminate the Multi-core "Programmability Gap": For all current programming models, problem analysis and programming choices have a direct impact on the performance of the resulting code. This leads to an incremental, costly approach to developing parallel distributed-memory programs. What has emerged today is a "Programmability Gap": the mismatch between traditional sequential software development and today's multi-core and accelerated computing environments. New parallel languages break with the conventional programming paradigm by offering high-level abstractions for control and data distribution, thus providing direct support for the

specification of efficient parallel algorithms for multi-level system hierarchies based on multicore architectures. There is a need to experiment with new languages such as Chapel, X-10, and Haskell and to determine their utility for a broad range of NASA applications. Virtual machine (VM) approaches and architecture-aware compiler technology should also be developed.

Technical Challenges in Chronological Order
2010-2016:
- Flight Computing (TABS 1.1): System Software for Multi-Core Computing
- Software Modeling (TABS 2.1): Software Verification Algorithms
- Integrated Hardware and Software Modeling (TABS 2.2): Complexity Analysis Tools
- Flight and Ground Computing (TABS 1.1 and 1.2): Eliminate the Multi-core "Programmability Gap"
- Integrated System Lifecycle Simulation (TABS 3.2): Full Mission Simulation
2017-2022:
- Science Modeling (TABS 2.4): Cross-scale and inter-regional coupling
- Integrated Hardware and Software Modeling (TABS 2.2): Advanced Integrated Model V&V
- Software Modeling (TABS 2.1): Formal analysis and traceability of requirements and design
2023-2028:
- Simulation-Based Systems Engineering (TABS 3.3): NASA Digital Twin
- Advanced Mission Systems (TABS 4.5): Adaptive Systems

Flight Computing (TABS 1.1): System Software for Multi-Core Computing: The first multi-core computing chips for space applications are emerging, and the time is right to develop architecturally sound concepts for multi-core system software that can use the unique attributes of multi-core computing: software-based approaches to fault tolerance that complement radiation-hardening strategies, energy-management approaches that account for the energy costs of data movement, and programmability and compiler support for exploiting parallelism.

Software Modeling (TABS 2.1): Formal analysis and traceability of requirements and design: Formal analysis and related tools have been effective in reducing flight software defects for missions like MSL. Current tools, however, focus on the coding and testing phases. No comparable tools have been shown to be effective in the requirements and design phases, where the most costly defects are introduced. Manual methods in these phases, based on unstructured natural language, are almost always vague, incomplete, untraceable, and even self-contradictory. These need to be replaced with methods based on formal semantics, so that checking and traceability can be partially automated.

Software Modeling (TABS 2.1): Software Verification Algorithms: Software verification and validation are critical for flight and ground system reliability and encompass 70% of overall software costs. For human space missions, software costs are roughly equal to hardware design and development costs, and software costs and risks are expected to grow as an increasing number of system functions are implemented in software. To support future human space missions, we need to develop automated methods that increase the assurance of these software systems while greatly decreasing costs. Advanced model-based methods may be able to address these issues, but this will require significant scaling up of current methods to address verification of automated operations software, verification of system health management software, verification for adaptive avionics, and automated test-generation for system software validation.

Integrated Hardware and Software Modeling (TABS 2.2): Complexity Analysis Tools: Complexity analysis tools should be developed for use in concept development and requirements definition. This would allow system engineering and analyses to be done early enough to be incorporated into Pre-Phase-A design centers, into mission-costing models, and into various trade-space evaluation processes. Ideally, these tools would link with design, implementation, and test tools. Finally, process templates should be developed that build on this new class of tools.

Integrated Hardware and Software Modeling (TABS 2.2): Advanced Integrated Model V&V: Massively parallel models of key spacecraft modules are needed for integration into Sierra or similar high-fidelity frameworks. These include models of optics, space environments, and control systems. New methods and tools are needed for integrated model V&V, including rigorous methods for selecting the most cost-effective test sequences. Component and subsystem test methods for spacecraft-unique issues are needed, along with advanced workflow and rapid model-prototyping technologies. Orders-of-magnitude improvements in accuracy will be required in order to gain project acceptance and meet the goal of application to at least one Flagship NASA mission in parallel with conventional practices.

Science Modeling (TABS 2.4): Cross-scale and inter-regional coupling: Cradle-to-grave modeling of a solar eruption, from its initiation to its impact on terrestrial, lunar, or spacecraft environments, requires the resolution of each sub-domain and the coupling of sub-domains across

their boundaries. Calculations of this kind entail not only the requirements for modeling of each domain, but the added complications of information transfer and execution synchronization. The often vastly different time and spatial scales render the combined problem intractable with current technology. A minimum computational requirement for such a forefront calculation involves the sum of all coupled domain calculations (optimistically assuming the calculations can be pipelined). Similar challenges exist in Earth-systems modeling, for example, climate-weather modeling, and for multi-scale, multi-physics engineering models.

Integrated System Lifecycle Simulation (TABS 3.2): Full-Mission Simulation: Perform complete mission lifecycle analysis, planning, and operations utilizing the best available system, subsystem, and supporting models and simulations, both hardware and software. The mission life-cycle includes: design and development of mission elements, any supporting infrastructure, any supporting integration work, any additional opportunities, launch operations, flight operations, mission operations, anomaly resolution, asset management, return operations, recovery operations, and system/data disposition.

Simulation-Based Systems Engineering (TABS 3.3): NASA Digital Twin: A digital twin is an integrated multi-physics, multi-scale, probabilistic simulation of a vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its flying twin. The digital twin is ultra-realistic and may consider one or more important and interdependent vehicle systems, including propulsion/energy storage, avionics, life support, vehicle structure, thermal management/TPS, etc. In addition to the backbone of high-fidelity physical models, the digital twin integrates sensor data from the vehicle's on-board integrated vehicle health management (IVHM) system, maintenance history, and all available historical/fleet data obtained using data mining and text mining. The systems on board the digital twin are also capable of mitigating damage or degradation by recommending changes in mission profile to increase both the life span and the probability of mission success.

Advanced Mission Systems (TABS 4.5): Adaptive Systems: NASA missions are inherently ventures into the unknown. The agency appears poised on the cusp of a trend where science and exploration missions must first undergo a characterization phase in the remote environment before safe and effective operations can begin. As an example, any NEO mission concept likely requires an initial gravitational characterization phase from a parking orbit to determine the gradients of the body's gravitational field before landing or touch-and-go operations are attempted. Will it be possible to send spacecraft and human-robotic teams into these environments to accomplish characterization prior to actual operations? What would be the role of onboard capabilities for intelligent systems, data analysis, modeling, and possibly machine learning? An adaptive-systems approach could revolutionize how missions are conceived and carried out, and would extend NASA's reach for scientific investigation and exploration much farther out into the solar system.

2. Detailed Portfolio Discussion

2.1. Summary description and Technical Area Breakdown Structure (TABS)

The topic area of Modeling, Simulation, Information Technology and Processing is inherently broad (see Figure 2). Software and computing in one form or another touch all of the other roadmap topic areas. While those roadmap efforts are addressing needs from specific domain perspectives, this TA focuses on needed advances in underlying foundational capabilities for software and computing, and how they are to support increasing demands for modeling and simulation, as well as needs for science and engineering information processing.

Figure 2. Modeling, Simulation, Information Technology and Processing Technology Area Breakdown Structure (TABS).

In the area of Computing, there are needs for flight computing architectures that scale, separate data processing from control functions, and include intelligent fault-tolerance technology. Advances in ground computing are needed to meet emerging requirements for modeling, simulation, and analysis. There is also a need to close the performance and architectural gaps between ground and flight computing, in order to increase capability on the flight side and enable the migration of functions from ground to flight systems.

Research in Modeling will address the need for coherent frameworks that have the potential to reduce ambiguity, uncertainty, and error. Efficiency of model development is also an important theme. Coherence and efficiency are mutually reinforcing, focusing attention on composability, transparency, understandability, modularity, and interoperability of models and simulations.

Effective and accurate Simulation in turn depends on integrable families of models that capture different aspects of phenomena and different scales of resolution. The variability of parameters in simulation exercises is currently represented by probability distributions, but uncertainty due to lack of knowledge is difficult to quantify. Uncertainty and risk must be addressed if simulation is to be used intelligently in decision-support, and these issues are equally important in science applications.

The requirements for high-fidelity simulation open the door to broader challenges in Information Processing. There are software engineering challenges related to simulation reuse, composition, testing, and human interaction. There are systems questions related to innovative computing platforms. There are graphics and HCI questions related to the interface with users of varying disciplines and levels of expertise. In NASA's science and engineering domains, data-analysis, modeling, and simulation requirements quickly saturate the best available supercomputing platforms. Multi-resolution, multi-physics, adaptive, and hybrid systems modeling raise challenges even for theoretical computer science.

2.2. Description of each column in the TABS

2.2.1. Computing

The computing roadmap is summarized in Table 1.

Table 1. Computing Roadmap
Milestone | Relevance | Timeframe
Low-power, high-performance flight computing capability (e.g., multi-core) available | Flight computing capability of 10 GOPS @ 10 W for computationally intensive needs (high-data-rate instruments) | 2013
Develop and demonstrate multi-core fault mitigation methods and integrate with avionics FDIR methods | Combination of radiation hardening, RHBD, and software-based approaches to fault tolerance to achieve availability similar to existing flight processors | 2014
Demonstrate capability to move cores on and off line and continue computation without loss of data | Programmers freed from managing details of multi-core management and resource allocation | 2015
Demonstrate a policy-based approach for energy management at runtime, complemented by energy-aware compilers | Low-energy flight computation handled by compilers and runtime environment, with options for programmer optimization | 2016
Demonstration of flight use of multi-core processing | Onboard data product generation for high-data-rate instruments and/or real-time vision-based processing for, e.g., terrain-relative navigation | 2020
Multi-core flight processing enables new types of missions | Onboard model-based reasoning (e.g., mission planning, fault management) for surface, small-body, or similar missions requiring a high degree of adaptability in an uncertain operational environment | 2025
Highly capable flight computing enables reconfigurable spacecraft and versatile missions | Onboard computation directly supports missions requiring a characterization phase at the target (e.g., small-body proximity ops, atmospheric mobility) to refine environmental models, operations concepts, and spacecraft configuration and mission design as needed | 2030

2.2.1.1. Flight Computing

Pinpoint landing, hazard avoidance, rendezvous-and-capture, and surface mobility are directly tied to the availability of high-performance space-based computing. In addition, multi-core architectures have significant potential to implement scalable computing, thereby lowering spacecraft vehicle mass and power by reducing the number of dedicated systems needed to implement onboard functions. These requirements are equally important to space science and human exploration missions.

Intelligent adaptive capabilities will often be tightly integrated with spacecraft command and control functions. Onboard computers must therefore demonstrate high reliability
and availability. In systems deployed beyond low-Earth orbit, robust operation must be maintained despite radiation and thermal environments harsher than those encountered terrestrially or in low Earth orbit.

COTS platforms typically out-perform radiation-hardened space platforms by an order of magnitude or more. Systems based on commercial processors that mitigate environment-induced faults with temporal and spatial redundancy (e.g., triplication and voting) could be viable for space-based use. The commercial sector is basing current and future high-performance systems on multi-core platforms because of their power and system-level efficiencies. The same benefits are applicable to spaceflight systems. In addition, multi-core enables numerous system architecture options not feasible with single-core processors.

Status: There are two promising processor development activities currently undergoing evaluation for their suitability for use in NASA's programs: the HyperX and the Maestro.

The HyperX (Hx) development involves collaboration between NASA's Exploration Technology Development Program (ETDP) and NASA's Innovative Partnership Program. Test results formed the basis for single-event radiation mitigation strategies that were developed in 2009. To validate the radiation mitigation techniques, four HyperX processor boards are being flown as part of the Materials International Space Station Experiment-7 (MISSE-7). As of May 14, 2010, the boards had executed over 140,000 iterations with no faults.

The second processor development activity of interest to NASA is the assessment of the Maestro Processor. This effort began as an initiative to develop a processor that performs with a high level of efficiency across all categories of application processing, ranging from bit-level stream processing to symbolic processing, and encompassing processor capabilities ranging from special-purpose digital signal processors to general-purpose processors. Maestro, a radiation-hardened version of the commercial Tile-64 processor, was developed in a collaboration among several government agencies. The throughput of this device is consistent with NASA needs and metrics. Its power dissipation, however, compromises its desirability for use in numerous spaceflight systems. Accordingly, one of NASA's objectives for future space-based computing should be to leverage investments in Maestro or other processor platforms to deliver a system with throughput comparable to Maestro, but at lower power dissipation.

Future Directions: Whereas radiation-tolerant COTS-based boards that offer increased performance are available (~1000 MIPS), the performance comes at the expense of reduced power efficiency. NASA should seek to concurrently advance the state of the art in two metrics of high-performance radiation-hardened processors: sustained throughput and processing efficiency. Goals are throughput greater than 2000 MIPS with efficiency better than 500 MIPS/W.

The need for power-efficient, high-performance, radiation-tolerant processors and the peripheral electronics required to implement functional systems is not unique to NASA; this capability could also benefit commercial aerospace entities and other governmental agencies that require high-capability spaceflight systems. NASA should therefore leverage, to the extent practical, relevant external technology- and processor-development projects sponsored by other organizations and agencies. Although the primary discriminating factors (sustained processor performance, power efficiency, and radiation tolerance) are among the key metrics, an equally important factor is the availability of tools to support the software development flow. The NASA and larger aerospace community will not accept a very capable flight computing capability if the associated tool set and other support for software development and reliability are considered inadequate.

Fault Tolerance and Fault Management: There is a need to develop a hierarchical suite of fault mitigation methods that takes advantage of the differences and strengths of multi-core processor architectures and lays the groundwork to be integrated with mainstream spacecraft Fault Detection, Isolation and Response (FDIR) methods. With large numbers of cores, one imagines it should be possible to reduce power to the system when the number of cores needed is lower, for a given energy cost.

Programmability: Another important objective for multi-core space computing is to reduce software development time while managing complexity and enhancing testability of the overall flight code base. Programs on spacecraft are enormously varied. Spacecraft run fixed- and variable-rate estimation and control loops with both soft and hard real-time requirements, perform power switching, collect large amounts of data in files, send telemetry to the ground, process commands from the ground, conduct science data processing, monitor board and peripheral hardware, and perform a host of other functions. There is every reason to expect that this demand will only increase.

The cost of programming multi-core processors can be high; the prevailing method is to hand-code algorithms tailored to the problem and the processor. (See also Section 2.2.1.2.) NASA cannot afford this approach, and it is also inconsistent if fault tolerance is a goal, since the core alloca-
Con- tions for handcrafted algorithms carefully laid out versely, when the need arises, it should be possible by the programmer would likely be compromised to increase the number of powered cores to absorb when a core is taken off-line. Energy-management increased load. The actual power savings realizable considerations will also disrupt pre-planned allo- is a design consideration; for example, degree of cations of cores. For both fault tolerant behav- emphasis on optimizing power dissipation. ior and energy management efficiency, the system The ability to manage cores for energy consid- should be able to re-organize and adapt to re- erations is related to the ability to re-assign com- duced core availability. Some choices can be made putations to support fault tolerance. More specif- at compile time, transparently to the programmer, ically, fault tolerance considerations may identify but a robust flight computing system must sup- the need to take certain faulted cores off-line. En- port run-time re-allocation of cores. The issues for ergy management considerations may identify the fault tolerance and energy management are simi- need to reduce the number of active (powered) lar, and we should expect the mechanisms devel- cores. When the goal is to manage overall pow- oped for the one to be applicable to the other. er load, there would be no particular constraint Energy Management: The systems and soft- on the specific cores to be powered down. On the ware engineer should be provided with a set of other hand, when the goal is to manage energy models and tools to minimize energy usage in a expenditure efficiently for a given computation, multi-core flight processor environment through- choices about data movement may be the driv- out the lifecycle. Multi-core energy management er, indicating a need to coordinate among specific spans software code architectural definition, de- cores based on their locality. 
Thus both fault toler- velopment, and execution, i.e., the run-time en- ance and energy management considerations may vironment. Early work in multi-core architecture introduce objectives for allocating / de-allocat- analysis has shown that as chip density increas- ing certain cores; sometimes these objectives will es and the number of cores scales up, the ener- be aligned, and sometimes they may be compet- gy penalty price paid for moving data to and from ing. What is ultimately needed is an overall core the core to memory and I/O begins to dominate. management capability and set of strategies that This analysis implies that the importance of data are responsive to both fault tolerance and ener- locality becomes much more critical. This anal- gy management drivers. Computing systems do ysis further suggests that operating on data in a not currently have all of these controls, but multi- core’s local register set or local L1/L2 cache will core computing will need them. Computational achieve a higher level of energy/performance effi- throughput is a capability that can be modulated ciency than from data residing in main memory. for the contingencies of fault tolerance, and for a From a software-coding viewpoint, a low energy

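The coupled fault-tolerance and energy-management core allocation discussed above can be sketched in a few lines. The following is a minimal illustration, not flight software: the CoreManager class, its allocation policy, and the task names are all hypothetical inventions for this discussion.

```python
# Illustrative sketch: one allocator serving both the fault-tolerance and
# energy-management drivers described in the text. All names are hypothetical.

class CoreManager:
    def __init__(self, num_cores, watts_per_core):
        self.healthy = set(range(num_cores))   # cores believed fault-free
        self.watts_per_core = watts_per_core
        self.power_budget = num_cores * watts_per_core

    def mark_faulted(self, core):
        # Fault-tolerance driver: take a faulted core off-line.
        self.healthy.discard(core)

    def set_power_budget(self, watts):
        # Energy-management driver: cap total power for powered cores.
        self.power_budget = watts

    def allocate(self, tasks):
        # Power only as many healthy cores as the budget allows, then
        # spread tasks round-robin across the powered cores.
        max_powered = int(self.power_budget // self.watts_per_core)
        powered = sorted(self.healthy)[:max_powered]
        if not powered:
            raise RuntimeError("no powered, healthy cores available")
        return {t: powered[i % len(powered)] for i, t in enumerate(tasks)}

mgr = CoreManager(num_cores=8, watts_per_core=2.0)
mgr.mark_faulted(0)        # FDIR reports core 0 failed
mgr.set_power_budget(8.0)  # only four cores' worth of power available
plan = mgr.allocate(["nav", "telemetry", "science", "fault_monitor", "thermal"])
```

The point of the sketch is that a single allocation routine responds to both drivers: the same `allocate()` call re-plans whether the trigger was a fault report or a reduced power budget, which is the kind of unified core-management capability the text calls for.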
2.2.1.2. Ground Computing
Supercomputing: Parallelization of numerical solvers on multi-processor architectures has been an active research area for decades. The over-arching issues driving current research concern the complex interdependencies among hardware, software, and problem structure. For all current programming models, the problem analysis and programming choices have a direct impact on the performance of the resulting code. Current approaches attempt to balance distributed- and shared-memory models, parallel performance benefits, and ease of use. Languages such as UPC, Co-Array Fortran, and Titanium require programmers to specify the distribution of data, while allowing them to use a global name space to specify data accesses. These languages provide an incremental approach to developing parallel distributed-memory programs.

Future Directions: Partitioned Global Address Space (PGAS) models, such as UPC, bridge the gap between the shared-memory programming approach and the distributed-memory approach. The UPC model offers performance improvement through an iterative refinement process. However, in order to gain performance, the programmer must pay attention to where data is located and must minimize remote data access. This is nontrivial for most codes. (See also Section 2.2.1.1.) While the UPC language provides constructs to achieve this goal, it would be preferable for the compiler to apply aggressive communications-reducing optimizations automatically. However, this seems to be beyond the capabilities of current compilers (Jin et al., 2009).

Quantum computing: There are now around a dozen alternative approaches to implementing quantum computers, and major corporations have research groups working on quantum computing. In addition to progress on quantum algorithms there have been parallel improvements in quantum computing hardware. Most importantly, it is now known that quantum error correction is possible in principle, so it is not necessary to achieve perfection in quantum hardware. Secondly, there are many different models of quantum computation, which are all equivalent to one another but may be easier or harder to implement in different types of physical hardware. Advances have been made on specialized quantum computing devices, such as the quantum annealing machine developed with the assistance of JPL. This machine has control of around 72 functional qubits, and has been used to solve combinatorial optimization problems arising in computer vision and machine learning.

At this stage quantum computing is still a research project with unclear eventual payoff for NASA. Nevertheless, it represents the only potentially radical advance in computer science in over 100 years. Quantum algorithms are predicted to solve certain computational problems exponentially faster than is possible classically, and can solve many more problems polynomially faster. Rudimentary quantum computers now exist and have been used to solve actual problems. More sophisticated special-purpose quantum computing hardware is on the verge of surpassing the capabilities of mid-range classical computers on certain optimization problems.

2.2.2. Modeling
2.2.2.1. Software Modeling and Model-Checking
Status: Flight software defect prevention, detection, and removal are currently accomplished by a combination of methods applied at each phase of software development: requirements, design, coding, and testing. Most test methods focus on the coding and testing phases, but the most serious defects are inserted in the requirements and design phases. (This is true of systems development in general, not just software.) Automated and semi-automated (tool-supported) processes in all phases are at various TRLs. These include requirements capture and analysis, design verification techniques, logic model checking, static source code analysis, coding standards, code review processes, model-driven verification, runtime monitoring, fault containment strategies, and runtime verification methods.

A good example of the state of the art in tool-based methods is the scrub system developed and used by Gerard Holzmann's group at JPL. scrub uses a broad spectrum of techniques, including automatic tracking of all peer- and tool-reported issues, as well as integration of peer-review comments and tool-generated automated analyses.

Future Directions: Although current methods have demonstrated substantial and quantifiable benefits on real flight software, there is widespread agreement among experts that many serious challenges remain.
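The combinatorial optimization problems mentioned above for quantum annealing hardware are commonly cast as QUBO (quadratic unconstrained binary optimization) instances. The sketch below is a classical, brute-force baseline on a made-up 3-variable instance; it is meant only to show the problem form an annealer would accept, and the Q matrix here is invented.

```python
# Illustrative: the kind of combinatorial problem a quantum annealer targets,
# phrased as a QUBO and solved here by classical exhaustive search.
from itertools import product

# Minimize x^T Q x over binary vectors x; Q encodes objective and couplings.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # linear terms (diagonal)
     (0, 1): 2.0, (1, 2): 2.0}                   # pairwise couplings

def qubo_energy(x, Q):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

def brute_force_minimum(Q, n):
    # Exhaustive search is exponential in n; an annealer (quantum or
    # simulated) is of interest precisely because this does not scale.
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

best = brute_force_minimum(Q, 3)
```

On this toy instance the positive couplings penalize setting adjacent variables together, so the minimum-energy assignment keeps variables 0 and 2 on and variable 1 off; annealing hardware searches the same energy landscape physically rather than by enumeration.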
The following list is based in part on Tallant et al. (2004), Welch et al. (2007), Parnas (2010), and Holzmann (personal communication, 2010).

Near-term: Auto-coding methods; embedded run-time auto-testing; rapid prototyping tools and frameworks; relational algebra and game-theoretic semantics; mathematical documentation of code; automated verification management; simulation-based design.

Mid-term: Formal requirements specifications, using predicates on observable behavior; formal methods for traceability analysis; linear logic and other theorem-proving technologies; formalization of arrays, pointers, hidden state, side effects, abstract data types, non-terminating programs, and non-determinism; in general, treating features of real-world software as "normal" rather than anomalous; modeling time as an ordinary continuous variable, consistent with standard science and engineering usage.

Far-term: V&V of run-time design, including real-time performance; analytical test-planning and test-reduction; probabilistic testing; model-based mission-system development.

2.2.2.2. Integrated Hardware and Software Modeling
Status: Recent experiences with deep-space systems have highlighted cost-growth issues during integration and test (I&T) and operations. These include changes to the design late in the life-cycle, often resulting in a ripple effect of additional changes in other areas; unexpected results during testing due to unplanned interaction of fault responses; and operational limitations placed on the spacecraft based on how the system was tested (in order to "fly-as-you-test"). These issues cause cost and schedule growth during system development.

Future Directions
Model-based Fault-Management Architectures: In model-based architectures, engineers specify declarative models of operational constraints and system components (including component connectivity, commandability, and behavior). The component models are collectively reasoned over as a system model by diagnosis-and-recovery engines. The diagnosis engine detects and diagnoses failures based on the system model and observations. The detection and diagnosis process is sound and can be complete with respect to the system model—there is no need to develop one-off detectors, i.e., "monitors." The recovery engine operates in a similar fashion; it determines an appropriate recovery action (or sequence of actions) based on operational constraints, the system model, the estimated system state, and the intended system state (which is inferred from observed commands). The recovery engine is sound and can be complete with respect to the system model and operating constraints—there is no need to hand-craft individual "responses."

Planning & Execution Architectures: Model-based planning & execution architectures provide failure diagnosis and recovery capabilities exceeding those of model-based fault-management architectures. In particular, their execution models unify command and control authority such that (i) the distinction between "nominal" and "off-nominal" control is eliminated, (ii) conflicts (e.g., resource, goal) are detected prior to execution, and (iii) short- (i.e., reactive) and long-term (i.e., deliberative) planning of spacecraft operations, resources, and behavior is enabled.

2.2.2.3. Human-System Performance Modeling
Challenges: In NASA's planned human exploration missions, for the first time in the history of spaceflight, speed-of-light limitations will impose operationally significant communication delays between flight crews and mission control. These delays will require revolutionary changes in today's ground-centered approach to off-nominal situation management. Crewmembers will have to identify, diagnose, and recover from the most time-critical anomalies during dynamic flight phases without any real-time ground assistance. The list of needed technologies includes onboard decision support tools, innovative modes of human-automation interaction, and distributed, asynchronous command and control technologies. Since the quality of these decision-support technologies is critically dependent on systems engineering decisions such as sensor placement, sensor coverage, and the overall architecture of vehicle and habitat command and data handling systems, these technologies cannot be add-ons; they must be part of a model-based systems engineering design and development process from the outset.

Future Directions: It is necessary to develop and integrate high-fidelity models of candidate vehicle and habitat systems and crew-vehicle interfaces into a virtual real-time mission operations simulation environment, complete with instrumentation to measure crew-system interactions and crew performance (cf. Sections 2.2.2.2, 2.2.3.3, 2.2.3.4).
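A toy version of the diagnosis engine described above can illustrate what it means to reason over declarative component models rather than hand-coded monitors. The two-component heater/sensor "system" and the model format are hypothetical, and real engines are far more sophisticated; the sketch only shows how candidate fault sets fall out of consistency checking against component models.

```python
# Illustrative consistency-based diagnosis over declarative component models.
# Each component model predicts its output when the component is healthy.
from itertools import combinations

MODELS = {
    "heater": lambda power_on: 30.0 if power_on else 10.0,  # predicted temp (C)
    "sensor": lambda true_temp: true_temp,                  # ideal sensor reads truth
}

def predict(power_on, healthy):
    # Chain the component models; a faulted component's output is
    # unconstrained, so any observation is consistent with it (None).
    temp = MODELS["heater"](power_on) if "heater" in healthy else None
    if "sensor" in healthy and temp is not None:
        return MODELS["sensor"](temp)
    return None

def diagnose(power_on, observed_temp):
    # Return the smallest fault sets consistent with the observation.
    comps = list(MODELS)
    for k in range(len(comps) + 1):
        candidates = [set(c) for c in combinations(comps, k)
                      if predict(power_on, set(comps) - set(c))
                      in (None, observed_temp)]
        if candidates:
            return candidates
    return []

# Heater commanded on, but the sensor reads 10 C: either component
# alone explains the observation, so both single faults are returned.
faults = diagnose(True, 10.0)
```

No "monitor" for this symptom was ever written; the candidate fault sets are derived from the same component models an engineer would write once, which is the soundness-and-completeness argument made in the text.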
The human-systems modeling approach, that is, integrating systems engineering and human factors methods into process and tool development, will ensure that new and relevant capabilities are infused into all vehicle and habitat designs and associated operations concepts.

This will enable cost-effective analyses of a wide range of both nominal and off-nominal operations scenarios, to develop an understanding of both human decision processes and the enhancement to decision-making provided by advanced decision support tools, from both crew and mission-control perspectives. The cost of developing this type of model-based technology is amortized by a number of additional uses of the results: a platform for industry and academic partners to incorporate and test decision-support technologies and conduct additional behavioral studies; an incubator for human-systems technology spinoffs (such as natural-language command and control interfaces for smart-device and smart-home applications); a medium for hands-on involvement with deep-space mission operations concepts by the public, through web-based operational simulations and associated gaming capabilities; a means to engage and involve the next generation of scientists and engineers in deep-space mission development activities and STEM education; an integration laboratory to develop advanced solutions to off-nominal situation management in related operational domains such as robotic missions, distributed military operations, and next-generation airspace operations; and an in-situ distributed training system for just-in-time training of crew and mission support personnel.

2.2.2.4. Science and Aerospace Engineering Modeling
Science Modeling: As stated in the NASA Science Mission Directorate 2010 Science Plan (2010 SMD SP), the NASA Science program seeks answers to questions such as, "How and why are Earth's climate and the environment changing? How and why does the Sun vary and affect Earth and the rest of the solar system?" To answer these questions, models and assimilation systems, as well as numerical simulations, are the main tools to synthesize the large array of information from many current and future scientific measurements, to relate it to physical processes, and to plan for future missions. Two specific examples of such modeling are Earth System Modeling and Assimilation, and Heliophysics modeling.

Earth science models "help to quantify the interactions and balances between the various components acting on a wide variety of scales in both space and time" (2008 ESMA). Assimilation here means the use of models to synthesize diverse in-situ and satellite data streams into a single product (an analysis) that combines the strengths of each data set and of the model. For example, land data assimilation systems are used for climate analyses and short-term climate prediction.

Earth science modeling and assimilation are core elements of the science program to improve the prediction of weather and extreme weather events, global land cover change, the global carbon cycle, and the climate system. These technologies support international programs such as the World Climate Research Programme (WCRP), the World Weather Research Programme (WWRP) and the Global Earth Observation System of Systems (GEOSS). They range from comprehensive, global whole-earth systems models to local, more process-oriented models (ocean, atmosphere, land surface). Climate and weather modeling spans timescales from local weather to short-term climate prediction to long-term climate projection. Data assimilation uses available observations together with a model forecast to provide the best estimate of the state of a physical system. Some of the most important technologies are:

• High-resolution models: Increasing the model spatial resolution provides a better representation of physical processes, as well as better use of and better comparison with satellite data. It also enables assessment of regional impacts of climate variability and change, gives better input to future mission design (through Observing System Simulation Experiments, OSSEs), and provides better forecasts. For example, increases in model resolution will bring better predictions of hurricane intensity, by taking full advantage of high-resolution data such as CloudSat, CALIPSO and future GPM data.

• Integrated models: Increasing computing power also enables increased model complexity, for example by integrating information from several process-oriented models, understanding the interactions between physical, chemical and biological processes in the atmosphere, ocean and land surface. Such capabilities will contribute to the development of an Integrated Earth System Analysis (IESA), identified as one of the top priorities by the Interagency Working Group for Climate Variability and Change.

• Adaptability, data management and analytics: For example, as stated in (2008 ESMA), the temporary storage requirement will grow from about 0.5 TB per run to 2 TB per run in 2013. Online access to products for field campaigns will grow from 10 TB in 2009 to 40 TB in 2013. Petabytes of data are shared between climate centers (more than 1 Terabyte a day). Innovative visualization and analysis of model output will enable new discoveries.

In Heliophysics, modeling and numerical simulations have recently become very important to support the understanding of the overall dynamics of the Sun-to-Earth or Sun-to-Planet chain and to forecast and describe space weather. As in Earth Science, Heliophysics models are now widely used to assist in the scientific analysis of spacecraft-provided data sets, as well as in mission planning and conception. Examples of Heliophysics models are those dealing with processes such as magnetic reconnection, the physics of shocks, particle acceleration and rotational discontinuities, the turbulent dissipation of magnetic and velocity fields in space plasmas, the initiation of Solar Flares and of Coronal Mass Ejections (CMEs), the structure and evolution of Interplanetary Coronal Mass Ejections (ICMEs), the photosphere-corona connection, cross-scale and inter-regional coupling of space plasmas (similar to multi-fluid-like models), and Space Weather. Space Weather is of particular importance, as it covers all the same topics as general heliophysics science, requires cross-scale coupling, and has many implications for other modeling activities, particularly Earth Science.

Like Earth Science, Heliophysics models deal with very large volumes of data, both from observations and from simulations; for example, the new Solar Dynamics Observatory (SDO) mission generates data at a rate of about 1 TB per day (2008 HS). Significant improvements in storage, processing, assimilation and visualization technologies are required, including data mining, automated metadata acquisition, and high-end computing.

Some specific Heliophysics modeling challenges include multi-scale problems, e.g., processes on scales of ~1 km that determine the evolution of systems of > 10^7 km; time scales ranging from the solar cycle of about 11 years to the proton cyclotron time of about 1 s; systems of about 10^6 km that generate km-scale features, e.g., auroral arcs; coupling to the lower atmosphere and other planetary environments; models coupling to fluid models; and analysis of complex data sets.

One possible cross-cutting revolutionary technological concept for all Science and Exploration systems is the "Sensor Web", which represents a new paradigm for data assimilation and could lead to significant improvements in Science Modeling. It is defined to be an intelligent data collection system comprised of widely deployed, heterogeneous sensors. A sophisticated communications fabric would enable rapid, seamless interaction between instruments and Science numerical models, enabling the data assimilation system to identify an "optimal" sequence of targeted observations and autonomously collect data at specific locations in space and time. Future missions would be made more cost-effective, as the sensor web capabilities would maximize the return on investment of next-generation observing systems.

In summary, in order to take full advantage of the nation's investment in satellite- and spacecraft-provided datasets and to assist in future mission planning and design, some of the technological advancements necessary to Science Modeling are:

• Programming Environment (still expecting Fortran to be the main programming language):
»» Near-Term:
◊ Development of Fortran-compatible, parallel libraries
◊ Development of software standards and interoperability standards
»» Mid-Term:
◊ Establishment of modeling testbeds and transition support to new High Performance Processors to take full advantage of new HPC technology, such as improved parallel I/O and optimal trade-offs between memory, I/O and processor utilization
◊ Improvement of software modeling frameworks/community model access (such as ESMF and CCMC)
»» Long-Term:
◊ Development of software engineering tools to facilitate a transparent adaptation of modeling code to architecture evolution

• Data Access and Analysis:
»» Near-Term:
◊ Development of tools to deal with exponentially increasing amounts of data, soon to reach Exabytes (10^18 bytes) per day
◊ Development of fast and transparent access between distributed and remote data storage and simulations (bandwidth, firewalls)
»» Mid-Term:
◊ Development of "Sensor Web" capabilities with on-demand, real- or near-real-time satellite and targeted observations that can be used in the modeling process
◊ Development of standards for data sharing and distribution (format, metadata, naming conventions, ontologies)
»» Long-Term:
◊ Development of data mining and computer-aided discovery tools
◊ Development of tools to perform diagnostics and new data acquisition while running the models
◊ Full deployment of advanced Sensor Web-like frameworks for transparent, rapid and seamless interaction between observing systems and Science numerical models

• Optimal utilization of HEC resources available to modeling, assimilation, simulation and visualization:
»» Short-Term (by 2015):
◊ ~10-100 Teraflops sustained (4X-10X resolution, ensembles) (currently about 2 Teraflops)
◊ 300 TB RAM (currently 70 TB)
◊ 20 PB+ online disk cache (currently 1.5 PB)
◊ 100 Gbit/sec sustained network (currently 1 Gbit/sec)
»» Mid-Term: Use of coprocessors and accelerators, e.g., FPGAs
»» Long-Term: Use of quantum computing

Aerospace Engineering: There are five common themes among the high-priority challenges for NASA's aerospace engineering programs:
• Physics-based analysis tools to enable analytical capabilities that go far beyond existing modeling and simulation capabilities and reduce the use of empirical approaches in vehicle design.
• Multidisciplinary design tools to integrate high-fidelity analyses with efficient design methods and to accommodate uncertainty, multiple objectives, and large-scale systems.
• Advanced configurations to go beyond the ability of conventional technologies to achieve ARMD Strategic Objectives (i.e., capacity, safety and reliability, efficiency and performance, energy and the environment, synergies with national and homeland security, support to space).
• Intelligent and adaptive systems to significantly improve the performance and robustness of vehicles and command-and-control systems.
• Complex interactive systems to better understand the behavior of, and options for improving the performance of, advanced aerospace systems encompassing a broad range of crewed and autonomous platforms.

Research in aerospace engineering can be divided among four major categories [consistent with the five themes above]: Aerosciences; Propulsion and Power; Dynamics, Control, Navigation, Guidance and Avionics; and Intelligent and Human Integrated Systems, Operations, Decision Making and Networking. Additionally, a fifth category (Materials, Structures, Mechanical Systems and Manufacturing) was addressed by its own roadmap (TA 12). Each of these themes and categories is highly dependent on a combination of advanced modeling, simulation, information technology and processing (MSITP). Specific activities in aerospace engineering that are relevant to MSITP include:

• Aerosciences includes the development of rapid, computationally efficient and accurate computational methods to enable consideration of novel vehicle configurations at off-design conditions; improved transition and turbulence models for prediction of the location and extent of separated regions and shocks, the effects of streamline curvature, and juncture flows; multidisciplinary, multiscale and multiphysics analyses to consider propulsion-vehicle integration, aeroacoustics, aeroelasticity, and other cross-cutting design considerations from the beginning of the design process; and supporting elements such as development of higher-order algorithms, adaptive grid generation techniques, and quantification of uncertainty.

• Propulsion and Power includes validated physics-based numerical simulation codes for component-level analysis and the improvement of multidisciplinary, system-level design tools for vehicle analysis; modeling and experimental validation of combustion process physics such as injection, mixing, ignition, finite-rate kinetics, turbulence-chemistry interactions, and combustion instability to improve efficiency and life; computationally efficient modeling of highly complex, nonlinear, coupled aeroelastic/flight dynamics phenomena; and physics-based modeling tools such as computational fluid dynamics (CFD), life prediction tools, and steady-state and dynamic performance evaluation.
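The forecast-plus-observations principle of data assimilation stated above reduces, in the simplest scalar case, to an optimal-interpolation (one-dimensional Kalman) update. The sketch below uses synthetic numbers and ignores everything that makes real assimilation hard (high dimensionality, correlated errors, nonlinear observation operators); it only shows how the two information sources are weighted by their error variances.

```python
# Illustrative scalar data assimilation step: blend a model forecast with an
# observation, weighting each by its error variance. Numbers are synthetic.

def assimilate(forecast, obs, var_forecast, var_obs):
    # The gain weights the observation by the forecast's share of the
    # total uncertainty; gain -> 1 as the forecast becomes less trusted.
    gain = var_forecast / (var_forecast + var_obs)
    analysis = forecast + gain * (obs - forecast)
    var_analysis = (1.0 - gain) * var_forecast  # analysis is more certain
    return analysis, var_analysis

# Model forecasts 290.0 K with variance 4.0; a satellite retrieval gives
# 292.0 K with variance 1.0. The analysis lands nearer the more certain
# observation, and its variance is smaller than either input's.
analysis, var_analysis = assimilate(290.0, 292.0, 4.0, 1.0)
```

The same variance-weighted blending, generalized to millions of state variables and observation operators, is what the Earth science and Heliophysics assimilation systems described above perform at each analysis cycle.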
• Dynamics, Control, Navigation, Guidance and Avionics includes closed-loop flow control algorithms to enable morphing of the shape of a control surface or control of the flow locally to increase lift and reduce drag; models to control interactions between flight controls, propulsion, structures, noise, emission, health monitoring and possibly closed-loop aerodynamic control; and direct adaptation schemes and learning adaptive schemes to develop intelligent and adaptive controls.

• Intelligent and Human Integrated Systems, Operations, Decision Making and Networking includes analysis of complex interactive systems, including development of safe separation algorithms that simultaneously meet precision sequencing, merging and spacing requirements in addition to environmental constraints; methods for modeling human-machine interactions; and risk models.

• Materials, Structures, Mechanical Systems and Manufacturing includes physics-based stiffness, strength and damage models spanning length scales from nano- to continuum (including computational materials modeling); high-fidelity analyses for structural design; modeling of pyrotechnic dynamics and pyrotechnic shock generation; structural, landing, impact and deployment simulation; instability and buckling simulation; multidisciplinary design and analysis; material processing and manufacturing processes; computational NDE; and the Virtual Digital Fleet Leader (Digital Twin — see Section 2.2.3.3).

2.2.2.5. Frameworks, Languages, Tools, and Standards
Status: In order to support the increasing complexity of NASA missions and the exploration of game-changing technology, a modeling and simulation framework is required. Using this framework, investments and alternatives can be evaluated in a repeatable manner, and mission system changes can be planned and monitored. Towards this objective, architecture frameworks have been established as a guide for the development of architectures. Such frameworks define a common approach for architecture description, presentation, and integration. The framework is intended to ensure that architecture descriptions can be compared and related across boundaries.

The System Modeling Language (SysML) is a general-purpose graphical modeling language for analyzing, designing and verifying complex systems that may include hardware, software, information, personnel, procedures and facilities. SysML models include many different diagram types, capturing structural, behavioral and architectural information. The SysML approach integrates H/W and S/W disciplines, and it is scalable, adaptable to different domains, and supported by multiple commercial tools.

Future Directions: The benefits of using a uniform architecture framework across the agency cannot be realized without a set of case studies available for system engineers to review. There is a need to develop key exemplars of architectural views for specific organizations and projects in order to help clarify the use of this technology. By modeling a full project, such as a ground system architecture, within an architecture framework, the method and the models will serve as a template for future projects.

Library of SysML models of NASA-related systems: Using a library of SysML and UML models, engineers will be able to design their systems by reusing a set of existing models. Too often, engineers have to begin the design of a system from scratch. The envisioned library of models would provide a way for engineers to design a new spacecraft by assembling existing models that are domain-specific, and therefore easy to adapt to the target system.

Profiles for spacecraft, space robotics, space habitats: Profiles provide a means of tailoring UML/SysML for particular purposes; extensions of the language can be inserted. This allows an organization to create domain-specific constructs which extend existing SysML modeling elements. By developing profiles for NASA domains such as Spacecraft, Space Robotics and Space Habitats, we will provide powerful mechanisms to NASA systems engineers for designing future space systems. Existing SysML Profiles, such as MARTE for Real Time Embedded Systems, should be assessed and applied to current NASA Projects.

Requirements Modeling: SysML offers requirements modeling capabilities, thus providing ways to visualize important requirements relationships. There is a need to combine traditional requirements management and SysML requirements modeling in a standardized and sustainable
Frameworks, Languages, Tools, and By developing profiles for NASA domains such as Standards Spacecraft, Space Robotics and Space Habitats, Status: In order to support the increasing com- we will provide powerful mechanisms to NASA plexity of NASA missions and exploration of systems engineers for designing future space sys- game-changing technology, a modeling and sim- tems. Existing SysML Profiles such as MARTE for ulation framework is required. Using this frame- Real Time Embedded Systems should be assessed work, investments and alternatives can be evalu- and applied to current NASA Projects. ated in a repeatable manner, and mission system Requirements Modeling: SysML offers re- changes can be planned and monitored. Towards quirements modeling capabilities, thus provid- this objective, architecture frameworks have been ing ways to visualize important requirements re- established as a guide for the development of ar- lationships. There is a need to combine traditional chitectures. Such frameworks define a common requirements management and SysML require- approach for architecture description, presenta- ments modeling in a standardized and sustainable
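As a concrete illustration of the requirements-traceability idea above (a minimal sketch only, not any particular SysML tool's API; the classes, requirement texts, and IDs are invented), requirement-to-block "satisfy" and requirement-to-test "verify" links can be represented directly and checked for coverage:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: requirements linked to satisfying design blocks and
# verifying tests, mirroring SysML's satisfy/verify relationships so that
# coverage gaps can be found mechanically.

@dataclass
class Requirement:
    rid: str
    text: str
    satisfied_by: list = field(default_factory=list)  # design block names
    verified_by: list = field(default_factory=list)   # test case names

def coverage_report(reqs):
    """Return requirement IDs lacking a satisfying block or a verifying test."""
    unsatisfied = [r.rid for r in reqs if not r.satisfied_by]
    unverified = [r.rid for r in reqs if not r.verified_by]
    return unsatisfied, unverified

reqs = [
    Requirement("R1", "Rover shall survive lunar night", ["ThermalSubsystem"], ["T-101"]),
    Requirement("R2", "Downlink rate shall be at least 2 Mbps", ["CommSubsystem"], []),
]
print(coverage_report(reqs))  # ([], ['R2']): R2 has no verifying test yet
```

A model library of such elements, as envisioned above, would let a new project start from existing, already-traced components rather than from scratch.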

Executable models: Executable models seek to define, precisely, the execution semantics of the relevant elements of the models. The benefits are that the models can be unambiguously interpreted; that they can be executed, which, given a suitable environment, means they can be validated early in the development lifecycle; and that they can be translated to target code, achieving 100% code generation. This provides a strong analysis completion criterion: "The model is complete when it successfully executes the tests designed for it."

2.2.3. Simulation

2.2.3.1. Distributed Simulation
Status: NASA has several flavors of distributed simulation. In one case, systems and applications run at a single location and the larger team views the simulation from distributed immersive clients; provisions are made to allow team members to view the simulations live or, in many cases, after the fact from stored and shared information. The other major case is computationally distributed simulation, which borrows heavily from the distributed simulation community of another government agency for tools and standards. Here, applications, models and simulations are hosted by each of the elements involved in the simulation, and these elements share information as needed to simulate the entire system. It is important to note that, while NASA and the other government agency use similar tools and protocols, non-NASA simulations typically involve a large number of basic simulations while NASA simulations typically involve a small number of complex simulations.
Future Directions: There is a need for large-scale, shared, secure immersive environments to support team development and analysis of information. These environments must allow for rapid development and inclusion of new information and knowledge, provide an interface where intelligent agents can work alongside humans, and manage the authority elements necessary to meet national and international intellectual property and national security requirements. In addition, these environments must accommodate significantly varied communications capabilities, such as those associated with teams comprising both terrestrial and extraterrestrial members.
There is also a need for very large (multi-petabyte), distributed, managed, secure data. While this is not rocket science, the problem needs to be solved in such a way that the data is accessible to those who need it, as well as to the extended community, including participatory exploration. High-speed computer networks are also needed to move, share and allow secure interaction with these large data sets.
Provisions for storage and management of huge multi-decadal data are not far along. Distributed simulation technologies will enable team members to provide the data needed, without providing the secret pieces, to ensure successful subsystem- and system-level design integration and test. Multi-fidelity models, and the ability to share only the "safe" fidelities, will enable detailed systems integration activities to be performed early enough in the lifecycle to make a significant positive impact on system lifecycle cost.

2.2.3.2. Integrated System Lifecycle Simulation
Status: NASA tools and analysis capabilities address most, if not all, of the system lifecycle. Tools exist to support rapid concept development, system integration, process development, supply chain management, industrial base management, simulation-based test, operational support and capability re-utilization. However, NASA is just now taking steps to allow these simulation and analysis tools to share information with each other across Programs and Projects.
Integration is impeded by ad hoc data formats, system incompatibilities, fragmented programs and the lack of a central data hosting capability. Added to these challenges are the huge data elements and the need to store information across a system's multi-decade lifecycle. The issues can be translated into the format, the fidelity, or the security/proprietary nature of the information. Other concerns identified the need to "Monte Carlo" the analysis tools: the ability to take a piece of Fortran code from the 1980s, feed it data from a C++ vehicle planning tool, and send the information to an Excel spreadsheet for graphing. One area, data formats, has received significant attention in recent years. The desire to share information, and the steady adoption of XML as a data interface, has led to many tools being able to share information, either as files or by communicating directly with one another. A very visible example is the Earth science community's use of KML to share science information.
Future Directions: The primary need is not for point technologies, but for the interfaces, algorithms, and networked platforms necessary to apply them to large, complex, multi-decadal systems of systems. Formats and integration still have a considerable way to go, but progress is being made. The areas with less visible progress include application integration, provision for huge, multi-decadal data, and the legal implications associated with international partnerships and advanced technology. However, based on stakeholder discussions, progress has been made in some areas that can be leveraged.

2.2.3.3. Simulation-Based Systems Engineering
Status: The technical elements necessary to perform simulation-based systems engineering are in place, and in use, across NASA. Stakeholder interviews have pointed out instances, and project reviews have demonstrated the results. However, application has been limited to projects large enough to absorb the early costs associated with the effort, not the lifecycle benefits.
Future generations of aerospace vehicles will require lighter mass while being subjected to higher loads and more extreme service conditions over longer time periods than the present generation of vehicles. The requirements placed on systems and subsystems ranging from propulsion and energy storage to avionics to thermal protection will be greater than previously experienced, while demands on long-term reliability will increase. Thus, the extensive legacy of historical flight information that has been relied upon since the Apollo era will likely be insufficient either to certify these new vehicles or to guarantee mission success. Additionally, the extensive physical testing that provided the confidence needed to fly previous missions has become increasingly expensive to perform.
Future Directions: The modeling and simulation approaches advocated throughout this roadmapping exercise represent improvements in the state of the art of their respective disciplines. What is not considered by these individual efforts is a comprehensive integration of best-physics models across disciplines. If those best-physics (i.e., the most accurate, physically realistic and robust) models can be integrated, they will form a basis for certification of vehicles by simulation and for real-time, continuous health management of those vehicles during their missions. They will form the foundation of a NASA digital twin.
A digital twin is an integrated multiphysics, multiscale simulation of a vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin. The digital twin is ultra-realistic and may consider one or more important and interdependent vehicle systems, including propulsion/energy storage, avionics, life support, vehicle structure, thermal management/TPS, etc. Manufacturing anomalies that may affect the vehicle may also be explicitly considered. In addition to the backbone of high-fidelity physical models, the digital twin integrates sensor data from the vehicle's on-board integrated vehicle health management (IVHM) system, maintenance history, and all available historical and fleet data obtained using data mining and text mining. By combining all of this information, the digital twin continuously forecasts the health of the vehicle or system, its remaining useful life, and the probability of mission success. The systems on board the digital twin are also capable of mitigating damage or degradation by recommending changes in mission profile to increase both the life span and the probability of mission success.
One application of the digital twin is to fly the actual vehicle's future mission(s) before launch. Even without the benefit of continuous sensor updates, the digital twin will enable the effects of various mission parameters to be studied, the effects of various anomalies to be determined, and fault, degradation and damage mitigation strategies to be validated. Additionally, parametric studies can be conducted to determine the flight plan and mission parameters that yield the greatest probability of mission success. This application becomes the foundation for certification of the flying twin.
A second application is to mirror the actual flight of its flying twin. Once the vehicle is in flight, continuous updates of actual loads and other environmental factors will be input to the models, enabling continuous predictions for the flying twin. Additionally, updates of the flying twin's health parameters, such as the presence and extent of damage or the temperature of the engine, can be incorporated to reflect flight conditions. Since the algorithms comprising the digital twin are modular, the best-physics models of individual systems or subsystems can be upgraded throughout the life of the vehicle.
A third application is to perform in-situ forensics in the event of a potentially catastrophic fault or damage. Because the digital twin closely mirrors the state of health of the flying twin, it is well suited to analyzing potentially catastrophic events. Once the sensor suite on board the flying twin has relayed the degraded state of health to the digital twin, the digital twin can begin to diagnose the causes of the anomaly.
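The predict-then-correct loop at the heart of the digital twin concept can be sketched in a few lines (a toy illustration only, not a NASA implementation; the crack-growth model, gain, and all numbers are invented): a damage state is propagated by a simple physics model, corrected with each IVHM sensor update, and the remaining useful life is re-forecast.

```python
# Toy digital-twin update loop: propagate a crack-growth model, assimilate
# sensor measurements, and re-forecast remaining useful life (RUL).
# All model parameters below are illustrative assumptions.

CRITICAL_CRACK_MM = 5.0     # assumed critical crack length
GROWTH_PER_CYCLE = 0.002    # mm per load cycle, assumed nominal model

def forecast_rul(crack_mm, growth_rate):
    """Load cycles remaining until the crack reaches critical length."""
    return max(0.0, (CRITICAL_CRACK_MM - crack_mm) / growth_rate)

def assimilate(crack_mm, measured_mm, gain=0.5):
    """Blend the model prediction with an IVHM sensor measurement."""
    return crack_mm + gain * (measured_mm - crack_mm)

crack = 1.0  # mm, current state held by the digital twin
for cycles, measured in [(1000, 1.2), (1000, 1.5)]:
    crack += GROWTH_PER_CYCLE * cycles   # propagate the physics model
    crack = assimilate(crack, measured)  # correct with flight data
    print(round(crack, 3), round(forecast_rul(crack, GROWTH_PER_CYCLE)))
```

Because each model enters only through its propagation and correction steps, a higher-fidelity crack-growth model could replace the linear one without changing the loop, which is the modularity property noted above.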

A fourth application is to serve as a platform for studying the effects of modifications to mission parameters that were not considered during the design phase of the flying twin. If, for example, mission control wants to determine the effects of a failed actuator and the best mitigation, the digital twin can be used to determine the new load distributions throughout the structure, the fatigue life of the structure under the new loads, and the corresponding remaining life. As a result, mission managers can make the most informed decision possible about whether or not to make the change.

2.2.3.4. Simulation-Based Training and Decision Support
Note: Decision support technologies are discussed in Sections 2.2.2.3 and 2.2.4.5.
Status: For astronauts to remain proficient in their training, on-board Simulation-Based Trainers (SBTs) are required to support long-term missions, especially in the robotics and EVA domains. This on-board training objective requires that SBT software systems be scalable with respect to supported platforms, which usually means laptops for on-board SBTs.
An SBT architecture is typically made up of the following components: 1) real-world environment simulation (physics models such as gravity and dynamics), 2) real or flight software, 3) real or flight sensor/effector models, and 4) human interfaces (hand controllers, displays, real-world views or cameras). Depending on the required fidelity and cost factors, these components can vary in fidelity, but for accurate training, real-time performance is one requirement that cannot be sacrificed.
Future Directions: SBT computer resources are getting smaller, faster and cheaper, so the future looks bright with respect to training fidelity and computer and graphics technologies. Maintainable software architectures and simulation frameworks always have room for improvement. Providing a high-fidelity, configurable simulation of the real system, while controlling cost and maintenance, will always be a challenge. One way to accomplish this is by reducing the number of flight computers and avionics units required in the simulator for the desired fidelity. Maintainable, cost-effective trainers will then be feasible using COTS flight processor emulators that execute binary flight software builds on COTS PCs. This also gives the additional benefit of being able to insert malfunctions and execute restart procedures from checkpoints more easily.

2.2.4. Information Processing

2.2.4.1. Science, Engineering, and Mission Data Lifecycle
NASA information systems are highly data-intensive, requiring an understanding of their architectures and of specialized software and hardware technologies (cf. Section 2.2.2.4). In the scientific data systems domain there is increasing demand for improving the throughput of data product generation, for providing access to the data, and for moving systems toward greater distribution. A well-architected data system infrastructure plays a key role in achieving this if it can support the critical features discussed here: distributed operations; adaptability to capture and manage different types of data; and the scalability and elasticity required to support evolving exploration goals over time. Table 2 summarizes the key data-intensive system areas of interest thus far and categorizes the current state of the art versus where NASA should be in the next 5-10 years in each category.

2.2.4.2. Intelligent Data Understanding
Status: Modern spacecraft can acquire much more data than can be downlinked to Earth, and the capability to collect data is increasing at a faster rate than the capability to downlink. Onboard data analysis offers a means of mitigating this issue by summarizing the data and enabling download of the subset containing the most valuable portion of the collected data. Onboard intelligent data understanding (IDU) includes the capability to analyze data, and is closely coupled to the capability to detect and respond to interesting events. The overall concept goes beyond merely collecting and transmitting data, to analyzing the collected data and taking intelligent onboard action based on the content. To date, there have been some notable successes with science event detection and response on NASA spacecraft, enabled by onboard intelligent data understanding. These include planning and executing follow-up observations of volcanic eruptions, floods, forest fires, sea-ice break-up events and the like from the Earth Observing One spacecraft and, more recently, tracking dust devils on the surface of Mars from the MER rovers.
Future Directions: IDU includes a variety of capabilities such as situational awareness, data mining for target identification, and triggering of rapid response.
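The onboard event-detection capability described here can be illustrated with a minimal sketch (not flight software; the sensor values and threshold are invented): a running mean and variance flag samples that deviate strongly from the baseline, a simple stand-in for a general-purpose novelty detector.

```python
# Minimal onboard novelty-detection sketch (illustrative only): flag samples
# far from the running statistics, updated online with Welford's algorithm.

def detect_events(samples, threshold=3.0):
    """Return indices of samples more than `threshold` sigmas from the baseline."""
    events, mean, m2, n = [], 0.0, 0.0, 0
    for i, x in enumerate(samples):
        if n >= 2:
            std = (m2 / (n - 1)) ** 0.5
            if std > 0 and abs(x - mean) > threshold * std:
                events.append(i)
                continue  # do not let outliers contaminate the baseline
        # Welford's online update of mean and variance
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return events

readings = [10.0, 10.2, 9.9, 10.1, 10.0, 25.0, 10.1]
print(detect_events(readings))  # [5]: the spike is flagged for follow-up
```

In a mission setting the flagged indices would trigger follow-up observation or prioritized downlink rather than a print statement.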

Table 2. Summary of the key data-intensive system areas of interest

Thrust Area | Current Capability | Future Capability
Reference Architectures | Limited reference information | Explicit models for information and technical architecture
Distributed Architectures | Limited distributed infrastructure and data sharing | Highly distributed architectures
Information Architecture | Limited semantic models | Models that capture the semantics of science and mission data
Core Infrastructure | Tightly coupled data management services | Distributed data management services (messaging and metadata/data storage)
Data Processing and Production | Locally hosted clusters and other computational hardware | Wider use of map-reduce and other open-source capabilities
Data Analysis | Centralized data analysis for computation, tools and data | Separation of computation, tools and data
Data Access | Limited data sharing and software services | Standards-based approaches for accessing and sharing data
Search | Product- and dataset-specific searches with form fields | Rich queries, including facet-based and free-text searches, and web-service-based indexing
Data Movement | Limited use of parallelized, high-throughput data movement technologies | Movement toward higher-performing data movement technologies
Data Dissemination | Distribution tightly bound to existing data movement technologies | Distribution of massive data across highly distributed environments

IDU serves as an approach to maximize information return over a limited downlink channel. It permits collecting data at the capacity of an instrument rather than preselecting the time and location of observations, and it permits extended monitoring of an environment for rare events without overburdening downlinks. The capability enables the capture, and immediate follow-up, of short-lived events, which is not possible with traditional spacecraft paradigms. Also, the ability to analyze data and immediately use this information onboard can enable reaction in a dynamic environment, a capability that could improve not only information collection but also spacecraft safety, such as reaction to unanticipated hazards. This enables operations in uncertain and rapidly changing environments where an adequately rapid feedback loop with ground operators is not possible. Several capabilities require advancement in order to realize the full potential of onboard intelligent data understanding.
Event detection. There must be effective computational mechanisms to identify high-information-content data. This may involve recognizing features or events that have been pre-specified as interesting, or identifying novel events. For some purposes, detectors specialized to a specific feature and instrument are most appropriate. Another approach is to develop general-purpose detectors; this enables a single method to find a variety of features and to identify events that were not predefined. A third category of event detection focuses exclusively on novel or anomalous events.
Data summarization. Summarization addresses limited downlink capacity while realizing the full observation potential of an instrument. This can include statistical approaches ensuring that the full diversity of observations is delivered. Decision products can be generated onboard and transmitted with reduced bandwidth, providing rapid response to events, with the full-resolution data following later. Scene summarization, which consists of identifying and characterizing observed features over time, is another form of data summarization. This capability can be used to provide extensive survey information when it is not possible to return the full-resolution data collected for the entire survey.
Data prioritization. Upon detection or summarization, decisions must be made onboard about which data have the highest information content and should be transmitted. Approaches to prioritizing data range from pre-specified priorities to machine learning methods that dynamically update priorities based on understanding of the data and information gathered to date. Considerable work is needed to mature these methods; beyond them, even more advanced prioritization approaches that consider the interdependence of observations and objectives are possible. Ultimately, models will be used to compare collected data with predicted observations to prioritize the data and identify unexpected trends as well as individual events. Another area of continued development is automated data prioritization based on conflicting science objectives.
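A pre-specified-priority scheme, the simplest of the prioritization approaches above, can be sketched as follows (illustrative only; observation names, sizes, and scores are invented): each product gets an information score, and the limited downlink budget is filled in order of score per megabyte.

```python
import heapq

# Illustrative downlink-prioritization sketch: greedily fill a downlink
# budget with the observations that carry the most information per megabyte.

def plan_downlink(observations, budget_mb):
    """observations: list of (name, size_mb, score). Returns names to send."""
    # max-heap keyed on information density (score per megabyte)
    heap = [(-score / size, name, size) for name, size, score in observations]
    heapq.heapify(heap)
    sent, used = [], 0.0
    while heap:
        _, name, size = heapq.heappop(heap)
        if used + size <= budget_mb:
            sent.append(name)
            used += size
    return sent

obs = [("dust_devil", 40.0, 90.0), ("routine_pan", 80.0, 20.0), ("crater_rim", 30.0, 45.0)]
print(plan_downlink(obs, 75.0))  # ['dust_devil', 'crater_rim']
```

The machine-learning variants discussed above would replace the fixed scores with scores updated onboard as data are understood.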

In every mission there are conflicting goals, such as surveying a wide area versus conducting an in-depth study of a focus region. In order to facilitate priority assessments, increasingly sophisticated scientific interest metrics will need to be developed and deployed.
Long-term goals should include a proposed spacecraft with onboard data understanding baselined as part of the mission, i.e., designed and planned for throughout the mission lifecycle. Eventually, multi-spacecraft collaborative event detection, analysis, and response, enabled by distributed onboard IDU, can be realized.

2.2.4.3. Semantic Technologies
Status: Complex space systems accrue a significant number of maintenance records and problem reports, currently stored as unstructured text. The lack of common structure, semantics, and indexing creates major problems for engineers and others trying to discover recurring anomalies, causal or similarity relations among reports, and trends indicating systemic problems. Text mining and related approaches are particularly useful for identifying unknown recurring anomalies and unknown relations among them. The methods used to discover anomalies are based on document clustering, the process of finding, by quantitative methods, subsets of reports that have similar features.
Future Directions: Over the past 10-20 years, numerous technologies have been developed and tested for anomaly detection by text mining (Srivastava & Zave-Ulman, 2005). More effort is needed in two areas: (1) development and operational testing of text mining systems for deployment in aerospace environments, including the many technical adaptations needed to tailor them to aerospace operations; and (2) integration of methods that were formerly thought to be different and perhaps competing, namely text mining, natural language processing, and ontologies. These lie along a continuum from relatively unstructured, data-driven, inductive approaches to highly structured, logic-based approaches (Malin et al., 2010).
As the use of semantic technologies becomes routine, human-intensive coding needs to be eliminated, and fixed logical structures will be replaced by self-adapting approaches based on machine learning. Emerging technologies will include representation and visualization of complex data structures such as biological structures and processes; new technologies for incremental analysis of ultra-large-scale, evolving text bases; automated metadata acquisition; automated construction of ontologies from unstructured text; machine learning techniques for adaptive natural language understanding; and automated management of multiple hypotheses and automated generation of hypothesis-testing plans.

2.2.4.4. Collaborative Science and Engineering
Status: Science and engineering processes are evolving as information technology enables design and development organizations, some geographically dispersed, to collaborate in both synchronous and asynchronous manners throughout the lifecycle of a project. Asynchronous collaboration has recently been dominated by electronic mail and document interchange via the Web. Synchronous collaboration usually consists of periodic teleconferences and videoconferences used to exchange updates on progress conducted "off-line" by members of the team.
Future Directions: As more NASA projects are executed by distributed teams, and as NASA provides more oversight of outsourced work, the need for real-time, continuous virtual collaboration and the supporting collaborative technology environments increases. Collaborative technology environments will allow distributed teams with disparate expertise and resources, including those of partner agencies and contractors, to work in a unified manner to exchange ideas, optimize their use of resources, and complete projects much more efficiently than is currently possible.
These collaborative technology environments should be a human and technology resource that provides a collaborative engineering facility for projects in all phases of science and engineering, from the initial proposal to the final report. Skilled scientists and engineers can use the facility's collaborative processes and sophisticated tools to develop multidisciplinary solutions, develop new understanding of scientific phenomena, and discover the inter-relationships between phenomena to an extent that would otherwise not be possible.
Within this environment, geographically dispersed members of science and engineering teams can come together for real-time collaboration in an integrated environment where the capabilities for connecting their individual solutions and outcomes are well formulated and well understood. Moreover, the highly collaborative, visual nature of the work encourages broader thinking among discipline experts. Each discipline expert can see directly the impact of his or her work on other disciplines and benefit from the resulting interactions.

The customer can also directly observe the impact of requirements, and of changes to them, on each discipline in terms of cost, schedule, and various scientific and engineering outcomes.
Telepresence is a technology that is progressing fairly rapidly and may soon be a viable means of collaboration. Within the next 20 years, immersive (sight, sound, and touch) virtual reality will likely be commonly available and will enable real-time, continuous virtual collaboration. Here again, the encouragement and adoption of open standards are critical to rapid adoption and growth.
There are many candidate mathematically based representations, editors, and graphical descriptions that aim to further the goals of integrating continuous dynamics into discrete-event simulations, providing insight into causal relations, avoiding information overload and clutter, and leveraging multiple M&S development efforts through efficient composition. Triple Graph Grammars (TGGs) are being evaluated for their ability to formalize and automate complex bidirectional model transformations. Another challenge is finding appropriate ways to visualize very different knowledge representations at different levels of abstraction, including natural language, logic, or equations (requirements); diagrams (design); state or flow diagrams (implementation); and dynamic behavior (test and operations).

2.2.4.5. Advanced Mission Systems
Status: Ground-based automation and flexible crew and space system concepts are required for science and exploration mission reliability and risk management. This topic area spans flight and ground automated planning, event detection and response, and fault protection, and targets both robotic and human-robotic operations concepts. Precision landing, along with rendezvous and docking, are examples of anticipated future engineering functions implying a high degree of independence from ground-based control. Science event detection and response ultimately will generalize to multiple platforms, supported by space networking.
Automated planning and scheduling has already shown considerable improvements in space operations, primarily in ground usage. Surprisingly, this technology is still not common on the ground and has to date seen only very limited use onboard. Some notable examples of impact on space missions are indicated below:
• Over 30% increase in observation utilization of the Hubble Space Telescope.
• EO-1: over $1M/year reduction in operations costs (~30% reduction overall) due to automation in uplink operations; reduction in downtime from ground station anomalies from 5 days to 6-8 hours; 30% increase in weekly observations.
• Mars Exploration Rovers: the MAPGEN mixed-initiative planning system was used to plan operations for the Spirit and Opportunity rovers at Mars.
• Reduction of two operations staff, a 26% increase in mission productivity, and a 35% reduction in mission planning errors due to use of the ASPEN automated planning system on a ground- and flight-based technology experiment for another government agency.
• 50% reduction in downlink data management planning for Mars Express, with increased robustness due to the ability to optimize and produce multi-day/week look-ahead plans.
• AUTOGEN automation of Mars relay operations has enabled cost avoidance of over $7M, with projected mission-lifetime total savings of over $18M.
In many of the above examples, onboard mission planning and execution has been integrated with IDU to enable an overall onboard event detection and response capability. Another strongly related technology investment area is flight computing. Onboard mission planning and execution is a prominent example of the use of model-based reasoning techniques in flight software. The search- and memory-intensive requirements of model-based reasoning technologies stress flight computing concepts and solutions. Emerging multi-core capabilities for space applications hold promise for providing the needed general-purpose, high-performance flight computing capacity.
Future Directions:
Small Body and Near Earth Object (NEO) Missions: A number of future NASA missions include small body targets. These mission concepts are expected to become more complex and, as such, new capabilities in mission design tools will be needed to provide rapid and highly accurate orbital thrust profiles. In addition, there is a need to decrease, from several months to a few hours or less, the time required to compute small body landing trajectories for complex gravity fields. These capabilities would improve operational efficiency, lower risk to spacecraft, and enhance scientific return in a shorter time period.
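A minimal sketch of the planning-and-scheduling idea behind the examples above (illustrative only; not MAPGEN or ASPEN, and the observation requests are invented): requests are packed greedily into a timeline by priority, honoring each request's visibility window and duration.

```python
# Toy priority scheduler: place higher-priority observation requests first,
# fitting each into the earliest free slot inside its visibility window.

def schedule(requests, horizon):
    """requests: list of (name, priority, window_start, window_end, duration)."""
    timeline = []  # non-overlapping booked slots: (start, end, name)
    for name, _prio, ws, we, dur in sorted(requests, key=lambda r: -r[1]):
        t = ws
        for s, e, _n in sorted(timeline):
            if t + dur <= s:   # fits in the gap before this booked slot
                break
            t = max(t, e)      # otherwise try after it
        if t + dur <= min(we, horizon):
            timeline.append((t, t + dur, name))
    return sorted(timeline)

reqs = [("survey", 1, 0, 10, 4), ("flood_followup", 5, 0, 6, 3), ("calib", 3, 2, 10, 2)]
print(schedule(reqs, horizon=10))
# [(0, 3, 'flood_followup'), (3, 5, 'calib'), (5, 9, 'survey')]
```

Operational planners add constraint propagation, resource models, and backtracking search; the greedy pass here only illustrates the goal-to-timeline translation the section describes.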

enhance scientific return in a shorter time period.

Long Lived In Situ Robotic Explorers: In situ missions are a major focus of NASA's solar system exploration program. Proposed future rovers and balloons are expected to have the capability to explore with more onboard decisions based on local information and less dependence on frequent human intervention. This capability involves migrating much of the mission planning and sequencing to the remote exploration element and ensuring that the associated software is sophisticated enough to handle the many unexpected situations that are likely to occur.

Space Networking: Over the next twenty-five years, an increasing number of missions will be relying on the coordinated operation of multiple assets. At Mars, the viability of dedicated relay orbiters for communications with Earth has already been demonstrated by the Mars Exploration Rovers, which have returned the vast majority of their data via the Mars Global Surveyor and Mars Odyssey spacecraft. Instead of planning and implementing operational sequences and communications for a single spacecraft, mission operators will need techniques for carrying out similar functions, in a timely fashion, for multiple, and sometimes very different, spacecraft. Automated data-transfer scheduling, routing, and validation will be required. Evolution in communications based on space-based networking will require evolution of the operations tools and services that depend upon that infrastructure.

Distributed Mission Operations: Distributed mission operations are necessary because the capital resources that NASA develops, such as orbital and ground-based laboratories, are scarce, and the number of scientists who want access to them is growing. Currently, distributed mission operations are conducted either by co-locating the collaborating scientists or by the use of teleconferences and data sharing. Future needs include collaborative virtual environments in which disparate teams of scientists can explore distant objects in near real-time by efficiently and intuitively utilizing robotic vehicles to conduct their experiments. Virtual reality-based distributed mission operations, including virtual telepresence, have the ability to revolutionize the way that science experiments are performed by enabling scientists to interact with distant objects as missions are executed. Thus, robotic missions will enable virtual crews composed of scientists in distributed locations to interrogate distant objects, including those having environments too extreme for visitation by humans, as though they were walking on their surface or flying through their atmosphere. NASA has extensive experience relevant to this concept with ground stations distributed around the world for spacecraft/satellite communication and orbital relays that maintain constant communication with in-flight resources. However, unresolved challenges in distributed mission operations include 1) the need to mitigate the effects of time delay in communication between the vehicle and the ground-based researcher, and 2) the ever-increasing need for more computational resources and data storage. The capability of performing science experiments over great distances and supplying that information in formats suitable for rapid assimilation must be focal points of development.

Mission Planning and Execution is a technology area addressing the need to achieve the high-level goals and requirements of a mission by creating lower-level activity plans or sequences. In operational missions, automation of elements or all of this process has been operationally demonstrated to increase reliability, improve mission return, increase mission responsiveness, and potentially reduce mission operations costs. A number of areas of technology investment present major opportunities to increase the leverage of mission planning and execution systems within space exploration missions.

Representing and Reasoning about Complex States and Resources; Spatial Reasoning: One of the challenges of space exploration is the prevalence of complex, one-of-a-kind systems. Because of the expense (including opportunity cost) of such systems, they must be modeled with extremely high fidelity to ensure mission safety. Additional research is needed to enable representation of more complex constraints such as pointing, power (especially power generation), and thermal constraints. In particular, representing and reasoning about spatial constraints represents a major area in need of future work for mission planning and execution systems. Spatial constraints frequently arise in terms of observational instrument viewing geometries (e.g., under what situations can this instrument view this target with the following illumination constraints). Better techniques, algorithms, and experience in representing and reasoning about these constraints will increase the effectiveness of future mission planning and execution systems.

Optimization, Eliciting User Preferences, Mixed Initiative Planning and Execution: Most

space operations problems are optimization problems in which some objective function is to be optimized, characterizing the desirability of achieving different objectives (e.g., prioritized science objectives) as well as avoiding undesired states (e.g., expending consumable resources such as propellant, or attempting risky behaviors). Particularly challenging are mixed problems, which can have both hard and soft constraints, and multi-objective optimization problems in which multiple competing objectives occur without any explicit model of trading off between the competing objectives. Further research into representations, search algorithms, and approaches for optimization in mission planning and execution are all areas of needed future work.

Adaptive Planning: In the long term, space systems will need to deal with changing environments, degrading hardware, and evolution of mission goals. Within this long-term horizon, space systems will need to be adaptive, that is, to adjust their control strategies and search strategies over changing context. To date, most operational automated mission planning systems use domain models that have been painstakingly engineered and validated by operations staff and technologists. In order to scale to more challenging and unknown operational environments over the long term, without incurring the costs of manual model and search algorithm updating, adaptive (e.g., machine learning) techniques must be developed and validated. This currently remains a large area of relatively untapped potential.

Multi-Agent Planning; Distributed, Self-Organizing Systems: In order to further push the exploration frontier, larger numbers of assets ultimately will be needed. Current means of managing the operations of such platforms do not scale cost-effectively to tens or hundreds of platforms. Mission planning and execution systems that coordinate with other such systems, without requiring human (ground-based) oversight, ultimately will be needed. Such distributed systems would coordinate multiple assets to achieve joint goals, form and disband teams appropriately, monitor team execution, and reformulate and re-plan in the face of unit failures or evolved goals. Early work in Earth-based systems (operational since 2004 to track volcanoes, floods, and the like) has illustrated the utility of such paradigms. However, this early work uses only static, human-defined collaborations among multiple assets; automating asset discovery and coordination is an unaddressed challenge.

Ground Systems Automation: A Mission Operations System (MOS) or ground system constitutes that portion of an overall space mission system that resides on Earth. Over the past two decades, technological innovations in computing and software technologies have allowed an MOS to support ever more complex missions while consuming a decreasing fraction of project development costs. Demand continues for ground systems which will plan more spacecraft activities with fewer commanding errors, provide scientists and engineers with more functionality, process and manage larger and more complex data more quickly, all while requiring fewer people to develop, deploy, operate, and maintain them. Technology needs are most prevalent in the following areas:
Monitoring Systems – capturing and distributing Flight System data, maintaining knowledge of Flight System performance, and ensuring its continued health and safety.
Navigation & Mission Design – maintaining knowledge of Flight System location/velocity and planning its trajectory for future Mission activities.
Instrument Operations – capabilities supporting mission instrument operations planning, decision-making, and performance assessment for a wide variety of instrument types.

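The mixed hard/soft-constraint formulation described above can be made concrete with a toy sketch. In the example below, every observation name, priority, power figure, and penalty is invented for illustration: science priorities are maximized, a power budget acts as a hard constraint that rules plans out entirely, and off-nominal viewing conditions act as soft constraints that merely lower a plan's score.

```python
from itertools import combinations

# Hypothetical candidate observations: a science priority (to maximize),
# a power draw (counts against a hard budget), and a soft-constraint
# penalty (e.g., marginal illumination geometry).
OBSERVATIONS = {
    "crater_scan":  {"priority": 8, "power": 5, "penalty": 0.0},
    "dust_survey":  {"priority": 5, "power": 3, "penalty": 1.5},
    "spectrometry": {"priority": 7, "power": 6, "penalty": 0.5},
    "imaging_pass": {"priority": 4, "power": 2, "penalty": 0.0},
}
POWER_BUDGET = 10  # hard constraint: total power draw must not exceed this

def schedule(observations, power_budget):
    """Exhaustively search subsets of observations: discard any subset that
    violates the hard power constraint, and among the feasible subsets pick
    the one maximizing priority minus soft-constraint penalties."""
    best_plan, best_score = (), float("-inf")
    names = list(observations)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            power = sum(observations[o]["power"] for o in subset)
            if power > power_budget:
                continue  # hard constraint violated: plan is infeasible
            score = sum(observations[o]["priority"] - observations[o]["penalty"]
                        for o in subset)  # soft constraints only reduce score
            if score > best_score:
                best_plan, best_score = subset, score
    return set(best_plan), best_score
```

A real mission planner would replace the exhaustive subset enumeration with heuristic, constraint-based, or local search, since flight-scale problems are far too large to enumerate; the sketch only illustrates the hard-versus-soft distinction the text draws.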
Table 3. Information Processing Roadmap

Technology Capability | Relevance | Timeframe
Ground-based automated mission planning and scheduling for hard constraints and goals | Building block for automated mission planning | 2010-2015
Onboard event-driven goal re-selection | Building block for more resource-aware onboard decision making | 2015
Automated systems with more complex states and resources, complex optimization constraints, mixed initiative planning | Increased ability to deploy ground systems to make impacts in ongoing missions | 2010-2020
Embedded Planning and Scheduling | Enables widespread use of onboard planning, scheduling, and execution systems | 2010-2020
Adaptive systems | Enables robust performance in the presence of changing goals, environments, and objectives | 2030+
Multi-agent systems | Enables robust coordination of large numbers of remote assets, critical to enable such missions without prohibitive operations costs | 2030+

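The "onboard event-driven goal re-selection" capability in Table 3 can be sketched minimally. The event types, goals, and priorities below are hypothetical, not drawn from any flight system; the point is only the control pattern: detected events trigger candidate response goals, and the highest-priority trigger displaces the current goal.

```python
# Default science goal the hypothetical spacecraft pursues when nothing fires.
DEFAULT_GOAL = "mapping_survey"

# Invented rules mapping a detected event to (replacement goal, priority);
# a higher priority wins, so engineering safety trumps opportunistic science.
RESPONSE_RULES = {
    "thermal_anomaly": ("safe_mode", 3),
    "volcanic_plume":  ("targeted_imaging", 2),
    "dust_storm":      ("atmospheric_scan", 1),
}

def reselect_goal(detected_events, current_goal=DEFAULT_GOAL):
    """Return the goal the onboard executive should pursue next: the
    highest-priority triggered response, or the current goal if none fire."""
    triggered = [RESPONSE_RULES[e] for e in detected_events if e in RESPONSE_RULES]
    if not triggered:
        return current_goal
    goal, _priority = max(triggered, key=lambda pair: pair[1])
    return goal
```

An onboard executive would run such a check each planning cycle, handing the selected goal to the embedded planner and scheduler that Table 3 lists as a companion capability.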
3. Interdependency with Other Technology Areas
• TA 4 ROBOTICS, TELE-ROBOTICS, AND AUTONOMOUS SYSTEMS: Avionics & processors; Autonomous control; Automated rendezvous & docking; Autonomous precision landing & hazard avoidance; Autonomous payload offloading & spacecraft servicing; Autonomy for operations; Telerobotic operations; Human-machine interaction; Robotic health monitoring and damage assessment; Intelligent software design; Integrated systems health management; Fault management techniques; Extreme environment electronics; Software reliability
• TA 5 COMMUNICATION AND NAVIGATION: Computing-communications trade-offs.
• TA 6 HUMAN HEALTH, LIFE SUPPORT AND HABITATION SYSTEMS and TA 7 HUMAN EXPLORATION DESTINATION SYSTEMS: […] & control; Life support & habitation systems; Long duration in-space human health maintenance and monitoring; Closed-loop life support; Fire detection & suppression; Displays and controls for crew; Crew systems; Wearable computing; EVA technology; Medical prognosis, diagnosis, detection, and treatment; Bioinformatics; Surface systems health monitoring and damage detection; Human-tended docking systems; In-space and surface operations; System commonality, modularity, and interfaces
• TA 9 ENTRY, DESCENT, AND LANDING: Multi-scale, multi-physics modeling.
• TA 10 NANOTECHNOLOGY: Quantum mechanics, atomistic, molecular dynamics and multi-scale simulations; Simulations of deformation, stiffness and strength; Durability and damage tolerance; Oxidation and environmental effects; Thermal and electrical performance/conductivity; Interface design and performance; Material processing; Radiation effects and transport
• TA 12 MATERIALS, STRUCTURAL AND MECHANICAL SYSTEMS, AND MANUFACTURING: Quantum mechanics, atomistic, molecular dynamics, discrete and continuum plasticity simulations; Stiffness and strength; Fatigue, fracture and damage mechanics; Environmental effects; Structural mechanics and kinematics; Structural, landing and impact dynamics; Deployment simulations; Instability and buckling simulations; Probabilistic/nondeterministic design; Material processing and manufacturing simulations; Computational NDE; Digital Twin (Virtual Digital Fleet Leader).
• TA 13 GROUND AND LAUNCH SYSTEMS PROCESSING: Launch complex & ground ops; Advanced process support; Advanced mission operations; Spaceport interoperability; Safe, reliable & efficient launch ops; Range tracking, surveillance, & safety; Inspection & system verification; Telemetry; Weather prediction & decision making
• TA 14 THERMAL MANAGEMENT SYSTEMS: Conduction, convection and radiation heat transfer; Ablation, […]-surface interaction simulation; Aerothermodynamics

4. Possible Benefits to Other National Needs
Safety benefits in transportation and medicine; SensorWeb communication systems for disaster response; education: to inspire and motivate current and future generations of students; outreach: virtual exploration; interdisciplinary collaboration; business applications: virtual interactive process design, manufacturing and logistics, digital factory; Semantic Web: standards, mark-up languages, open-source component libraries, interoperable simulation; grid- and service-based architectures; high-performance simulation on COTS hardware; data-based decision and policy support.

Acronyms
ARMD  (NASA) Aeronautics Research Mission Directorate
ASCENDS  Active Sensing of CO2 Emissions over Nights, Days, and Seasons
CCMC  Community Coordinated Modeling Center
CFD  Computational Fluid Dynamics
CME  Coronal Mass Ejection
COTS  Commercial Off The Shelf (hardware or software)
DARPA  Defense Advanced Research Projects Agency
DESDynI  Deformation, Ecosystem Structure and Dynamics of Ice
ESMA  Earth Science Modeling and Assimilation
ESMD  (NASA) Exploration Systems Mission Directorate
ETDP  Exploration Technology Development Program
FDIR  Failure (Fault) Detection, Isolation and Recovery
FPGA  Field Programmable Gate Array
FTD  Flagship Technology Demonstrator
GbE  Gigabit Ethernet
GEOSS  Global Earth Observation System of Systems
GPM  Global Precipitation Measurement
Grail  Gravity Recovery and Interior Laboratory
GSFC  Goddard Space Flight Center
HCI  Human-Computer Interaction
HEC  High End Computing
Hyspiri  Hyperspectral-Infrared Imager
IBEX  Interstellar Boundary Explorer
ICESat  Ice, Cloud, and land Elevation Satellite
ICME  Interplanetary Coronal Mass Ejection
IDU  Intelligent Data Understanding
IESA  Integrated Earth System Analysis
ISHM  Integrated (Intelligent) System Health Management (Monitoring)
ISS  International Space Station
I&T  Integration and Test
IVHM  Integrated (Intelligent) Vehicle Health Management (Monitoring)
JPL  Jet Propulsion Laboratory
JSC  Johnson Space Center
JWST  James Webb Space Telescope
LDCM  Landsat Data Continuity Mission
LEO  Low Earth Orbit
LSA  Latent Semantic Analysis
MARTE  Modeling and Analysis of Real-Time and Embedded (Systems)
MAX-C  Mars Astrobiology Explorer-Cacher
MBSE  Model-Based Systems Engineering
u  micro (imitating Greek letter "mu")
MIPS  Million Instructions per Second
MISSE  Materials International Space Station Experiment
MMS  Magnetospheric Multiscale
MOTS  Modified (or Modifiable) Off The Shelf (hardware or software)
MSITP  Modeling, Simulation, Information Technology and Processing
MSL  Mars Science Laboratory
NAS  NASA Advanced Supercomputer
NAS  National Airspace System
NextGen  Next Generation (Air Traffic Management or National Airspace System)
NEO  Near-Earth Object
NPOESS  National Polar-orbiting Operational Environmental Satellite System
NPP  NPOESS Preparatory Project
NRC  National Research Council
OCO  Orbiting Carbon Observatory
OOD  Object-Oriented Design
OPFM  Outer Planet Flagship Mission
PGAS  Partitioned Global Address Space
PCA  Principal Components Analysis
RHBD  Radiation-hardened by design
RHBP  Radiation-hardened by process
SBT  Simulation-Based Training
SDO  Solar Dynamics Observatory
SMAP  Soil Moisture Active & Passive
SMD  (NASA) Science Mission Directorate
SOMD  (NASA) Space Operations Mission Directorate
STEM  Science, Technology, Engineering, and Mathematics (education)
STEREO  Solar TErrestrial RElations Observatory
SysML  Systems Modeling Language
TA  Technical Area
TABS  Technical Area Breakdown Structure
TASR  Technical Area Strategic Roadmap
TGG  Triple Graph Grammar
TRL  Technology Readiness Level
UML  Unified Modeling Language
UPC  Unified Parallel C
VM  Virtual Machine
WCRP  World Climate Research Programme
WWRP  World Weather Research Programme
XML  Extensible Markup Language

Acknowledgements
The NASA technology area draft roadmaps were developed with support and guidance from the Office of the Chief Technologist. In addition to the primary authors, major contributors for the TA11 roadmap included the OCT TA11 Roadmapping POC, Maria Bualat; the reviewers provided by the NASA Center Chief Technologists and NASA Mission Directorate representatives; and the following individuals: Rebecca Castaño, Micah Clark, Dan Crichton, Lorraine Fesq, Michael Hesse, Gerard Holzmann, Lee Peterson, Michelle Rienecker, David Smyth, and Colin Williams.

References
ACM Taxonomy / IEEE Computer Society Keywords. http://www.computer.org/portal/web/publications/acmtaxonomy
Douglass, S., Sprinkle, J., Bogart, C., Cassimatis, N., Howes, A., Jones, R.M., & Lewis, R. (2009). Large-scale Cognitive Modeling Using Model-integrated Computing. Mesa, AZ: AFRL.
ESMA 2008 Panel Report.
Extremely Large Databases Workshop. http://www-conf.slac.stanford.edu/xldb10/
Heliophysics 2008 Panel Report.
Jin, H., Hood, R., & Mehrotra, P. (2009). A Practical Study of UPC Using the NAS Parallel Benchmarks. PGAS '09, Ashburn, Virginia.
Johnson, T., Paredis, C.J.J., & Burkhart, R. (2008). Integrating Models and Simulations of Continuous Dynamics into SysML. Presented at Modelica 2008.
Knill, E. (2010). Quantum Computing. Nature, 463(28), 441-443.
Malin, J.T., Millward, C., Gomez, F., & Throop, D. (2010). Semantic Annotation of Aerospace Problem Reports to Support Text Mining. IEEE Intelligent Systems.
McCafferty, D. (2010). Should Code Be Released? Communications of the ACM, 53(10), 16-17.
Merali, Z. (2010). Why Scientific Programming Does Not Compute. Nature, 467, 775-777.
NRC Steering Committee for the Decadal Survey of Civil Aeronautics. (2006). Decadal Survey of Civil Aeronautics: Foundation for the Future. http://www.nap.edu/catalog/11664.html
NRC Computer Science and Telecommunications Board. (2010). Report of a Workshop on the Scope and Nature of Computational Thinking. Washington, DC: The National Academies Press.
Nature Magazine, "Big Data" Issue. http://www.nature.com/nature/journal/v455/n7209/full/455001a.html
Oblinger, D. (2006). Bootstrapped Learning: Creating the Electronic Student that Learns from Natural Instruction. DARPA IPTO.
Parnas, D.L. (2010). Really Rethinking 'Formal Methods'. IEEE Computer, 43(1), 28-34.
Science Mission Directorate. 2010 Science Plan. http://science.nasa.gov/about-us/science-strategy/
Segaran, T., & Hammerbacher, J. Beautiful Data. http://oreilly.com/catalog/9780596157128
Srivastava, A., & Zane-Ulman, B. (2005). Discovering Recurring Anomalies in Text Reports Regarding Complex Space Systems. IEEE Aerospace Conf. Proc., CD-ROM, IEEE Aerospace, March 2005.
Tallant, G.S., et al. (2004). Validation & Verification of Intelligent and Adaptive Control Systems. In Proceedings of the 2004 IEEE Aerospace Conference. DOI: 10.1109/AERO.2004.1367591
Taylor, S.J.E., Lendermann, P., Paul, R.J., Reichenthal, S.W., Straßburger, S., & Turner, S.J. (2004). Panel on Future Challenges in Modeling […]. In R.G. Ingalls, M.D. Rossetti, J.S. Smith, and B.A. Peters (Eds.), Proceedings of the 2004 Winter Simulation Conference.
Welch, P., Brown, N., Moores, J., Chalmers, K., & Sputh, B. (2007). Integrating and Extending JCSP. In A.A. McEwan, S. Schneider, W. Ifill, & P. Welch (Eds.), Communicating Process Architectures. Amsterdam: IOS Press.
Yilmaz, L., Davis, P., Fishwick, P.A., Hu, X., Miller, J.A., Hybinette, M., Ören, T.I., Reynolds, P., Sarjoughian, H., & Tolk, A. (2008). Panel Discussion: What Makes Good Research in Modeling and Simulation? Sustaining the Growth and Vitality of the M&S Discipline. In S.J. Mason, R.R. Hill, L. Mönch, O. Rose, T. Jefferson, & J.W. Fowler (Eds.), Proceedings of the 2008 Winter Simulation Conference.
Zeigler, B., Praehofer, H., & Kim, T.G. (2000). Theory of Modeling and Simulation (second edition). New York: Academic Press.

National Aeronautics and Space Administration

NASA Headquarters
Washington, DC 20546
www.nasa.gov