NASA Experience with Automated and Autonomous Navigation in Deep Space
Jet Propulsion Laboratory, California Institute of Technology

NASA Experience with Automated and Autonomous Navigation in Deep Space

Ed Riedel
Optical Navigation Group, Mission Design and Navigation Section
Jet Propulsion Laboratory, California Institute of Technology

A description of work with many contributions from: Andrew Vaughan, Nick Mastrodemos, Shyam Bhaskaran, William M. Owen, Dan Kubitschek, Bob Werner, Bob Gaskell (all JPL), and Christopher Grasso (BSE)

Introductory Remarks

What is Optical Navigation?

Optical Navigation measures the location of a near-field object (e.g. the Moon) relative to a well-known far-field object (e.g. the background starfield), or relative to well-known camera attitude. (With a sufficiently wide-field imager, pointing knowledge can be obtained simultaneously with position knowledge.)

Optical Navigation variously requires:
• Accurate star catalogs and physical body models, including landmarks.
• Accurate camera calibrations, including geometric distortions and photometric modeling.
• Astrometric-quality imaging systems, often with high dynamic range.
• Filtering and estimation of optically relevant parameters along with s/c position and attitude.
• Ground-based Optical Navigation processing is very similar to radiometric ground processing, with the addition of (sometimes difficult and labor-intensive) image processing.
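The measurement described above can be sketched in code: catalog stars in the frame fix the camera's inertial pointing (here via the SVD solution to Wahba's problem), and the target's pixel centroid is then mapped to an inertial line of sight. This is a minimal illustration, not the flight algorithm; the distortion-free pinhole model and all numbers are illustrative assumptions.

```python
# Hedged sketch of the core OpNav measurement: use catalog stars in the
# frame to solve camera attitude, then turn the target's pixel location
# into an inertial line-of-sight direction. Pinhole model, no distortion;
# focal length and pixel coordinates are made-up illustrative values.
import numpy as np

def pixel_to_camera_ray(px, py, focal_px):
    """Unit vector in the camera frame for a pixel offset from boresight."""
    v = np.array([px, py, focal_px], dtype=float)
    return v / np.linalg.norm(v)

def camera_attitude(cam_rays, inertial_rays):
    """Wahba's problem via SVD: rotation C with inertial = C @ camera,
    from matched star observations."""
    B = sum(np.outer(i, c) for i, c in zip(inertial_rays, cam_rays))
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Illustrative scenario: a known attitude generates star pixel positions;
# we then recover the attitude and the target's inertial direction.
focal = 4000.0                            # focal length in pixels (assumed)
theta = np.radians(3.0)                   # "true" camera roll about z
C_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
star_cams = [pixel_to_camera_ray(*p, focal)
             for p in [(120, -80), (-200, 40), (60, 250)]]
star_inertial = [C_true @ s for s in star_cams]   # stand-in for catalog vectors

C_est = camera_attitude(star_cams, star_inertial)
target_inertial = C_est @ pixel_to_camera_ray(-35.0, 18.0, focal)
print(np.allclose(C_est, C_true))   # the starfield fixes the pointing
```

With pointing known this precisely, the target's image location becomes an astrometric observation of its direction, which is what the navigation filter ingests.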
Optical Navigation Syllabus

• Opti-metrics: "Doppler" (red/blue shift) and range extracted from an optical com link
• LASER altimetry: such as that done by LRO (LOLA)
• Earth-based astrometry: using telescopes to see the spacecraft, probably via the downlink optical com signal
• Classical optical navigation
  – Observing distant point-like objects against stars
  – Observing large near-field objects and extracting limbs or landmarks (also called Target-Relative Navigation (TRN))
  – These OpNav pictures can be processed on the ground or onboard
• AutoNav: a software set that performed classical optical navigation on Deep Space 1, Stardust and Deep Impact (as well as NExT and EPOXI)

A Brief History of Deep-Space Optical Navigation

• 1976: Viking orbiters demonstrated optical navigation using Phobos and Deimos. (Viking at Mars, 1976)
• 1978-1989: Voyager required optical navigation to meet mission objectives at Jupiter, Saturn, Uranus and Neptune. This method improved navigation performance over conventional radio by factors of 2 to 20. (Voyager 1 at Jupiter, 1980)
• 1991-1996: The Galileo mission used optical navigation to capture images of Gaspra and Ida, and to accurately achieve orbit. (Galileo at Gaspra, 1991)
• 1996-1998: Deep Space 1: optical navigation automated and adapted to onboard operation (AutoNav), used in cruise flight in 1999.
• 2000-2001: NEAR orbital operations: JPL used ground-based optical navigation to orbit and land. (NEAR at, and on, Eros, 2001)
• 2001: Deep Space 1 AutoNav captured images of comet Borrelly.
• 2002-present: Cassini optical navigation.
• 2004: Stardust AutoNav captured images of comet Wild 2.
• 2005: Deep Impact AutoNav impacted comet Tempel 1, and subsequently captured images of the impact crater.
• 2006: MRO performed demonstration navigation with the Mars Optical Navigation Camera (ONC).
• 2010: AutoNav targeting Deep Impact (EPOXI) at Hartley 2.
• 2011: AutoNav targeting Stardust/NExT at Tempel 1.
• 2011-2012: Optical nav at Vesta for Dawn.
• 2014: Optical nav at 67P/Churyumov-Gerasimenko (CG) for Rosetta (done by JPL as "shadow nav"). (Rosetta at 67P, 2014)
• 2015-2016: Optical nav at Ceres for Dawn.
• 2015: New Horizons at Pluto.

Recent Optical Navigation Experience

JPL Experience: Recent Autonomous OpNav and Autonomous Navigation Technology Successes

• DS1 AutoNav deep cruise, Sept. 1999
• DS1 AutoNav at Borrelly, Sept. 2001
• Stardust AutoNav: navigation at Annefrank and Wild 2, Nov. 2002 and Jan. 2004
• Deep Impact AutoNav at Tempel 1, July 2005
• Hayabusa imaging science: Itokawa shape model, Sept. 2005
• MRO OpNav Camera validation, Feb. 2006
• Deep Impact AutoNav: imaging Hartley 2, Nov. 4, 2010
• Stardust AutoNav: Tempel 1, Feb. 14, 2011

Using Optical Navigation and Landmark-based Navigation to Perform the Navigation at Vesta

A control network of tens of thousands of landmarks has been created to navigate Vesta on a daily basis, merging radiometric and optical data. Analysis shows that this work could be done optically only, but the inertial reference (SRU) on the Dawn s/c would need some serious analysis and modeling.

AutoNav

On July 4, 2005, AutoNav bagged the third of NASA's first three comet nuclei missions (at left), the other two being Borrelly (Sept. 2001) and Wild 2 (Nov. 2002), both also captured with AutoNav. These were followed by Hartley 2 in 2010 and a Tempel 1 revisit in 2011. AutoNav placed optical navigation elements onboard for otherwise impossible speedy turn-around of navigation operations.
AutoNav on the DI Impactor

9P/Tempel 1 at impact minus 2 hours spans 10 pixels; AutoNav begins controlling the Impactor.

Flyby AutoNav - Deep Impact Encounter

MRI & HRI cameras: 64-pixel MRI subframe (0.04º); 512-pixel HRI subframe (0.06º).

Landmarks

Navigating in proximity to a natural body using landmarks

Step 1: Survey pass or orbits, where many images are taken with multiple views of distinct topography areas on the surface. These areas are modeled, including high-precision topocentric location, and become landmarks in the Navigation Landmark Control Network. This requires accurate s/c position determination, usually via the combined use of radio and optical methods.

Step 2: Using the landmark network, the s/c observes the surface, performs precision image-location of the known terrain elements, and, by knowing the location of the elements, determines its own instantaneous position.

Step 3: The instantaneous s/c positions are combined in a navigation filter that estimates s/c dynamics (e.g. position, velocity and propulsive events) and body dynamics (e.g. gravity field components).

The accuracy of terrain-relative navigation using landmarks is potentially limited only by the resolution of the imaging system; modest cameras can give very high accuracy.
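The instantaneous position fix in Step 2 can be sketched as a small least-squares problem: each identified landmark, whose body-fixed position is known from the control network, gives a measured line-of-sight ray, and the spacecraft position is the point that best intersects all the rays. This is an illustrative sketch under assumed inputs (known camera attitude, noise-free bearings, made-up geometry), not the flight implementation.

```python
# Hedged sketch of a landmark-based position fix (Step 2 above), assuming
# camera attitude is known and landmark bearings were extracted from an
# image. Landmark positions and geometry are illustrative numbers.
import numpy as np

def position_from_landmarks(landmarks, bearings):
    """Least-squares s/c position from known landmark positions p_i and
    unit line-of-sight vectors u_i toward them (body-fixed frame).

    Each bearing constrains r to lie on the ray through p_i along u_i:
    (I - u_i u_i^T)(p_i - r) = 0. Stack and solve the normal equations.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(landmarks, bearings):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Synthetic check: three landmarks on a small-body surface (km) and the
# bearings a camera at r_true would measure.
r_true = np.array([0.0, 0.0, 50.0])
landmarks = [np.array([10.0, 0.0, 5.0]),
             np.array([-8.0, 6.0, 4.0]),
             np.array([2.0, -9.0, 6.0])]
bearings = [(p - r_true) / np.linalg.norm(p - r_true) for p in landmarks]
r_est = position_from_landmarks(landmarks, bearings)
print(np.allclose(r_est, r_true))   # ray intersection recovers the position
```

In practice these single-image fixes are not used raw; as Step 3 says, they feed a dynamic filter that also estimates velocity, propulsive events, and body parameters such as gravity field components.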
Developing Topography and Landmarks from Multiple Stereo Views and Limbs Using Stereophotoclinometry

(Figure: limb and stereo views are combined into a landmark Digital Terrain Map; shown are the local reference plane, the local coordinate system including the local normal, the center of the landmark, vectors describing the local DTM, an example 50x50-pixel observation patch, and a landmark DTM and albedo map of an asteroid.)

Process:
• Multiple images (often dozens) of the same location are combined into an estimate of a local Digital Terrain Map (DTM): a set of local altitude vectors.
• Many landmarks are observed in many images, and the locations of all landmarks can be estimated simultaneously; the global residual "stress" is removed through a numerical relaxation process.
• The amalgam of DTMs forms a global topography and shape model. Simultaneous estimation of surface reflectance also produces albedo maps.
• Simultaneous estimation of body gravity parameters and s/c orbit parameters, as well as landmark (or LMAP) locations, can be made in the landmark location estimate process.
• Solutions for all parameters can use optical and radiometric navigation data in combined solutions.

Stereophotoclinometry potentially turns the entire surface of the body into high-accuracy navigation beacons that can be used at all attitudes, ranges and lighting conditions.

OpNav at the Moon

1024x1024 km Topography Map of the Lunar South Pole Region (centered on Cabeus), Developed Using Stereophotoclinometry from Apollo, LO, Clementine and LRO Images
Map developed by Robert Gaskell; resolution 100-200 m.

Navigating at the Moon with SPC-Derived Landmarks - LCROSS

(Figure: LCROSS image, image element, and SPC model; descent image taken from ~6000 km altitude.)

(Figure: LCROSS image, image element, and SPC model; final Vis descent image taken from ~200 km altitude.)

• With 150 m resolution maps, LCROSS navigation was able to resolve positions relative to the radiometric solutions at the 50 m level.
• It is believed that this result is consistent with the ensemble map