Anonymous Feature Processing for Enhanced Navigation (MSC-TOPS-129)
Robust feature recognition and guidance for autonomous vehicles
Overview
Innovators at NASA Johnson Space Center have developed a new algorithmic and computational approach to vision-based feature recognition called Anonymous Feature Processing (AFP). The ‘anonymous’ approach allows feature-based navigation techniques to be performed without explicit correspondence or identification between visual system observations and cataloged map data, eliminating the costs and risks introduced by identification procedures.
By eliminating the error-prone and computationally burdensome identification and detection steps, AFP is designed to yield marked improvements in system robustness while reducing algorithmic and software development costs. The method requires only a simple camera or lidar sensor and a flight computer to track multiple targets and navigate vehicles more quickly, reliably, and safely.
The AFP approach is adaptable to a wide range of sensor types and platforms, and can support challenging space exploration and terrestrial navigation systems operating in low-visibility conditions or cluttered surroundings. By eliminating preprocessing while adding system robustness, AFP could find commercial applications in autonomous vehicles, manufacturing, and research-based imaging.
The Technology
This concept presents a new statistical likelihood function and Bayesian measurement update for non-standard measurement types that rely on associations between observed and cataloged features. Such measurements inherently contain non-standard errors that conventional techniques, such as the Kalman filter, make no effort to model, and this mismodeling can lead to filter instability and degraded performance.
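In schematic terms, an anonymous likelihood scores the measurement set against the state by marginalizing over the unknown correspondences rather than committing to one of them. The form below is illustrative notation only, not the exact construction from the publications cited under Technology Details; here Z = {z_1, ..., z_m} is the set of anonymous measurements, h_k is the measurement model of cataloged feature k, R is the measurement noise covariance, and θ ranges over candidate associations:

```latex
p(Z \mid x) \;\propto\; \sum_{\theta \in \Theta} \; \prod_{j=1}^{m}
  \mathcal{N}\!\left( z_j ;\, h_{\theta(j)}(x),\, R \right)
```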
Vision-based navigation methods built around the Kalman filter rely on a preprocessing step that identifies features within an image by referencing a known catalog, and errors in this preprocessing can cause navigation failures. AFP offers a new approach, processing the points generated by the features themselves, such as range or bearing measurements, without requiring identification.
Operating on the principles of finite set statistics, AFP treats data as sets rather than as individual features, enabling simultaneous tracking of multiple targets without feature labeling. Unlike the sequential processing of the Kalman filter, AFP processes updates in parallel, independently scoring each output with rigorously derived mathematical functions. This parallel processing yields robust navigation updates in dynamic environments without requiring an identification algorithm upstream of the filter.
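To make the idea concrete, the sketch below implements a simplified, association-free range update in Python: one anonymous range measurement is scored against every cataloged feature in parallel, and the resulting hypotheses are fused by likelihood weighting. This is a minimal illustration in the spirit of AFP (closer to a probabilistic-data-association update than to the full finite-set-statistics machinery), and all names, dimensions, and values are assumptions, not NASA's implementation:

```python
# Minimal sketch, NOT NASA's algorithm: one anonymous range measurement
# updated against a 2D feature catalog, with hypotheses fused by weight.
import numpy as np

def afp_style_update(mu, P, z, catalog, sigma_r):
    """mu, P: Gaussian prior on 2D vehicle position; z: one anonymous
    range measurement; catalog: (N, 2) mapped feature positions."""
    d = catalog - mu                        # (N, 2) offsets to each feature
    r = np.linalg.norm(d, axis=1)           # predicted range per hypothesis
    H = -d / r[:, None]                     # Jacobians of ||f - x|| w.r.t. x
    S = np.einsum('ni,ij,nj->n', H, P, H) + sigma_r**2   # innovation vars
    nu = z - r                              # innovation per hypothesis
    w = np.exp(-0.5 * nu**2 / S) / np.sqrt(2 * np.pi * S)
    w /= w.sum()                            # normalized hypothesis weights
    K = (P @ H.T) / S                       # (2, N) Kalman gain per column
    mus = mu[:, None] + K * nu              # posterior mean per hypothesis
    mu_new = (mus * w).sum(axis=1)          # moment-matched fused mean
    P_new = np.zeros_like(P)                # fused covariance: within- plus
    for n in range(len(catalog)):           # between-hypothesis spread
        Pn = (np.eye(2) - np.outer(K[:, n], H[n])) @ P
        e = mus[:, n] - mu_new
        P_new += w[n] * (Pn + np.outer(e, e))
    return mu_new, P_new
```

Each cataloged feature spawns one hypothesis here; a full finite-set-statistics treatment also models clutter, missed detections, and multiple simultaneous measurements, which this sketch omits.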
Computational simulations conducted at Johnson Space Center demonstrate that AFP's performance matches or exceeds that of the ideal Kalman filter, even under non-ideal conditions. Anonymous Feature Processing for Enhanced Navigation is at a technology readiness level (TRL) 4 (component and/or breadboard validation in laboratory environment) and is now available for patent licensing. Please note that NASA does not manufacture products itself for commercial sale.
Benefits
- Capable of tracking multiple unidentified targets
- Compatible with existing navigation architecture
- Facilitates robust real-time computing
- Reduced sensitivity to false identifications
- Requires only a single camera and flight computer
Applications
- Commercial Space: precision navigation for rendezvous, docking, and lunar landing
- Autonomous Vehicles: driverless car systems
- Manufacturing: vision-based quality control systems
- Medical Imaging: feature recognition for diagnosis and treatment
- Drone Navigation: optimized performance in degraded environments
Technology Details
Robotics Automation and Control
MSC-TOPS-129
MSC-26666-1
McCabe, J.S. and DeMars, K.J. "Anonymous Feature-Based Terrain Relative Navigation." Journal of Guidance, Control, and Dynamics, Vol. 43, No. 3, March 2020 (published online August 30, 2019). https://arc.aiaa.org/doi/10.2514/1.G004423
McCabe, J.S. and DeMars, K.J. "Anonymous Feature Processing for Efficient Onboard Navigation." AIAA SciTech 2020 Forum, January 2020. https://arc.aiaa.org/doi/abs/10.2514/6.2020-0598
Similar Results
FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm
Flashpose is the combination of software written in C and FPGA firmware written in VHDL. It is designed to run under the Linux OS environment in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by manipulating the point cloud's rotation and translation. This procedure is repeated for a single image until a predefined mean-square-error metric is met, at which point the process repeats for a new image.
The rotation and translation operations performed on the point cloud represent an estimate of relative attitude and position, otherwise known as pose.
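The core loop described above can be sketched compactly. The Python version below is purely illustrative (the flight implementation is C and VHDL) and uses a standard SVD-based rigid-alignment step; function and variable names are assumptions:

```python
# Point-to-point ICP sketch in the spirit of Besl & McKay; illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def icp(cloud, model, iters=50, tol=1e-6):
    """cloud: (M, 3) sensed points; model: (N, 3) target model points."""
    R, t = np.eye(3), np.zeros(3)          # pose estimate: rotation, translation
    tree = cKDTree(model)
    prev_err = np.inf
    for _ in range(iters):
        moved = cloud @ R.T + t            # apply current pose estimate
        dist, idx = tree.query(moved)      # closest model point per sensed point
        src, dst = moved, model[idx]
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))   # Kabsch/SVD step
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                # best-fit incremental rotation
        dt = dc - dR @ sc                  # and translation
        R, t = dR @ R, dR @ t + dt         # compose with running estimate
        err = np.mean(dist**2)             # mean-square-error convergence test
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t                            # relative attitude and position (pose)
```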
In addition to 6 degree of freedom (DOF) pose estimation, Flashpose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that generates a configurable histogram of range information, and analyzes characteristics of the histogram to produce the range and bearing estimate. This can be generated quickly and provides valuable information for seeding the Flashpose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
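A hypothetical version of that histogram seed might look like the following, where the dominant range bin supplies the coarse range and the pixel centroid of its contributing returns supplies the bearing; the binning scheme and field-of-view handling are assumptions:

```python
# Illustrative histogram-based range/bearing seed; not the flight code.
import numpy as np

def range_bearing_seed(range_img, fov_x, fov_y, bins=64):
    valid = range_img > 0                          # drop empty returns
    hist, edges = np.histogram(range_img[valid], bins=bins)
    k = hist.argmax()                              # dominant range bin
    rng = 0.5 * (edges[k] + edges[k + 1])          # coarse range estimate
    in_bin = valid & (range_img >= edges[k]) & (range_img < edges[k + 1])
    rows, cols = np.nonzero(in_bin)
    h, w = range_img.shape                         # pixel centroid -> angles
    az = (cols.mean() / w - 0.5) * fov_x           # bearing, horizontal
    el = (rows.mean() / h - 0.5) * fov_y           # bearing, vertical
    return rng, az, el
```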
3D Lidar for Autonomous Landing Site Selection
Aerial planetary exploration spacecraft require lightweight, compact, and low-power sensing systems to enable successful landing operations. The Ocellus 3D lidar meets those criteria while also withstanding harsh planetary environments. Further, the new tool is based on space-qualified components and lidar technology previously developed at NASA Goddard (i.e., the Kodiak 3D lidar).
The Ocellus 3D lidar quickly scans a near-infrared laser across a planetary surface, receives the return signal, and translates it into a 3D point cloud. Using a laser source, fast-scanning MEMS (micro-electromechanical system)-based mirrors, and NASA-developed processing electronics, the 3D point clouds are created and converted into elevations and images onboard the craft. At ~2 km altitudes, Ocellus acts as an altimeter; at altitudes below 200 m, the tool produces images and terrain maps. The resulting high-resolution (centimeter-scale) elevations are used by the spacecraft to assess safe landing sites.
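The geometric core of that conversion is straightforward to sketch: each return's scan angles and measured range define a Cartesian point, and the points are gridded into an elevation map. The angle conventions, grid scheme, and function names below are assumptions, not Ocellus's actual processing:

```python
# Illustrative scan-to-elevation geometry for a nadir-pointing scanner.
import numpy as np

def scan_to_elevation(az, el, rng, cell=0.1):
    """az, el: (K,) mirror scan angles in rad; rng: (K,) ranges in m."""
    x = rng * np.sin(az) * np.cos(el)        # cross-track offset
    y = rng * np.sin(el)                     # along-track offset
    z = -rng * np.cos(az) * np.cos(el)       # height relative to sensor
    i = np.floor(x / cell).astype(int)       # grid indices
    j = np.floor(y / cell).astype(int)
    i -= i.min(); j -= j.min()
    elev = np.full((i.max() + 1, j.max() + 1), -np.inf)
    np.fmax.at(elev, (i, j), z)              # keep highest return per cell
    elev[np.isinf(elev)] = np.nan            # mark empty cells
    return elev                              # elevation map for site assessment
```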
The Ocellus 3D lidar is applicable to planetary and lunar exploration by unmanned or crewed aerial vehicles and may be adapted for assisting in-space servicing, assembly, and manufacturing operations. Beyond exploratory space missions, the new compact 3D lidar may be used for aerial navigation in the defense or commercial space sectors. The Ocellus 3D lidar is available for patent licensing.
Airborne Machine Learning Estimates for Local Winds and Kinematics
The MAchine learning ESTimations for uRban Operations (MAESTRO) system is a novel approach that couples commodity sensors with advanced algorithms to provide real-time onboard local wind and kinematics estimations to a vehicle's guidance and navigation system. Sensors and computations are integrated in a novel way to predict local winds and promote safe operations in dynamic urban regions where Global Positioning System/Global Navigation Satellite System (GPS/GNSS) and other network communications may be unavailable or difficult to obtain when surrounded by tall buildings due to multi-path reflections and signal diffusion. The system can be implemented onboard an Unmanned Aerial System (UAS), and once airborne, the system does not require communication with an external data source or GPS/GNSS. Estimations of the local winds (speed and direction) are created using inputs from onboard sensors that scan the local building environment. This information can then be used by the onboard guidance and navigation system to determine safe and energy-efficient trajectories for operations in urban and suburban settings. The technology is robust to dynamic environments, input noise, missing data, and other uncertainties, and has been demonstrated successfully in lab experiments and computer simulations.
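The kinematic relationship underlying any onboard wind estimate of this kind is the wind triangle: the wind vector is the difference between the vehicle's ground velocity and its air-relative velocity. The sketch below shows only that basic relation; MAESTRO's actual machine-learning fusion of building-scan inputs is not detailed in this listing, and the sensor inputs and names here are assumptions:

```python
# Wind-triangle sketch (w = v_ground - v_air); illustrative only.
import numpy as np

def wind_estimate(v_ground, airspeed, heading):
    """v_ground: (2,) east/north ground velocity (m/s), e.g. from inertial
    or visual odometry; airspeed: indicated airspeed (m/s); heading: rad."""
    v_air = airspeed * np.array([np.sin(heading), np.cos(heading)])
    wind = v_ground - v_air                   # wind triangle
    speed = np.linalg.norm(wind)
    direction = np.arctan2(wind[0], wind[1])  # rad from north (blowing toward)
    return speed, direction
```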
Computer Vision Lends Precision to Robotic Grappling
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of the grapple fixtures relative to the robotic arm's end effectors. To solve this Perspective-n-Point challenge, the software uses computer vision algorithms to determine alignment solutions between the position of the camera eyepoint and the position of the end effector, since the borescope camera sensors are typically located several centimeters from their respective end effector grasping mechanisms.
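Perspective-n-Point solvers recover a camera-relative pose from known 3D points and their 2D image detections. The snippet below shows the generic problem with OpenCV's standard solvePnP; the fixture geometry, pixel detections, and camera intrinsics are placeholders, not ISS data, and this is not NASA's implementation:

```python
# Generic Perspective-n-Point example using OpenCV; all values are assumed.
import cv2
import numpy as np

# 3D corners of a hypothetical fixture target in its own frame (meters)
obj_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]],
                   dtype=np.float64)
# Matching 2D detections in the camera image (pixels, assumed)
img_pts = np.array([[322, 241], [398, 239], [400, 316], [324, 318]],
                   dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation: fixture frame -> camera frame
print(ok, tvec.ravel())      # fixture position in the camera frame
```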
The software includes a machine learning component that uses a trained region-based Convolutional Neural Network (R-CNN) to analyze a live camera feed and determine ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations.
Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications, outside of controlled environments, may also find value in the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit.
This computer vision software is at a technology readiness level (TRL) 6 (system/sub-system model or prototype demonstration in an operational environment), and the software is now available to license. Please note that NASA does not manufacture products itself for commercial sale.
3D Lidar for Improved Rover Traversal and Imagery
The SQRLi system is made up of three major components: the laser assembly, the mirror assembly, and the electronics and data processing equipment (electronics assembly). The three main systems work together to send and receive the lidar signal and then translate it into a 3D image for navigation and imaging purposes.
The rover sensing instrument makes use of a unique fiber optic laser assembly with high, adjustable output that increases the dynamic range (i.e., contrast) of the lidar system. The commercially available mirror setup used in the SQRLi is small, reliable, and has a wide aperture that improves the field-of-view of the lidar while maintaining a small instrument footprint. Lastly, the data processing is done by an in-house designed processor capable of translating the light signal into a high-resolution (sub-millimeter) 3D map. These components of the SQRLi enable successful hazard detection and navigation in visibility-impaired environments.
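As a loose illustration of the hazard-detection step such a map enables, the sketch below flags grid cells whose local slope or roughness exceeds safe-traversal limits. The metrics, thresholds, and names are hypothetical and are not drawn from the SQRLi design:

```python
# Hypothetical hazard masking over a gridded elevation map; not SQRLi code.
import numpy as np
from scipy.ndimage import uniform_filter

def hazard_mask(elev, cell=0.01, max_slope_deg=20.0, max_rough=0.02):
    """elev: 2D elevation grid (m), assumed gap-free; cell: spacing (m)."""
    gy, gx = np.gradient(elev, cell)                # elevation gradients
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    rough = np.abs(elev - uniform_filter(elev, size=3))   # local deviation
    return (slope > max_slope_deg) | (rough > max_rough)  # True = hazard cell
```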
The SQRLi is applicable to planetary and lunar exploration by unmanned or crewed vehicles and may be adapted for in-space servicing, assembly, and manufacturing purposes. Beyond NASA missions, the new 3D lidar may be used for vehicular navigation in the automotive, defense, or commercial space sectors. The SQRLi is available for patent licensing.