Automatic Extraction of Planetary Image Features and Multi-Sensor Image Registration (GSC-TOPS-7)
A method for the extraction of Lunar data and other planetary features
Overview
Many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, but these methods are not always applicable to Lunar data, which often exhibit low contrast and uneven illumination. The boundaries of Lunar features are not always well defined, making Lunar images difficult to segment and characterize. With the large quantity of new Lunar data to be collected in the coming years, an automated method is needed to extract these features and to perform tasks such as image registration. This technology can be generalized to commercial applications with similar constraints, such as medical imaging, where low contrast and uneven illumination often pose problems.

The Technology
NASA's Goddard Space Flight Center has developed a method to extract Lunar and other planetary features by combining several image processing techniques. The technology was developed to register images from multiple sensors and to extract features from images under low-contrast and uneven-illumination conditions. The image processing and registration techniques can include, but are not limited to, watershed segmentation, marked point processes, graph cut algorithms, wavelet transforms, multiple birth and death algorithms, and the generalized Hough transform.
Figure: Feature extraction from data collected during the Mars Global Surveyor mission. The original image (a), the closed-contour features (b), and the elliptic-shape features (c) are shown.
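The patent does not publish reference code, but one of the techniques named above, watershed segmentation preceded by an illumination correction, can be sketched in a few lines. The sketch below is a minimal illustration using SciPy on a synthetic image (two dim "craters" under a lighting ramp); the flat-field step, thresholds, and filter sizes are illustrative assumptions, not NASA's implementation.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic low-contrast scene: two dim "craters" under an illumination ramp.
yy, xx = np.mgrid[0:128, 0:128]
craters = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0)
           + np.exp(-((yy - 90) ** 2 + (xx - 85) ** 2) / 300.0))
image = 0.3 * craters + xx / 512.0            # weak features + uneven lighting

# 1) Flat-field correction: subtract a heavily smoothed background estimate.
background = ndi.gaussian_filter(image, sigma=20)
corrected = image - background
corrected = (corrected - corrected.min()) / (np.ptp(corrected) + 1e-12)

# 2) Threshold the corrected image and seed markers at distance-map maxima.
mask = corrected > 0.7
distance = ndi.distance_transform_edt(mask)
local_max = (distance == ndi.maximum_filter(distance, size=11)) & mask
markers, n_markers = ndi.label(local_max)
markers[~mask] = -1                           # negative marker = background

# 3) Watershed on the inverted distance map.
elevation = (255 * (1.0 - distance / distance.max())).astype(np.uint8)
labels = ndi.watershed_ift(elevation, markers.astype(np.int32))
labels[labels == -1] = 0                      # background back to 0
print("segmented regions:", n_markers)
```

The flat-field subtraction stands in for whatever illumination model the actual pipeline uses; the point is that the watershed markers are seeded only after the lighting gradient has been removed.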
Benefits
  • Can analyze images with low contrast and uneven illumination characteristics.
  • Designed for extraction of Lunar features, but can be generalized for any imaging system.
  • Provides accurate registration of multi-temporal, multi-sensor, and multi-view images.

Applications
  • Terrain mapping as a supplement to existing feature extraction methods.
  • Military synthetic-aperture radar (SAR) images.
  • Medical imaging.
  • Autonomous vehicles.
Technology Details

information technology and software
GSC-TOPS-7
GSC-15730-1, GSC-17910-1, GSC-17911-1, GSC-17910-2
8355579
Similar Results
Computational Visual Servo
The innovation improves upon the performance of passive automatic enhancement of digital images. Specifically, it improves the resulting contrast, lightness, and sharpness over prior automatic processing methods. The innovation brings active measurement and control to bear on the basic problem of enhancing a digital image by defining absolute measures of visual contrast, lightness, and sharpness, then automatically applying the type and degree of enhancement needed based on automated image analysis. The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual measurement computation and a servo-controlled enhancement effect. The cycle repeats until the servo achieves acceptable scores for the visual measures or decides that it has enhanced the image as much as is possible or advantageous. The servo-control bypasses images that it determines need no enhancement. The system determines experimentally how much absolute sharpening can be applied before encountering detrimental sharpening artifacts; these stop decisions are triggered when further contrast or lightness enhancement would produce unacceptable levels of saturation, signal clipping, or sharpening artifacts. The invention was developed to provide completely new capabilities for exceeding pilot visual performance by automatically clarifying turbid, low-light, and extremely hazy images for pilot view on head-up or head-down displays during critical flight maneuvers.
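The feedback loop described above (measure, enhance, re-measure, stop on an acceptable score or diminishing returns) can be sketched as follows. This is a minimal illustration, not the invention's actual visual measures or control law: contrast_measure, enhance_step, and the target score are assumed stand-ins.

```python
import numpy as np

def contrast_measure(img):
    """Simple absolute contrast measure: RMS deviation from the mean."""
    return float(np.sqrt(np.mean((img - img.mean()) ** 2)))

def enhance_step(img, gain=1.1):
    """One servo step: stretch contrast about the mean, clipped to [0, 1]."""
    return np.clip((img - img.mean()) * gain + img.mean(), 0.0, 1.0)

def visual_servo(img, target=0.2, max_iters=20):
    """Feedback loop: measure, enhance, repeat until the score is acceptable
    or further enhancement stops helping (a stop decision)."""
    for i in range(max_iters):
        score = contrast_measure(img)
        if score >= target:                      # acceptable score: stop
            return img, i
        stepped = enhance_step(img)
        if contrast_measure(stepped) <= score:   # no further gain: stop
            return img, i
        img = stepped
    return img, max_iters

rng = np.random.default_rng(0)
hazy = 0.5 + 0.05 * rng.standard_normal((64, 64))   # low-contrast "hazy" image
out, iters = visual_servo(hazy)
print(iters, round(contrast_measure(out), 3))
```

An image whose measured contrast already meets the target passes through untouched, mirroring the servo's bypass behavior for images that need no enhancement.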
Hierarchical Image Segmentation (HSEG)
Currently, HSEG software is being used by Bartron Medical Imaging as a diagnostic tool to enhance medical imagery. Bartron Medical Imaging licensed the HSEG technology from NASA Goddard, added color enhancement, and developed MED-SEG, an FDA-approved tool that helps specialists interpret medical images. HSEG is available for licensing outside of the medical field (specifically for soft-tissue analysis).
On February 11, 2013, the Landsat 8 satellite rocketed into a sunny California morning onboard a powerful Atlas V and began its life in orbit. In the year since launch, scientists have been working to understand the information the satellite has been sending back.
Update of the Three Dimensional Version of RHSeg and HSeg
Image data is subdivided into overlapping sections. Each image subsection has extra pixels forming an overlapping seam along the x, y, and z axes. The region labels in these overlapping seams are examined, and a histogram is created of the correspondence between region labels across image subsections. A region from one subsection is then merged with a region from another subsection when a specific criterion is met. This innovation's use of slightly overlapping processing windows eliminates processing-window artifacts. The approach can be used by any image segmentation or clustering method that first operates on subsections of data and later combines the results from the subsections to produce a result for the combined image.
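A minimal sketch of the seam-merging step, under assumed simplifications: two subsections have labeled the same seam independently, and a label pair is merged when it accounts for at least half of one region's seam pixels. The exact merge criterion in the innovation is not specified here; min_overlap is an illustrative stand-in.

```python
from collections import Counter
import numpy as np

def merge_across_seam(labels_a, labels_b, min_overlap=0.5):
    """Merge region labels from two subsections using their overlapping seam.

    labels_a, labels_b: label maps of the SAME seam region, as produced
    independently by segmenting each overlapping subsection (0 = background).
    Returns a dict mapping labels of B onto labels of A when the pair
    co-occurs on at least `min_overlap` of B's seam pixels.
    """
    # Histogram of label correspondences across the seam.
    pairs = Counter(zip(labels_a.ravel(), labels_b.ravel()))
    b_sizes = Counter(labels_b.ravel())
    mapping = {}
    for (a, b), count in pairs.items():
        if a == 0 or b == 0:
            continue
        if count / b_sizes[b] >= min_overlap:   # merge criterion met
            mapping[int(b)] = int(a)
    return mapping

# Seam as labeled by subsection A and by subsection B: same regions,
# different (arbitrary) label numbers, plus slight disagreement at edges.
seam_a = np.array([[1, 1, 0, 2],
                   [1, 1, 0, 2],
                   [0, 0, 0, 2]])
seam_b = np.array([[7, 7, 0, 9],
                   [7, 7, 9, 9],
                   [0, 0, 0, 9]])
print(merge_across_seam(seam_a, seam_b))   # → {7: 1, 9: 2}
```

Note that the single pixel where the two labelings disagree does not block the merge, which is the point of using a correspondence histogram rather than requiring exact agreement along the seam.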
The Apollo 11 Lunar Module Eagle, in a landing configuration, was photographed in lunar orbit from the Command and Service Module Columbia.
eVTOL UAS with Lunar Lander Trajectory
This NASA-developed eVTOL UAS is a purpose-built, electric, reusable aircraft with rotor/propeller thrust only, designed to fly trajectories with high similarity to those flown by lunar landers. The vehicle has the unique capability to transition into wing-borne flight to simulate the cross-range, horizontal approaches of lunar landers. During the transition to wing-borne flight, the initial transition favors a traditional airplane configuration with the propellers in the front and smaller surfaces in the rear, allowing the vehicle to reach high speeds. After achieving wing-borne flight, however, the vehicle can transition to wing-borne flight in the opposite (canard) direction. During this mode of operation, the vehicle is controllable, and the propellers can be powered or unpowered. This NASA invention can also decelerate rapidly during the descent phase (again to simulate lunar lander trajectories). Such rapid deceleration is required to reduce vehicle velocity so the propellers can be turned back on without stalling the blades or catching the propeller vortex. The UAS also has the option of using variable-pitch blades, which can contribute to the overall controllability of the aircraft and reduce the likelihood of stalling the blades during the deceleration phase. In addition to testing EDL sensors and precision landing payloads, NASA's innovative eVTOL UAS could be used in applications where fast, precise, and stealthy delivery of payloads to specific ground locations is required, including military applications. This concept of operations could entail deploying the UAS from a larger aircraft.
Anonymous Feature Processing for Enhanced Navigation
This concept presents a new statistical likelihood function and Bayesian analysis update for non-standard measurement types that rely on associations between observed and cataloged features. These measurement types inherently contain non-standard errors that standard techniques, such as the Kalman filter, make no effort to model, and this mismodeling can lead to filter instability and degraded performance. Vision-based navigation methods utilizing the Kalman filter involve a preprocessing step to identify features within an image by referencing a known catalog; errors in this preprocessing can cause navigation failures. Anonymous Feature Processing (AFP) offers a new approach, processing points generated by the features themselves without requiring identification. Points such as range or bearing are directly processed by AFP. Operating on finite set statistics principles, AFP treats data as sets rather than individual features. This enables simultaneous tracking of multiple targets without feature labeling. Unlike the sequential processing of the Kalman filter, AFP processes updates in parallel, independently scoring each output based on rigorous mathematical functions. This parallel processing ensures robust navigation updates in dynamic environments, without requiring an identification algorithm upstream of the filter. Computational simulations conducted at Johnson Space Center demonstrate that AFP's performance matches or exceeds that of the ideal Kalman filter, even under non-ideal conditions. Anonymous Feature Processing for Enhanced Navigation is at technology readiness level (TRL) 4 (component and/or breadboard validation in laboratory environment) and is now available for patent licensing. Please note that NASA does not manufacture products itself for commercial sale.
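A toy illustration of the set-based idea, under heavy assumed simplifications: unlabeled 2-D point detections are scored against candidate state hypotheses with an association-free likelihood, so no feature-identification step precedes the update. This is not AFP's actual finite-set-statistics machinery; set_likelihood and the candidate scoring below are illustrative.

```python
import numpy as np

def set_likelihood(predicted, observed, sigma=0.5):
    """Association-free likelihood of an unlabeled set of 2-D point
    measurements: each observed point contributes the SUM of Gaussian
    likelihoods over all predicted features, so no data association
    (feature identification) is needed before the update."""
    lik = 1.0
    for z in observed:
        d2 = np.sum((predicted - z) ** 2, axis=1)
        lik *= np.sum(np.exp(-d2 / (2 * sigma ** 2))) / len(predicted)
    return lik

# Candidate vehicle position offsets (e.g., filter hypotheses), each
# shifting the same map of predicted feature points.
features = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
observed = features + 0.1                 # unlabeled detections, slight noise
candidates = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]

# Score each candidate independently, in the spirit of AFP's parallel
# scoring; the best hypothesis wins without any labeling step.
scores = [set_likelihood(features + c, observed) for c in candidates]
best = int(np.argmax(scores))
print("best candidate:", best)   # the zero-offset hypothesis
```

Because each observation is compared against the whole predicted set, a mislabeled or ambiguous detection cannot derail the update the way a wrong catalog match can in a Kalman-filter preprocessing step.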