Hierarchical Image Segmentation (HSEG) (GSC-TOPS-14)
Enhancing image processing using Earth imaging software from NASA
Overview
Hierarchical Image Segmentation (HSEG) software was originally developed to enhance and analyze images such as those taken of Earth from space by NASA's Landsat and Terra missions. HSEG analyzes single-band, multispectral, or hyperspectral image data and can process any image with a resolution up to 8,000 x 8,000 pixels. It first groups pixels with similar characteristics into regions and then progressively merges regions based on their similarity, whether the regions are spatially adjacent or disjoint, producing a hierarchy of segmentations at different levels of detail. The software is accompanied by HSEGViewer, a companion visualization and segmentation-selection tool that can be used to highlight and select data points from particular regions.
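The region-merging idea can be illustrated with a short, self-contained sketch. The code below is not NASA's HSEG implementation; it is a minimal Python illustration in which every pixel starts as its own region and the most spectrally similar pair of regions is repeatedly merged, regardless of spatial adjacency, until a target region count is reached. The function name, the mean-based similarity measure, and the brute-force pair search are illustrative choices.

```python
# Minimal sketch of HSEG-style hierarchical region merging (illustrative only,
# not NASA's implementation). Regions are repeatedly merged by spectral
# similarity, whether or not they touch, and each merge is recorded so a
# hierarchy of segmentations can be recovered afterwards.
import numpy as np

def hseg_sketch(image, n_final_regions=4):
    """image: 2-D array of pixel values. Returns a label map and merge history."""
    h, w = image.shape
    labels = np.arange(h * w).reshape(h, w)           # every pixel starts as a region
    means = {r: float(v) for r, v in zip(labels.ravel(), image.ravel())}
    sizes = {r: 1 for r in means}
    history = []                                      # (regions remaining, kept id, merged id)

    while len(means) > n_final_regions:
        # Find the most similar pair of regions (spectral distance of region means).
        ids = list(means)
        a_best, b_best, d_best = None, None, np.inf
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                d = abs(means[a] - means[b])
                if d < d_best:
                    a_best, b_best, d_best = a, b, d
        # Merge b into a: update the mean, the size, and the label map.
        tot = sizes[a_best] + sizes[b_best]
        means[a_best] = (means[a_best] * sizes[a_best] +
                         means[b_best] * sizes[b_best]) / tot
        sizes[a_best] = tot
        labels[labels == b_best] = a_best
        del means[b_best], sizes[b_best]
        history.append((len(means), a_best, b_best))
    return labels, history

if __name__ == "__main__":
    img = np.array([[0, 0, 9, 9],
                    [0, 1, 9, 8],
                    [5, 5, 2, 2],
                    [5, 4, 2, 1]], dtype=float)
    seg, merges = hseg_sketch(img, n_final_regions=3)
    print(seg)
```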

The Technology
Currently, HSEG software is being used by Bartron Medical Imaging as a diagnostic tool to enhance medical imagery. Bartron Medical Imaging licensed the HSEG technology from NASA Goddard, added color enhancement, and developed MED-SEG, an FDA-approved tool that helps specialists interpret medical images. HSEG remains available for licensing outside of the medical field (specifically, outside soft-tissue analysis).
Image: Original mammogram before MED-SEG processing (left) and the same mammogram after MED-SEG processing, with the region of interest labeled in white (right). Credit: Bartron Medical Imaging.
Benefits
  • Faster than competing software
  • Improves analytical capabilities, with increased speed over the state of the art
  • Refined results, maximum flexibility and control
  • User-friendly GUI

Applications
  • Image pre-processing (specifically, segmentation)
  • Image data mining
  • Crop monitoring
  • Medical image analysis enhancement (mammography, X-ray, CT, MRI, and ultrasound)
  • Facial recognition
Technology Details

Category: Information technology and software
Reference: GSC-TOPS-14
Case Numbers: GSC-14305-1, GSC-16024-1, GSC-16250-1, GSC-14994-1
Patent Numbers: 6,895,115; 8,526,733; 7,697,759
Similar Results
Image: The Landsat 8 satellite launched on February 11, 2013, aboard an Atlas V rocket; in the year since launch, scientists have been working to understand the information the satellite has been sending back.
Update of the Three Dimensional Version of RHSeg and HSeg
Image data is subdivided into overlapping sections. Each image subsection includes extra pixels that form an overlapping seam with its neighbors along the x, y, and z axes. The region labels within these overlapping seams are examined, and a histogram is built of the correspondence between region labels across image subsections. A region from one subsection is then merged with a region from another subsection when a specified criterion is met. This innovation's use of slightly overlapping processing windows eliminates processing-window artifacts. The approach can be used by any image segmentation or clustering method that first operates on subsections of data and later combines the results from the subsections to produce a result for the combined image.
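The seam-based merge step can be sketched as follows. This is an illustrative example, not the RHSeg/HSeg source code: it assumes two subsections were segmented independently, takes their label maps over the shared seam pixels, builds the correspondence histogram, and merges a label pair when it covers more than a chosen fraction of a region's seam pixels (the specific criterion used here is an assumption).

```python
# Illustrative sketch of the seam-based merge step described above (not the
# RHSeg/HSeg source). Two subsections were segmented independently; their
# label maps overlap along a seam. Co-occurring labels in the seam are counted,
# and a pair is merged when it accounts for most of a region's seam pixels.
from collections import Counter
import numpy as np

def seam_merges(seam_a, seam_b, threshold=0.5):
    """seam_a, seam_b: label arrays covering the SAME seam pixels,
    taken from subsection A and subsection B respectively.
    Returns a list of (label_in_A, label_in_B) pairs to merge."""
    pairs = Counter(zip(seam_a.ravel(), seam_b.ravel()))   # correspondence histogram
    count_a = Counter(seam_a.ravel())
    merges = []
    for (la, lb), n in pairs.items():
        # Merge criterion (assumed for this sketch): the pair covers more than
        # `threshold` of region la's pixels inside the seam.
        if n / count_a[la] > threshold:
            merges.append((int(la), int(lb)))
    return merges

if __name__ == "__main__":
    seam_a = np.array([[1, 1, 2, 2],
                       [1, 1, 2, 2]])
    seam_b = np.array([[7, 7, 8, 8],
                       [7, 7, 8, 8]])
    print(seam_merges(seam_a, seam_b))   # [(1, 7), (2, 8)]
```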
Multispectral Imaging, Detection, and Active Reflectance (MiDAR)
The MiDAR transmitter emits coded narrowband structured illumination to generate high-frame-rate multispectral video, perform real-time radiometric calibration, and provide a high-bandwidth simplex optical data-link under a range of ambient irradiance conditions, including darkness. A theoretical framework, based on unique color band signatures, is developed for multispectral video reconstruction and optical communications algorithms used on MiDAR transmitters and receivers. Experimental tests demonstrate a 7-channel MiDAR prototype consisting of an active array of multispectral high-intensity light-emitting diodes (MiDAR transmitter) coupled with a state-of-the-art, high-frame-rate NIR computational imager, the NASA FluidCam NIR, which functions as a MiDAR receiver. A 32-channel instrument is currently in development. Preliminary results confirm efficient, radiometrically-calibrated, high signal-to-noise ratio (SNR) active multispectral imaging in 7 channels from 405-940 nm at 2048x2048 pixels and 30 Hz. These results demonstrate a cost-effective and adaptive sensing modality, with the ability to change color bands and relative intensities in real-time, in response to changing science requirements or dynamic scenes. Potential applications of MiDAR include high-resolution nocturnal and diurnal multispectral imaging from air, space and underwater environments as well as long-distance optical communication, bidirectional reflectance distribution function characterization, mineral identification, atmospheric correction, UV/fluorescent imaging, 3D reconstruction using Structure from Motion (SfM), and underwater imaging using Fluid Lensing. Multipurpose sensors, such as MiDAR, which fuse active sensing and communications capabilities, may be particularly well-suited for mass-limited robotic exploration of Earth and the solar system and represent a possible new generation of instruments for active optical remote sensing.
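As a rough illustration of reconstruction from known color band signatures (not the MiDAR algorithms themselves), the sketch below assumes each receiver frame records the scene under a known mixture of the transmitter's bands and recovers per-band images pixel-wise by least squares; the array shapes and the simulated data are invented for the example.

```python
# Illustrative sketch (not MiDAR's algorithms): if each high-frame-rate receiver
# frame records the scene under a known mixture of the transmitter's color
# bands, per-band reflectance can be recovered pixel-wise by least squares.
import numpy as np

def reconstruct_multispectral(frames, signatures):
    """frames: (T, H, W) stack of receiver frames.
    signatures: (T, C) known illumination weight of each of C color bands in
    each frame (assumed known from the transmitter's coding scheme).
    Returns (C, H, W) estimated per-band images."""
    T, H, W = frames.shape
    A = signatures                              # (T, C) mixing matrix
    Y = frames.reshape(T, -1)                   # (T, H*W) observations
    X, *_ = np.linalg.lstsq(A, Y, rcond=None)   # solve A @ X ≈ Y for the band images
    return X.reshape(-1, H, W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((3, 8, 8))                       # 3 "bands", 8x8 scene
    sig = rng.random((7, 3))                            # 7 coded frames
    frames = np.tensordot(sig, truth, axes=(1, 0))      # simulated observations
    est = reconstruct_multispectral(frames, sig)
    print(np.allclose(est, truth, atol=1e-8))           # True
```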
Computational Visual Servo
The innovation improves upon the performance of passive automatic enhancement of digital images. Specifically, the image enhancement process is improved in terms of resulting contrast, lightness, and sharpness over prior automatic processing methods. The innovation brings the technique of active measurement and control to bear on the basic problem of enhancing a digital image by defining absolute measures of visual contrast, lightness, and sharpness. This is accomplished by automatically applying the type and degree of enhancement needed, based on automated image analysis. The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual measurement computation and servo-controlled enhancement. The cycle is repeated until the servo achieves acceptable scores for the visual measures or decides that the image has been enhanced as much as is possible or advantageous. The servo control bypasses images that it determines need no enhancement. The system determines experimentally how much sharpening can be applied before detrimental sharpening artifacts appear; similar stop decisions are made when further contrast or lightness enhancement would produce unacceptable levels of saturation, signal clipping, or sharpening artifacts. The invention was developed to provide completely new capabilities for exceeding pilot visual performance by automatically clarifying turbid, low-light, and extremely hazy images for pilot view on heads-up or heads-down displays during critical flight maneuvers.
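A minimal sketch of such a measure-enhance-check feedback loop is shown below. It is not the patented algorithm: the contrast and lightness measures, the step sizes, and the clipping-based stop test are simple stand-ins chosen for illustration.

```python
# Minimal sketch of the servo-style enhancement loop described above
# (illustrative only; the measures and stop tests are simple stand-ins).
import numpy as np

def contrast(img):  return float(img.std())
def lightness(img): return float(img.mean())
def clipped_fraction(img): return float(np.mean((img <= 0.0) | (img >= 1.0)))

def visual_servo_enhance(img, target_contrast=0.25, target_lightness=0.5,
                         max_iters=20, max_clip=0.02):
    """img: float image in [0, 1]. Repeatedly measure, enhance a little, and
    stop when the targets are met, iterations run out, or clipping grows too high."""
    out = img.copy()
    for _ in range(max_iters):
        if contrast(out) >= target_contrast and \
           abs(lightness(out) - target_lightness) < 0.05:
            break                                          # measures acceptable
        trial = (out - out.mean()) * 1.05 + out.mean()     # small contrast step
        trial += 0.05 * (target_lightness - trial.mean())  # small lightness step
        trial = np.clip(trial, 0.0, 1.0)
        if clipped_fraction(trial) > max_clip:
            break                                          # stop: enhancement would clip
        out = trial
    return out

if __name__ == "__main__":
    hazy = 0.4 + 0.1 * np.random.default_rng(1).random((64, 64))
    print(contrast(hazy), contrast(visual_servo_enhance(hazy)))
```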
ShuttleSCAN 3-D
How It Works
The scanner's operation is based on the principle of laser triangulation. The ShuttleSCAN contains an imaging sensor; two lasers mounted on opposite sides of the imaging sensor; and a customized, on-board processor for processing the data from the imaging sensor. The lasers are oriented at a given angle and surface height based on the size of the objects being examined. For inspecting small details, such as defects in space shuttle tiles, the scanner is positioned close to the surface. This creates a small field of view but very high resolution. For scanning larger objects, such as in a robotic vision application, the scanner can be positioned several feet above the surface. This increases the field of view but results in slightly lower resolution. The laser projects a line on the surface directly below the imaging sensor. For a perfectly flat surface, this projected line is straight. As the ShuttleSCAN head moves over the surface, defects or irregularities above and below the surface cause the line to deviate from perfectly straight. The SPACE processor's proprietary algorithms interpret these deviations in real time and build a representation of the defect that is then transmitted to an attached PC for triangulation and 3-D display or printing. Real-time volume calculation of the defect is a capability unique to the ShuttleSCAN system.

Why It Is Better
The benefits of the ShuttleSCAN 3-D system are unique in the industry. No other 3-D scanner offers its combination of speed, resolution, size, power efficiency, and versatility. In addition, ShuttleSCAN can be used as a wireless instrument, unencumbered by cables. Traditional scanning systems make a tradeoff between resolution and speed; ShuttleSCAN's onboard SPACE processor eliminates this tradeoff. The system scans at speeds greater than 600,000 points per second with a resolution smaller than 0.001". Results of the scan are available in real time, whereas conventional systems scan over the surface, analyze the scanned data, and display the results long after the scan is complete.
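The line-deviation geometry described under "How It Works" can be illustrated in a few lines of code. This is a sketch of generic laser triangulation, not the SPACE processor's proprietary algorithms; the function name, the calibration constant, and the example numbers are assumptions.

```python
# Illustrative laser-triangulation geometry (a sketch, not ShuttleSCAN's SPACE
# processor algorithms): the laser strikes the surface at a known angle, so a
# sideways shift of the imaged laser line maps directly to surface height.
import math

def height_from_line_shift(shift_pixels, mm_per_pixel, laser_angle_deg):
    """shift_pixels: deviation of the laser line from its flat-surface position.
    mm_per_pixel: image scale at the working distance (assumed calibrated).
    laser_angle_deg: angle between the laser beam and the surface normal."""
    lateral_shift_mm = shift_pixels * mm_per_pixel
    # A bump of height h displaces the line by h * tan(angle), so invert that.
    return lateral_shift_mm / math.tan(math.radians(laser_angle_deg))

if __name__ == "__main__":
    # A 12-pixel deviation at 0.05 mm/pixel with a 30-degree laser angle:
    print(round(height_from_line_shift(12, 0.05, 30), 3), "mm")  # ~1.039 mm
```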
MERRA/AS and Climate Analytics-as-a-Service (CAaaS)
NASA Goddard Space Flight Center now offers a new capability for meeting this Big Data challenge: MERRA Analytic Services (MERRA/AS). MERRA/AS combines the power of high-performance computing, storage-side analytics, and web APIs to dramatically improve customer access to MERRA data. It represents NASA's first effort to provide Climate Analytics-as-a-Service. Retrospective analyses (or reanalyses) such as MERRA have long been important to scientists doing climate change research. MERRA is produced by NASA's Global Modeling and Assimilation Office (GMAO), which is a component of the Earth Sciences Division in Goddard's Sciences and Exploration Directorate. GMAO's research and development activities aim to maximize the impact of satellite observations in climate, weather, atmospheric, and land prediction using global models and data assimilation. These products are becoming increasingly important to application areas beyond traditional climate science. MERRA/AS provides a new cloud-based approach to storing and accessing the MERRA dataset. By combining high-performance computing, MapReduce analytics, and NASA's Climate Data Services API (CDS API), MERRA/AS moves much of the work traditionally done on the client side to the server side, close to the data and close to large compute power. This reduces the need for large data transfers and provides a platform to support complex server-side data analyses; it enables Climate Analytics-as-a-Service. MERRA/AS currently implements a set of commonly used operations (such as avg, min, and max) over all the MERRA variables. Of particular interest to many applications is a core collection of about two dozen MERRA land variables (such as humidity, precipitation, evaporation, and temperature). Using the RESTful services of the Climate Data Services API, it is now easy to extract basic historical climatology information about places and time spans of interest anywhere in the world. Since the CDS API is extensible, the community can participate in MERRA/AS's development by contributing new and more complex analytics to the MERRA/AS service. MERRA/AS demonstrates the power of CAaaS and advances NASA's ability to connect data, science, computational resources, and expertise to the many customers and applications it serves.
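As an illustration of the client side of such a service, the sketch below issues a single request for a server-side average so that only the small, reduced result crosses the network. The endpoint URL, parameter names, and response format are hypothetical placeholders, not the actual CDS API interface; consult the API documentation for the real calls.

```python
# Hypothetical sketch of a client-side request for a server-side average
# (endpoint, parameter names, and response format are assumptions made for
# illustration, not the real CDS API interface).
import requests

BASE_URL = "https://example.nasa.gov/cds/merra"   # placeholder, not the real endpoint

def average_variable(variable, start, end, bbox):
    """Ask the service to compute avg(variable) over a time span and bounding
    box, so only the reduced result is transferred to the client."""
    params = {
        "operation": "avg",
        "variable": variable,              # e.g. a MERRA land variable such as precipitation
        "start": start, "end": end,        # e.g. "1980-01", "2010-12"
        "bbox": ",".join(map(str, bbox)),  # lon_min, lat_min, lon_max, lat_max
    }
    resp = requests.get(BASE_URL, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json()

# Example call (would require a real endpoint):
# print(average_variable("PRECTOT", "1980-01", "2010-12", (-10, 35, 5, 45)))
```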