Spatial Standard Observer (SSO) (TOP2-102)
Patent Only, No Software Available For License.
Overview
The Spatial Standard Observer (SSO) was developed to predict the detectability of spatial contrast targets such as those used in the ModelFest project. The SSO is a lumped-parameter model that bases its predictions on the generalized energy of visible contrast. Visible contrast means that the contrast has been attenuated by a contrast sensitivity function (CSF); generalized energy means that the visible contrast is raised to a power greater than 2 before spatial and temporal integration. To adapt the SSO to predict the effects of variations in optical image quality on task performance, the optical component of the SSO CSF must be removed, leaving the neural CSF. Also, since target detection is not the typical criterion task for assessing optical image quality, the SSO concept must be extended to other tasks, such as Sloan character recognition.
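The computation described above can be sketched in a few lines. The code below is an illustrative simplification, not the patented, calibrated SSO: the CSF shape, the pooling exponent `beta = 2.4`, and the `sso_jnd` interface are all assumptions introduced here for clarity.

```python
import numpy as np

def sso_jnd(img_a, img_b, beta=2.4, pix_per_deg=60.0):
    """Illustrative SSO-style detectability score (not NASA's calibrated model).

    img_a, img_b: luminance images (2-D arrays) of identical shape.
    beta: pooling exponent > 2 ("generalized energy"); 2.4 is an assumed value.
    """
    # Contrast difference between the two images
    mean_lum = img_a.mean()
    contrast = (img_a - img_b) / mean_lum

    # Radial spatial-frequency grid in cycles per degree
    fy = np.fft.fftfreq(contrast.shape[0]) * pix_per_deg
    fx = np.fft.fftfreq(contrast.shape[1]) * pix_per_deg
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

    # Simple band-pass CSF (an assumed parametric form for illustration)
    csf = f * np.exp(-f / 8.0)
    csf /= csf.max() + 1e-12

    # "Visible contrast": contrast attenuated by the CSF in the frequency domain
    visible = np.fft.ifft2(np.fft.fft2(contrast) * csf).real

    # Generalized energy: Minkowski pooling with exponent beta > 2
    return (np.abs(visible) ** beta).sum() ** (1.0 / beta)
```

With the output scaled so that a just-visible target scores 1, the result would be expressed in JND units; identical images score 0, and larger visible differences score higher.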
The Technology
The Spatial Standard Observer (SSO) provides a tool that allows measurement of the visibility of an element, or visual discriminability of two elements. The device may be used whenever it is necessary to measure or specify visibility or visual intensity. The SSO is based on a model of human vision, and has been calibrated by an extensive set of human test data. The SSO operates on a digital image or a pair of digital images. It computes a numerical measure of the perceptual strength of the single image, or of the visible difference between the two images. The visibility measurements are provided in units of Just Noticeable Differences (JND), a standard measure of perceptual intensity.
A target that is just visible has a measure of 1 JND. The SSO will be useful in a wide variety of applications, most notably in the inspection of displays during the manufacturing process. It is also useful for evaluating vision from unpiloted aerial vehicles (UAVs), predicting the visibility of UAVs from other aircraft or of aircraft on runways from the control tower, measuring the visibility of damage to aircraft and to the shuttle orbiter, evaluating the legibility of text, icons, or symbols in a graphical user interface, specifying camera and display resolution, estimating the quality of compressed digital video, and predicting outcomes of corrective laser eye surgery.
Benefits
- Rapid, objective means of estimating degrees of visibility and discriminability
- Simple and efficient design that produces an accurate visibility metric
- Avoids the need for complicated spatial frequency filter banks
- Permits accurate visibility predictions for oblique patterns
Applications
- Evaluating vision from unmanned aerial vehicles
- Predicting outcomes of corrective laser eye surgery
- Inspection of displays during the manufacturing process
- Estimation of the quality of compressed digital video
- Evaluation of legibility of text
- Measuring visibility of damage to aircraft and to the shuttle orbiter
Similar Results
Strobing to Mitigate Vibration for Display Legibility
The dominant frequency of the vibration that requires mitigation can be known in advance, measured in real time, or predicted with simulation algorithms. That frequency (or an integer submultiple of it) is then used to drive the strobing rate of the illumination source. For example, if the vibration frequency is 20 Hz, one could employ a strobe rate of 1, 2, 4, 5, 10, or 20 Hz, depending on which rate the operator finds least intrusive. The strobed illumination source can be internal or external to the display.
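The candidate rates in the 20 Hz example are simply the integer divisors of the vibration frequency, since each flash must land at the same phase of the vibration cycle. A minimal sketch (the function name is hypothetical, not from the patent):

```python
def candidate_strobe_rates(vibration_hz):
    """Return the integer strobe rates (Hz) that evenly divide the vibration
    frequency, so every flash occurs at the same phase of the vibration cycle.
    The operator would then pick whichever candidate rate is least intrusive."""
    return [r for r in range(1, int(vibration_hz) + 1) if vibration_hz % r == 0]
```

For a 20 Hz vibration this yields the rates listed in the text: 1, 2, 4, 5, 10, and 20 Hz.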
Perceptual psychologists have long understood that strobed illumination can freeze moving objects in the visual field, an effect exploited for both artistic and technical purposes. The present innovation instead applies to environments in which the human observer, rather than just the viewed object, undergoes vibration. Such environments include space, air, land, and sea vehicles, as well as travel on foot (e.g., walking or running on the ground or on treadmills). The technology can be integrated into handheld and fixed display panels, head-mounted displays, and cabin illumination for viewing printed materials.
Computational Visual Servo
The innovation improves upon the performance of passive automatic enhancement of digital images. Specifically, the image enhancement process is improved in terms of resulting contrast, lightness, and sharpness over the prior art of automatic processing methods. The innovation brings the technique of active measurement and control to bear upon the basic problem of enhancing the digital image by defining absolute measures of visual contrast, lightness, and sharpness. This is accomplished by automatically applying the type and degree of enhancement needed based on automated image analysis.
The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual measurement computation and servo-controlled enhancement effect. The cycle is repeated until the servo achieves acceptable scores for the visual measures or reaches a decision that it has enhanced as much as is possible or advantageous. The servo-control will bypass images that it determines need no enhancement.
The system determines experimentally how much sharpening, in absolute terms, can be applied before detrimental sharpening artifacts appear. These are stop decisions, triggered when further contrast or lightness enhancement would produce unacceptable levels of saturation, signal clipping, or sharpening artifacts.
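The feedback loop described above (measure, enhance, re-measure, stop) can be sketched as follows. All interfaces here are hypothetical stand-ins, not NASA's implementation: `measure` returns a dictionary of visual scores, `enhance` applies one increment of servo-controlled enhancement, and `targets` holds the acceptable score thresholds.

```python
def enhance_with_servo(image, measure, enhance, targets, max_iters=10):
    """Illustrative servo loop over a digital image (hypothetical interfaces).

    measure(image)  -> dict of visual scores (e.g. contrast, lightness, sharpness)
    enhance(image, scores, targets) -> image after one enhancement step
    targets: dict mapping each score name to its acceptable threshold
    """
    scores = measure(image)
    # Bypass images that the servo determines need no enhancement
    if all(scores[k] >= targets[k] for k in targets):
        return image
    for _ in range(max_iters):
        image = enhance(image, scores, targets)
        new_scores = measure(image)
        # Stop when every visual measure reaches its acceptable score
        if all(new_scores[k] >= targets[k] for k in targets):
            break
        # Stop when no measure is still improving: enhancing further
        # is no longer possible or advantageous
        if all(new_scores[k] <= scores[k] for k in targets):
            break
        scores = new_scores
    return image
```

The two `break` conditions mirror the text: the servo either achieves acceptable scores or decides it has enhanced as much as is possible or advantageous.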
The invention was developed to provide completely new capabilities for exceeding normal pilot visual performance by automatically clarifying turbid, low-light, and extremely hazy images for presentation on head-up or head-down displays during critical flight maneuvers.
Reflection-Reducing Imaging System for Machine Vision Applications
NASA's imaging system comprises a small CMOS camera fitted with a C-mount lens, affixed to a 3D-printed mount. Light from the high-intensity LED is passed through a lens that both diffuses and collimates the LED output, and this light is coupled onto the camera's optical axis using a 50:50 beam-splitting prism.
Use of the collimating/diffusing lens to condition the LED output provides an illumination source of similar diameter to the camera's imaging lens. This is the feature that reduces or eliminates shadows that would otherwise be projected onto the subject plane as a result of refractive-index variations in the imaged volume. By coupling the light from the LED unit onto the camera's optical axis, reflections from windows (often present in wind tunnel facilities to allow direct views of a test section) can be minimized or eliminated when the camera is placed at a small angle of incidence relative to the window's surface. This effect is demonstrated in the image on the bottom left of the page.
Eight imaging systems were fabricated and used for capturing background oriented schlieren (BOS) measurements of flow from a heat gun in the 11-by-11-foot test section of the NASA Ames Unitary Plan Wind Tunnel (see test setup on right). Two additional camera systems (not pictured) captured photogrammetry measurements.
Video Acuity Measurement System
The Video Acuity metric is designed to provide a unique and meaningful measurement of the quality of a video system. The automated system for measuring video acuity is based on a model of human letter recognition. The Video Acuity measurement system comprises a camera with its associated optics and sensor, processing elements including digital compression, transmission over an electronic network, and an electronic display for viewing by a human observer. The quality of a video system affects the ability of the human viewer to perform public safety tasks, such as reading automobile license plates, recognizing faces, and recognizing handheld weapons. The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other degradations. This is because it does not rely on any particular theoretical model of imaging, but simply measures performance in a task that incorporates essential aspects of human use of video, notably recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers it yields have immediate and concrete meaning, and they can be related to the human visual acuity needed to do the task. The Video Acuity measurement system uses different sets of optotypes and automated letter recognition to simulate the human observer.
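The measurement procedure implied above, finding the smallest optotype the automated recognizer can still identify after the video chain has degraded it, can be sketched as a simple descending search. The interfaces (`channel`, `recognize`), the 80% accuracy threshold, and the function name are assumptions for illustration; the letter set is the standard ten Sloan letters.

```python
import random

def video_acuity(channel, recognize, sizes, letters="CDHKNORSVZ",
                 trials=10, threshold=0.8):
    """Illustrative acuity search (hypothetical interfaces).

    channel(letter, size) -> the optotype image after capture, compression,
        network transmission, and display (the video chain under test)
    recognize(image) -> the letter reported by the automated recognizer
    Returns the smallest size still recognized with accuracy >= threshold.
    """
    best = None
    for size in sorted(sizes, reverse=True):  # test from largest to smallest
        correct = 0
        for _ in range(trials):
            letter = random.choice(letters)
            if recognize(channel(letter, size)) == letter:
                correct += 1
        if correct / trials >= threshold:
            best = size
        else:
            break  # accuracy fell below threshold; stop descending
    return best
```

The returned size plays the role of an acuity limit: smaller is better, and it can be compared directly against the human acuity a task demands.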
Oculometric Testing for Detecting/Characterizing Mild Neural Impairment
To assess various aspects of dynamic visual and visuomotor function including peripheral attention, spatial localization, perceptual motion processing, and oculomotor responsiveness, NASA developed a simple five-minute clinically relevant test that measures and computes more than a dozen largely independent eye-movement-based (oculometric) measures of human neural performance. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance and may prove to be a useful assessment tool for mild functional neural impairments across a wide range of etiologies and brain regions. The technology may be useful to clinicians to localize affected brain regions following trauma, degenerative disease, or aging, to characterize and quantify clinical deficits, to monitor recovery of function after injury, and to detect operationally relevant altered or impaired visual performance at subclinical levels. This novel system can be used as a sensitive screening tool by comparing the oculometric measures of an individual to a normal baseline population, or from the same individual before and after exposure to a potentially harmful event (e.g., a boxing match, football game, combat tour, extended work schedule with sleep disruption, blast or toxic exposure, space mission), or on an ongoing basis to monitor performance for recovery to baseline. The technology provides a set of largely independent metrics of visual and visuomotor function that are sensitive and reliable within and across observers, yielding a signature multidimensional impairment vector that can be used to characterize the nature of a mild deficit, not just simply detect it.
Initial results from peer-reviewed studies of Traumatic Brain Injury, sleep deprivation with and without caffeine, and low-dose alcohol consumption have shown that this NASA technology can be used to assess subtle deficits in brain function before overt clinical symptoms become obvious, as well as the efficacy of countermeasures.
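The baseline-comparison idea behind the screening tool can be sketched as follows: express each oculometric measure as a z-score against the normal baseline population, yielding the multidimensional impairment vector described above. The metric names, the 2-sigma cutoff, and both function names are assumptions for illustration, not clinical criteria from the NASA test.

```python
import math

def impairment_vector(subject, baseline_mean, baseline_std):
    """Express each oculometric measure as a z-score against a normal
    baseline population, yielding a multidimensional impairment vector.
    (Metric names and interfaces are hypothetical.)"""
    return {m: (subject[m] - baseline_mean[m]) / baseline_std[m]
            for m in subject}

def flags_impairment(zvec, z_cutoff=2.0):
    """Flag a subject whose overall deviation from baseline exceeds a chosen
    cutoff (the 2-sigma cutoff is an assumption, not a clinical criterion)."""
    magnitude = math.sqrt(sum(z * z for z in zvec.values()))
    return magnitude > z_cutoff
```

Because each component of the vector is largely independent, the pattern of which z-scores deviate, not just the overall magnitude, is what characterizes the nature of a deficit rather than merely detecting it.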