Spatial Standard Observer (SSO)

Patent Only, No Software Available For License.
Overview
The Spatial Standard Observer (SSO) was developed to predict the detectability of spatial contrast targets such as those used in the ModelFest project. The SSO is a lumped-parameter model that bases its predictions on the generalized energy of the visible contrast. Visible contrast means that the contrast has been reduced by a contrast sensitivity function (CSF); generalized energy means that the visible contrast is raised to a power higher than 2 before spatial and temporal integration. To adapt the SSO to predict the effects of variations in optical image quality on visual tasks, the optical component of the SSO CSF must be removed, leaving the neural CSF. And because target detection is not the typical criterion task for assessing optical image quality, the SSO concept must also be extended to other tasks, such as Sloan character recognition.
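The lumped-parameter recipe above (filter the contrast by a CSF, then pool with an exponent above 2) can be sketched in a few lines. Everything here is illustrative: the band-pass filter shape, the exponent `beta = 2.4`, the `pixels_per_degree` sampling, and the function name `sso_score` are stand-ins, not the calibrated SSO parameters.

```python
import numpy as np

def sso_score(contrast_image, pixels_per_degree=60.0, beta=2.4):
    """Toy SSO-style visibility score (hypothetical parameters).

    1. Filter the contrast image by a contrast sensitivity function (CSF).
    2. Raise the visible contrast to a power beta > 2 (generalized energy).
    3. Integrate over space and take the beta-th root (beta-norm pooling).
    """
    h, w = contrast_image.shape
    # Spatial frequency (cycles/degree) associated with each FFT bin.
    fy = np.fft.fftfreq(h, d=1.0 / pixels_per_degree)
    fx = np.fft.fftfreq(w, d=1.0 / pixels_per_degree)
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    # Illustrative band-pass CSF; the real SSO uses a calibrated filter.
    csf = f * np.exp(-f / 8.0)
    csf /= csf.max()
    visible = np.fft.ifft2(np.fft.fft2(contrast_image) * csf).real
    # Generalized energy pooling; result is in arbitrary JND-like units.
    return np.sum(np.abs(visible) ** beta) ** (1.0 / beta)
```

Because the pooled beta-norm is homogeneous of degree one, doubling the input contrast doubles the score, which is the behavior a JND-scaled metric needs near threshold.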

The Technology
The Spatial Standard Observer (SSO) provides a tool for measuring the visibility of an element, or the visual discriminability of two elements. The device may be used whenever it is necessary to measure or specify visibility or visual intensity. The SSO is based on a model of human vision and has been calibrated against an extensive set of human test data. It operates on a digital image or a pair of digital images, computing a numerical measure of the perceptual strength of the single image, or of the visible difference between the two images. Visibility measurements are provided in units of Just Noticeable Differences (JND), a standard measure of perceptual intensity; a target that is just visible has a measure of 1 JND. The SSO will be useful in a wide variety of applications, most notably the inspection of displays during the manufacturing process. Other uses include evaluating vision from unpiloted aerial vehicles (UAVs), predicting the visibility of UAVs from other aircraft or of aircraft on runways from the control tower, measuring the visibility of damage to aircraft and to the shuttle orbiter, evaluating the legibility of text, icons, or symbols in a graphical user interface, specifying camera and display resolution, estimating the quality of compressed digital video, and predicting outcomes of corrective laser eye surgery.
One of the applications of the technology is predicting outcomes of corrective laser eye surgery.
Benefits
  • Rapid, objective means of estimating degrees of visibility and discriminability
  • Simple and efficient design that produces an accurate visibility metric
  • Avoids the need for complicated spatial frequency filter banks
  • Permits accurate predictions of the visibility of oblique patterns

Applications
  • Evaluating vision from unmanned aerial vehicles
  • Predicting outcomes of corrective laser eye surgery
  • Inspection of displays during the manufacturing process
  • Estimation of the quality of compressed digital video
  • Evaluation of legibility of text
  • Measuring visibility of damage to aircraft and to the shuttle orbiter
Technology Details

Category: optics
Reference Number: TOP2-102
Case Numbers: ARC-14569-1, ARC-14569-2
Patent Numbers: 7,783,130; 8,139,892
Similar Results
Strobing to Mitigate Vibration for Display Legibility
The dominant frequency of the vibration that requires mitigation can be known in advance, measured in real time, or predicted with simulation algorithms. That frequency (or a lower-frequency submultiple) is then used to drive the strobing rate of the illumination source. For example, if the vibration frequency is 20 Hz, one could employ a strobe rate of 1, 2, 4, 5, 10, or 20 Hz, depending on which rate the operator finds least intrusive. The strobed illumination source can be internal or external to the display. Perceptual psychologists have long understood that strobed illumination can freeze moving objects in the visual field, an effect used both artistically and in technical applications. The present innovation is instead applicable to environments in which the human observer, rather than just the viewed object, undergoes vibration. Such environments include space, air, land, and sea vehicles, as well as travel on foot (e.g., walking or running on the ground or on treadmills). The technology can be integrated into handheld and fixed display panels, head-mounted displays, and cabin illumination for viewing printed materials.
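The candidate rates in the 20 Hz example follow from simple divisibility: any integer rate that evenly divides the vibration frequency flashes at a fixed phase of the motion, so the display appears frozen. A minimal helper (a hypothetical illustration, not taken from the patent) could enumerate them:

```python
def candidate_strobe_rates(vibration_hz: int) -> list[int]:
    """Integer strobe rates (Hz) that evenly divide the vibration
    frequency, so each flash catches the display at the same phase
    of the vibration cycle."""
    return [r for r in range(1, vibration_hz + 1) if vibration_hz % r == 0]
```

For a 20 Hz vibration this yields 1, 2, 4, 5, 10, and 20 Hz, matching the rates listed above; the operator would then pick whichever rate is least intrusive.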
Technology Example
Computational Visual Servo
The innovation improves upon the performance of passive automatic enhancement of digital images. Specifically, the image enhancement process is improved in terms of resulting contrast, lightness, and sharpness over prior automatic processing methods. The innovation brings the technique of active measurement and control to bear on the basic problem of enhancing a digital image by defining absolute measures of visual contrast, lightness, and sharpness, and by automatically applying the type and degree of enhancement needed based on automated image analysis. The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual measurement computation and a servo-controlled enhancement effect. The cycle repeats until the servo achieves acceptable scores for the visual measures or decides that it has enhanced the image as much as is possible or advantageous; images that the servo determines need no enhancement are bypassed. The system determines experimentally how much sharpening can be applied before detrimental sharpening artifacts appear. Similar stop decisions halt further contrast or lightness enhancement before it produces unacceptable levels of saturation or signal clipping. The invention was developed to provide completely new capabilities for extending pilot visual performance by automatically clarifying turbid, low-light-level, and extremely hazy images for pilot view on head-up or head-down displays during critical flight maneuvers.
Image from internal NASA presentation developed by inventor and dated May 4, 2020.
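The feedback loop described above can be sketched as a measure-enhance-re-measure cycle. The function names and the single scalar score below are illustrative placeholders; the actual invention computes separate measures of contrast, lightness, and sharpness and applies matched enhancement operators.

```python
def visual_servo(image, measure, enhance, target=1.0, max_iters=10):
    """Hypothetical servo loop for automatic image enhancement.

    measure(image) -> scalar visual-quality score (placeholder for the
        contrast/lightness/sharpness measures described in the text)
    enhance(image) -> incrementally enhanced image
    Loops until the score reaches the target (acceptable quality) or a
    stop decision fires (no further improvement is possible).
    """
    score = measure(image)
    for _ in range(max_iters):
        if score >= target:        # acceptable score: servo is done
            break
        candidate = enhance(image)
        new_score = measure(candidate)
        if new_score <= score:     # stop decision: enhancement no longer helps
            break
        image, score = candidate, new_score
    return image, score
```

Note that an image whose initial score already meets the target is bypassed unchanged, mirroring the bypass behavior described above.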
Reflection-Reducing Imaging System for Machine Vision Applications
NASA's imaging system comprises a small CMOS camera fitted with a C-mount lens affixed to a 3D-printed mount. Light from the high-intensity LED is passed through a lens that both diffuses and collimates the LED output, and this light is coupled onto the camera's optical axis using a 50:50 beam-splitting prism. Use of the collimating/diffusing lens to condition the LED output provides an illumination source of similar diameter to the camera's imaging lens. This is the feature that reduces or eliminates shadows that would otherwise be projected onto the subject plane as a result of refractive index variations in the imaged volume. By coupling the light from the LED unit onto the camera's optical axis, reflections from windows (often present in wind tunnel facilities to allow direct views of a test section) can be minimized or eliminated when the camera is placed at a small angle of incidence relative to the window's surface. This effect is demonstrated in the image on the bottom left of the page. Eight imaging systems were fabricated and used for capturing background-oriented schlieren (BOS) measurements of flow from a heat gun in the 11-by-11-foot test section of the NASA Ames Unitary Plan Wind Tunnel (see test setup on right). Two additional camera systems (not pictured) captured photogrammetry measurements.
Video Acuity Measurement System
The Video Acuity metric is designed to provide a unique and meaningful measurement of the quality of a video system. The automated system for measuring video acuity is based on a model of human letter recognition. The video system being measured comprises a camera with its associated optics and sensor, processing elements including digital compression, transmission over an electronic network, and an electronic display viewed by a human observer. The quality of a video system affects the ability of the human viewer to perform public safety tasks, such as reading automobile license plates and recognizing faces or handheld weapons. The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other degradations. This is because it does not rely on any particular theoretical model of imaging, but simply measures performance in a task that incorporates essential aspects of human use of video, notably the recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers it yields have immediate and concrete meaning, and they can be related to the human visual acuity needed for the task. The Video Acuity measurement system uses sets of optotypes and automated letter recognition to simulate the human observer.
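The measurement implied above, finding the smallest optotype the automated recognizer can still identify after passing through the video chain, can be sketched as a simple threshold search. Here `recognize_through_system` is a hypothetical stand-in for the full camera-compression-display-recognizer pipeline; a real measurement would average over many letters per size rather than test each size once.

```python
def video_acuity(recognize_through_system, sizes):
    """Return the smallest optotype size (e.g., in arcmin) at which the
    automated letter recognizer still succeeds through the video system
    under test, or None if recognition fails at every tested size.

    recognize_through_system(size) -> True on correct recognition.
    """
    for size in sorted(sizes):  # sweep from smallest to largest
        if recognize_through_system(size):
            return size         # acuity threshold, analogous to human acuity
    return None
```

Because the returned threshold is on the same scale as human letter acuity, it can be compared directly with the visual acuity a task is known to require.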
Computer-Brain Interface for Display Control
The basis of the NASA innovation is the brain signal created by flashing light, referred to as a Visually-Evoked Cortical Potential (VECP). The VECP brain signal can be detected by electroencephalogram (EEG) measurements recorded by electrode sensors placed over the brain’s occipital lobe. In the case of the NASA innovation, the flashing light is embedded as an independent function in an electronic display, e.g. backlit LCD or OLED display. The frequency of the flashing light can be controlled separate from the display refresh rate frequency so as to provide a large number of different frequencies for identifying specific display pixels or pixel regions. Also, the independently controlled flashing allows flashing rates to be chosen such that the display user sees no noticeable flickering. Further, because the VECP signal is correlated with the frequency of the signal in specific regions of the display, the approach determines the absolute location of eye fixation, eliminating the need to calibrate the gaze tracker to the display. Another key advantage of this novel method of brain-display eye gaze tracking is that it is only sensitive to where the user is focused and attentive to the information being displayed. Conventional optical eye tracking devices detect where the user is looking, regardless of whether they are paying attention to what they are seeing. An early-stage prototype has proven the viability of this innovation. NASA seeks partners to continue development and commercialization.
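The frequency-tagging idea above can be sketched as a single-channel decoder: each display region flickers at its own rate, and the region whose tag frequency carries the most EEG power is taken as the fixation target. This is an illustrative sketch under simplified assumptions (one electrode, exact tag frequencies, no artifact rejection), not NASA's implementation.

```python
import numpy as np

def decode_gaze_region(eeg, fs, region_freqs):
    """Pick the display region whose flicker frequency dominates the
    EEG spectrum (hypothetical single-channel VECP decoder).

    eeg: 1-D array of samples from an occipital electrode
    fs: sample rate in Hz
    region_freqs: {region_name: flicker frequency in Hz}
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def power_at(f):
        # Magnitude at the FFT bin nearest the tag frequency.
        return spectrum[np.argmin(np.abs(freqs - f))]

    return max(region_freqs, key=lambda r: power_at(region_freqs[r]))
```

Because each region's tag frequency is known in absolute terms, the decoded region directly gives the fixation location on the display, which is why no per-session gaze calibration is needed.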