Video Acuity Measurement System (TOP2-164)
Patent Only, No Software Available For License.
Overview
There is a widely acknowledged need for metrics to quantify the performance of video systems. NASA's new empirical Video Acuity metric is simple to measure and relates directly to task performance. Video acuity is determined by the smallest letters that can be automatically identified through a video system, and it is expressed most conveniently in letters per degree of visual angle. Video systems are used broadly for public safety and range from very simple, inexpensive systems to very complex, powerful, and expensive ones. These systems are used by fire departments, police departments, homeland security agencies, and a wide variety of commercial entities, in streets, stores, banks, airports, cars, and aircraft, as well as many other settings. They support a variety of tasks, including detection of smoke and fire, recognition of weapons, face identification, and event perception. In all of these contexts, the quality of the video system affects performance of the visual task. The Video Acuity metric matches the quality of the system to the demands of its tasks.
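As a rough illustration of the units involved (not taken from the patent itself), the sketch below converts an assumed smallest-identifiable letter height and viewing distance into letters per degree of visual angle; the function name and the example numbers are illustrative only.

```python
import math

def letters_per_degree(letter_height, viewing_distance):
    """Acuity expressed as letters per degree of visual angle.

    letter_height and viewing_distance are in the same units (e.g. meters).
    The smallest letter the video system can still identify subtends an angle
    of 2*atan(h / (2*d)) degrees; acuity is the reciprocal of that angle.
    """
    angle_deg = math.degrees(2.0 * math.atan(letter_height / (2.0 * viewing_distance)))
    return 1.0 / angle_deg

# Example (illustrative numbers): a 5 cm letter viewed from 10 m subtends
# about 0.29 degrees, giving a video acuity of roughly 3.5 letters per degree.
print(letters_per_degree(0.05, 10.0))
```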
The Technology
The Video Acuity metric is designed to provide a unique and meaningful measurement of the quality of a video system. The automated system for measuring video acuity is based on a model of human letter recognition. The Video Acuity measurement system comprises a camera with its associated optics and sensor; processing elements, including digital compression; transmission over an electronic network; and an electronic display for viewing by a human observer. The quality of a video system affects the ability of the human viewer to perform public safety tasks such as reading automobile license plates, recognizing faces, and recognizing handheld weapons. The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other degradations. This is because it does not rely on any particular theoretical model of imaging; it simply measures performance in a task that incorporates essential aspects of human use of video, notably recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers it yields have immediate and concrete meaning, and they can be related to the human visual acuity needed to do the task. The Video Acuity measurement system uses different sets of optotypes and automated letter recognition to simulate the human observer.
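A minimal sketch of the measurement loop implied by this description is shown below. It assumes hypothetical callables capture_through_system (the camera, compression, network, and display chain under test) and recognize_letter (the automated letter-recognition model standing in for the human observer); the optotype set, trial count, and threshold are illustrative, not values specified by the technology.

```python
import random

LETTERS = "CDHKNORSVZ"  # a Sloan-style optotype set (illustrative)

def measure_video_acuity(capture_through_system, recognize_letter,
                         letter_sizes_deg, trials_per_size=20, threshold=0.75):
    """Estimate video acuity in letters per degree of visual angle.

    capture_through_system(letter, size_deg) is a placeholder callable for the
    video chain under test; recognize_letter(image) is a placeholder for the
    automated letter-recognition observer model. Each optotype size (in
    degrees) is tested from largest to smallest, and the smallest size
    identified at or above `threshold` proportion correct sets the acuity.
    """
    smallest_identified = None
    for size in sorted(letter_sizes_deg, reverse=True):
        correct = 0
        for _ in range(trials_per_size):
            letter = random.choice(LETTERS)
            image = capture_through_system(letter, size)
            if recognize_letter(image) == letter:
                correct += 1
        if correct / trials_per_size >= threshold:
            smallest_identified = size
        else:
            break  # letters this small are no longer reliably identified
    return None if smallest_identified is None else 1.0 / smallest_identified
```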
Benefits
- Simple
- 100% objective
- Collapses all system issues into a single metric
- Metric is relevant to end user
- Metric can be related to human visual acuity
- Automated
Applications
- Monitor events and locations
- Video surveillance
- Face identification
- Homeland security
- Safety and security
- Detection of smoke and fire
- Recognition of weapons
Similar Results
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. In this system, feature detection techniques such as Hough circle and Harris corner detection are used to identify which portions of the image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to them. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. The estimated aircraft position and orientation are fed into an extended Kalman filter to further refine the estimates of aircraft position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate aircraft landing.
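The following sketch illustrates two stages of such a pipeline using OpenCV: candidate feature detection with the Harris corner and Hough circle operators, and a pose solution from matched landmarks. cv2.solvePnP is used here only as a generic stand-in for the COPOSIT module, and the landmark-matching and extended Kalman filter stages are omitted.

```python
import cv2
import numpy as np

def detect_candidate_features(frame):
    """Find image regions that may contain landmark features."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)   # corner response map
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=30, minRadius=5, maxRadius=60)
    return corners, circles

def estimate_camera_pose(matched_world_pts, matched_image_pts, K):
    """Recover camera position and orientation from matched landmarks.

    matched_world_pts: Nx3 world coordinates of known landmarks.
    matched_image_pts: Nx2 pixel coordinates of the same landmarks.
    K: 3x3 camera intrinsic matrix.
    cv2.solvePnP stands in for the COPOSIT pose step described above.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(matched_world_pts, dtype=np.float64),
        np.asarray(matched_image_pts, dtype=np.float64),
        K, distCoeffs=None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # camera rotation matrix
    camera_position = -R.T @ tvec     # camera center in world coordinates
    return camera_position.ravel(), R
```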
Optical Head-Mounted Display System for Laser Safety Eyewear
The system combines laser goggles with an optical head-mounted display that shows a real-time video camera image of a laser beam, so users can visualize the beam while their eyes remain protected. The system also supports numerous additional features in the optical head-mounted display, such as digital zoom, overlays of additional information such as power meter data, a Bluetooth wireless interface, and digital overlays of beam location. The system is built on readily available components and can be used with existing laser eyewear. The software converts the color being observed to another color that transmits through the goggles. For example, if a red laser is being used and red-blocking glasses are worn, the software can convert red to blue, which is readily transmitted through the laser eyewear. Similarly, color video can be converted to black-and-white to transmit through the eyewear.
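A minimal sketch of the color-conversion step might look like the following; the specific channel mapping (red into blue) and the function name are illustrative assumptions rather than the product's actual implementation.

```python
import cv2

def remap_blocked_color(frame_bgr, mode="red_to_blue"):
    """Re-color a camera frame so the laser stays visible through the goggles.

    With red-blocking eyewear, the red channel is copied into the blue channel
    so the beam appears blue on the head-mounted display; "grayscale" converts
    the whole frame to black-and-white, which also transmits through the lens.
    """
    if mode == "red_to_blue":
        out = frame_bgr.copy()
        out[:, :, 0] = frame_bgr[:, :, 2]   # blue channel <- red channel
        out[:, :, 2] = 0                    # suppress the blocked red channel
        return out
    if mode == "grayscale":
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    return frame_bgr
```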
Photogrammetric Method for Calculating Relative Orientation
The NASA technology uses a photogrammetry algorithm to calculate the relative orientation between two rigid bodies. The software, written in LabVIEW and MATLAB, quantitatively analyzes the photogrammetric data collected from the camera system to determine the 6-DOF position and rotation of the observed object.
The system comprises an arrangement of arbitrarily placed cameras rigidly fixed on one body and a collection of photogrammetric targets rigidly fixed on the second body. The cameras can either be placed on rigidly fixed objects surrounding the second body (facing inward) or on an object directed toward the surrounding environment (facing outward). At any given point in time, the cameras must capture at least five non-collinear targets. The 6-DOF accuracy increases as additional cameras and targets are used. The equipment requirements include a set of heterogeneous cameras, a collection of photogrammetric targets, a data storage device, and a processing PC. Camera calibration and initial target measurements are required prior to image capture.
A nonprovisional patent application on this technology has been filed.
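As an illustration of the underlying geometry only (the released software is in LabVIEW and MATLAB, and its exact algorithm is not described here), the sketch below assumes the calibrated cameras have already triangulated each target's 3D position in the camera body's frame and recovers the 6-DOF relative pose with a standard Kabsch/Procrustes fit.

```python
import numpy as np

def relative_pose(targets_body2, targets_camera_body):
    """6-DOF pose of the second body relative to the camera body (Kabsch fit).

    targets_body2: Nx3 target coordinates in the second body's own frame
    (N >= 5 non-collinear targets, matching the requirement above).
    targets_camera_body: Nx3 coordinates of the same targets as triangulated
    by the calibrated cameras, expressed in the camera body's frame.
    Returns (R, t) such that targets_camera_body ~= targets_body2 @ R.T + t.
    """
    A = np.asarray(targets_body2, dtype=float)
    B = np.asarray(targets_camera_body, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```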
Low Cost Star Tracker Software
The current Star Tracker uses a Lumenera LW230 monochrome machine-vision camera and a FUJINON HF35SA-1 35mm lens. The star tracker cameras are all connected to and powered by the PC/104 stack via USB 2.0 ports. The software is written in C++ and can easily be adapted to other camera and lens platforms by setting new variables in the software for new focal conditions. To identify stars in images, the software contains a star database derived from the 118,218-star Hipparcos catalog [1]. The database contains a list of every star pair within the camera field of view and the angular distance between those pairs, along with the inertial position of each individual star taken directly from the Hipparcos catalog. To keep the star database small, only stars of magnitude 6.5 or brighter were included. The star tracking process begins when image data is retrieved by the software from the data buffers in the camera. The image is converted to a binary image via a threshold brightness value, so that on (bright) pixels are represented by 1s and off (dark) pixels by 0s. The binary image is then searched for blobs, which are connected groups of on pixels. These blobs represent unidentified stars or other objects such as planets, deep-sky objects, other satellites, or noise. The centroids of the blob locations are computed, and a unique pattern-recognition algorithm is applied to identify which, if any, stars are represented. During this process, false stars are effectively removed and only repeatedly and uniquely identifiable stars are stored. After stars are identified, another algorithm is applied to their position information to determine the attitude of the satellite. The attitude is computed as a set of Euler angles: right ascension (RA), declination (Dec), and roll. The first two Euler angles are computed using a linear system derived from vector algebra and the information of two identified stars in the image. The roll angle is computed using an iterative method that relies on the information of a single star and the first two Euler angles.
[1] ESA, 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200
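A simplified sketch of the thresholding and blob-centroid stage described above is given below, using scipy.ndimage; the star-pair matching against the Hipparcos-derived database and the attitude solution are separate steps not shown, and the threshold and minimum blob size are illustrative values.

```python
import numpy as np
from scipy import ndimage

def find_star_centroids(image, threshold, min_pixels=3):
    """Threshold an image and return centroids of candidate star blobs.

    Pixels brighter than `threshold` become 1s and the rest 0s; connected
    groups of on pixels (blobs) are labeled and their intensity-weighted
    centroids computed. Blobs smaller than `min_pixels` are discarded as
    likely noise.
    """
    binary = image > threshold
    labels, count = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, count + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    centroids = ndimage.center_of_mass(image, labels, keep)
    return np.array(centroids)  # (row, col) pairs, one per candidate star
```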
Spatial Standard Observer (SSO)
The Spatial Standard Observer (SSO) provides a tool that allows measurement of the visibility of an element, or visual discriminability of two elements. The device may be used whenever it is necessary to measure or specify visibility or visual intensity. The SSO is based on a model of human vision, and has been calibrated by an extensive set of human test data. The SSO operates on a digital image or a pair of digital images. It computes a numerical measure of the perceptual strength of the single image, or of the visible difference between the two images. The visibility measurements are provided in units of Just Noticeable Differences (JND), a standard measure of perceptual intensity.
A target that is just visible has a measure of 1 JND. The SSO will be useful in a wide variety of applications, most notably in the inspection of displays during the manufacturing process. It is also useful for evaluating vision from unpiloted aerial vehicles (UAVs), predicting visibility of UAVs from other aircraft and of aircraft on runways from the control tower, measuring visibility of damage to aircraft and to the shuttle orbiter, evaluating legibility of text, icons, or symbols in a graphical user interface, specifying camera and display resolution, estimating the quality of compressed digital video, and predicting outcomes of corrective laser eye surgery.
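For orientation only, the toy sketch below mimics the general shape of such a metric: filter the difference between two images with a crude contrast-sensitivity-like band-pass and pool the result into a single scalar. It is not the calibrated SSO model, and its output is not in true JND units.

```python
import numpy as np
from scipy import ndimage

def toy_visibility_metric(image_a, image_b, jnd_scale=1.0):
    """Toy stand-in for an SSO-style comparison of two images.

    The difference image is filtered with a difference-of-Gaussians band-pass
    (a rough contrast-sensitivity surrogate) and pooled with a Minkowski norm
    into one scalar. The real SSO uses a human-vision model calibrated against
    extensive test data; jnd_scale here is an arbitrary placeholder, not a
    calibrated JND unit.
    """
    diff = np.asarray(image_a, dtype=float) - np.asarray(image_b, dtype=float)
    band = ndimage.gaussian_filter(diff, 1.0) - ndimage.gaussian_filter(diff, 4.0)
    pooled = np.power(np.mean(np.abs(band) ** 4), 1.0 / 4.0)  # Minkowski pooling
    return jnd_scale * pooled
```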