Robotics Automation and Control
Anonymous Feature Processing for Enhanced Navigation
This concept presents a new statistical likelihood function and Bayesian analysis update for non-standard measurement types that rely on associations between observed and cataloged features. These measurement types inherently contain non-standard errors that standard techniques, such as the Kalman filter, make no effort to model, and this mismodeling can lead to filter instability and degraded performance.
Vision-based navigation methods utilizing the Kalman filter involve a preprocessing step to identify features within an image by referencing a known catalog. However, errors in this preprocessing can cause navigation failures. Anonymous Feature Processing (AFP) offers a new approach, directly processing the measurements generated by the features themselves, such as range or bearing, without requiring identification.
Operating on finite set statistics principles, AFP treats data as sets rather than as individually labeled features. This enables simultaneous tracking of multiple targets without feature labeling. Unlike the sequential processing of the Kalman filter, AFP processes updates in parallel, independently scoring each hypothesis with a mathematically rigorous likelihood function. This parallel processing provides robust navigation updates in dynamic environments without requiring an identification algorithm upstream of the filter.
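One way to see how a set-based likelihood removes the need for feature labeling: score each measurement against every predicted catalog point and sum over the possible origins, so no measurement is ever forced into a single association. The following is a minimal sketch under simple assumptions (Gaussian errors, 2D point measurements; the function names and values are illustrative, not NASA's implementation):

```python
import numpy as np

def anonymous_likelihood(measurements, predicted_points, sigma):
    """Score a candidate navigation state against a set of anonymous
    measurements. Each measurement is compared against every predicted
    catalog point; summing over all candidate origins avoids committing
    to any single measurement-to-feature association."""
    like = 1.0
    for z in measurements:
        # Squared distance from this measurement to each predicted point.
        d2 = np.sum((predicted_points - z) ** 2, axis=1)
        # Gaussian likelihood of each possible origin, then sum over origins.
        per_assoc = np.exp(-0.5 * d2 / sigma**2) / (2 * np.pi * sigma**2)
        like *= per_assoc.sum()
    return like

# Two anonymous 2D measurements, three predicted feature locations.
z_set = np.array([[1.0, 2.0], [3.0, 1.0]])
pred = np.array([[1.1, 2.1], [3.0, 0.9], [5.0, 5.0]])
print(anonymous_likelihood(z_set, pred, sigma=0.5))
```

A filter can evaluate this score for many candidate states in parallel; states whose predicted feature set explains the observed set, in any order, score highest.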
Computational simulations conducted at Johnson Space Center demonstrate that AFP's performance matches or exceeds that of the ideal Kalman filter, even under non-ideal conditions. Anonymous Feature Processing for Enhanced Navigation is at a technology readiness level (TRL) 4 (component and/or breadboard validation in laboratory environment) and is now available for patent licensing. Please note that NASA does not manufacture products itself for commercial sale.
Aerospace
Spacecraft with Artificial Gravity Modules
Conventionally, artificial gravity in space has been envisioned as a large rotating space station that creates an inertial force mimicking the effects of a gravitational force. However, generating artificial gravity with large rotating structures poses several problems, including (1) the need to mass balance the entire rotating spacecraft in order to eliminate or minimize rotational imbalance, which causes gyroscopic precession/nutation motions and other oscillations of the rotating spacecraft; (2) the potentially prohibitive cost, time, and schedule to build such a large rotating system; (3) the need to mass balance the spacecraft in real time so as to minimize passenger discomfort and structural stress on the spacecraft; (4) the difficulty of docking other spacecraft to the rotating spacecraft; (5) the absence or minimal presence of non-rotating structure for 0G research and industrial use; and (6) the generation of extraneous Coriolis effects on spacecraft inhabitants.

The novel technology can help solve these and other problems by (1) providing a non-rotating space station or structure with connecting modules that generate artificial gravity by traveling along a circular path around the non-rotating station; (2) providing modules that are more easily built and balanced; (3) providing a stationary structure that can serve as a platform for components that do not need gravity to function; (4) providing the capability to actively interrogate what levels of mass imbalance are acceptable, for use in determining operational constraints; and (5) reducing or eliminating Coriolis effects on occupants of habitation modules. The concept is very cost-effective and allows a minimal initial system to produce artificial gravity in the first phases of construction, before the full structure is built.
An additional benefit is that construction and assembly of new capabilities can be performed without disrupting the ongoing artificial gravity environment of the existing structure.
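The gravity produced by a module traveling on a circular path follows the familiar centripetal relation a = ω²r, so the required track radius falls directly out of the target acceleration and rotation rate. As a rough, illustrative calculation (the 2 rpm rate below is an assumption for the example, not a value from the concept):

```python
import math

def ring_radius_for_gravity(g_target, rpm):
    """Radius (m) at which a module on a circular path at the given
    rotation rate experiences g_target (m/s^2) of centripetal
    acceleration: a = omega^2 * r  =>  r = a / omega^2."""
    omega = rpm * 2.0 * math.pi / 60.0  # rotation rate in rad/s
    return g_target / omega**2

# Example: ~1 g at 2 rpm requires a path radius of roughly 224 m.
r = ring_radius_for_gravity(9.81, 2.0)
print(f"{r:.1f} m")
```

Slower rotation rates reduce Coriolis effects on occupants but drive the radius up quickly, which is part of why large rotating structures are costly to build and balance.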
Information Technology and Software
Computer Vision Lends Precision to Robotic Grappling
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of the grapple fixtures relative to the robotic arm's end effectors. To solve this Perspective-n-Point challenge, the software uses computer vision algorithms to determine alignment solutions between the position of the camera eyepoint and the position of the end effector, as the borescope camera sensors are typically located several centimeters from their respective end effector grasping mechanisms.
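The Perspective-n-Point problem amounts to finding the pose that minimizes reprojection error: the distance between where detected fixture features appear in the image and where the known 3D fixture geometry would project under a candidate pose. A minimal sketch of that error metric, using a simple pinhole camera and a hypothetical square fixture target (the geometry and focal length are illustrative assumptions, not the flight software's values):

```python
import numpy as np

def project(points_3d, R, t, f):
    """Pinhole projection of 3D fixture points into the camera image."""
    cam = (R @ points_3d.T).T + t           # transform into the camera frame
    return f * cam[:, :2] / cam[:, 2:3]     # perspective divide

def reprojection_error(observed_2d, points_3d, R, t, f):
    """RMS pixel error between detected fixture features and the model
    points projected with a candidate pose -- the quantity a PnP solver
    drives toward zero."""
    diff = project(points_3d, R, t, f) - observed_2d
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))

# Hypothetical 0.2 m square grapple-fixture target (meters).
pts = np.array([[-0.1, -0.1, 0.0], [0.1, -0.1, 0.0],
                [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]])
R_true, t_true, f = np.eye(3), np.array([0.0, 0.0, 1.0]), 800.0
obs = project(pts, R_true, t_true, f)  # simulated feature detections
print(reprojection_error(obs, pts, R_true, t_true, f))  # 0 at the true pose
```

A PnP solver iterates on R and t until this error is minimized, yielding the fixture pose relative to the camera and, through a known offset, the end effector.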
The software includes a machine learning component that uses a trained region-based Convolutional Neural Network (R-CNN) to analyze a live camera feed and determine ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations.
Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications operating outside of controlled environments may also find value in the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit.
This computer vision software is at a technology readiness level (TRL) 6 (system/subsystem model or prototype demonstration in an operational environment) and is now available to license. Please note that NASA does not manufacture products itself for commercial sale.