Much as familiar landmarks can give travelers a sense of direction when their smartphones lose their lock on GPS signals, a NASA engineer is teaching a machine to use features on the Moon’s horizon to navigate across the lunar surface.
“For safety and science geotagging, it’s
important for explorers to know exactly where they are as they explore the
lunar landscape,” said Alvin Yew, a research engineer at NASA’s Goddard Space
Flight Center in Greenbelt, Maryland. “Equipping an onboard device with a local
map would support any mission, whether robotic or human.”
Image caption: The collection of ridges, craters, and boulders that form a lunar horizon can be used by an artificial intelligence to accurately locate a lunar traveler. A system being developed by Research Engineer Alvin Yew would provide a backup location service for future explorers, robotic or human. Credit: NASA/MoonTrek/Alvin Yew
NASA is currently working with industry and international partners to
develop LunaNet, a communications and navigation architecture that will bring
“internet-like” capabilities to the Moon, including location services.
However, explorers in some regions of the lunar surface may need
overlapping solutions drawn from multiple sources to ensure their safety
should communication signals be unavailable.
“It’s critical to have dependable backup systems when we’re talking about
human exploration,” Yew said. “The motivation for me was to enable lunar crater
exploration, where the entire horizon would be the crater rim.”
Yew started with data from NASA’s Lunar Reconnaissance Orbiter, specifically the Lunar Orbiter Laser Altimeter (LOLA). LOLA measures
slopes and lunar surface roughness and generates high-resolution topographic maps
of the Moon. Using LOLA’s digital elevation models, Yew is training an
artificial intelligence to recreate features on the lunar horizon as they
would appear to an explorer standing on the lunar surface. Those digital
panoramas can then be correlated with the boulders and ridges visible in
pictures taken by a rover or astronaut, yielding an accurate position fix for
any given region.
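To make the idea concrete, here is a minimal sketch of how a horizon profile can be rendered from a digital elevation model. It is not Yew’s actual renderer: the grid layout, ray-marching search, assumed eye height, and flat-terrain approximation (which ignores the Moon’s curvature over long distances) are all illustrative assumptions.

    # Minimal sketch: render the horizon seen from one cell of a DEM.
    # Assumptions (not a LOLA product format): dem is a 2-D numpy array of
    # heights in meters, rows run north-to-south, spacing_m is cell size.
    import numpy as np

    def horizon_profile(dem, spacing_m, row, col, n_azimuths=360,
                        max_range_m=300_000):
        """Return the elevation angle (radians) of the horizon at each
        azimuth, for an observer standing on DEM cell (row, col)."""
        observer_h = dem[row, col] + 1.7  # assumed eye height, meters
        angles = np.full(n_azimuths, -np.inf)
        steps = np.arange(1, int(max_range_m / spacing_m))
        r = steps * spacing_m             # ground distance to each sample
        for i, az in enumerate(np.linspace(0, 2 * np.pi, n_azimuths,
                                           endpoint=False)):
            # March outward along the ray, sampling terrain heights.
            rr = np.round(row - steps * np.cos(az)).astype(int)
            cc = np.round(col + steps * np.sin(az)).astype(int)
            valid = ((rr >= 0) & (rr < dem.shape[0]) &
                     (cc >= 0) & (cc < dem.shape[1]))
            if not valid.any():
                continue
            h = dem[rr[valid], cc[valid]]
            # The horizon is the steepest elevation angle along the ray.
            angles[i] = np.max(np.arctan2(h - observer_h, r[valid]))
        return angles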
“Conceptually, it’s like going outside and trying to figure out where you
are by surveying the horizon and surrounding landmarks,” Yew said. “While a
ballpark location estimate might be easy for a person, we want to demonstrate
accuracy on the ground down to less than 30 feet (9 meters). This accuracy
opens the door to a broad range of mission concepts for future exploration.”
Making efficient use of LOLA data, a handheld device could be programmed
with only a local subset of terrain and elevation data to conserve memory. According
to work published by Goddard researcher Erwan Mazarico, a lunar explorer can
see at most about 180 miles (300 kilometers) from any unobstructed
location on the Moon, which bounds how much map data an onboard device needs
to carry. Even on Earth, Yew’s location technology could help
explorers in terrain where GPS signals are obstructed or subject to
interference.
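That visibility limit suggests a simple storage strategy, sketched below under the same assumed grid layout as above: since nothing beyond roughly 300 kilometers can appear on the horizon, an onboard map only needs the DEM cells within that radius of the mission area. The function name and tiling scheme are hypothetical, not a LOLA data product.

    # Minimal sketch: crop a DEM to the patch an onboard device must store.
    import numpy as np

    VISIBILITY_RADIUS_M = 300_000  # max line-of-sight distance on the Moon

    def local_tile(dem, spacing_m, row, col):
        """Crop the square of DEM cells within the visibility radius of
        the mission site at (row, col)."""
        margin = int(VISIBILITY_RADIUS_M / spacing_m)
        r0, r1 = max(0, row - margin), min(dem.shape[0], row + margin + 1)
        c0, c1 = max(0, col - margin), min(dem.shape[1], col + margin + 1)
        return dem[r0:r1, c0:c1]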
Yew’s geolocation system will leverage the capabilities of GIANT (Goddard
Image Analysis and Navigation Tool). This optical navigation tool, developed
primarily by Goddard engineer Andrew Liounis, previously double-checked and
verified navigation data for NASA’s OSIRIS-REx mission to collect a sample from
asteroid Bennu (see CuttingEdge, Summer 2021).
In contrast to radar or laser-ranging tools, which pulse radio signals or
light at a target and analyze the returning signals, GIANT quickly and
accurately analyzes images to measure the distance to and between visible
landmarks. The portable version is cGIANT, a derivative library of Goddard’s
autonomous Navigation Guidance and Control system (autoGNC), which provides
mission autonomy solutions for all stages of spacecraft and rover operations.
Matching AI interpretations of visual panoramas against a known model of a
moon or planet’s terrain could provide a powerful navigation tool for future
explorers.
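The matching step itself can be illustrated with a brute-force search, reusing the horizon_profile() sketch above. This is not the GIANT/cGIANT pipeline, only a toy version of the idea: render the model horizon at each candidate position, score it against the profile extracted from the explorer’s panorama, and let the best-scoring circular shift double as a heading estimate.

    # Minimal sketch: locate an observer by matching an observed horizon
    # profile against profiles rendered from the terrain model.
    import numpy as np

    def locate(observed, dem, spacing_m, candidates):
        """observed: horizon angles from a camera panorama, one per azimuth
        bin. candidates: iterable of (row, col) cells to test. Returns the
        best (row, col, heading_offset_in_bins)."""
        best, best_err = None, np.inf
        for row, col in candidates:
            rendered = horizon_profile(dem, spacing_m, row, col,
                                       n_azimuths=len(observed))
            for shift in range(len(observed)):  # camera heading is unknown
                err = np.mean((np.roll(rendered, shift) - observed) ** 2)
                if err < best_err:
                    best, best_err = (row, col, shift), err
        return best

A real system would replace this exhaustive grid search with something far faster, but the structure is the same: the terrain model predicts what the horizon should look like from each place, and the place whose prediction best matches the picture wins.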