Two heads are better than one in the
latest images from NASA’s James Webb Space Telescope, which reveal new detail
in a mysterious, little-studied nebula surrounding a dying star.
Nebula PMR 1 is a cloud of gas and dust
that bears an uncanny resemblance to a brain in a transparent skull, inspiring
its nickname, the “Exposed Cranium” nebula. Webb captured its unusual features
in both near- and mid-infrared light. The nebula was first revealed in infrared light by a predecessor to Webb, NASA’s now-retired
Spitzer Space Telescope, more than a decade ago. Webb’s advanced instruments
show detail that enhances the nebula’s brain-like appearance.
Image: Exposed
Cranium Nebula (NIRCam and MIRI Images)
The differences in what Webb’s infrared instruments
reveal and conceal within the PMR 1 “Exposed Cranium” nebula are apparent in
this side-by-side view. More stars and background galaxies shine through
NIRCam’s view, while cosmic dust glows more prominently in MIRI’s mid-infrared.
Image: NASA, ESA, CSA, STScI; Image Processing: Joseph
DePasquale (STScI)
The nebula appears to have distinct
regions that capture different phases of its evolution — an outer shell of gas
that was blown off first and consists mostly of hydrogen, and an inner cloud
with more structure that contains a mix of different gases. Both Webb’s NIRCam
(Near-Infrared Camera) and MIRI (Mid-Infrared Instrument) show a distinctive
dark lane running vertically through the middle of the nebula that defines its
brain-like look of left and right hemispheres. Webb’s resolution shows that
this lane could be related to an outburst or outflow from the central star,
which typically occurs as twin jets burst out in opposite directions. Evidence
for this is particularly notable at the top of the nebula in Webb’s MIRI image,
where it looks like the inner gas is being ejected outward.
While there is still much to be
understood about this nebula, it’s clear that it is being created by a star
near the end of its fuel-burning “life.” In their end stages, stars expel their
outer layers. It’s a dynamic and fairly fast process, in cosmic terms. Webb has
captured a moment in this star’s decline. What ultimately happens will depend
on the mass of the star, which is yet to be determined. If it’s massive enough,
it will explode in a supernova. A less massive Sun-like star will continue to
shed layers until only its core remains as a dense white dwarf, which will cool off over eons.
The James Webb Space Telescope is the
world’s premier space science observatory. Webb is solving mysteries in our
solar system, looking beyond to distant worlds around other stars, and probing
the mysterious structures and origins of our universe and our place in it. Webb
is an international program led by NASA with its partners, ESA (European Space
Agency) and CSA (Canadian Space Agency).
HoloRadar uses radio waves to see around
corners, allowing it to detect people at T-shaped intersections like the one
pictured here. Credit: Sylvia Zhang, Penn Engineering
Penn
Engineers have developed a system that lets robots see around corners using
radio waves processed by AI, a capability that could improve the safety and
performance of driverless cars as well as robots operating in cluttered indoor
settings like warehouses and factories.
The system, called HoloRadar, enables robots to reconstruct three-dimensional
scenes outside their direct line of sight, such as pedestrians rounding a
corner. Unlike previous approaches to non-line-of-sight (NLOS) perception that rely on visible light,
HoloRadar works reliably in darkness and under variable lighting conditions.
"Robots and autonomous vehicles
need to see beyond what's directly in front of them," says Mingmin Zhao,
Assistant Professor in Computer and Information Science (CIS) and senior author
of a paper describing HoloRadar, presented at the 39th annual Conference on Neural Information Processing
Systems (NeurIPS).
"This capability is essential to help robots and autonomous vehicles make
safer decisions in real time."
HoloRadar allows robots to see around corners in
varied lighting conditions by relying on radio signals and AI. Credit: Sylvia
Zhang and WAVES Lab, Penn Engineering
Turning walls into mirrors
At the heart of HoloRadar is a
counterintuitive insight into radio waves. Compared to visible light, radio
signals have much longer wavelengths, a property traditionally seen as a
disadvantage for imaging because it limits resolution. Zhao's team realized
that, for peering around corners, those longer wavelengths are actually an
advantage.
"Because radio waves are so
much larger than the tiny surface variations in walls," says Haowen Lai, a
doctoral student in CIS and co-author of the new paper, "those surfaces
effectively become mirrors that reflect radio signals in predictable
ways."
In practical terms, this means that
flat surfaces like walls, floors, and ceilings can bounce radio signals around
corners, carrying information about hidden spaces back to a robot. HoloRadar
captures these reflections and reconstructs what lies beyond direct view.
"It's similar to how human
drivers sometimes rely on mirrors stationed at blind intersections," says
Lai. "Because HoloRadar uses radio waves, the environment itself becomes
full of mirrors, without actually having to change the environment."
HoloRadar works by reconstructing 3D
scenes from the bounces of radio waves. Credit: WAVES Lab,
Penn Engineering
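The physics behind this insight can be made concrete with the Rayleigh roughness criterion, the standard rule of thumb for when a surface reflects specularly: a surface acts as a mirror when its height variations are small compared to the wavelength. The article does not give numbers, so the roughness value and radar band below are illustrative assumptions, not details from the paper. A minimal sketch in Python:

    import math

    C = 3e8  # speed of light, m/s

    def is_specular(rms_roughness_m, freq_hz, incidence_deg=0.0):
        # Rayleigh criterion: specular if sigma < lambda / (8 cos theta),
        # with the incidence angle measured from the surface normal.
        wavelength = C / freq_hz
        theta = math.radians(incidence_deg)
        return rms_roughness_m < wavelength / (8 * math.cos(theta))

    # Assumed RMS roughness of a painted interior wall: ~0.2 mm
    sigma = 0.2e-3

    # 77 GHz mmWave radar (a common automotive band; the paper's exact
    # hardware is not stated here): wavelength ~3.9 mm
    print(is_specular(sigma, 77e9))   # True  -> the wall is a radio mirror

    # Visible green light (~600 THz, wavelength ~0.5 micrometers)
    print(is_specular(sigma, 6e14))   # False -> the wall scatters light diffusely

The same wall that looks matte to a camera is, at radar wavelengths, effectively smooth, which is why the environment becomes "full of mirrors" without any modification.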
Designed for in-the-wild operations
In recent years, other researchers
have demonstrated systems with similar capabilities, typically by using visible light. Those systems
analyze shadows or indirect reflections, making them highly dependent on
lighting conditions. Other attempts to use radio signals have relied on slow and
bulky scanning equipment, limiting real-world applications.
"HoloRadar is designed to work
in the kinds of environments robots actually operate in," says Zhao.
"This system is mobile, runs in real time, and doesn't depend on
controlled lighting."
HoloRadar augments the safety of
autonomous robots by complementing existing sensors rather than replacing them.
While autonomous vehicles already use LiDAR, a sensing system that uses lasers to detect objects
in the vehicles' direct line of sight, HoloRadar adds an additional layer of
perception by revealing what those sensors cannot see, giving machines more
time to react to potential hazards.
HoloRadar relies on compact and nimble
scanning equipment, opening up real-world applications. Credit:
Sylvia Zhang, Penn Engineering
A single radio pulse can bounce
multiple times before returning to the sensor, producing a tangled web of
reflections that traditional signal-processing methods alone struggle to
separate.
To solve this problem, the team
developed a custom AI system that combines machine learning with physics-based
modeling. In the first stage, the system enhances the resolution of raw radio
signals and identifies multiple "returns" corresponding to different
reflection paths. In the second stage, the system uses a physics-guided model
to trace those reflections backward, undoing the mirror-like effects of the
environment and reconstructing the actual 3D scene.
"In some sense, the challenge
is similar to walking into a room full of mirrors," says Zitong Lan, a
doctoral student in Electrical and Systems Engineering (ESE) and co-author of
the paper. "You see many copies of the same object reflected in different
places, and the hard part is figuring out where things really are. Our system
learns how to reverse that process in a physics-grounded way."
By explicitly modeling how radio
waves bounce off surfaces, the AI can distinguish between direct and indirect
reflections and determine the correct physical locations of a variety of
objects, including people.
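The geometric core of that back-tracing step is simple to state, even though the article does not give the model's details; the following is a hypothetical sketch under assumed geometry, not HoloRadar's actual code. For a single bounce off a known wall plane, a detected object appears at the mirror image of its true position, so reflecting the detection back across the plane recovers where it really is:

    import numpy as np

    def reflect_across_plane(point, plane_point, plane_normal):
        # Mirror a 3D point across the plane defined by a point on it
        # and its unit normal.
        p = np.asarray(point, dtype=float)
        n = np.asarray(plane_normal, dtype=float)
        n = n / np.linalg.norm(n)
        d = np.dot(p - np.asarray(plane_point, dtype=float), n)
        return p - 2.0 * d * n

    # Assumed geometry: a wall occupying the plane x = 2 meters,
    # with its normal pointing back toward the sensor.
    wall_point = [2.0, 0.0, 0.0]
    wall_normal = [-1.0, 0.0, 0.0]

    # The radar "sees" a ghost pedestrian apparently behind the wall...
    ghost = [3.5, 1.0, 0.0]

    # ...and unfolding places them around the corner on the near side.
    print(reflect_across_plane(ghost, wall_point, wall_normal))  # [0.5 1. 0.]

In the full system, per the article, the reflecting surfaces are not given in advance, and a pulse may bounce several times, so the learned first stage separates the overlapping returns before the physics-guided stage applies reflections like this one to place objects correctly.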
From the lab to the real world
The researchers tested HoloRadar on
a mobile robot in real indoor environments, including hallways and building
corners. In these settings, the system successfully reconstructed walls,
corridors, and hidden human subjects located outside the robot's line of sight.
Future work will explore outdoor
scenarios, such as intersections and urban streets, where longer distances and
more dynamic conditions introduce additional challenges.
"This is an important step
toward giving robots a more complete understanding of their surroundings,"
says Zhao. "Our long-term goal is to enable machines to operate safely and
intelligently in the dynamic and complex environments humans navigate every
day."