This NASA/ESA Hubble Space Telescope image features
the majestic spiral galaxy NGC 3430. Credit: ESA/Hubble & NASA, C. Kilpatrick
This NASA/ESA Hubble Space Telescope image treats viewers to a wonderfully detailed snapshot of the
spiral galaxy NGC 3430 that lies 100 million light-years from Earth in the
constellation Leo Minor. Several other galaxies lie relatively nearby, just beyond the frame of this image; one is close enough that
gravitational interaction is driving some star formation in NGC 3430 — visible
as bright-blue patches near to but outside of the galaxy’s main spiral
structure. This fine example of a galactic spiral holds a bright core from
which a pinwheel array of arms appears to radiate outward. Dark dust lanes and
bright star-forming regions help define these spiral arms.
NGC 3430’s distinct shape may be one reason why astronomer Edwin Hubble used it to help define his classification of galaxies. Namesake
of the Hubble Space Telescope, Edwin Hubble authored a paper in 1926 that
outlined the classification of some four hundred galaxies by their appearance —
as either spiral, barred spiral, lenticular, elliptical, or irregular. This
straightforward typology proved extremely influential, and the detailed schemes
astronomers use today are still based on Edwin Hubble’s work. NGC 3430 itself is an unbarred spiral with open, clearly defined arms — classified today as an SAc galaxy.
Astronomer Edwin Hubble pioneered the study of galaxies based simply on
their appearance. This "Field Guide" outlines Hubble's classification
scheme using images from his namesake telescope. Credit: NASA's Goddard Space
Flight Center; Lead Producer: Miranda Chabot; Lead Writer: Andrea Gianopoulos
Group of chimpanzees including mothers,
juveniles, subadults, and infants grooming and playing. Credit: Catherine
Hobaiter
When
people are having a conversation, they rapidly take turns speaking and
sometimes even interrupt. Now, researchers who have collected the largest ever
dataset of chimpanzee "conversations" have found that they
communicate back and forth using gestures following the same rapid-fire
pattern. The findings are
reported on July 22 in the journal Current
Biology.
"While human languages are incredibly diverse, a hallmark we all share
is that our conversations are structured with fast-paced turns of just 200
milliseconds on average," says Catherine Hobaiter at the University of St
Andrews, UK. "But it was an open question whether this was uniquely human,
or if other animals share this structure."
"We found that the timing of
chimpanzee gesture and human conversational turn-taking is similar
and very fast, which suggests that similar evolutionary mechanisms are driving
these social, communicative interactions," says Gal Badihi, the study's
first author.
The researchers knew that human conversations follow a similar pattern across people and cultures all over the world. They wanted to know whether the same communicative
structure also exists in chimpanzees even though they communicate through
gestures rather than through speech. To find out, they collected data on
chimpanzee "conversations" across five wild communities in East
Africa.
Altogether, they collected data on more than 8,500 gestures from 252 individuals and measured the timing of turn-taking and conversational patterns. They found that 14% of communicative interactions included an exchange of gestures between two interacting individuals. Most exchanges had just two parts, but some ran to as many as seven.
Chimpanzees
exchange gestures after a conflict. Monica (left) reaches to Ursus (right) and
he taps her hand in response. Credit: Gal Badihi
Overall, the data reveal timing similar to human conversation, with short pauses of about 120 milliseconds between a gesture and its gestural response. Behavioral responses to gestures were slower.
"The similarities to human
conversations reinforce the description of these interactions as true gestural
exchanges, in which the gestures produced in response are contingent on those
in the previous turn," the researchers write.
"We did see a little variation
among different chimp communities, which again matches what we see in people
where there are slight cultural variations in conversation pace: some cultures
have slower or faster talkers," Badihi says.
"Fascinatingly, they seem to share
both our universal timing, and subtle cultural differences," says
Hobaiter. "In humans, it is the Danish who are 'slower' responders, and in
Eastern chimpanzees that's the Sonso community in Uganda."
This correspondence between human and
chimpanzee face-to-face communication points to shared underlying rules in
communication, the researchers say.
They note that these structures could
trace back to shared ancestral mechanisms. It's also possible that chimpanzees
and humans arrived at similar strategies to enhance coordinated interactions
and manage competition for communicative "space." The findings
suggest that human communication may not be as unique as one might think.
"It shows that other social species
don't need language to engage in close-range communicative exchanges with quick
response time," Badihi says.
"Human conversations may share
similar evolutionary history or trajectories to the communication systems of
other species, suggesting that this type of communication is not unique to
humans but more widespread in social animals."
In future studies, the researchers say
they want to explore why chimpanzees have these conversations to begin with.
They think chimpanzees often rely on gestures to ask something of one
another.
"We still don't know when these
conversational structures evolved, or why," Hobaiter says. "To get at
that question we need to explore communication in more distantly related
species—so that we can work out if these are an ape-characteristic, or ones
that we share with other highly social species, such as elephants or
ravens."
Daily global average temperature values
from MERRA-2 for the years 1980-2022 are shown in white, values for the year
2023 are shown in pink, and values from 2024 through June are shown in red.
Daily global temperature values from July 1-July 23, 2024, from GEOS-FP are
shown in purple. NASA/Global Modeling and Assimilation
Office/Peter Jacobs
July 22, 2024, was the hottest day on
record, according to a NASA analysis of global daily temperature data. July 21
and 23 of this year also exceeded the previous daily record, set in July 2023.
These record-breaking temperatures are part of a long-term warming trend driven
by human activities, primarily the emission of greenhouse gases. As part of its
mission to expand our understanding of Earth, NASA collects critical long-term
observations of our changing planet.
“In a year that has been the hottest on
record to date, these past two weeks have been particularly brutal,” said NASA
Administrator Bill Nelson. “Through our over two dozen Earth-observing
satellites and over 60 years of data, NASA is providing critical analyses of
how our planet is changing and how local communities can prepare, adapt, and
stay safe. We are proud to be part of the Biden-Harris Administration efforts
to protect communities from extreme heat.”
This preliminary finding comes from data
analyses from Modern-Era Retrospective analysis for Research and Applications,
Version 2 (MERRA-2) and Goddard Earth Observing System Forward Processing
(GEOS-FP) systems, which combine millions of global observations from
instruments on land, sea, air, and satellites using atmospheric models. GEOS-FP
provides rapid, near-real-time weather data, while the MERRA-2 climate reanalysis takes longer but ensures the use of the best-quality observations. These
models are run by the Global Modeling and Assimilation Office (GMAO) at NASA’s
Goddard Space Flight Center in Greenbelt, Maryland.
The results agree with an independent analysis from the European
Union’s Copernicus Earth Observation Programme. While the analyses have small differences, they show broad agreement in the change in temperature over time and in the hottest days.
The latest daily temperature records
follow 13 months of consecutive monthly
temperature records, according to scientists from NASA’s
Goddard Institute for Space Studies in New York. Their analysis was based on
the GISTEMP record, which uses surface
instrumental data alone and provides a longer-term view of changes in global
temperatures at monthly and annual resolutions going back to the late 19th
century.
SimPLE can transform unstructured arrangements of objects (i.e., lying arbitrarily on the table) into structured
arrangements where the object configurations are known accurately (i.e., onto
black pedestals in this image) by performing precise pick-and-place. This is a
fundamental task in automation industries as it eliminates uncertainty and
greatly simplifies any downstream task. Credit: SimPLE video
Most
robotic systems developed to date can either tackle a specific task with high
precision or complete a range of simpler tasks with low precision. For
instance, some industrial robots can complete specific manufacturing tasks very
well but cannot easily adapt to new tasks. On the other hand, flexible robots
designed to handle a variety of objects often lack the accuracy necessary to be
deployed in practical settings.
This trade-off between precision and
generalization has so far hindered the wide-scale deployment of general-purpose
robots or, in other words, robots that can assist human users well across many
different tasks. One capability that is required for tackling various
real-world problems is that of "precise pick and place," which
involves locating, picking up, and placing objects precisely in specific
locations.
Researchers at Massachusetts Institute
of Technology (MIT) recently introduced SimPLE (Simulation to Pick Localize and
placE), a new learning-based, visuo-tactile method that could allow robotic systems to pick up and place a variety of objects. This method, introduced in Science
Robotics, uses simulation to learn how to pick up, re-grasp, and place
different objects, requiring only computer-aided designs of these objects.
"Over the course of several years
working in robotic manipulation, we have closely interacted with industry
partners," Maria Bauza and Antonia Bronars, first authors of the paper,
told Tech Xplore. "It turns out that one of the existing challenges in
automation is precise pick and place of objects. This problem is challenging as
it requires a robot to transform an unstructured arrangement of objects into an
organized arrangement, which can facilitate further manipulation."
Robot manipulation of five objects. Credit: Maria Bauza
Various industrial robots are already capable of picking up, grasping and
putting down different objects. Yet most of these approaches only generalize
across a small set of widely used objects, such as boxes, cups, or bowls, and do not emphasize precision.
Bauza, Bronars and their colleagues set
out to develop a new method that could allow robots to precisely pick up and
place any object, relying only on simulated data. This is in contrast with many
previous approaches, which learn via real-world robot interactions with
different objects.
"SimPLE relies on three main
components, which are developed in simulation," Bauza and Bronars said.
"First, a task-aware grasping module selects a grasp that is stable, observable, and favorable for placing. Then, a visuo-tactile perception module
fuses vision and touch to localize the object with high precision. Finally, a
planning module computes the best path to the goal position, which can include
handing the object off to the other arm, if necessary."
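The three-stage pipeline described in the quote above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the scoring formula, the weighted fusion of pose estimates, and the straight-line planner are all assumptions made for exposition.

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    stability: float      # how securely the grasp holds the object
    observability: float  # how well sensors can localize the object in this grasp
    manipulability: float # how easily this grasp can reach the goal placement

def select_grasp(candidates):
    """Stage 1 (task-aware grasping): pick the grasp with the best combined score.
    The product of the three scores is an assumed stand-in for the paper's metric."""
    return max(candidates,
               key=lambda g: g.stability * g.observability * g.manipulability)

def localize(vision_estimate, tactile_estimate, w_tactile=0.7):
    """Stage 2 (visuo-tactile perception): fuse vision and touch pose estimates.
    A weighted mean is a placeholder for the actual probabilistic fusion."""
    return tuple(w_tactile * t + (1 - w_tactile) * v
                 for v, t in zip(vision_estimate, tactile_estimate))

def plan(start_pose, goal_pose, steps=5):
    """Stage 3 (planning): compute a path from the localized pose to the goal.
    Linear interpolation stands in for the real motion/regrasp planner."""
    return [tuple(s + (g - s) * i / steps for s, g in zip(start_pose, goal_pose))
            for i in range(steps + 1)]
```

Chaining the three stages (select, localize, plan) mirrors the pick, re-grasp, and place sequence the researchers describe, with each stage narrowing the uncertainty left by the previous one.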
Overview of the SimPLE approach and results. The
video highlights the main advantages of SimPLE, shows the method step by step,
and demonstrates a successful placement for each object. It also shows examples
of consecutive placements and representative failure cases. Credit: Maria Bauza
The three modules underlying the SimPLE approach ultimately allow robotic systems to compute robust and efficient plans for manipulating varied objects with high precision. Its most
notable advantage is that the robots will not need to have previously
interacted with objects in the real world, which greatly speeds up their
learning process.
"Our work proposes an approach
to precise pick-and-place that achieves generality without requiring expensive
real robot experience," Bauza and Bronars said. "It does so by
utilizing simulation and known object shapes."
The researchers tested their
proposed method in a series of experiments. They found that it allowed a
robotic system to successfully pick and place 15 types of objects with a
variety of shapes and sizes, while also outperforming baseline robotic manipulation techniques.
SimPLE provides an approach capable of
precisely picking and placing objects, learned entirely in simulation. It
consists of three models: task-aware grasping, visuo-tactile perception, and
motion planning. We show high-fidelity transfer of the models to the real
system for the 15 objects shown at the bottom of the figure. Credit: Science Robotics (2024). DOI: 10.1126/scirobotics.adi8808
Notably,
this work is among the first to combine both visual and tactile information to
train robots on complex manipulation tasks. The team's promising results could
soon encourage other researchers to develop similar approaches for learning in
simulation.
"The practical implications of this
work are quite broad," Bauza and Bronars said. "SimPLE could fit well
in industries where automation is already standard, such as in the automotive
industry, but could also enable automation in many semi-structured environments
such as medium-size factories, hospitals, medical laboratories, etc., where
automation is less commonplace."
Semi-structured environments are
settings that do not change drastically in terms of the general layout or
structure, but can also be flexible in terms of where objects are placed or
what tasks need to be performed at a given time. SimPLE could be well-suited
for allowing robots to complete tasks in these environments, without requiring
extensive real-world training.
Deployment in the real world. Our approach first selects the best grasp from a set of samples on a depth image (A). The best grasp has the highest expected quality given the pose distribution estimate from vision and the precomputed grasp quality scores. Then, we execute the best grasp and update the pose estimate, now incorporating tactile information in addition to the original depth image (B). Next, we take the best estimate from vision and tactile sensing as the start pose and find a plan that leads to the goal pose, using the regrasp graph if necessary (C). Last, we execute the plan (D). Credit: Maria Bauza
Generating models in simulation. Starting
from the object’s CAD model (A), we sampled two types of grasps on the object.
Table grasps (B) are accessible from the object’s resting pose on the table.
For each table grasp, we simulated corresponding depth and tactile images and
used these images to learn visuo-tactile perception models (E). In-air grasps
(C) are accessible during regrasps. We connected in-air grasp samples that are
kinematically feasible into a graph of regrasps (F). We used the visuo-tactile
model and regrasp graph to compute the observability (Obs) and manipulability
(Mani) of a grasp and combined these with grasp stability (GS) to evaluate the
quality of each table grasp (D). Credit: Maria Bauza
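The regrasp graph described in the caption above, which connects kinematically feasible in-air grasps so the robot can hand an object between grippers, can be sketched as an ordinary graph search. The node names, the `feasible` predicate, and the breadth-first search are illustrative assumptions, not the paper's actual data structures.

```python
from collections import deque

def build_regrasp_graph(grasps, feasible):
    """Connect each pair of grasps for which a handoff is kinematically feasible.
    `feasible(a, b)` is an assumed user-supplied predicate."""
    graph = {g: [] for g in grasps}
    for a in grasps:
        for b in grasps:
            if a != b and feasible(a, b):
                graph[a].append(b)
    return graph

def shortest_regrasp_plan(graph, start, goal):
    """BFS from the grasp achieved at pick time to one that permits the placement,
    returning the shortest sequence of handoffs (or None if none exists)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no feasible sequence of regrasps
```

Using the shortest path keeps the number of handoffs, and therefore the accumulated pose uncertainty, as small as possible, which is one plausible reason to plan over an explicit graph rather than improvise regrasps at run time.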
"In
these settings, being able to take an unstructured set of objects into a
structured arrangement is an enabler for any downstream task," Bauza and
Bronars explained. "For instance, an example of a pick-and-place task in a
medical lab would be taking new testing tubes from a box and placing them
precisely into a rack. After the tubes are arranged, they could then be placed in a machine designed to test their contents or could serve other scientific purposes."
The promising method developed by this
team of researchers could soon be trained on a wider range of simulated data
and models of more objects, to further validate its performance and
generalizability. Meanwhile, Bauza, Bronars and their colleagues are working to
increase the dexterity and robustness of their proposed system.
"Two directions of future work
include enhancing the dexterity of the robot to solve even more complex tasks,
and providing a closed-loop solution that, instead of computing a plan,
computes a policy to adapt its actions continuously based on the sensors'
observations," Bauza and Bronars added.
"We made progress on the latter with TEXterity, which leverages continuous tactile information
during task execution, and we plan to continue pushing dexterity and robustness
for high-precision manipulation in our ongoing research."