Saturday, August 30, 2025

NASA Marsquake Data Reveals Lumpy Nature of Red Planet’s Interior - UNIVERSE

Scientists believe giant impacts — like the one depicted in this artist’s concept — occurred on Mars 4.5 billion years ago, injecting debris from the impact deep into the planet’s mantle. NASA’s InSight lander detected this debris before the mission’s end in 2022.

NASA/JPL-Caltech

Rocky material that impacted Mars lies scattered in giant lumps throughout the planet’s mantle, offering clues about Mars’ interior and its ancient past.

What appear to be fragments from the aftermath of massive impacts on Mars that occurred 4.5 billion years ago have been detected deep below the planet’s surface. The discovery was made thanks to NASA’s now-retired InSight lander, which recorded the findings before the mission’s end in 2022. The ancient impacts released enough energy to melt continent-size swaths of the early crust and mantle into vast magma oceans, simultaneously injecting the impactor fragments and Martian debris deep into the planet’s interior.

There’s no way to tell exactly what struck Mars: The early solar system was filled with a range of different rocky objects that could have done so, including some so large they were effectively protoplanets. The remains of these impacts still exist in the form of lumps that are as large as 2.5 miles (4 kilometers) across and scattered throughout the Martian mantle. They offer a record preserved only on worlds like Mars, whose lack of tectonic plates has kept its interior from being churned up the way Earth’s is through a process known as convection.

A cutaway view of Mars in this artist’s concept (not to scale) reveals debris from ancient impacts scattered through the planet’s mantle. On the surface at left, a meteoroid impact sends seismic signals through the interior; at right is NASA’s InSight lander.

NASA/JPL-Caltech

The finding was reported Thursday, Aug. 28, in a study published by the journal Science.

“We’ve never seen the inside of a planet in such fine detail and clarity before,” said the paper’s lead author, Constantinos Charalambous of Imperial College London. “What we’re seeing is a mantle studded with ancient fragments. Their survival to this day tells us Mars’ mantle has evolved sluggishly over billions of years. On Earth, features like these may well have been largely erased.”

InSight, which was managed by NASA’s Jet Propulsion Laboratory in Southern California, placed the first seismometer on Mars’ surface in 2018. The extremely sensitive instrument recorded 1,319 marsquakes before the lander’s end of mission in 2022.

NASA’s InSight took this selfie in 2019 using a camera on its robotic arm. The lander also used its arm to deploy the mission’s seismometer, whose data was used in a 2025 study showing impacts left chunks of debris deep in the planet’s interior.

NASA/JPL-Caltech

Quakes produce seismic waves that change as they pass through different kinds of material, providing scientists a way to study the interior of a planetary body. To date, the InSight team has measured the size, depth, and composition of Mars’ crust, mantle, and core. This latest discovery regarding the mantle’s composition suggests how much is still waiting to be discovered within InSight’s data.

“We knew Mars was a time capsule bearing records of its early formation, but we didn’t anticipate just how clearly we’d be able to see with InSight,” said Tom Pike of Imperial College London, coauthor of the paper.

Quake hunting

Mars lacks the tectonic plates that produce the temblors many people in seismically active areas are familiar with. But there are two other types of quakes on Earth that also occur on Mars: those caused by rocks cracking under heat and pressure, and those caused by meteoroid impacts.

Of the two types, it is meteoroid impacts on Mars that produce high-frequency seismic waves traveling from the crust deep into the planet’s mantle, according to a paper published earlier this year in Geophysical Research Letters. Located beneath the planet’s crust, the Martian mantle can be as much as 960 miles (1,550 kilometers) thick and is made of solid rock that can reach temperatures as high as 2,732 degrees Fahrenheit (1,500 degrees Celsius).

Scrambled signals

The new Science paper identifies eight marsquakes whose seismic waves carried strong, high-frequency energy deep into the mantle, where the waves were distinctly altered.

“When we first saw this in our quake data, we thought the slowdowns were happening in the Martian crust,” Pike said. “But then we noticed that the farther seismic waves travel through the mantle, the more these high-frequency signals were being delayed.”

Using planetwide computer simulations, the team saw that the slowing down and scrambling happened only when the signals passed through small, localized regions within the mantle. They also determined that these regions appear to be lumps of material with a different composition than the surrounding mantle.
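
To get a feel for why accumulating delays point to localized regions of different material, consider a simplified travel-time calculation. The sketch below is purely illustrative: the wave speeds, path length, and lump size are assumed round numbers, not values from the study, and real high-frequency signals are scattered as well as slowed.

```python
# Toy illustration (values assumed, not from the study): how a localized,
# slower "lump" delays a seismic wave relative to a path through uniform mantle.

def travel_time_s(path_km: float, speed_km_s: float) -> float:
    """Time for a wave to cross a path of given length at a given speed."""
    return path_km / speed_km_s

MANTLE_SPEED = 5.0    # assumed wave speed in "normal" mantle rock, km/s
LUMP_SPEED = 4.5      # assumed slower speed inside a compositionally distinct lump, km/s
PATH_LENGTH = 1000.0  # total path length through the mantle, km
LUMP_LENGTH = 4.0     # lump size comparable to the ~4 km fragments described, km

# Path entirely through normal mantle
t_uniform = travel_time_s(PATH_LENGTH, MANTLE_SPEED)

# Same path, but 4 km of it crosses the slower lump
t_with_lump = (travel_time_s(PATH_LENGTH - LUMP_LENGTH, MANTLE_SPEED)
               + travel_time_s(LUMP_LENGTH, LUMP_SPEED))

print(f"Uniform mantle: {t_uniform:.3f} s")
print(f"Path with lump: {t_with_lump:.3f} s")
print(f"Extra delay:    {(t_with_lump - t_uniform) * 1000:.1f} ms per lump crossed")
# A longer path through the mantle crosses more such lumps, so the delay grows
# with distance traveled, matching the pattern seen in the high-frequency signals.
```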

With one riddle solved, the team focused on another: how those lumps got there.

Turning back the clock, they concluded that the lumps likely arrived as giant asteroids or other rocky material that struck Mars early in the solar system’s history, generating those oceans of magma as the impactors drove deep into the mantle and carried fragments of crust and mantle down with them.

Charalambous likens the pattern to shattered glass — a few large shards with many smaller fragments. The pattern is consistent with a large release of energy that scattered many fragments of material throughout the mantle. It also fits well with current thinking that in the early solar system, asteroids and other planetary bodies regularly bombarded the young planets.

On Earth, the crust and uppermost mantle are continuously recycled by plate tectonics, which pushes a plate’s edge into the hot interior, where, through convection, hotter, less-dense material rises and cooler, denser material sinks. Mars, by contrast, lacks tectonic plates, and its interior circulates far more sluggishly. The fact that such fine structures are still visible today, Charalambous said, “tells us Mars hasn’t undergone the vigorous churning that would have smoothed out these lumps.”

And in that way, Mars could point to what may be lurking beneath the surface of other rocky planets that lack plate tectonics, including Venus and Mercury.

More about InSight

JPL managed InSight for NASA’s Science Mission Directorate. InSight was part of NASA’s Discovery Program, managed by the agency’s Marshall Space Flight Center in Huntsville, Alabama. Lockheed Martin Space in Denver built the InSight spacecraft, including its cruise stage and lander, and supported spacecraft operations for the mission.

A number of European partners, including France’s Centre National d’Études Spatiales (CNES) and the German Aerospace Center (DLR), supported the InSight mission. CNES provided the Seismic Experiment for Interior Structure (SEIS) instrument to NASA, with the principal investigator at IPGP (Institut de Physique du Globe de Paris). Significant contributions for SEIS came from IPGP; the Max Planck Institute for Solar System Research (MPS) in Germany; the Swiss Federal Institute of Technology (ETH Zurich) in Switzerland; Imperial College London and Oxford University in the United Kingdom; and JPL. DLR provided the Heat Flow and Physical Properties Package (HP3) instrument, with significant contributions from the Space Research Center (CBK) of the Polish Academy of Sciences and Astronika in Poland. Spain’s Centro de Astrobiología (CAB) supplied the temperature and wind sensors. 

Source: NASA Marsquake Data Reveals Lumpy Nature of Red Planet’s Interior - NASA   

AI trained to predict nationality from beliefs and values - Other Sciences - Social Sciences

Different countries have different cultures, and social scientists have developed theories about which values are most important in differentiating the world's cultures. Abhishek Sheetal and colleagues used the power of machine learning to identify the crucial distinguishing characteristics of the world's national cultures in a theory-blind manner. The findings are published in the journal PNAS Nexus.

The authors trained a neural network to predict an individual's country of origin from their attitudes, values, and beliefs, as measured by the World Values Survey, a global study that probes everything from religious beliefs to political views. Given an unknown individual's survey responses, the model was able to determine which of 98 countries the person was from with 90% accuracy.

Out of nearly 600 possible predictors, the authors extracted the top 60 most predictive survey questions, including in first place, "To what extent do you think maintaining order in society is the most important responsibility of the government?" and in second place, "To have a successful marriage, how important is it that spouses agree on politics?"
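
A minimal sketch of this kind of analysis appears below, using synthetic stand-in data rather than actual World Values Survey responses, and a simple scikit-learn classifier with permutation importance in place of the authors' neural network and their exact feature-ranking procedure. The scale is also reduced for speed; the real study covered 98 countries and nearly 600 predictors.

```python
# Hypothetical sketch: predict country of origin from survey-style answers,
# then rank the questions by how much each contributes to the prediction.
# All data here are synthetic; names and scales are illustrative only.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_questions, n_countries = 4900, 100, 98  # real study: ~600 questions

# Synthetic Likert-style answers (1-5) with a country-dependent shift per question
country = rng.integers(0, n_countries, size=n_people)
country_profile = rng.normal(0.0, 1.0, size=(n_countries, n_questions))
answers = np.clip(
    np.rint(3 + country_profile[country] + rng.normal(0.0, 1.0, (n_people, n_questions))),
    1, 5)

X_train, X_test, y_train, y_test = train_test_split(
    answers, country, test_size=0.2, random_state=0)

# Stand-in for the paper's neural network
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))

# Rank questions by how much shuffling each one hurts held-out accuracy,
# analogous to extracting the most predictive survey items
imp = permutation_importance(clf, X_test, y_test, n_repeats=3, random_state=0)
top_questions = np.argsort(imp.importances_mean)[::-1][:10]
print("Most predictive question indices:", top_questions)
```

With real survey data and a stronger model, the same recipe (fit, score on held-out respondents, rank questions by predictive contribution) would produce a ranked list like the top 60 items reported here.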

Themes related to political attitudes, environmental attitudes, family values, and interpersonal relationships frequently appeared among the top 60 items, as expected based on broader research in the social sciences; however, there were also surprises.

Attitudes around the relationship between the government and society, gender roles, and marriage and family, which are seldom emphasized in social science theories related to culture, were also important for identifying individuals' country of origin. The authors include case studies to illustrate the possibilities in the context of cultural differences in environmental behavior and social distancing during the COVID-19 pandemic.

According to the authors, the study demonstrates that machine learning-based models of cultural values can serve as a viable alternative to traditional theory-driven models, and these models offer cross-cultural social scientists and international business researchers a new tool for uncovering novel explanations of cultural differences.

Source: AI trained to predict nationality from beliefs and values 

NASA Seeks Volunteers to Track Artemis II Mission


On the 19th day of the Artemis I mission, Dec. 4, 2022, a camera mounted on the Orion spacecraft captured the Moon just in frame.

Credits: NASA

NASA seeks volunteers to passively track the Artemis II Orion spacecraft as the crewed mission travels to the Moon and back to Earth.

The Artemis II test flight, a launch of the agency’s SLS (Space Launch System) rocket and Orion spacecraft, will send NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, on an approximately 10-day mission around the Moon.

The mission, targeted for no later than April 2026, will rely on NASA’s Near Space Network and Deep Space Network for primary communications and tracking support throughout its launch, orbit, and reentry. However, with a growing focus on commercialization, NASA wants to further understand industry’s tracking capabilities.  

This collaboration opportunity builds upon a previous request released by NASA’s SCaN (Space Communication and Navigation) Program during the Artemis I mission, where ten volunteers successfully tracked the uncrewed Orion spacecraft in 2022 on its journey thousands of miles beyond the Moon and back.

During the Artemis I mission, participants – including international space agencies, academic institutions, commercial companies, nonprofits, and private citizens – attempted to receive Orion’s signal and used their respective ground antennas to track and measure changes in the radio waves it transmitted.
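
In practice, passive tracking of this kind largely comes down to measuring how the received carrier frequency shifts as the spacecraft’s line-of-sight velocity changes (the Doppler effect). The sketch below uses an assumed S-band carrier frequency and illustrative velocities, not actual Artemis values, simply to show the size of the shifts a ground station has to resolve.

```python
# Toy Doppler-shift calculation (assumed values, not mission parameters).
# Volunteer stations measure how the received carrier frequency changes as the
# spacecraft moves toward or away from them along the line of sight.
C = 299_792_458.0    # speed of light, m/s
F_CARRIER = 2.2e9    # assumed S-band downlink carrier, Hz

def doppler_shift_hz(radial_velocity_m_s: float) -> float:
    """First-order Doppler shift; positive velocity means the spacecraft is receding."""
    return -F_CARRIER * radial_velocity_m_s / C

# Illustrative line-of-sight velocities in m/s (positive = receding from the station)
for v in (1_500.0, 800.0, -2_400.0):
    print(f"radial velocity {v:+7.0f} m/s -> Doppler shift {doppler_shift_hz(v) / 1e3:+8.2f} kHz")

# Logging this shift over time constrains the trajectory, which is the kind of
# measurement the Artemis I volunteers reported back to NASA.
```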


“By offering this opportunity to the broader aerospace community, we can identify available tracking capabilities outside the government,” said Kevin Coggins, NASA’s deputy associate administrator for SCaN at NASA Headquarters in Washington. “This data will help inform our transition to a commercial-first approach, ultimately strengthening the infrastructure needed to support Artemis missions and our long-term Moon to Mars objectives.”


Responses to the opportunity announcement are due by 5 p.m. EDT on Monday, Oct. 27.

NASA’s SCaN Program serves as the management office for the agency’s space communications and navigation systems. More than 100 NASA and non-NASA missions rely on SCaN’s two networks, the Near Space Network and the Deep Space Network, to support astronauts aboard the International Space Station and future Artemis missions, monitor Earth’s weather, support lunar exploration, and explore the solar system and beyond.

Artemis II will help confirm the systems and hardware needed for human deep space exploration. This mission is the first crewed flight under NASA’s Artemis campaign and is another step toward new U.S.-crewed missions on the Moon’s surface that will help the agency prepare to send American astronauts to Mars. 

Source: NASA Seeks Volunteers to Track Artemis II Mission - NASA   

The Largest World Robot Conference in China 2025 - PRO ROBOTS

 

Jenna Ortega vs. the Wednesday cast | Hot Ones Versus - First We Feast


 

Best Fails You Can't Miss 🤣 Men Caught on Camera - FailArmy

 

Short Clip - Atomic Blonde’s (2017) Most Brutal Fight Scenes | All Action

 

Zoë Kravitz and Austin Butler Get Drunk Playing 90s Trivia, Talk 'Caught Stealing' - Rolling Stone

 

Superman | The Daily Planet Set | Behind the Scenes | Warner Bros. Entertainment

 

Funny and Weird Clips (3681)

Friday, August 29, 2025

NASA’s New SPHEREx Mission Observes Interstellar Comet - UNIVERSE

Comet 3I/ATLAS

Cataloguing the journey of comet 3I/ATLAS through the solar system: because the object comes from outside our solar system, it is just passing through, so NASA is using all the tools at its disposal to observe it before it disappears back into the cosmic dark. A host of NASA missions are coming together to observe this interstellar object, first discovered in summer 2025, before it leaves the solar system for good.

NASA/SPHEREx

NASA’s SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer) observed interstellar comet 3I/ATLAS from Aug. 7 to Aug. 15. The SPHEREx team has been analyzing the data, and a research note is available online. SPHEREx is one of several NASA space telescopes observing the comet; together they are providing more information about its size, physical properties, and chemical makeup. NASA’s Webb and Hubble space telescopes, for example, also recently observed it. While the comet poses no threat to Earth, these observations support the agency’s ongoing mission to find, track, and better understand solar system objects.



Alise Fisher
NASA Headquarters, Washington

Source: NASA’s New SPHEREx Mission Observes Interstellar Comet - NASA Science

Can large language models figure out the real world? New metric measures AI's predictive power - Computer Sciences - Machine learning & AI

In the 17th century, German astronomer Johannes Kepler figured out the laws of motion that made it possible to accurately predict where our solar system's planets would appear in the sky as they orbit the sun. But it wasn't until decades later, when Isaac Newton formulated the universal laws of gravitation, that the underlying principles were understood.

Although Newton’s laws were inspired by Kepler’s, they went much further, making it possible to apply the same formulas to everything from the trajectory of a cannonball to the way the moon’s pull controls the tides on Earth, or how to launch a satellite from Earth to the surface of the moon or planets.

Today's sophisticated artificial intelligence systems have gotten very good at making the kind of specific predictions that resemble Kepler's orbit predictions. But do they know why these predictions work, with the kind of deep understanding that comes from basic principles like Newton's laws?

As the world grows ever-more dependent on these kinds of AI systems, researchers are struggling to try to measure just how they do what they do, and how deep their understanding of the real world actually is.

Now, researchers in MIT's Laboratory for Information and Decision Systems (LIDS) and at Harvard University have devised a new approach to assessing how deeply these predictive systems understand their subject matter, and whether they can apply knowledge from one domain to a slightly different one. And by and large, the answer at this point, in the examples they studied, is—not so much.

The findings were presented at the International Conference on Machine Learning (ICML 2025), in Vancouver, British Columbia, last month by Harvard postdoc Keyon Vafa, MIT graduate student in electrical engineering and computer science and LIDS affiliate Peter G. Chang, MIT assistant professor and LIDS principal investigator Ashesh Rambachan, and MIT professor, LIDS principal investigator, and senior author Sendhil Mullainathan.

"Humans all the time have been able to make this transition from good predictions to world models," says Vafa, the study's lead author. So the question their team was addressing was, "Have foundation models—has AI—been able to make that leap from predictions to world models? And we're not asking are they capable, or can they, or will they. It's just, have they done it so far?" he says.

"We know how to test whether an algorithm predicts well. But what we need is a way to test for whether it has understood well," says Mullainathan, the Peter de Florez Professor with dual appointments in the MIT departments of Economics and Electrical Engineering and Computer Science and the senior author on the study. "Even defining what understanding means was a challenge."

In the Kepler versus Newton analogy, Vafa says, "They both had models that worked really well on one task, and that worked essentially the same way on that task. What Newton offered was ideas that were able to generalize to new tasks." That capability, when applied to the predictions made by various AI systems, would entail having it develop a world model so it can "transcend the task that you're working on and be able to generalize to new kinds of problems and paradigms."

Another analogy that helps to illustrate the point is the difference between centuries of accumulated knowledge of how to selectively breed crops and animals, versus Gregor Mendel's insight into the underlying laws of genetic inheritance.

"There is a lot of excitement in the field about using foundation models to not just perform tasks, but to learn something about the world," for example, in the natural sciences, he says. "It would need to adapt, have a world model to adapt to any possible task."

Are AI systems anywhere near the ability to reach such generalizations? To test the question, the team looked at different examples of predictive AI systems, at different levels of complexity. On the very simplest of examples, the systems succeeded in creating a realistic model of the simulated system, but as the examples got more complex, that ability faded fast.

The team developed a new metric, a way of measuring quantitatively how well a system approximates real-world conditions. They call the measurement inductive bias—that is, a tendency or bias toward responses that reflect reality, based on inferences developed from looking at vast amounts of data on specific cases.

The simplest level of examples they looked at was known as a lattice model. In a one-dimensional lattice, something can move only along a line. Vafa compares it to a frog jumping between lily pads in a row. As the frog jumps or sits, it calls out what it's doing—right, left, or stay. If it reaches the last lily pad in the row, it can only stay or go back. If someone, or an AI system, can just hear the calls, without knowing anything about the number of lily pads, can it figure out the configuration?

The answer is yes: Predictive models do well at reconstructing the "world" in such a simple case. But even with lattices, as you increase the number of dimensions, the systems no longer can make that leap.
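
A minimal sketch of the one-dimensional case helps show why such a simple "world" is recoverable at all. The code below is not the authors' setup or their metric; it simply simulates a frog calling out its moves and then reconstructs the number of lily pads from the call sequence alone by tracking relative position.

```python
# Sketch of the 1-D lattice ("frog on lily pads") example. The frog announces
# only its moves; an observer who hears the calls can still reconstruct the
# lattice by keeping track of relative position.
import random

def simulate_calls(num_pads: int, steps: int, seed: int = 0) -> list[str]:
    """Random walk on a row of pads, emitting 'left', 'right', or 'stay'."""
    rng = random.Random(seed)
    pos, calls = rng.randrange(num_pads), []
    for _ in range(steps):
        options = ["stay"]
        if pos > 0:
            options.append("left")    # can move left unless at the first pad
        if pos < num_pads - 1:
            options.append("right")   # can move right unless at the last pad
        move = rng.choice(options)
        pos += {"left": -1, "stay": 0, "right": 1}[move]
        calls.append(move)
    return calls

def infer_num_pads(calls: list[str]) -> int:
    """Reconstruct the lattice size from the calls alone: the span of relative
    positions visited converges to the true number of pads once the frog has
    reached both ends of the row."""
    pos = lo = hi = 0
    for move in calls:
        pos += {"left": -1, "stay": 0, "right": 1}[move]
        lo, hi = min(lo, pos), max(hi, pos)
    return hi - lo + 1

calls = simulate_calls(num_pads=7, steps=5000)
print("Inferred number of lily pads:", infer_num_pads(calls))  # prints 7 once both ends are visited
```

A sequence model with a good inductive bias toward the true state would, in effect, internalize this same bookkeeping; the finding here is that trained models manage it in simple cases like this one but lose track as the worlds become more complex.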

"For example, in a two-state or three-state lattice, we showed that the model does have a pretty good inductive bias toward the actual state," says Chang. "But as we increase the number of states, then it starts to have a divergence from real-world models."

A more complex problem is a system that can play the board game Othello, which involves players alternately placing black or white disks on a grid. The AI models can accurately predict what moves are allowable at a given point, but it turns out they do badly at inferring what the overall arrangement of pieces on the board is, including ones that are currently blocked from play.

The team then looked at five different categories of predictive models actually in use, and again, the more complex the systems involved, the more poorly the predictive models performed at matching the true underlying world model.

With this new metric of inductive bias, "our hope is to provide a kind of test bed where you can evaluate different models, different training approaches, on problems where we know what the true world model is," Vafa says. If it performs well on these cases where we already know the underlying reality, then we can have greater faith that its predictions may be useful even in cases "where we don't really know what the truth is," he says.

People are already trying to use these kinds of predictive AI systems to aid in scientific discovery, including such things as properties of chemical compounds that have never actually been created, or of potential pharmaceutical compounds, or for predicting the folding behavior and properties of unknown protein molecules. "For the more realistic problems," Vafa says, "even for something like basic mechanics, we found that there seems to be a long way to go."

Chang says, "There's been a lot of hype around foundation models, where people are trying to build domain-specific foundation models—biology-based foundation models, physics-based foundation models, robotics foundation models, foundation models for other types of domains where people have been collecting a ton of data" and training these models to make predictions, "and then hoping that it acquires some knowledge of the domain itself, to be used for other downstream tasks."

This work shows there's a long way to go, but it also helps to show a path forward. "Our paper suggests that we can apply our metrics to evaluate how much the representation is learning, so that we can come up with better ways of training foundation models, or at least evaluate the models that we're training currently," Chang says. "As an engineering field, once we have a metric for something, people are really, really good at optimizing that metric." 

Source: Can large language models figure out the real world? New metric measures AI's predictive power   

Neighborhood Bully Finally Gets What He Deserves - Midwest Safety

 

Austin Butler & Zoë Kravitz Take Lie Detector Tests | Vanity Fair

 

STREET RACERS VS POLICE - Craziest Pursuits Caught on Dashcam - Most Dangerous

 

Short Films - The Mermaid 1, 2 & 3 (2022) - Horror - ACMofficial