Contact Dynamics Predictions Utilizing the NESC Parameterless Contact Model
-
Similar Topics
-
By NASA
At Goddard Space Flight Center, the GSFC Data Science Group has completed testing of its SatVision Top-of-Atmosphere (TOA) Foundation Model, a geospatial foundation model for coarse-resolution all-sky remote sensing imagery. The team, composed of Mark Carroll, Caleb Spradlin, Jordan Caraballo-Vega, Jian Li, Jie Gong, and Paul Montesano, has now released the model for wide application in science investigations.
Foundation models can transform the landscape of remote sensing (RS) data analysis by enabling the pre-training of large computer-vision models on vast amounts of remote sensing data. These models can be fine-tuned with small amounts of labeled training data and applied to various mapping and monitoring applications. Because most existing foundation models are trained solely on cloud-free satellite imagery, they are limited to land-surface applications or require atmospheric corrections. SatVision-TOA is trained on all-sky conditions, which enables applications involving atmospheric variables (e.g., cloud or aerosol).
SatVision-TOA is a 3-billion-parameter model trained on 100 million images from the Moderate Resolution Imaging Spectroradiometer (MODIS). This is, to the team’s knowledge, the largest foundation model trained solely on satellite remote sensing imagery. By including “all-sky” conditions during pre-training, the team incorporated a range of cloud conditions often excluded in traditional modeling. This enables 3D cloud reconstruction and cloud modeling in support of Earth and climate science, offering significant enhancement for large-scale Earth observation workflows.
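The pre-training recipe this describes is masked image modeling: hide a large fraction of each image and train the network to reconstruct the hidden patches. Below is a minimal PyTorch sketch of that objective; the 8×8 patches and 60% mask ratio mirror the reconstruction examples shown further down, but the `TinyEncoder` backbone is a toy stand-in, not the actual SatVision-TOA architecture.

```python
import torch
import torch.nn as nn

PATCH = 8         # 8x8 mask patches, as in the reconstruction examples below
MASK_RATIO = 0.6  # hide 60% of each image during pre-training

class TinyEncoder(nn.Module):
    """Toy stand-in for the real transformer backbone."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.GELU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def random_patch_mask(x, patch=PATCH, ratio=MASK_RATIO):
    """Zero out a random `ratio` of patch-by-patch blocks per image."""
    b, _, h, w = x.shape
    keep = (torch.rand(b, 1, h // patch, w // patch) > ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * mask, 1.0 - mask  # masked input, binary map of hidden pixels

model = TinyEncoder()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

images = torch.rand(4, 3, 128, 128)  # fake batch of image chips
masked, hidden = random_patch_mask(images)
recon = model(masked)

# The reconstruction loss is computed only over the hidden pixels.
loss = ((recon - images) ** 2 * hidden).sum() / (hidden.sum() * images.size(1))
loss.backward()
opt.step()
print(f"masked-reconstruction loss: {loss.item():.4f}")
```

Because the loss is restricted to hidden pixels, the model cannot simply copy its input; it must learn enough structure (cloud texture, land-cover transitions) to fill in what it never saw.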
With an adaptable and scalable model design, SatVision-TOA can unify diverse Earth observation datasets and reduce dependency on task-specific models. SatVision-TOA leverages one of the largest public datasets to capture global contexts and robust features. The model could have broad applications for investigating spectrometer data, including MODIS, VIIRS, and GOES-ABI. The team believes this will enable transformative advancements in atmospheric science, cloud structure analysis, and Earth system modeling.
The model architecture and model weights are available on GitHub and Hugging Face, respectively. For more information, including a detailed user guide, see the associated white paper: SatVision-TOA: A Geospatial Foundation Model for Coarse-Resolution All-Sky Remote Sensing Imagery.
Examples of image reconstruction by SatVision-TOA. Left: MOD021KM v6.1 cropped image chips using MODIS bands [1, 3, 2]. Middle: the same images with randomly applied 8×8 mask patches covering 60% of each original image. Right: the reconstructed images produced by the model, along with their respective Structural Similarity Index Measure (SSIM) scores. These examples illustrate the model’s ability to preserve structural detail and reconstruct heterogeneous features, such as cloud textures and land-cover transitions, with high fidelity. Credit: NASA
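The SSIM scores quoted in the caption compare each reconstruction against its original chip and can be computed with standard tooling. A short sketch using scikit-image; the arrays here are random placeholders for a MODIS chip and a model output:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Placeholders for an original MODIS chip and its reconstruction, in [0, 1].
original = np.random.rand(128, 128, 3)
reconstructed = np.clip(original + 0.05 * np.random.randn(128, 128, 3), 0, 1)

# channel_axis tells scikit-image to average SSIM over the three bands.
score = ssim(original, reconstructed, data_range=1.0, channel_axis=-1)
print(f"SSIM: {score:.3f}")  # 1.0 would be a pixel-perfect reconstruction
```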
-
By NASA
Expanded AI Model with Global Data Enhances Earth Science Applications
On June 22, 2013, the Operational Land Imager (OLI) on Landsat 8 captured this false-color image of the East Peak fire burning in southern Colorado near Trinidad. Burned areas appear dark red, while actively burning areas look orange. Dark green areas are forests; light green areas are grasslands. Data from Landsat 8 were used to train the Prithvi artificial intelligence model, which can help detect burn scars. Credit: NASA Earth Observatory

NASA, IBM, and Forschungszentrum Jülich have released an expanded version of the open-source Prithvi Geospatial artificial intelligence (AI) foundation model to support a broader range of geographical applications. Now, with the inclusion of global data, the foundation model can support tracking changes in land use, monitoring disasters, and predicting crop yields worldwide.
The Prithvi Geospatial foundation model, first released in August 2023 by NASA and IBM, is pre-trained on NASA’s Harmonized Landsat and Sentinel-2 (HLS) dataset and learns by filling in masked information. The model is available on Hugging Face, a data science platform where machine learning developers openly build, train, deploy, and share models. Because NASA releases data, products, and research in the open, businesses and commercial entities can take these models and transform them into marketable products and services that generate economic value.
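Since the weights are published on Hugging Face, fetching a checkpoint is a one-liner with the `huggingface_hub` client. A sketch; the repository ID and filename below are placeholders, so check the actual Prithvi Geospatial model page for the current names:

```python
from huggingface_hub import hf_hub_download

# Placeholder repository ID and filename; consult the model page on
# Hugging Face for the actual Prithvi Geospatial checkpoint names.
checkpoint_path = hf_hub_download(
    repo_id="ibm-nasa-geospatial/Prithvi-100M",  # placeholder ID
    filename="Prithvi_100M.pt",                  # placeholder filename
)
print(checkpoint_path)  # local cache path of the downloaded checkpoint
```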
“We’re excited about the downstream applications that are made possible with the addition of global HLS data to the Prithvi Geospatial foundation model. We’ve embedded NASA’s scientific expertise directly into these foundation models, enabling them to quickly translate petabytes of data into actionable insights,” said Kevin Murphy, NASA chief science data officer. “It’s like having a powerful assistant that leverages NASA’s knowledge to help make faster, more informed decisions, leading to economic and societal benefits.”
AI foundation models are pre-trained on large datasets with self-supervised learning techniques, providing flexible base models that can be fine-tuned for domain-specific downstream tasks.
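The fine-tuning pattern this describes typically keeps the pretrained encoder frozen and attaches a small task-specific head trained on the labeled data. A minimal PyTorch sketch of that pattern, with illustrative module names, shapes, and a placeholder checkpoint path (this is not the actual Prithvi fine-tuning API):

```python
import torch
import torch.nn as nn

class PretrainedEncoder(nn.Module):
    """Toy stand-in for a foundation-model backbone restored from a checkpoint."""
    def __init__(self, in_ch=6, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
        )

    def forward(self, x):
        return self.features(x)

encoder = PretrainedEncoder()
# encoder.load_state_dict(torch.load("prithvi_checkpoint.pt"))  # placeholder path

# Freeze the backbone; only the lightweight task head is trained.
for p in encoder.parameters():
    p.requires_grad = False

num_classes = 13  # e.g., crop categories; illustrative
head = nn.Conv2d(128, num_classes, kernel_size=1)  # per-pixel classifier

opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(2, 6, 224, 224)                    # HLS-like six-band chips
y = torch.randint(0, num_classes, (2, 224, 224))  # per-pixel labels
loss = loss_fn(head(encoder(x)), y)
loss.backward()
opt.step()
```

With the backbone frozen, only the 1×1 convolution’s weights update, which is why a small labeled dataset can be enough for a new downstream task.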
Crop classification prediction generated by NASA and IBM’s open-source Prithvi Geospatial artificial intelligence model.

Focusing on diverse land use and ecosystems, researchers selected HLS satellite images that represented various landscapes while avoiding lower-quality data caused by clouds or gaps. Urban areas were emphasized to ensure better coverage, and strict quality controls were applied. The final dataset is significantly larger than previous versions, offering improved global representation and a robust, well-balanced foundation for reliable model training and environmental analysis.
The Prithvi Geospatial foundation model has already proven valuable in several applications, including post-disaster flood mapping and detecting burn scars caused by fires.
One application, the Multi-Temporal Cloud Gap Imputation, leverages the foundation model to reconstruct the gaps in satellite imagery caused by cloud cover, enabling a clearer view of Earth’s surface over time. This approach supports a variety of applications, including environmental monitoring and agricultural planning.
Another application, Multi-Temporal Crop Segmentation, uses satellite imagery to classify and map different crop types and land cover across the United States. By analyzing time-sequenced data and layering the U.S. Department of Agriculture’s Cropland Data Layer, Prithvi Geospatial can accurately identify crop patterns, which in turn could improve agricultural monitoring and resource management on a large scale.
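A common way to feed such time-sequenced data to a model is to stack several acquisition dates before inference, either along an explicit time axis or flattened into channels. A hedged sketch of that preprocessing step; the band counts, number of dates, and shapes are illustrative:

```python
import numpy as np

# Three HLS-like acquisitions of the same tile across a growing season,
# each with shape (bands, height, width); the values here are random.
dates = [np.random.rand(6, 224, 224).astype(np.float32) for _ in range(3)]

# Option A: keep an explicit time axis for backbones with temporal attention.
temporal_stack = np.stack(dates, axis=0)       # shape (3, 6, 224, 224)

# Option B: flatten time into the channel axis for a plain 2D backbone.
channel_stack = np.concatenate(dates, axis=0)  # shape (18, 224, 224)

print(temporal_stack.shape, channel_stack.shape)
```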
The flood-mapping application classifies flood water and permanent water across diverse biomes and ecosystems, supporting flood management with models trained to detect surface water.
Wildfire scar mapping combines satellite imagery with wildfire data to capture detailed views of wildfire scars shortly after fires occur. This approach provides valuable data for training models to map fire-affected areas, aiding wildfire management and recovery efforts.
Burn scar mapping generated by NASA and IBM’s open-source Prithvi Geospatial artificial intelligence model.

The model has also been tested on additional downstream applications, including estimation of gross primary productivity, above-ground biomass, landslide occurrence, and burn intensity.
“The updates to this Prithvi Geospatial model have been driven by valuable feedback from users of the initial version,” said Rahul Ramachandran, AI foundation model for science lead and senior data science strategist at NASA’s Marshall Space Flight Center in Huntsville, Alabama. “This enhanced model has also undergone rigorous testing across a broader range of downstream use cases, ensuring improved versatility and performance, resulting in a version of the model that will empower diverse environmental monitoring applications, delivering significant societal benefits.”
The Prithvi Geospatial foundation model was developed as part of an initiative of NASA’s Office of the Chief Science Data Officer to unlock the value of NASA’s vast collection of science data using AI. NASA’s Interagency Implementation and Advanced Concepts Team (IMPACT), based at Marshall, together with IBM Research and the Jülich Supercomputing Centre at Forschungszentrum Jülich, designed the foundation model, which was trained on the Jülich Wizard for European Leadership Science (JUWELS) supercomputer operated by the Jülich Supercomputing Centre. The collaboration was facilitated by the IEEE Geoscience and Remote Sensing Society.
For more information about NASA’s strategy of developing foundation models for science, visit https://science.nasa.gov/artificial-intelligence-science.
-
By NASA
Sols 4345-4347: Contact Science is Back on the Table
NASA’s Mars rover Curiosity acquired this image using its Right Navigation Camera on sol 4343 — Martian day 4,343 of the Mars Science Laboratory mission — on Oct. 24, 2024, at 15:26:28 UTC. Credit: NASA/JPL-Caltech

Earth planning date: Friday, Oct. 25, 2024
The changes to the plan Wednesday, moving the drive a sol earlier, meant that we started off planning this morning about 18 meters (about 59 feet) farther along the western edge of Gediz Vallis and with all the data we needed for planning. This included the knowledge that, once again, one of Curiosity’s wheels was perched on a rock. Luckily, unlike on Wednesday, it was determined that it was still safe to go ahead with full contact science for this weekend. This consisted of two targets, “Mount Brewer” and “Reef Lake,” on the top and side of the same block.
Aside from the contact science, Curiosity has three sols to fill with remote imaging. The first two sols include “targeted science,” which means all the imaging of specific targets in our current workspace. Then, after we drive away on the second sol, we fill the final sol of the plan with “untargeted science,” where we care less about knowing exactly where the rover is ahead of time. A lot of the environmental team’s (or ENV) activities fall under this umbrella, which is why our dedicated “ENV Science Block” (about 30 minutes of environmental activities one morning every weekend) tends to fall at the end of a weekend plan.
But that’s getting ahead of myself. The weekend plan starts off with two ENV activities — a dust devil movie and a suprahorizon cloud movie. While cloud movies are almost always pointed in the same direction, our dust devil movie has to be specifically targeted. Recently we’ve been looking southeast toward a more sandy area (which you can see above), to see if we can catch dust lifting there. After those movies we hand the reins back over to the geology team (or GEO) for ChemCam observations of Reef Lake and “Poison Meadow.” Mastcam will follow this up with its own observations of Reef Lake and the AEGIS target from Wednesday’s plan. The rover gets some well-deserved rest before waking up for the contact science I talked about above, followed by a late evening Mastcam mosaic of “Fascination Turret,” a part of Gediz Vallis ridge that we’ve seen before.
We’re driving away on the second sol, but before that we have about another hour of science. ChemCam and Mastcam both have observations of “Heaven Lake” and the upper Gediz Vallis ridge, and ENV has a line-of-sight observation, to see how much dust is in the crater, and a pre-drive deck monitoring image to see if any dust moves around on the rover deck due to either driving or wind. Curiosity gets a short nap before a further drive of about 25 meters (about 82 feet).
The last sol of the weekend is a ChemCam special. AEGIS will autonomously choose a target for imaging, and then ChemCam has a passive sky observation to examine changing amounts of atmospheric gases. The weekend doesn’t end at midnight, though — we wake up in the morning for the promised morning ENV block, which we’ve filled with two cloud movies, another line-of-sight, and a tau observation to see how dusty the atmosphere is.
Written by Alex Innanen, Atmospheric Scientist at York University
-
By NASA
NASA astronaut Jessica Meir conducts cardiac research using tissue chip platforms in the Life Sciences Glovebox aboard the space station in March 2022. Credit: NASA

The International Space Station offers a unique microgravity environment where cells outside the human body behave similarly to how they do inside it. Tissue chips are small devices containing living cells that mimic complex functions of specific human tissues and organs. Researchers can run experiments using tissue chips aboard the station to understand disease progression and provide faster and safer alternatives for preparing medicines for clinical trials.
Researchers placed engineered heart tissues on tissue chips sent to the station to study how microgravity affects cardiac function in space. Data collected by the chips showed these heart tissues experienced impaired contractions, subcellular structural changes, and increased stress, which can lead to tissue damage and disease. Previous studies conducted on human subjects have shown similar outcomes. In the future, engineered heart tissues could accurately model the effects of spaceflight on cardiac function.
Another investigation used muscle-on-a-chip technology to evaluate whether engineered muscle tissues can mimic the characteristics of reduced muscle regeneration in microgravity. Researchers found that engineered muscle-on-a-chip platforms are viable for studying muscle-related bioprocesses in space. In addition, samples treated with drugs known to stimulate muscle regeneration showed partial prevention of the effects of microgravity. These results demonstrate that muscle-on-a-chip platforms can also be used to study and identify drugs that may prevent muscle decline in space, as well as age-related muscle decline on Earth.
NASA astronaut Megan McArthur works on the Cardinal Muscle investigation in the Life Sciences Glovebox aboard the space station in August 2021. Credit: NASA
-
By Space Force
U.S. Space Force senior leaders discussed the Personnel Management Act during a panel at the Air & Space Forces Association’s Air, Space & Cyber Conference at National Harbor, Maryland, Sept. 18.