By NASA
Statistical Analysis Using Random Forest Algorithm Provides Key Insights into Parachute Energy Modulator System
Energy modulators (EMs), also known as energy absorbers, are safety-critical components used to control shocks and impulses in a load path. EMs are textile devices, typically manufactured from nylon, Kevlar®, or other materials, that control loads by breaking rows of stitches binding a strong base webbing together, as shown in Figure 1. A familiar EM application is the fall-protection harness worn by workers to prevent injury from shock loads when the harness arrests a fall. EMs are also widely used in parachute systems to control the shock loads experienced during the various stages of parachute system deployment.
Random forest is an innovative algorithm for data classification used in statistics and machine learning. It is an easy-to-use and highly flexible ensemble learning method. The random forest algorithm can model both categorical and continuous data and can handle large datasets, making it applicable in many situations. It also makes it easy to evaluate the relative importance of variables and maintains accuracy even when a dataset has missing values.
Random forests model the relationship between a response variable and a set of predictor or independent variables by creating a collection of decision trees. Each decision tree is built from a random sample of the data. The individual trees are then combined through methods such as averaging or voting to determine the final prediction (Figure 2). A decision tree is a non-parametric supervised learning algorithm that partitions the data using a series of branching binary decisions. Decision trees inherently identify key features of the data and provide a ranking of the contribution of each feature based on when it becomes relevant. This capability can be used to determine the relative importance of the input variables (Figure 3). Decision trees are useful for exploring relationships but can have poor accuracy unless they are combined into random forests or other tree-based models.
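As a minimal sketch of the ensemble-of-trees idea and the variable-importance ranking described above, the following uses scikit-learn with synthetic data (the EM test data are not public, and the article does not name a specific software package):

```python
# Sketch: a random forest built from many decision trees, each fit on a
# random sample of the data, with an aggregate ranking of predictor importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in dataset: 10 candidate predictors, 3 of them informative.
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=3, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)

# feature_importances_ aggregates how much each predictor contributed
# to the splits across all trees in the ensemble.
ranking = np.argsort(forest.feature_importances_)[::-1]
print("Predictors ranked by importance:", ranking)
```

Individual predictions are formed by combining the trees' votes, which is why the ensemble is more accurate than any single decision tree.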
The performance of a random forest can be evaluated using out-of-bag error and cross-validation techniques. Random forests often use random sampling with replacement from the original dataset to create each decision tree. This is also known as bootstrap sampling and forms a bootstrap forest. The data included in the bootstrap sample are referred to as in-the-bag, while the data not selected are out-of-bag. Since the out-of-bag data were not used to generate the decision tree, they can be used as an internal measure of the accuracy of the model. Cross-validation can be used to assess how well the results of a random forest model will generalize to an independent dataset. In this approach, the data are split into a training dataset used to generate the decision trees and build the model and a validation dataset used to evaluate the model’s performance. Evaluating the model on the independent validation dataset provides an estimate of how accurately the model will perform in practice and helps avoid problems such as overfitting or sampling bias. A good model performs well on both the training data and the validation data.
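The two evaluation approaches described above can be sketched with scikit-learn (an assumed tool here, using synthetic data):

```python
# Sketch: out-of-bag accuracy and cross-validated accuracy for a
# bootstrap forest on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# oob_score=True scores each observation using only the trees whose
# bootstrap sample left that observation out-of-bag.
forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0)
forest.fit(X, y)
print(f"Out-of-bag accuracy: {forest.oob_score_:.3f}")

# 5-fold cross-validation: each fold serves once as the held-out
# validation set, estimating how the model generalizes.
scores = cross_val_score(forest, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")
```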
The complex nature of the EM system made it difficult for the team to identify how various parameters influenced EM behavior. A bootstrap forest analysis was applied to the test dataset and was able to identify five key variables associated with higher probability of damage and/or anomalous behavior. The identified key variables provided a basis for further testing and redesign of the EM system. These results also provided essential insight to the investigation and aided in development of flight rationale for future use cases.
For information, contact Dr. Sara R. Wilson. sara.r.wilson@nasa.gov
By NASA
NASA Finds ‘Sideways’ Black Hole Using Legacy Data, New Techniques
Image showing the structure of galaxy NGC 5084, with data from the Chandra X-ray Observatory overlaid on a visible-light image of the galaxy. Chandra’s data, shown in purple, revealed four plumes of hot gas emanating from a supermassive black hole rotating “tipped over” at the galaxy’s core. Credits: X-ray: NASA/CXC, A. S. Borlaff, P. Marcum et al.; Optical full image: M. Pugh, B. Diaz; Image Processing: NASA/USRA/L. Proudfit
NASA researchers have discovered a perplexing case of a black hole that appears to be “tipped over,” rotating in an unexpected direction relative to the galaxy surrounding it. That galaxy, called NGC 5084, has been known for years, but the sideways secret of its central black hole lay hidden in old data archives. The discovery was made possible by new image analysis techniques developed at NASA’s Ames Research Center in California’s Silicon Valley to take a fresh look at archival data from the agency’s Chandra X-ray Observatory.
Using the new methods, astronomers at Ames unexpectedly found four long plumes of plasma – hot, charged gas – emanating from NGC 5084. One pair of plumes extends above and below the plane of the galaxy. A surprising second pair, forming an “X” shape with the first, lies in the galaxy plane itself. Hot gas plumes are not often spotted in galaxies, and typically only one or two are present.
The method revealing such unexpected characteristics for galaxy NGC 5084 was developed by Ames research scientist Alejandro Serrano Borlaff and colleagues to detect low-brightness X-ray emissions in data from the world’s most powerful X-ray telescope. What they saw in the Chandra data seemed so strange that they immediately looked to confirm it, digging into the data archives of other telescopes and requesting new observations from two powerful ground-based observatories.
Hubble Space Telescope image of galaxy NGC 5084’s core. A dark, vertical line near the center shows the curve of a dusty disk orbiting the core, whose presence suggests a supermassive black hole within. The disk and black hole share the same orientation, fully tipped over from the horizontal orientation of the galaxy. Credits: NASA/STScI, M. A. Malkan, B. Boizelle, A. S. Borlaff. HST WFPC2, WFC3/IR/UVIS.
The surprising second set of plumes was a strong clue this galaxy housed a supermassive black hole, but there could have been other explanations. Archived data from NASA’s Hubble Space Telescope and the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile then revealed another quirk of NGC 5084: a small, dusty, inner disk turning about the center of the galaxy. This, too, suggested the presence of a black hole there, and, surprisingly, it rotates at a 90-degree angle to the rotation of the galaxy overall; the disk and black hole are, in a sense, lying on their sides.
The follow-up analyses of NGC 5084 allowed the researchers to examine the same galaxy using a broad swath of the electromagnetic spectrum – from visible light, seen by Hubble, to longer wavelengths observed by ALMA and the Expanded Very Large Array of the National Radio Astronomy Observatory near Socorro, New Mexico.
“It was like seeing a crime scene with multiple types of light,” said Borlaff, who is also the first author on the paper reporting the discovery. “Putting all the pictures together revealed that NGC 5084 has changed a lot in its recent past.”
“Detecting two pairs of X-ray plumes in one galaxy is exceptional,” added Pamela Marcum, an astrophysicist at Ames and co-author on the discovery. “The combination of their unusual, cross-shaped structure and the ‘tipped-over,’ dusty disk gives us unique insights into this galaxy’s history.”
Typically, astronomers expect the X-ray energy emitted from large galaxies to be distributed evenly in a generally sphere-like shape. When it’s not, such as when concentrated into a set of X-ray plumes, they know a major event has, at some point, disturbed the galaxy.
Possible dramatic moments in its history that could explain NGC 5084’s toppled black hole and double set of plumes include a collision with another galaxy and the formation of a chimney of superheated gas breaking out of the top and bottom of the galactic plane.
More studies will be needed to determine what event or events led to the current strange structure of this galaxy. But it is already clear that the never-before-seen architecture of NGC 5084 was only discovered thanks to archival data – some almost three decades old – combined with novel analysis techniques.
The paper presenting this research was published Dec. 18 in The Astrophysical Journal. The image analysis method developed by the team – called Selective Amplification of Ultra Noisy Astronomical Signal, or SAUNAS – was described in The Astrophysical Journal in May 2024.
For news media:
Members of the news media interested in covering this topic should reach out to the NASA Ames newsroom.
Last Updated Dec 18, 2024
By NASA
This article is from the 2024 Technical Update
Autonomous flight termination systems (AFTS) are progressively being employed onboard launch vehicles to replace the ground personnel and infrastructure needed to terminate flight or destruct the vehicle should an anomaly occur. This automation uses onboard real-time data and encoded logic to determine whether the flight should be self-terminated. For uncrewed launch vehicles, FTS are required to protect the public and are governed by the United States Space Force (USSF). For crewed missions, NASA must augment range AFTS requirements for crew safety and certify each flight according to human-rating standards, adding unique requirements for reuse of software originally intended for uncrewed missions. This bulletin summarizes new information relating to AFTS to raise awareness of key distinctions, summarize considerations, and outline best practices for incorporating AFTS into human-rated systems.
Key Distinctions – Crewed v. Uncrewed
There are inherent behavioral differences between uncrewed and crewed AFTS related to design philosophy and fault tolerance. Uncrewed AFTS generally favor fault tolerance against failure-to-destruct over failing silent in the presence of faults. This tenet permeates the design, even down to the software unit level. Uncrewed AFTS become zero-fault-tolerant to destruct for many unrecoverable AFTS errors, whereas general single-fault tolerance against vehicle destruct is required for crewed missions. Additionally, unique needs to delay destruction for crew escape, provide abort options and special rules, and assess human-in-the-loop insight, command, and/or override throughout a launch sequence must be considered; these introduce additional requirements and integration complexities.
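The fault-tolerance distinction above can be illustrated with a deliberately simplified sketch. This is not flight software, and the two-channel agreement rule and function names are hypothetical simplifications, not the actual AFTS design:

```python
# Illustrative only: contrasting destruct-decision philosophies.

def uncrewed_destruct(channel_faulted: bool, violation_detected: bool) -> bool:
    """Zero-fault-tolerant to destruct: an unrecoverable fault alone
    is allowed to trigger termination (fail toward destruct rather
    than fail silent)."""
    return channel_faulted or violation_detected

def crewed_destruct(ch_a_violation: bool, ch_b_violation: bool) -> bool:
    """Single-fault tolerance against destruct: no single faulty
    channel may terminate the flight, so independent channels must
    agree before destruct is commanded (hypothetical two-channel rule)."""
    return ch_a_violation and ch_b_violation

# With one faulty channel, the uncrewed logic destructs;
# the crewed logic does not.
print(uncrewed_destruct(channel_faulted=True, violation_detected=False))  # True
print(crewed_destruct(ch_a_violation=True, ch_b_violation=False))         # False
```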
AFTS Software Architecture Components and Best-Practice Use Guidelines
A detailed study of the sole AFTS currently approved by USSF, utilized or planned for several launch vehicles, was conducted to understand its characteristics and any unique risks and mitigation techniques for effective human-rating reuse. While alternate software systems may be designed in the future, this summary focuses on an architecture employing the Core Autonomous Safety Software (CASS); considerations herein are intended for extrapolation to future systems. Components of the AFTS software architecture are shown, consisting of the CASS, “Wrapper”, and Mission Data Load (MDL), along with key characteristics and use guidelines. A more comprehensive description of each, and recommendations for developmental use, is found in Ref. 1.
Best Practices for Certifying AFTS Software
Below are non-exhaustive guidelines to help achieve human-rating certification for an AFTS.
References
1. NASA/TP-20240009981, Best Practices and Considerations for Using Autonomous Flight Termination Software In Crewed Launch Vehicles, https://ntrs.nasa.gov/citations/20240009981
2. “Launch Safety,” 14 C.F.R. § 417 (2024).
3. NPR 8705.2C, Human-Rating Requirements for Space Systems, Jul 2017, nodis3.gsfc.nasa.gov/
4. NPR 7150.2D, NASA Software Engineering Requirements, Mar 2022, nodis3.gsfc.nasa.gov/
5. RCC 319-19, Flight Termination Systems Commonality Standard, White Sands, NM, June 2019.
6. “Considerations for Software Fault Prevention and Tolerance,” NESC Technical Bulletin No. 23-06, https://ntrs.nasa.gov/citations/20230013383
7. “Safety Considerations when Repurposing Commercially Available Flight Termination Systems from Uncrewed to Crewed Launch Vehicles,” NESC Technical Bulletin No. 23-02, https://ntrs.nasa.gov/citations/20230001890
By NASA
At Goddard Space Flight Center, the GSFC Data Science Group has completed testing of their SatVision Top-of-Atmosphere (TOA) Foundation Model, a geospatial foundation model for coarse-resolution all-sky remote sensing imagery. The team, comprised of Mark Carroll, Caleb Spradlin, Jordan Caraballo-Vega, Jian Li, Jie Gong, and Paul Montesano, has now released their model for wide application in science investigations.
Foundation models can transform the landscape of remote sensing (RS) data analysis by enabling the pre-training of large computer-vision models on vast amounts of remote sensing data. These models can be fine-tuned with small amounts of labeled training data and applied to various mapping and monitoring applications. Because most existing foundation models are trained solely on cloud-free satellite imagery, they are limited to land-surface applications or require atmospheric correction. SatVision-TOA is trained on all-sky conditions, which enables applications involving atmospheric variables (e.g., cloud or aerosol).
SatVision-TOA is a 3-billion-parameter model trained on 100 million images from the Moderate Resolution Imaging Spectroradiometer (MODIS). This is, to the team’s knowledge, the largest foundation model trained solely on satellite remote sensing imagery. By including “all-sky” conditions during pre-training, the team incorporated a range of cloud conditions often excluded in traditional modeling. This enables 3D cloud reconstruction and cloud modeling in support of Earth and climate science, offering significant enhancement for large-scale Earth observation workflows.
With an adaptable and scalable model design, SatVision-TOA can unify diverse Earth observation datasets and reduce dependency on task-specific models. SatVision-TOA leverages one of the largest public datasets to capture global contexts and robust features. The model could have broad applications for investigating spectrometer data, including MODIS, VIIRS, and GOES-ABI. The team believes this will enable transformative advancements in atmospheric science, cloud structure analysis, and Earth system modeling.
The model architecture and model weights are available on GitHub and Hugging Face, respectively. For more information, including a detailed user guide, see the associated white paper: SatVision-TOA: A Geospatial Foundation Model for Coarse-Resolution All-Sky Remote Sensing Imagery.
Examples of image reconstruction by SatVision-TOA. Left: MOD021KM v6.1 cropped image chip using MODIS bands [1, 3, 2]. Middle: The same images with randomly applied 8×8 mask patches, masking 60% of the original image. Right: The reconstructed images produced by the model, along with their respective Structural Similarity Index Measure (SSIM) scores. These examples illustrate the model’s ability to preserve structural detail and reconstruct heterogeneous features, such as cloud textures and land-cover transitions, with high fidelity. Credit: NASA
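The random patch masking described in the figure caption above can be sketched in a few lines of NumPy. The chip size and band count here are illustrative stand-ins, not the model’s actual input dimensions:

```python
# Sketch: mask ~60% of an image chip by zeroing randomly chosen 8x8 patches.
import numpy as np

rng = np.random.default_rng(0)
chip = rng.random((128, 128, 3))   # stand-in for a 3-band image chip
patch, mask_frac = 8, 0.6

ph, pw = chip.shape[0] // patch, chip.shape[1] // patch
n_masked = int(round(ph * pw * mask_frac))

# Choose distinct patch indices to zero out.
idx = rng.choice(ph * pw, size=n_masked, replace=False)
masked = chip.copy()
for i in idx:
    r, c = divmod(i, pw)
    masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch, :] = 0.0

frac = (masked == 0).all(axis=2).mean()
print(f"Masked fraction: {frac:.2f}")  # ≈ 0.60
```

During pre-training, the model learns to reconstruct the original chip from the masked version; SSIM then quantifies how faithfully structure is recovered.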
By European Space Agency
Launched in May 2024, ESA’s EarthCARE satellite is nearing the end of its commissioning phase with the release of its first data on clouds and aerosols expected early next year. In the meantime, an international team of scientists has found an innovative way of applying artificial intelligence to other satellite data to yield 3D profiles of clouds.
This is particularly welcome news for those eagerly awaiting data from EarthCARE in their quest to advance climate science.