
  • Similar Topics

    • By European Space Agency
      The second of the Meteosat Third Generation (MTG) satellites and the first instrument for the Copernicus Sentinel-4 mission are fully integrated and, having completed their functional and environmental tests, are now ready to embark on their journey to the US for launch this summer.
    • By NASA
      Ultra-low-noise Infrared Detectors for Exoplanet Imaging
      A linear-mode avalanche photodiode array in the test dewar. The detector is the dark square in the center. Credit: Michael Bottom, University of Hawai’i
      One of the ultimate goals in astrophysics is the discovery of Earth-like planets that are capable of hosting life. While thousands of planets have been discovered around other stars, the vast majority of these detections have been made via indirect methods, that is, by detecting the effect of the planet on the star’s light, rather than detecting the planet’s light directly. For example, when a planet passes in front of its host star, the brightness of the star decreases slightly.
      However, indirect methods do not allow for characterization of the planet itself, including its temperature, pressure, gravity, and atmospheric composition. Planetary atmospheres may include “biosignature” gases like oxygen, water vapor, carbon dioxide, etc., which are known to be key ingredients needed to support life as we know it. As such, direct imaging of a planet and characterization of its atmosphere are key to understanding its potential habitability.
      But the technical challenges involved in imaging Earth-like extrasolar planets are extreme. First, such planets can be detected only by observing the light they reflect from their parent star, so they typically appear about 10 billion times fainter than the stars they orbit. Furthermore, at the cosmic distances involved, the planets appear right next to their stars. A popular analogy is that exoplanet imaging is like trying to detect a firefly three feet from a searchlight from a distance of 300 miles.
      Tremendous effort has gone into developing starlight suppression technologies to block the bright glare of the star, but detecting the light of the planet is challenging in its own right, as planets are incredibly faint. One way to quantify the faintness of planetary light is to understand the photon flux rate. A photon is an indivisible particle of light, that is, the minimum detectable amount of light. On a sunny day, approximately 10 thousand trillion photons enter your eye every second. The rate of photons entering your eye from an Earth-like exoplanet around a nearby star would be around 10 to 100 per year. Telescopes with large mirrors can help collect as much of this light as possible, but ultra-sensitive detectors are also needed, particularly for infrared light, where the biosignature gases have their strongest effects. Unfortunately, state-of-the-art infrared detectors are far too noisy to detect the low level of light emitted from exoplanets.
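      As a rough illustration only, the short Python sketch below turns the figures quoted above (a planet delivering on the order of 10 to 100 photons per year, and a host star roughly 10 billion times brighter) into photon rates; the values come from this article and the arithmetic is order-of-magnitude, not a mission calculation.

        # Rough photon-budget sketch using the figures quoted in the text above.
        SECONDS_PER_YEAR = 365.25 * 24 * 3600

        planet_photons_per_year = 100      # optimistic end of the quoted 10-100 range
        star_planet_contrast = 1e10        # star ~10 billion times brighter than the planet

        planet_rate = planet_photons_per_year / SECONDS_PER_YEAR
        days_per_photon = 1 / planet_rate / 86400
        star_rate = planet_rate * star_planet_contrast

        print(f"Planet photon rate: {planet_rate:.2e} photons/s")
        print(f"That is roughly one planet photon every {days_per_photon:.1f} days")
        print(f"Host star photon rate through the same aperture: {star_rate:,.0f} photons/s")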
      With support from NASA’s Astrophysics Division and industrial partners, researchers at the University of Hawai’i are developing a promising detector technology to meet these stringent sensitivity requirements. These detectors, known as avalanche photodiode arrays, are constructed out of the same semiconductor material as conventional infrared sensors. However, these new sensors employ an extra “avalanche” layer that takes the signal from a single photon and multiplies it, much like an avalanche can start with a single snowball and quickly grow to the size of a boulder. This signal amplification occurs before any noise from the detector is introduced, so the effective noise is proportionally reduced. However, at high avalanche levels, photodiodes start to behave badly, with noise increasing exponentially, which negates any benefits of the signal amplification. The late University of Hawai’i faculty member Donald Hall, who was a key figure in driving technology for infrared astronomy, realized the potential of avalanche photodiodes for ultra-low-noise infrared astronomy with some modifications to the material properties.
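      To make that trade-off concrete, here is a toy noise model, not a measured property of these sensors: the read noise is divided by the avalanche gain (because amplification happens before that noise is added), while an invented excess-noise term grows exponentially with gain to mimic the regime where the photodiodes start to behave badly. All numbers are illustrative assumptions.

        # Toy model of why moderate avalanche gain helps but very high gain hurts.
        # Constants and the excess-noise form are illustrative assumptions only.
        import math

        READ_NOISE_E = 15.0        # assumed readout noise in electrons
        EXCESS_SCALE = 0.02        # assumed scale of the "misbehaving" noise
        EXCESS_RATE = 0.15         # assumed growth rate of that noise with gain

        def effective_noise(gain):
            """Input-referred noise (electrons) at a given avalanche gain."""
            amplified_read_noise = READ_NOISE_E / gain
            excess_noise = EXCESS_SCALE * math.exp(EXCESS_RATE * gain)
            return math.hypot(amplified_read_noise, excess_noise)

        for gain in (1, 2, 5, 10, 20, 50):
            print(f"gain {gain:>2}: effective noise ~ {effective_noise(gain):.2f} e-")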
      University of Hawai’i team members with the cryogenic dewar used to test the sensors. From left to right: Angelu Ramos, Michael Bottom, Shane Jacobson, Charles-Antoine Claveau. Credit: Michael Bottom, University of Hawai’i
      The most recent sensors benefit from a new design including a graded semiconductor bandgap that allows for excellent noise performance at moderate amplification, a mesa pixel geometry to reduce electronic crosstalk, and a read-out integrated circuit that allows for short readout times. “It was actually challenging figuring out just how sensitive these detectors are,” said Michael Bottom, associate professor at the University of Hawai’i and lead of the development effort. “Our ‘light-tight’ test chamber, which was designed to evaluate the infrared sensors on the James Webb Space Telescope, was supposed to be completely dark. But when we put these avalanche photodiodes in the chamber, we started seeing light leaks at the level of a photon an hour, which you would never be able to detect using the previous generation of sensors.”
      The new designs have a format of one megapixel, more than ten times larger than the previous iteration of sensors, and circuitry that allows for tracking and subtracting any electronic drifts. Additionally, the pixel size and control electronics are such that these new sensors could be drop-in replacements for the most common infrared sensors used on the ground, which would give new capabilities to existing instruments.
      Image of the Palomar-2 globular cluster in the constellation Auriga, taken with the linear-mode avalanche photodiode arrays during the first on-sky testing of the sensors on the University of Hawai’i’s 2.2-meter telescope. Credit: Michael Bottom, University of Hawai’i
      Last year, the team took the first on-sky images from the detectors, using the University of Hawai’i’s 2.2-meter telescope. “It was impressive to see the avalanche process on sky. When we turned up the gain, we could see more stars appear,” said Guillaume Huber, a graduate student working on the project. “The on-sky demonstration was important to prove the detectors could perform well in an operational environment,” added Michael Bottom.
      According to the research team, while the current sensors are a major step forward, the megapixel format is still too small for many science applications, particularly those involving spectroscopy. Further tasks include improving detector uniformity and decreasing persistence. The next generation of sensors will be four times larger, meeting the size requirements for the Habitable Worlds Observatory, NASA’s next envisioned flagship mission, with the goals of imaging and characterizing Earth-like exoplanets.
      Project Lead: Dr. Michael Bottom, University of Hawai’i
      Sponsoring Organization:  NASA Strategic Astrophysics Technology (SAT) Program
    • By NASA
      This artist’s concept visualizes a super-Neptune world orbiting a low-mass star near the center of our Milky Way galaxy. Scientists recently discovered such a system that may break the current record for fastest exoplanet system, traveling at least 1.2 million miles per hour, or 540 kilometers per second. Credit: NASA/JPL-Caltech/R. Hurt (Caltech-IPAC)
      Astronomers may have discovered a scrawny star bolting through the middle of our galaxy with a planet in tow. If confirmed, the pair sets a new record for the fastest-moving exoplanet system, nearly double our solar system’s speed through the Milky Way.
      The planetary system is thought to move at least 1.2 million miles per hour, or 540 kilometers per second.
      “We think this is a so-called super-Neptune world orbiting a low-mass star at a distance that would lie between the orbits of Venus and Earth if it were in our solar system,” said Sean Terry, a postdoctoral researcher at the University of Maryland, College Park and NASA’s Goddard Space Flight Center in Greenbelt, Maryland. Since the star is so feeble, that’s well outside its habitable zone. “If so, it will be the first planet ever found orbiting a hypervelocity star.”
      A paper describing the results, led by Terry, was published in The Astronomical Journal on February 10.
      A Star on the Move
      The pair of objects was first spotted indirectly in 2011 thanks to a chance alignment. A team of scientists combed through archived data from MOA (Microlensing Observations in Astrophysics) – a collaborative project focused on a microlensing survey conducted using the University of Canterbury Mount John Observatory in New Zealand — in search of light signals that betray the presence of exoplanets, or planets outside our solar system.
      Microlensing occurs because the presence of mass warps the fabric of space-time. Any time an intervening object appears to drift near a background star, light from the star curves as it travels through the warped space-time around the nearer object. If the alignment is especially close, the warping around the object can act like a natural lens, amplifying the background star’s light.
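      For a point-like lens and source, the resulting magnification has a standard closed form that depends only on the apparent source-lens separation measured in units of the lens’s Einstein radius. The sketch below evaluates that formula to show how sharply the amplification grows for close alignments; the separations chosen are arbitrary examples.

        # Standard point-source, point-lens microlensing magnification:
        #   A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)),
        # where u is the source-lens separation in Einstein radii.
        import math

        def magnification(u):
            return (u * u + 2) / (u * math.sqrt(u * u + 4))

        # Example separations, chosen only to illustrate the trend.
        for u in (1.0, 0.5, 0.1, 0.01):
            print(f"u = {u:>4}: magnification ~ {magnification(u):.1f}x")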
      This artist’s concept visualizes stars near the center of our Milky Way galaxy. Each has a colorful trail indicating its speed: the longer and redder the trail, the faster the star is moving. NASA scientists recently discovered a candidate for a particularly speedy star, visualized near the center of this image, with an orbiting planet. If confirmed, the pair sets a record for fastest known exoplanet system. Credit: NASA/JPL-Caltech/R. Hurt (Caltech-IPAC)
      In this case, microlensing signals revealed a pair of celestial bodies. Scientists determined their relative masses (one is about 2,300 times heavier than the other), but their exact masses depend on how far away they are from Earth. It’s sort of like how the magnification changes if you hold a magnifying glass over a page and move it up and down.
      “Determining the mass ratio is easy,” said David Bennett, a senior research scientist at the University of Maryland, College Park and NASA Goddard, who co-authored the new paper and led the original study in 2011. “It’s much more difficult to calculate their actual masses.”
      The 2011 discovery team suspected the microlensed objects were either a star about 20 percent as massive as our Sun and a planet roughly 29 times heavier than Earth, or a nearer “rogue” planet about four times Jupiter’s mass with a moon smaller than Earth.
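      Both interpretations are consistent with the same measured mass ratio of roughly 2,300, which is why the microlensing signal alone cannot distinguish them. The quick check below verifies this using standard approximate conversions (1 solar mass ~ 333,000 Earth masses, 1 Jupiter mass ~ 318 Earth masses).

        # Quick consistency check of the two interpretations of the 2011 signal.
        EARTH_PER_SUN = 333_000     # Earth masses per solar mass (approximate)
        EARTH_PER_JUPITER = 318     # Earth masses per Jupiter mass (approximate)

        # Scenario 1: a star of 0.2 solar masses with a planet ~29 Earth masses.
        star = 0.2 * EARTH_PER_SUN
        planet = 29
        print(f"Star-to-planet mass ratio: {star / planet:,.0f}")   # ~2,300

        # Scenario 2: a rogue planet of ~4 Jupiter masses; the same ~2,300:1 ratio
        # then implies a moon somewhat smaller than Earth.
        rogue = 4 * EARTH_PER_JUPITER
        print(f"Implied moon mass: {rogue / 2300:.2f} Earth masses")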
      To figure out which explanation is more likely, astronomers searched through data from the Keck Observatory in Hawaii and ESA’s (European Space Agency’s) Gaia satellite. If the pair were a rogue planet and moon, they’d be effectively invisible – dark objects lost in the inky void of space. But scientists might be able to identify the star if the alternative explanation were correct (though the orbiting planet would be much too faint to see).
      They found a strong suspect located about 24,000 light-years away, putting it within the Milky Way’s galactic bulge — the central hub where stars are more densely packed. By comparing the star’s location in 2011 and 2021, the team calculated its high speed.
      This Hubble Space Telescope image shows a bow shock around a very young star called LL Ori. Named for the crescent-shaped wave made by a ship as it moves through water, a bow shock can be created in space when two streams of gas collide. Scientists think a similar feature may be present around a newfound star that could be traveling at least 1.2 million miles per hour, or 540 kilometers per second. Traveling at such a high velocity in the galactic bulge (the central part of the galaxy), where gas is denser, could generate a bow shock. Credit: NASA and The Hubble Heritage Team (STScI/AURA); Acknowledgment: C. R. O’Dell (Vanderbilt University)
      But that’s just its 2D motion; if it’s also moving toward or away from us, it must be moving even faster. Its true speed may even be high enough to exceed the galaxy’s escape velocity of just over 1.3 million miles per hour, or about 600 kilometers per second. If so, the planetary system is destined to traverse intergalactic space many millions of years in the future.
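      The quoted speed is a transverse (plane-of-sky) velocity inferred from how far the star appeared to shift between 2011 and 2021 at its estimated distance. The sketch below inverts the standard relation v = 4.74 × μ [arcsec/yr] × d [pc] km/s to show roughly how small an angular motion 540 kilometers per second corresponds to at about 24,000 light-years; the speed and distance come from the article, and the conversion factor is standard.

        # How much apparent motion does 540 km/s at ~24,000 light-years imply?
        # Transverse velocity: v [km/s] = 4.74 * proper_motion [arcsec/yr] * distance [pc]
        LY_PER_PC = 3.2616

        distance_pc = 24_000 / LY_PER_PC       # ~7,400 pc
        speed_km_s = 540                       # lower limit quoted in the article
        escape_km_s = 600                      # approximate Milky Way escape speed quoted

        proper_motion = speed_km_s / (4.74 * distance_pc)      # arcsec per year
        print(f"Proper motion: {proper_motion * 1000:.0f} mas/yr")
        print(f"Apparent shift over the 2011-2021 baseline: {proper_motion * 1000 * 10:.0f} mas")
        print(f"Fraction of the escape speed (2D motion only): {speed_km_s / escape_km_s:.0%}")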
      “To be certain the newly identified star is part of the system that caused the 2011 signal, we’d like to look again in another year and see if it moves the right amount and in the right direction to confirm it came from the point where we detected the signal,” Bennett said.
      “If high-resolution observations show that the star just stays in the same position, then we can tell for sure that it is not part of the system that caused the signal,” said Aparna Bhattacharya, a research scientist at the University of Maryland, College Park and NASA Goddard who co-authored the new paper. “That would mean the rogue planet and exomoon model is favored.”
      NASA’s upcoming Nancy Grace Roman Space Telescope will help us find out how common planets are around such speedy stars, and may offer clues to how these systems are accelerated. The mission will conduct a survey of the galactic bulge, pairing a large view of space with crisp resolution.
      “In this case we used MOA for its broad field of view and then followed up with Keck and Gaia for their sharper resolution, but thanks to Roman’s powerful view and planned survey strategy, we won’t need to rely on additional telescopes,” Terry said. “Roman will do it all.”
      Download additional images and video from NASA’s Scientific Visualization Studio.
      By Ashley Balzer
      NASA’s Goddard Space Flight Center, Greenbelt, Md.
      Media contact:
      Claire Andreoli
      NASA’s Goddard Space Flight Center, Greenbelt, Md.
      301-286-1940
    • By NASA
      Jeremy Frank, left, and Caleb Adams, right, discuss software developed by NASA’s Distributed Spacecraft Autonomy project. The software runs on spacecraft computers, currently housed on a test rack at NASA’s Ames Research Center in California’s Silicon Valley, and depicts a spacecraft swarm virtually flying in lunar orbit to provide autonomous position, navigation, and timing services at the Moon. Credit: NASA/Brandon Torres Navarrete
      Talk amongst yourselves, get on the same page, and work together to get the job done! This “pep talk” roughly describes how new NASA technology works within satellite swarms. This technology, called Distributed Spacecraft Autonomy (DSA), allows individual spacecraft to make independent decisions while collaborating with each other to achieve common goals – all without human input.
      NASA researchers have achieved multiple firsts in tests of such swarm technology as part of the agency’s DSA project. Managed at NASA’s Ames Research Center in California’s Silicon Valley, the DSA project develops software tools critical for future autonomous, distributed, and intelligent swarms that will need to interact with each other to achieve complex mission objectives. 
      “The Distributed Spacecraft Autonomy technology is very unique,” said Caleb Adams, DSA project manager at NASA Ames. “The software provides the satellite swarm with the science objective and the ‘smarts’ to get it done.”  
      What Are Distributed Space Missions? 
      Distributed space missions rely on interactions between multiple spacecraft to achieve mission goals. Such missions can deliver better data to researchers and ensure continuous availability of critical spacecraft systems.  
      Typically, spacecraft in swarms are individually commanded and controlled by mission operators on the ground. As the number of spacecraft and the complexity of their tasks increase to meet new constellation mission designs, “hands-on” management of individual spacecraft becomes unfeasible.  
      Distributing autonomy across a group of interacting spacecraft allows for all spacecraft in a swarm to make decisions and is resistant to individual spacecraft failures. 
      The DSA team advanced swarm technology through two main efforts: the development of software for small spacecraft that was demonstrated in space during NASA’s Starling mission, which involved four CubeSat satellites operating as a swarm to test autonomous collaboration and operation with minimal human operation, and a scalability study of a simulated spacecraft swarm in a virtual lunar orbit. 
      Experimenting With DSA in Low Earth Orbit
      The team gave Starling a challenging job: a fast-paced study of Earth’s ionosphere – where Earth’s atmosphere meets space – to show the swarm’s ability to collaborate and optimize science observations. The swarm decided on its own what science to do, with no pre-programmed science observations from ground operators.
      “We did not tell the spacecraft how to do their science,” said Adams. “The DSA team figured out what science Starling did only after the experiment was completed. That has never been done before and it’s very exciting!”  
      The accomplishments of DSA onboard Starling include:
      • the first fully distributed autonomous operation of multiple spacecraft,
      • the first use of space-to-space communications to autonomously share status information between multiple spacecraft,
      • the first demonstration of fully distributed reactive operations onboard multiple spacecraft,
      • the first use of a general-purpose automated reasoning system onboard a spacecraft, and
      • the first use of fully distributed automated planning onboard multiple spacecraft.
      During the demonstration, which took place between August 2023 and May 2024, Starling’s swarm of spacecraft received GPS signals that pass through the ionosphere and reveal interesting – often fleeting – features for the swarm to focus on. Because the spacecraft constantly change position relative to each other, the GPS satellites, and the ionospheric environment, they needed to exchange information rapidly to stay on task.   
      Each Starling satellite analyzed and acted on its best results individually. Whenever new information reached a spacecraft, it re-evaluated its observation and action plans, enabling the swarm to continuously adapt to changing situations.
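      The toy sketch below is a conceptual illustration of that behavior only, not the actual DSA flight software: each simulated spacecraft shares its latest observation scores with its peers and then independently runs the same planning rule on the shared data, so the group arrives at a consistent, coordinated plan with no central commander. All class names, targets, and scores here are invented for illustration.

        # Toy distributed-planning loop (illustrative only; not DSA flight code).
        import random

        class ToySpacecraft:
            def __init__(self, name):
                self.name = name
                self.shared = {}          # target -> best score heard from anyone
                self.plan = None

            def observe(self, targets):
                """Simulate noisy local measurements of how interesting each target looks."""
                return {t: random.random() for t in targets}

            def broadcast(self, peers, scores):
                """Share local scores over simulated space-to-space links."""
                for peer in peers:
                    for target, score in scores.items():
                        peer.shared[target] = max(score, peer.shared.get(target, 0.0))

            def replan(self, roster):
                """Every spacecraft runs this same rule on the shared data, so the
                individually computed plans agree without a central commander."""
                ranked = sorted(self.shared, key=self.shared.get, reverse=True)
                if ranked:
                    self.plan = ranked[sorted(roster).index(self.name) % len(ranked)]

        swarm = [ToySpacecraft(f"sat{i}") for i in range(4)]
        targets = ["patch_A", "patch_B", "patch_C"]
        roster = [s.name for s in swarm]

        for step in range(3):                      # a few planning cycles
            for sat in swarm:
                local = sat.observe(targets)
                sat.broadcast(swarm, local)        # includes itself; max() keeps it consistent
            for sat in swarm:
                sat.replan(roster)
            print(f"step {step}: " + ", ".join(f"{s.name}->{s.plan}" for s in swarm))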
      “Reaching the project goal of demonstrating the first fully autonomous distributed space mission was made possible by the DSA team’s development of distributed autonomy software that allowed the spacecraft to work together seamlessly,” Adams continued.
      Caleb Adams, Distributed Spacecraft Autonomy project manager, monitors testing alongside the test racks containing 100 spacecraft computers at NASA’s Ames Research Center in California’s Silicon Valley. The DSA project develops and demonstrates software to enhance multi-spacecraft mission adaptability, efficiently allocate tasks between spacecraft using ad-hoc networking, and enable human-swarm commanding of distributed space missions. Credit: NASA/Brandon Torres Navarrete
      Scaling Up Swarms in Virtual Lunar Orbit
      The DSA ground-based scalability study was a simulation that placed virtual small spacecraft and rack-mounted small spacecraft flight computers in virtual lunar orbit. This simulation was designed to test the swarm’s ability to provide position, navigation, and timing services at the Moon. Similar to what the GPS system does on Earth, this technology could equip missions to the Moon with affordable navigation capabilities, and could one day help pinpoint the location of objects or astronauts on the lunar surface.   
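      As a simplified illustration of the geometry behind such a service (not the DSA implementation, and ignoring the receiver clock bias that a real PNT solution must also estimate), the sketch below recovers a position from ranges to four satellites at known, made-up locations using a few Gauss-Newton iterations.

        # Simplified trilateration sketch with made-up satellite positions (km).
        # A real lunar PNT solution would also solve for the receiver clock bias.
        import numpy as np

        sats = np.array([
            [ 3000.0,  1000.0,  2000.0],
            [-2500.0,  2200.0,  1800.0],
            [  500.0, -3000.0,  2500.0],
            [ 1500.0,  1500.0, -2800.0],
        ])
        true_pos = np.array([100.0, -200.0, 50.0])          # hypothetical receiver
        ranges = np.linalg.norm(sats - true_pos, axis=1)    # ideal, noise-free ranges

        x = np.zeros(3)                                     # initial guess at the origin
        for _ in range(10):
            diffs = x - sats
            predicted = np.linalg.norm(diffs, axis=1)
            jacobian = diffs / predicted[:, None]           # d(range)/d(position)
            delta, *_ = np.linalg.lstsq(jacobian, ranges - predicted, rcond=None)
            x = x + delta
            if np.linalg.norm(delta) < 1e-9:
                break

        print("Estimated position (km):", np.round(x, 3))
        print("Position error (km):", float(np.linalg.norm(x - true_pos)))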
      The DSA lunar Position, Navigation, and Timing study demonstrated scalability of the swarm in a simulated environment. Over a two-year period, the team ran close to one hundred tests of more complex coordination between multiple spacecraft computers in both low- and high-altitude lunar orbit and showed that a swarm of up to 60 spacecraft is feasible.  
      The team is further developing DSA’s capabilities to allow mission operators to interact with even larger swarms – hundreds of spacecraft – as a single entity. 
      Distributed Spacecraft Autonomy’s accomplishments mark a significant milestone in advancing autonomous distributed space systems that will make new types of science and exploration possible. 
      NASA Ames leads the Distributed Spacecraft Autonomy and Starling projects. NASA’s Game Changing Development program within the agency’s Space Technology Mission Directorate provides funding for the DSA experiment. NASA’s Small Spacecraft Technology program within the Space Technology Mission Directorate funds and manages the Starling mission and the DSA project. 
    • By European Space Agency
      Today in Brussels, the European Space Agency (ESA) and the European Commission consolidated their cooperation on the European Quantum Communication Infrastructure (EuroQCI), marking the successful conclusion of negotiations and clearing the way for development to begin. EuroQCI is an advanced network that aims to protect everything from personal data to Europe's critical infrastructure, using proven principles of quantum physics.