Towards Autonomous Surface Missions on Ocean Worlds

Artist’s concept of a four-legged spacecraft lander on Europa’s brown-and-white icy surface, extending a robotic arm to deploy a tool above the ground. Credit: NASA/JPL-Caltech

Through advanced autonomy testbed programs, NASA is setting the groundwork for one of its top priorities—the search for signs of life and potentially habitable bodies in our solar system and beyond. The prime destinations for such exploration are bodies containing liquid water, such as Jupiter’s moon Europa and Saturn’s moon Enceladus. Initial missions to the surfaces of these “ocean worlds” will be robotic and require a high degree of onboard autonomy due to long Earth-communication lags and blackouts, harsh surface environments, and limited battery life.

Technologies that can enable spacecraft autonomy generally fall under the umbrella of Artificial Intelligence (AI) and have been evolving rapidly in recent years. Many such technologies, including machine learning, causal reasoning, and generative AI, are being advanced at non-NASA institutions.  

NASA started a program in 2018 to take advantage of these advancements to enable future icy world missions. It sponsored the development of the physical Ocean Worlds Lander Autonomy Testbed (OWLAT) at NASA’s Jet Propulsion Laboratory in Southern California and the virtual Ocean Worlds Autonomy Testbed for Exploration, Research, and Simulation (OceanWATERS) at NASA’s Ames Research Center in Silicon Valley, California.

NASA solicited applications for its Autonomous Robotics Research for Ocean Worlds (ARROW) program in 2020, and for the Concepts for Ocean worlds Life Detection Technology (COLDTech) program in 2021. Six research teams, based at universities and companies throughout the United States, were chosen to develop and demonstrate autonomy solutions on OWLAT and OceanWATERS. These two- to three-year projects are now complete and have addressed a wide variety of autonomy challenges faced by potential ocean world surface missions.

OWLAT

OWLAT is designed to simulate a spacecraft lander with a robotic arm for science operations on an ocean world body. The overall OWLAT architecture, including hardware and software components, is shown in Figure 1. Each of the OWLAT components is detailed below.

Figure 1. The software and hardware components of the Ocean Worlds Lander Autonomy Testbed and the relationships between them: an autonomy software module and a safety and performance monitoring module communicate with the scheduler/dispatcher/controller module, which in turn drives the hardware and software testbed components. Credit: NASA/JPL-Caltech

The hardware version of OWLAT (shown in Figure 2) uses a six degrees-of-freedom (DOF) Stewart platform to physically simulate the motions of a lander as operations are performed in a low-gravity environment. A seven-DOF robot arm is mounted on the lander to perform sampling and other science operations that interact with the environment. A camera mounted on a pan-and-tilt unit is used for perception. The testbed also has a suite of onboard force/torque sensors to measure motion and reaction forces as the lander interacts with the environment. Control algorithms implemented on the testbed allow it to behave dynamically as if it were a lightweight arm on a lander operating in different gravitational environments.
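The article does not describe the control law itself; as a minimal illustrative sketch of the idea, assuming a simple one-axis admittance scheme with invented masses, gains, and placeholder platform I/O functions, gravity-offload emulation might look like the following (this is not the OWLAT control software):

# Minimal one-axis sketch of gravity-offload admittance control (illustrative
# only; parameter values and the platform I/O functions are hypothetical).

TARGET_G = 1.315      # assumed target-body gravity (e.g., Europa), m/s^2
EMULATED_MASS = 50.0  # mass the platform should appear to carry, kg
DAMPING = 20.0        # virtual damping to keep the loop stable, N*s/m
DT = 0.001            # control period, s

def admittance_accel(f_contact_up, vel_up):
    """Vertical acceleration the platform should track so the payload appears
    to move under TARGET_G rather than Earth gravity (+ is up)."""
    # Virtual dynamics: m * a = f_contact - m * g_target - b * v
    f_virtual = f_contact_up - EMULATED_MASS * TARGET_G - DAMPING * vel_up
    return f_virtual / EMULATED_MASS

def read_contact_force():
    # Placeholder for the force/torque sensor; here the surface exactly
    # supports the emulated weight, so the payload should stay at rest.
    return EMULATED_MASS * TARGET_G

def send_velocity_setpoint(v):
    # Placeholder for commanding the Stewart platform.
    pass

vel = 0.0
for _ in range(1000):
    acc = admittance_accel(read_contact_force(), vel)
    vel += acc * DT
    send_velocity_setpoint(vel)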

Figure 2. The Ocean Worlds Lander Autonomy Testbed: a robotic platform that models a spacecraft lander, a robot arm with a scoop mounted at its end, a camera on a pan-and-tilt unit, and force/torque sensors. Credit: NASA/JPL-Caltech

The team also developed a set of tools and instruments (shown in Figure 3) to enable science operations with the testbed. These tools can be mounted to the end of the robot arm via a quick-connect-disconnect mechanism. The testbed workspace where sampling and other science operations are conducted contains a scene and surface simulant material representative of what might be found on ocean worlds.

Figure 3. Tools and instruments designed to be used with the testbed, shown mounted to the robotic arm with a close-up of each tool. Credit: NASA/JPL-Caltech

The software-only version of OWLAT models, visualizes, and provides telemetry from a high-fidelity dynamics simulator based on the Dynamics And Real-Time Simulation (DARTS) physics engine developed at JPL. It replicates the behavior of the physical testbed in response to commands and provides telemetry to the autonomy software. A visualization from the simulator is shown in Figure 4.

Figure 4. Visualization from the OWLAT dynamics simulator. Credit: NASA/JPL-Caltech

The autonomy software module shown at the top of Figure 1 interacts with the testbed through a Robot Operating System (ROS)-based interface to issue commands and receive telemetry. This interface is defined to be identical to the OceanWATERS interface. Commands received from the autonomy module are processed through the dispatcher/scheduler/controller module (blue box in Figure 1) and used to command either the physical hardware version of the testbed or the dynamics simulation (software version) of the testbed. Sensor information from the operation of either version of the testbed is reported back to the autonomy module through a defined telemetry interface. A safety and performance monitoring and evaluation software module (red box in Figure 1) ensures that the testbed is kept within its operating bounds. Any commands that cause out-of-bounds behavior, as well as any anomalies, are reported as faults to the autonomy software module.
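The article defines these interfaces only at a high level. As a minimal sketch of what the autonomy side of such a ROS-based interface could look like, the snippet below publishes a command and subscribes to telemetry and fault topics; the topic names and message types are hypothetical placeholders, not the actual OWLAT/OceanWATERS definitions.

# Sketch of an autonomy-side ROS node that issues an arm command and listens
# for telemetry and fault reports. Topic names and message types are invented.
import rospy
from std_msgs.msg import String

def on_telemetry(msg):
    rospy.loginfo("telemetry: %s", msg.data)

def on_fault(msg):
    # The monitoring module reports out-of-bounds commands and anomalies here.
    rospy.logwarn("fault reported: %s", msg.data)

def main():
    rospy.init_node("autonomy_module")
    rospy.Subscriber("/lander/telemetry", String, on_telemetry)
    rospy.Subscriber("/lander/faults", String, on_fault)
    cmd_pub = rospy.Publisher("/lander/arm_command", String, queue_size=1)
    rospy.sleep(1.0)  # allow connections to establish
    cmd_pub.publish(String(data="unstow_arm"))
    rospy.spin()

if __name__ == "__main__":
    main()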

Figure 5. Erica Tevere (at the operator’s station, initializing the testbed software) and Ashish Goel (at the robot arm, verifying its initial configuration) setting up the OWLAT testbed for use. Credit: NASA/JPL-Caltech

OceanWATERS

At the time of the OceanWATERS project’s inception, Jupiter’s moon Europa was planetary science’s first choice in searching for life. Based on ROS, OceanWATERS is a software tool that provides a visual and physical simulation of a robotic lander on the surface of Europa (see Figure 6). OceanWATERS realistically simulates Europa’s celestial sphere and sunlight, both direct and indirect. Because we don’t yet have detailed information about the surface of Europa, users can select from terrain models with a variety of surface and material properties. One of these models is a digital replication of a portion of the Atacama Desert in Chile, an area considered a potential Earth-analog for some extraterrestrial surfaces.

Figure 6. Screenshot of OceanWATERS: a multi-legged lander on gray, rocky terrain with crevasses in the distance and Jupiter on the horizon, its arm extended and the scoop pointed toward the ground. Credit: NASA/JPL-Caltech

JPL’s Europa Lander Study of 2016, a guiding document for the development of OceanWATERS, describes a planetary lander whose purpose is collecting subsurface regolith/ice samples, analyzing them with onboard science instruments, and transmitting results of the analysis to Earth.

The simulated lander in OceanWATERS has an antenna mast that pans and tilts; attached to it are stereo cameras and spotlights. It has a six-DOF arm with two interchangeable end effectors: a grinder designed for digging trenches and a scoop for collecting ground material. The lander is powered by a simulated non-rechargeable battery pack. Power consumption, the battery’s state, and its remaining life are regularly predicted with the Generic Software Architecture for Prognostics (GSAP) tool.

To simulate degraded or broken subsystems, a variety of faults (e.g., a frozen arm joint or an overheating battery) can be “injected” into the simulation by the user; some faults can also occur “naturally” as the simulation progresses, e.g., if components become over-stressed. All the operations and telemetry (data measurements) of the lander are accessible via an interface that external autonomy software modules can use to command the lander and understand its state. (OceanWATERS and OWLAT share a unified autonomy interface based on ROS.) The OceanWATERS package includes one basic autonomy module: a facility for executing plans (autonomy specifications) written in the PLan EXecution Interchange Language, or PLEXIL. PLEXIL and GSAP are both open-source software packages developed at Ames and available on GitHub, as is OceanWATERS.
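The fault-injection mechanism itself is not detailed here; the toy sketch below only illustrates the concept of an injected fault (a frozen arm joint) suppressing commanded motion. The names and the injection mechanism are invented for illustration, not the OceanWATERS implementation.

# Illustrative-only sketch of fault injection: a simulated arm joint whose
# commanded motion is ignored while a "frozen joint" fault is active.
from dataclasses import dataclass

@dataclass
class SimJoint:
    angle: float = 0.0
    frozen: bool = False   # fault flag the user (or a stress model) can set

    def command(self, target, step=0.05):
        if self.frozen:
            return self.angle          # injected fault: joint does not respond
        delta = max(-step, min(step, target - self.angle))
        self.angle += delta
        return self.angle

shoulder = SimJoint()
shoulder.command(0.3)       # nominal motion toward the target angle
shoulder.frozen = True      # "inject" a frozen-joint fault
shoulder.command(0.6)       # no further motion; autonomy must detect and react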

Mission operations that can be simulated by OceanWATERS include visually surveying the landing site, poking at the ground to determine its hardness, digging a trench, and scooping ground material that can be discarded or deposited in a sample collection bin. Communication with Earth, sample analysis, and other operations of a real lander mission are not presently modeled in OceanWATERS except for their estimated power consumption. Figure 7 shows a frame from a video of OceanWATERS running a sample mission scenario on the Atacama-based terrain model.

Figure 7. The OceanWATERS lander on a terrain modeled from the Atacama Desert, just after completing a scoop operation. Credit: NASA/JPL-Caltech
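As a rough illustration of the kind of scenario shown in Figure 7, the sketch below strings those operations into a simple sequence and tracks only their energy use, mirroring the fact that unmodeled operations are represented solely by estimated power consumption; the operation names and watt-hour figures are invented placeholders, not the actual OceanWATERS command set.

# Toy end-to-end surface scenario: survey, probe, trench, scoop, deposit.
# Operation names and watt-hour estimates are invented for illustration.
mission_plan = [
    ("survey_landing_site", 12.0),        # pan/tilt camera panorama
    ("tactile_probe_ground", 3.0),        # poke the surface to gauge hardness
    ("grind_trench", 45.0),               # grinder end effector digs a trench
    ("scoop_sample", 8.0),                # scoop collects loosened material
    ("deposit_in_collection_bin", 2.0),
]

battery_wh = 200.0
for operation, cost_wh in mission_plan:
    battery_wh -= cost_wh                 # only energy use of each step is tracked here
    print(f"executed {operation}; {battery_wh:.1f} Wh remaining")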

Because of Earth’s distance from the ocean worlds and the resulting communication lag, a planetary lander should be programmed with at least enough information to begin its mission. But there will be situation-specific challenges that will require onboard intelligence, such as deciding exactly where and how to collect samples, dealing with unexpected issues and hardware faults, and prioritizing operations based on remaining power. 
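As a toy example of the last of these, prioritizing operations against a predicted energy budget could be as simple as the greedy sketch below, which picks remaining operations by science value per watt-hour; all names, values, and costs are invented for illustration.

# Minimal sketch of power-aware prioritization of remaining operations.
def prioritize(operations, predicted_wh_remaining, margin_wh=10.0):
    """Greedily pick operations by science value per watt-hour until the
    predicted remaining energy (minus a safety margin) is spent."""
    budget = predicted_wh_remaining - margin_wh
    chosen = []
    for name, value, cost_wh in sorted(operations,
                                       key=lambda op: op[1] / op[2],
                                       reverse=True):
        if cost_wh <= budget:
            chosen.append(name)
            budget -= cost_wh
    return chosen

ops = [("second_scoop", 5.0, 20.0),
       ("panorama", 2.0, 5.0),
       ("onboard_sample_analysis", 9.0, 60.0)]
print(prioritize(ops, predicted_wh_remaining=70.0))  # -> ['panorama', 'second_scoop']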

Results

All six of the research teams funded by the ARROW and COLDTech programs used OceanWATERS to develop ocean world lander autonomy technology, and three of those teams also used OWLAT. The products of these efforts were published in technical papers and resulted in the development of software that may be used or adapted for actual ocean world lander missions in the future. The ARROW and COLDTech efforts are summarized below.

ARROW Projects

Principal Investigator (PI): Jonathan Bohren, Honeybee Robotics
Project: Stochastic PLEXIL (SPLEXIL)
Testbed used: OceanWATERS
Purpose: Extended PLEXIL with stochastic decision-making capabilities by employing reinforcement learning techniques.

PI: Pooyan Jamshidi, University of South Carolina
Project: Resource Adaptive Software Purpose-Built for Extraordinary Robotic Research Yields (RASPBERRY SI)
Testbed used: OceanWATERS and OWLAT
Purpose: Developed software algorithms and tools for fault root-cause identification, causal debugging, causal optimization, and causal-induced verification.

COLDTech Projects

PI: Eric Dixon, Lockheed Martin
Project: Causal And Reinforcement Learning (CARL) for COLDTech
Testbed used: OceanWATERS
Purpose: Integrated a model of JPL’s mission-ready Cold Operable Lunar Deployable Arm (COLDarm) into OceanWATERS and applied image analysis, causal reasoning, and machine learning models to identify and mitigate the root causes of faults, such as ice buildup on the arm’s end effector.

PI: Jay McMahon, University of Colorado
Project: Robust Exploration with Autonomous Science On-board, Ranked Evaluation of Contingent Opportunities for Uninterrupted Remote Science Exploration (REASON-RECOURSE)
Testbed used: OceanWATERS
Purpose: Applied automated planning with formal methods to maximize the science return of the lander while minimizing communication with the ground team on Earth.

PI: Melkior Ornik, University of Illinois Urbana-Champaign
Project: aDaptive, ResIlient Learning-enabLed oceAn World AutonomY (DRILLAWAY)
Testbed used: OceanWATERS and OWLAT
Purpose: Developed autonomous adaptation to novel terrains and selection of scooping actions from available image data and limited experience, by transferring a scooping procedure learned on a low-fidelity testbed to the high-fidelity OWLAT testbed.

PI: Joel Burdick, Caltech
Project: Robust, Explainable Autonomy for Scientific Icy Moon Operations (REASIMO)
Testbed used: OceanWATERS and OWLAT
Purpose: Developed autonomous detection and identification of off-nominal conditions, procedures for recovery from those conditions, and sample site selection.

Acknowledgements: The portion of the research carried out at the Jet Propulsion Laboratory, California Institute of Technology was performed under a contract with the National Aeronautics and Space Administration (80NM0018D0004).  The portion of the research carried out by employees of KBR Wyle Services LLC at NASA Ames Research Center was performed under a contract with the National Aeronautics and Space Administration (80ARC020D0010). Both were funded by the Planetary Science Division ARROW and COLDTech programs.

Project Leads: Hari Nayar (NASA Jet Propulsion Laboratory, California Institute of Technology), K. Michael Dalal (KBR, Inc. at NASA Ames Research Center)

Sponsoring Organizations: NASA Science Mission Directorate (SMD) Planetary Exploration Science Technology Office (PESTO)

