
AURA's OPUS Software Licensed to Celera Genomics


HubbleSite

Recommended Posts


The Association of Universities for Research in Astronomy, Inc. (AURA) has reached an agreement with Celera Genomics Group, an Applera Corporation business in Rockville, MD, on the use of AURA's Operational Pipeline Unified Systems (OPUS) software package. Originally designed for the Hubble Space Telescope program, OPUS is now being used by Celera to process bioinformatics data. OPUS was developed by the Space Telescope Science Institute, which is managed by AURA under contract with NASA's Goddard Space Flight Center. The software processes astronomical data generated by the Hubble Space Telescope for use by researchers studying the universe, and it has been widely employed in other space observatory and NASA projects. Facing similar demands in processing its large databases, Celera is licensing OPUS from AURA to help process data from its proteomics and genomics projects.
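To illustrate the general pattern behind pipeline systems of this kind, the sketch below shows a minimal directory-polling processing stage: files arrive in an input directory, are processed, and are handed off to the next stage (or quarantined on failure). This is a generic, hypothetical illustration only; the directory names and the process_exposure placeholder are invented for the example and do not describe OPUS's actual implementation.

    # Minimal, illustrative sketch of a directory-polling pipeline stage, the general
    # pattern used by staged data-processing pipelines. All names here are hypothetical
    # and are not part of the actual OPUS software.
    import shutil
    from pathlib import Path

    INCOMING = Path("stage_incoming")   # files waiting for this stage (assumed layout)
    OUTGOING = Path("stage_outgoing")   # hand-off directory for the next stage
    FAILED = Path("stage_failed")       # quarantine for files that could not be processed

    def process_exposure(path: Path) -> None:
        """Placeholder for the real processing step (calibration, format conversion, ...)."""
        print(f"processing {path.name}")

    def poll_once() -> int:
        """Process every file currently in INCOMING; return the number handled."""
        handled = 0
        for item in sorted(INCOMING.glob("*")):
            try:
                process_exposure(item)
                shutil.move(str(item), OUTGOING / item.name)
            except Exception as err:        # keep the stage running when one input is bad
                print(f"failed on {item.name}: {err}")
                shutil.move(str(item), FAILED / item.name)
            handled += 1
        return handled

    if __name__ == "__main__":
        for d in (INCOMING, OUTGOING, FAILED):
            d.mkdir(exist_ok=True)
        handled = poll_once()               # a long-running stage would repeat this on a timer
        print(f"handled {handled} file(s)")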

View the full article



  • Similar Topics

    • By NASA
      NASA has awarded $15.6 million in grant funding to 15 projects supporting the maintenance of open-source tools, frameworks, and libraries used by the NASA science community, for the benefit of all.
      The agency’s Open-Source Tools, Frameworks, and Libraries awards provide support for the sustainable development of tools freely available to everyone and critical for the goals of the agency’s Science Mission Directorate.
      “We received almost twice the number of proposals this year than we had in the previous call,” said Steve Crawford, program executive, Open Science implementation, Office of the Chief Science Data Officer, NASA Headquarters in Washington. “The NASA science community’s excitement for this program demonstrates the need for sustained support and maintenance of open-source software. These projects are integral to our missions, critical to our data infrastructure, underpin machine learning and data science tools, and are used by our researchers, every day, to advance science that protects our planet and broadens our understanding of the universe.”
      This award program is one of several cross-divisional opportunities at NASA focused on advancing open science practices. The grants are funded by NASA’s Office of the Chief Science Data Officer through the agency’s Research Opportunities for Space and Earth Science. The solicitation sought proposals through two types of awards:
      Foundational awards: cooperative agreements for up to five years for open-source tools, frameworks, and libraries that have a significant impact on two or more divisions of the Science Mission Directorate.
      Sustainment awards: grants or cooperative agreements of up to three years for open-source tools, frameworks, and libraries that have significant impact in one or more divisions of the Science Mission Directorate.
      The 2024 awardees are:
      Foundational awards:
      NASA’s Ames Research Center, Silicon Valley, California. Principal investigator: Ross Beyer. “Expanding and Maintaining the Ames Stereo Pipeline”
      Caltech, Pasadena, California. Principal investigator: Brigitta Sipocz. “Enhancement of Infrastructure and Sustained Maintenance of Astroquery”
      Cornell University, Scarsdale, New York. Principal investigator: Ramin Zabih. “Modernize and Expand arXiv’s Essential Infrastructure”
      NASA’s Goddard Space Flight Center, Greenbelt, Maryland. Principal investigator: D. Cooley. “Enabling SMD Science Using the General Mission Analysis Tool”
      NumFOCUS, Austin, Texas. Principal investigator: Thomas Caswell. “Sustainment of Matplotlib and Cartopy”
      NumFOCUS. Principal investigator: Erik Tollerud. “Investing in the Astropy Project to Enable Research and Education in Astronomy”
      Sustainment awards:
      NASA’s Jet Propulsion Laboratory, Southern California. Principal investigator: Cedric David. “Sustain NASA’s River Software for the Satellite Data Deluge,” three-year award
      Pennsylvania State University, University Park. Principal investigator: David Radice. “AthenaK: A Performance Portable Simulation Infrastructure for Computational Astrophysics,” three-year award
      United States Geological Survey, Reston, Virginia. Principal investigator: Trent Hare. “Planetary Updates for QGIS,” one-year award
      NASA JPL. Principal investigator: Michael Starch. “How To F Prime: Empowering Science Missions Through Documentation and Examples,” three-year award
      NASA Goddard. Principal investigator: Albert Shih. “Enhancing Consistency and Discoverability Across the SunPy Ecosystem,” three-year award
      Triad National Security, LLC, Los Alamos, New Mexico. Principal investigator: Julia Kelliher. “Enhancing Analysis Capabilities of Biological Data With the NASA EDGE Bioinformatics Platform,” four-year award
      iSciences LLC, Burlington, Vermont. Principal investigator: Daniel Baston. “Sustaining the Geospatial Data Abstraction Library,” three-year award
      University of Maryland, College Park. Principal investigator: C. Max Stevens. “Sustaining the Community Firn Model,” three-year award
      Quansight, LLC, Austin, Texas. Principal investigator: Dharhas Pothina. “Ensuring a Fast and Secure Core for Scientific Python – Security, Accessibility and Performance of NumPy, SciPy and scikit-learn; Going Beyond NumPy With Accelerator Support,” three-year award
      For information about open science at NASA, visit:
      https://science.nasa.gov/open-science
      -end-
      Alise Fisher
      Headquarters, Washington
      202-617-4977
      alise.m.fisher@nasa.gov
      View the full article
    • By NASA
      3 min read
      NASA Johnson Space Center: ORDEM represents the state of the art in orbital debris models intended for engineering analysis. It is a data-driven model, relying on large quantities of radar, optical, in situ, and laboratory measurement data. When released, it was the first software code to include a model for different orbital debris material densities, population models from low Earth orbit (LEO) all the way to Geosynchronous orbit (GEO), and uncertainties in each debris population. 
      ORDEM allows users to compute the orbital debris flux on any satellite in Earth orbit.  This allows satellite designers to mitigate possible orbital debris damage to a spacecraft and its instruments using shielding and design choices, thereby extending the useful life of the mission and its experiments.  The model also has a mode that simulates debris telescope/radar observations from the ground.  Both it and the spacecraft flux mode can be used to design experiments to measure the meteoroid and orbital debris environments. 
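      As a simple illustration of how a flux value from a model like ORDEM feeds a risk calculation, the sketch below converts an assumed debris flux into an expected impact count and the Poisson probability of at least one impact over a mission. All of the numbers are made up for the example; they are not ORDEM outputs.

          # Illustrative only: turn a debris flux (as a model such as ORDEM would provide)
          # into an expected impact count and a Poisson probability of at least one impact.
          # Every value below is assumed for the example.
          import math

          flux = 1.0e-5          # assumed impacts per square meter per year above a chosen size threshold
          area = 12.0            # assumed exposed spacecraft cross-sectional area, square meters
          mission_years = 7.0    # assumed mission duration

          expected_hits = flux * area * mission_years      # N = F * A * T
          p_impact = 1.0 - math.exp(-expected_hits)        # Poisson probability of >= 1 impact

          print(f"expected impacts over the mission: {expected_hits:.4f}")
          print(f"probability of at least one impact: {p_impact:.2%}")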
      ORDEM is used heavily in the hypervelocity protection community, those that design, build, and test shielding for spacecraft and rocket upper stages. The fidelity of the ORDEM model allows for the optimization of shielding to balance mission success criteria, risk posture, and cost considerations. 
      As both government and civilian actors continue to exploit the space environment for security, science, and the economy, it is important that we track the debris risks in increasingly crowded orbits, in order to minimize damage to these space assets to make sure these missions continue to operate safely.  ORDEM is NASA’s primary tool for computing and mitigating these risks.   
      ORDEM is used by NASA, the Department of Defense, and other U.S. government agencies, directly or indirectly (via the Debris Assessment Software, MSC-26690-1), to evaluate collision risk for large trackable objects, as well as other mission-ending risks associated with small debris (such as tank ruptures or wiring cuts). In addition to its use as an engineering tool, ORDEM has been used by NASA and other missions in the conceptual design phase to analyze the frequency of orbital debris impacts on potential in situ sensors that could detect debris too small to be detected from ground-based assets. 
      Commercial and academic users of ORDEM include Boeing, SpaceX, Northrop Grumman, the University of Colorado, California Polytechnic State University, among many others. These end users, similar to the government users discussed above, use the software to (1) directly determine potential hazards to spaceflight resulting from flying through the debris environment, and (2) research how the debris environment varies over time to better understand what behaviors may be able to mitigate the growth of the environment. 
      The quality and quantity of data available to the NASA Orbital Debris Program Office (ODPO) for the building, verification, and validation of the ORDEM model is greater than for any other entity that performs similar research. Many of the models used by other research and engineering organizations are derived from the models that ODPO has published after developing them for use in ORDEM.   
      ORDEM Team 
      Alyssa Manis, Andrew B. Vavrin, Brent A. Buckalew, Christopher L. Ostrom, Heather Cowardin, Jer-Chyi Liou, John H. Seago, John Nicolaus Opiela, Mark J. Matney, Ph.D., Matthew Horstman, Phillip D. Anz-Meador, Ph.D., Quanette Juarez, Paula H. Krisko, Ph.D., Yu-Lin Xu, Ph.D.
      Details
      Last Updated: Jul 31, 2024
      Editor: Bill Keeter
      Related Terms: Office of Technology, Policy and Strategy (OTPS)
      View the full article
    • By NASA
      4 min read
      NASA Ames Research Center: ProgPy is an open-source Python package supporting research and development of prognostics, health management, and predictive maintenance tools.  
      Prognostics is the science of prediction, and the field of Prognostics and Health Management (PHM) aims at estimating the current physical health of a system (e.g., motor, battery, etc.) and predicting how the system will degrade with use. The results of prognostics are used across industries to prevent failure, preserve safety, and reduce maintenance costs.  
      Prognostics, and prediction in general, is a very difficult and complex undertaking. Accurate prediction requires a model of the performance and degradation of complex systems as a function of time and use, estimation and management of uncertainty, representation of system use profiles, and ability to represent impact of neighboring systems and the environment. Any small discrepancy between the model and the actual system is compounded repeatedly, resulting in a large variation in the resulting prediction. For this reason, prognostics requires complex and capable algorithms, models, and software systems. 
      The ProgPy architecture can be thought of as three innovations: the Prognostic Models, the Prognostic Engine, and the Prognostic Support Tools. 
      The first part of the ProgPy innovation is the Prognostic Models. The model describes the prognostic behavior of the specific system of interest. ProgPy’s architecture includes a spectrum of modeling methodologies, ranging from physics-based models to entirely data-driven or hybrid techniques. Most users develop their own physics-based model, train one of the ProgPy data-driven models (e.g., Neural-Network models), or some hybrid of the two. A set of mature models for systems like batteries, electric motors, pumps, and valves are distributed in ProgPy. For these parameterized models, users tune the model to their specific system using the model tuning tools. The Prognostics Engine and Support Tools are built on top of these models, meaning a user that creates a new model will immediately be able to take advantage of the other features of ProgPy. 
      The Prognostic Engine is the most important part of ProgPy and forms the backbone of the software. The Prognostics Engine uses a Prognostics Model to perform the key functions of prognostics and health state estimation. The value in this design is that the Prognostics Engine can use any ProgPy model, whether it be a model distributed with ProgPy or a custom model created by users, to perform health state estimation and prognostics in a configurable way. The components of the Prognostics Engine are extendable, allowing users to implement their own state estimation or prediction algorithm for use with ProgPy models or use one distributed with ProgPy. Given the Prognostics Engine and a model, users can start performing prognostics for their application. This flexible and extendable framework for performing prognostics is truly novel and enables the widespread impact of ProgPy in the prognostic community. 
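      As a rough, self-contained illustration of that loop (estimate the current health state from noisy measurements, then predict time to a failure threshold under uncertainty), the sketch below uses a toy linear-degradation model with a Monte Carlo prediction. It deliberately does not use ProgPy's API; the model, parameters, and function names are all invented for the example.

          # Toy prognostics loop: simple state estimation followed by Monte Carlo
          # prediction of remaining life. Not ProgPy's API; all names are illustrative.
          import random

          WEAR_RATE = 0.01         # assumed nominal health loss per usage cycle
          RATE_SIGMA = 0.002       # assumed uncertainty in the wear rate
          FAILURE_THRESHOLD = 0.2  # health level at which the component is considered failed

          def estimate_health(measurement, prior, gain=0.5):
              """Toy state estimator: blend the prior estimate with a noisy measurement."""
              return prior + gain * (measurement - prior)

          def predict_remaining_cycles(health, samples=5000):
              """Toy Monte Carlo predictor: sample wear rates, return mean cycles to threshold."""
              draws = []
              for _ in range(samples):
                  rate = max(1e-6, random.gauss(WEAR_RATE, RATE_SIGMA))
                  draws.append((health - FAILURE_THRESHOLD) / rate)
              return sum(draws) / len(draws)

          health_estimate = 1.0
          for cycle in range(1, 31):                             # simulate 30 usage cycles
              true_health = 1.0 - WEAR_RATE * cycle
              measured = true_health + random.gauss(0.0, 0.01)   # noisy sensor reading
              health_estimate = estimate_health(measured, health_estimate)

          print(f"estimated health after 30 cycles: {health_estimate:.3f}")
          print(f"predicted remaining cycles to failure: {predict_remaining_cycles(health_estimate):.0f}")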
      The Prognostic Support Tools are a set of features that aid with the development, tuning, benchmarking, evaluation, and visualization of prognostic models and Prognostics Engine results (i.e., predictions). Like the Prognostic Engine, the support tools work equally with models distributed with ProgPy or custom models created by users. A user creating a model immediately has access to a wide array of tools to help them with their task. 
      Detailed documentation, examples, and tutorials of all these features are available to help users learn and use the software tools. 
      These three innovations of ProgPy implement architectures and widely used prognostics and health management functionality, supporting both researchers and practitioners. ProgPy combines technologies from across NASA projects and mission directorates, and external partners into a single package to support NASA missions and U.S. industries. Its innovative framework makes it applicable to a wide range of applications, providing enhanced capabilities not available in other, more limited, state-of-the-art software packages. 
      ProgPy offers unique features and a breadth and depth of unmatched capabilities when compared to other software in the field. It is novel in that it equips users with the tools necessary to do prognostics in their applications as-is, eliminating the need to adapt their use case to comply with the software available. This feature of ProgPy is an improvement upon the current state-of-the-art, as other prognostics software are often developed for specific use cases or based on a singular modeling method (Dadfarina and Drozdov, 2013; Davidson-Pilon, 2022; Schreiber, 2017). ProgPy’s unique approach opens a world of possibilities for researchers, practitioners, and developers in the field of prognostics and health management, as well as NASA missions and U.S. industries. 
      ProgPy Team: 
      Adam J. Sweet, Aditya Tummala, Chetan Shrikant Kulkarni, Christopher Allen Teubert, Jason Watkins, Katelyn Jarvis Griffith, Matteo Corbetta, Matthew John Daigle, Miryam Stautkalns, Portia Banerjee
      Details
      Last Updated: Jul 31, 2024
      Editor: Bill Keeter
      Related Terms: Office of Technology, Policy and Strategy (OTPS)
      View the full article
    • By NASA
      4 min read
      New NASA Software Simulates Science Missions for Observing Terrestrial Freshwater
      A map describing freshwater accumulation (blue) and loss (red), using data from NASA’s Gravity Recovery and Climate Experiment (GRACE) satellites. A new Observational System Simulation Experiment (OSSE) will help researchers design science missions dedicated to monitoring terrestrial freshwater storage. Image Credit: NASA
      From radar instruments smaller than a shoebox to radiometers the size of a milk carton, there are more tools available to scientists today for observing complex Earth systems than ever before. But this abundance of available sensors creates its own unique challenge: how can researchers organize these diverse instruments in the most efficient way for field campaigns and science missions?
      To help researchers maximize the value of science missions, Bart Forman, an Associate Professor in Civil and Environmental Engineering at the University of Maryland, and a team of researchers from the Stevens Institute of Technology and NASA’s Goddard Space Flight Center, prototyped an Observational System Simulation Experiment (OSSE) for designing science missions dedicated to monitoring terrestrial freshwater storage.
      “You have different sensor types. You have radars, you have radiometers, you have lidars – each is measuring different components of the electromagnetic spectrum,” said Forman. “Different observations have different strengths.”
      Terrestrial freshwater storage describes the integrated sum of freshwater spread across Earth’s snow, soil moisture, vegetation canopy, surface water impoundments, and groundwater. It’s a dynamic system, one that defies traditional, static systems of scientific observation.
      Forman’s project builds on prior technology advancements he achieved during an earlier Earth Science Technology Office (ESTO) project, in which he developed an observation system simulation experiment for mapping terrestrial snow. 
      It also relies heavily on innovations pioneered by NASA’s Land Information System (LIS) and NASA’s Trade-space Analysis Tool for Designing Constellations (TAT-C), two modeling tools that began as ESTO investments and quickly became staples within the Earth science community.
      Forman’s tool incorporates these modeling programs into a new system that provides researchers with a customizable platform for planning dynamic observation missions that include a diverse collection of spaceborne data sets.
      In addition, Forman’s tool also includes a “dollars-to-science” cost estimate tool that allows researchers to assess the financial risks associated with a proposed mission.
      Together, all of these features provide scientists with the ability to link observations, data assimilation, uncertainty estimation, and physical models within a single, integrated framework.
      “We were taking a land surface model and trying to merge it with different space-based measurements of snow, soil moisture, and groundwater to see if there was an optimal combination to give us the most bang for our scientific buck,” explained Forman.
      While Forman’s tool isn’t the first information system dedicated to science mission design, it does include a number of novel features. In particular, its ability to integrate observations from spaceborne passive optical radiometers, passive microwave radiometers, and radar sources marks a significant technology advancement.
      Forman explained that while these indirect observations of freshwater include valuable information for quantifying freshwater, they also each contain their own unique error characteristics that must be carefully integrated with a land surface model in order to provide estimates of geophysical variables that scientists care most about.
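      A toy version of that idea is sketched below: a synthetic "truth" for a storage state is generated, two hypothetical sensors with different error characteristics and revisit rates produce simulated observations, and each is assimilated into a simple, biased model so their value can be compared against the known truth. The sensor names, error values, and nudging scheme are invented for illustration and are not part of the LIS/TAT-C framework.

          # Toy observing-system simulation experiment: known truth, simulated sensors,
          # simple assimilation, and a skill score for each sensor configuration.
          import random

          random.seed(0)
          days = 120
          truth = []
          state = 50.0
          for _ in range(days):
              state += random.gauss(0.3, 1.0)       # synthetic "true" storage evolves day to day
              truth.append(state)

          sensors = {                                # hypothetical (observation error std dev, revisit days)
              "radiometer": (4.0, 3),
              "radar": (2.0, 8),
          }

          def run_assimilation(obs_sigma, revisit):
              """Nudge a biased model toward simulated observations; return RMS error vs. truth."""
              model = 40.0                           # model starts with a bias
              sq_err = 0.0
              for day, true_val in enumerate(truth):
                  model += 0.3                       # the model's own (imperfect) forecast of daily change
                  if day % revisit == 0:             # an observation is available on revisit days
                      obs = true_val + random.gauss(0.0, obs_sigma)
                      gain = 1.0 / (1.0 + obs_sigma) # crude weighting: a noisier sensor gets less weight
                      model += gain * (obs - model)
                  sq_err += (model - true_val) ** 2
              return (sq_err / len(truth)) ** 0.5

          for name, (sigma, revisit) in sensors.items():
              print(f"{name}: RMS error vs. truth = {run_assimilation(sigma, revisit):.2f}")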
      Forman’s software also combines LIS and TAT-C within a single software framework, extending the capabilities of both systems to create superior descriptions of global terrestrial hydrology.
      Indeed, Forman stressed the importance of having a large, diverse team that features experts from across the Earth science and modeling communities.
      “It’s nice to be part of a big team because these are big problems, and I don’t know the answers myself. I need to find a lot of people that know a lot more than I do and get them to sort of jump in and roll their sleeves up and help us. And they did,” said Forman.
      Having created an observation system simulation experiment capable of incorporating dynamic, space-based observations into mission planning models, Forman and his team hope that future researchers will build on their work to create an even better mission modeling program.
      For example, while Forman and his team focused on generating mission plans for existing sensors, an expanded version of their software could help researchers determine how they might use future sensors to gather new data.
      “With the kinds of things that TAT-C can do, we can create hypothetical sensors. What if we double the swath width? If it could see twice as much space, does that give us more information? Simultaneously, we can ask questions about the impact of different error characteristics for each of these hypothetical sensors and explore the corresponding tradeoff,” said Forman.
      PROJECT LEAD
      Barton Forman, University of Maryland, Baltimore County
      SPONSORING ORGANIZATION
      NASA’s Advanced Information Systems Technology (AIST) program, a part of NASA’s Earth Science Technology Office (ESTO), funded this project.








      Details
      Last Updated: Mar 25, 2024
      Related Terms: Earth Science; Earth Science Technology Office; GRACE (Gravity Recovery And Climate Experiment); Science-enabling Technology; Technology Highlights
      View the full article
    • By NASA
      The software discipline has broad involvement across each of the NASA Mission Directorates. Some recent discipline focus and development areas are highlighted below, along with a look at the Software Technical Discipline Team’s (TDT) approach to evolving discipline best practices toward the future.

      Understanding Automation Risk

      Software creates automation, and reliance on that automation is increasing the amount of software in NASA programs. This year, the software team examined historical software incidents in aerospace to characterize how, why, and where software or automation is most likely to fail. The goal is to better engineer software to minimize the risk of errors, improve software processes, and better architect software for resilience to errors (or improve fault tolerance should errors occur).


      Some key findings indicate that software more often does the wrong thing than simply crashes. Rebooting was found to be ineffective when software behaves erroneously. Unexpected behavior was mostly attributed to the code or logic itself, and about half of those instances were the result of missing software—software not present due to unanticipated situations or missing requirements. This may indicate that even fully tested software is exposed to this significant class of error. Data misconfiguration was a sizeable factor that continues to grow with the advent of more modern data-driven systems. A final, subjective category was “unknown unknowns”—things that could not have been reasonably anticipated. These accounted for 19% of the software incidents studied.

      The software team is using and sharing these findings to improve best practices. More emphasis is being placed on the importance of complete requirements, off-nominal test campaigns, and “test as you fly” using real hardware in the loop. When designing systems for fault tolerance, more consideration should be given to detecting and correcting for erroneous behavior versus just checking for a crash. Less confidence should be placed on rebooting as an effective recovery strategy. Backup strategies for automations should be employed for critical applications—considering the historic prevalence of absent software and unknown unknowns. More information can be found in NASA/TP-20230012154, Software Error Incident Categorizations in Aerospace.
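      A minimal sketch of that guidance, with hypothetical function names and limits, is shown below: the automation's output is checked for reasonableness and a simple backup strategy takes over when the primary result is rejected, rather than waiting for a crash or relying on a reboot.

          # Illustrative sketch: validate an automation's *output* and fall back to a safe
          # backup, instead of assuming a fault will show up as a crash. Names and limits
          # are hypothetical.
          def primary_thruster_command(sensor_reading: float) -> float:
              """Stand-in for a complex automation that may silently produce a wrong answer."""
              return sensor_reading * 0.8            # imagine a subtle logic or data error here

          def backup_thruster_command(sensor_reading: float) -> float:
              """Simple, well-understood backup strategy used when the primary output is rejected."""
              return min(max(sensor_reading * 0.5, 0.0), 10.0)

          def command_is_reasonable(cmd: float) -> bool:
              """Domain-specific sanity check: the command must stay inside physical limits."""
              return 0.0 <= cmd <= 10.0

          def commanded_thrust(sensor_reading: float) -> float:
              cmd = primary_thruster_command(sensor_reading)
              if not command_is_reasonable(cmd):      # detect erroneous behavior, not just a crash
                  cmd = backup_thruster_command(sensor_reading)
              return cmd

          print(commanded_thrust(5.0))    # primary output accepted
          print(commanded_thrust(50.0))   # primary output out of bounds; backup used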

      Employing AI and Machine Learning Techniques

      The rise of artificial intelligence (AI) and machine learning (ML) techniques has allowed NASA to examine data in new ways that were not previously possible. While NASA has been employing autonomy since its inception, AI/ML techniques provide teams the ability to expand the use of autonomy outside of previous bounds. The Agency has been working on AI ethics frameworks and examining standards, procedures, and practices, taking security implications into account. While AI/ML generally uses nondeterministic statistical algorithms that currently limit its use in safety-critical flight applications, it is used by NASA in more than 400 AI/ML projects aiding research and science. The Agency also uses AI/ML Communities of Practice for sharing knowledge across the centers. The TDT surveyed AI/ML work across the Agency and summarized it for trends and lessons.

      Common usages of AI/ML include image recognition and identification. NASA Earth science missions use AI/ML to identify marine debris, measure cloud thickness, and identify wildfire smoke (examples are shown in the satellite images below), reducing the workload on personnel. AI/ML is also being applied to predict atmospheric physics. One example is hurricane track and intensity prediction. Another is predicting planetary boundary layer thickness and comparing it against measurements; those predictions are being fused with live data to improve performance over previous boundary layer models.
      Examples of how NASA uses AI/ML: satellite images of clouds with estimation of cloud thickness (left) and wildfire detection (right). (NASA-HDBK-2203, NASA Software Engineering and Assurance Handbook, https://swehb.nasa.gov)

      The Code Analysis Pipeline: Static Analysis Tool for IV&V and Software Quality Improvement
      The Code Analysis Pipeline (CAP) is an open-source tool architecture that supports software development and assurance activities, improving overall software quality. The Independent Verification and Validation (IV&V) Program is using CAP to support software assurance on the Human Landing System, Gateway, Exploration Ground Systems, Orion, and Roman. CAP supports the configuration and automated execution of multiple static code analysis tools to identify potential code defects, generate code metrics that indicate potential areas of quality concern (e.g., cyclomatic complexity), and execute any other tool that analyzes or processes source code. The TDT is focused on integrating Modified Condition/Decision Coverage analysis support for coverage testing. Results from tools are consolidated into a central database and presented in context through a user interface that supports review, query, reporting, and analysis of results as the code matures.

      The tool architecture is based on an industry standard DevOps approach for continuous building of source code and running of tools. CAP integrates with GitHub for source code control, uses Jenkins to support automation of analysis builds, and leverages Docker to create standard and custom build environments that support unique mission needs and use cases.
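      The sketch below illustrates the consolidation pattern described here (run several analyzers over a checkout and collect their findings in one database) in a simplified form. The specific tool command lines, output parsing, and database schema are assumptions made for the example and are not CAP's actual configuration.

          # Simplified sketch of the CAP-style pattern: run static analysis tools and
          # consolidate findings in a central database. Commands and schema are assumed.
          import sqlite3
          import subprocess

          TOOLS = {
              # tool name -> command line; output is assumed to contain "file:line:message" findings
              "cppcheck": ["cppcheck", "--quiet", "--template={file}:{line}:{message}", "src/"],
              "flake8": ["flake8", "--format=%(path)s:%(row)d:%(text)s", "src/"],
          }

          def run_tool(name, cmd):
              """Run one analyzer and parse its output into (tool, file, line, message) rows."""
              try:
                  proc = subprocess.run(cmd, capture_output=True, text=True)
              except FileNotFoundError:
                  print(f"{name} not installed; skipping")
                  return []
              rows = []
              for line in (proc.stdout + proc.stderr).splitlines():
                  parts = line.split(":", 2)
                  if len(parts) == 3 and parts[1].isdigit():
                      rows.append((name, parts[0], int(parts[1]), parts[2].strip()))
              return rows

          def main():
              db = sqlite3.connect("findings.db")
              db.execute("CREATE TABLE IF NOT EXISTS findings (tool TEXT, file TEXT, line INTEGER, message TEXT)")
              for name, cmd in TOOLS.items():
                  db.executemany("INSERT INTO findings VALUES (?, ?, ?, ?)", run_tool(name, cmd))
              db.commit()
              for tool, count in db.execute("SELECT tool, COUNT(*) FROM findings GROUP BY tool"):
                  print(f"{tool}: {count} findings")

          if __name__ == "__main__":
              main()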

      Improving Software Process & Sharing Best Practices

      The TDT has captured best practice knowledge from across the centers in NPR 7150.2, NASA Software Engineering Requirements, and NASA-HDBK-2203, NASA Software Engineering and Assurance Handbook (https://swehb.nasa.gov). Two APPEL training classes have been developed and shared with several organizations to give them a foundation in the NPR and in software engineering management. The TDT established several subteams to help programs/projects as they tackle software architecture, project management, requirements, cybersecurity, testing and verification, and programmable logic controllers. Many of these teams have developed guidance and best practices, which are documented in NASA-HDBK-2203 and on the NASA Engineering Network.

      NPR 7150.2 and the handbook outline best practices over the full lifecycle for all NASA software. This includes requirements development, architecture, design, implementation, and verification. Also covered, and equally important, are the supporting activities and functions that improve quality, including software assurance, safety, configuration management, reuse, and software acquisition. Rationale and guidance for the requirements are addressed in the handbook, which is internally and externally accessible and regularly updated as new information, tools, and techniques emerge.

      The Software TDT deputies train software engineers, systems engineers, chief engineers, and project managers on the NPR requirements and their role in ensuring these requirements are implemented across NASA centers. Additionally, the TDT deputies train software technical leads on many of the advanced management aspects of a software engineering effort, including planning, cost estimating, negotiating, and handling change management.
      View the full article