AURA's OPUS Software Licensed to Celera Genomics


HubbleSite


The Association of Universities for Research in Astronomy, Inc. (AURA) has reached an agreement with Celera Genomics Group, an Applera Corporation business in Rockville, MD, on the use of AURA's Operational Pipeline Unified Systems (OPUS) software package. Originally designed for use in the Hubble Space Telescope program, OPUS is being used by Celera to process bioinformatics data. OPUS was developed by the Space Telescope Science Institute, which is managed by AURA under contract with NASA's Goddard Space Flight Center. The package processes astronomical data generated by the Hubble Space Telescope for use by researchers studying the universe, and it has been widely employed in other space observatories and NASA projects. Facing similar needs in handling its own large databases, Celera is licensing OPUS from AURA to help process data from its proteomics and genomics projects.

View the full article


  • Similar Topics

    • By NASA
      3 min read
      Preparations for Next Moonwalk Simulations Underway (and Underwater)
      NASA Johnson Space Center: ORDEM represents the state of the art in orbital debris models intended for engineering analysis. It is a data-driven model, relying on large quantities of radar, optical, in situ, and laboratory measurement data. When released, it was the first software code to include a model for different orbital debris material densities, population models from low Earth orbit (LEO) all the way to geosynchronous orbit (GEO), and uncertainties in each debris population.
      ORDEM allows users to compute the orbital debris flux on any satellite in Earth orbit.  This allows satellite designers to mitigate possible orbital debris damage to a spacecraft and its instruments using shielding and design choices, thereby extending the useful life of the mission and its experiments.  The model also has a mode that simulates debris telescope/radar observations from the ground.  Both it and the spacecraft flux mode can be used to design experiments to measure the meteoroid and orbital debris environments. 
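      As a rough illustration of how a flux figure feeds into such an analysis (a back-of-the-envelope sketch, not ORDEM's interface; the flux, area, and mission-duration values below are invented), the expected number of impacts on a surface is flux times exposed area times exposure time, and treating impacts as a Poisson process gives the probability of at least one hit:

```python
# Illustrative only: how a debris flux (impacts per square meter per year),
# such as a model like ORDEM might produce, translates into impact risk.
# The numbers and function names are hypothetical, not ORDEM's interface.
import math

def impact_probability(flux_per_m2_yr, area_m2, years):
    """P(at least one impact), assuming impacts follow a Poisson process
    with expected count N = flux * area * time."""
    expected_impacts = flux_per_m2_yr * area_m2 * years
    return 1.0 - math.exp(-expected_impacts)

# Example: 10 m^2 of exposed spacecraft surface, a 5-year mission, and an
# assumed flux of 1e-4 impacts/m^2/yr above some damaging size threshold.
print(f"P(>=1 impact) = {impact_probability(1e-4, 10.0, 5.0):.4f}")  # ~0.0050
```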
      ORDEM is used heavily in the hypervelocity protection community, the community that designs, builds, and tests shielding for spacecraft and rocket upper stages. The fidelity of the ORDEM model allows shielding to be optimized to balance mission success criteria, risk posture, and cost considerations.
      As both government and civilian actors continue to use the space environment for security, science, and the economy, it is important to track debris risks in increasingly crowded orbits and to minimize damage to space assets so that these missions can continue to operate safely. ORDEM is NASA’s primary tool for computing and mitigating these risks.
      ORDEM is used by NASA, the Department of Defense, and other U.S. government agencies, directly or indirectly (via the Debris Assessment Software, MSC-26690-1), to evaluate collision risk for large trackable objects, as well as other mission-ending risks associated with small debris (such as tank ruptures or wiring cuts). In addition to its use as an engineering tool, ORDEM has been used by NASA and other missions in the conceptual design phase to analyze the frequency of orbital debris impacts on potential in situ sensors that could detect debris too small to be observed from ground-based assets.
      Commercial and academic users of ORDEM include Boeing, SpaceX, Northrop Grumman, the University of Colorado, and California Polytechnic State University, among many others. These end users, like the government users discussed above, use the software to (1) directly determine potential hazards to spaceflight resulting from flying through the debris environment, and (2) research how the debris environment varies over time to better understand what behaviors may be able to mitigate the growth of the environment.
      The quality and quantity of data available to the NASA Orbital Debris Program Office (ODPO) for the building, verification, and validation of the ORDEM model is greater than for any other entity that performs similar research. Many of the models used by other research and engineering organizations are derived from the models that ODPO has published after developing them for use in ORDEM.   
      ORDEM Team
      Alyssa Manis, Andrew B. Vavrin, Brent A. Buckalew, Christopher L. Ostrom, Heather Cowardin, Jer-chyi Liou, John H. Seago, John Nicolaus Opiela, Mark J. Matney, Ph.D., Matthew Horstman, Phillip D. Anz-Meador, Ph.D., Quanette Juarez, Paula H. Krisko, Ph.D., Yu-Lin Xu, Ph.D.
      Details: Last Updated Jul 31, 2024. Editor: Bill Keeter
      Related Terms: Office of Technology, Policy and Strategy (OTPS)
      View the full article
    • By NASA
      4 min read
      Preparations for Next Moonwalk Simulations Underway (and Underwater)
      NASA Ames Research Center: ProgPy is an open-source Python package supporting research and development of prognostics, health management, and predictive maintenance tools.  
      Prognostics is the science of prediction, and the field of Prognostics and Health Management (PHM) aims to estimate the current physical health of a system (e.g., a motor or a battery) and to predict how the system will degrade with use. The results of prognostics are used across industries to prevent failure, preserve safety, and reduce maintenance costs.
      Prognostics, and prediction in general, is a difficult and complex undertaking. Accurate prediction requires a model of the performance and degradation of complex systems as a function of time and use, estimation and management of uncertainty, representation of system use profiles, and the ability to represent the impact of neighboring systems and the environment. Any small discrepancy between the model and the actual system is compounded repeatedly, producing large variation in the resulting prediction. For this reason, prognostics requires complex and capable algorithms, models, and software systems.
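      A toy calculation (the degradation rates and horizon below are invented for illustration) makes the compounding concrete: a 1% error in an assumed per-cycle degradation rate grows into roughly a 5% relative error in the predicted health state after 500 cycles, which can shift a predicted end of life considerably.

```python
# Minimal sketch of error compounding in prognostics. Illustration values only.
true_rate, model_rate = 0.0100, 0.0099   # per-cycle degradation fractions
health_true = health_model = 1.0
for _ in range(500):
    health_true *= 1.0 - true_rate       # what the system actually does
    health_model *= 1.0 - model_rate     # what the slightly-wrong model predicts

# The 1% rate error has compounded into a ~5% relative state error.
print(health_true, health_model, health_model / health_true - 1.0)
```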
      The ProgPy architecture can be thought of as three innovations: the Prognostic Models, the Prognostic Engine, and the Prognostic Support Tools.
      The first part of the ProgPy innovation is the Prognostic Models. The model describes the prognostic behavior of the specific system of interest. ProgPy’s architecture includes a spectrum of modeling methodologies, ranging from physics-based models to entirely data-driven or hybrid techniques. Most users develop their own physics-based model, train one of the ProgPy data-driven models (e.g., the neural-network models), or use some hybrid of the two. A set of mature models for systems such as batteries, electric motors, pumps, and valves is distributed with ProgPy. For these parameterized models, users tune the model to their specific system using the model-tuning tools. The Prognostics Engine and Support Tools are built on top of these models, so a user who creates a new model can immediately take advantage of the other features of ProgPy.
      The Prognostic Engine is the most important part of ProgPy and forms the backbone of the software. The Prognostics Engine uses a Prognostics Model to perform the key functions of prognostics and health state estimation. The value in this design is that the Prognostics Engine can use any ProgPy model, whether it be a model distributed with ProgPy or a custom model created by users, to perform health state estimation and prognostics in a configurable way. The components of the Prognostics Engine are extendable, allowing users to implement their own state estimation or prediction algorithm for use with ProgPy models or use one distributed with ProgPy. Given the Prognostics Engine and a model, users can start performing prognostics for their application. This flexible and extendable framework for performing prognostics is truly novel and enables the widespread impact of ProgPy in the prognostic community. 
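      The sketch below shows that estimate-then-predict pattern in miniature. It is not ProgPy's actual API (see the ProgPy documentation for the real interfaces); the degradation model, estimator gain, threshold, and load values are all hypothetical.

```python
# Generic state-estimation + Monte Carlo prediction loop. Hypothetical model.
import random

def step(health, load, dt):
    return health - 0.001 * load * dt    # toy model: health falls with use

def estimate(prior, measurement, gain=0.5):
    # toy state estimation: blend the model's prior with a noisy measurement
    return prior + gain * (measurement - prior)

def predict_eol(health, future_load, dt=1.0, threshold=0.2, n_samples=100):
    # propagate uncertain copies of the state to the failure threshold,
    # yielding a distribution of end-of-life times (here reduced to a mean)
    eols = []
    for _ in range(n_samples):
        h, t = health + random.gauss(0.0, 0.02), 0.0
        while h > threshold:
            h, t = step(h, future_load, dt), t + dt
        eols.append(t)
    return sum(eols) / len(eols)

health = estimate(prior=0.85, measurement=0.83)
print("mean predicted end of life:", predict_eol(health, future_load=2.0))
```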
      The Prognostic Support Tools are a set of features that aid with the development, tuning, benchmarking, evaluation, and visualization of prognostic models and Prognostics Engine results (i.e., predictions). Like the Prognostic Engine, the support tools work equally with models distributed with ProgPy or custom models created by users. A user creating a model immediately has access to a wide array of tools to help them with their task. 
      Detailed documentation, examples, and tutorials of all these features are available to help users learn and use the software tools. 
      These three innovations of ProgPy implement architectures and widely used prognostics and health management functionality, supporting both researchers and practitioners. ProgPy combines technologies from across NASA projects and mission directorates, and external partners into a single package to support NASA missions and U.S. industries. Its innovative framework makes it applicable to a wide range of applications, providing enhanced capabilities not available in other, more limited, state-of-the-art software packages. 
      ProgPy offers unique features and an unmatched breadth and depth of capabilities compared to other software in the field. It is novel in that it equips users with the tools necessary to do prognostics in their applications as-is, eliminating the need to adapt a use case to fit the available software. This is an improvement on the current state of the art, as other prognostics software is often developed for a specific use case or based on a single modeling method (Dadfarina and Drozdov, 2013; Davidson-Pilon, 2022; Schreiber, 2017). ProgPy’s approach opens a world of possibilities for researchers, practitioners, and developers in the field of prognostics and health management, as well as for NASA missions and U.S. industries.
      ProgPy Team:
      Adam J. Sweet, Aditya Tummala, Chetan Shrikant Kulkarni, Christopher Allen Teubert, Jason Watkins, Katelin Jarvis Griffith, Matteo Corbetta, Matthew John Daigle, Miryam Stautkalns, Portia Banerjee
      Details: Last Updated Jul 31, 2024. Editor: Bill Keeter
      Related Terms: Office of Technology, Policy and Strategy (OTPS)
      View the full article
    • By NASA
      4 min read
      New NASA Software Simulates Science Missions for Observing Terrestrial Freshwater
      A map describing freshwater accumulation (blue) and loss (red), using data from NASA’s Gravity Recovery and Climate Experiment (GRACE) satellites. A new Observational System Simulation Experiment (OSSE) will help researchers design science missions dedicated to monitoring terrestrial freshwater storage. Image Credit: NASA

      From radar instruments smaller than a shoebox to radiometers the size of a milk carton, there are more tools available to scientists today for observing complex Earth systems than ever before. But this abundance of available sensors creates its own unique challenge: how can researchers organize these diverse instruments in the most efficient way for field campaigns and science missions?
      To help researchers maximize the value of science missions, Bart Forman, an Associate Professor in Civil and Environmental Engineering at the University of Maryland, and a team of researchers from the Stevens Institute of Technology and NASA’s Goddard Space Flight Center, prototyped an Observational System Simulation Experiment (OSSE) for designing science missions dedicated to monitoring terrestrial freshwater storage.
      “You have different sensor types. You have radars, you have radiometers, you have lidars – each is measuring different components of the electromagnetic spectrum,” said Forman. “Different observations have different strengths.”
      Terrestrial freshwater storage describes the integrated sum of freshwater spread across Earth’s snow, soil moisture, vegetation canopy, surface water impoundments, and groundwater. It’s a dynamic system, one that defies traditional, static systems of scientific observation.
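      In equation form (a standard bookkeeping identity with generic component symbols, not anything specific to this project), terrestrial water storage at a location is
      TWS = SWE + SM + CAN + SW + GW,
      where SWE is snow water equivalent, SM is soil moisture, CAN is water held in the vegetation canopy, SW is surface water impoundments, and GW is groundwater, each expressed as an equivalent depth of water.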
      Forman’s project builds on prior technology advancements he achieved during an earlier Earth Science Technology Office (ESTO) project, in which he developed an observation system simulation experiment for mapping terrestrial snow. 
      It also relies heavily on innovations pioneered by NASA’s Land Information System (LIS) and NASA’s Trade-space Analysis Tool for Designing Constellations (TAT-C), two modeling tools that began as ESTO investments and quickly became staples within the Earth science community.
      Forman’s tool incorporates these modeling programs into a new system that provides researchers with a customizable platform for planning dynamic observation missions that include a diverse collection of spaceborne data sets.
      In addition, Forman’s tool includes a “dollars-to-science” cost-estimate feature that lets researchers assess the financial risks associated with a proposed mission.
      Together, all of these features provide scientists with the ability to link observations, data assimilation, uncertainty estimation, and physical models within a single, integrated framework.
      “We were taking a land surface model and trying to merge it with different space-based measurements of snow, soil moisture, and groundwater to see if there was an optimal combination to give us the most bang for our scientific buck,” explained Forman.
      While Forman’s tool isn’t the first information system dedicated to science mission design, it does include a number of novel features. In particular, its ability to integrate observations from spaceborne passive optical radiometers, passive microwave radiometers, and radar sources marks a significant technology advancement.
      Forman explained that while these indirect observations of freshwater include valuable information for quantifying freshwater, they also each contain their own unique error characteristics that must be carefully integrated with a land surface model in order to provide estimates of geophysical variables that scientists care most about.
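      A minimal sketch of that error-weighting idea (the sensor values and error variances below are invented, and real assimilation systems such as LIS are far richer): merge estimates by inverse-variance weighting, so the noisier instrument simply counts for less.

```python
# Inverse-variance weighting of two indirect estimates of the same quantity.
# Values are illustrative only.
def merge(estimates):                      # estimates: list of (value, variance)
    weights = [1.0 / var for _, var in estimates]
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    return value, 1.0 / sum(weights)       # merged value and merged variance

# e.g., a radiometer-derived estimate with large error vs. a radar-derived
# estimate with small error (units: mm of equivalent water depth)
print(merge([(30.0, 9.0), (24.0, 1.0)]))   # ~ (24.6, 0.9): pulled toward radar
```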
      Forman’s software also combines LIS and TAT-C within a single software framework, extending the capabilities of both systems to create superior descriptions of global terrestrial hydrology.
      Indeed, Forman stressed the importance of having a large, diverse team that features experts from across the Earth science and modeling communities.
      “It’s nice to be part of a big team because these are big problems, and I don’t know the answers myself. I need to find a lot of people that know a lot more than I do and get them to sort of jump in and roll their sleeves up and help us. And they did,” said Forman.
      Having created an observation system simulation experiment capable of incorporating dynamic, space-based observations into mission planning models, Forman and his team hope that future researchers will build on their work to create an even better mission modeling program.
      For example, while Forman and his team focused on generating mission plans for existing sensors, an expanded version of their software could help researchers determine how they might use future sensors to gather new data.
      “With the kinds of things that TAT-C can do, we can create hypothetical sensors. What if we double the swath width? If it could see twice as much space, does that give us more information? Simultaneously, we can ask questions about the impact of different error characteristics for each of these hypothetical sensors and explore the corresponding tradeoff,” said Forman.
      PROJECT LEAD
      Barton Forman, University of Maryland, Baltimore County
      SPONSORING ORGANIZATION
      NASA’s Advanced Information Systems Technology (AIST) program, a part of NASA’s Earth Science Technology Office (ESTO), funded this project.
      Details: Last Updated Mar 25, 2024
      Related Terms: Earth Science, Earth Science Technology Office, GRACE (Gravity Recovery And Climate Experiment), Science-enabling Technology, Technology Highlights
      View the full article
    • By NASA
      The software discipline has broad involvement across each of the NASA Mission Directorates. Some recent discipline focus and development areas are highlighted below, along with a look at the Software Technical Discipline Team’s (TDT) approach to evolving discipline best practices toward the future.

      Understanding Automation Risk

      Software creates automation, and reliance on that automation is increasing the amount of software in NASA programs. This year, the software team examined historical software incidents in aerospace to characterize how, why, and where software or automation is most likely to fail. The goal is to better engineer software to minimize the risk of errors, improve software processes, and better architect software for resilience to errors (or to improve fault tolerance should errors occur).


      Some key findings indicate that software more often does the wrong thing than simply crashing. Rebooting was found to be ineffective when software behaves erroneously. Unexpected behavior was mostly attributed to the code or logic itself, and about half of those instances were the result of missing software—software not present because of unanticipated situations or missing requirements. This may indicate that even fully tested software is exposed to this significant class of error. Data misconfiguration was a sizeable factor that continues to grow with the advent of more modern data-driven systems. A final, subjective category assessed was “unknown unknowns”—things that could not reasonably have been anticipated. These accounted for 19% of the software incidents studied.

      The software team is using and sharing these findings to improve best practices. More emphasis is being placed on the importance of complete requirements, off-nominal test campaigns, and “test as you fly” using real hardware in the loop. When designing systems for fault tolerance, more consideration should be given to detecting and correcting for erroneous behavior versus just checking for a crash. Less confidence should be placed on rebooting as an effective recovery strategy. Backup strategies for automations should be employed for critical applications—considering the historic prevalence of absent software and unknown unknowns. More information can be found in NASA/TP-20230012154, Software Error Incident Categorizations in Aerospace.
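      A schematic example of that guidance (the command envelope and fallback below are hypothetical): wrap an automation's output in plausibility checks so that erroneous-but-still-running behavior is caught, not just crashes, and fall back to a state independent of the primary automation.

```python
# Sketch: detect erroneous behavior, not just crashes. Hypothetical values.
def safe_hold():
    return 0.0                              # backup independent of the automation

def checked_command(compute_command, sensors, lo=0.0, hi=100.0):
    try:
        cmd = compute_command(sensors)
    except Exception:
        return safe_hold()                  # the crash path: the easy case
    if not lo <= cmd <= hi:                 # the erroneous-behavior path:
        return safe_hold()                  # ran "fine" but answer is implausible
    return cmd

print(checked_command(lambda s: 1e6, sensors={}))  # -> 0.0, wild output rejected
```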

      Employing AI and Machine Learning Techniques

      The rise of artificial intelligence (AI) and machine learning (ML) techniques has allowed NASA to examine data in new ways that were not previously possible. While NASA has been employing autonomy since its inception, AI/ML techniques provide teams the ability to expand the use of autonomy outside of previous bounds. The Agency has been working on AI ethics frameworks and examining standards, procedures, and practices, taking security implications into account. While AI/ML generally uses nondeterministic statistical algorithms that currently limit its use in safety-critical flight applications, it is used by NASA in more than 400 AI/ML projects aiding research and science. The Agency also uses AI/ML Communities of Practice for sharing knowledge across the centers. The TDT surveyed AI/ML work across the Agency and summarized it for trends and lessons.

      Common uses of AI/ML include image recognition and identification. NASA Earth science missions use AI/ML to identify marine debris, measure cloud thickness, and identify wildfire smoke (examples are shown in the satellite images below); this reduces the workload on personnel. Many applications of AI/ML are used to predict atmospheric physics. One example is hurricane track and intensity prediction. Another is predicting planetary boundary layer thickness and comparing it against measurements; those predictions are being fused with live data to improve performance over previous boundary layer models.
      Examples of how NASA uses AI/ML: satellite images of clouds with estimation of cloud thickness (left) and wildfire detection (right). NASA-HDBK-2203, NASA Software Engineering and Assurance Handbook (https://swehb.nasa.gov)

      The Code Analysis Pipeline: Static Analysis Tool for IV&V and Software Quality Improvement
      The Code Analysis Pipeline (CAP) is an open-source tool architecture that supports software development and assurance activities, improving overall software quality. The Independent Verification and Validation (IV&V) Program is using CAP to support software assurance on the Human Landing System, Gateway, Exploration Ground Systems, Orion, and Roman. CAP supports the configuration and automated execution of multiple static code analysis tools to identify potential code defects, generate code metrics that indicate potential areas of quality concern (e.g., cyclomatic complexity), and execute any other tool that analyzes or processes source code. The TDT is focused on integrating Modified Condition/Decision Coverage analysis support for coverage testing. Results from tools are consolidated into a central database and presented in context through a user interface that supports review, query, reporting, and analysis of results as the code matures.

      The tool architecture is based on an industry standard DevOps approach for continuous building of source code and running of tools. CAP integrates with GitHub for source code control, uses Jenkins to support automation of analysis builds, and leverages Docker to create standard and custom build environments that support unique mission needs and use cases.
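      In outline, that pattern looks something like the sketch below; the analyzer commands, their JSON output format, and the database schema are placeholders, not CAP's actual tools or schema.

```python
# Sketch of a CAP-like flow: run several analyzers over a checkout and
# consolidate their findings in one database for review and reporting.
import json, sqlite3, subprocess

TOOLS = {
    # hypothetical analyzers that print JSON findings to stdout
    "lint": ["examplelint", "--json", "src/"],
    "style": ["examplestyle", "--format=json", "src/"],
}

def run_and_store(db_path="findings.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS findings "
                "(tool TEXT, file TEXT, line INTEGER, message TEXT)")
    for tool, cmd in TOOLS.items():
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        for finding in json.loads(out or "[]"):
            con.execute("INSERT INTO findings VALUES (?, ?, ?, ?)",
                        (tool, finding["file"], finding["line"], finding["message"]))
    con.commit()
    con.close()
```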

      Improving Software Process & Sharing Best Practices

      The TDT has captured best-practice knowledge from across the centers in NPR 7150.2, NASA Software Engineering Requirements, and in NASA-HDBK-2203, NASA Software Engineering and Assurance Handbook (https://swehb.nasa.gov). Two APPEL training classes have been developed and shared with several organizations to give them a foundation in the NPR and in software engineering management. The TDT established several subteams to help programs/projects as they tackle software architecture, project management, requirements, cybersecurity, testing and verification, and programmable logic controllers. Many of these teams have developed guidance and best practices, which are documented in NASA-HDBK-2203 and on the NASA Engineering Network.

      NPR 7150.2 and the handbook outline best practices over the full lifecycle for all NASA software, including requirements development, architecture, design, implementation, and verification. Also covered, and equally important, are the supporting activities and functions that improve quality, including software assurance, safety, configuration management, reuse, and software acquisition. Rationale and guidance for the requirements are addressed in the handbook, which is accessible both internally and externally and is regularly updated as new information, tools, and techniques are found and put to use.

      The Software TDT deputies train software engineers, systems engineers, chief engineers, and project managers on the NPR requirements and their role in ensuring these requirements are implemented across NASA centers. Additionally, the TDT deputies train software technical leads on many of the advanced management aspects of a software engineering effort, including planning, cost estimating, negotiating, and handling change management.
      View the full article
    • By NASA
      Rae Anderson, subject matter expert for software assurance in the NASA Stennis Safety and Mission Assurance Directorate, is the first employee at NASA’s Stennis Space Center – and one of five civil servants across NASA – to earn the highest distinction in the Safety and Mission Assurance Technical Excellence Program in the discipline of software assurance. The level four certification demonstrates Anderson’s dedication to growing her knowledge and skills to become an effective contributor to the agency’s mission. Credit: NASA/Danny Nowlin

      Rae Anderson never set out to have a career with NASA, but the pursuit of opportunities around her interest in computer science led the Union City, Tennessee native to the agency that explores the secrets of the universe for the benefit of all.
      In turn, Anderson’s desire to expand her knowledge helped her become the first employee at NASA’s Stennis Space Center – and one of five civil servants across NASA – to earn the highest distinction in the Safety and Mission Assurance Technical Excellence Program in the discipline of Software Assurance.
      “I want to be good at my job, so early in my career, I set a goal of reaching this certification,” Anderson said.
      The program’s level four certification demonstrates Anderson’s dedication to growing her knowledge and skills to become an effective contributor to the agency’s mission. As the subject matter expert for software assurance, Anderson serves as a technical lead for a team in the NASA Stennis Safety and Mission Assurance Directorate that supports the center’s work with propulsion testing and autonomous systems.
      Whether it is propulsion testing for NASA’s Artemis mission or autonomous systems work on pace for the first-ever in-flight autonomous systems mission, the work at NASA Stennis relies on software to carry out complex tasks. Anderson’s team reviews software management plans to ensure all requirements are met to conduct the work safely. She helps lead the effort to determine possible hazards and, if any are present, to put controls and mitigations in place to lessen the risk.
      “It’s important to ensure any potential issues are mitigated,” Anderson said. “It is not a guarantee, but it gives a better feeling that all have done what they are supposed to do as far as following the process and that the software is technically sound to move forward. With software, there’s always going to be bugs because there is so much of it. We are there as the checks and balances of engineering as a project moves forward.”
      Before earning a computer science degree from the University of Tennessee at Martin, Anderson grew up 20 minutes from the Tennessee-Kentucky border, far from any NASA center. When she thought about NASA and space, astronauts and the solar system came to mind.
      In the years since her career in software brought her to Slidell, Louisiana, and, 16 years ago, to NASA Stennis, Anderson has discovered that NASA is much more. She has found it to be a place that combines her knowledge of software with a diverse and highly skilled workforce, coming together for the benefit of humanity.
      “Along the way, I have been a part of a team at NASA Stennis that has good people who are going to do what they need to do to accomplish goals, whatever it takes to accomplish it and to do it safely,” Anderson said.
      For information about NASA’s Stennis Space Center, visit:
      Stennis Space Center – NASA
      View the full article