HST Snaps Optical Jet of Quasar 3C 273


HubbleSite


The green (V-band) image (left) shows the field around the quasar 3C 273 (courtesy Matthew Colless, David Schade, and the CFHT). The optical jet can be seen southwest of the quasar. The blue (B-band) image (right) shows the optical jet as seen by the Faint Object Camera (FOC) on board the Hubble Space Telescope. For comparison, the 11×11 arcsec FOC field of view is marked on the ground-based CFHT image. The inset (right) is a maximum-entropy reconstruction of the FOC image. The FOC image is derived from three linearly polarized images, which show that the brightest knots are highly polarized (20%–50%). A letter describing these data appears in the 9 September 1993 issue of Nature.


  • Similar Topics

    • By European Space Agency
      At the International Astronautical Congress (IAC) in Milan this week, ESA signed a contract for Element #1, the first phase of the HydRON Demonstration System. HydRON, which stands for High thRoughput Optical Network, is set to transform the way data-collecting satellites communicate, using laser technology that will allow satellites to connect with each other and ground networks much faster.
    • By European Space Agency
      Image: Juice snaps Moon en route to Earth
    • By NASA
      5 Min Read NASA Optical Navigation Tech Could Streamline Planetary Exploration
      Optical navigation technology could help astronauts and robots find their way using data from cameras and other sensors. Credit: NASA
      As astronauts and rovers explore uncharted worlds, finding new ways of navigating these bodies is essential in the absence of traditional navigation systems like GPS. Optical navigation, which relies on data from cameras and other sensors, can help spacecraft — and in some cases, astronauts themselves — find their way in areas that would be difficult to navigate with the naked eye. Three NASA researchers are pushing optical navigation tech further, making cutting-edge advancements in 3D environment modeling, navigation using photography, and deep learning image analysis. In a dim, barren landscape like the surface of the Moon, it can be easy to get lost. With few landmarks discernable to the naked eye, astronauts and rovers must rely on other means to plot a course.
      As NASA pursues its Moon to Mars missions, encompassing exploration of the lunar surface and the first steps on the Red Planet, finding novel and efficient ways of navigating these new terrains will be essential. That’s where optical navigation comes in — a technology that helps map out new areas using sensor data.
      NASA’s Goddard Space Flight Center in Greenbelt, Maryland, is a leading developer of optical navigation technology. For example, GIANT (the Goddard Image Analysis and Navigation Tool) helped guide the OSIRIS-REx mission to a safe sample collection at asteroid Bennu by generating 3D maps of the surface and calculating precise distances to targets.
      Now, three research teams at Goddard are pushing optical navigation technology even further.
      Virtual World Development
      Chris Gnam, an intern at NASA Goddard, leads development on a modeling engine called Vira that already renders large, 3D environments about 100 times faster than GIANT. These digital environments can be used to evaluate potential landing areas, simulate solar radiation, and more.
      While consumer-grade graphics engines, like those used for video game development, quickly render large environments, most cannot provide the detail necessary for scientific analysis. For scientists planning a planetary landing, every detail is critical.
      Vira can quickly and efficiently render an environment in great detail. Credit: NASA
      “Vira combines the speed and efficiency of consumer graphics modelers with the scientific accuracy of GIANT,” Gnam said. “This tool will allow scientists to quickly model complex environments like planetary surfaces.”
      The Vira modeling engine is being used to assist with the development of LuNaMaps (Lunar Navigation Maps). This project seeks to improve the quality of maps of the lunar South Pole region which are a key exploration target of NASA’s Artemis missions.
      Vira also uses ray tracing to model how light will behave in a simulated environment. While ray tracing is often used in video game development, Vira utilizes it to model solar radiation pressure: the small change in a spacecraft’s momentum caused by sunlight striking it.
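For a sense of the scale of this effect, solar radiation pressure is commonly estimated from the solar flux and the speed of light. The following is a minimal sketch of the standard flat-plate ("cannonball") model, not Vira's actual implementation; the coefficient and geometry values are illustrative assumptions:

```python
# Simplified solar radiation pressure on a flat, Sun-facing surface.
SOLAR_FLUX_1AU = 1361.0   # W/m^2, solar constant at 1 AU
C = 299_792_458.0         # m/s, speed of light

def srp_acceleration(area_m2, mass_kg, cr=1.3, r_au=1.0):
    """Acceleration (m/s^2) from solar radiation pressure.

    cr:   reflectivity coefficient (1 = perfect absorber, 2 = perfect mirror)
    r_au: distance from the Sun in astronomical units
    """
    pressure = SOLAR_FLUX_1AU / C / r_au**2   # N/m^2 at distance r_au
    return cr * pressure * area_m2 / mass_kg
```

For a 1,000 kg spacecraft presenting 10 m² to the Sun at 1 AU, this gives an acceleration on the order of 10⁻⁸ m/s², tiny per second but significant over mission timescales, which is why precise navigation tools model it.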
      Vira can accurately render indirect lighting, which is when an area is still lit up even though it is not directly facing a light source. Credit: NASA
      Find Your Way with a Photo
      Another team at Goddard is developing a tool to enable navigation based on images of the horizon. Andrew Liounis, an optical navigation product design lead, leads the team, working alongside NASA Interns Andrew Tennenbaum and Will Driessen, as well as Alvin Yew, the gas processing lead for NASA’s DAVINCI mission.
      An astronaut or rover using this algorithm could take one picture of the horizon, which the program would compare to a map of the explored area. The algorithm would then output the estimated location of where the photo was taken.
      Using one photo, the algorithm can estimate a location to within hundreds of feet. Current work aims to show that, using two or more pictures, the algorithm can pinpoint a location to within tens of feet.
      “We take the data points from the image and compare them to the data points on a map of the area,” Liounis explained. “It’s almost like how GPS uses triangulation, but instead of having multiple observers to triangulate one object, you have multiple observations from a single observer, so we’re figuring out where the lines of sight intersect.”
      This type of technology could be useful for lunar exploration, where it is difficult to rely on GPS signals for location determination.
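The line-of-sight intersection Liounis describes can be illustrated in two dimensions: given bearings from one observer to two landmarks with known map coordinates, the observer's position is where the two sight lines cross. This is a minimal, hypothetical sketch of that geometry, not the team's actual algorithm:

```python
import math

def locate_from_bearings(p1, b1, p2, b2):
    """Estimate a 2D observer position from bearings (radians, measured
    from the +x axis) to two landmarks p1, p2 with known coordinates.

    Each landmark lies along its bearing as seen from the observer:
        landmark_i = observer + t_i * (cos b_i, sin b_i),  t_i > 0
    so the observer sits at the intersection of the two sight lines.
    """
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 - t1*d1 = p2 - t2*d2 for t1 (a 2x2 linear system).
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        raise ValueError("sight lines are parallel; position is ambiguous")
    rx, ry = p1[0] - p2[0], p1[1] - p2[1]
    t1 = (rx * d2[1] - ry * d2[0]) / det
    return (p1[0] - t1 * d1[0], p1[1] - t1 * d1[1])
```

With only one landmark the observer could be anywhere along a single line, which mirrors the article's point that one photo bounds the location loosely while additional observations tighten the fix.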
      A Visual Perception Algorithm to Detect Craters
      To automate optical navigation and visual perception processes, Goddard intern Timothy Chase is developing a programming tool called the GAVIN (Goddard AI Verification and Integration) Tool Suite.
      This tool helps build deep learning models, a type of machine learning algorithm that is trained to process inputs like a human brain. In addition to developing the tool itself, Chase and his team are building a deep learning algorithm using GAVIN that will identify craters in poorly lit areas, such as regions of the Moon.
      “As we’re developing GAVIN, we want to test it out,” Chase explained. “This model that will identify craters in low-light bodies will not only help us learn how to improve GAVIN, but it will also prove useful for missions like Artemis, which will see astronauts exploring the Moon’s south pole region — a dark area with large craters — for the first time.”
      As NASA continues to explore previously uncharted areas of our solar system, technologies like these could help make planetary exploration at least a little bit simpler. Whether by developing detailed 3D maps of new worlds, navigating with photos, or building deep learning algorithms, the work of these teams could bring the ease of Earth navigation to new worlds.
      By Matthew Kaufman
      NASA’s Goddard Space Flight Center, Greenbelt, Md.
      Last Updated: Aug 07, 2024
      Editor: Rob Garner
      Contact: Rob Garner, rob.garner@nasa.gov
      Location: Goddard Space Flight Center
    • By Amazing Space
      Unveiling Quasar RX J1131-1231: Stunning Discoveries by the James Webb Space Telescope
    • By NASA
      On July 31, 1964, the Ranger 7 spacecraft took this photo, the first image of the Moon captured by a United States spacecraft. Seventeen minutes later, it crashed into the Moon on the northern rim of the Sea of Clouds, as intended. The 4,316 images it sent back helped identify safe Moon landing sites for Apollo astronauts.
      Until 1964, no close-up photographs of the lunar surface existed; Ranger 7 returned the first high-resolution ones. The mission marked a turning point in America’s lunar exploration program, taking the country one step closer to a human Moon landing.
      Learn more about Ranger 7.
      Image credit: NASA/JPL