Experts: AI for space and Earth exploration

How can AI advance exploration of the universe, oceans and Earth?

Perspectives from professors Katie Bouman, Matthew Graham, John Dabiri, and Zachary Ross

Image credit: Caltech

Black holes

Computational imaging specialist Katie Bouman describes how machine learning enables the imaging of black holes and other astrophysical phenomena.

We use machine learning to create images from data collected by the Event Horizon Telescope (EHT), a network of telescopes around the world that work together to collect light from black holes. Together, these telescopes act as a giant virtual telescope. We computationally combine their data and develop algorithms to fill the gaps in coverage. Because of those holes in our virtual telescope’s coverage, a great deal of uncertainty is introduced into the image structure. It took a worldwide team months to characterize this uncertainty, that is, to quantify the range of possible appearances of the black hole, for the M87* black hole image and, more recently, for the image of the Sgr A* black hole at the center of our galaxy.
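
To make the coverage problem concrete, here is a toy numerical sketch, not the EHT pipeline: an interferometer samples the image’s Fourier transform only where telescope baselines exist, so two quite different images can agree with the same sparse measurements. The grid size, coverage fraction, and ring-shaped “sky” below are invented for illustration.

```python
# A toy illustration (not the EHT pipeline) of why sparse coverage matters: an
# interferometer samples the image's Fourier transform only where baselines
# exist, so very different images can agree with the same measurements.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                            # invented grid size

# Toy "sky": a bright ring, a crude stand-in for a black hole shadow.
y, x = np.mgrid[:n, :n]
sky = np.exp(-((np.hypot(x - n / 2, y - n / 2) - 12.0) ** 2) / 4.0)

# Pretend only ~10% of spatial frequencies are measured by the virtual telescope.
mask = rng.random((n, n)) < 0.10
visibilities = np.fft.fft2(sky) * mask            # the "data" the array collects

dirty = np.fft.ifft2(visibilities).real           # naive reconstruction

# A second image that matches the data at every measured frequency but differs
# everywhere else (conjugate symmetry is ignored here for brevity):
alt_vis = visibilities + np.fft.fft2(rng.normal(size=(n, n))) * ~mask
alt = np.fft.ifft2(alt_vis).real

print(np.abs(alt_vis * mask - visibilities).max())   # ~0: identical where measured
print(np.abs(alt - dirty).max())                      # large: very different images
```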

We can harness the image-generating power of machine learning methods called deep generative models to capture the uncertainty in EHT images more effectively. The models we are developing generate the full distribution of possible images that fit the complex data we collect, not just a single image. We are also using these same generative models to assess uncertainty in exoplanet orbits, medical imaging, and seismology.
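
As a rough sketch of that idea, and not the group’s actual model, the snippet below draws many images that all agree with the sparse measurements and reads the spread across them as a per-pixel uncertainty map; a crude smoothness prior stands in for the trained deep generative model.

```python
# A rough sketch, not the group's model: draw many images that all agree with
# the sparse measurements, then read the spread across samples as uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n = 64
mask = rng.random((n, n)) < 0.10                  # measured spatial frequencies
y, x = np.mgrid[:n, :n]
truth = np.exp(-((np.hypot(x - n / 2, y - n / 2) - 12.0) ** 2) / 4.0)
data = np.fft.fft2(truth) * mask                  # the observations

# Spatial-frequency magnitudes, used to damp the prior toward smooth structure.
k = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij"))

def sample_image():
    """Measured frequencies come from the data; unmeasured ones from a smooth prior."""
    prior = np.fft.fft2(rng.normal(size=(n, n))) * np.exp(-(k / 0.1) ** 2)
    return np.fft.ifft2(data + prior * ~mask).real

samples = np.stack([sample_image() for _ in range(200)])
mean_img = samples.mean(axis=0)                   # the "best guess" image
std_img = samples.std(axis=0)                     # per-pixel uncertainty map
print(mean_img.shape, float(std_img.mean()))
```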

One area where we are excited to use machine learning is in optimizing sensors for computational imaging. For example, we are currently developing machine learning methods to help identify locations for new telescopes that we add to the EHT. By designing the telescope placement and the image-reconstruction software together, we can squeeze more information out of the data we collect, recovering a higher-fidelity image with less uncertainty. This idea of co-designing computational imaging systems extends beyond the EHT to medical imaging and other domains.
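
One hedged way to picture the sensor-optimization step is a greedy search: add the candidate station that contributes the most new Fourier-plane coverage. The station positions and grid below are made up, and a real co-design would optimize placement jointly with the reconstruction algorithm rather than coverage alone.

```python
# A hedged sketch of the placement idea (not the group's method): pick the new
# station that adds the most previously unmeasured baseline separations.
import itertools
import numpy as np

rng = np.random.default_rng(2)
existing = rng.integers(0, 50, size=(6, 2))                 # hypothetical current stations
candidates = [(int(a), int(b)) for a, b in rng.integers(0, 50, size=(30, 2))]

def coverage(stations):
    """Set of baseline vectors (Fourier-plane samples) the array measures."""
    baselines = set()
    for a, b in itertools.combinations(map(tuple, stations), 2):
        d = (a[0] - b[0], a[1] - b[1])
        baselines.add(d)
        baselines.add((-d[0], -d[1]))                       # conjugate baseline
    return baselines

base = coverage(existing)
best = max(candidates,
           key=lambda c: len(coverage(np.vstack([existing, c])) - base))
print("add a station at grid position", best)
```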

Caltech astronomy professors Gregg Hallinan and Vikram Ravi are leading the DSA-2000 effort, in which 2,000 radio dishes in Nevada will image the entire sky at radio wavelengths. Unlike the EHT, where we have to fill gaps in the data, this project will collect enormous amounts of data: around 5 terabytes per second. All processing steps, such as correlation, calibration, and imaging, must be fast and automated; there is no time to do them with traditional methods. Our group is collaborating to develop deep-learning methods that automatically clean images so users receive them within minutes of data collection.
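
To illustrate what “fast and automated” means in practice, here is a schematic sketch, not the DSA-2000 pipeline: placeholder correlate, calibrate, and image steps run on each chunk as it streams in, and a trivial threshold stands in for the learned cleaning model. Only the 5-terabytes-per-second figure comes from the text above.

```python
# Schematic of a "no human in the loop" imaging pipeline; every step below is a
# placeholder, not the DSA-2000 software.
import numpy as np

rng = np.random.default_rng(3)

BYTES_PER_SECOND = 5e12          # ~5 TB/s, per the article
print(f"one day of raw data ≈ {BYTES_PER_SECOND * 86400 / 1e15:.0f} petabytes")

def correlate(chunk):
    return np.fft.fft2(chunk)                    # placeholder correlator

def calibrate(vis):
    return vis                                   # placeholder calibration

def image(vis):
    return np.abs(np.fft.ifft2(vis))             # naive imaging stand-in

def learned_clean(img):
    """Stand-in for a trained deep-learning cleaner; here, a trivial threshold."""
    return np.where(img > img.mean(), img, 0.0)

for t in range(3):                               # pretend three chunks stream in
    chunk = rng.random((64, 64))                 # stand-in for a raw data chunk
    cleaned = learned_clean(image(calibrate(correlate(chunk))))
    print(f"chunk {t}: cleaned image ready, peak = {cleaned.max():.3f}")
```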

-Katherine Bouman, assistant professor of computing and mathematical sciences, electrical engineering, and astronomy; Rosenberg Scholar; and investigator at the Heritage Medical Research Institute

Astronomy

Astronomer Matthew Graham, project scientist for the Zwicky Transient Facility, explains how AI is changing astrophysics.

The sociological change we’ve seen in astronomy over the last 20 years has to do with big data. The data is too complex, too big, and sometimes coming too fast from the telescope, streaming off the mountain at gigabytes per second. So, we turn to machine learning.

Many people love these new big data sets. Let’s say you’re looking for a one-in-a-million object. Mathematically, in a data set containing a million objects, you’re going to find one such object. In a data set like the Rubin Observatory’s Legacy Survey of Space and Time, with 40 billion objects, you can find 40,000 of those one-in-a-million objects.
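
The scaling he describes is simply an occurrence rate multiplied by a catalog size:

```python
# Back-of-the-envelope version of the argument above.
rate = 1 / 1_000_000            # a "one-in-a-million" class of object
small_survey = 1_000_000        # a million-object data set
lsst_catalog = 40_000_000_000   # ~40 billion objects, per the article
print(small_survey * rate)      # 1.0 -> expect a single example
print(lsst_catalog * rate)      # 40000.0 -> expect ~40,000 of them
```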

I’m interested in active galactic nuclei, where you have a supermassive black hole at the center of a galaxy, and around the black hole is a disk of dust and gas that falls into it and makes it incredibly bright. So, I can go through the data set to try to find those kinds of objects. I have an idea of what patterns I should look for, so I figure out the machine learning approach I want to develop to do that. Or I can simulate what these objects should look like, train my algorithm to find objects like the simulated ones, and then apply it to real data.
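
A minimal sketch of that simulate-then-train workflow, not Graham’s actual pipeline: fake AGN-like variability as a correlated random walk, fake ordinary sources as white noise, fit a tiny classifier to the simulations, and then score new light curves. The features and the model are deliberately simple.

```python
# Train on simulations, apply to (stand-in) real data: a deliberately tiny example.
import numpy as np

rng = np.random.default_rng(4)

def simulate_lightcurve(is_agn, n=200):
    noise = rng.normal(size=n)
    if is_agn:                                   # crude damped-random-walk stand-in
        lc = np.zeros(n)
        for t in range(1, n):
            lc[t] = 0.97 * lc[t - 1] + 0.3 * noise[t]
        return lc
    return 0.3 * noise                           # non-variable source plus noise

def features(lc):
    lag1 = np.corrcoef(lc[:-1], lc[1:])[0, 1]    # lag-1 autocorrelation
    return np.array([np.var(lc), lag1])

# Labeled training set built entirely from simulations.
X = np.array([features(simulate_lightcurve(i % 2 == 0)) for i in range(2000)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(2000)])

# Tiny logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

# "Real" data would be plugged in here; fresh simulations serve as a stand-in.
new = np.array([features(simulate_lightcurve(is_agn)) for is_agn in (True, False)])
print(1 / (1 + np.exp(-(new @ w + b))))          # high score ≈ AGN-like candidate
```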

Today, we are using computers for repetitive tasks that were previously done by undergraduate or graduate students. But we’re moving into areas where machine learning is becoming more sophisticated, where we start saying to the computer, “Tell me what patterns you find here.”

-Matthew Graham, Research Professor of Astronomy

The ocean

Engineer John Dabiri describes how AI enhances ocean monitoring and research.

Only 5 to 10 percent of the ocean’s volume has been explored. Traditional ship-based measurements are expensive. Increasingly, scientists and engineers use underwater robots to survey the ocean floor, look for interesting locations or objects, and study and observe ocean chemistry, ecology, and dynamics.

Our group develops technologies, such as swarms of small autonomous underwater drones and bionic jellyfish, to enable this research.

Drones navigating the ocean can encounter complex currents, and fighting them wastes energy or pulls a drone off course. Instead, we want these robots to take advantage of currents, much as hawks ride thermals in the air to reach great heights.

However, in complex ocean currents, we cannot calculate and control each robot’s trajectory the way we would a spacecraft’s.

While we want robots to explore the deep ocean, especially in swarms, it is nearly impossible to control them with a joystick from the surface, 20,000 feet above. We can’t even feed them the data about local ocean currents they need to navigate, because we can’t detect the robots from the surface. Instead, at some point the drones need to decide for themselves how to proceed.

To help the drones navigate autonomously, we’re giving them AI in the form of deep reinforcement learning networks running on low-power microcontrollers only about one square inch in size. Using data from the drone’s gyroscope and accelerometer, the AI repeatedly computes trajectories. With each attempt, it learns how to help the drone efficiently coast and pulse its way through whatever currents it encounters.
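
As a rough illustration of that idea, and not the group’s code, the sketch below trains a tiny softmax policy, small enough to imagine on a microcontroller, to decide at each time step whether to coast or spend energy on a pulse, given only the kinds of quantities onboard sensors could supply. The simulated current, rewards, and network size are all invented.

```python
# A toy policy-gradient (REINFORCE-style) sketch of coast-vs-pulse navigation.
import numpy as np

rng = np.random.default_rng(5)
GOAL = 10.0                                          # invented transit distance

def run_episode(w, b):
    """One simulated transit; returns total reward and policy-gradient terms."""
    x, grads_w, grads_b, total = 0.0, [], [], 0.0
    for t in range(60):
        current = 0.5 * np.sin(0.2 * t)              # ambient flow at this moment
        state = np.array([GOAL - x, current])        # what the drone can sense/estimate
        logits = state @ w + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(2, p=p)                       # 0 = coast (free), 1 = pulse (costly)
        x += current + (1.0 if a == 1 else 0.0)
        total += -0.2 * a                            # energy cost of pulsing
        onehot = np.eye(2)[a]
        grads_w.append(np.outer(state, onehot - p))  # d log pi / d w for a softmax policy
        grads_b.append(onehot - p)
        if x >= GOAL:
            total += 5.0                             # bonus for completing the transit
            break
    return total, grads_w, grads_b

w, b, baseline = np.zeros((2, 2)), np.zeros(2), 0.0
for episode in range(300):                           # REINFORCE-style updates
    R, gw, gb = run_episode(w, b)
    baseline = 0.95 * baseline + 0.05 * R
    w += 0.01 * (R - baseline) * sum(gw)
    b += 0.01 * (R - baseline) * sum(gb)
print("trained policy weights:", w.round(2))
```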

—John Dabiri (MS ’03, PhD ’05), Centennial Professor of Aeronautics and Mechanical Engineering

Earthquakes

Seismologist Zachary Ross explains how machine learning helps in earthquake monitoring.

The word “earthquake” makes you think of the greatest moment of shaking. But tremors precede and follow that moment. To understand the whole process, you have to analyze all of the earthquake signals to see the collective behavior of the shocks. The more tremors and earthquakes you see, the more they illuminate the web of fault structures within the Earth that are responsible for earthquakes.

Monitoring earthquake signals is challenging. The Southern California Seismic Network gives seismologists more data than anyone could look at; it’s too much for us to keep up with manually. And we have no easy way to distinguish earthquake tremors from nuisance signals, such as those from loud noise events or large trucks.

Students in the Seismo Lab used to spend considerable time measuring the properties of seismic waves. The novelty wears off after five minutes, and it’s not fun. These repetitive tasks are obstacles to real scientific analysis; we would rather our students spend their time thinking about what is new.

Now, AI helps us identify the signals that interest us. First, we train machine learning algorithms to detect various types of signals in carefully hand-annotated data. We then apply the model to new incoming data. The model makes the kinds of decisions a seismologist would make.
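
A minimal sketch of that train-then-deploy workflow, not the Seismo Lab’s model: build a labeled set of waveform snippets the way a seismologist would annotate them, extract simple features, learn a rule, and score new windows as they arrive. Synthetic waveforms and a nearest-centroid rule stand in for real data and the deep network.

```python
# Learn from hand-labeled waveform snippets, then score new incoming windows.
import numpy as np

rng = np.random.default_rng(6)
FS = 100                                           # samples per second (assumed)

def synth(kind, n=400):
    t = np.arange(n) / FS
    if kind == "quake":                            # decaying 5 Hz wave packet
        return np.exp(-2 * t) * np.sin(2 * np.pi * 5 * t) + 0.05 * rng.normal(size=n)
    return 0.3 * rng.normal(size=n)                # truck/bang-like broadband noise

def features(x):
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / FS)
    low = spec[freqs < 10].sum() / spec.sum()      # fraction of energy below 10 Hz
    return np.array([low, x.std()])

# "Hand-annotated" training set: labels a seismologist would have assigned.
train = [("quake", synth("quake")) for _ in range(50)]
train += [("noise", synth("noise")) for _ in range(50)]
centroids = {lab: np.mean([features(x) for l, x in train if l == lab], axis=0)
             for lab in ("quake", "noise")}

def classify(window):
    f = features(window)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

# Apply the trained rule to new, unlabeled incoming data.
for kind in ("quake", "noise"):
    print(kind, "->", classify(synth(kind)))
```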

—Zachary Ross, Assistant Professor of Geophysics, and William H. Hurt Scholar

Source: Caltech