
A team of scientists and astronomers from the Universities Space Research Association (USRA) in Columbia, Maryland, the SETI Institute and NASA announced the discovery of 69 new exoplanets in May.
While the discovery alone counts as a groundbreaking achievement, the way the researchers arrived at it may prove even more significant.
The findings were validated using machine learning, demonstrating the technique's ability to cut through a mountain of data and find the needles in the haystack. The new method could have wide-reaching applications in other areas of astronomy, reducing the time it takes to process data and freeing scarce scientists to focus on more productive research.
“One of the big discoveries from this effort is that we did not even know that there were so many exoplanets,” said Hamed Valizadegan, a machine learning scientist at USRA and lead author of a paper describing the methods and findings. “As we collect more and more of them, we are hoping we can answer some of the big questions we have, like are we alone. That’s a very important question.”
Efficient model
Machine learning was already in play: a deep neural network called ExoMiner validated 301 new exoplanets in 2021 using data from the Kepler Space Telescope. That method looked for periodic reductions in the amount of light coming from a given star that could be caused by planets transiting it in their orbits.
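For readers curious what that kind of search looks like in practice, here is a minimal, illustrative sketch in Python. It simulates a noisy light curve, injects a periodic dip and then folds the data at trial periods to look for it. Every value in it (the period, depth, noise level and the simple binned search) is an assumption chosen for illustration; this is not ExoMiner or NASA's actual pipeline.

```python
# Toy transit search, for illustration only. The data are simulated and all
# values (period, depth, noise level) are assumptions; this is not ExoMiner
# or the Kepler pipeline.
import numpy as np

rng = np.random.default_rng(0)

# 90 days of normalized brightness measurements, one every 30 minutes.
time = np.arange(0.0, 90.0, 0.5 / 24.0)
flux = 1.0 + rng.normal(0.0, 2e-4, time.size)

# Inject a transit-like signal: a 0.1% dip lasting 3 hours every 7.3 days.
true_period, depth, duration = 7.3, 1e-3, 3.0 / 24.0
flux[(time % true_period) < duration] -= depth

def dip_strength(period, nbins=48):
    """Fold the light curve at a trial period and report how far the
    deepest phase bin sits below the typical bin."""
    phase = time % period
    idx = np.minimum((phase / period * nbins).astype(int), nbins - 1)
    sums = np.bincount(idx, weights=flux, minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    means = sums[counts > 0] / counts[counts > 0]
    return np.median(means) - means.min()

trial_periods = np.linspace(1.0, 20.0, 2000)
scores = [dip_strength(p) for p in trial_periods]
best = trial_periods[int(np.argmax(scores))]
print(f"strongest periodic dip near {best:.2f} days (injected: {true_period} days)")
```

Broadly speaking, a classifier like ExoMiner is then handed folded light curves and diagnostic statistics of this kind and judges whether the dip looks planetary.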
That earlier system, however, could not differentiate between previously confirmed planets and potential new planets, and it produced false-positive signals. Nor could it rule out false positives attributable to other sources, such as eclipsing binary stars.
Using a concept called multiplicity, which rests on the probabilistic argument that real planets tend to cluster in systems while false positives do not, astronomers aimed to boost confidence that a new transit signal around a star already known to host planets indicates a genuine discovery.
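The intuition can be sketched as a simple odds calculation: false positives scatter roughly at random across stars, so a signal around a star that already hosts planets deserves a boost in its odds of being real. The snippet below, with entirely made-up numbers, is meant only to illustrate that arithmetic; it is not the statistical framework used in the paper.

```python
# Toy "multiplicity boost" arithmetic, for illustration only. The prior and
# boost factors below are made-up assumptions, not values from the paper.

def posterior_planet_prob(prior_prob, multiplicity_boost):
    """Update the probability that a transit signal is a real planet by
    multiplying its odds by a boost factor for stars with known planets."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * multiplicity_boost
    return posterior_odds / (1.0 + posterior_odds)

# A signal judged 90% likely to be a planet on its own becomes much more
# convincing if its host star already has confirmed planets.
for boost in (1, 10, 30):
    print(f"boost x{boost}: {posterior_planet_prob(0.90, boost):.4f}")
```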
“The idea is that machine learning is good at finding patterns, and doesn’t get tired and make mistakes like humans,” Valizadegan said. “It’s very effective and efficient, and has saved us a lot of time.”
Traditionally, scientists who discovered transit signals had to book telescope time to observe the star and look for any sources of false positives, then assemble plots and charts and write a paper to convince their peers that a new exoplanet had been discovered.
“They had to do this one at a time for each transit event, which is a very time-consuming process,” Valizadegan explained. “This is faster and more effective, and we will see in the next few years if it’s going to be more reliable.”
Hard sell
Astronomers are still trying to answer some basic questions about how the universe and our own solar system evolved.
“These (exoplanet) discoveries help us better understand planets and solar systems beyond our own, and what makes ours so unique,” said Jon Jenkins, an exoplanet scientist at NASA’s Ames Research Center.
Aside from searching for exoplanets, machine learning is being applied to spectroscopy and James Webb Space Telescope data. Scientists are now working to adapt the model to the noisier, more challenging data collected by the Transiting Exoplanet Survey Satellite (TESS).
When he started using machine learning techniques for specific tasks tied to the Kepler mission, it became clear to Valizadegan that many established researchers weren’t familiar with the tools and relied more on traditional statistical methods.
“The younger generation is using these tools more and more,” he said. “The previous generation is beginning to gain more confidence in machine learning, but they don’t trust it yet.”
Part of the purpose of his paper is to discuss why using machine learning is valid and makes sense.
“I investigated a lot of charts and plots, and the more I looked, the more confidence I gained that it could be trusted,” Valizadegan said. “If the model is confident for 99% of the exoplanets it finds, we are good. Even if three of the 370 exoplanets we’ve found using this technology are false positives, we’ll be able to say the model did it correctly.”
Future imperative
At this point, what machine learning is doing is not science.
“It’s processing data and making it ready for scientists to study,” Valizadegan said, but he thinks it will eventually become commonplace by necessity.
“We are collecting huge amounts of data using existing telescopes, and the volume that will be collected by future telescopes is growing exponentially,” he said. “We cannot manage to process this data manually. We need to invest a lot more in this technology and get more bright people working in this area.”
Unfortunately, USRA, NASA and other science-oriented research agencies have a hard time competing with industry for machine learning and artificial intelligence specialists, because industry offers more money.
“The attraction here is that we are increasing the knowledge we have of the universe, and these are important questions we are answering,” Valizadegan said. “I hope more investment comes into this field for us. In any case, there is no other way and it’s going to be the future of scientific research.”
-30-
Caption:
A variety of exoplanet possibilities are shown in this illustration. Image credit: NASA