Researchers explore historical Hubble datasets to understand distant exoplanets (Image: ESA/Hubble, N. Bartmann)

GPU-based supercomputer helps astronomers simulate exoplanet atmospheres

An international team of astronomers has analysed archival observations of 25 “hot Jupiter” exoplanets – bodies that orbit stars far beyond our solar system – collected by the Nasa/ESA Hubble Space Telescope, using GPU-based acceleration from Nvidia.

The researchers have now published a paper on the 25 exoplanets, based on what is believed to be the most data ever employed in a survey of such bodies – 1,000 hours of archival observations, mainly from the Hubble Space Telescope.

They focused their study on hot Jupiters, the largest and therefore easiest-to-detect exoplanets, many sweltering in temperatures of over 3,000°F (about 1,650°C). Their analysis of these torrid atmospheres used high-performance computing with Nvidia GPUs to advance understanding of all planets, including Earth.

Lead author Quentin Changeat said: “Hubble enabled the in-depth characterisation of 25 exoplanets, and the amount of information we learned about their chemistry and formation – thanks to a decade of intense observing campaigns – is incredible.”

The study’s co-leader, Billy Edwards of UCL and the Commissariat à l'énergie atomique et aux énergies alternatives (CEA), said: “Our paper marks a turning point for the field. We are now moving from the characterisation of individual exoplanet atmospheres to the characterisation of atmospheric populations.”

According to Changeat, the most fascinating part of the process was determining which small set of models to run in a consistent way against data from all 25 exoplanets to get the most reliable and revealing results.

“There was an amazing period of exploration – I was finding all kinds of sometimes weird solutions – but it was really fast to get the answers using Nvidia GPUs,” he said. Each of about 20 models had to run 250,000 times for all 25 exoplanets.
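The scale of that computation is worth a back-of-the-envelope check. The article's phrasing supports two readings of the run counts, so both are shown here; this is a sketch for scale, not a figure from the paper:

```python
# Workload implied by the article: ~20 models, 250,000 runs each, 25 planets.
# The phrasing is ambiguous about whether 250,000 runs covers each planet or
# all of them, so both readings are shown.
models, runs, planets = 20, 250_000, 25
print(f"{models * runs:,} evaluations if 250,000 runs span all 25 planets")   # 5,000,000
print(f"{models * runs * planets:,} evaluations if 250,000 runs per planet")  # 125,000,000
```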

The processing was run on the Wilkes3 supercomputer at the University of Cambridge, which uses 320 Nvidia A100 Tensor Core GPUs on an Nvidia Quantum InfiniBand network.

Each node on Wilkes3 is configured with four A100s which, according to Nvidia, is the equivalent of up to 25,600 CPU cores. In a blog post, Nvidia claimed that a single A100 GPU offers a 200x performance boost compared with a CPU, and that with 32 processes on each GPU, the team got the equivalent of a 6,400x speedup.
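The two figures quoted above are consistent with each other. A quick check of the arithmetic behind the vendor's claims:

```python
# How Nvidia's quoted figures fit together (vendor claims, not measurements)
per_gpu = 200 * 32        # 200x per A100 over a CPU, times 32 processes = 6,400x per GPU
per_node = per_gpu * 4    # four A100s per Wilkes3 node = 25,600 "CPU core" equivalents
print(per_gpu, per_node)  # 6400 25600
```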

The software running on the Nvidia GPUs simulates how hundreds of thousands of light wavelengths would travel through an exoplanet’s atmosphere.
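As a rough illustration of what such a forward model does – a minimal NumPy sketch with made-up physics and numbers, not the team's actual code – each wavelength bin gets an absorption strength, which sets how large the planet appears in silhouette at that wavelength:

```python
import numpy as np

n_bins = 200_000                                    # "hundreds of thousands" of wavelengths
wavelengths = np.linspace(0.5, 5.0, n_bins)         # microns, illustrative range
sigma = 1e-28 * (1 + np.sin(8 * wavelengths) ** 2)  # fake absorption cross-section, m^2
tau = sigma * 1e26                                  # optical depth for a fake column density

r_planet, r_star, scale_height = 7e7, 7e8, 5e5      # metres, hot-Jupiter-ish values
effective_radius = r_planet + scale_height * tau    # absorbing atmosphere inflates the silhouette
transit_depth = (effective_radius / r_star) ** 2    # fraction of starlight blocked, per bin
```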

“I expected the A100s might be double the performance of V100s and P100s I used previously, but honestly it was like an order of magnitude difference,” said Ahmed Al-Refaie, a co-author of the paper and head of numerical methods at the UCL Centre for Space Exochemistry Data.

Al-Refaie used Nvidia’s CUDA profilers to optimise jobs, PyCUDA to optimise the team’s code, and cuBLAS to speed up some of the maths routines.
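A minimal sketch of what the PyCUDA side of such a pipeline can look like – an elementwise kernel that computes optical depths across all wavelength bins in parallel on the GPU. The kernel and its physics are illustrative assumptions, not the team's code:

```python
import numpy as np
import pycuda.autoinit                        # creates a CUDA context on import
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# Hypothetical Beer-Lambert kernel: optical depth per wavelength bin on the GPU
extinction = ElementwiseKernel(
    "float *tau, float *sigma, float column",
    "tau[i] = sigma[i] * column",
    "extinction",
)

sigma = gpuarray.to_gpu(np.random.rand(200_000).astype(np.float32))  # cross-sections
tau = gpuarray.empty_like(sigma)
extinction(tau, sigma, np.float32(1e26))      # runs across all bins in parallel
transmission = np.exp(-tau.get())             # copy back to host, convert to transmittance
```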

According to Nvidia, the main bottleneck in the system was not the GPU-based simulation, but the CPU-based system that handled the task of determining statistically where in the dataset to explore next.
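That division of labour – a CPU-side sampler proposing which parameters to try next while the GPU evaluates each proposal – is a common pattern in atmospheric retrievals. A generic sketch, using plain Metropolis-Hastings as a stand-in for the team's actual sampler and a placeholder likelihood:

```python
import numpy as np

def gpu_log_likelihood(theta):
    # Placeholder: in a real retrieval this would launch the GPU forward model
    # and compare the simulated spectrum with the Hubble observations
    return -0.5 * np.sum(theta ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(5)                           # current atmospheric parameters
log_p = gpu_log_likelihood(theta)
for _ in range(10_000):
    proposal = theta + 0.1 * rng.standard_normal(theta.size)  # CPU decides where to explore
    log_p_new = gpu_log_likelihood(proposal)                  # GPU does the expensive part
    if np.log(rng.random()) < log_p_new - log_p:              # accept/reject back on the CPU
        theta, log_p = proposal, log_p_new
```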
