Parallelized inference for gravitational-wave astronomy

Colm Talbot, Rory Smith, Eric Thrane, Gregory B. Poole

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

Bayesian inference is the workhorse of gravitational-wave astronomy, for example, determining the masses and spins of merging black holes, revealing the neutron star equation of state, and unveiling the population properties of compact binaries. The science enabled by these inferences comes with a computational cost that can limit the questions we are able to answer, and this cost is expected to grow. As detectors improve, the detection rate will go up, allowing less time to analyze each event. Improved low-frequency sensitivity will yield longer signals, increasing the number of computations per event. The growing number of entries in the transient catalog will drive up the cost of population studies. While Bayesian inference calculations are not entirely parallelizable, key components are embarrassingly parallel: calculating the gravitational waveform and evaluating the likelihood function. Graphics processing units (GPUs) are adept at such parallel calculations. We report on progress porting gravitational-wave inference calculations to GPUs. Using a single code, which takes advantage of GPU architecture if it is available, we compare computation times using modern GPUs (NVIDIA P100) and CPUs (Intel Gold 6140). We demonstrate speed-ups of ∼50× for compact binary coalescence gravitational waveform generation and likelihood evaluation, and more than 100× for population inference within the lifetime of current detectors. Further improvement is likely with continued development. Our Python-based code is publicly available and can be used without familiarity with CUDA, the underlying parallel computing platform.
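To make the parallelization strategy concrete, the following is a minimal sketch, not the authors' released implementation, of the two ideas the abstract highlights. First, a single code path: CuPy is imported as a drop-in replacement for NumPy when a CUDA device is available, so the same source runs on either a GPU or a CPU. Second, the embarrassingly parallel likelihoods: the standard frequency-domain Gaussian (Whittle) log-likelihood for one event is pure element-wise arithmetic over frequency bins, and a reweighting-style population log-likelihood is a single reduction over an events-by-samples weight matrix. All function and argument names here (whittle_log_likelihood, strain_data, template, psd, weights) are illustrative assumptions, not the package's API.

# Single code path: use CuPy (GPU) if present, otherwise fall back to NumPy (CPU).
try:
    import cupy as xp
    xp.cuda.runtime.getDeviceCount()  # raises if no CUDA device is visible
except Exception:  # ImportError, or a CUDA runtime error when no GPU exists
    import numpy as xp


def whittle_log_likelihood(strain_data, template, psd, duration):
    """Frequency-domain Gaussian (Whittle) log-likelihood, up to a constant.

    `strain_data` and `template` are complex one-sided frequency series,
    `psd` is the real noise power spectral density on the same bins, and
    `duration` is the segment length in seconds. Every operation is
    element-wise across frequency bins, so it parallelizes trivially.
    """
    residual = strain_data - template
    # <d - h, d - h> inner product weighted by the noise power spectral density
    inner_product = 4.0 / duration * xp.sum(xp.abs(residual) ** 2 / psd)
    return float(-0.5 * inner_product)


def population_log_likelihood(weights):
    """Hierarchical log-likelihood from reweighted posterior samples.

    `weights[i, k]` is the ratio of the population model to the sampling
    prior for posterior sample k of event i; the per-event Monte Carlo
    averages and the sum over events form one reduction over a 2D array.
    """
    return float(xp.sum(xp.log(xp.mean(weights, axis=1))))

Because xp binds to whichever library imported successfully, the arrays, and hence all of the likelihood arithmetic, live on the GPU when one is present with no CUDA-specific code in sight. This is the property that lets a single code base be benchmarked on both a P100 GPU and a CPU, as the abstract describes.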

Original language: English
Article number: 043030
Number of pages: 7
Journal: Physical Review D
Volume: 100
Issue number: 4
DOIs: 10.1103/PhysRevD.100.043030
Publication status: Published - 28 Aug 2019

Cite this

@article{08fe550eacf24bcb8e74d21ac6dd7a93,
title = "Parallelized inference for gravitational-wave astronomy",
author = "Talbot, Colm and Smith, Rory and Thrane, Eric and Poole, {Gregory B.}",
year = "2019",
month = "8",
day = "28",
doi = "10.1103/PhysRevD.100.043030",
language = "English",
volume = "100",
journal = "Physical Review D",
issn = "2470-0010",
publisher = "American Physical Society",
number = "4",
}

Parallelized inference for gravitational-wave astronomy. / Talbot, Colm; Smith, Rory; Thrane, Eric; Poole, Gregory B.

In: Physical Review D, Vol. 100, No. 4, 043030, 28.08.2019.

Research output: Contribution to journal › Article › Research › peer-review

TY - JOUR

T1 - Parallelized inference for gravitational-wave astronomy

AU - Talbot, Colm

AU - Smith, Rory

AU - Thrane, Eric

AU - Poole, Gregory B.

PY - 2019/8/28

Y1 - 2019/8/28

UR - http://www.scopus.com/inward/record.url?scp=85072213186&partnerID=8YFLogxK

U2 - 10.1103/PhysRevD.100.043030

DO - 10.1103/PhysRevD.100.043030

M3 - Article

AN - SCOPUS:85072213186

VL - 100

JO - Physical Review D

JF - Physical Review D

SN - 2470-0010

IS - 4

M1 - 043030

ER -