When physicists combine multiple gravitational wave events (such as colliding black holes) to test Albert Einstein’s theory of general relativity, small modeling errors can compound more quickly than previously thought, contend researchers at the University of Birmingham in the UK.
The results, reported on June 16 in the journal iScience, suggest that catalogs of as few as 10 to 30 events with a signal-to-noise ratio of 20 (typical for the events used in this type of test) could yield false departures from general relativity, pointing to new physics when none actually exists.
Because this is close to the size of the catalogs currently used to test Einstein’s theory, the authors conclude that physicists should exercise caution when performing similar analyses.
“Testing general relativity with catalogs of gravitational wave events is a very new area of research,” says Christopher J. Moore, a lecturer at the School of Physics and Astronomy & Institute for Gravitational Wave Astronomy at the University of Birmingham in the United Kingdom and the lead author of the study.
“This is one of the first studies to look in detail at the importance of theoretical model errors in this new type of test. While it is well known that errors in theoretical models need to be treated carefully when you are trying to test a theory, we were surprised by how quickly small model errors can accumulate when you start combining events together in catalogs.”
In 1916, Einstein published his theory of general relativity, which explains how massive celestial objects warp the interwoven fabric of space and time, giving rise to gravity. The theory predicts that cataclysmic events such as black hole collisions disrupt space-time so strongly that they generate gravitational waves, which ripple outward through space at the speed of light.
Instruments such as LIGO and Virgo have now detected gravitational wave signals from dozens of merging black holes, which researchers have been using to put Einstein’s theory to the test. So far, it has always passed. To push the theory even further, physicists are now testing it on catalogs of multiple grouped gravitational wave events.
“When I got interested in gravitational wave research, one of the main attractions was the possibility to do new and more stringent tests of general relativity,” says Riccardo Buscicchio, a PhD student at the School of Physics and Astronomy & Institute for Gravitational Wave Astronomy and a co-author of the study.
“The theory is fantastic and has already passed a hugely impressive array of other tests. But we know from other areas of physics that it can’t be completely correct. Trying to find exactly where it fails is one of the most important questions in physics.”
Larger gravitational wave catalogs may help scientists edge closer to an answer in the near future, but they also increase the risk of error. Because waveform models inevitably involve approximations, simplifications, and modeling errors, a model that is highly accurate for an individual event can still be misleading when applied to a large catalog.
To determine how waveform errors grow as catalog size increases, Moore and colleagues performed large numbers of test calculations on simplified, linearized mock catalogs, drawing a signal-to-noise ratio, a waveform mismatch, and a model error alignment angle for each gravitational wave event.
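The study’s calculations are not reproduced here, but a minimal Python sketch conveys the general idea. Everything in it is illustrative: the function name, the heuristic tying the bias to the waveform mismatch, and all parameter values are our assumptions, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(42)

def mock_catalog_test(n_events, snr=20.0, mismatch=1e-3, coherent=True):
    """Toy linearized version of a stacked test of general relativity.

    Each mock event measures a shared deviation parameter whose true
    value is 0 (i.e., general relativity is correct). The statistical
    error per event scales as 1/SNR; the systematic bias from waveform
    model error is set by a heuristic ~ sqrt(2 * mismatch). Returns the
    combined deviation in units of its standard error ("sigmas").
    """
    sigma = 1.0 / snr                      # per-event statistical error
    bias_scale = np.sqrt(2.0 * mismatch)   # heuristic bias from mismatch
    if coherent:
        # Model error pushes every event in the same direction.
        biases = np.full(n_events, bias_scale)
    else:
        # Model error has a random alignment for each event.
        biases = bias_scale * rng.choice([-1.0, 1.0], size=n_events)
    estimates = biases + sigma * rng.standard_normal(n_events)
    combined = estimates.mean()            # stack the catalog
    combined_sigma = sigma / np.sqrt(n_events)
    return combined / combined_sigma

for n in (1, 10, 30, 100):
    print(f"N={n:3d}  coherent: {mock_catalog_test(n, coherent=True):5.1f} sigma"
          f"  random: {mock_catalog_test(n, coherent=False):5.1f} sigma")
```

With these illustrative numbers, the coherent case drifts past several sigma by a few dozen events, even though general relativity is true in every mock signal, while the randomly aligned case hovers near one sigma.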
The researchers found that how quickly modeling errors accrue depends on how waveform modeling errors are distributed across the events in a catalog, on whether any deviation takes the same value for each event, and on whether the modeling errors tend to average out across many different events.
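A rough back-of-the-envelope argument (ours, not a derivation from the paper) shows why this coherence matters. If $N$ events each measure a common deviation parameter with statistical error $\sigma_1$ and systematic bias $b$, combining them shrinks the statistical error to $\sigma_N = \sigma_1/\sqrt{N}$, so

\[
\frac{b}{\sigma_N} = \sqrt{N}\,\frac{b}{\sigma_1} \quad \text{(coherent bias)}
\qquad \text{versus} \qquad
\frac{\bar{b}_N}{\sigma_N} \sim \frac{b}{\sigma_1} \quad \text{(randomly aligned bias)}.
\]

In the coherent case, the apparent significance of a spurious deviation grows like $\sqrt{N}$, which is why even small model errors can eventually mimic a violation of general relativity once enough events are stacked.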
“The next step will be for us to find ways to target these specific cases using more realistic but also more computationally expensive models,” says Moore. “If we are ever to have confidence in the results of such tests, we must first have as good an understanding as possible of the errors in our models.”
This work was supported by a European Union H2020 ERC Starting Grant, the Leverhulme Trust, and the Royal Society.