Earthquake statistics follow an approximate scaling law – the famous Gutenberg-Richter law – which states that the number of earthquakes with magnitude larger than some value M falls off exponentially with M, as log10 N(≥M) = a − bM; the slope b is equivalent to a power-law exponent for the underlying distribution of released energy. The value of b can be estimated from recorded data in earthquake catalogues. However, any earthquake catalogue is incomplete, as instruments have limited sensitivity. Also, earthquakes occurring in the aftermath of a main shock may be impossible to detect, as they are masked by the signal of the larger shock. This limitation is reflected in a parameter known as the “completeness magnitude” Mc, which also shows spatiotemporal heterogeneity – the record of earthquakes in some places and times may be significantly more complete than in others. Mc determines how accurately the statistics of the earthquake process can be known.
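To make the role of b and Mc concrete, here is a minimal sketch of the standard Aki/Utsu maximum-likelihood estimator for the b-value, applied only to events at or above an assumed completeness magnitude. This is a textbook technique, not code from the paper discussed here; the function name and the binning correction `dm` are illustrative choices.

```python
import math

def estimate_b_value(magnitudes, mc, dm=0.0):
    """Aki/Utsu maximum-likelihood estimate of the Gutenberg-Richter b-value.

    Only events with magnitude >= mc (an assumed completeness magnitude)
    are used; dm is the catalogue's magnitude binning width, and the
    dm/2 shift is Utsu's correction for binned magnitudes (use dm=0
    for continuous magnitudes).
    """
    complete = [m for m in magnitudes if m >= mc]
    if not complete:
        raise ValueError("no events at or above mc")
    mean_mag = sum(complete) / len(complete)
    # b = log10(e) / (mean magnitude - (mc - dm/2))
    return math.log10(math.e) / (mean_mag - (mc - dm / 2.0))
```

Because the estimator divides by the mean excess magnitude above Mc, choosing Mc too low (including undetected small events) biases b downward – which is why the completeness magnitude matters so much in practice.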
In a recent paper, LML Fellow Jiangcang Zhuang and colleagues tested and compared five distinct methods widely used to estimate Mc. Using catalogues of observed earthquake properties, they assessed the performance of these five algorithms under difficult conditions, such as a small number of events and strong spatiotemporal heterogeneity, examining how stable each algorithm was and how well it agreed with known data. Overall, the researchers conclude that no single algorithm is superior; the suitability of each depends on circumstances, and they offer suggestions for when each might be most appropriately used. As they note, similar work has been done using synthetic catalogues, and the present study provides further results by testing on data from real catalogues and from different seismic networks.
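For a flavour of what such algorithms look like, here is a sketch of one of the simplest and most widely used approaches, the maximum-curvature (MAXC) method: take the most populated magnitude bin of the catalogue as the completeness estimate, often with a small empirical upward correction. This is a generic illustration of the technique, not the paper's implementation, and the default correction of +0.2 is one common convention.

```python
from collections import Counter

def mc_maxc(magnitudes, dm=0.1, correction=0.2):
    """Maximum-curvature (MAXC) estimate of the completeness magnitude.

    Bins magnitudes with width dm, takes the most populated bin
    (where the frequency-magnitude curve bends away from the
    Gutenberg-Richter line), and adds an empirical correction,
    since MAXC alone tends to underestimate Mc.
    """
    bins = Counter(round(m / dm) * dm for m in magnitudes)
    peak = max(bins, key=bins.get)
    return round(peak + correction, 10)
```

MAXC is fast and stable for small catalogues, but it can fail when heterogeneity smears out the histogram peak – exactly the kind of trade-off the comparison study is concerned with.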
The paper is available at https://agupubs.onlinelibrary.wiley.com/doi/full/10.26464/epp2018015