Jiří Vackář, Jan Burjánek, František Gallovič, Jiří Zahradník, John Clinton, Bayesian ISOLA: new tool for automated centroid moment tensor inversion, Geophysical Journal International, Volume 210, Issue 2, August 2017, Pages 693–705, https://doi.org/10.1093/gji/ggx158
Abstract
We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components affected by various instrumental disturbances are rejected, and full-waveform inversion in a space–time grid around a provided hypocentre. A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequency ranges. The method is tested on synthetic and observed data. It is applied to a data set from the Swiss seismic network and the results are compared with the existing high-quality MT catalogue. The software package, programmed in Python, is designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow or to processing of large pre-existing earthquake catalogues and data sets.
1 INTRODUCTION
Focal mechanisms of earthquakes are important for understanding the physics of individual seismic events and for studying the seismotectonics of a region; they also serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We then have to find the six components of the moment tensor, the three space coordinates of the centroid, and the centroid time; together these constitute the centroid moment tensor (CMT) solution.
Many methods have been proposed for CMT solutions (e.g. Fukuyama & Dreger 2000; Pondrelli et al. 2002; Bernardi et al. 2004; Liu et al. 2004; Rueda & Mézcua 2005; Scognamiglio et al. 2009; Cesca et al. 2010). They differ in their ability to provide automatic solutions and in the degree to which they consider uncertainty. In this paper, we develop a method suitable for highly automated processing of individual events that can handle both real-time and offline processing. We follow the Bayesian approach to treat the uncertainty in the inversion rigorously.
In general, CMT determination using broad-band waveforms is a nonlinear inverse problem. Alternatively, a linear inverse problem can be solved just for the MT components, assuming that the centroid position, centroid time, and source time function are known, using, for example, hypocentre parameters from an earthquake catalogue. The applicability of the latter simplification is highly limited, hence it is not applied in this paper. In this work, we calculate the centroid position and time on a regular grid, while the MT is obtained at every space–time grid point by the linear least-squares method, that is, analytically with minimal computational cost. The solutions are then combined in a Bayesian framework. This hybrid approach enables solving the nonlinear inverse problem by a grid search in a four-dimensional parameter space. Moreover, in the grid search one has full control over the density of the posterior sampling.
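To make the linear sub-problem concrete, the following minimal sketch solves for the six MT components at a fixed centroid position and time by ordinary least squares. The Green's function matrix `G` and the data vector `d_obs` are hypothetical placeholders, not the actual quantities produced by the software described later.

```python
import numpy as np

# Hypothetical dimensions: waveform samples stacked over all station
# components versus the 6 elementary moment-tensor components.
n_samples = 3000
rng = np.random.default_rng(0)
G = rng.standard_normal((n_samples, 6))               # placeholder Green's functions
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.1, 0.0])   # placeholder MT components
d_obs = G @ m_true + 0.05 * rng.standard_normal(n_samples)  # "data" with noise

# Linear least-squares estimate of the six MT components for a fixed
# centroid position and time (the linear sub-problem mentioned above).
m_hat, residuals, rank, sv = np.linalg.lstsq(G, d_obs, rcond=None)
print(m_hat)
```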
A CMT determination should be accompanied by an assessment of its uncertainty. If the uncertainty is estimated correctly, it defines a region in the model space comprising the true solution with a certain probability. The sources of uncertainty in an MT determination are: signals in the waveforms generated by all sources other than the earthquake of interest (seismic noise), lack of knowledge of the Earth structure (Green's function modelling error), the limited number of seismic recordings, measuring errors and possible instrument malfunction (Vackář et al. 2015), and the assumptions made in the mathematical description of the earthquake source (point source approximation, fixed source time function, etc.). If any of these aspects is ignored, the uncertainty evaluation may be biased.
Existing methods for estimating the uncertainty in the CMT reflect many of these aspects. They include, for example, construction of confidence regions by global search algorithms (Šílený 1998), including bootstrap and jackknifing techniques (Tichelaar & Ruff 1989), calculation of various stability measures (Sokos & Zahradník 2013), adding noise to real data (Vavryčuk 2011), and using the covariance matrix of the data, which is discussed below.
The uncertainty of the MT in the linear case, assuming a Gaussian error distribution, can be described by a 6-D error ellipsoid (Zahradník & Custódio 2012). Křížová et al. (2013) show how to extend this approach to a nonlinear inversion including source depth and time. Particularly important are the methods developed in the general framework of the Bayesian formulation, where the solution of the inverse problem is given by the posterior probability density function (PPDF) of the model parameters (Tarantola 2005; Minson et al. 2013). Examples of application of the Bayesian approach to seismic source inversion include Wéber (2006) for the local CMT and its source time function, Dettmer et al. (2007) for sound velocity profiles, Duputel et al. (2012) for the CMT from the W-phase, and Dettmer et al. (2014) for finite fault inversions. Stähler & Sigloch (2014, 2016) published a fully Bayesian framework for inversion of earthquake depth, MT and source time function, including a novel misfit criterion called signal decorrelation.
A key role in the Bayesian inversion is played by the data and modelling covariance matrices, which quantify the noise characteristics as well as measurement and theory errors. For example, Duputel et al. (2012) reflected in the covariance matrix the centroid location uncertainty (theory error) and the effect of an oversampled record by describing the correlation of samples as an exponentially decaying function. Wéber (2006) generated synthetic seismograms for several reasonable velocity models and used their variance as an estimate of the modelling error. Dettmer et al. (2007) estimated the covariance matrix iteratively from data residuals. Bodin et al. (2012) and Dettmer et al. (2012) used hierarchical estimation of the covariance matrix, where the level of noise is considered as an unknown hyperparameter. Mustać & Tkalčić (2016) used the hierarchical Bayesian approach with a data covariance matrix based on the before-event-noise correlation function. Hallo & Gallovič (2016) developed a method to reflect the uncertainty of the velocity model in the covariance matrix.
We follow an approach similar to Duputel et al. (2012) and Mustać & Tkalčić (2016), but contrary to them, we include the estimated (unmodified) noise covariance function directly in the covariance matrix, making the procedure simpler. The data covariance matrix then serves as an automated station weighting according to the noise level, and it can also reflect the directionality of the seismic noise and its correlation between components of a seismic station. Mustać & Tkalčić (2016) used a Markov chain Monte Carlo method to sample the PPDF, while we use a combination of a grid search with an analytical solution, which makes the procedure more efficient.
The aim of the paper is to describe a Bayesian method for the CMT solution and to introduce an open-source code, ISOLA-ObsPy, that solves the problem automatically with relatively low computational costs. Here we assume that the seismic noise is the most important source of errors. Typically, this is the case encountered in low-frequency inversions of data with low signal-to-noise ratios. Otherwise, the Green's function modelling uncertainties as well as other sources of errors can be incorporated into the covariance matrix, but this is beyond the scope of this paper. For simplicity, we also assume that the moment-rate time function is known; for example, a delta function is a suitable approximation if the source duration is shorter than the periods used in the inversion.
The present method solves a task similar to the SCISOLA code (Triantafyllis et al. 2016), but the main goals of the codes are different. SCISOLA is a tool for real-time MT solutions, can be run as a SeisComP3 module, and uses the ISOLA code (Sokos & Zahradník 2013) as a back-end. The main objective of the presented method, which is also inspired by ISOLA, is to improve the core of the inversion by using the data covariance matrix and the Bayesian approach. Instead of extending ISOLA with these features and using it as a back-end, we have taken the opportunity to rewrite the core of the inversion from scratch with improved computational efficiency and a more versatile code.
The paper is structured as follows. First, we describe the Bayesian formulation of the CMT problem and show by example how the uncertainty can be visualized and how it is influenced by the noise level and by the number of stations. Second, we construct the data covariance matrix from before-event noise and show how to visualize its effects. By means of a few synthetic tests we show the ability of the new method to improve the CMT estimates. Third, we test our method on real data from the Swiss Seismic Network and compare the results with the moment tensor catalogue created manually with the SeisComP3 scmtv module. Finally, we briefly describe some technical aspects of the developed code.
2 BAYESIAN FORMULATION OF THE MIXED LINEAR–NONLINEAR PROBLEM
A Bayesian approach is a probabilistic method projecting data errors and prior information into the uncertainty of the model parameters. In addition to the best-fitting CMT solution, the method provides a PPDF of the parameters. The PPDF can be obtained analytically for linear inverse problems with an L2 norm and Gaussian distributions of uncertainties. For nonlinear problems a systematic grid search can be used, but (except for problems with very few parameters) it is computationally highly inefficient. That is why stochastic sampling algorithms have been developed to provide an approximation of the PPDF (Sambridge & Mosegaard 2002; Sambridge 2014).
Here we propose a hybrid approach, where the centroid position and time are evaluated on a grid of points, and the MT is solved for by least squares. At each grid point the PPDF (which is Gaussian because the problem is linear) is obtained analytically. The PPDFs obtained in this way are then combined into a full (non-Gaussian) PPDF. We thus obtain the same PPDF as would be obtained by a stochastic method, but using just a four-dimensional grid instead of sampling a 10-dimensional model space. The grid is chosen such that it covers all relevant locations, and its sampling controls the PPDF discretization; for details see Section 6. By choosing the grid, we define a prior for the centroid location, which is uniform within the grid and zero elsewhere.
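The following sketch illustrates this hybrid scheme under simplified assumptions: at each grid point the MT posterior mean and covariance follow from the linear-Gaussian solution, and each grid point is then weighted by the quality of its local fit. The Green's function matrices `G`, the data vector `d` and the inverse noise covariance `C_D_inv` are hypothetical placeholders, and the weighting below is a simplified proxy for the paper's eq. (7), which is not reproduced here.

```python
import numpy as np

def local_gaussian_posterior(G, d, C_D_inv):
    """Posterior mean and covariance of the MT at one grid point
    (linear-Gaussian problem with a flat prior on the MT components)."""
    A = G.T @ C_D_inv @ G               # 6x6 normal matrix
    C_M = np.linalg.inv(A)              # posterior covariance of the MT components
    m = C_M @ (G.T @ C_D_inv @ d)       # posterior mean (least-squares solution)
    r = d - G @ m
    misfit = float(r @ C_D_inv @ r)     # covariance-weighted squared misfit
    return m, C_M, misfit

def combine_grid(points, d, C_D_inv):
    """Evaluate every space-time grid point and weight it; `points` maps a
    grid index to its (hypothetical) Green's function matrix G."""
    results = {}
    for i, G in points.items():
        m, C_M, misfit = local_gaussian_posterior(G, d, C_D_inv)
        # Simplified evidence proxy: Gaussian likelihood of the best MT times
        # the posterior volume; the paper's eq. (7) defines the exact weight.
        log_w = -0.5 * misfit + 0.5 * np.linalg.slogdet(C_M)[1]
        results[i] = (m, C_M, log_w)
    # Normalize the weights over the grid so that they sum to one.
    log_ws = np.array([v[2] for v in results.values()])
    ws = np.exp(log_ws - log_ws.max())
    ws /= ws.sum()
    return results, dict(zip(results.keys(), ws))
```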
We need a way to visually inspect the marginal probability densities of the inverted parameters as well as of some (nonlinear) combinations of them (e.g. the strike angle or the double-couple percentage, which are not directly inverted but can be calculated from the inverted moment tensor components). To manage this, we generate random samples from a multivariate normal (Gaussian) distribution at each grid point, specified by its mean |$\mathbf {\widetilde{m}}_i$| and covariance matrix |$\widetilde{\mathbf {C}}^M_i$| (determined in eqs 1 and 2). The number of random samples at each grid point i is proportional to ai, that is, the PPDF integrated over the local MT parameters (eq. 7). Having an ensemble of random samples drawn from the total PPDF, we can easily plot histograms of the parameters of interest (both inverted parameters and their combinations) or the scatter of nodal planes.
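A minimal sketch of this resampling step is given below, assuming per-grid-point means, covariances and normalized weights of the kind produced by the previous sketch; the helper `mw_from_mt` in the commented usage example is hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def sample_total_ppdf(grid_solutions, weights, n_total=4000, seed=1):
    """Draw MT samples from the mixture of per-grid-point Gaussians.
    `grid_solutions[i] = (mean_i, cov_i)` and `weights[i]` is the normalized
    weight of grid point i (a_i in the text)."""
    rng = np.random.default_rng(seed)
    samples = []
    for i, (mean, cov) in grid_solutions.items():
        n_i = int(round(weights[i] * n_total))   # number of samples proportional to a_i
        if n_i > 0:
            samples.append(rng.multivariate_normal(mean, cov, size=n_i))
    return np.vstack(samples)

# Example: histogram of a derived quantity, e.g. moment magnitude, computed
# from each sampled moment tensor (mw_from_mt is a hypothetical helper).
# samples = sample_total_ppdf(grid_solutions, weights)
# plt.hist([mw_from_mt(m) for m in samples], bins=50)
# plt.show()
```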
To illustrate the performance of the proposed method, we conduct a synthetic test showing the behaviour of the Bayesian inversion for different noise levels and station configurations (Fig. 1). We generated synthetic waveforms (in the same velocity model as used in the inversion), added white noise of a specified level, and used the result as 'data' in the inversion. The marginal probability of each parameter narrows for a higher number of stations, a lower noise level, and better azimuthal coverage. The dependence on the noise level is driven by the data covariance matrix CD (described in the next section), which reflects the properties of the before-event noise.

Figure 1. Case example (synthetic test) showing how the Bayesian uncertainty of the resulting CMT varies in five different configurations. 400 focal mechanisms were generated randomly based on the calculated PPDF. The configurations differ in the number of stations, their azimuthal distribution and the noise level. True parameters are shown in red and the best solution by the green dotted line. Marginal probability densities are shown for the nodal lines, moment magnitude Mw, centroid depth and for the MT decomposed into its double-couple, CLVD and isotropic components (Vavryčuk 2015). The marginal probability function gets narrower (more focused) as the number of stations increases, their azimuthal coverage improves, and the noise decreases. Noise denoted 'strong' has a four times higher standard deviation than the 'weak' one.
The marginal probability distribution is not always centred at the true solution (for which the synthetic seismograms were generated; red line in the figure), but usually lies close to the best-fitting model, which can be biased. The bias changes with every realization of the random white noise. The 'DC per cent' parameter never has a maximum at 100 per cent, because every perturbation around a pure-DC mechanism results in a non-DC part, so obtaining a pure-DC moment tensor is highly improbable when generating realizations from a given probability density function. However, the maximum probability of 'DC per cent' gets closer to the original 100 per cent as the uncertainty decreases.
3 COVARIANCE MATRIX OF THE NOISE
We illustrate the covariance matrix with a simple example in Fig. 2.

Figure 2. Example covariance matrix. We generated synthetic data for three 3-component stations for a given seismic source and added plane-wave white noise coming from azimuth 54° and inclination −21°, so that all components are affected. The covariance matrix (panel a) is calculated from the before-event noise (orange time window in panel b). The data covariance matrix consists of three large non-empty blocks corresponding to the three stations; their 3 × 3 sub-blocks correspond to the auto- and cross-covariance of the three components. We can see that some components are correlated (red colour at the diagonal) and others anticorrelated (blue colour). The correlation between stations is assumed to be zero.
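A minimal sketch of how such a block-diagonal data covariance matrix can be assembled from before-event noise windows is given below. The noise arrays, the inverted-window length `nsamp` and the station/component bookkeeping are placeholders; the actual implementation in ISOLA-ObsPy may differ in details such as tapering or truncation of the covariance function.

```python
import numpy as np
from scipy.linalg import toeplitz, block_diag

def cross_cov(x, y, max_lag):
    """Biased sample cross-covariance c_xy(tau) = E[x(t) y(t+tau)] for
    tau = -max_lag..max_lag; both noise series are assumed demeaned and at
    least max_lag + 1 samples long."""
    n = len(x)
    c = np.zeros(2 * max_lag + 1)
    for k, tau in enumerate(range(-max_lag, max_lag + 1)):
        if tau >= 0:
            c[k] = np.dot(x[:n - tau], y[tau:]) / n
        else:
            c[k] = np.dot(x[-tau:], y[:n + tau]) / n
    return c

def cov_block(x, y, nsamp):
    """Toeplitz covariance block between two noise components, for an
    inverted data window of nsamp samples: B[m, n] = c_xy(n - m)."""
    c = cross_cov(x, y, nsamp - 1)
    col = c[nsamp - 1::-1]     # c_xy(0), c_xy(-1), ..., c_xy(-(nsamp-1))
    row = c[nsamp - 1:]        # c_xy(0), c_xy(+1), ..., c_xy(+(nsamp-1))
    return toeplitz(col, row)

def station_cov(noise, nsamp):
    """3*nsamp x 3*nsamp block of one station, including the cross-covariance
    between its three components; `noise` maps component code to its array."""
    comps = list(noise)        # e.g. ['Z', 'N', 'E']
    blocks = [[cov_block(noise[a], noise[b], nsamp) for b in comps]
              for a in comps]
    return np.block(blocks)

# Full data covariance matrix: block-diagonal over stations, since the
# correlation between stations is assumed to be zero.
# C_D = block_diag(*[station_cov(noise_by_station[s], nsamp) for s in stations])
```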
3.1 Improved visualization of the effect of the covariance matrix
The difference between |$\boldsymbol{d}^{\prime }_{{\rm obs}}$| and |$\boldsymbol{d}^{\prime }$| (called standardized residuals, following Dettmer et al. 2014) is minimized in the L2 norm, so it can be plotted to visualize the waveform agreement as used in the inversion. The effect of the automated station weighting and frequency filtering, which are also included in the approach (shown in Section 4), can also be seen in |$\boldsymbol{d}^{\prime }_{{\rm obs}}$| (standardized observed data) and |$\boldsymbol{d}^{\prime }$| (standardized synthetic data).
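A minimal sketch of this standardization step, assuming the data covariance matrix C_D is available and positive definite; minimizing the L2 norm of the standardized residuals is then equivalent to minimizing the C_D-weighted misfit.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def standardize(C_D, d_obs, d_syn):
    """Standardize observed and synthetic data with the data covariance
    matrix: with C_D = L L^T, the standardized vectors are L^{-1} d.
    Then ||L^{-1}(d_obs - d_syn)||^2 = (d_obs - d_syn)^T C_D^{-1} (d_obs - d_syn)."""
    L = cholesky(C_D, lower=True)
    d_obs_std = solve_triangular(L, d_obs, lower=True)
    d_syn_std = solve_triangular(L, d_syn, lower=True)
    return d_obs_std, d_syn_std
```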
4 SYNTHETIC TESTS
Two representative synthetic tests are presented in this section. They show that the proposed method can not only assess the uncertainty of the CMT solutions with respect to seismic noise, but can also improve the CMT results. The improvement is possible because, by analysing the noise, the inversion can automatically be focused on the least noisy part of the data.
4.1 White noise test, station dependent
In practice, the seismic noise level often varies from station to station. An automatic algorithm or a manual operator needs to choose the stations according to their signal-to-noise ratio. In this test we examine the ability of the data covariance matrix to automate such a procedure, that is, to prefer the stations at which the noise level is low. Because in practice we often use the same frequency range at many stations, and this range is relatively narrow (1–2 octaves wide), for simplicity we assume white noise here.
We simulated waveforms of an event with a given focal mechanism, adding strong white noise to three of five stations and weak noise to the others (see the sketch below). The level of the 'strong' noise was chosen such that in manual processing such stations would be eliminated; the reason was to test the capability of the procedure to manage extreme conditions. We then down-sampled the waveforms to a rate of 1.2 Hz, dropping frequencies above the Nyquist frequency, and filtered them with a Butterworth filter to the frequency range 0.02–0.15 Hz. The results are presented in Fig. 3; further detail can be found in the electronic supplement. The covariance matrix automatically down-weighted the strongly disturbed stations, so that the inversion is controlled almost entirely by the low-noise stations. This usually improves the results, but it might sometimes produce an unfavourable station geometry. To identify such cases, we recommend inspecting the plot of standardized data together with the plot of the station geometry (panels (a) and (g) in Fig. 3).
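A minimal sketch of this test setup; the noise-level factor, station names and data layout are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_station_noise(synthetics, noisy_stations, weak_std, factor=10.0):
    """Add white noise to synthetic traces; stations listed in
    `noisy_stations` receive noise with a `factor` times larger standard
    deviation. `synthetics` maps station code -> dict of component arrays."""
    noisy = {}
    for sta, comps in synthetics.items():
        std = weak_std * factor if sta in noisy_stations else weak_std
        noisy[sta] = {c: tr + rng.normal(0.0, std, size=tr.size)
                      for c, tr in comps.items()}
    return noisy
```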

Figure 3. White noise synthetic test: (a) source–station configuration, (b) mechanism for which the waveforms were generated ('true solution'), (c) best-fitting solution with a diagonal CD (the same standard deviation for all stations), (d) best-fitting solution with the full (i.e. block-diagonal) data covariance matrix CD (Section 3). Waveform comparison for: (e) the solution with a diagonal CD, (f) the solution with the full CD, original seismograms, and (g) the solution with the full CD, multiplied by the Cholesky decomposition of CD (see Section 3.1), 'standardized data'. Panels (e)–(g) show the original waveforms without noise (dashed black lines), the waveforms with the added white noise which were inverted (solid black lines), and the modelled waveforms for the retrieved focal mechanism (coloured solid lines). Comparing panels (f) and (g) demonstrates that the covariance matrix strongly up-weighted the weak-noise stations. Further plots for this synthetic test can be found in the electronic supplement.
4.2 Coloured (correlated) noise
In real data we always encounter frequency-dependent noise, and standard manual CMT inversions need to carefully select a frequency range with a suitable signal-to-noise ratio. For example, in regional CMT inversions we can usually use only the frequency range below the microseismic noise. In this synthetic test we study whether the data covariance matrix is able to avoid the noisy part of the spectrum automatically.
We simulated waveforms for the same focal mechanism as in the previous example and added coloured noise of the same spectral content to all stations. The noise is strong between 0.05 and 0.25 Hz, while it is negligible outside this frequency band. Again, the level of the strong noise is such that in manual processing that spectral range would be avoided. We then down-sampled the waveforms to a rate of 4 Hz, dropping frequencies above the Nyquist frequency, and filtered them with a Butterworth filter between 0.10 and 0.50 Hz.
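A minimal sketch of how such band-limited noise could be generated for a test of this kind (white noise bandpass-filtered to the 0.05–0.25 Hz band); the duration, amplitude and filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def coloured_noise(n_samples, dt, f1=0.05, f2=0.25, std=1.0, seed=0):
    """Band-limited ('coloured') noise: white noise bandpass-filtered to the
    f1-f2 band, mimicking strong noise confined to 0.05-0.25 Hz."""
    rng = np.random.default_rng(seed)
    white = rng.normal(0.0, std, n_samples)
    sos = butter(4, [f1, f2], btype='bandpass', fs=1.0 / dt, output='sos')
    return sosfiltfilt(sos, white)

# Example: 300 s of noise at the 4 Hz sampling rate used in this test.
noise = coloured_noise(n_samples=1200, dt=0.25)
```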
The results are shown in Fig. 4, and more detail can be found in the electronic supplement. They show that the new method with the data covariance matrix automatically avoids using the noisy frequency range, so it indeed works as an automated filtering scheme.

Figure 4. Same as Fig. 3, but for the coloured noise synthetic test: strong noise in the frequency band 0.05–0.25 Hz at all stations. The inverted frequency band is 0.10–0.50 Hz, so only its higher part is undisturbed (noise free). The inversion with the covariance matrix CD (panel d) provides a solution close to the right one, in contrast to the wrong solution obtained with a diagonal CD (panel c). Comparing panels (f) and (g) shows that the covariance matrix serves as an automatic frequency filter enhancing the undisturbed part of the spectrum, thus improving the inversion.
In both synthetic tests we compare the new method (which uses the full data covariance matrix CD, described in Section 3) with a solution using a diagonal CD, which assumes the same standard deviation for all stations. The method with a diagonal CD is equivalent to a simple L2-norm minimization of the difference between the observed and simulated data, which is common in MT inversion. In this sense, the new method represents an improvement over the common approach. We observe that the new method finds the best solution closer (both in time and space) to the 'right' centroid position than the method with a diagonal CD.
5 COMPARISON WITH SWISS MT CATALOGUE
In order to evaluate the performance of the proposed algorithm, we systematically compare CMT solutions obtained with this approach against an independent methodology. As a reference event data set we have selected all (139) events with magnitude ≥3 from the Swiss earthquake catalogue from 1999 to June 2015 (see http://arclink.ethz.ch/fdsnws/event/1/query?starttime=1999-01-01T01:00:00&endtime=2015-07-01T00:00:00&minmagnitude=3.0&format=text&nodata=404). We have chosen the year 1999 as this was the start of the Swiss Digital Seismic Network (SDSNet), the broad-band component of the Swiss Seismic Network (Swiss Seismological Service [SED] at ETH Zurich 1983). We compare the results from our method with the moment tensor solutions obtained with the scmtv module of the SeisComP3 package (Hanka et al. 2010), hereafter called 'manual processing'. For both methods, a similar set of 1-D Green's functions, optimized for the Alpine region (Diehl et al. 2009), is used. Local and regional MT catalogues for Switzerland have been produced by the SED in the past (Braunmiller et al. 2002; Bernardi et al. 2004), though these methods have been discontinued at the SED and are now replaced by the scmtv approach. For both methods, all broad-band data available in the archives of the SED are used. This includes data not just from Switzerland, but also from foreign networks that provide real-time data to the SED; see Acknowledgements for details. Sensors used in the inversion are restricted to broad-band sensors with corner periods ranging from 40 to 120 s, the majority being 120 s.
The manual processing uses the workflow implemented within the routine catalogue curation at the Swiss Seismological Service. The methodology is based on the linear least-squares inversion of Dreger (2003). It assumes that the isotropic component is zero, the epicentral coordinates are fixed (though the depth can vary), and the source time function is fixed, so it is an MT rather than a CMT approach. A limited set of predefined parameters is used to select stations and prepare the data, applied automatically depending on the event's local magnitude (ML) as contained in the earthquake catalogue. These include channel types (broad-band), candidate stations (based on epicentral distance) and narrow-band filter ranges. These parameters can be modified, though this is rarely necessary and hence rarely done. An automatic algorithm can be applied to select the optimal set of stations and event depth that produces the best MT, taking into account the variance reduction and the double-couple percentage of the solution, whilst retaining as many stations as possible. Interactively, the solution can be tweaked to select and remove individual station components.
Of the 139 candidate events, MT solutions were obtained for 40 events. The remaining events could not be given a high-quality solution for various reasons: too few station components provided waveform fits with high variance reduction due to high background noise, or too few stations were available (a small event located at the edge of the network, or an event that occurred early in the life of the network when station density was sparse). These 40 events were also processed with our new automated code.
The automated procedure using the proposed algorithm was as follows: all broad-band stations within a radius controlled by the ML magnitude (obtained during catalogue creation at the Swiss Seismological Service) were used. The maximum epicentral distance was limited according to the ad hoc formula |$r < 2^{2 \cdot M_L}\,\mathrm{km}$| (see the sketch below). Moreover, stations closer than 2 km were removed, because they make the inversion unstable in many cases. Station components where instrumental disturbances were detected automatically by the MouseTrap code (Vackář et al. 2015) were removed. Also eliminated were records with data gaps in the period required for the earthquake and the noise analyses. The code does not remove any station with high noise, because we wanted to test the practical ability of the covariance matrix to manage data sets with noisy stations. We required at least two stations providing at least five usable components; otherwise the event was skipped.
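A minimal sketch of the distance-based station selection described above; the station dictionary and its coordinate format are assumptions, and the other selection steps (disturbance detection with MouseTrap, gap checks) are not included.

```python
from obspy.geodetics import gps2dist_azimuth

def select_stations(stations, epi_lat, epi_lon, ML):
    """Select stations by epicentral distance: keep stations with
    2 km < r < 2**(2*ML) km, following the ad hoc criterion above.
    `stations` maps a station code to its (latitude, longitude)."""
    r_max_km = 2.0 ** (2.0 * ML)
    selected = []
    for code, (lat, lon) in stations.items():
        r_km = gps2dist_azimuth(epi_lat, epi_lon, lat, lon)[0] / 1000.0
        if 2.0 < r_km < r_max_km:
            selected.append(code)
    return selected
```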
Prior to the inversion, the mean was subtracted from the records and a 4-pole Butterworth bandpass filter was applied, restricting the frequency range to 0.02–0.15 Hz. Moreover, for stations with an epicentral distance larger than 100 km, the high-frequency limit was lowered such that the minimum wavelength is no more than five times shorter than the source-to-station distance for a reference S-wave velocity of 3 km s−1. This restriction prevents errors arising from an inaccurate Earth model, which result in Green's function modelling errors (Hallo & Gallovič 2016), as well as from the limited frequency range of the sensor.
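A minimal ObsPy-based sketch of this preprocessing, assuming the distance-dependent upper corner follows directly from the wavelength criterion above; whether the actual code applies the filter as zero-phase is not specified here and is left as an assumption.

```python
def preprocess(trace, epi_distance_km, fmin=0.02, fmax=0.15, v_s=3.0):
    """Demean and bandpass-filter one trace (an obspy.core.trace.Trace);
    beyond 100 km the upper corner is lowered so that the minimum wavelength
    stays above one fifth of the epicentral distance, with v_s the reference
    S-wave velocity in km/s."""
    if epi_distance_km > 100.0:
        fmax = min(fmax, 5.0 * v_s / epi_distance_km)
    trace.detrend('demean')
    trace.filter('bandpass', freqmin=fmin, freqmax=fmax,
                 corners=4, zerophase=False)   # 4-pole Butterworth bandpass
    return trace

# Usage sketch:
# from obspy import read
# tr = preprocess(read('example.mseed')[0], epi_distance_km=150.0)
```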
For each event, the procedure calculates the CMT for four cases: (i) deviatoric MT using a diagonal covariance matrix (the same variance for all stations, that is, we just minimize the L2 norm between observed and synthetic waveforms), (ii) deviatoric MT using the data covariance matrix CD created just from the autocovariance, (iii) deviatoric MT using CD including the cross-covariance between components of individual stations, and (iv) full MT using CD including the cross-covariance between components. We observed some difference between the first case with the diagonal covariance matrix (i) and the others, but the differences between the cases with CD calculated from the noise covariance (ii–iv) were usually insignificant; only occasionally did we observe a small improvement (particularly a higher DC percentage, which we expect for tectonic earthquakes, and a higher variance reduction) in the cases with cross-covariance (iii–iv). For comparison with the manual processing, we have chosen the case with cross-covariance and deviatoric MT (iii), because the manual solution uses a deviatoric MT too.
The comparison of the automatic and manual processing is illustrated in Fig. 5. It is quantified by the difference in the orientation of the DC parts of the moment tensors, measured by Kagan's angle (Kagan 1991), and by the differences in depth and moment magnitude MW. The majority of the automatically processed events (over 70 per cent) have a focal mechanism similar to that obtained in the manual SeisComP3 processing, as expressed by a Kagan angle of 0°–20°. Events with a large Kagan angle were inspected manually (the first three lines of Table 1); in all cases there was another problem with the event, for example, a large azimuthal gap, an undetected disturbance in the inverted waveforms, an unstable mechanism strongly varying with depth at almost the same variance reduction, or a combination of these problems. The difference in moment magnitude is below 0.05 for the majority of the events, and within 0.15 in more than 90 per cent of cases. We also compared the inverted centroid depths and found the difference to be lower than 2.5 km for 44 per cent of the events. Taking into account that the station set was usually different in the two methods, we consider these results to be in good agreement.

Figure 5. Statistics of the comparison between the new automatic processing of 36 earthquakes recorded by the Swiss Digital Seismic Network and their manual processing with the scmtv module of SeisComP3. The similarity of the solutions is expressed by (a) Kagan's angles, (b) differences in moment magnitude, (c) differences in centroid depth, and (d) centroid position. All the events are shallow earthquakes with hypocentre depths up to 35 km; magnitudes are between 3.0 and 5.0.
Table 1. Specific events. Listed are events where the manual and automated solutions strongly disagree (# 1–3), where the automated solution is labelled as untrusted although a manual solution exists (# 4–11), where the automated solution is trusted although no manual solution exists (# 12–24), and where the solution is labelled as untrusted although we confirmed the automated solution by visual inspection (# 25). For the untrusted events, the value which does not pass the criterion is shown in bold. Both numerical and graphical output for event # 25 is shown in the Supporting Information as an example of the automated output of the code. The detailed output for all events, including those not listed in this table, is available at http://geo.mff.cuni.cz/∼vackar/CH_catalog/.
event # | origin time (UTC) | Mw | number of stations | number of components | DC per cent | variance reduction | condition number | standard deviation | trust | Kagan angle (°) | visual inspection | comment
1 | 2010-08-11 01:28:55 | 3.3 | 14 | 36 | 76 | 0.56 | 2 | 0.08 | 1 | 77.1 | problem | fitting just stations in one direction |
2 | 2010-06-30 11:53:44 | 3.6 | 17 | 37 | 87 | 0.59 | 2 | 0.03 | 1 | 86.0 | OK | azimuthal gap ∼ 180° |
3 | 2007-03-23 05:01:38 | 3.3 | 6 | 14 | 68 | 0.70 | 2 | 0.78 | 1 | 58.9 | uncert. | not-detected disturbance |
4 | 2011-10-29 04:13:34 | 4.0 | 17 | 42 | 16 | 0.54 | 2 | 0.01 | 0 | 27.2 | uncert. | azimuthal gap > 180° |
5 | 2009-09-11 06:34:38 | 3.1 | 20 | 49 | 11 | 0.33 | 4 | 0.55 | 0 | 36.5 | problem | CD problem: an event in before-event noise |
6 | 2009-01-04 15:30:31 | 3.8 | 18 | 52 | 45 | 0.47 | 3 | 0.01 | 0 | 22.0 | problem | problem with near-by station |
7 | 2003-08-22 09:30:09 | 3.2 | 15 | 45 | 85 | 0.33 | 3 | 1.18 | 0 | 36.3 | uncert. | |
8 | 2003-08-01 03:20:23 | 3.5 | 17 | 44 | 100 | 0.37 | 71 | 0.00 | 0 | 34.6 | OK | problem with near-by station |
9 | 2002-05-31 16:50:33 | 3.3 | 9 | 21 | 23 | 0.79 | 7 | 0.04 | 0 | 36.2 | problem | problem with near-by station |
10 | 2000-06-03 15:14:10 | 3.5 | 9 | 24 | 99 | 0.40 | 2 | 0.06 | 0 | 35.7 | uncert. | |
11 | 1999-12-31 04:55:53 | 4.1 | 13 | 37 | 41 | 0.73 | 3 | 0.02 | 0 | 93.0 | uncert. | azimuthal gap > 270° |
12 | 2015-06-15 03:14:47 | 3.0 | 15 | 33 | 81 | 0.61 | 2 | 0.52 | 1 | OK | ||
13 | 2008-06-17 19:48:07 | 3.0 | 7 | 19 | 98 | 0.89 | 7 | 0.52 | 1 | OK | ||
14 | 2006-10-20 00:11:58 | 3.5 | 7 | 16 | 88 | 0.52 | 3 | 0.14 | 1 | OK | ||
15 | 2004-06-21 23:10:02 | 3.4 | 11 | 26 | 68 | 0.76 | 2 | 0.25 | 1 | OK | ||
16 | 2004-02-23 17:31:21 | 4.5 | 20 | 55 | 97 | 0.68 | 2 | 0.01 | 1 | OK | ||
17 | 2004-02-18 14:31:58 | 3.2 | 8 | 21 | 87 | 0.54 | 2 | 1.24 | 1 | OK | ||
18 | 2003-03-22 13:36:15 | 3.9 | 17 | 46 | 60 | 0.68 | 3 | 0.06 | 1 | OK | azimuthal gap > 270° | |
19 | 2002-01-18 11:14:54 | 3.4 | 3 | 9 | 82 | 0.82 | 9 | 0.80 | 1 | uncert. | ||
20 | 2001-10-01 06:36:22 | 3.8 | 11 | 31 | 79 | 0.88 | 5 | 0.03 | 1 | OK | OK | |
21 | 2001-07-09 22:50:02 | 3.1 | 5 | 11 | 80 | 0.79 | 3 | 1.21 | 1 | OK | OK | |
22 | 2001-03-17 00:29:59 | 3.4 | 14 | 35 | 93 | 0.72 | 7 | 0.11 | 1 | OK | OK | |
23 | 2000-06-10 05:51:02 | 3.4 | 9 | 21 | 88 | 0.62 | 2 | 0.43 | 1 | OK | OK | |
24 | 2000-06-09 05:06:06 | 3.2 | 2 | 6 | 54 | 0.75 | 7 | 1.11 | 1 | uncert. | unstable | |
25 | 2013-12-27 07:08:28 | 3.4 | 21 | 61 | 62 | 0.29 | 3 | 1.86 | 0 | OK | fit decreased by distant stations |
Of the 139 events, 24 were skipped because too few stations were available. Of the 115 remaining events, 45 passed the criterion (eq. 16). These events, together with their solutions, are plotted in Fig. 6, and statistics of some of their parameters are plotted in Fig. 7. Detailed output of the ISOLA-ObsPy code for one example event is also included in the electronic supplement. Most of the trusted events have a variance reduction in the range 0.8–0.9 and a condition number between 2 and 4. From this set, 32 events have their counterpart in the manual processing and 13 events were newly obtained. The 70 untrusted events include 8 events previously processed manually. We inspected all the removed and newly added events one by one. For these 8 untrusted events, in most cases we consider that our result is not reliable. This does not necessarily mean that the result from the manual processing is unreliable too; the proper selection of stations and components, as well as other operations in the manual processing, may help in these specific cases. Inspecting all 13 newly obtained events visually, we confirmed all of them. We speculate that manual solutions could not be obtained because the manual method lacks the flexibility to accommodate narrow bandpasses in noisy records, which is a key feature of the proposed method.

Figure 6. Map of all events with a CMT solution calculated by the proposed method. Events which passed the criterion (eq. 16), that is, 'trusted events', are shown in red, the others in black. The stations available from the SED data archives at arclink.ethz.ch are shown by blue triangles. Note that only a specific subset of the stations was used in the inversion of each MT. Magnitudes are between 3.0 and 5.0, hypocentre depths are up to 35 km.

Figure 7. Statistics of quality measures for the 45 earthquakes of the Swiss earthquake catalogue whose focal mechanisms were obtained by the new automatic processing and evaluated as reliable by the automated criterion (eq. 16). Note the reasonable reliability of the new solutions, expressed by their (a) high values of the variance reduction, (b) low condition number, (c) high DC percentage, and (d) low combined standard deviation of the CMT parameters (see eq. 16). The cut-off value of the criterion is shown by the red line.
The results for the removed and newly added events are listed in Table 1, and a list of all results, linked with detailed output for each event, is available at http://geo.mff.cuni.cz/∼vackar/CH_catalog/. For each event, it includes several automatically generated plots and tables in the form of an HTML page: the centroid location, the moment tensor and its decomposition including an uncertainty visualization, the quality measures VR and CN, uncertainty histograms for several parameters, a list and map of the used stations, waveform-fit, noise and spectral plots, a data covariance matrix plot, and figures of the grid of solutions showing the PPDF, centroid time, variance reduction, condition number and the best solution at each grid point.
6 DEVELOPED SOFTWARE
The method described in the previous sections, together with a few other procedures for optimal data selection and some plotting routines, is implemented in the software package ISOLA-ObsPy, which can be used for fully automated moment tensor inversion of near-real-time data flows as well as of large data sets of previously recorded events.
The package includes automated data retrieval (from any file format supported by ObsPy (Krischer et al. 2015) or via ArcLink (SeisComP3 2016)), removal of components with various instrumental disturbances, setting of frequency ranges for each station individually according to its distance and the event magnitude, and full-waveform inversion in a space–time grid around the hypocentre. The size of the space–time grid is chosen automatically according to the location uncertainty and magnitude. The time sampling is 100 times higher than the upper limit of the inverted frequency band. The spatial sampling can be adjusted by the user by entering the horizontal and vertical steps directly and/or by entering a maximal number of grid points. In the application described in Section 5 we used a sampling of 1 km, which can be recommended for similar applications.
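A simplified sketch of how such a space–time grid might be constructed; the grid extent, the handling of the location uncertainty and the depth convention (positive downwards) are assumptions for illustration, not the actual rules implemented in ISOLA-ObsPy.

```python
import numpy as np

def build_grid(hypo_x, hypo_y, hypo_z, radius_km, step_km, fmax, time_span_s):
    """Regular space-time grid around the hypocentre: spatial nodes on a
    `step_km` lattice within `radius_km`, and centroid-time samples spaced at
    1 / (100 * fmax), following the time sampling mentioned above."""
    offsets = np.arange(-radius_km, radius_km + step_km, step_km)
    points = [(hypo_x + dx, hypo_y + dy, hypo_z + dz)
              for dx in offsets for dy in offsets for dz in offsets
              if hypo_z + dz > 0.0]            # keep sources below the surface (z positive down)
    dt = 1.0 / (100.0 * fmax)                  # centroid-time step
    times = np.arange(-time_span_s, time_span_s + dt, dt)
    return points, times

# Example: 1 km sampling within 5 km of the hypocentre, fmax = 0.15 Hz.
points, times = build_grid(0.0, 0.0, 8.0, radius_km=5.0, step_km=1.0,
                           fmax=0.15, time_span_s=5.0)
```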
The grid search over time and space is effectively combined with the analytical (least-squares) MT inversion in a Bayesian framework. In this way, not only the best solution is found, but also the full PPDF is inferred. The marginal probability density function of any CMT parameter can be plotted. The data covariance matrix calculated from before-event noise yields an automated weighting of the stations according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. To speed up the inversion, time-demanding tasks such as the Green's function calculations and the spatial grid search are parallelized. The software package is programmed to be as versatile as possible in order to be applicable in various networks ranging from local to regional.
The code shares some similarities with the broadly used ISOLA software (Sokos & Zahradník 2013) in terms of the inversion methods and input/output file structures, but most of the code has been re-written from scratch for maximum computational efficiency (combining Fortran and Python, using ObsPy, NumPy and Matplotlib). In contrast to ISOLA, whose advantage lies in user-friendly manual processing of individual events using a Matlab GUI, the new code is intended rather for massive automated application to large sets of earthquakes from a database, and/or for near-real-time applications.
Although the process is fully automated, the inversion can be visually inspected later. For this reason, many figures are plotted automatically. Besides Figs 1–7, which all come from the automated output of the code, there are figures showing the spectra of the original records and of the records after filtering by the data covariance matrix, a figure of the covariance functions, and figures showing how the mechanism varies with depth and space (Fig. 8).

Figure 8. Example of the automated output of the code for an MW = 3.7 earthquake at Sargans, Switzerland, on 2013-12-27 07:08:28. Both panels show how the solution varies with depth and the west–east coordinate (view from the south). The colour of the beachball corresponds to the DC percentage of the best solution at a given grid point. In panel (a), the size of the beachballs corresponds to the posterior probability density function (PPDF), with the PPDF summed over the third dimension (N–S coordinate) and time. In panel (b), the beachball size corresponds to the variance reduction (VR), the solutions are shown in the plane in which the best solution (circled) lies, the colour in the background corresponds to the inverted centroid time at a given grid point, and the contour lines show the condition number. The other plots (waveforms, covariance matrix, uncertainty histograms, etc.) for this event are shown in the Supporting Information.
The ISOLA-ObsPy software package is freely available under GNU/GPL licence and can be downloaded from http://geo.mff.cuni.cz/∼vackar/isola-obspy/, where there is also full documentation.
7 CONCLUSION AND DISCUSSION
We developed a new method and code for centroid moment tensor (CMT) inversion. The method is innovative in the following aspects: (i) the CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later using the many figures which are plotted automatically. (ii) The automated process includes detection of disturbances based on the MouseTrap code (Vackář et al. 2015). Although the detection does not cover all types of disturbances and further development is welcome, it avoids most of the problems which could cause misleading results. (iii) The data covariance matrix calculated from before-event noise is used. It works as an automated frequency filter and as a station weighting according to the noise level. (iv) A Bayesian approach is used, so not only the best solution is obtained, but also the PPDF. (v) A space–time grid search effectively combined with the least-squares inversion of the moment tensor components speeds up the inversion and allows visual inspection of the solution stability over the grid.
Most of these features have been used by other authors before, but their combination is novel. To the best of our knowledge, an automated MT solution using a Bayesian approach has been published only by Stähler & Sigloch (2014), for teleseismic events. The combination of an analytical solution for the MT components with spatial–temporal sampling was suggested by the same authors, but we do not know of any other study using it in practice. Nor did we find any other Bayesian method eliminating disturbances in seismograms, although hierarchical weights (Dettmer et al. 2014) could overcome this issue. Other works using Bayesian inversion of the (C)MT, which are not automated, include Wéber (2006), Duputel et al. (2012) and Mustać & Tkalčić (2016). We differ not only in the automation and the analytical determination of the MT components, but also in the construction of the data covariance matrix using a non-parametric approach.
We demonstrate the usefulness of the data covariance matrix in automatically identifying the set of stations and frequency ranges most suitable for the waveform inversion. In this sense, our new method is more efficient than various existing procedures in which the inversion is repeatedly performed in different frequency ranges, either manually or automatically (as an example of the latter, see the useful frequency range tests of Dias et al. 2014). The proposed approach appears useful for weak events and records with low signal-to-noise ratios, where the seismic noise is the dominant source of errors. It is important to keep in mind that the obtained uncertainties correspond only to the noise in the data and do not reflect the errors in the Green's functions or any other sources of errors, so they are usually underestimated in real cases. For more general use, it would be useful to combine our approach (covariance matrix from the before-event noise covariance) with other methods reflecting the Green's function uncertainty, such as hierarchical error estimation (Bodin et al. 2012; Dettmer et al. 2012) or analytical calculation of the covariance matrix (Hallo & Gallovič 2016).
The automated procedure is tested by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 yr, using data available from the Swiss data centre at arclink.ethz.ch. The quality of the results of the presented automated process is comparable with careful manual processing of the data.
As an outlook, it would be useful to generalize the method to multi-point source approximation, incorporate the uncertainty of the Green's functions to the covariance matrix, add automated detection of more types of disturbances, and use the first-motion polarities in the inversion.
Acknowledgments
Data used in this study are primarily from Switzerland (Swiss Seismological Service [SED] at ETH Zurich 1983). Other agencies that provide data archived at the SED and accessible for this study are: Geophysical Observatory Fürstenfeldbruck (Ludwig-Maximilians-University, Munich), GERESS (Hannover), GRSN/SZGRF (Erlangen), INGV/MEDNET (Rome), Landes-Erdbebendienst (Freiburg i. B.), OGS/CRS (Udine/Trieste), RENASS (Strasbourg), RSNI/DipTeris (Genova), SISMALP (Grenoble), SNRS (Ljubljana), TGRS (Nice), ZAMG (Vienna).
We acknowledge financial support from the Czech Science Foundation project 14-04372S and the Grant Agency of Charles University projects SVV-260218 and GAUK-496213. The first author was supported by the SCIEX Scholarship Fund, and a significant part of the work was done during his fellowship at ETH Zurich.
REFERENCES
SUPPORTING INFORMATION
Supplementary data are available at GJI online.
supplement_all.pdf
Please note: Oxford University Press is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the paper.