Summary

The global catalogue of tsunami events is examined to determine if transient variations in tsunami rates are consistent with a Poisson process commonly assumed for tsunami hazard assessments. The primary data analyzed are tsunamis with maximum sizes >1m. The record of these tsunamis appears to be complete since approximately 1890. A secondary data set of tsunamis >0.1m is also analyzed that appears to be complete since approximately 1960. Various kernel density estimates used to determine the rate distribution with time indicate a prominent rate change in global tsunamis during the mid-1990s. Less prominent rate changes occur in the early- and mid-20th century. To determine whether these rate fluctuations are anomalous, the distribution of annual event numbers for the tsunami catalogue is compared to Poisson and negative binomial distributions, the latter of which includes the effects of temporal clustering. Compared to a Poisson distribution, the negative binomial distribution model provides a consistent fit to tsunami event numbers for the >1m data set, but the Poisson null hypothesis cannot be falsified for the shorter duration >0.1m data set. Temporal clustering of tsunami sources is also indicated by the distribution of interevent times for both data sets. Tsunami event clusters consist only of two to four events, in contrast to protracted sequences of earthquakes that make up foreshock–main shock–aftershock sequences. From past studies of seismicity, it is likely that there is a physical triggering mechanism responsible for events within the tsunami source ‘mini-clusters’. In conclusion, prominent transient rate increases in the occurrence of global tsunamis appear to be caused by temporal grouping of geographically distinct mini-clusters, in addition to the random preferential location of global M >7 earthquakes along offshore fault zones.

Introduction

The assessment of tsunami hazards, like that of many other hazards, assumes that the rate of occurrence is constant with time. A Poisson process is often the default model for these assessments (e.g. Geist & Parsons 2006; González et al. 2009), and under it there can be transient, apparent variations in the historical occurrence rate caused by events randomly occurring close in time. It is important to examine the historical data to determine whether a Poisson process can indeed explain the occurrence of tsunamis or whether interevent triggering can elevate the occurrence rate over a period of time.

Triggering among tsunami sources implies that the occurrence of one event is dependent on another and is therefore not consistent with a Poisson process. For earthquakes, the most common example of triggering is the occurrence of foreshock–main shock–aftershock sequences. Whereas spontaneous earthquakes are thought to be independent and follow a Poisson process, the addition of foreshock–main shock–aftershock sequences to the historical record causes the distribution of event numbers to deviate from a Poisson distribution and results in higher-than-expected, transient rate changes. Because earthquakes cause the vast majority of tsunamis, either directly or indirectly through seismically triggered landslides beneath the ocean, we would expect the statistics of tsunamis to be similar to those of earthquakes. However, only earthquakes under certain conditions generate observable tsunamis: if the magnitude is large enough, if the earthquakes (or triggered landslides) are beneath the ocean and if the earthquakes are not very deep. These conditions select a subset of all possible earthquakes, and it is worth investigating whether the same statistical models of seismicity apply to tsunamis under these conditions.

There have been some notable recent examples of triggering between tsunamigenic earthquakes. It is generally agreed that the Mw = 8.3 2006 November 16 interplate thrust earthquake along the Kuril subduction zone triggered the Mw = 8.1 2007 January 13 outer-rise, normal-faulting earthquake approximately 105 km away (Ammon et al. 2008; Lay et al. 2009; Raeesi & Atakan 2009; Ogata & Toda 2010). Both of these earthquakes generated significant tsunamis in the near and far field (Fujii & Satake 2008; Horrillo et al. 2008; Kowalik et al. 2008; Dengler et al. 2009; MacInnes et al. 2009). It is believed that the static Coulomb failure stress increased on the outer-rise fault following the 2006 interplate thrust event, consistent with previous hypotheses that the rupture cycles of interplate thrust and outer-rise earthquakes are linked (Christensen & Ruff 1988; Dmowska et al. 1988). Whereas the two Kuril earthquakes were almost 2 months apart, the interevent time between outer-rise and interplate thrust earthquakes can be very short, as in the case of the 2009 September 29 Samoa earthquake. Although originally considered a single event, Lay et al. (2010) indicate that it was a complex sequence of events, starting with a Mw = 8.1 outer trench-slope normal-faulting earthquake followed by a compound Mw = 8.0 interplate thrust earthquake 2 min later. In addition to triggering of tsunamigenic earthquakes that occur on different faults, triggering among earthquakes on the same fault can occur, such as on different segments of the interplate thrust (e.g. Dmowska & Lovison 1992; Lin & Stein 2004). Throughout the tsunami catalogue, one can identify short sequences of tsunami sources in which the individual events are likely dependent on one another.

The phenomenon of interevent triggering has been studied extensively in relation to interpreting seismicity. Fundamental laws such as the Gutenberg–Richter (G–R) relation for earthquake sizes and the Omori law for aftershock sequences appear to have global applicability. Statistical branching models, such as the epidemic-type aftershock sequence (ETAS; Ogata 1988; Ogata & Zhuang 2006) and critical branching models (Kagan & Knopoff 1987; Marzocchi & Lombardi 2008; Kagan & Jackson 2011), are often effective in replicating observed earthquake sequences and are consistent with the G–R and Omori relations. More generally, the distributions of interevent times and earthquake numbers evidenced by global and regional catalogues exhibit non-Poissonian statistics. Kagan (2010) demonstrates that triggered earthquakes contributing to spatial and temporal event clusters are best fit by a negative binomial distribution for large time and space windows and a large magnitude range. Earthquake numbers within individual earthquake clusters follow a geometric distribution. If a restricted magnitude range is selected, then earthquake numbers, including clusters, can be approximated by a Poisson distribution. Using these observations, Kagan & Jackson (2000, 2011) develop both long-term and short-term forecasts.

We investigate whether triggered events appear in global tsunami occurrence statistics, in terms of the distribution of annual event numbers and interevent times. After a description of the data, in terms of its level of completeness and historical trends, we note anomalous apparent rate changes in historical time. To assess these rate changes further, we test whether a stationary Poisson process or a temporal and spatial clustering process adequately models tsunami occurrence data. If the latter, then there may be some limited short-term forecast potential that can be explored in future studies. This may help elucidate concerns regarding the future occurrence of tsunamis following major events such as the Mw= 9.0 2011 March Tohoku tsunami.

Data selection

The data used for this study are extracted from the National Geophysical Data Center (NGDC) tsunami database (last accessed 2010 November 5). Other tsunami databases aside from the NGDC database exist for the type of analysis presented here (Gusiakov 2001, 2009) and should produce similar results over the period for which the tsunami catalogue is complete. The tsunami event database is a compilation of separate catalogues of tsunami events, tide-gauge reports and individual event reports. Each event is linked to available water-level readings (tide-gauge amplitudes, run-up heights, etc., above the tidal height at the time of the tsunami). An advantage of tsunami catalogues is that direct measurements of wave height or amplitude are recorded and are not dependent on a particular scale conversion, as is the case with earthquake magnitude. The primary observation systems for tsunamis are global tide-gauge networks, as described by Mofjeld (2009) and Dunbar et al. (2009). In addition, tsunami catalogues include eyewitness observations and post-event tsunami surveys. Although there may be some regions of the world in which a tsunami of decimetre scale may not be detected, the long-term rate of instrumentally observable tsunamis greater than 0.1 m has been fairly stable since the mid-20th century, relative to current tsunami observation systems. Completeness of historic tsunami records is aided by the fact that tsunami event detection is primed or conditioned by the occurrence of an earthquake: efforts at event detection are intensified when a major earthquake occurs beneath the ocean or near the coast. For significant tsunamis >1m, the long-term rate as determined from different types of data (tide-gauge recordings, physical marks, eyewitness observations, etc.) has been fairly stable since the beginning of the 20th century. We quantitatively estimate periods of completeness for the two tsunami data sets (>1 and >0.1m) later.

The data selection objective for this study is to retain those events that are reliably identified as tsunamis. Events in the NGDC database have an integer validity mark ranging between −1 and 4, with 3 being defined as a probable tsunami and 4 as definite tsunami. Validity values of −1, 0, 1 and 2 are labelled as erroneous, seiche or inland disturbance, very doubtful and questionable, respectively. Data that are initially selected range in time from 1800 January 1–2010 January 10, and have validity marks of 3 or 4. Maximum water levels for each event are displayed in Fig. 1 to determine catalogue completeness and whether the recording threshold has changed over the last two centuries. The relative abundance of 0.1m values is interpreted as being a default reading, likely an indicator that a tsunami was distinguished on the tide-gauge recording from its frequency characteristics or rapid onset in comparison to ambient noise (e.g. wind waves), but a maximum-amplitude determination could not be made. In the latter part of the 20th century continuing to the present, maximum-amplitude values less than 0.1 m are more common, because of an improvement in tide-gauge technology.

Figure 1

Maximum amplitude (m) on a logarithmic scale of tsunami events from the beginning of the 19th century to 2010. Increased reporting during the 20th century is evident, particularly below the 1 m threshold.

To determine when the global reporting rate stabilizes, we search for change points in the cumulative number distribution. Successive linear regressions of the cumulative distribution are computed, starting with the first event and sequentially increasing the left cut-off year of the regression with each event until the coefficient of determination (R²) reaches a maximum. This simple analysis suggests that the long-term rate of tsunami observations >1m stabilizes at approximately 1890 (2.3 events yr–1). For the lower threshold amplitude, the change-point analysis is applied to maximum tsunami amplitudes between 0.1 and 1.0m; this indicates that the long-term rate for maximum amplitudes >0.1m stabilizes in the late 1950s (7.9 events yr–1), and applying the analysis to the 0.1–1.0m range gives a more conservative estimate of completeness than applying it to all data >0.1m. Starting with the occurrence of the Pacific basin-wide 1946 April 1 Aleutian tsunami, there was a gradual increase in tide-gauge stations reporting tsunami events throughout the world. That event prompted more systematic reporting of small tsunamis and the establishment of the first Pacific tsunami warning centre. Similar completeness estimates for the >1 and >0.1m data are obtained by examining the tsunami size distribution as a function of time, using the method developed by Wiemer & Wyss (2000).
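For illustration, a minimal sketch of this change-point search is given below, assuming the event years are available as a sorted one-dimensional array; the array name and the minimum number of events kept in the fit are placeholders, not part of the NGDC database.

```python
import numpy as np
from scipy import stats

def completeness_year(event_years, min_events=10):
    """Sweep the left cut-off year and return the cut-off (and rate) that maximizes
    R^2 of a linear fit to the cumulative event count -- a simple change-point search."""
    years = np.sort(np.asarray(event_years, dtype=float))
    best = (-np.inf, years[0], 0.0)                 # (R^2, cut-off year, events/yr)
    for i in range(len(years) - min_events):
        t = years[i:]
        cum = np.arange(1, len(t) + 1)              # cumulative number of events
        fit = stats.linregress(t, cum)
        if fit.rvalue ** 2 > best[0]:
            best = (fit.rvalue ** 2, t[0], fit.slope)
    return best[1], best[2]

# e.g. year0, rate = completeness_year(tsunami_years_gt_1m)   # hypothetical input array
```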

A cumulative plot of tsunami events displays interesting transient rate changes lasting several years or more after the long-term rate has stabilized (Fig. 2). Fig. 2(a) shows the cumulative number of events for the entire catalogue; Dunbar et al. (2010) provide historical trends for the complete tsunami catalogue since the beginning of the 18th century. Figs 2(b) and (c) are for portions of the tsunami catalogue that are thought to be complete: >1m for the time period 1890–2010 and >0.1m for 1960–2010, respectively. A prominent rate increase occurs in the mid-1990s for both the >1 and >0.1m data; this increase was the original motivation for this study. Rate-change patterns detected by kernel density estimates for the data shown in Figs 2(b) and (c) are discussed further in the next section.

Figure 2

Cumulative number of tsunami events: (a) all tsunami events 1800–2010; (b) events with maximum amplitude >1m from 1890 to 2010; (c) events with maximum amplitude >0.1m from 1960 to 2010.

The data selection criterion used to determine tsunami interevent times is that tsunami sources are recorded with at least hourly precision. Interevent times are plotted in Fig. 3 for the selected data (1800–2010). Tsunami interevent times are for the most part stationary over the 20th century as the observation systems for tsunamis stabilize. The lack of interevent times longer than several hundred days during this time period is consistent with their much reduced probability of occurrence under an interevent distribution model in the exponential family, as discussed in Section 5. There is an apparent decrease in the minimum threshold of interevent times that relates to the ability to distinguish a succeeding event within the coda of the antecedent event. The advent of real-time communications, digital technology and shorter sampling intervals at tide-gauge stations during the latter part of the 20th century (cf., Mofjeld 2009) may have resulted in improved identification of separate tsunami events. The NGDC database does not seem to contain events measured at deep-ocean pressure sensors that have no coastal water-level measurements. An example would be a small-amplitude tsunami from an aftershock that is not separately distinguished at a tide-gauge station from the larger tsunami associated with the main shock. Missed tsunami event pairs or sequences with short interevent times are likely rare in the database.

Figure 3

Interevent time (days) on a logarithmic scale of all tsunami events from the beginning of the 19th century to 2010.

Kernel density estimates

Using the data described in Section 2, we first examine patterns of historical rate changes using non-parametric methods. Kernel density estimates are used to better delineate the fluctuations observed in the cumulative plot (Fig. 2). Compared to traditional histograms, kernel density estimates avoid origin and discontinuity problems, while imposing a degree of smoothing on the data. In the statistical analysis of seismicity, event catalogues are declustered to determine the background rate of spontaneous earthquake occurrence (i.e. in the absence of dependent foreshock–main shock–aftershock sequences; e.g. Console et al. 2003; Marzocchi & Lombardi 2008). Using a declustered catalogue, different statistical tests have been developed (e.g. Matthews & Reasenberg 1988) to determine whether variations of the spontaneous rate are caused by the intrinsic randomness of occurrence (e.g. that of a Poisson process), or whether there are real variations in the spontaneous rates caused by changes in long-term fault movement (Ogata & Abe 1991) or an external triggering event (e.g. Selva & Marzocchi 2005; Wang et al. 2010). In this early stage of statistical analysis of tsunami occurrence, we examine the raw catalogue without declustering the data, to determine whether aftershocks and triggered events have similar effects on rate changes as they do for earthquake occurrence.

In addition to statistical tests of spontaneous rate changes, Matthews & Reasenberg (1988) indicated that it is helpful to have other visual tools, such as kernel density estimates, to evaluate historical changes in seismicity. In general, kernel density estimates provide a non-parametric regression of the data using a particular kernel function K. The estimate \hat{f}_h(x) is defined as

(1)  \hat{f}_h(x) = \frac{1}{Nh} \sum_{i=1}^{N} K\!\left(\frac{x - X_i}{h}\right),
where X_i are the observation values, N is the number of observations and h is the bandwidth of the kernel function (Silverman 1998). The kernel function is often in the form of a probability density function (usually unimodal and symmetric, such as the Gaussian function) that satisfies the condition
(2)  \int_{-\infty}^{\infty} K(x)\, \mathrm{d}x = 1.

The kernel can be thought of as a probability mass of size 1/N at each data point (Wand & Jones 1995). There are numerous kernel density functions commonly used, two of which we test later.

The value used for the bandwidth (h) controls the smoothing of the kernel density estimate. Although the process of choosing the optimal bandwidth is somewhat subjective, guidance is provided based on minimization of the mean integrated square error (MISE)  

(3)  \mathrm{MISE}(\hat{f}_h) = E \int \left[\hat{f}_h(x) - f(x)\right]^{2} \mathrm{d}x.

In reference to the standard normal density distribution, Silverman (1998) indicates that the optimal bandwidth is  

(4)  h_{\mathrm{opt}} = \left(\frac{4}{3N}\right)^{1/5} \hat{\sigma} \approx 1.06\, \hat{\sigma}\, N^{-1/5},
where \hat{\sigma} is the sample standard deviation of the data.

Although this value of the bandwidth may oversmooth the density estimates in some cases (Wand & Jones 1995), results presented here using this value appear to be consistent with fluctuations observed in the cumulative plot (Fig. 2). The h_opt values for the two subcatalogues are 10.1 yr for the >1m data and 4.4 yr for the >0.1m data.
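A minimal sketch of the rule-of-thumb bandwidth of eq. (4), assuming the event times are a one-dimensional array of decimal years (the array name is illustrative):

```python
import numpy as np

def silverman_bandwidth(event_times):
    """Rule-of-thumb bandwidth h_opt = (4/(3N))^(1/5) * sigma (Silverman 1998)."""
    x = np.asarray(event_times, dtype=float)
    return (4.0 / (3.0 * x.size)) ** 0.2 * x.std(ddof=1)
```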

Density estimates of the historical tsunami data are shown in Fig. 4, using two different kernel functions: the biweight and Epanechnikov kernels. The biweight kernel function is given by  

(5)  K(u) = \frac{15}{16}\left(1 - u^{2}\right)^{2} \quad \text{for } |u| \le 1, \;\; K(u) = 0 \text{ otherwise},
whereas the Epanechnikov kernel function is  
(6)  K(u) = \frac{3}{4}\left(1 - u^{2}\right) \quad \text{for } |u| \le 1, \;\; K(u) = 0 \text{ otherwise}.
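A sketch of the density estimate of eq. (1) with the biweight and Epanechnikov kernels of eqs (5) and (6); the evaluation grid, input array and bandwidth value are placeholders.

```python
import numpy as np

def biweight(u):
    return np.where(np.abs(u) <= 1.0, 15.0 / 16.0 * (1.0 - u**2) ** 2, 0.0)

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kde(x_grid, data, h, kernel=biweight):
    """Kernel density estimate f_hat(x) = (1/(N h)) * sum_i K((x - X_i)/h)  (eq. 1)."""
    x_grid = np.asarray(x_grid, dtype=float)
    data = np.asarray(data, dtype=float)
    u = (x_grid[:, None] - data[None, :]) / h
    return kernel(u).sum(axis=1) / (data.size * h)

# e.g. density = kde(np.arange(1890, 2011, 0.25), event_years, h=10.1)
```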

Figure 4

Kernel density estimates of historical tsunami occurrence. Events >1m from 1890 to 2010: (a) biweight kernel; (b) Epanechnikov kernel. Events >0.1m from 1960 to 2010: (c) biweight kernel; (d) Epanechnikov kernel.

Several features of the kernel density estimate appear robust. For the >1m data (Figs 4a and b), a prominent rate increase occurs in the 1990s as also suggested by the cumulative event plots in Fig. 2. The kernel density estimate for the >1m data also suggests two other periods with slightly higher occurrence: one in the 1930s and one from the late 1960s to the early 1970s. Because of the smaller bandwidth, the density estimate for the >0.1m data more finely resolves the rate increase in the 1990s, as well as indicating an apparent rate increase starting in the mid-2000s (Figs 4c and d). In each case, the high number of tsunamis that occurred in the mid-1990s is the most anomalous in the last century.

Because of the finite bandwidth of the kernel density functions, there are edge effects at the beginning and end of the catalogue. Smoothing at the beginning and end of the historical catalogue causes an artificial decrease in the density estimates. Boundary kernels have been proposed to fix this problem (Wand & Jones 1995). In particular, Chen (2000) developed non-negative kernels based on the gamma distribution, but with a variable shape parameter:  

(7)  K_{\rho_b(x),\, b}(t) = \frac{t^{\rho_b(x) - 1} \exp(-t/b)}{b^{\rho_b(x)}\, \Gamma\!\left(\rho_b(x)\right)},
with position-dependent shape parameter \rho_b(x),
where Γ is the gamma function and b is a scale parameter. One property of this kernel is that the shape and the amount of smoothing vary depending on proximity to the beginning and end of the data. Overall, the kernel density estimate using the variable gamma kernel (Fig. 5, red curve) is less smooth, but again indicates that the mid-1990s rate increase is prominent. This analysis also suggests that the onset of this rate increase is more abrupt than indicated by the conventional density estimate. The gamma kernel estimate can also be used to determine whether the rate increase of the mid-2000s is waning. For the >1m data, the rate of occurrence appears to have returned by 2010 to the background level of the first half of the 20th century. For the >0.1m data, the rate increase appears to be continuing to the present day. However, we cannot rule out the possibility that reporting of small tsunamis has increased since the occurrence of the 2004 Indian Ocean tsunami, thus generating an artificial rate increase relative to pre-2004 levels.
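A sketch of a boundary-corrected density estimate using gamma kernels in the spirit of Chen (2000); the simple shape rule ρ_b(x) = x/b + 1 is one of the variants discussed by Chen, and the input arrays and scale value are placeholders.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def gamma_kde(x_grid, data, b):
    """Boundary-kernel estimate: f_hat(x) = (1/N) * sum_i Gamma_pdf(X_i; shape=x/b+1, scale=b)."""
    x_grid = np.asarray(x_grid, dtype=float)
    data = np.asarray(data, dtype=float)
    dens = np.empty_like(x_grid)
    for j, x in enumerate(x_grid):
        dens[j] = gamma_dist.pdf(data, a=x / b + 1.0, scale=b).mean()
    return dens

# e.g. density = gamma_kde(grid_years - 1890.0, event_years - 1890.0, b=5.0)
# (times are shifted so the catalogue starts at zero, where the boundary correction matters)
```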

Figure 5

Kernel density estimates of historical tsunami occurrence using the variable gamma kernel of Chen (2000) (red curve) compared to the biweight kernel shown in Fig. 4 (blue curve). (a) Events >1m from 1890 to 2010; (b) Events >0.1m from 1960 to 2010.

Seismicity patterns indicate that the number of events within a cluster increases sharply soon after a triggering event and decays according to a geometric distribution (Kagan 2010). This observation motivated us to try a one-sided exponential kernel. Because the shape of the exponential kernel is constant and is a special case of the gamma distribution, this would also help evaluate the effects of the variable-shape gamma kernel described above. In this case (Fig. 6), the density estimate is rougher still, but the basic structure of tsunami event occurrence is similar to that shown in Fig. 5. Whether or not all of these rate increases are consistent with that of a stationary Poisson process is the subject of the next section.

Figure 6

Kernel density estimates of historical tsunami occurrence using a one-sided exponential kernel (red curve) compared to the biweight kernel shown in Fig. 4 (blue curve). (a) Events >1m from 1890 to 2010; (b) Events >0.1m from 1960 to 2010.

Distribution of tsunami event numbers

Several previous earthquake studies demonstrate that rate changes can be traced to specific triggering events (e.g. Marsan & Nalbant 2005). For the tsunami rate changes described in the previous section, however, dominant triggering events are not as obvious. In this section, we test whether the rate changes can be ascribed to random fluctuations of tsunami event numbers associated with a stationary Poisson process. We also compare the distribution of event numbers with a negative binomial distribution associated with temporal clustering, assuming a large enough magnitude range and extensive time and space windows, as indicated by Kagan (2010).

The Poisson distribution is a discrete distribution that describes the probability that a given number of events (k) occurs in an interval of time, with probability mass function

(8)  P(k) = \frac{\lambda^{k} e^{-\lambda}}{k!},
where λ is the rate or intensity parameter. The mean (μ) and variance (σ²) of the Poisson distribution both equal λ. However, the plot of yearly tsunami event numbers shown in Fig. 7 suggests that the observed number distribution is slightly overdispersed compared to the Poisson distribution (i.e. σ² > μ).

Figure 7

Annual number of global tsunamis: (a) >1m 1890–2010 data set; (b) >0.1m 1960–2010 data set.

A goodness-of-fit test that can be performed relative to the assumption of a Poisson distribution with maximum-likelihood parameter estimation is Pearson's chi-square test (e.g. Rice 2007)

(9)  \chi^{2} = \sum_{i} \frac{\left(O_i - E_i\right)^{2}}{E_i},
where O_i and E_i are the observed and expected distribution values, respectively. For this and subsequent statistical tests, a significance level of p = 0.05 (i.e. a 95 per cent confidence level) is chosen from the outset. For the >1m data set, this test yields a chi-square value of 150.6 with 119 degrees of freedom and an associated p value of 0.027 (H0: Poisson distribution with λmle = 2.3 events yr–1). For the >0.1m data set, the chi-square test yields a value of 57.9 with 50 degrees of freedom and an associated p value of 0.21 (H0: Poisson distribution with λmle = 7.9 events yr–1). Therefore, whereas it is likely that the global tsunami event numbers >0.1m since 1960 follow a Poisson distribution, the larger catalogue of significant tsunamis >1m since 1890 does not appear to follow a Poisson distribution at the 95 per cent confidence level.
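A sketch of the dispersion-type application of eq. (9), assuming the annual counts are available as an integer array; each year is treated as a cell with expected value λ_MLE, and the degrees-of-freedom convention used here (number of years minus one cell constraint minus one fitted parameter) may differ slightly from the binning convention of the published test.

```python
import numpy as np
from scipy.stats import chi2

def poisson_dispersion_test(annual_counts):
    """Pearson chi-square statistic of eq. (9) with each year as a cell and E_i = lambda_MLE."""
    counts = np.asarray(annual_counts, dtype=float)
    lam = counts.mean()                         # maximum-likelihood rate (events/yr)
    stat = np.sum((counts - lam) ** 2 / lam)
    dof = counts.size - 2                       # one cell constraint + one fitted parameter
    return stat, dof, chi2.sf(stat, dof)        # statistic, d.o.f., p-value

# e.g. stat, dof, p = poisson_dispersion_test(annual_counts_gt_1m)   # hypothetical array
```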

An alternative to the Poisson distribution for overdispersed event numbers is the negative binomial distribution. In general, the negative binomial distribution can be thought of in several ways, for example as representing a Poisson cluster process, a process in which the Poisson rate parameter λ follows a gamma distribution, etc. (Rice 2007). Specifically for earthquakes, Kagan (2010) indicates that the negative binomial distribution is an appropriate model for event catalogues that contain clusters if large space and time windows are used and if the magnitude range of the events is large. Kagan (2010) also suggests that the negative binomial distribution is a result of more general branching models of earthquake occurrence that reproduce the G–R relation and Omori′s law (e.g. Ogata 1988; Ogata & Zhuang 2006). As a consequence, the parameters of the negative binomial distribution for event numbers can be directly related to the parameters of the earthquake size distribution (Kagan & Jackson 2000; Kagan 2010).

Kagan (2010) provides a description of the different forms of the negative binomial distribution as well as parameter estimation methods as applied to earthquake event numbers (see also Kagan & Jackson 2000). The form used here is  

(10)  P(k) = \frac{\Gamma(n + k)}{k!\, \Gamma(n)}\; p^{n} \left(1 - p\right)^{k},
where the distribution parameters satisfy n > 0 and 0 ≤ p ≤ 1. The overdispersion parameter used here is α = 1/n, so that the variance is σ² = μ(1 + αμ) and α = 0 recovers the Poisson distribution. Two parameter estimation methods are described by Kagan (2010): the moment method and the maximum-likelihood estimate (MLE). For the former,
(11)  \hat{p} = \frac{m_1}{m_2}, \qquad \hat{n} = \frac{m_1^{2}}{m_2 - m_1},
where m_1 and m_2 are the mean and variance of the observed event distribution. The moment method and the MLE yield similar parameter estimates for the tsunami data (Table 1).
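A sketch of both estimation approaches, assuming annual counts as an integer array; the moment estimators follow eq. (11) (and require m_2 > m_1), while the MLE sketch fixes p so that the fitted mean matches the sample mean and maximizes the likelihood over n. Function and array names are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom

def nb_moment_fit(counts):
    """Moment estimates of eq. (11): p_hat = m1/m2, n_hat = m1^2/(m2 - m1); requires m2 > m1."""
    m1, m2 = np.mean(counts), np.var(counts, ddof=1)
    return m1**2 / (m2 - m1), m1 / m2               # (n_hat, p_hat)

def nb_mle_fit(counts):
    """Maximum-likelihood estimate of n, with p chosen so the fitted mean equals the sample mean."""
    counts = np.asarray(counts)
    m1 = counts.mean()
    def neg_loglik(n):
        p = n / (n + m1)                            # p implied by matching the mean
        return -nbinom.logpmf(counts, n, p).sum()
    n_hat = minimize_scalar(neg_loglik, bounds=(1e-3, 1e3), method='bounded').x
    return n_hat, n_hat / (n_hat + m1)

# e.g. n_mom, p_mom = nb_moment_fit(annual_counts_gt_1m)
```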

Table 1

Estimated parameters for the negative binomial distribution of tsunami event counts.

A likelihood-ratio test is made for the null hypothesis that the overdispersion parameter of the negative binomial distribution is zero (α = 0, equivalent to a Poisson distribution). This version of the likelihood-ratio test accounts for α = 0 being on the boundary of the parameter space (Gutierrez et al. 2001). The p value for the >1m data set is 0.04, whereas the p value for the >0.1m data set is 0.27. Consistent with the results of the chi-square test for the Poisson distribution, this test again indicates that the Poisson model can be rejected for the >1m data set (1890–2010) in favour of the negative binomial model, but the extra distribution parameter associated with the negative binomial distribution is not necessary to explain the >0.1m data set (1960–2010; i.e. the Poisson null hypothesis cannot be rejected).
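A sketch of a boundary-corrected likelihood-ratio test, assuming the commonly used 50:50 mixture of a point mass at zero and χ²₁ when the null value α = 0 lies on the boundary of the parameter space (as discussed by Gutierrez et al. 2001); the count array and the negative-binomial fitting helper from the previous sketch are placeholders.

```python
import numpy as np
from scipy.stats import poisson, nbinom, chi2

def boundary_lr_test(counts, n_hat, p_hat):
    """Likelihood-ratio test of Poisson (alpha = 0) against the negative binomial.
    Because alpha = 0 is on the boundary, the p-value is 0.5 * P(chi2_1 >= LR)."""
    counts = np.asarray(counts)
    lam = counts.mean()
    ll_pois = poisson.logpmf(counts, lam).sum()
    ll_nb = nbinom.logpmf(counts, n_hat, p_hat).sum()
    lr = 2.0 * (ll_nb - ll_pois)
    return lr, 0.5 * chi2.sf(lr, df=1)

# e.g. lr, p_value = boundary_lr_test(annual_counts_gt_1m, *nb_mle_fit(annual_counts_gt_1m))
```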

The different models can also be evaluated with respect to the Akaike information criterion (AIC). The standard form of AIC is given by  

(12)  \mathrm{AIC} = -2 \log L + 2k,
where log L is the log likelihood for a particular model and k is the number of model or distribution parameters. For the >1m data set, the negative binomial distribution has the lower AIC value, indicating that it is the preferred model. The difference in AIC values, however, is small (ΔAIC= 1.1) and is not statistically significant at the 95 per cent level (see the Appendix). For the >0.1m data set, the Poisson distribution yields the lower AIC value (ΔAIC= 1.6).

Interevent distribution

The temporal occurrence of tsunami events can be viewed from a different perspective: the distribution of interevent times. In a previous study, Geist & Parsons (2008) examined the empirical density distribution of interevent times for the entire global catalogue (up to the year 2007). The conclusion from that study was that there were more short interevent times than expected from an exponential distribution associated with a stationary Poisson process. Several statistical models are consistent with the data, including the generalized gamma distribution (Corral 2004b) and a theoretical distribution derived from the ETAS model (Saichev & Sornette 2007).

The global interevent distribution is revisited in this study, using the >1 and >0.1m data sets. Selecting data with at least hourly precision limits the events to seismogenic tsunamis or tsunamis with a seismically triggered landslide component. Interevent times have a much larger range than annual event numbers, so care must be taken when binning the data. Corral (2004a) proposed a procedure in which interevent times are binned according to an exponential function c^n, n = 1, 2, 3, …, where the binning parameter c is chosen to be consistent with the range of the data. The bin counts are divided by the width of each bin and by the total number of interevent times to yield the empirical density distribution shown in Fig. 8 (circles).
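A sketch of this logarithmic binning step, assuming the interevent times are in days; the choice c = 2 and the array names are illustrative.

```python
import numpy as np

def binned_density(interevent_days, c=2.0):
    """Bin interevent times with edges c^n and normalize each count by the bin width
    and the total number of interevent times to obtain an empirical density."""
    tau = np.asarray(interevent_days, dtype=float)
    lo = np.floor(np.log(tau.min()) / np.log(c))
    hi = np.ceil(np.log(tau.max()) / np.log(c))
    edges = c ** np.arange(lo, hi + 1)
    counts, edges = np.histogram(tau, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centres
    return centers, counts / (widths * tau.size)

# e.g. t_mid, density = binned_density(np.diff(event_times_days))
```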

Figure 8

Density distribution of tsunami interevent times. Circles: empirical distribution. Magenta line: exponential distribution. Blue line: gamma distribution. (a) >1m 1890–2010 data set; (b) >0.1m 1960–2010 data set.

Also shown in Fig. 8 are the best-fit exponential model given by the density distribution  

(13)  f(\tau) = \lambda\, e^{-\lambda \tau},
where λ is the rate parameter and the gamma model given by the density distribution  
(14)  f(x) = \frac{1}{\beta^{\gamma}\, \Gamma(\gamma)}\; x^{\gamma - 1} e^{-x/\beta},
where x = λτ. The rate parameter for the exponential distribution is determined by the mean rate for the entire data set, whereas the shape (γ) and scale (β) parameters for the gamma distribution are determined using the MLE method. The best-fit parameters for the >1m data set are 1/β = 166 d and γ = 0.74, whereas for the >0.1m data set 1/β = 45 d and γ = 0.70. Maximum log-likelihood contours along with the 95 per cent confidence estimates for the gamma distribution parameters are shown in Fig. 9. For each data set, the 95 per cent confidence interval for the shape parameter γ lies below one, indicating temporal clustering either from aftershock sequences or from longer-term correlations (Corral 2004a; Hainzl et al. 2006).
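A sketch of the two fits, using scipy's gamma parameterization (shape γ, scale in days) with the location fixed at zero; the interevent-time array name is a placeholder, and the resulting shape and scale values are only illustrative of the procedure, not of the catalogue values quoted above.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist, expon

tau = np.asarray(interevent_days_gt_1m, dtype=float)    # hypothetical interevent times (days)

# Exponential (Poisson) model: rate fixed by the mean interevent time.
lam = 1.0 / tau.mean()

# Gamma model: maximum-likelihood fit with the location parameter pinned at zero.
shape_mle, loc, scale_mle = gamma_dist.fit(tau, floc=0.0)

ll_exp = expon.logpdf(tau, scale=1.0 / lam).sum()
ll_gam = gamma_dist.logpdf(tau, shape_mle, loc=0.0, scale=scale_mle).sum()
print(shape_mle, scale_mle, 2 * (ll_gam - ll_exp))       # gamma shape < 1 indicates clustering
```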

Figure 9

Maximum-likelihood contours and 95 per cent confidence intervals for gamma distribution parameters: (a) >1m 1890–2010 subcatalogue; (b) >0.1m 1960–2010 subcatalogue.

Using the maximum log likelihood for both the >1 and >0.1m data sets, the AIC is significantly smaller for the gamma model compared to the exponential model, suggesting a better goodness-of-fit (Ogata & Zhuang 2006). In addition, the difference in AIC values between the models is statistically significant for each data set (see the Appendix). Thus, as previously indicated by Geist & Parsons (2008) using several other interevent distribution models, there are more short interevent times than expected from the exponential distribution for both data sets.

For the >0.1m data set, this appears to be in conflict with the results of the previous section in which a Poisson distribution cannot be rejected for the distribution of event numbers. This may be explained by the larger range of observed interevent times compared to annual event numbers, and the fact that most interevent times occur near the mean interevent time, where there is little difference between the exponential and gamma distributions. Another, perhaps more plausible, explanation is that because the characteristic timescale of clustering is much less than 1 yr and because clusters are infrequent, the effects of clustering would be filtered or smoothed out in the annual count data described in Section 4.

The interevent distribution shown in Fig. 8 can be used to identify anomalous temporal and spatial clusters of events. Examination of the short-term rate changes in the 1990s (>1m data) indicates several pairs of events with short interevent times in the range that deviates from the exponential distribution (Fig. 8a): for example, two earthquakes along the Java subduction zone 1.1 d apart in 1994, two along the Ryukyu subduction zone 0.67 d apart in 1995 and two along the Kamchatka subduction zone 8.7 d apart in 1997. However, there do not appear to be classical aftershock sequences (i.e. several events or more) in the tsunami catalogue. Clusters are temporally identified as having interevent times in the range that deviates from an exponential distribution: less than 10 d for the >1m subcatalogue and less than 1 d for the >0.1m subcatalogue (Fig. 8, comparing binned data with the magenta curve). An additional criterion for defining clusters is that the events are in the same geographic region as specified in the NGDC database. Clusters identified in this manner typically consist of only two to three events for the >1m subcatalogue and two to four events for the >0.1m subcatalogue. We therefore term them 'mini-clusters' to distinguish them from earthquake event clusters and foreshock–main shock–aftershock sequences that consist of many more events. Interevent triggering most likely occurs over longer durations than considered by this simple, but restrictive, criterion (i.e. the 10 and 1 d maximum limits). For example, Parsons (2002) indicates that for large (M ≥ 7) earthquakes worldwide, there is an Omori-law decay that lasts between ∼7 and 11 yr after a main shock. However, cluster-identification methods for tsunamigenic events over this duration have yet to be developed and tested and are therefore beyond the scope of this paper.
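A sketch of the mini-cluster criterion described above, assuming each event is represented by an origin time (in days) and an NGDC region label; the data structure, function name and gap threshold are illustrative.

```python
import numpy as np

def find_mini_clusters(times_days, regions, max_gap_days=10.0):
    """Group consecutive events whose interevent time is below max_gap_days *and*
    whose region label matches; return clusters of two or more events."""
    order = np.argsort(times_days)
    t, r = np.asarray(times_days)[order], np.asarray(regions)[order]
    clusters, current = [], [0]
    for i in range(1, len(t)):
        if (t[i] - t[i - 1] <= max_gap_days) and (r[i] == r[i - 1]):
            current.append(i)
        else:
            if len(current) > 1:
                clusters.append(current)
            current = [i]
    if len(current) > 1:
        clusters.append(current)
    return clusters   # lists of indices into the time-sorted catalogue

# e.g. find_mini_clusters(event_days, ngdc_region_codes, max_gap_days=1.0)  # >0.1 m criterion
```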

To determine the relationship between mini-clusters and historic rate changes in tsunami occurrence, we plot in Fig. 10 the timing of the mini-clusters on the biweight kernel density estimate shown in Fig. 4. Each arrow in Fig. 10 represents a mini-cluster. Several mini-clusters of events occur close in time during the 1990s rate increase. It should be noted, however, that most mini-clusters that occur close in time are geographically distant from one another, both in the 1990s rate increase (Figs 10a and b) and in the 1960–1970s rate increase (Fig. 10a), and therefore do not have an apparent physical triggering mechanism.

Figure 10

Historic distribution of mini-clusters (arrows) of events plotted on the biweight kernel density estimate from Fig. 4. Labels indicate location and number of events for each mini-cluster. (a) >1m 1890–2010 subcatalogue; (b) >0.1m 1960–2010 subcatalogue.

Historical changes in proportion of tsunamigenic earthquakes

We examine the effects of historical changes in the proportion of earthquakes that generate observable tsunamis to determine whether these changes are related to the temporal clustering of tsunami events described in the previous section. Tsunami sources can be considered a subset of all earthquakes that meet certain criteria: (1) a minimum magnitude threshold (approximately M = 6.5–7); (2) a primarily dip-slip mechanism and (3) a focal depth shallow enough to cause significant seafloor displacement. Under these criteria, the statistics of tsunamigenic earthquakes might share similar characteristics with those of the entire global earthquake catalogue. There is an even greater chance that tsunamigenic earthquakes share similar statistical characteristics with the shallow subduction-zone earthquakes identified by Bird & Kagan (2004) and Kagan et al. (2010). In the latter study, subduction-zone earthquakes do not appear anomalous in terms of their clustering parameters in comparison to the global catalogue. Complications with the statistics of tsunamigenic events in general, however, arise for events that are not generated by earthquakes or are triggered by earthquakes only indirectly. These include non-seismically triggered landslides, landslides triggered by small-magnitude earthquakes and volcanogenic processes.

Shown in Fig. 11(a) is the annual count of earthquakes (M ≥ 7) and tsunamis (≥1m) from 1940 to 2010. The centennial earthquake catalogue (Engdahl et al. 1998; Engdahl & Villaseñor 2002) was used to determine the earthquake annual counts from 1900 to 2001; from 2002 to 2010, the Advanced National Seismic System catalogue was used. The proportion of earthquakes that generated tsunamis is very roughly constant during this time period (Fig. 11b; mean = 0.16), although certain years such as 1994 are clearly anomalous. In Fig. 11(c), the global distribution of both earthquakes (right) and tsunamis (left) is compared for the years 1994 (anomalous) and 1995 (red circles). The great majority of M ≥ 7 earthquakes in 1994 occurred along oceanic subduction zones, whereas in the following year, M ≥ 7 earthquakes were distributed among oceanic and continental fault zones. Other tsunamigenic factors such as focal depth and mechanism do not appear to have been anomalous in 1994. It is possible, therefore, that the random geographic distribution of global earthquakes contributes to local rate changes in tsunami events, in addition to the temporal grouping of tsunami source clusters.

Figure 11

(a) Annual number of global tsunamis (≥1m) and earthquakes (M≥ 7) from 1900 to 2010; (b) proportion of tsunamis (≥1m) to earthquakes (M≥ 7); (c) geographic distribution of tsunamis and earthquakes 1940–2010 (yellow circles). Radius of circle is proportional to size. Red circles represent events occurring in 1994 and 1995, as indicated.

Discussion

This study demonstrates that tsunami numbers (resulting in >1m run-up) and interevent times (both >1 and >0.1m) deviate from those expected for a Poisson process, suggesting that interevent triggering has an effect on tsunami source occurrence. Parsons (2002) indicates that for large earthquakes (Ms ≥ 7), triggered earthquakes occurred at rates higher than background in regions of increased shear stress up to 240 km away from, and ∼7–10 yr after, the main shock. The most likely mechanism for triggering large earthquakes of tsunamigenic magnitude is the static stress change after a previous earthquake, rather than dynamic stress changes resulting from the passage of seismic waves (Parsons & Velasco 2011). The distribution of earthquake event numbers investigated by Kagan (2010) indicates that for large space and time windows and for a large enough magnitude range, the negative binomial distribution best describes the data. Such a distribution results from clusters of events dependent on a triggering event contained within the data. For the case of tsunamis, the clusters include not only aftershocks, but also landslides in regions destabilized by a previous event, which can generate tsunamis from earthquakes with much smaller magnitudes than the conventional magnitude threshold for tsunamigenesis.

The prominent tsunami rate change in the mid-1990s (Fig. 4) appears to have two primary causes. Temporal grouping of tsunami mini-clusters, defined as having anomalously short interevent times compared to an exponential distribution and being within close geographic proximity to one another, occurs most notably in the mid-1990s. Even though the groups of mini-clusters appear to be independent, the fact that several of these mini-clusters occurred close in time greatly contributed to an overall global rate increase. In addition, in some years (e.g. 1994), the variable geographic occurrence of earthquakes is skewed toward oceanic fault zones, resulting in a greater proportion of tsunamigenic earthquakes (Fig. 11).

With regard to the temporal grouping of mini-clusters, it is possible to view the tsunami occurrence data as a Poisson cluster process. In the specific case of a Hawkes self-exciting process (Hawkes 1971), immigrant events are distributed according to a stationary Poisson process and each immigrant generates a finite cluster according to a fertility rate (Kagan & Knopoff 1987; Ogata 1998; Bordenave & Torrisi 2007; Kagan & Jackson 2011). For tsunami events, the fertility rate is much lower compared to analogous processes in seismicity (Geist & Parsons 2008). Even with the restrictive definition of mini-clusters above, treating each mini-cluster as being associated with a single immigrant, the event number statistics are likely to be close to a Poisson distribution (cf., Gardner & Knopoff 1974). Confirmation of this, however, awaits application of declustering methods to tsunami source data, although the results provided in this study suggest that the tsunami catalogue may well be considered as a Poisson cluster process, with only the infrequent occurrence of mini-clusters. However, when mini-clusters occasionally occur close in time as would generally be expected from a Poisson process, the apparent rate of tsunami occurrence can increase significantly.

Conclusions

The global occurrence of tsunamis deviates from a Poisson process, as evidenced by the distribution of event numbers for tsunami sizes >1m and by the interevent distribution of tsunami sources. As with seismicity models, the negative binomial distribution appears to best fit annual tsunami counts because there are periods with more tsunamis than expected from a Poisson process (for which σ² = μ). The most prominent period of high tsunami rates occurred in the mid-1990s. Kernel density estimates of the historic tsunami catalogue indicate a robust and prominent increase in the number of tsunamis during this time that is likely not related to increased detection capability. Using gamma kernels, the onset of the 1990s rate change appears to be abrupt. However, there does not appear to be a protracted sequence of triggered events during this time period, as one might expect from seismicity analogues. Rather, the cause of the mid-1990s rate increase is the apparently random grouping of geographically distinct tsunami source 'mini-clusters', in addition to a preferential location of earthquakes of tsunamigenic magnitude along offshore faults (e.g. in 1994). Further research into the conditions under which mini-clusters may occur, such as triggering between interplate thrust and outer-rise fault systems, may lead to the development of short-term tsunami forecasts.

Acknowledgements

The authors gratefully acknowledge constructive reviews of the manuscript by Jason Chaytor, Alex Apotsos and two anonymous journal reviewers. We particularly acknowledge additional insights on the statistical results provided by one of the journal reviewers. Discussions with Paula Dunbar and Heather McCullough at the National Geophysical Data Center (NGDC) were helpful in better understanding the tsunami event catalogue.

References

Ammon, C.J., Kanamori, H. & Lay, T., 2008. A great earthquake doublet and seismic stress transfer cycle in the central Kuril Islands, Nature, 451, 561–565.

Bird, P. & Kagan, Y.Y., 2004. Plate-tectonic analysis of shallow seismicity: apparent boundary width, beta-value, corner magnitude, coupled lithosphere thickness, and coupling in 7 tectonic settings, Bull. seism. Soc. Am., 94, 2380–2399.

Bordenave, C. & Torrisi, G.L., 2007. Large deviations of Poisson cluster processes, Stoch. Models, 23, 593–625.

Burnham, K.P. & Anderson, D.R., 2010. Model Selection and Multimodel Inference, 2nd edn, Springer, New York, NY, 488 pp.

Cahill, A.T., 2003. Significance of AIC differences for precipitation intensity distributions, Adv. Water Resour., 26, 457–464.

Chen, S.X., 2000. Probability density function estimation using gamma kernels, Ann. Inst. Stat. Math., 52, 471–480.

Christensen, D.H. & Ruff, L.J., 1988. Seismic coupling and outer-rise earthquakes, J. geophys. Res., 93, 13 421–13 444.

Console, R., Murru, M. & Lombardi, A.M., 2003. Refining earthquake clustering models, J. geophys. Res., 108, doi:10.1029/2002JB002130.

Corral, A., 2004a. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92, 108501.

Corral, A., 2004b. Universal local versus unified global scaling laws in the statistics of seismicity, Physica A, 340, 590–597.

Dengler, L., Uslu, B., Barberopoulou, A., Yim, S.C. & Kelly, A., 2009. The November 15, 2006 Kuril Islands-generated tsunami in Crescent City, California, Pure appl. Geophys., 166, 37–53.

Dmowska, R. & Lovison, L.C., 1992. Influence of asperities along subduction interfaces on the stressing and seismicity of adjacent areas, Tectonophysics, 211, 23–43.

Dmowska, R., Rice, J.R., Lovison, L.C. & Josell, D., 1988. Stress transfer and seismic phenomena in coupled subduction zones during the earthquake cycle, J. geophys. Res., 93, 7869–7884.

Dunbar, P., Stroker, K. & McCullough, H., 2010. Do the 2010 Haiti and Chile earthquakes and tsunamis indicate increasing trends?, Geomatics, Nat. Hazards and Risk, 1, 95–114.

Dunbar, P.K., et al., 2009. Long-term tsunami data archive supports tsunami forecast, warning, research, and mitigation, Pure appl. Geophys., 165, 2275–2291.

Engdahl, E.R., Van der Hilst, R.D. & Buland, R.P., 1998. Global teleseismic earthquake relocation with improved travel times and procedures for depth determination, Bull. seism. Soc. Am., 88, 722–743.

Engdahl, E.R. & Villaseñor, A., 2002. Global seismicity: 1900–1999, in International Handbook of Earthquake and Engineering Seismology, Part A, pp. 665–690, eds Lee, W.H.K., Kanamori, H., Jennings, P.C. & Kisslinger, C., Academic Press, San Diego, CA.

Fujii, Y. & Satake, K., 2008. Tsunami sources of the November 2006 and January 2007 Great Kuril earthquakes, Bull. seism. Soc. Am., 98, 1559–1571.

Gardner, J.K. & Knopoff, L., 1974. Is the sequence of earthquakes in southern California, with aftershocks removed, Poissonian?, Bull. seism. Soc. Am., 64, 1363–1367.

Geist, E.L. & Parsons, T., 2006. Probabilistic analysis of tsunami hazards, Nat. Hazards, 37, 277–314.

Geist, E.L. & Parsons, T., 2008. Distribution of tsunami inter-event times, Geophys. Res. Lett., 35, L02612, doi:10.1029/2007GL032690.

González, F.I., et al., 2009. Probabilistic tsunami hazard assessment at Seaside, Oregon for near- and far-field seismic sources, J. geophys. Res., 114, doi:10.1029/2008JC005132.

Gusiakov, V.K., 2001. Basic Pacific tsunami catalog and database, 47 BC–2000 AD: results of the first stage of the project, in International Tsunami Symposium, pp. 263–272, NOAA, Seattle, WA.

Gusiakov, V.K., 2009. Tsunami history: recorded, in The Sea, pp. 23–53, eds Bernard, E.N. & Robinson, A.R., Harvard University Press, Cambridge, MA.

Gutierrez, R.G., Carter, S. & Drukker, D.M., 2001. On boundary-value likelihood-ratio tests, pp. 15–18, Stata Press, College Station, TX.

Hainzl, S., Scherbaum, F. & Beauval, C., 2006. Estimating background activity based on interevent-time distribution, Bull. seism. Soc. Am., 96, 313–320.

Hawkes, A.G., 1971. Spectra of some self-exciting and mutually exciting point processes, Biometrika, 58, 83–90.

Horrillo, J., Knight, W. & Kowalik, Z., 2008. Kuril Islands tsunami of November 2006: 2. Impact at Crescent City by local enhancement, J. geophys. Res., 113, doi:10.1029/2007JC004404.

Kagan, Y.Y., 2010. Statistical distributions of earthquake numbers: consequence of branching process, Geophys. J. Int., 180, 1313–1328.

Kagan, Y.Y., Bird, P. & Jackson, D.D., 2010. Earthquake patterns in diverse tectonic zones of the globe, Pure appl. Geophys., 167, 721–741.

Kagan, Y.Y. & Jackson, D.D., 2000. Probabilistic forecasting of earthquakes, Geophys. J. Int., 143, 438–453.

Kagan, Y.Y. & Jackson, D.D., 2011. Global earthquake forecasts, Geophys. J. Int., 184, 759–776.

Kagan, Y.Y. & Knopoff, L., 1987. Statistical short-term earthquake prediction, Science, 236, 1563–1567.

Kowalik, Z., Horrillo, J., Knight, W. & Logan, T., 2008. Kuril Islands tsunami of November 2006: 1. Impact at Crescent City by distant scattering, J. geophys. Res., 113, doi:10.1029/2007JC004402.

Lay, T., Ammon, C.J., Kanamori, H., Rivera, L., Koper, K.D. & Hutko, A.R., 2010. The 2009 Samoa-Tonga great earthquake triggered doublet, Nature, 466, 964–968.

Lay, T., Kanamori, H., Ammon, C.J., Hutko, A.R., Furlong, K. & Rivera, L., 2009. The 2006–2007 Kuril Islands great earthquake sequence, J. geophys. Res., 114, doi:10.1029/2008JB006280.

Lin, J. & Stein, R.S., 2004. Stress triggering in thrust and subduction earthquakes and stress interaction between the southern San Andreas and nearby thrust and strike-slip faults, J. geophys. Res., 109, doi:10.1029/2003JB002607.

MacInnes, B.T., Pinegina, T.K., Bourgeois, J., Razhegaeva, N.G., Kaistrenko, V.M. & Kravchenovskaya, E.A., 2009. Field survey and geological effects of the 15 November 2006 Kuril tsunami in the middle Kuril Islands, Pure appl. Geophys., 166, 9–36.

Marsan, D. & Nalbant, S.S., 2005. Methods for measuring seismicity rate changes: a review and a study of how the Mw 7.3 Landers earthquake affected the aftershock sequence of the Mw 6.1 Joshua Tree earthquake, Pure appl. Geophys., 162, 1151–1185.

Marzocchi, W. & Lombardi, A.M., 2008. A double branching model for earthquake occurrence, J. geophys. Res., 113, doi:10.1029/2007JB005472.

Matthews, M.V. & Reasenberg, P.A., 1988. Statistical methods for investigating quiescence and other temporal seismicity patterns, Pure appl. Geophys., 126, 357–372.

Mofjeld, H.O., 2009. Tsunami measurements, in The Sea, pp. 201–235, eds Bernard, E.N. & Robinson, A.R., Harvard University Press, Cambridge, MA.

Ogata, Y., 1988. Statistical models for earthquake occurrences and residual analysis for point processes, J. Am. Stat. Assoc., 83, 9–27.

Ogata, Y., 1998. Space-time point-process models for earthquake occurrences, Ann. Inst. Stat. Math., 50, 379–402.

Ogata, Y. & Abe, K., 1991. Some statistical features of the long-term variation of the global and regional seismic activity, Int. Stat. Rev., 59, 139–161.

Ogata, Y. & Toda, S., 2010. Bridging great earthquake doublets through silent slip: on- and off-fault aftershocks of the 2006 Kuril Island subduction earthquake toggled by a slow slip on the outer rise normal fault of the 2007 great earthquake, J. geophys. Res., 115, doi:10.1029/2009JB006777.

Ogata, Y. & Zhuang, J., 2006. Space-time ETAS models and an improved extension, Tectonophysics, 413, 13–23.

Parsons, T., 2002. Global Omori law decay of triggered earthquakes: large aftershocks outside the classical aftershock zone, J. geophys. Res., 107, 2199, doi:10.1029/2001JB000646.

Parsons, T. & Velasco, A.A., 2011. Absence of remotely triggered large earthquakes beyond the main shock region, Nature Geosci., 4, 312–316.

Raeesi, M. & Atakan, K., 2009. On the deformation cycle of a strongly coupled plate interface: the triple earthquakes of 16 March 1963, 15 November 2006, and 13 January 2007 along the Kurile subduction zone, J. geophys. Res., 114, doi:10.1029/2008JB006184.

Rice, J.A., 2007. Mathematical Statistics and Data Analysis, 3rd edn, Brooks/Cole, Cengage Learning, Belmont, CA, 672 pp.

Saichev, A. & Sornette, D., 2007. Theory of earthquake recurrence times, J. geophys. Res., 112, doi:10.1029/2006JB004536.

Selva, J. & Marzocchi, W., 2005. Variations of southern California seismicity: empirical evidence and possible physical causes, J. geophys. Res., 110, doi:10.1029/2004JB003494.

Silverman, B.W., 1998. Density Estimation for Statistics and Data Analysis, Chapman and Hall/CRC, Boca Raton, FL, 175 pp.

Wand, M.P. & Jones, M.C., 1995. Kernel Smoothing, Chapman and Hall/CRC, Boca Raton, FL, 212 pp.

Wang, Q., Jackson, D.D. & Zhuang, J., 2010. Are spontaneous earthquakes stationary in California?, J. geophys. Res., 115, doi:10.1029/2009JB007031.

Wiemer, S. & Wyss, M., 2000. Minimum magnitude of completeness in earthquake catalogs: examples from Alaska, the western United States, and Japan, Bull. seism. Soc. Am., 90, 859–869.

Appendix: Significance of AIC differences

In this appendix, we describe the technique used to establish whether Akaike information criterion (AIC) values are significantly different for nested distributions. A distribution is nested within a parent distribution if it can be obtained by fixing one of the parameters of the parent distribution (e.g. setting it equal to 0 or 1). For example, the exponential distribution

(A1)  f(\tau) = \lambda\, e^{-\lambda \tau}
is nested within the gamma distribution  
(A2)  f(\tau) = \frac{\lambda}{\Gamma(\gamma)} \left(\lambda \tau\right)^{\gamma - 1} e^{-\lambda \tau},
because eq. (A1) can be recovered by setting the shape parameter (γ) of the gamma distribution equal to one (γ = 1).

The likelihood-ratio statistic for nested distributions is given by  

(A3)  D = 2\left[L(\theta) - L(\tilde{\theta})\right],
where L is the log likelihood and \tilde{\theta} and θ are the distribution parameters for the nested and parent distributions, respectively (Cahill 2003). The likelihood-ratio statistic is approximately distributed as a χ² distribution with j degrees of freedom, where j equals the difference in the number of distribution parameters between the parent and nested distributions. Cahill (2003) discusses the assumptions and conditions under which the statistic in eq. (A3) can be approximated by the χ² distribution. The likelihood-ratio statistic (eq. A3) is distinguished from the likelihood-ratio hypothesis test; the former is used here to determine whether differences among AICs are significant.

The statistic in eq. (A3) can be written in terms of the AICs of the nested and parent distributions, that is,

(A4)  \mathrm{AIC}_{\mathrm{nested}} = -2 L(\tilde{\theta}) + 2i, \qquad \mathrm{AIC}_{\mathrm{parent}} = -2 L(\theta) + 2(i + j),
where i is the number of parameters of the nested distribution.

Substituting into eq. (A3) one obtains (Cahill 2003; Burnham & Anderson 2010)  

(A5)  \mathrm{AIC}_{\mathrm{nested}} - \mathrm{AIC}_{\mathrm{parent}} + 2j \;\sim\; \chi^{2}_{j},
where the ∼ indicates that the left-hand side has the distribution of the right-hand side. The two AICs are different at a significance level α if the left-hand side of eq. (A5) is greater than χ²_j(α). For the exponential and gamma distributions used in this study, i = 1 and j = 1, such that
(A6)  \mathrm{AIC}_{\mathrm{exp}} - \mathrm{AIC}_{\mathrm{gamma}} + 2 \;\sim\; \chi^{2}_{1};
for a 95 per cent significance level, χ²_1(0.95) = 3.84.

AIC values for the exponential and gamma distributions are tested to see whether they are significantly different for the interevent time data (Section 5). For the >1.0m tsunami run-up data, the AIC for the exponential distribution is 3498, whereas the AIC for the gamma distribution is 3481. Using these values in eq. (A6), 19 > χ²_1(0.95), indicating that the gamma distribution is the better model and that the AICs are statistically different at the 95 per cent significance level. For the >0.1m data, the AIC for the exponential distribution is 3997, whereas the AIC for the gamma distribution is 3958. Using these values, 41 > χ²_1(0.95), which also indicates that the gamma distribution is the better model and that the AICs are statistically different at the 95 per cent significance level.
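A minimal sketch of the test in eqs (A5) and (A6), applied to the AIC values quoted above (the function name is illustrative):

```python
from scipy.stats import chi2

def aic_difference_significant(aic_nested, aic_parent, j=1, alpha=0.05):
    """Nested-model test: AIC_nested - AIC_parent + 2j ~ chi2_j under the nested model."""
    stat = aic_nested - aic_parent + 2 * j
    return stat, stat > chi2.ppf(1.0 - alpha, j)

# >1 m interevent times: exponential (nested) vs gamma (parent)
print(aic_difference_significant(3498, 3481))   # (19, True): gamma significantly better
```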

Inasmuch as the Poisson distribution can be considered nested within the negative binomial distribution as the overdispersion parameter α → 0 (cf. eq. 10), we can similarly evaluate the AICs from the different models for the annual count data (Section 4). For the >1.0m data, the AIC for the Poisson distribution is 451.8, whereas the AIC for the negative binomial distribution is 450.7. Using these values, 3.1 < χ²_1(0.95), indicating that whereas the negative binomial model is the better model based solely on the AIC, the AICs are not different at the 95 per cent significance level. For the >0.1m data, the Poisson model is the better model based on comparison of AICs.