Potential earthquake triggering in a complex fault network: the northern South Island, New Zealand

SUMMARY A synthetic seismicity computer model of multiple, interacting faults has been used to investigate potential earthquake triggering in the Alpine–Marlborough–Buller fault network, northern South Island, New Zealand. Of particular interest is whether large, characteristic earthquakes on the central 300-km-long segment of the Alpine fault (average recurrence time of 240 yr) might trigger events to the north and northeast, possible temporal clustering of other large events and stress shadows. The synthetic seismicity model generates long, homogeneous catalogues that sample a wide range of possible stress interactions and stress states, allowing statistical analysis. The catalogues used here include ∼1500 central Alpine fault events of M ≥ 7.8 and approximately 2 000 000 other events of magnitudes down to 4.8. These other events occur on four additional segments of the Alpine fault, on 30 segments of the major Marlborough and Buller faults and on 4650 randomly distributed smaller faults. The synthetic seismicity model is of the quasi-static type, governed by the Coulomb failure criterion, with extensions to better account for rupture propagation and fault healing. True rate and state friction and viscoelastic relaxation are not yet included. Mechanical properties are adjusted so that most events rupture an entire fault segment, i.e. the faults behave characteristically. The regional b-value of ∼1 thus arises mainly from the fault size distribution. The driving mechanism and fault properties are iteratively adjusted so that the resulting long-term slip rates, single-event displacements and recurrence times match the observed, real-world values. The major results are as follows.

Earthquake clustering is often explained as the result of static stress changes (Harris 1998; Stein 1999; King & Cocco 2001; Lin & Stein 2004). In this process an earthquake on one fault induces changes in stress on other faults that can bring forward the occurrence of another event well away from the immediate aftershock zone. Such stress changes are often modelled in terms of static, coseismic changes in Coulomb failure stress (CFS), calculated from dislocation theory (Okada 1992; King et al. 1994). However, earthquakes also produce dynamic stress changes associated with the passage of seismic waves that may also be important (Gomberg et al. 1998; Kilb et al. 2000; Du et al. 2003). Because the induced stress changes, not too close to the rupture, are only a small fraction of the stress drop in an earthquake, clustering on a short timescale (a few years or less) seems to require that the triggered fault be already near failure. This is in accord with the idea that the crust is always in a near-critical state (Bak & Tang 1989). Deng & Sykes (1997a,b) showed that the epicentres of small to large magnitude events in southern California since the great earthquake of 1812 are correlated with positive static changes in CFS as a result of past events and tectonic loading. Harris & Simpson (1996, 1998), however, have emphasized the effect of decreases in CFS causing stress shadows. Induced changes in CFS occur essentially instantaneously. Time-dependent friction, static fatigue or pore pressure readjustments can be used to explain why aftershocks do not occur all at once (Dieterich 1994; Bosl & Nur 2002; Price & Burgmann 2002; Scholz 2002; Shapiro et al. 2003).

Figure 1. Regional setting of the study region, in the convergence zone of the Pacific and Australian plates. The red box shows the area in Fig. 2. The northern South Island forms a transition region between subduction in the north and continental collision to the south.
On longer timescales, or at greater distances, viscoelastic readjustments (Chery et al. 2001;Zeng 2001;Pollitz & Sacks 2002;Pollitz et al. 2003) may be important in triggering other main shocks outside the traditional aftershock zone. While viscous relaxation tends to amplify the static stress changes (positive or negative) away from the event, it reloads the main shock fault plane.
Given a potential triggering event and its induced stresses, methods have been developed to estimate the change in probability of an earthquake elsewhere (Toda et al. 1998; Parsons et al. 2000). However, in complex fault networks the stress interactions over time can be very intricate and, without knowledge of the pre-existing stress state, these estimates must have large uncertainties. One way around this problem is to use computer synthetic seismicity models (Rundle & Kanamori 1987; Ward 1992; Ward & Goes 1993; Robinson & Benites 1995; Ward 2000; Fitzenz & Miller 2001; Robinson & Benites 2001) to generate long, homogeneous catalogues of events that can be investigated statistically to average over many modes of interaction and stress distributions. Of course, the more realistic the models, the more faith can be put in their predictions.
New Zealand is a region of plate convergence (Fig. 1), with subduction of the Pacific Plate in the North Island and of the Australian Plate in the far south of the South Island (Walcott 1978). Between these subduction zones lies the Alpine fault, with the Marlborough and Buller fault networks in a transition region in the northern South Island (Fig. 2). The Alpine fault is a major ∼500-km-long, transpressive fault with a horizontal slip rate of up to 0.03 m yr−1 and a vertical rate of approximately a quarter of that (Norris & Cooper 2000). Palaeoseismic studies indicate that major earthquakes (maximum displacements of approximately 6 m) rupturing at least the central segment have a variable recurrence time (Wells et al. 1999; Rhoades & Van Dissen 2003); the last four major events occurred in ∼AD 1717, 1620, 1425 and 1220, judging mainly from tree ring data. Geodetic studies show that strain is accumulating at a rate consistent with these palaeoseismic results (Beavan et al. 1999). Major earthquakes have also occurred on faults in the Marlborough network that splay off the Alpine fault to the northeast and on less active faults in the Buller area to the north (Cowan 1990; Van Dissen & Yeats 1991; Yang 1992; Langridge et al. 2003a,b). In historic times (since approximately 1840) such events include those of magnitude 7.5 in 1848 on the Awatere fault (Grapes et al. 1998), magnitude 7.2 in 1888 on the central Hope fault (Cowan 1991; Downes, private communication, 2003), magnitude 7.0 in 1929 on the Poulter fault (Berryman, private communication, 2003), magnitude 7.7 also in 1929 on the White Creek fault (Doser et al. 1999) and finally magnitude 7.2 in 1968 on a fault southeast of Westport. Doser & Robinson (2002) and Hincapie et al. (2004) have studied some of these historical earthquakes and found their occurrence consistent, for the most part, with static stress clock advance as a result of previous events combined with tectonic loading.

© 2004 RAS, GJI, 159, 734–748
In this study I use a synthetic seismicity model to simulate the seismicity in the Alpine-Marlborough-Buller fault network. Of particular interest are the questions of large central-Alpine fault events propagating along strike to the northeast past the junctions with the Marlborough faults, of such events initiating slip on the other Marlborough and Buller faults and of longer term clustering or stress shadows in the region as a whole.

THE SYNTHETIC SEISMICITY MODEL
The synthetic seismicity computer program (Robinson & Benites 1995, 2001) is intended to produce catalogues of earthquakes occurring on a network of interacting faults, without the problems of incompleteness, magnitude inhomogeneity and short duration that plague real catalogues. The model is of the discrete, quasi-static type (inertia is neglected) and is similar to those of Ben-Zion & Rice (1993), Ben-Zion (1996) and Fitzenz & Miller (2001), except that rupture propagation is more realistic and many faults of any orientation and slip direction can be included. All faults in the model are subdivided into smaller cells and each cell is assigned coefficients of static and dynamic friction and a stress drop. When any cell fails, as a result of either the tectonic loading or stresses induced by other cells failing, the stresses induced by that failure are applied to all other cells on all faults after a delay determined by the distance and the shear wave velocity. These induced stresses are calculated using the methods of Okada (1992) for an elastic half-space. If the stress on any of these other unbroken cells is sufficient for failure, then they too fail as part of the same event. Cells can slip more than once, in which case the dynamic coefficient of friction applies. Slip can be in any direction in the plane of the cell, depending on the direction of maximum shear stress. An event is over when all cells are stable. In this model, a large earthquake is regarded as being composed of many subevents on the individual cells. Event magnitudes (Mw) are computed from the moment as determined by the individual cell sizes, amounts of slip and a rigidity of 4.0 × 10^10 N m^−2. Using this synthetic seismicity model, it is possible to generate large earthquakes that reproduce a wide range of event properties as observed in the real world (Somerville et al. 1999; Robinson & Benites 2001).
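The moment bookkeeping described above can be sketched as follows. This is illustrative code, not from the paper; only the rigidity value and the idea of summing cell subevent moments come from the text, and the standard moment-to-Mw conversion is assumed.

```python
import math

def moment_magnitude(cell_areas_m2, slips_m, rigidity=4.0e10):
    """Sum the seismic moment M0 = rigidity * area * slip over all failed
    cells of an event, then convert to moment magnitude Mw using the
    standard relation Mw = (2/3)(log10 M0 - 9.05), with M0 in N m."""
    m0 = sum(rigidity * a * s for a, s in zip(cell_areas_m2, slips_m))
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)
```

For example, a single 5 km × 5 km cell slipping 1 m would give Mw ≈ 6.0 with these values.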
Cell failure is governed by the Coulomb failure criterion, so that failure occurs when

τ_s(t) ≥ µ [σ(t) − P(t)],

where τ_s is the shear stress, µ is the dry static or dynamic coefficient of friction (as appropriate), σ is the normal stress, P is the pore pressure and t is time. Induced changes in pore pressure are calculated from

δP = (β/3) Σ_{i=1,3} δσ_ii,

where δσ_ii are the induced normal-stress changes, i runs from 1 to 3 and β is Skempton's coefficient, taken here as 0.5. Note that this differs from the common use of an apparent coefficient of friction, lower than the dry value, without specific reference to pore pressure. Beeler et al. (2000) have shown that the distinction can be important, which argues for the approach used here. Simple static/dynamic friction is modified to include instant healing back to the static value shortly after the rupture front passes by (Heaton 1990) and to approximate some crack tip stress enhancements as a result of dynamic rupture (Robinson & Benites 2001). Pore pressure deviations from hydrostatic (positive or negative) are allowed to decay exponentially with a time constant of 1 yr. Similarly, extreme variations in fault normal stress that may sometimes arise are allowed to decay with a time constant of 10 yr. In the real world, slip on small faults with different orientations keeps the normal stress within bounds, and the decay is just a simple way to account for this efficiently. Time passes at two rates. Immediately after an event has occurred it is possible to calculate the time needed before any new failure and this is used as the next time step. During an event, however, much smaller steps are used, governed by the cell size and stress propagation speed.
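A minimal sketch of the failure test and the Skempton pore-pressure response, assuming compression-positive stresses. The function and variable names are illustrative, not from the paper; only the criterion, the trace-over-three relation and β = 0.5 follow the text.

```python
def pore_pressure_change(d_sigma_xx, d_sigma_yy, d_sigma_zz, beta=0.5):
    """Skempton relation: induced pore-pressure change equals beta times
    the mean of the induced normal-stress changes (trace / 3)."""
    return beta * (d_sigma_xx + d_sigma_yy + d_sigma_zz) / 3.0

def fails(tau_s, mu, sigma_n, pore_pressure):
    """Coulomb criterion: failure when the shear stress reaches the
    frictional resistance on the effective normal stress."""
    return tau_s >= mu * (sigma_n - pore_pressure)
```

With the dry friction values used in the paper (0.7–0.8), raising the pore pressure lowers the effective normal stress and so brings a cell closer to failure, as the criterion makes explicit.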
In the present application, the coefficients of friction are set randomly between 0.7 and 0.8, the values obtained from the induced seismicity experiments of Raleigh et al. (1976) and from the KTB ultradeep borehole (Brudy et al. 1997): this is also consistent with other field evidence (Sibson 1994). Recall that these are the dry values. The pore pressure is initially hydrostatic. The stress drops on first failure, and for later failures while the cell is still weak, are adjusted so that many, but not all, events cascade into ruptures covering most of the fault in question and result in appropriate average slip values for these characteristic events. The initial stress drop on all cells is 40 per cent of the static strength and 10 per cent for subsequent failures before healing. Healing time is taken as 3.0 s (Somerville et al. 1999). Another important factor here is the dynamic enhancement factor. This is a multiplier that increases the induced stresses on the four cells nearest the one that is rupturing, for a single, intra-event time step: it does not apply elsewhere or afterwards. It is meant to approximate the stress enhancement at a propagating crack tip, an effect that static stress calculations exclude. It is not related to so-called dynamic stress triggering as a result of the passage of seismic waves away from the fault. More is said about this in the next section.
The synthetic seismicity program is computationally intense. To remain feasible for fault networks of reasonable complexity, it is necessary to compute all cell interactions once at the start and store them in memory. Maintaining a record of slip history is also memory demanding; e.g. slip on a fault cell in one corner of the network must be remembered until its effects have had time to propagate to the opposite corner. At present, a parallel version of the program runs on a Beowulf cluster of 33 PCs, each with 1 GByte of memory. For the present application (4700 cells), the time needed to generate the final catalogues of approximately 2 300 000 events (1500 large events on the central segment of the Alpine fault) is approximately 10 days. No doubt this could be decreased significantly by using various approximations, but for the present purpose it is acceptable. Much prior testing and parameter adjustment was done using shorter catalogues (see the discussion in Section 6).
It is important to understand that the synthetic seismicity model does not pretend to model fault evolution. The long catalogues are generated just so that the many modes of stress interaction in an elastic earth can be sampled adequately. Viscoelastic relaxation is not yet included, so the results reflect mainly elastic interaction effects. Also, it is not the intent to reproduce specific historical earthquake sequences, or to extrapolate such sequences into the future.

THE GEOMETRY OF THE FAULT NETWORK AND ITS DRIVING MECHANISM
The major faults included in the model (Fig. 3) are the Alpine fault, the Marlborough faults to the northeast and several poorly known faults to the northwest in the Buller region. The latter faults have low long-term slip rates but are nonetheless capable of generating large earthquakes, as shown by the historic record. To the major faults are added a large number of randomly placed small faults to simulate the distributed background seismicity and to fill in the gaps. The major faults are subdivided into segments defined by junctures with other major faults, by changes in strike, or by offsets (Fig. 3, Table 1).
The geometry of the northern Alpine and Marlborough faults is in large part taken from a compilation map in Langridge (2004). The representations of the Buller region faults (mixed thrusting and strike-slip, probably reactivated late Mesozoic normal faults) are less certain and are to some extent just representative of structures inferred to be present from large, historical earthquakes: their surface traces are poorly known and ruptures may not always extend to the surface. Offsets or jogs between fault segments are important in determining how often a rupture will jump across segment boundaries. In the region considered here, the only major offset, >2 km, that is likely to offer strong resistance to rupture propagation is associated with the Hanmer basin on the central Hope fault, between segments 26 and 27 (Langridge 2004; see below also, in the discussion of the dynamic enhancement factor). Approximate long-term slip rates (Table 1) for the major faults are assigned from palaeoseismic studies (Lensen 1968; Cowan 1990; Van Dissen & Yeats 1991; Berryman et al. 1992; Browne 1992; McCalpin 1996; Grapes et al. 1998; Benson et al. 2001; Langridge et al. 2003a,b; Russ Van Dissen, private communication, 2003). Approximate characteristic event displacements, if known, are also given in Table 1: these can vary significantly along strike and serve only as a guide to the modelling. The central section of the Alpine fault (segment number 1 in Table 1) is taken as extending 300 km southwest from its junction with the Kelly/Hope faults. The dip of this segment is approximately 45° SE or less near the centre of the segment (Davey et al. 1995; Beavan et al. 1999) but steepens along strike to the northeast to approximately 60–70° (Berryman et al. 1992). For simplicity, the dip of the entire segment is taken as 60°. A 60° dip is also used for the four Alpine fault segments to the northeast (numbers 2–5), as far as the juncture with the big-bend segment (number 6, dip 75°).
All major faults in the Marlborough network (numbers 7–31) are vertical, except for the Barefell fault and the Jordon thrust, which dip 60° to the northwest. Omitted from the model is a separate Kakapo fault, just south of segments 25 and 26: it has been combined with the adjacent Hope fault segments. The four major Buller faults to the north of the Alpine fault (32–35) have dips of 45°: they exhibit a mix of thrusting and right-lateral slip (Doser et al. 1999). The width of all major faults is such that they extend down to a depth of 15 km, except that in the case of intersection with other faults the lesser one, as judged by slip rate, is truncated.
Besides the major faults, the model includes 3628 smaller faults with random positions within the modelled region, except not too close to any other fault and not south of the region shown in Fig. 3. The restriction on closeness to other faults arises because of computational instabilities if the distance is much less than the cell sizes involved. The smaller faults, taken as square, have sizes chosen randomly from a power-law distribution between 1 and 10 km. The strikes and dips of the smaller faults are taken as the same as those of the nearest major fault, with a random perturbation of ±10°. As it turns out, these smaller faults have only a small effect on the overall statistical results (in detail the catalogues are different) and in some experimental models they were excluded so that the computations were much quicker. For the final models, however, they were retained.
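Drawing fault sizes from a bounded power law can be done by inverse-transform sampling, as in the following sketch. The paper does not state the power-law exponent, so the value here is purely an assumption for illustration.

```python
import random

def sample_fault_size(l_min=1.0, l_max=10.0, exponent=2.0, rng=random):
    """Inverse-transform sample from p(L) proportional to L**(-exponent)
    on [l_min, l_max] km; exponent = 2.0 is an assumed value."""
    u = rng.random()
    a = 1.0 - exponent  # assumed non-zero; exponent = 1 needs a log form
    return (l_min**a + u * (l_max**a - l_min**a)) ** (1.0 / a)
```

With an exponent near 2, most sampled faults fall toward the 1-km end of the range, which is what gives the many small faults needed to fill in the background seismicity.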
In the models considered here, many events on a given segment are characteristic, rupturing most of the fault plane. Thus the regional b-value of approximately 1 arises mainly from the size distribution of faults, not the internal dynamics of single faults (Wesnousky 1994; Knopoff 1996; Stirling et al. 1996). As is the case elsewhere in New Zealand, present-day background seismicity in the northern South Island does not delineate the major faults (Anderson & Webb 1994), supporting this interpretation. Although a Gutenberg–Richter distribution of event sizes on a major fault can be obtained by setting the various mechanical properties differently, it then becomes very difficult to reproduce the large observed single-event displacements.
Each major fault segment is divided into square cells of approximately 5 km size, giving 4700 cells including the smaller faults. The few random small faults with sizes greater than 5 km were left as one cell. The smallest event in the simulations had a magnitude of 4.88; the biggest 8.15. Smaller cell sizes are possible, providing detailed slip histories (Robinson & Benites 2001), but not required for the present purposes.
The dynamic enhancement factor is taken as applying if a cell is within 5 km of the failing cell, equivalent to the four nearest neighbours in the case of a single fault. Its magnitude was determined by examining the situations in which a model rupture can jump across a segment offset. The 3-D dynamic modelling results of Harris & Day (1999) involve two en échelon strike-slip faults with some overlap and offsets of various widths. They find that offsets of approximately 1 km can sometimes be breached, but it depends on the state of stress: sometimes offsets of only 0.5 km act as barriers. Kase & Kuge (2001) also obtained some 3-D modelling results, including perpendicular faults. Real-world examples of offsets that terminate ruptures show that the maximum width of an offset that can be breached is approximately 4 km; for offsets of 1 km there is approximately a 50 per cent chance of the rupture jumping across (Wesnousky 1988; Langridge et al. 2002; Lettis et al. 2002). In the 1992 Landers, California, earthquake, offsets of 2 km were breached (Zachariasen & Sieh 1995). So the test used here was to reproduce a 50 per cent chance of a breach for a 1-km offset of en échelon strike-slip faults. The faults were 30 × 15 km in size, as in Harris & Day (1999), and a test run includes approximately 100 characteristic events on either fault.
An increased chance of jumping across an offset can be obtained by increasing the stress drop as well as the dynamic enhancement factor. These two factors also influence the occurrence rate of characteristic events relative to smaller events on the same major fault, the characteristic displacements and the recurrence time of characteristic events. It is a matter of trial and error to find the combinations of stress drop and dynamic enhancement that best satisfy these constraints (also see the discussion in Section 6). Given the coefficients of dry friction (between 0.7 and 0.8) and near hydrostatic pore pressures, a dynamic factor of 2.0 and an initial stress drop of 40 per cent were found to satisfy these conditions. These values are used throughout this study. Smaller stress drops produced fewer characteristic events and those had displacements too small compared with observation.
Because the dynamic enhancement factor was in part derived by reproducing how ruptures jump across offset fault segments, it here applies not only across along-strike segments of the same major fault, but also across junctions with other major faults. Note that the offset of segments 26 and 27 (Hope fault) in the region of the Hanmer basin, is 4 km, and is thus likely to be a strong barrier to rupture propagation.
Of course, it is necessary to have a driving mechanism that loads the faults toward failure. As discussed in Robinson & Benites (1995), this can be done in several ways. One way is to have each fault driven by aseismic slip on its extension to depth, and along strike where appropriate (Deng & Sykes 1997a,b; Doser & Robinson 2002). However, in the present case this becomes geometrically too complex. Another would be to use hypothetical driving faults on the boundaries of the model, or to assume a direction of uniaxial compression in accord with plate tectonics and resolve the stress onto each fault in the model. However, these uniform processes make it impossible to match the observed long-term slip rates, which vary widely between parallel adjacent faults. In the end, it is just admitted that the real driving mechanism is too complex or not known in enough detail to use directly, and shear stress increments are applied on each fault, according to its type, at a specified rate (this is roughly equivalent to the slip-at-depth driving mechanism). These rates are then adjusted by trial and error so that the resulting long-term slip rates approximate the observed values, where known. In general, stressing rates increase from north to south, but less than would be expected from the observed slip rates, i.e. the stressing rate is more uniform than the slip rates suggest. Some of the lesser faults with a low observed slip rate still require high stress accumulation rates to avoid being pushed backwards by larger nearby faults. The entire question of the driving mechanism needs to be considered in more detail in future work. Gilbert et al. (1994) present some observations favouring deep aseismic slip on fault extensions as a driving mechanism.
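One way such a trial-and-error calibration could be organized is sketched below: scale each segment's stressing rate by the ratio of observed to modelled slip rate and re-run. This is a hypothetical illustration under the assumption that slip rate responds roughly monotonically to stressing rate; it is not the paper's procedure.

```python
def update_stressing_rates(rates, modelled_slip, observed_slip, damping=0.5):
    """One calibration step. All arguments are dicts keyed by fault
    segment; `damping` < 1 keeps the multiplicative update conservative
    so strongly interacting segments do not over-correct."""
    new = {}
    for seg, r in rates.items():
        ratio = observed_slip[seg] / max(modelled_slip[seg], 1e-12)
        new[seg] = r * ratio**damping
    return new
```

In a network of interacting faults each update changes every segment's slip rate, so several such passes would be needed before the modelled rates settle near the observed ones.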

THE SYNTHETIC SEISMICITY CATALOGUE
The catalogue used for the following analysis contains ∼2 300 000 events, representing ∼150 000 yr. There is an initial transient period of approximately 3000 yr in which the behaviour is atypical and which is excluded. This happens because the initial stress state, chosen somewhat arbitrarily, is not compatible with the long-term dynamics. Monitoring the b-value is a good way to define the transient period: it starts out too low but eventually stabilizes around a value of 1. Fig. 4 shows the frequency–magnitude distribution excluding the initial 3000 yr. Two forms are shown: non-cumulative and cumulative (as is more common) numbers. Fig. 5 shows the long-term slip rates in the model versus the observed rates, indicating that the model is successful in reproducing the geologic data.
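Monitoring the b-value over a sliding window can be done with the standard maximum-likelihood estimate of Aki (1965), as in this generic sketch (not code from the paper):

```python
import math

def b_value(magnitudes, m_min):
    """Aki's maximum-likelihood estimate:
    b = log10(e) / (mean(M) - m_min) for events with M >= m_min."""
    ms = [m for m in magnitudes if m >= m_min]
    mean_excess = sum(ms) / len(ms) - m_min
    return math.log10(math.e) / mean_excess
```

Applied to successive windows of the synthetic catalogue, a b-value that climbs from an anomalously low value and then stabilizes near 1 marks the end of the initial transient.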
The rate of moment release is constant over periods of 1000 yr or so, but there are shorter term variations (Fig. 6); for example, a 200-yr-long period of frequent moderate magnitude events can be followed by a quiet 200-yr-long period, followed by a build-up to a very large event (see the period 32 500-33 000 yr in Fig. 6). This is similar to the mode-switching behaviour found by Ben-Zion et al. (1999) in one of their synthetic seismicity models. The b-value fluctuates on similar timescales, but there is no obvious precursory pattern before the largest events.
Epicentres of the events for the 31 500 to 34 500 yr period are shown in Fig. 7. Some small aftershocks on the major faults are encompassed in the main shocks, but the smaller events occur mostly away from the major faults, as a result of the characteristic nature of the main fault behaviour and of the restriction that no cells be closer than the maximum of their two cell sizes. There are 15 632 events in 3000 yr, or 5.2 per yr. This is ∼2 times the observed rate for 1964–2002. The difference could be the result of a relatively quiet period in the real world since 1964 or the lack of smaller magnitude events in the synthetic catalogue: in the real world these smaller events take up some moment release that must be accommodated by larger events in the synthetic catalogue.

TRIGGERING RESULTING FROM LARGE EARTHQUAKES
Although the regional b-value is close to 1 (Fig. 4), individual major faults produce mainly characteristic events that rupture most of the fault (Fig. 8). If we define a large Alpine fault event (LAFE) as rupturing at least 90 per cent of the area of segment 1, then such events have a magnitude of 7.82 or more. They occur with an average recurrence time of 240 yr (Fig. 9), but with a broad range (89 to 718 yr; coefficient of variation = 0.28). This illustrates the danger of extrapolating a short series of real inter-event times. The observed intervals between the last four real events are 205, 195 and 97 yr and the interval from the last event to the present is 286 yr. If the magnitude threshold for a LAFE were reduced to 7.5, the histogram in Fig. 9 would be shifted somewhat to the left. This ambiguity is hard to avoid when dealing with a continuous range of catalogue magnitudes rather than discrete values, such as big versus little, and with the imprecise nature of palaeoseismic data (e.g. did slip in the last three events really extend over 90 per cent of the length of segment 1?). Current geological opinion (Berryman, private communication, 2003) is that the typical interval between truly large events is ∼300 yr.
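The recurrence statistics quoted above (mean interval and coefficient of variation) come from the inter-event times of LAFEs; a generic way to compute them is sketched below (illustrative code with made-up example values, not the paper's):

```python
def recurrence_stats(event_times):
    """Return (mean, coefficient of variation) of the intervals between
    successive event times. CoV = std(dT) / mean(dT); CoV = 0 means
    perfectly periodic, CoV = 1 is consistent with a Poisson process."""
    dts = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = sum(dts) / len(dts)
    var = sum((dt - mean) ** 2 for dt in dts) / len(dts)
    return mean, var**0.5 / mean
```

A CoV of 0.28, as found here, indicates behaviour much more regular than Poisson but still far from strictly periodic.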
The average surface displacement in a LAFE is 5.5 m horizontally and 1.… m vertically.

Figure 9. Histogram of the time intervals, dT, between characteristic events on segment 1 (central Alpine fault). The mean is approximately 240 yr, but the range is wide, from less than 100 to more than 600 yr.
In the discussion to follow the effect of a LAFE is measured by its ability to trigger a significant event on other segments. Significant will be taken to mean an event of magnitude 6.5 or more. This magnitude is not high enough to represent rupture of the whole of most segments, so I will also refer to triggered characteristic events that rupture 90 per cent or more of the segment area (see Table 1 for values of M char ).
In the synthetic seismicity model, the meaning of 'instantaneous' when referring to the triggering of events should be taken in a broad sense. Because the model does not contain time-dependent friction features (e.g. rate and state friction) that could delay some ruptures, perhaps for years, the apparent zero time delay between events could be interpreted as anything from truly zero to perhaps a few years. This does not apply to cases where dynamic enhancement is involved (e.g. propagation along strike on the same fault), where instantaneous is likely to mean just that. The temporal decay of induced changes in pore pressure, which is included in the model, does not seem to have a significant effect on delaying triggered events. Aftershock sequences following real, large earthquakes last up to ∼3 yr before decaying to close to the background rate. So if we take this to be the effect of rate and state friction, instantaneous can be taken to mean within 3 yr.
The change in Coulomb failure stress (dCFS) induced by a LAFE on along-strike segments of the Alpine fault to the northeast (Fig. 10, top) suggests that instantaneous rupture continuation is quite likely, producing a cascade of failing segments. Indeed, examination of the synthetic catalogue shows that 97.7 per cent of LAFEs generate events of magnitude 6.5 or more on segment 2, 58.9 per cent on segment 3, 38.7 per cent on segment 4 and 14.7 per cent on segment 5 (Table 2). For the situation in which characteristic events are triggered, these percentages are 78.6, 25.1, 16.1 and 8.7. However, there is only a small chance that such a megarupture (characteristic events on segments 1–5) would propagate around the big bend (segment 6) and onto segments 7–9 of the Wairau fault (∼1 per cent). Instantaneous propagation of a mega-event onto the other major faults splaying off the Alpine fault is somewhat more common, with the Awatere most commonly involved (5 per cent chance of a characteristic event on segment 11). A mega-event has a chance of ∼40 per cent of triggering an event of M ≥ 6.5 on at least one other segment, most probably on the Kelly or western Hope fault (segments 24 and 30). Other Alpine fault segments can initiate a cascade that propagates onto segment 1. For example, a characteristic event on segment 2 has a high probability (79.8 per cent) of initiating a LAFE.
It should be noted that Fig. 10 (top) does not tell the whole dCFS story, because the continuation of rupture from segment 1 to the northeast can de-stress some faults that splay off to the east (Fig. 10, bottom) and vice versa: the result is a complex web of dynamically evolving stress interactions. Fig. 11 shows the time history of the ratio of shear stress to static strength on the west edge of segment 24 for an extended Alpine fault event (rupturing segments 1, 2, 3, 4 and 5). Although the initial effect of the advancing rupture is to increase the ratio (bring closer to failure), the ultimate result is a significant decrease (take further from failure). Note that the static strength as well as the shear stress is a function of time during the event, as it depends on the normal stress and pore pressure, both of which can change.
Table 2 also shows the chances of a characteristic event on any segment (as an initial event, not itself triggered) inducing an M ≥ 6.5 event on the other segments within 3 yr. Although the chances of this happening in a random catalogue are small, values in the table less than approximately 3 per cent are mostly not reliable because of small numbers. Most of the higher values in the table are for segments along strike from, or at least joining up with, the initiating segment. Examples of more distant triggering do occur, but often with intermediate events, producing a chain.
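Conditional triggering probabilities of this kind can be extracted from a catalogue roughly as follows. The catalogue format (time, segment, magnitude tuples) and the function name are assumptions for illustration; for simplicity the sketch treats every event on the source segment as an initiator rather than filtering to characteristic, not-itself-triggered events as the paper does.

```python
def trigger_probability(catalogue, source_seg, target_seg,
                        window_yr=3.0, m_trigger=6.5):
    """Fraction of events on source_seg that are followed within
    window_yr by an M >= m_trigger event on target_seg.
    catalogue: list of (time_yr, segment, magnitude), time-sorted."""
    initiators = [t for t, s, _ in catalogue if s == source_seg]
    hits = 0
    for t0 in initiators:
        if any(s == target_seg and m >= m_trigger and t0 < t <= t0 + window_yr
               for t, s, m in catalogue):
            hits += 1
    return hits / len(initiators) if initiators else 0.0
```

With ∼1500 initiating events per segment, as here, a 3 per cent entry corresponds to only a few tens of triggered pairs, which is why the smaller table values are flagged as unreliable.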
It is difficult to quantify the effect of de-stressing because of low numbers. For example, a characteristic event on segment 7 (western Wairau fault) has a very low chance of triggering an event on segment 6, which directly adjoins it to the southwest. That this is an effect of de-stressing seems likely (the faults have different mechanisms, strikes and dips), but the chance of apparent triggering purely by coincidence is also low, so we cannot show that the rate in the catalogue is below that expected by chance. Other examples of very low triggering potential despite spatial proximity include the lack of triggering resulting from large events on segment 10 and the generally low triggering potential of large events on segment 27.
Taking the region as a whole, Fig. 12 shows the distribution of inter-event times for events of magnitude 7.25 or more anywhere in the fault network, compared with that for a randomized catalogue (times scrambled, everything else unchanged). Short inter-event times, up to a few years, are much more common in the synthetic catalogue than in the randomized one, while intermediate intervals are less common. This can be interpreted as saying that if no triggering occurs within a few years, then something more akin to a stress shadow takes effect. Curves for other magnitude thresholds are similar. To some extent this result could be anticipated from the number of instantaneous triggers, because the overall rate of moment release must be constant over the long term. Also, parallel strike-slip faults tend to be self-inhibitory.
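The Fig. 12 comparison can be sketched as follows. This is a hedged illustration: the toy clustered catalogue is invented, and the randomization here redraws event times uniformly over the catalogue span, which is one plausible reading of 'times scrambled'.

```python
# Sketch (illustrative data): inter-event time distribution for a clustered
# catalogue versus one with times redrawn at random over the same span.
import random

def interevent_times(times):
    ts = sorted(times)
    return [b - a for a, b in zip(ts, ts[1:])]

def randomized(times, seed=0):
    rng = random.Random(seed)
    lo, hi = min(times), max(times)
    return [rng.uniform(lo, hi) for _ in times]

def short_fraction(times, cutoff_yr=3.0):
    dts = interevent_times(times)
    return sum(dt < cutoff_yr for dt in dts) / len(dts)

# Toy clustered catalogue: pairs of events ~1 yr apart, clusters ~120 yr apart
clustered = []
t = 0.0
for _ in range(200):
    t += 120.0
    clustered.extend([t, t + 1.0])

print(short_fraction(clustered), short_fraction(randomized(clustered)))
# Clustering inflates the short-dT fraction well above the random case;
# at longer dTs the clustered catalogue is correspondingly deficient.
```

Because the total number of events and the catalogue span are held fixed, an excess of short intervals must be balanced by a deficit at intermediate intervals, which is the stress-shadow-like behaviour described above.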

DISCUSSION AND CONCLUSIONS
The synthetic seismicity model contains many parameters, so it might be asked how the results depend on these. Such parameters include the dynamic enhancement factor, the background pore pressure and its decay time if changed by induced stresses, the rupture healing time, the stress drop and the range of the coefficient of friction. Because a full computer run takes ∼10 days, it is not feasible to do exhaustive tests of all the possible variations and combinations. However, most of the parameters are more constrained than it might at first appear. As discussed in the text, several features serve to place tight bounds on the dynamic enhancement factor and stress drop: the long-term slip rates, the observed characteristic slips and recurrence times, and observations elsewhere of how often a rupture can jump an offset of a certain size. It is not thought that the adopted range of the coefficient of friction can be changed very much, because it is based on very good real world observations. One factor that might be missing is the presence of asperities. However, given that there is no independent evidence of where these might be, how big they might be, or how different they might be from normal, it is thought best to simply vary the coefficient of friction randomly from cell to cell within a reasonable range. The healing time during rupture is based on observations summarized in Somerville et al. (1999).

Table 2. The probability of a characteristic earthquake (initial, not itself triggered) on any segment (across) triggering an M >= 6.5 event on any other segment (down), ×100, instantaneously (i.e. within ∼3 yr). Values of 10 per cent or more are bold; sans serif font if 50 per cent or more. Note that there were no such initial events on segments 22 and 23. Values less than ∼3 per cent are mostly questionable.

© 2004 RAS, GJI, 159, 734-748
If no healing is included in the model, the ruptures tend to rattle on for much longer than observations permit. Because there is no evidence for superhydrostatic pore pressure in the region, the initial pore pressure was taken as hydrostatic. As described in the text, the pore pressure can change as a result of induced stresses. It is assumed that the pore pressure reverts to normal according to the diffusion equation, whose solution for simple cases has an exponential time term (Carslaw & Jaeger 1959; Shapiro et al. 1997). The time constant for this decay, 1 yr, is appropriate for typical values of hydraulic diffusivity in the crust (e.g. Parotidis et al. 2003). It has very little effect on the triggering statistics, except for aftershocks. Robinson & Benites (1996) studied a simpler synthetic seismicity model for the Wellington region, with approximately 100 different parameter sets, and found little change in the nature of large event temporal clustering.

The frequency-magnitude distribution for all events (Fig. 4) shows some deviations from a strictly linear relation, especially when the non-cumulative count is used. When plotted as the cumulative number, which is perhaps more common, this is less obvious. At the low end of the magnitude range, which represents the small, random faults, the relation is quite linear. The humpiness at the high end arises from the discrete distribution of sizes of the major faults. Although this could be smoothed away by adjusting the fault size distribution, it is not thought to be a serious problem for the present purposes. The short historic, or even palaeoseismic, record of seismicity in New Zealand does not allow a comparison of the synthetic results with reality.
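The pore-pressure recovery can be sketched with the exponential time term mentioned above. This is a minimal illustration: the 1-yr time constant comes from the text, while the background and induced pressures are invented values.

```python
# Sketch: induced pore-pressure change decaying back to hydrostatic with an
# exponential time term (time constant ~1 yr, per the text); the MPa values
# are illustrative.
import math

TAU_YR = 1.0  # decay time for typical crustal hydraulic diffusivity

def pore_pressure(t_yr, background, induced_change):
    """Pore pressure t_yr after an event induces `induced_change`."""
    return background + induced_change * math.exp(-t_yr / TAU_YR)

# A coseismic pressure drop of 2 MPa below a hydrostatic 40 MPa:
p_now = pore_pressure(0.0, 40.0, -2.0)   # 38.0 immediately after the event
p_3yr = pore_pressure(3.0, 40.0, -2.0)   # ~95 per cent recovered
print(round(p_now, 2), round(p_3yr, 2))
```

A cell de-pressurized in this way is temporarily strengthened, and its gradual recovery over a few years is what produces most of the model's aftershocks.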
The distribution of event magnitudes on the main Alpine fault segment (Fig. 8) might be considered a near extreme version of the characteristic behaviour model. Published real distributions for a single fault (e.g. Wesnousky 1994) show many more small shocks even if the number of the large, characteristic events is enhanced. However, such plots are for a corridor along the fault, up to 5 km on either side. Event location uncertainties make this a necessity, but many small events, it is thought, would not be truly on the fault. If the cell size were reduced to, say, 1 km, so that the small, random faults could lie much closer to the major faults, and a 10-km-wide corridor were observed, Fig. 8 would presumably include many more small events.

Figure 11. Time history of the shear stress/static strength ratio during the evolution of a slip event on segments 1, 2, 3, 4 and 5 (plus a small slip on 30), for a cell on the westernmost edge of segment 24. At first, the approaching rupture positively stresses this cell, but not enough for failure; the ultimate shear stress is significantly lower than initially. Note that the static strength changes with time as well as the shear stress.

Figure 12. Distribution of inter-event times, dT, for events of magnitude 7.25 or more anywhere in the region. Red line/points: synthetic seismicity model; green: randomized catalogue. Although the average intervals are the same, the distributions for dTs less than approximately 50 yr are very different: clustering initially, followed by relative quiescence, for the synthetic catalogue.
Indeed, even the major faults of the Wellington region (on the southern tip of the North Island) are not delineated by the microseismicity, even though a good local seismograph network has existed since 1978 (Robinson 1986). The synthetic seismicity model does not produce a large number of aftershocks, because it lacks the required time-dependent friction and the post-seismic reloading resulting from after-slip and viscoelastic relaxation. Another factor is the lack of small faults closer than 5 km to the major faults. The aftershocks that are produced seem to be mostly a result of the decay of induced negative changes in pore pressure.
It is sometimes suggested that the Alpine fault is overdue for a large earthquake. However, synthetic seismicity models of the Alpine-Marlborough-Buller fault network show that the inter-event times for large characteristic events on any single fault have a broad distribution. Fig. 9 shows this for the central segment of the Alpine fault. The coefficient of variation, 0.28, is not particularly extreme, because the distribution of recurrence times, although broad, is quite peaked. Thus, it is not possible to say that a large event on the Alpine fault is definitely overdue; it could be a few hundred years off. It is possible that a deeper analysis of the inter-event times would show some helpful patterns, such as a long interval tending to follow a short one, but that is not performed here. Indeed, the synthetic catalogue could be used for many purposes, such as looking for precursory patterns in the background seismicity, as done by Eneva & Ben-Zion (1997a,b).
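The coefficient of variation quoted above is simply the standard deviation of the inter-event times divided by their mean. The following sketch uses invented intervals; the actual synthetic-catalogue value for the central Alpine fault segment is 0.28.

```python
# Sketch: coefficient of variation (CoV) of recurrence intervals.
# CoV = 0 for a perfectly periodic fault; CoV = 1 matches a Poisson process.
import statistics

def coefficient_of_variation(intervals):
    return statistics.pstdev(intervals) / statistics.mean(intervals)

# Invented intervals scattered about a 240-yr mean, as for the central
# Alpine fault segment (not the actual synthetic-catalogue values):
toy_intervals = [180, 210, 240, 240, 270, 300]
cov = coefficient_of_variation(toy_intervals)
print(round(cov, 2))
```

Even a modest CoV such as 0.28 around a 240-yr mean allows individual intervals to differ by a century or more, which is why 'overdue' is hard to establish.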
The rate of moment release in the catalogue is constant on a timescale of ∼1000 yr, but can fluctuate significantly on scales of a few hundred years. This has implications for the use of short historical catalogues to infer seismic hazard. Such temporal variations have been observed in the longest real world catalogues available (e.g. Marco et al. 1996) and in the synthetic catalogues of Ben-Zion et al. (1999).
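Stationarity of the moment-release rate at different timescales can be checked by summing seismic moments in consecutive windows, converting magnitude to moment with the standard Hanks-Kanamori relation M0 = 10^(1.5 Mw + 9.05) N m. The windowing helper and toy events below are illustrative, not the paper's analysis code.

```python
# Sketch: moment-release rate in consecutive time windows; toy data.

def seismic_moment(mw):
    """Seismic moment (N m) from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.05)

def windowed_moment_rate(events, window_yr):
    """Mean moment-release rate in consecutive windows.

    `events` is a list of (time_yr, magnitude) pairs.
    """
    t0 = min(t for t, _ in events)
    t1 = max(t for t, _ in events)
    n = int((t1 - t0) // window_yr) + 1
    sums = [0.0] * n
    for t, mw in events:
        sums[int((t - t0) // window_yr)] += seismic_moment(mw)
    return [s / window_yr for s in sums]

# Two M 6 events in the first century, one M 7 in the second: the rate
# fluctuates strongly on this short timescale.
toy_events = [(0.0, 6.0), (50.0, 6.0), (150.0, 7.0)]
rates = windowed_moment_rate(toy_events, 100.0)
print(len(rates), rates[1] > rates[0])
```

Because moment grows by a factor of ∼32 per magnitude unit, a few large events dominate any short window, so century-scale rates fluctuate even when the millennium-scale rate is steady.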
If a LAFE does occur, it seems likely that it would propagate at least part way along the other segments to the northeast. It also has a fair probability of triggering rupture on the faults splaying off to the east, most likely so for the Kelly fault. All other fault segments have a non-zero chance of an induced event of magnitude 6.5 or more, even if quite far away and of a different mechanism. However, because of very low numbers despite the length of the synthetic catalogue, some of these apparent interactions are probably a result of chance.
Similarly, large characteristic events initiating on non-Alpine fault segments can trigger activity elsewhere. Only large events on segments 10, 32 and 35 have no chances of 10 per cent or more of triggering activity elsewhere (i.e. no bold entries in their columns in Table 2). It is interesting that there are no initial (not triggered) characteristic events on segments 22 and 23, offshoots of the Clarence fault with low slip rates. These two segments tend to get pushed around by events on more major nearby faults.
In the region as a whole, the distribution of the time intervals between magnitude 7.25+ events is quite different from that for a randomized catalogue. Following such an event there is a much higher chance of another for a few years; then there is a much reduced chance, as compared with a random catalogue. The behaviour for other large magnitude thresholds is much the same. Because the historic record of seismicity in New Zealand is short, it is not possible to test the predictions of the synthetic seismicity model against reality. That said, it is interesting to note the potential triggering effect (Table 2) of large events on segment 32, site of the 1929 M 7.7 Buller event, on segment 34, site of the 1968 M 7.2 Inangahua shock. Although these events are 39 yr apart, both faults have very low slip rates, with recurrence times in the thousands of years, so this may be a case of triggering, as suggested by Hincapie et al. (2004).

The most conspicuous temporal clustering of large events in the historic record is the occurrence of the 1929 M 7.7 Buller event (segment 32) and, just a few months earlier, the 1929 M 7.0 Poulter event (segment 31). Table 2 indicates no triggering between these two segments: despite their segment numbers, they are very far apart. Thus, the temporal proximity of these two events cannot be explained by static stress triggering. Work currently in progress is examining in detail the series of mainly prehistoric (palaeoseismic) events along the Hope fault, where it appears there may have been an unzipping similar to the migration of large events on the North Anatolian fault in Turkey (Stein et al. 1997).
The question of what 'instantaneous' means in the synthetic seismicity model is difficult to make precise. Many rupture jumps across segment boundaries would likely occur after near-zero time because of the dynamic effect, but because the model lacks some time-dependent properties of rock friction, it is likely that some more distant interactions would be delayed, as is the case for aftershocks. Indeed, it is possible to adjust the model to introduce such delays artificially, but it is preferred to leave the question open until the model can be improved, as discussed below. The meaning of instantaneous can probably be taken as up to ∼3 yr, the typical duration of aftershock sequences for large earthquakes. The model does include induced changes in pore pressure that decay away in time, but this does not seem to make a great deal of difference.
Another factor that may be important in triggering is that the fault cells do not spend much time near failure. This is because the stress drop is moderately high, as required to reproduce the observed large displacements. An increase in short-term triggering could be produced by models in which each fault displays a Gutenberg-Richter type of magnitude-frequency distribution, or if after-slip were included.
Besides the lack of rate and state friction, the synthetic seismicity model lacks any viscoelastic response. Such a response would tend to reload the fault that failed (keeping it closer to failure and hence more susceptible to triggering). At the same time, it would tend to amplify the induced stresses at distance, again leading to more clustering over the ∼10-yr timescale. The introduction of a viscoelastic layer below a brittle crust is high on the author's list of priorities; however, this involves an order of magnitude increase in computation time and complexity. In order to reproduce the observed, real world event displacements (which are relatively large), it has been necessary to use a stress drop of 40 per cent. This means that the fault cells, as a whole, are not close to failure at all times, i.e. the self-organized critical state is not a good representation. The introduction of a viscoelastic response would decrease the time it takes a fault to again approach the failure state.
The driving mechanism of the model needs to be made more realistic, most likely by introducing explicit aseismic slip on an extension of each fault to depth and along-strike as appropriate. The present driving mechanism is meant to approximate this without the complexities of intersecting faults. However, this raises another problem: there are moderate depth (H > 50 km) earthquakes beneath the Marlborough region that define the southward extension and termination of the North Island subduction zone. Interpretations of geodetic and earthquake data imply that the subduction interface here (south of Cook Strait) is permanently (for approximately the last 1 000 000 yr) locked (Holt & Haines 1995; Reyners et al. 1997; Nicol & Beavan 2003). Even so, the structure is truly 3-D, and it could be that 3-D finite element modelling of the plate collision zone as a whole would be needed to properly define a driving mechanism.

ACKNOWLEDGMENTS
This work would not have been possible without the expert geological knowledge of Rob Langridge and Russ Van Dissen. The paper has benefited from comments by Martin Reyners, Ross Stein and an anonymous reviewer. Roger Williams was responsible for the construction and operation of the Beowulf PC cluster. This work has been funded by the Foundation for Research, Science and Technology.