
Tectonophysics, 270: 207-219
Copyright 1997 © Elsevier Science B.V. All rights reserved.

Statistical aspects of Parkfield earthquake sequence and Parkfield prediction experiment

Y.Y. Kagan1

Institute of Geophysics and Planetary Physics, University of California, Los Angeles, CA 90095-1567, USA
1 E-mail: ykagan@ucla.edu.

Received 25 March 1996; Accepted 23 August 1996

Keywords: Parkfield earthquakes; earthquake prediction; statistical analysis; seismic gaps; seismicity


Abstract

This note discusses three interconnected statistical problems concerning the Parkfield sequence of moderate earthquakes and the Parkfield prediction experiment: (a) Is it possible that the quasi-periodic Parkfield sequence of characteristic earthquakes is not a specific, uncommon phenomenon (the research hypothesis), but can instead be explained by preferential selection from available earthquake catalogs? To this end we formulate the null hypothesis (earthquakes occur according to a Poisson process in time and their size follows the Gutenberg-Richter relation) and test whether it can be rejected as an explanation for the Parkfield sequence. (b) If the null hypothesis cannot be refuted, what is the probability of an m ≥ 6 earthquake occurring in the Parkfield region? (c) The direct goal of the Parkfield experiment is the registration of precursory phenomena prior to an m ≥ 6 earthquake. However, in the absence of the characteristic earthquake, can the experiment resolve which of the two competing hypotheses is true in a reasonable time? Statistical analysis is hindered by an insufficiently rigorous definition of the research model and by inadequate or ambiguous data. However, we show that the null hypothesis cannot be decisively rejected. The quasi-periodic pattern of intermediate size earthquakes in the Parkfield area is a statistical event likely to occur by chance if it has been preferentially selected from available earthquake catalogs. The observed magnitude-frequency curves for small and intermediate earthquakes in the Parkfield area agree with the theoretical distribution computed on the basis of a modified Gutenberg-Richter law (the gamma distribution), using deformation rates for the San Andreas fault. We show that the size distribution of the Parkfield characteristic earthquakes can also be attributed to selection bias. According to the null hypothesis, the yearly probability of an m ≥ 6 earthquake originating in the Parkfield area is less than 1%, signifying that several more decades of observation may be needed before the expected event occurs. By its design, the Parkfield experiment cannot be expected to yield statistically significant conclusions on the validity of the research hypothesis for many decades.


1. Introduction

The historical sequence of moderate earthquakes near Parkfield, California (Bakun and Lindh, 1985; Parkfield, 1988) is often taken to support two models of earthquake occurrence: the characteristic magnitude model and the quasi-periodic recurrence model. The quasi-periodic recurrence model is supported by the regularity of the recurrence intervals between earthquakes in 1857, 1881, 1901, 1922, 1934 and 1966, i.e., a return time of about 22 years. The characteristic magnitude distribution is supported by the apparent uniformity of size, with the five historic earthquakes (1881-1966) falling in the range 5.5 ≤ m_w ≤ 6.5. The seismograms of the last three earthquakes are similar (Parkfield, 1988, p. 49), confirming that the events are "characteristic". On the basis of this earthquake history, Bakun and Lindh (1985) predicted, at the 95% confidence level, that the next earthquake with magnitude 5.5-6.5 would occur at Parkfield before 1993.

Although several mechanical models have been proposed to explain the Parkfield earthquakes' quasi-periodicity, these models are not unique (Kagan, 1994) owing to the absence of a comprehensive theory of earthquake fault system evolution. The predictions of the mechanical models depend on many adjustable parameters and hence cannot be decisively tested against real data (cf. Oreskes et al., 1994). Our understanding of earthquake mechanics is insufficient to predict reliably whether the current distribution of creeping and locked San Andreas fault segments in the Parkfield area (Harris and Archuleta, 1988) will be preserved after a new great earthquake. Some of these models offer a plausible explanation for an observed earthquake occurrence; however, none possesses a significant predictive capability. For example, Ben-Zion et al. (1993) interpret the Parkfield sequence differently, showing the recurrence times for moderate events increasing as the time from the 1857 earthquake increases. Thus the major confirmation and justification for the Parkfield prediction is earthquake statistics: the 22-year recurrence interval and the 95% prediction window both result from statistical analysis of seismicity. Since 1985 the Parkfield prediction has been criticized on several grounds (Davis et al., 1989; Savage, 1993 and references therein). Nonetheless, these criticisms do not challenge the basic premise of the prediction, i.e., that the observed regularity of the Parkfield characteristic earthquakes is a specific phenomenon which can be extrapolated into the future. Most tests of earthquake size and interval distributions at Parkfield treat the region in isolation, ignoring the process by which Parkfield was identified from the very earthquake record used to test the hypotheses. Here we propose an alternative, or "null", hypothesis: that the magnitude distribution throughout California obeys a Gutenberg-Richter (GR) law modified at the high end by introducing a maximum magnitude (Kagan, 1993), that earthquake recurrence throughout California is described by a Poisson process, and that the apparent regularity at Parkfield results from data selection. Some regions will by coincidence have more earthquakes, and more regularly spaced earthquakes, than others. According to the alternative hypothesis, Parkfield is a region which by chance experienced more earthquakes than the average during the historical period, and the recurrence times are by further chance distributed in a relatively narrow range.

The Poisson temporal model is selected for the following reasons: (a) It is the simplest model of earthquake occurrence: the Poisson process has maximum uncertainty (entropy) among stochastic point processes of a given intensity (Vere-Jones, 1975). (b) Most continuum mechanics models for earthquake rupture and interaction have a non-linear, complex, chaotic character implying randomness of outcome, i.e., Poisson-type behavior. (c) The Poisson model approximates well the occurrence of very large and intermediate size earthquakes in declustered catalogs.

The GR relation has been used extensively to approximate the earthquake size distribution over many orders of magnitude. It describes satisfactorily the seismicity of the Earth's tectonic plates and smaller tectonic regions, as well as micro-earthquakes in mines and acoustic emission events during rock specimen testing. The GR relation has been successfully applied to the description of global and regional seismicity since the 1940s (Kagan, 1994). It is natural to test this model as the representation of local seismic catalogs, and in particular the seismic catalogs for the Parkfield region.

The guiding principle of this study is that a simple null hypothesis (model) needs to be fully and critically investigated as an explanation for the observations. Only if the null hypothesis is decisively rejected should we formulate and test an alternative working model. Testing the research hypothesis (quasi-periodic recurrence of characteristic earthquakes) against the null hypothesis is difficult for several reasons: the characteristic quasi-periodic models have not been clearly specified at Parkfield; the earthquake catalog data are of mixed quality; and the rules by which Parkfield was selected as a "special" area are not specific. Thus we need to test the null hypothesis against a research model (the Parkfield hypothesis) that itself needs further clarification. Moreover, we test both hypotheses against the data which have been used to formulate the Parkfield hypothesis. In the next section we propose some modifications of testing procedures to address these problems. Two additional problems are investigated in this paper: (a) If the null hypothesis is correct, what is the yearly probability of an m ≥ 6 earthquake in the Parkfield area? (b) Can the Parkfield model of earthquake occurrence be tested experimentally over a reasonable time period?

2. Hypotheses

We formalize the Parkfield hypothesis as presented by Bakun and Lindh (1985) and the null hypothesis. As described earlier, the Parkfield model contains two basic sub-hypotheses: large earthquakes are quasi-periodic, and their size distribution follows the "characteristic" pattern. The alternative (null) hypothesis, H0, by necessity also consists of two sub-units: large earthquakes are Poisson in time and the magnitude-frequency relation is the GR law modified at the large magnitude end (the gamma distribution, see below). Four different combinations can be formed with these components: H0 - Poisson occurrence with the GR (gamma) size distribution; H1 - quasi-periodic occurrence with the characteristic size distribution; H2 - quasi-periodic occurrence with the GR size distribution; H3 - Poisson occurrence with the characteristic size distribution.

It is not clear which of the three non-null research hypotheses needs to be tested. Apparently, H1 is the hypothesis of choice (Bakun and Lindh, 1985); however, the strictly defined characteristic model (Schwartz and Coppersmith, 1984) allows no earthquake stronger than the characteristic one to occur on a particular fault segment. Bakun and Lindh (1985, p. 623) state that a rupture which starts at Parkfield might grow into a major earthquake, thus contradicting the characteristic model. A modified characteristic hypothesis might be classified as H2, where small (m ≤ 5.5) and moderate/large earthquakes follow the GR law (possibly with different seismic activity levels, corresponding to different curve heights in a magnitude-frequency plot), but the moderate/large earthquakes are quasi-periodic.

What significance level should we use in testing our hypotheses? The frequently used 5% level signifies that, even if the sample selection process is fair and unbiased and without hidden systematic errors, the null hypothesis would be rejected in one case out of 20 due to random fluctuations. As mentioned above, the null hypothesis has been verified by a long history of successful application to seismicity modelling, especially for the occurrence of large earthquakes. Thus we need only show that the model is not strongly refuted by the available evidence. Moreover, the 5% significance level may be appropriate for a test against new data, whereas here we are forced to use old data. Therefore, the significance level selected for rejection of the null hypothesis should be much lower than the usual 5% (Anderson, 1992). Since the Parkfield model has not been rigorously specified and the historical seismic data are uncertain, we calculate here only order-of-magnitude estimates of probabilities.

It often happens that a slight variation of initial conditions in statistical testing yields a significantly different result. To avoid ambiguities, we need a rule to interpret uncertain, non-unique results. To this end, we propose that any unresolved doubt in the data analysis, testing and representation of the statistical test results should be interpreted in favor of the null hypothesis. In particular, if a slight, plausible variation of model parameters or input data makes the difference between accepting and rejecting the null hypothesis, the null hypothesis should not be considered as rejected. Such a rule should stimulate proponents of a new model (1) to specify the hypothesis properly and (2) to improve the quality of data.

3. Time and space distribution

Can the Poisson hypothesis be rejected as a model for occurrence times of the Parkfield "characteristic" earthquakes? A reliable earthquake history in California starts with "the gold rush" (1849), but the seismicity of the Parkfield area before 1910-1920 (the starting date for the local instrumental catalogs) is not well known. Some of the earlier Parkfield earthquakes tabulated in the introduction may be several tens of kilometers away from the 1966 epicenter, or may have a focal mechanism significantly different from that of the last three events. According to Segall and Du (1993), the slip distributions on the fault plane during the 1934 and 1966 events and their aftermaths were significantly different. Thus, the earthquakes should not be considered "characteristic" in the strict (Schwartz and Coppersmith, 1984) definition. Toppozada (1992) finds many more moderate earthquakes near Parkfield before 1910, although their exact locations cannot be ascertained. We analyze two Parkfield sequence variants below: (1) five closed inter-earthquake intervals and one open interval during the last 138 years (1857-1995), and (2) two closed inter-earthquake intervals and one open interval during the last 75 years (75 years approximates the time span of the northern California catalog, Bolt and Miller, 1975).

The Weibull distribution (Kotz and Johnson, 1988, pp. 549-556; Rhoades and Evison, 1989; Sieh et al., 1989) can be used to test the earthquake temporal distribution: it is a two-parameter statistical law with the density:

φ(t) = (β/θ^β) t^(β−1) exp[−(t/θ)^β]    (1)
where t is an inter-event time, β is a shape parameter and θ is a recurrence time (scale) parameter. The Poisson process corresponds to β = 1; for β > 1 the event occurrence is quasi-periodic, reaching strict periodicity as β → ∞; β < 1 corresponds to clustering. Using formula 8.3.1 in Nelson (1982) for the five inter-earthquake intervals and one open interval of the Parkfield sequence (as of 1/1/1996), we calculate the log likelihood function ℓ(β, θ) for distribution (1).

Figure 1 shows a map of the log likelihood function ℓ(β, θ). The likelihood function in Figure 1 has been normalized so that its maximum is 3.0. Wilks (1962, ch. 13.8) shows that, for a large number of events, the log likelihood (measured from its maximum) is distributed as (1/2)χ²(2) (the chi-square distribution with two degrees of freedom, corresponding to the two parameters in (1)). The 95% confidence level therefore corresponds to the contour value 0.0. Most of these contours are far from elliptical; hence the likelihood function is not quadratic and the standard methods for its analysis are not applicable. The maximum of ℓ yields the following estimates: β = 3.6 and θ = 26.3 yr. Because the Poisson process is a special case of the Weibull distribution, the difference in ℓ between the global maximum of the likelihood function and the maximum restricted to β = 1 is distributed as (1/2)χ²(1) (Wilks, 1962); the difference is 3.7. Thus, the null Poisson hypothesis would be rejected at the 0.6% significance level. Another approximation (Bain and Engelhardt, 1981) yields a 1.5% level. A similar analysis for the last three Parkfield earthquakes yields β = 2.5: this β-value and a map of ℓ similar to Figure 1 mean that the sequence is consistent with the Poisson model.
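The censored-data likelihood behind these estimates can be sketched numerically. Below is a minimal Python illustration, assuming whole-year inter-event times derived from the listed occurrence years (1857, 1881, 1901, 1922, 1934, 1966) and a right-censored open interval through 1/1/1996; the grid search and the chi-square approximation are only meant to roughly reproduce the β ≈ 3.6, θ ≈ 26 yr estimates and the likelihood-ratio test described above, not the exact published values.

```python
import numpy as np
from scipy.stats import chi2

# Approximate whole-year intervals between the 1857, 1881, 1901, 1922, 1934, 1966 events
closed = np.array([24.0, 20.0, 21.0, 12.0, 32.0])
open_interval = 30.0                       # 1966 to 1/1/1996, right-censored

def loglik(beta, theta):
    """Weibull log likelihood with one right-censored interval (cf. Nelson, 1982)."""
    z = closed / theta
    ll = np.sum(np.log(beta / theta) + (beta - 1.0) * np.log(z) - z**beta)
    return ll - (open_interval / theta)**beta   # survivor term for the open interval

betas = np.linspace(0.2, 8.0, 391)
thetas = np.linspace(5.0, 80.0, 751)
grid = np.array([[loglik(b, th) for th in thetas] for b in betas])
ib, ith = np.unravel_index(grid.argmax(), grid.shape)
print("beta, theta estimates:", betas[ib], thetas[ith])

# Likelihood ratio test of the Poisson special case (beta = 1)
delta = grid.max() - grid[np.argmin(np.abs(betas - 1.0)), :].max()
print("log likelihood difference:", delta, " p-value:", chi2.sf(2.0 * delta, df=1))
```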


Figure 1. Log likelihood function ℓ for the distribution of inter-earthquake times in the Parkfield sequence: β is the shape parameter of the Weibull distribution, θ is the scale parameter (recurrence interval). The zero isoline is the boundary of the 95% confidence area for β and θ. The H sign marks the log likelihood maximum.

However, the analysis in the previous paragraph applies only if the Parkfield sequence has been selected without using seismic data, or if the sequence has been selected at random from the entire catalog. In reality, as explained earlier, the earthquake history has been a crucial factor in the selection of the Parkfield sequence. The criteria for the selection have not been formally presented; thus we tentatively list them: (a) Characteristic earthquakes have a magnitude of about 6.0 or greater, i.e., they rupture the entire seismogenic crust; (b) Characteristic earthquakes occur on seismically active major faults; (c) The earthquakes rupture the same segment of a fault; (d) The earthquake occurrence on a characteristic segment of a fault is independent of seismicity outside the segment.

We want to evaluate the approximate probability of selecting several moderate quasi-periodic earthquakes on a fault segment. The null hypothesis cannot be rejected if the above probability is of the order of a few percent (5% or more); at that level we should not expect to find more such sequences, besides the Parkfield earthquakes, in the California earthquake history.

What is the total length of the major earthquake faults in southern California? It is difficult to provide an unambiguous answer. Working Group (1995) subdivided the region into 65 seismic zones; 16 of them contain major segments of the San Andreas, San Jacinto and Elsinore faults. The total length of these segments is 922 km. For evaluation purposes let us assume that 900 km of the major faults in southern California are subdivided into 30 segments, each 30 km long. Even large earthquakes often occur on unrecognized faults, violating (b), and they sometimes rupture through postulated segment boundaries, violating (c) and (d). Thus we do not consider the full null hypothesis which should test all features of the characteristic earthquake model, but a rather restricted null hypothesis which allows large earthquakes to occur only on geologically defined major faults within the segment boundaries.

Since some segments (Working Group, 1995) are longer or shorter than 30 km, their total number can be smaller or larger than 30. However, the selection could involve all of California, thus increasing the number of zones. Similarly, some large earthquakes occur outside of major faults, decreasing the number of qualifying earthquakes. Earthquakes' spatial distribution is inhomogeneous: their concentration in a few fault segments increases the probability that a random earthquake sequence will be counted as a characteristic series. Since we want to determine approximate estimates of probabilities, the above 30-segment model is appropriate for our calculations.

Using Ellsworth's (1990) and the Caltech (Hutton and Jones, 1993, and references therein) catalogs, we find that the rate of earthquakes with 5.5 < m ≤ 6.5 is 0.8 eq/yr (earthquakes per year) in southern California (32-37°N, 114-122°W). This rate was reasonably stable during the 138 and 62 years of the two catalogs' spans, respectively (see fig. 6 in Kagan, 1994). Thus, during the last 75 years there should have been about 2.0 earthquakes per segment. If earthquakes occur independently and at the same rate in all segments, i.e., according to the Poisson process, 5.4 segments would be expected to have exactly three earthquakes (the number of characteristic events in the last 75 years on the San Andreas fault near Parkfield). Similar calculations can be made for the last 138 years: on average 3.7 events occur per segment and 4.3 segments are expected to have exactly five earthquakes.
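These per-segment expectations follow directly from the Poisson counting formula. The sketch below, in Python, simply reproduces the arithmetic of this paragraph (0.8 eq/yr spread uniformly over 30 segments); it is an illustration of the calculation, not of the underlying catalogs.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events for a Poisson variable with mean lam."""
    return lam**k * exp(-lam) / factorial(k)

rate, segments = 0.8, 30                   # eq/yr with 5.5 < m <= 6.5; 30 segments of 30 km
for years, k in [(75, 3), (138, 5)]:
    lam = rate * years / segments          # expected events per segment
    expected_segments = segments * poisson_pmf(k, lam)
    print(f"{years} yr: {lam:.1f} events/segment, "
          f"{expected_segments:.1f} segments with exactly {k} events")
```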

Instead of a five-event sequence with a recurrence time of 22 years, it is also feasible that another sequence of four earthquakes with inter-event intervals of about 27 years, or a sequence of six events with an 18-year average recurrence time, would have been observed. Such sequences could also be taken to confirm the characteristic earthquake model. We add the probabilities of such quasi-periodic sequences to the above estimates: the number of expected segments with four events or more in the sample of 30 segments is 15.2. To obtain an approximate estimate of the resulting probability, we multiply the significance level for the rejection of the Poisson model versus the Weibull distribution (0.6%) by 15.2. This procedure is legitimate if the probabilities involved are very small compared to 1. We obtain a probability of 9.1% (0.6% × 15.2) that such a quasi-periodic sequence appears by chance among the segments with four events or more in the sample of 30 segments. Such a probability is insufficient to reject the null hypothesis even under the less restrictive 5% significance level (see Section 2). Similar results are obtained if the 900 km are subdivided into 45 segments, each 20 km long.

As another possibility, let us assume that earthquake epicenters are distributed uniformly and independently over 900 km of the major faults, i.e., the epicenters follow the Poisson spatial distribution along the fault lines. Several earthquakes are considered to belong to a characteristic series if the pairwise distance between their epicenters is less than 5 km for the three events during the last 75 years and less than 20 km for the two earlier earthquakes. According to Segall and Du (1993) the centroids of the two latest characteristic earthquakes (1934 and 1966) are separated by as much as 10 km. The average distance between adjacent epicenters (δ) during the last 75 years is 15 km (900 km divided by the expected number of events, 75 × 0.8); for Poisson events, the probability density of the spacing is exponential:

φ(r) = (1/δ) exp[−(r/δ)]    (2)
where r is the distance between the epicenters. The probability of two events being separated by less than 5 km is [1 − exp(−5/15)] = 0.28; for three events it is 0.28² = 0.08. A similar calculation for the earlier part of the sequence yields δ ≈ 18 km and a probability of [1 − exp(−20/18)] = 0.67. We calculate the anticipated number of earthquake sequences in 138 years with epicenters separated by 5 km and 20 km. The number is approximately 138 × 0.8 × 0.28² × 0.67² = 3.9, i.e., comparable to the values obtained for the faults separated into equal size segments. For example, six earthquakes with 6.0 ≤ m ≤ 6.6 are located on or near the San Jacinto fault zone in a rectangle 33.0-33.5°N, 116.0-116.5°W (see fig. 6.2 in Ellsworth, 1990); the maximum distance between these six epicenters is less than 45 km. Similar concentrations of epicenters can be found in the Imperial Valley and east of San Francisco Bay (Ellsworth, 1990).
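The spatial part of the argument uses only the exponential spacing density (2). A minimal Python sketch of the numbers quoted in this paragraph (900 km of faults, 0.8 eq/yr, 5 km and 20 km separation thresholds) is given below; the roughly four expected chance sequences it returns match the 3.9 obtained above with rounded intermediate values.

```python
from math import exp

fault_length, rate = 900.0, 0.8              # km of major faults; eq/yr for 5.5 < m <= 6.5

delta_recent = fault_length / (75 * rate)            # ~15 km mean spacing, last 75 years
delta_early = fault_length / ((138 - 75) * rate)     # ~18 km mean spacing, 1857-1920

p_recent = 1.0 - exp(-5.0 / delta_recent)    # two epicenters closer than 5 km
p_early = 1.0 - exp(-20.0 / delta_early)     # two epicenters closer than 20 km

# expected number of chance "characteristic" sequences in 138 years
n_sequences = 138 * rate * p_recent**2 * p_early**2
print(delta_recent, delta_early, round(p_recent, 2), round(p_early, 2), round(n_sequences, 1))
```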

The previous temporal analysis has been based on the Weibull distribution of inter-event time intervals. Let us now confine our examination to the temporal Poisson model. If n Poisson events occur in a time interval T, the events are distributed uniformly inside the interval. What is the probability that the inter-earthquake intervals Δt are such that T_1 ≤ Δt ≤ T_2? We obtain (Feller, 1966, ch. I, problem 22):

p_n(Δt) = T^(−n) {[T − (n−1)T_1]_+^n − [T − (n−1)T_2]_+^n}    (3)
where [·]_+ means that the expression is set to zero when its argument is less than zero. For T = 75 yr, n = 3, T_1 = 12 yr and T_2 = 32 yr, the probability is p_n = 0.311. For T = 138 yr and n = 5, p_n = 0.116. Thus, according to the null hypothesis, for a 75-year interval we should expect about 5.4 × 0.311 = 1.7 segments in southern California to have such an earthquake sequence. For 138 years the segment number is 0.5. Thus, a quasi-periodic sequence similar to that observed in the Parkfield area can occur by chance if it is selected from all earthquake sequences in southern California.
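Eq. (3), as reconstructed here from the surrounding text, can be checked directly; the short Python function below reproduces the quoted probabilities p_n ≈ 0.311 and p_n ≈ 0.116 for the two variants of the Parkfield record.

```python
def p_spacing(T, n, T1, T2):
    """Probability of eq. (3) for n uniformly distributed (Poisson) events in [0, T],
    with [x]_+ = max(x, 0) applied to each bracketed term."""
    pos = lambda x: max(x, 0.0)
    return (pos(T - (n - 1) * T1)**n - pos(T - (n - 1) * T2)**n) / T**n

print(p_spacing(75.0, 3, 12.0, 32.0))    # last 75 years: ~0.31
print(p_spacing(138.0, 5, 12.0, 32.0))   # full 138-year record: ~0.12
```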

Partial confirmation of the above statement is found in the analysis of a sequence of six earthquakes with 5.4 ≤ m ≤ 6.4 in eastern California (Bishop-Mammoth Lakes area) by Savage and Cockerham (1987). The inter-event times are 1.64, 1.35, 1.27, 1.88, 1.65, 0.50 yr (the last item corresponds to the time difference between the last earthquake and the submission of the paper). Savage and Cockerham (1987) tentatively predicted an earthquake with a magnitude of about 6.0 to occur in December of 1987 ± 0.7 years.

The approximation of the Bishop-Mammoth 1978-1986 sequence by the Weibull distribution yields β ≈ 8; thus the sequence shows an even stronger periodicity than the Parkfield series. In the latter case only the characteristic magnitude and the earthquake segment boundaries are formal adjustable parameters in the selection; for the Bishop-Mammoth Lakes sequence, the boundaries of the Long Valley Caldera area and the starting time of the catalog can also be used in data selection. In addition, three moderate earthquakes have been excluded from the sequence (Savage and Cockerham, 1987, pp. 1347-1348). This might explain the stronger periodicity of the series. Savage and Cockerham (1987, p. 1354) mention the possibility that the sequence of six quasi-periodic events results from a biased selection from a Poisson process. They found this to be unlikely. The predicted Bishop-Mammoth Lakes earthquake did not occur: no earthquake with magnitude m_b ≥ 5 is tabulated in the PDE catalog (U.S. Geological Survey, 1994) in this area during 1987-1994.

4. Magnitude distribution

We examine another aspect of the Parkfield prediction: that the earthquake size distribution in the Parkfield region exhibits a characteristic earthquake pattern (Schwartz and Coppersmith, 1984). We count the number of epicenters in a rectangular window 20 × 35 km centered on the San Andreas fault; the coordinates of the box corners are: 35.913°N, 120.638°W; 36.037°N, 120.467°W; 35.789°N, 120.205°W; 35.667°N, 120.375°W (see Figure 2). The epicenter of the 1857 earthquake, as specified in the catalog of Ellsworth (1990), is outside the window. It is possible that the 1857 epicenter coordinates could be plausibly adjusted to fit into the window, or that the window could be modified to the same effect. However, as we argued in Section 2, even if this produces a different result, validation of a research hypothesis should not be based on manipulating data and model parameters.


Figure 2. Map of earthquake epicenters (m ≥ 2.0) from the CALNET (Oppenheimer et al., 1992) and Ellsworth (1990) catalogs in the Parkfield area. Epicenters of the Parkfield characteristic earthquakes have been slightly shifted to avoid overlapping of symbols. The Parkfield region is shown by a rectangle.

We can significantly influence earthquake statistics if we draw a different box around five Parkfield events. There are five degrees of freedom in a rectangular box; even if the box center position and the box direction can be selected based on geologic data, seismic data are used partially to choose the segment ends and the width of the box. Since the number of characteristic earthquakes (five) is comparable to the number of adjustable parameters, statistical results can be biased.

Figure 3 shows the magnitude-frequency relation in the Parkfield area for the CALNET (Oppenheimer et al., 1992), combined California (Real et al., 1978), Caltech (Hutton and Jones, 1993), and Ellsworth's (1990) catalogs. The number of earthquakes is normalized per year, i.e., divided by the time span of each catalog. The dashed line in Figure 3 approximates the Caltech data using the standard GR law with a b-value of 0.95 (Kagan, 1994). We calculate the theoretical magnitude-frequency relation assuming that the earthquake scalar seismic moment is distributed according to a gamma distribution (Kagan, 1993, 1994).


Figure 3. Earthquake size distribution for the Parkfield area. Solid line - earthquake rate according to the Caltech catalog, 1932-1994; dashed line - the GR relation approximating the Caltech data; stars - CALNET data for 1971-1993; o-signs - Real et al. (1978) catalog, 1910-1974; +-signs - catalog of Ellsworth (1990), 1857-1994. Dash-dot lines represent two magnitude-frequency relations inferred from the deformation rate near Parkfield (see text).

The gamma statistical distribution is the GR law modified at the high seismic moment end by introducing the maximum moment parameter, M_xg. The gamma distribution differs from the GR law only for values of the moment (M) approaching and exceeding M_xg. Hence we can use the more familiar GR distribution in most of our considerations. We specify the moment magnitude as:

m = (2/3) log10 M − 6.0    (4)
where the moment M is measured in Nm (Newton meters).
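The conversion in eq. (4) is easy to verify against the moment values used later in this section (10^15 Nm ↔ m = 4, 10^18 Nm ↔ m = 6, 10^21 Nm ↔ m = 8). A small Python helper, assuming the relation as written above:

```python
from math import log10

def moment_to_magnitude(M):
    """Eq. (4): moment magnitude from scalar seismic moment M in Nm."""
    return (2.0 / 3.0) * log10(M) - 6.0

def magnitude_to_moment(m):
    """Inverse of eq. (4): scalar seismic moment in Nm."""
    return 10.0**(1.5 * (m + 6.0))

print(moment_to_magnitude(1e18))      # 6.0
print(magnitude_to_moment(8.75))      # ~1.33e22 Nm, the larger maximum moment used below
```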

If we know the seismic moment rate Ṁ, we can calculate the seismicity level on the assumption that the earthquake size distribution is controlled by the gamma law (Kagan, 1993):

a_0 = Ṁ (1 − β) / [β Γ(2 − β) M_0^β M_xg^(1−β)]    (5)
where a_0 is the seismic activity level (in eq/yr) for earthquakes with moment M_0 and greater, β is the slope of the moment-frequency relation in the log-log plot, and Γ is the gamma function.

The seismic moment rate can be computed for the Parkfield area:

Ṁ = µ s H L    (6)
where µ is the shear modulus, taken as 3×10^10 Pa (Pascals), s is the average yearly slip rate on the San Andreas fault (0.035 m/yr), H is the thickness of the seismogenic zone (taken as 8 or 15 km) and L is the length of the Parkfield segment (35 km). A calculation using (6) yields Ṁ = 2.9×10^17 Nm/yr if H = 8 km or Ṁ = 5.5×10^17 Nm/yr if H = 15 km. Similar results are obtained if we use distributions with a maximum moment other than the gamma (Anderson and Luco, 1983; Kagan, 1993). The characteristic earthquake model assumes that practically all deformation in the Parkfield area is released through earthquakes with magnitude m ≈ 6.0 (M ≈ 10^18 Nm). If so, the return time is from 1.8 to 3.5 years, much smaller than the 22 yr characteristic earthquake period observed during the last 138 years. Even if we take a characteristic earthquake with magnitude m ≈ 6.5, the recurrence time is 10-19 years, still short of the 22 yr period. Harris and Archuleta (1988) and Thatcher and Segall (1990) also note that even m ≈ 6.5 characteristic earthquakes cannot fully account for the tectonic deformation accumulating in the Parkfield area after the 1857 earthquake. Thus, hypothesis H1 can be rejected on the basis of these calculations.
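The moment-rate budget of eq. (6) and the characteristic return times quoted above can be reproduced with a few lines of Python; the only inputs are the values given in the text (shear modulus 3×10^10 Pa, slip rate 0.035 m/yr, segment length 35 km, seismogenic thickness 8 or 15 km, and M = 10^18 Nm for an m ≈ 6.0 event).

```python
mu, slip_rate, length = 3e10, 0.035, 35e3    # Pa, m/yr, m
M_char = 1e18                                # moment of an m ~ 6.0 event, Nm

for thickness in (8e3, 15e3):                # seismogenic thickness, m
    moment_rate = mu * slip_rate * thickness * length   # eq. (6), Nm/yr
    print(f"H = {thickness/1e3:.0f} km: moment rate = {moment_rate:.1e} Nm/yr, "
          f"return time if all moment goes into m 6.0 events = {M_char/moment_rate:.1f} yr")
```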

We use the following input parameters to calculate a_0 according to (5): β = 0.63 (corresponding to a b-value of 0.95), Γ(2 − 0.63) ≈ 0.89, and the maximum magnitude m_xg = 8.0 or 8.75 (M_xg = 10^21 Nm or M_xg = 1.33×10^22 Nm). The above maximum magnitude values correspond either to the smallest acceptable maximum magnitude for the historic California earthquakes (Ellsworth, 1990), or to the maximum magnitude of global seismicity (Kagan, 1994). Using a threshold magnitude m_0 = 4 (M_0 = 10^15 Nm), a_0 is between 0.47 and 2.3 eq/yr (Figure 3): the a_0 range is obtained using the lower value of M_xg with the higher value of Ṁ (see above), and vice versa.
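Eq. (5), in the form reconstructed above, can be checked numerically against the quoted a_0 range. The Python sketch below uses β = 2b/3 with b = 0.95 and the two (Ṁ, M_xg) pairings described in the text; it recovers approximately 0.5 and 2.3 eq/yr. The exact constants in Kagan (1993) may differ slightly, so treat this as an order-of-magnitude check.

```python
from math import gamma

def a0_from_moment_rate(moment_rate, beta, M0, Mxg):
    """Activity a0 (eq/yr above moment M0) implied by eq. (5) for a gamma size distribution."""
    return moment_rate * (1.0 - beta) / (beta * gamma(2.0 - beta) * M0**beta * Mxg**(1.0 - beta))

beta, M0 = 2.0 * 0.95 / 3.0, 1e15                        # b = 0.95, threshold m0 = 4
print(a0_from_moment_rate(5.5e17, beta, M0, 1e21))       # upper estimate, ~2.3 eq/yr
print(a0_from_moment_rate(2.9e17, beta, M0, 1.33e22))    # lower estimate, ~0.5 eq/yr
```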

The theoretical magnitude-frequency relations reasonably approximate the values obtained for the Caltech and CALNET catalogs (Figure 3), especially if one considers that not all of the deformation rate (0.035 m/yr) may be released on the San Andreas fault and that some deformation is released aseismically. The magnitude-frequency relation in Figure 3 displays the number of epicenters in the Parkfield window. For very large earthquakes it is possible that the epicenter is outside the box, but the earthquake still ruptures through the Parkfield segment of the San Andreas fault. We assume in (5) that the seismic moment of any earthquake is released at the epicenter, i.e., the earthquake is taken as a point source. However, recalculating the seismic moment rate using extended sources should yield the same results as (5) if we assume that earthquake ruptures do not preferentially end at special points on a fault trace ("segment" boundaries, for example). Theoretical computations using (5) yield a recurrence period from 35 to 170 years for m ≥ 6.0 earthquakes on the Parkfield segment (Figure 3).

The Caltech data suggest that m ≥ 6.0 events have a rate of 0.0074 eq/yr, i.e., the recurrence time for these earthquakes is 135 years. This inter-earthquake time differs both from the above estimate of a 1.8-3.5 year return time for the characteristic m ≈ 6.0 earthquakes and from the 22 years proposed by Bakun and Lindh (1985). The discrepancy can be explained: if the earthquake size is distributed according to the gamma law, most of the seismic moment rate is released by the largest earthquakes (Kagan, 1993). Kagan and Jackson (1995) show that the characteristic model routinely overestimates the probabilities of large earthquakes when they are interpreted as quasi-periodic characteristic events.

Is it possible that the difference between the rate estimates from the magnitude-frequency law and from the characteristic model is caused by random temporal fluctuations of seismicity? We need to calculate the standard error (σ) of the return time estimate. For the Caltech catalog there are n = 37 events with m ≥ 4 in the Parkfield window (Figure 2). If we assume the Poisson distribution, σ_n ≈ 6, i.e., the coefficient of variation is about 16%. However, for earthquakes with m ≥ 4 the Poisson model is not appropriate, because these events often comprise aftershock sequences, and the standard error of the event number is higher than that predicted by the Poisson model. Kagan (1996) suggests that the negative binomial distribution should be used for the approximation instead. For this distribution, the 68% confidence limits (corresponding to ±σ for the Gaussian distribution) for the m ≥ 6 rate of occurrence are 0.0007-0.0140 eq/yr, and the return time varies from 71.5 to 1430 years. The lower limit of the recurrence time is still far from the characteristic 22-yr period. Moreover, two other catalogs (the CALNET and the combined California) also suggest that the recurrence time for m ≥ 6 events is over 100 years.
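The Poisson part of this error estimate is elementary; the negative binomial limits quoted above require the dispersion estimated in Kagan (1996) and are not reproduced here. A two-line Python check of the 16% coefficient of variation:

```python
from math import sqrt

n = 37                       # m >= 4 events in the Parkfield window (Caltech catalog)
sigma = sqrt(n)              # Poisson standard deviation of the count
print(round(sigma, 1), round(sigma / n, 2))   # ~6 events, coefficient of variation ~0.16
```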

There are four m ≥ 6.0 earthquakes in the 138-year period (the 1881 earthquake has a magnitude of 5.75 in Ellsworth's catalog and the 1857 epicenter is outside the box, see Figure 2). According to the null hypothesis, the expected number of such events in 138 years, based on the Caltech rate, is 1.04. The probability of four or more earthquakes occurring in 138 years is then 2.2%. This probability value is usually considered significant enough to reject a null hypothesis (however, see the discussion in Section 2). One explanation of the discrepancy between the observed number of moderate earthquakes and the extrapolation of the GR relation is the mixing of the two catalogs: magnitudes in the two lists have different systematic errors which may exaggerate the earthquake frequency difference. Moreover, as indicated earlier, the Parkfield segment has been selected on the basis of a large number of moderate earthquakes displaying quasi-periodic behavior. Earthquake catalog data have been a significant factor in this selection (i.e., the selection might have been biased). If we multiply the 2.2% significance level by 15.2, the number of segments which can be expected to have four events or more during the last 138 years (see the previous Section), the characteristic earthquake rate discrepancy becomes statistically insignificant. This conclusion would not change even if we consider that not all earthquakes in the 15.2 segments exhibit a quasi-periodic pattern (see (2)).
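The 2.2% figure is a tail probability of a Poisson count with the expected number of events quoted above; a short Python check, assuming the 0.0074 eq/yr Caltech rate:

```python
from math import exp, factorial

expected = 0.0074 * 138       # expected number of m >= 6 events in 138 years
p_four_or_more = 1.0 - sum(expected**k * exp(-expected) / factorial(k) for k in range(4))
print(round(expected, 2), round(p_four_or_more, 3))   # ~1.0 events expected, P(>= 4) ~ 0.02
```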

5. Hypotheses testing

Bakun and Lindh (1985, p. 619) propose that the inter-earthquake intervals for the Parkfield earthquakes are distributed according to the Gaussian distribution with a mean of 21.9 yr and a standard deviation of 3.1 yr. Thus the probability that the next Parkfield earthquake should have occurred between 1985 and 1993 was estimated to be 95% (see also Savage, 1993, for further discussion and criticism). It is this "original" (H1, see Section 2) variant of the Parkfield model that is considered to have failed by the end of 1992 (Ellsworth, 1993; Roeloffs and Langbein, 1994; Working Group, 1994). A variant of the H3 hypothesis is mentioned briefly by Bakun and Lindh (1985, p. 624). The Gaussian distribution is in principle inappropriate for earthquake time intervals, since these intervals are non-negative (Savage, 1993). The Weibull distribution is a more suitable choice, in particular because it includes the Poisson law as a special case. Since the hypotheses H1 and H2 have not been sufficiently well specified, we consider them together as H1/H2, and we use the Weibull distribution to describe quasi-periodic occurrence for both hypotheses.

To compare the competing hypotheses, we calculate two sets of probabilities according to each model: (a) no characteristic earthquake occurring by the year listed, and (b) the target earthquake occurring within the year. The ratio of probabilities is the likelihood ratio which provides basic information about the merits of each pair of hypotheses. A sufficiently high likelihood ratio (20 or more) suggests that the first hypothesis of each pair is preferred over the second model by a statistically significant margin.

Table 1 shows these probabilities as well as the parameter values determined by maximum likelihood under the assumption that no moderate/large earthquake has occurred by the date listed. If the predicted earthquake does not occur, the parameter estimates of the Weibull distribution will change for hypotheses H1, H2 and H3 (Davis et al., 1989); this is generally not the case for H0. For the H0 hypothesis, the earthquake rate is defined by small shocks. Thus, additional data will only increase the accuracy of evaluating the recurrence interval (assuming that the rate of small earthquakes remains stable).

Table 1. Hypotheses performance in time (probabilities of earthquakes with m ≥ 6.0)

Year   ΔT (yr)   H0 (P-GR)   H1/H2 (W-Ch/W-GR)           H3 (P-Ch)
                 Prob        β       θ (yr)   Prob       θ (yr)   Prob

(a) Probability of the earthquake not occurring by the year listed
1985   19        0.867       4.06    27.9     0.716      21.4     0.411
1995   29        0.804       3.67    26.8     0.263      23.0     0.283
2005   39        0.746       2.60    29.1     0.118      24.7     0.206
2015   49        0.692       2.00    31.8     0.093      26.3     0.156
2050   84        0.532       1.32    40.0     0.070      32.2     0.074
2100   134       0.365       0.96    48.0     0.069      40.6     0.037

(b) Probability of the earthquake occurring in the year listed, given that it has not occurred previously
2005   39        0.0075      3.15    27.6     0.240      24.7     0.041
2015   49        0.0075      2.47    30.0     0.169      26.3     0.038
2050   84        0.0075      1.52    37.4     0.062      32.2     0.031
2100   134       0.0075      1.15    43.1     0.032      40.6     0.025

ΔT - time since the 1966 Parkfield earthquake. β and θ are evaluated in the upper part (a) assuming an open interval (no earthquake); in the lower part (b) a closed interval is used. Hypotheses are abbreviated as follows: P - Poisson temporal distribution, W - Weibull (quasi-periodic); GR - Gutenberg-Richter relation, Ch - characteristic earthquake distribution.

The Weibull probabilities for the H1/H2 hypothesis in the upper part of Table 1 are calculated using the formula (see (1)):

Prob = exp[−(ΔT/θ)^β]    (7)
where ΔT is the time elapsed since the 1966 Parkfield earthquake. The probabilities for the H0 and H3 models are obtained from (7) by setting β = 1. In the lower part of Table 1, the Weibull probabilities are:
Prob = (β/θ)(ΔT/θ)^(β−1)    (8)
and for the H0 and H3 hypotheses Prob = 1/θ.
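Formulas (7) and (8) are simple enough to evaluate directly. The Python snippet below shows how individual Table 1 entries can be reproduced; the H1/H2 columns require the re-estimated Weibull parameters listed in the table, so only the β = 1 (H3) cases are evaluated here as examples.

```python
from math import exp

def prob_no_event(dT, beta, theta):
    """Eq. (7): probability that no event occurs within dT years of the 1966 earthquake."""
    return exp(-(dT / theta)**beta)

def prob_in_year(dT, beta, theta):
    """Eq. (8): Weibull hazard rate, the probability of an event in the year dT after 1966."""
    return (beta / theta) * (dT / theta)**(beta - 1.0)

# H3 (P-Ch) entries use beta = 1 and the theta values listed in Table 1
print(prob_no_event(19, 1.0, 21.4))   # ~0.41, the 1985 row of part (a)
print(prob_in_year(39, 1.0, 24.7))    # ~0.041, the 2005 row of part (b)
```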

Given the earthquake probabilities displayed in Table 1, the H1/H2 and H3 hypotheses can be rejected only after the end of the next century. Comparing H3 with H1/H2 does not yield statistically significant conclusions either. This comparison was partially carried out by Bakun and Lindh (1985, p. 624): even if the predicted earthquake were to occur, the Poisson probability of the earthquake occurring during 1985-1993 was 31%, versus 95% for the original characteristic model. If the characteristic earthquake occurs in the near future (see the second half of Table 1), the difference between H3 and H1/H2 is not large enough to yield statistically significant conclusions. The likelihood ratio of H1/H2 versus H0 is large enough from the end of the 20th century to the beginning of the 21st century to formally reject the null hypothesis at the 5% significance level. However, as explained earlier, the high probability values for the characteristic model can be connected with the data selection.

6. Discussion

Instead of being the most obvious example of characteristic earthquakes (Bakun and Lindh, 1985), the Parkfield earthquake sequence can be interpreted as an accidental occurrence of three (or five) events exhibiting a quasi-periodic regularity. The null hypothesis with three "characteristic" events during the last 75 years cannot be rejected, whereas the null hypothesis with five events during 138 years can formally be rejected at the 5% significance level. However, if we consider the data selection, the retroactive selection of adjustable parameters, and the ambiguity of the seismic data before 1910, the null hypothesis is still viable. The unusual property of the magnitude-frequency relation for the Parkfield area (the number of characteristic earthquakes is significantly higher than an extrapolation from small events would predict) can also be explained by a preferential selection of data. In an area the size of California, such a sequence can appear by chance.

As has been mentioned above, the input parameters for the statistical analysis could not be specified unambiguously. For example, instead of evaluating the rate for m ≥ 6.0 earthquakes, the null hypothesis can be defined for m ≥ 5.5 or m ≥ 6.5 events. Then the GR rate and recurrence time decrease or increase by a factor of about 10^(b/2) ≈ 3. Similarly, the Parkfield box can also be defined differently, with the corresponding curves in Figure 3 yielding lower or higher recurrence times for m ≥ 6 earthquakes. But such modifications of the null hypothesis or the Parkfield model do not change the basic conclusion that the available seismicity data are consistent with the null hypothesis even at the 5% significance level. As has been indicated above (see Section 2), a much lower significance level should be used to reject the null hypothesis.

A frequently used argument against the Poisson assumption and other stochastic models for large earthquakes is that, according to the standard statistical interpretation of these models (Davis et al., 1989), the earthquake probability does not increase with time if an expected event does not occur. Table 1 illustrates this property: the evaluated recurrence interval increases if the characteristic event does not occur. This conclusion seems to contradict the steady accumulation of strain due to plate motion and is considered counter-intuitive and even paradoxical (Roeloffs and Langbein, 1994, p. 321). However, if the null hypothesis H0 is correct, the strongest earthquakes release the major part of the tectonic deformation (Anderson and Luco, 1983; Kagan, 1993): m ≥ 7.5 earthquakes account for more than 50% of the total seismic moment. Thus, the above arguments are applicable only to these large earthquakes. Stress is a tensor, and the postseismic static stress field has a very complex, fractal pattern (see, for example, fig. 3 in Kagan, 1994). Moreover, the stress in seismogenic zones may always be close to a critical state, even after very strong earthquakes, allowing new large events to occur very soon thereafter (Scholz, 1991). Thus the commonly accepted model of strain loading, release, and re-loading needs re-examination in light of large earthquake clustering (Kagan and Jackson, 1995). Although tectonic stress accumulates steadily, it may not be released on the same fault in a quasi-periodic fashion: very large displacements can be stored and then released in a few giant earthquakes. Additionally, a system of faults can accommodate large-scale deformation by releasing it in clusters of large earthquakes occurring on different, possibly closely related faults.

7. Conclusions

The Parkfield prediction has been unsuccessful in many ways. First, the earthquake as forecast by Bakun and Lindh (1985) did not occur. Other interpretations of the Parkfield earthquake history are possible (Davis et al., 1989; Ellsworth, 1993; Savage, 1993; Roeloffs and Langbein, 1994): these models forecast the characteristic earthquake at later times, while rejecting the original Parkfield prediction. For example, Working Group (1994) assumes that the probability of the Parkfield characteristic earthquake is still about 10% per year. We argue that the Parkfield forecast is inadequate at a more fundamental level: the Parkfield sequence can be explained by a simpler model, the Poisson process plus the GR relation. Since the null hypothesis is conceptually less complex than other models, it should not be rejected unless there is statistically significant evidence to the contrary. As shown above, according to the null hypothesis, the return time for an m ≥ 6 earthquake with an epicenter in the Parkfield area is more than 100 years. Thus, the experiment may need to be run for several more decades before a moderate or large earthquake occurs in the area. With regard to statistical tests, the Parkfield experiment by its design cannot answer the question of the validity of the characteristic/quasi-periodic model. As the analysis in this paper has demonstrated, even if an earthquake similar to the predicted one were to occur, it would not be a sufficient basis for drawing statistically significant conclusions.

Acknowledgements

I appreciate partial support from the National Science Foundation through Cooperative Agreement EAR-8920136 and USGS Cooperative Agreement 14-08-0001-A0899 to the Southern California Earthquake Center (SCEC). I thank D.D. Jackson of UCLA, J.C. Savage of USGS, A.A. Gusev of the Institute of Volcanic Geology and Geochemistry in Kamchatka, R.J. Geller of the Tokyo University, F. Evison and D. Vere-Jones of the University of Wellington for very useful discussions. Reviews by J.C. Savage, E. Roeloffs, J. Langbein and an anonymous referee helped to significantly clarify the paper. Publication 4646, Institute of Geophysics and Planetary Physics, University of California, Los Angeles. Publication 291, SCEC.

References

Anderson, J. G. and Luco, J.E., 1983. Consequences of slip rate constraints on earthquake occurrence relation. Bull. Seismol. Soc. Am., 73: 471-496.
Anderson, P.W., 1992. The Reverend Thomas Bayes, needles in haystacks, and the fifth force. Phys. Today, 45(1): 9-11.
Bain, L.J. and Engelhardt, M., 1981. Simple approximate distributional results for confidence and tolerance limits for the Weibull distribution based on maximum likelihood estimators. Technometrics, 23: 15-20.
Bakun, W.H. and Lindh, A.G., 1985. The Parkfield, California, earthquake prediction experiment. Science, 229: 619-624.
Ben-Zion, Y., Rice, J.R. and Dmowska, R., 1993. Interaction of the San-Andreas fault creeping segment with adjacent great rupture zones and earthquake recurrence at Parkfield. J. Geophys. Res., 98: 2135-2144.
Bolt, B.A. and Miller, R.D., 1975. Catalogue of Earthquakes in Northern California and Adjoining Areas; 1 January 1910 - 31 December 1972, Seismographic Stations, Berkeley, Calif., 567 pp.
Davis, P.M., Jackson, D.D. and Kagan, Y.Y., 1989. The longer it has been since the last earthquake, the longer the expected time till the next? Bull. Seismol. Soc. Am., 79: 1439-1456.
Ellsworth, W.L., 1990. Earthquake history, 1769-1989. In: R.E. Wallace (Editor), The San Andreas Fault System, California. U.S. Geol. Surv., Prof. Pap., P 1515: 153-187.
Ellsworth, W.L., 1993. Getting beyond numerology. Nature, 363: 206-207.
Feller, W., 1966. An Introduction to Probability Theory and its Applications, 2. Wiley, New York, 626 pp.
Harris, R.A. and Archuleta, R.J., 1988. Slip budget and potential for a M7 earthquake in Central California. Geophys. Res. Lett., 15: 1215-1218.
Hutton, L.K. and Jones, L.M., 1993. Local magnitudes and apparent variations in seismicity rates in Southern California. Bull. Seismol. Soc. Am., 83: 313-329.
Kagan, Y.Y., 1993. Statistics of characteristic earthquakes. Bull. Seismol. Soc. Am., 83: 7-24.
Kagan, Y.Y., 1994. Observational evidence for earthquakes as a nonlinear dynamic process. Physica D, 77: 160-192.
Kagan, Y.Y., 1996. Comment on "The Gutenberg-Richter or characteristic earthquake distribution, which is it?" by S. G. Wesnousky. Bull. Seismol. Soc. Am., 86: 274-285.
Kagan, Y.Y. and Jackson, D.D., 1995. New seismic gap hypothesis: Five years after. J. Geophys. Res., 100: 3943-3959.
Kotz, S. and Johnson, N.L., 1988. Encyclopedia of Statistical Sciences. Wiley, New York, Vol. 9.
Nelson, W., 1982. Applied Life Data Analysis. Wiley, New York, 634 pp.
Oppenheimer, D.H., Klein, F.W. and Eaton, J.P., 1992. The First 20 Years of CALNET, the Northern California Seismic Network. U.S. Geol. Surv., Open-File Rep. 92-0209, Menlo Park, CA, 26 pp.
Oreskes, N., Shrader-Frechette, K. and Belitz, K., 1994. Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263: 641-646, also see Science, 264: 331.
Parkfield (Article collection), 1988. Earthquakes & Volcanoes, 20: 39-92.
Real, C.R., Toppozada, T.R. and Parke, D.L., 1978. Earthquake Catalog of California; January 1, 1900 - December 31, 1974, Calif. Div. Mines Geol., Spec. Publ. 52, 15 pp.
Roeloffs, E. and Langbein, J., 1994. The earthquake prediction experiment at Parkfield, California. Rev. Geophys., 32: 315-336.
Rhoades, D.A. and Evison, F.F., 1989. Time-variable factors in earthquake hazard. Tectonophysics, 167: 201-210.
Savage, J.C., 1993. The Parkfield prediction fallacy. Bull. Seismol. Soc. Am., 83: 1-6.
Savage, J.C. and Cockerham, R.S., 1987. Quasi-periodic occurrence of earthquakes in the 1978-1986 Bishop-Mammoth lake sequence, Eastern California. Bull. Seismol. Soc. Am., 77: 1347-1358.
Scholz, C.H., 1991. Earthquakes and faulting: Self-organized critical phenomena with a characteristic dimension. In: T. Riste and D. Sherrington (Editors), Spontaneous Formation of Space-Time Structures and Criticality. Kluwer, Dordrecht, pp. 41-56.
Schwartz, D.P. and Coppersmith, K.J., 1984. Fault behaviour and characteristic earthquakes: examples from Wasatch and San Andreas fault zones. J. Geophys. Res., 89: 5681-5698.
Segall, P. and Du, Y., 1993. How similar were the 1934 and 1966 Parkfield earthquakes? J. Geophys. Res., 98: 4527-4538.
Sieh, K., Stuiver, M. and Brillinger, D., 1989. A more precise chronology of earthquakes produced by the San Andreas fault in Southern California. J. Geophys. Res., 94: 603-623.
Thatcher, W. and Segall, P., 1990. Reid's hypothesis, the time-predictable model, and the Parkfield earthquake. Eos, Trans. AGU, Fall Meet. Suppl., 71(43): 1473.
Toppozada, T.R., 1992. Parkfield earthquake history. Eos, Trans. AGU, Fall Meet. Suppl., 73(43): 406.
U.S. Geological Survey, 1994. Preliminary determination of epicenters (PDE), Monthly Listings. U.S. Dep. Inter., Natl. Earthquake Inf. Cent., Denver.
Vere-Jones, D., 1975. Stochastic models for earthquake sequences. Geophys. J. R. Astron. Soc., 42: 811-826.
Working Group to evaluate the Parkfield earthquake prediction experiment, 1994. Earthquake Research at Parkfield, California, 1993 and beyond. U.S. Geol. Surv., Circular 1116, Wash., 14 pp.
Working Group on the probabilities of future large earthquakes in southern California, 1995. Seismic hazards in southern California: Probable earthquakes, 1994-2024. Bull. Seismol. Soc. Am., 85: 379-439.
Wilks, S.S., 1962. Mathematical Statistics. Wiley, New York, 644 pp.