Analysis of earthquake sequences and activity rates: implications for seismic hazard



Yazdi, Pouye

2021-A



Resumen

Earthquakes are natural phenomena that release a large amount of energy, which travels as seismic waves propagating in all directions from the Earth's interior. When these waves reach the Earth's surface, they produce a seismic shaking of the ground whose consequences are often harmful to people, the environment and the built environment. Geophysicists study the causes of earthquakes to better understand how they occur and to mitigate these adverse effects. To this end, they consider that earthquakes are not isolated events, but are related to, and interact with, a series of earthquakes with which they share the same spatiotemporal window and display common or similar features. This thesis deals with the study of such seismic series. I consider two different methodologies: statistical seismology and physics-based methods.

The statistical approach considers an earthquake as a point object with attributes that appear in seismic catalogs, such as the hypocentral coordinates, the origin time and the magnitude. These attributes are treated as random variables that follow probability distributions. Therefore, the quality and homogeneity of the available seismic catalog directly affect the validity and suitability of the statistical models that describe the probability distributions of the earthquake attributes. In this thesis I use several statistical models in finite spatiotemporal windows and apply them individually or in combination, according to each case studied. For example, the Gutenberg-Richter law is used to study the frequency of earthquakes of a given magnitude, and the Omori-Utsu model and the epidemic type aftershock sequence (ETAS) model are applied to analyse the temporal variation of earthquake occurrence. The ETAS model is a point process in which each earthquake represents a point with the above attributes. This model assumes that all events can potentially trigger aftershocks, as in a branching process. Applying the ETAS model provides separate estimates of the background and aftershock activity rates, which are associated with different types of sources, namely: (1) aseismic sources such as tectonic loading, rapid magma or fluid intrusions, or transient slow-slip events; and (2) seismic sources, such as the elastic triggering between two earthquakes, which controls the branching ratio. The respective contributions of seismic and aseismic sources, as well as their spatiotemporal variations, are decisive for our understanding of the nature of seismic series. In short, the outcome of the statistical modelling indicates the nature of the branching process through quantitative parameters. In addition, these parameters characterize the changes in seismic rates (in the time domain) and in earthquake density (in the space domain), which are two important ingredients of probabilistic seismic hazard assessment (PSHA).
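As a minimal illustration of the Gutenberg-Richter law mentioned above, the short Python sketch below evaluates the expected number of events at or above a magnitude threshold; the a- and b-values are illustrative placeholders, not values fitted in this thesis.

def gutenberg_richter_count(m, a=4.5, b=1.0):
    """Gutenberg-Richter law: log10 N(>= m) = a - b * m.

    Returns the expected number of events with magnitude >= m.
    The a- and b-values are illustrative placeholders, not fitted values.
    """
    return 10.0 ** (a - b * m)

if __name__ == "__main__":
    for m in (2.0, 3.0, 4.0, 5.0):
        print(f"M >= {m}: about {gutenberg_richter_count(m):.0f} expected events")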

The physics-based approach, in contrast, requires information on other seismological attributes, such as the seismic phases of seismograms (for the preliminary relocation process), the slip distribution over the rupture plane, the regional stress pattern, etc. In this thesis I apply the Coulomb failure stress (CFS) static stress transfer model to analyse the interaction between earthquakes, or between an earthquake and potential rupture planes in the Earth's crust. In order to analyse aftershock triggering, I study the static stress changes due to (1) events with a constrained slip distribution on the rupture plane; and (2) slip processes over extended areas of the subduction interface. This approach involves modelling the slips mentioned and improving the hypocentral locations in order to obtain reliable results. The methods employed are applied individually or jointly to the study of four seismic sequences of different nature that take place in different regions of the world:

(I) a swarm-type seismic series, formed by many low-magnitude earthquakes, occurring in southern Spain. The series is divided into three parts and studied with statistical methods. I characterize this swarm mainly as the result of a sustained aseismic forcing agent, which accelerates during the most active phase of the series. I propose the presence of an additional source in the area, besides the tectonic loading and the earthquake-earthquake triggering, that induces the swarm-type activity.
(II) a mainshock-aftershock sequence that starts with a doublet of large earthquakes in northwestern Iran. In this case, the statistical method and the stress-change modelling are used in combination. I find that the less energetic events are associated with an aseismic forcing agent. The aftershock triggering by the doublet is consistent with the hypocentral locations of the deeper aftershocks.
(III) a crustal mainshock-aftershock series occurring in southern Mexico. In this case, the hypothesis that the series was triggered by an aseismic subduction process (tectonic loading and slow-slip events, SSEs) is investigated through ∆CFS modelling. The results show that the triggering impact of the tectonic loading before the slow-slip event is not evident, but cannot be ruled out either. In contrast, the initial phase of the slow-slip event presents a slight but clear favouring influence on the triggering of this seismic series.
(IV) another series associated with a large earthquake in the Mexican subduction zone. In this case, the Coulomb stresses are modelled to determine the possible triggering of the two main aftershocks, which appear to rupture different asperities close to the main rupture.
The statistical method is then applied to determine the spatiotemporal variation of seismic rates on the subduction interface before and after the occurrence of the seismic series. The corresponding seismic-rate maps are used in separate probabilistic seismic hazard calculations to compute the exceedance rates of given levels of strong ground motion, observing a small increase in the expected acceleration with the rates updated after the series.

The studies developed in this thesis provide a more revealing description of specific aspects of the analysed series. Nevertheless, the procedures followed and the conclusions reached can be used in the study of other seismic series around the world. The interest of incorporating the spatiotemporal variations of seismicity parameters (seismic activity rates) into hazard estimation methods that include temporal updates of those parameters is also pointed out.



Abstract

Earthquakes are natural phenomena that release enormous amounts of energy, which travels in all directions as seismic waves and causes ground shaking when it reaches the Earth's surface. To mitigate their adverse effects on human lives and the environment, geophysicists investigate their causes. This involves studying earthquakes not as isolated phenomena, but in relation and interaction with a series of earthquakes that share the same time and/or space windows and display common or similar features. This dissertation deals with the study of such series or earthquake sequences. I consider two different methodologies: statistical seismology and physics-based methods.

The statistical approach considers earthquakes as point objects with given attributes: origin time, hypocentral coordinates and magnitude, which are available in earthquake catalogs. These attributes are treated as random variables that follow probability distributions. Thus, the quality of earthquake catalogs directly affects the performance and validity of the statistical models that describe the probability distributions of the earthquake attributes. Here, I apply several statistical models in finite spatiotemporal windows and examine their application individually or in combination. For instance, the earthquake magnitude-frequency relation is modelled using the Gutenberg-Richter law, and the temporal variation of earthquake occurrence is analysed with the Omori-Utsu and epidemic type aftershock sequence (ETAS) models. The ETAS model is a point process in which each earthquake represents a point with the mentioned attributes. This model assumes that all earthquakes can potentially trigger aftershocks, forming a branching process. Applying the ETAS model provides separate estimates of background and aftershock rates, which are associated with two distinct source types, respectively: (1) aseismic sources such as tectonic loading, rapid fluid or magma intrusions and transient slow-slip events; and (2) seismic sources (i.e., earthquake-earthquake elastic triggering), which control the branching ratio. The respective contributions of seismic and aseismic sources, and their spatiotemporal variations, largely determine our understanding of the nature of earthquake sequences. Overall, the results of statistical modelling indicate the nature of the branching process through quantitative parameters. Additionally, these parameters characterize changes in earthquake rates (in the time domain) and earthquake densities (in the space domain), which together constitute an important component of probabilistic seismic hazard assessment (PSHA).
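To make the temporal ETAS formulation concrete, the Python sketch below evaluates a conditional intensity of the commonly used form λ(t|Ht) = µ + Σ_{ti<t} K exp(α(mi − mc)) (t − ti + c)^(−p). The parameter values are illustrative placeholders, and the background rate is taken constant here, whereas the thesis also considers a time-dependent µ(t).

import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.1, K=0.02,
                   alpha=1.0, c=0.01, p=1.1, m_c=2.0):
    """Temporal ETAS conditional intensity (events per day) at time t.

    lambda(t | H_t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m_c)) / (t - t_i + c)**p
    All parameter values are illustrative; mu is taken constant in this sketch.
    """
    event_times = np.asarray(event_times, dtype=float)
    event_mags = np.asarray(event_mags, dtype=float)
    past = event_times < t
    dt = t - event_times[past]
    triggered = K * np.exp(alpha * (event_mags[past] - m_c)) / (dt + c) ** p
    return mu + triggered.sum()

if __name__ == "__main__":
    times = [0.0, 0.5, 2.0]   # occurrence times in days (hypothetical)
    mags = [4.5, 3.0, 3.2]    # magnitudes above m_c (hypothetical)
    print(f"intensity at t = 5 days: {etas_intensity(5.0, times, mags):.3f} events/day")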

The physics-based approach, in turn, requires information about other seismological attributes, such as seismic phases, and geophysical parameters, such as the slip distribution over the ruptured areas, regional stresses, etc. The model applied in this dissertation computes the changes in Coulomb failure stress (∆CFS) to analyse the interaction between earthquakes, or between earthquakes and potential rupture planes in the Earth's crust. To analyse aftershock triggering, I study static stress changes due to (1) earthquakes with a constrained slip distribution on a fault plane, and (2) slip processes over an extended area of a subduction interface. This approach involves modelling the mentioned slips and also improving the earthquake hypocentral locations in order to obtain reliable results (a minimal numerical sketch of the ∆CFS computation follows the list of sequences below). These approaches are implemented individually or jointly in the study of four earthquake sequences of different natures in different parts of the world:

(I) A swarm sequence of low-magnitude earthquakes in southern Spain is modelled using the statistical approach over its different activity periods. I characterize this swarm as driven by a predominant and continuous aseismic forcing that starts in the pre-activity period and accelerates during the main-activity period. I propose the presence of an additional aseismic source in the area that, together with the tectonic loading and seismic triggering, induces the swarm activity.
(II) A mainshock-aftershock sequence that starts with a large earthquake doublet in northwestern Iran is studied via both statistical and ∆CFS modelling. We find that the less energetic events are associated with aseismic forcing. The coseismic ∆CFS triggering by the doublet agrees with the hypocentral locations of the deeper aftershocks.
(III) The possibility that a shallow crustal mainshock-aftershock sequence in southern Mexico was triggered by aseismic subduction processes (tectonic loading and a slow-slip event) is studied through ∆CFS modelling. The results show that the triggering impact of the tectonic forcing before the slow-slip event is not evident but cannot be ruled out. In contrast, the early phase of the slow-slip event presents a clear, though small, favouring influence on the triggering of this sequence.
(IV) A thrust earthquake in southern Mexico is studied using both statistical and ∆CFS modelling. The spatiotemporal variation of the earthquake rate on the subduction interface before and after this earthquake is modelled and incorporated into PSHA. A small rise in the annual exceedance rate of strong ground motion highlights the importance of updating earthquake rates for seismic hazard assessments.
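As a minimal numerical counterpart to the ∆CFS approach described above, the following Python sketch resolves a stress-change tensor onto a receiver plane using the usual Coulomb criterion ∆CFS = ∆τ + µ′∆σn, with the normal stress taken positive in tension (unclamping) and µ′ an effective friction coefficient. The tensor, plane geometry and friction value are arbitrary illustrative inputs, not any of the rupture models used in this thesis.

import numpy as np

def delta_cfs(stress_change, normal, slip_dir, mu_eff=0.4):
    """Coulomb failure stress change on a receiver plane.

    delta_CFS = delta_tau + mu_eff * delta_sigma_n, where delta_tau is the shear
    stress change resolved along the slip direction and delta_sigma_n the normal
    stress change (positive = unclamping).

    stress_change : 3x3 symmetric stress-change tensor (e.g., in bars)
    normal        : unit normal of the receiver plane
    slip_dir      : unit slip direction lying within the plane
    mu_eff        : effective friction coefficient (illustrative value)
    """
    stress_change = np.asarray(stress_change, dtype=float)
    n = np.asarray(normal, dtype=float)
    s = np.asarray(slip_dir, dtype=float)
    traction = stress_change @ n      # traction vector acting on the plane
    d_sigma_n = traction @ n          # normal component (tension positive)
    d_tau = traction @ s              # shear component along the slip direction
    return d_tau + mu_eff * d_sigma_n

if __name__ == "__main__":
    # Purely illustrative uniaxial stress change of 1 bar along x
    ds = np.diag([1.0, 0.0, 0.0])
    n = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5)])    # 45-degree dipping plane
    s = np.array([np.sqrt(0.5), 0.0, -np.sqrt(0.5)])   # down-dip slip direction
    print(f"Delta CFS = {delta_cfs(ds, n, s):+.3f} bar")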

The studies carried out throughout this thesis provide a more insightful description of some specific earthquake sequences and aspects of their origin. Nevertheless, the conclusions reached are of use in the study of other earthquake sequences with similar seismotectonic conditions. It is also of great importance for seismic hazard assessment to include the parametrized spatiotemporal variation of earthquake rates in rate-updating methods.



Índice

1 Introduction
1.1 Preface
1.2 Theoretical framework
1.2.1 Types of earthquake sequences
      Mainshock-aftershock
      Swarm
1.2.2 Natural origin of earthquake sequences
1.2.3 Earthquake rate
1.2.4 Exceedance rate of the ground motion
1.3 Research context
1.3.1 Swarm-type sequence
      The 2012-2013 Torreperogil-Sabiote sequence, southern Spain
1.3.2 Mainshock-aftershock sequence; crustal faulting
      The 2012 Ahar-Varzeghan sequence, north-western Iran
      The 2001 Coyuca sequence, southern Guerrero, Mexico
1.3.3 Subduction sequence; large thrust earthquake
      The 2014 Papanoa sequence, southern Guerrero, Mexico
2 Methodology
2.1 Statistical approach
2.1.1 Magnitude-frequency distribution
2.1.2 Seismicity rate modelling
      Hazard function
      Conditional intensity function
      Stationary rate
2.1.3 Omori-Utsu aftershock rate
2.1.4 Temporal ETAS modelling
      Branching ratio
2.1.5 Seismicity density modelling
      Kernel function
2.1.6 Aftershock density
2.1.7 Spatiotemporal ETAS modelling
2.2 Earthquake re-location
2.3 Static stress modelling
2.3.1 Earthquake source modelling
2.3.2 Coulomb failure stress change
      ∆CFS for an earthquake
      ∆CFS for optimum ruptures
2.4 Probabilistic seismic hazard estimation
2.4.1 Earthquake epicentre density rate estimation
2.4.2 Exceedance rate of the strong ground motion
3 Swarm-Type Sequence
  «Statistical analysis of the 2012-2013 Torreperogil-Sabiote seismic series, Spain.»
4 Mainshock-aftershock sequence; crustal faulting (I)
  «Analysis of the 2012 Ahar-Varzeghan (Iran) seismic sequence: Insights from statistical and stress transfer modeling.»
5 Mainshock-aftershock sequence; crustal faulting (II)
  «The 2001 Coyuca seismic sequence.»
5.1 Introduction
5.2 Seismotectonics setting
5.2.1 Guerrero seismic gap
5.2.2 Slow-slip events in Guerrero
5.2.3 2001-2002 slow-slip event
5.3 Data
5.4 Subduction interface slip modelling
5.4.1 Phase-I: back-slip modelling
5.4.2 Phase-II: slow-slip modelling
5.4.3 ∆CFS calculations
5.5 Discussion and conclusions
6 Subduction sequence; large thrust earthquake
  «Analysis of the 2014 Mw 7.3 Papanoa (Mexico) earthquake: Implication for seismic hazard assessment.»
7 Summary and Conclusions
References


Conclusiones

The four earthquake sequences studied in this dissertation reflect the result of my survey of earthquake sequences in distinct geographical zones with different seismotectonic settings. These analyses share similar approaches for investigating the addressed aspects of the earthquake sequences.

The statistical approach is applied to analyse the earthquake sequences assuming that they are outcomes of a random point process that generates earthquakes (points) with random magnitude, time and location. This approach searches for the laws that best describe the earthquake process and aims to identify the probability distributions of the earthquake parameters by fitting and testing stochastic models in finite time and space windows. Extrapolating the modelled activity rates beyond these windows does not account for changes in the earthquake-generation process. Nevertheless, this approach can be insightful in analysing the causality of earthquake sequences, which is an essential input to earthquake forecasting efforts.

For all the studied sequences, I initially take the distribution of earthquake magnitudes as the main key for classifying the sequences into mainshock-aftershock or swarm types. In chapters 3, 4 and 6, I perform a magnitude-frequency analysis using the Gutenberg-Richter law and use the epidemic type aftershock sequence (ETAS) model to characterize each sequence. This model considers that the earthquakes in a sequence epidemically induce new cascades of aftershocks. Thus, I examine its applicability and performance for understanding the contribution of aseismic and seismic forcing to the generation of the earthquake sequences. Figure 7.1 gathers the ETAS model predictions of the cumulative number of events versus time for the three mentioned chapters; the close fit to the observations shows that the ETAS model reproduces the observed activity well.
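The comparison shown in Figure 7.1 amounts to integrating the fitted conditional intensity over the observation window and comparing the result with the observed cumulative count. The Python sketch below performs that integration numerically for an arbitrary intensity function; the intensity used in the example is a placeholder, not one of the fitted models.

import numpy as np

def expected_cumulative_count(intensity, t_end, n_steps=2000):
    """Expected cumulative number of events, N(t_end) = integral_0^t_end lambda(s) ds.

    intensity : callable giving the conditional intensity (events per day) at time s
    t_end     : length of the observation window (days)
    A simple trapezoidal rule is used; this is a schematic check of the kind shown
    in Figure 7.1, not the estimation code used in the thesis.
    """
    t = np.linspace(0.0, t_end, n_steps)
    lam = np.array([intensity(s) for s in t])
    return float(np.sum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t)))

if __name__ == "__main__":
    # Illustrative intensity: constant background plus one Omori-type burst at t = 10 days
    lam = lambda s: 0.2 + (30.0 / (s - 10.0 + 0.05) ** 1.1 if s > 10.0 else 0.0)
    print(f"expected events in 100 days: {expected_cumulative_count(lam, 100.0):.1f}")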

Brief conclusions from the statistical analyses of the sequences introduced in chapters 3, 4 and 6 are listed below:

Figure 7.1: Results of the ETAS modelling in the time domain for the earthquake sequences of 2012-2013 Torreperogil-Sabiote, main activity phase (a), 2012 Ahar-Varzeghan (b), and 2014 Papanoa (c and d).

>>> In Yazdi et al. (2017), we model the activity rates related to the 2012-2013 Torreperogil-Sabiote swarm series during the pre-, main- and post-activity phases of this long-lasting, highly clustered sequence (chapter 3). We examine the temporal variations in the background activity rate µ(t) and identify aseismic forcing as the leading process of earthquake generation in all three phases. The temporal ETAS modelling shows that during the main-activity phase µ(t) rises significantly, from an average of ∼ 0.1 to ∼ 2.9 events per day. Consequently, the aftershock contribution also rises, from ∼ 20% to ∼ 40%, in this phase. On the other hand, the productivity parameter α = 0.975 during this phase (with mc = 1.3) is smaller than the typical values (> 1.5) for mainshock-aftershock sequences (e.g., Hainzl & Ogata, 2005; Hainzl et al., 2013) and agrees with a swarm-type activity. The low contribution of earthquake-earthquake triggering does not support tectonic loading (the source of tectonic earthquakes, which are usually followed by numerous aftershocks) as the only aseismic source in the generation of background events. We suggest that the observed rise in background activity might be influenced by some additional forcing, apart from the tectonic loading and internal stress interactions.
The GR modelling shows a significant decrease of the b-value from 1.5 to 1.1 in the main-activity phase, which is due to the occurrence of bigger events (including the largest earthquake of the sequence, with Mw = 3.9). During the post-activity phase, the sequence displays a lower b-value of 0.8 and a higher α ∼ 1.5, which respectively point to a lower proportion of small-magnitude events and a higher aftershock productivity. This change may indicate a possible transition to a higher stress level due to continuous aseismic forcing since the pre-activity period.
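The b-values quoted above are typically obtained with the classical maximum-likelihood estimator (Aki-Utsu form), b = log10(e) / (⟨M⟩ − (mc − ∆M/2)). A minimal Python sketch follows; the synthetic catalog is purely illustrative and is not the Torreperogil-Sabiote data set.

import numpy as np

def b_value_mle(magnitudes, m_c, delta_m=0.1):
    """Maximum-likelihood b-value estimate (Aki-Utsu form).

    b = log10(e) / (mean(M) - (m_c - delta_m / 2))
    magnitudes : catalog magnitudes (only events with M >= m_c are used)
    m_c        : completeness (cut-off) magnitude
    delta_m    : magnitude bin width of the catalog
    """
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - delta_m / 2.0))

if __name__ == "__main__":
    # Synthetic catalog with a true b-value of about 1.0, binned to 0.1 magnitude units
    rng = np.random.default_rng(0)
    raw = 1.25 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
    mags = np.round(raw, 1)
    print(f"estimated b-value: {b_value_mle(mags, m_c=1.3):.2f}")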

>>> In Yazdi et al. (2018), we study the 2012 Ahar-Varzeghan earthquake sequence (chapter 4). We analyse a series of aftershocks associated not with a single mainshock but with a doublet of Mw 6.4 and Mw 6.2 earthquakes with very close spatiotemporal separation. The temporal ETAS modelling with a cut-off magnitude of 2.0 shows that the aftershock proportion is relatively small, about 30% (with α = 1.48). In turn, applying larger cut-off magnitudes of 2.5 and 3.0 increases the aftershock productivity parameter α to 1.67 and 1.88, respectively. This implies that earthquake-earthquake triggering generates the majority of the earthquake population with magnitudes ≥ 2.5 (Figure 5 in chapter 4).
The GR modelling shows that the b-value does not vary significantly during the sequence and stays below 1.0 (Figure 4 in chapter 4), indicating a considerable contribution of larger magnitudes.
Overall, the incomplete catalog records of low-magnitude events for this sequence and our choice of a relatively high cut-off magnitude of 2.0 considerably influence the results of the ETAS modelling. Accordingly, the less energetic, low-magnitude events are mainly associated with the background activity. Some authors (e.g., Djamour et al., 2011; Ghods et al., 2015) indicate that, despite the absence of important earthquake activity in the recent past, tectonic stresses are actively accumulating in the study area. Our results support the view that, in addition to the elastic stress release due to the mainshock-aftershock mechanism and the built-up tectonic stress on the faults, an ongoing aseismic forcing with a yet-undefined source contributes to the generation of background earthquakes with small magnitudes.

>>> In addition to the two crustal earthquake sequences mentioned above, in Yazdi et al. (2019) we apply the ETAS model to the subduction-interface earthquake activity over almost two decades in the area where the 2014 Papanoa (Mexico) earthquake sequence occurred (chapter 6). The sudden increase observed in the overall population of earthquakes since the Mw 7.3 Papanoa mainshock is modelled and the variations of the activity rate λ(t|Ht) are analysed.
For a relatively high mc = 3.8, we find a very small fluctuation in the background rate µ(t), which mostly stays around 0.011 d⁻¹ and then jumps to 0.017 d⁻¹ following this large thrust earthquake. Since 2014, however, mc has a lower value of 2.9, for which µ(t) again remains constant (Figure 7.1d).
The results point to a long-term rise (lasting at least until the end of 2017) in the aseismic forcing following the Papanoa mainshock. Apparently, the subduction interface in this area is experiencing a higher contribution of tectonic earthquakes with magnitudes ≥ 3.8 in comparison with the pre-mainshock period. Table 2 in chapter 6 shows that the α-value does not indicate the considerable aftershock productivity usually expected for a large thrust earthquake (Hainzl et al., 2010). We also attempt to apply the spatiotemporal version of the ETAS model with a constant background rate µ (Lombardi, 2017). The difference obtained between the spatial distributions of the background activity before and after the Papanoa mainshock is consistent with the location of the mainshock and of two other large events, with Mw 6.6 and Mw 6.2, on 8th and 10th May 2014, respectively.

Another critical parameter in the statistical analysis of aftershock sequences is the exponent of the aftershock rate decay, p. The p-value stays close to 1.1 for both mainshock-aftershock sequences (Table 2 in both chapters 4 and 6), well within the range [0.8, 1.2] given by Utsu & Ogata (1995).
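For reference, the modified Omori (Omori-Utsu) relation behind this p-value is n(t) = K / (t + c)^p; the Python sketch below evaluates the rate and its closed-form time integral. The K and c values are illustrative placeholders, not the fitted parameters reported in chapters 4 and 6.

import numpy as np

def omori_utsu_rate(t, K=50.0, c=0.05, p=1.1):
    """Modified Omori (Omori-Utsu) aftershock rate: n(t) = K / (t + c)**p (events/day)."""
    return K / (t + c) ** p

def omori_utsu_count(t, K=50.0, c=0.05, p=1.1):
    """Expected number of aftershocks in [0, t] (closed form; assumes p != 1)."""
    return K * (c ** (1.0 - p) - (t + c) ** (1.0 - p)) / (p - 1.0)

if __name__ == "__main__":
    days = np.array([0.1, 1.0, 10.0, 100.0])
    print("rate (events/day):", np.round(omori_utsu_rate(days), 2))
    print("cumulative count :", np.round(omori_utsu_count(days), 1))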

In summary, the results of the ETAS modelling of these mainshock-aftershock sequences do not show the high aftershock population due to earthquake-earthquake triggering that is expected for large-magnitude mainshocks. Hainzl et al. (2013) discuss that the magnitude-dependent aftershock productivity (α) can be underestimated in ETAS modelling if catalog limitations and aseismic transients are ignored. In the modelling presented above, aseismic transients are not neglected in the parameter estimation, since we consider time-dependent background rates. In turn, the missing small-magnitude records in our catalogs contribute to the relatively low productivity values of α, which are compensated, through the parameter optimization, by abrupt increases in the background rate µ(t). Indeed, overcoming the deficiencies of the earthquake database, whatever statistical model we use, can lead to more definitive conclusions and further implications, for instance for seismic hazard and risk.

I also apply the physics-based approach in this dissertation. This approach is implemented by computing the changes in Coulomb failure stress (∆CFS) following slip over a rupturing surface, which can be a fault plane or a subduction interface. For ∆CFS modelling, it is essential to have a reliable rupture model, which gives the slip distribution over the ruptured surface. Moreover, improved accuracy in the determination of aftershock hypocentral locations is crucial for interpreting a cause-effect relation between mainshock and aftershocks. A summary of the conclusions reached is listed below:

>>> In Yazdi et al. (2018), we survey the triggering impact of the first mainshock on the occurrence of the second one, for which no clear evidence of the rupture-plane orientation had previously been found. We design a multiple-strike rupture plane for the first shock, with Mw 6.4, and compute its triggering impact on the second shock, with Mw 6.2, at its hypocentral location (Ghods et al., 2015), using the focal mechanism solutions given by CMT. We also analyse the spatial relation between the 2012 Ahar-Varzeghan doublet and its aftershock distribution, which exhibits a mainly EW epicentral alignment, although the largest aftershock, with Mw 5.6 on 7th November, is accompanied by a NS-oriented cluster.
The result suggests that if the first shock triggered the second one, it is more likely that the second shock occurred on a geometry (10°/50°/36°) almost perpendicular to the first shock. This result supports the interpretation of other studies such as Donner et al. (2015), and contradicts the findings of Ansari (2016) and Momeni & Tatar (2018) about the rupture plane of the second shock.
Using the obtained focal mechanisms of the doublet, we compute ∆CFS maps for optimum thrust and strike-slip ruptures in the area. To reduce the adverse influence of large depth uncertainties, the ∆CFS maps are estimated for two depth ranges of 5 km thickness ([4, 9] km and [16, 21] km), and the averaged value in each range is compared with the spatial distribution of the relocated aftershocks within that depth range. The result exhibits considerable consistency between the positively charged area and the aftershock distribution for the depth range [16, 21] km for both the optimum thrust and strike-slip mechanisms. However, the shallow cluster of relocated aftershocks does not coincide significantly with the positively charged areas for either rupture mechanism.

>>> In Yazdi et al. (2019), we analyse the consistency of the aftershock distribution with the positive ∆CFS pattern due to the Mw 7.3 Papanoa earthquake and the next two largest events of the sequence, with Mw 6.6 and 6.2. The ∆CFS is estimated for pure thrust slip over the subduction interface. To account for the depth errors of the aftershock hypocenters, we consider earthquakes within ±5 km vertical distance of the subduction interface, i.e., a zone 10 km thick. The result shows that the ∆CFS map computed using the slip model of Mendoza & Martínez López (2017) agrees well with the distribution of relocated aftershocks.

>>> Chapter 5 presents another application of the ∆CFS calculation, which I use to investigate the causality of the 2001 Coyuca mainshock-aftershock sequence in the Guerrero seismic gap. In this study, I program a 3D slip model over an extended area of the Guerrero subduction zone in Mexico. This model uses the short-term coupling index between the Cocos and North American plates, estimated by Radiguet et al. (2016) from GPS inversions for inter-SSE periods of ∼ 4 years. I consider a constant velocity for the Cocos plate shortening along the Middle America Trench and simulate the back-slip over patches of 5 × 5 km² based on a 3D model of the coupling index. The ∆CFS maps due to the subduction-interface back-slip model do not show a clear triggering influence of the 1998-2001 inter-SSE period on the occurrence of the 8th October 2001 Coyuca mainshock. The result indicates a small positive ∆CFS of ∼ +0.005 bar at the shallower depth of 5 km on a southward rupture plane given by Iglesias et al. (2004); at greater depth, however, ∆CFS ∼ −0.1 bar. In contrast, applying only 10% of the slip of the 2001-2002 SSE can cause a positive ∆CFS of up to +0.05 bar in the hypocentral area of the Coyuca series, and the hypocentral locations of the aftershocks coincide closely with the positively charged areas. Combining the 1998-2001 inter-SSE back-slip and the 2001-2002 SSE total slip does not significantly change the inconclusive result of the back-slip modelling. Nevertheless, considering the existence of a southward normal fault that had long been very close to rupture, the triggering influence of the early SSE phase is not unlikely, especially because the continuation of the 2001-2002 SSE expands the positive ∆CFS area and coincides with the hypocentral distribution of the aftershocks.
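The back-slip construction described above follows the simple relation back-slip = coupling × convergence rate × elapsed time, applied patch by patch. The Python sketch below illustrates that step only; the coupling values, convergence rate and duration are placeholders and do not reproduce the Radiguet et al. (2016) coupling model or the patch grid used in chapter 5.

import numpy as np

def back_slip(coupling, convergence_rate_m_per_yr, years):
    """Back-slip (slip deficit) accumulated on each subduction-interface patch.

    back_slip = coupling * convergence_rate * elapsed_time
    coupling : array of interplate coupling indices in [0, 1], one per patch
    The convergence rate and duration used below are illustrative placeholders.
    """
    coupling = np.asarray(coupling, dtype=float)
    return coupling * convergence_rate_m_per_yr * years

if __name__ == "__main__":
    # Four hypothetical 5 x 5 km patches with different coupling indices
    coupling = np.array([0.9, 0.6, 0.3, 0.0])
    slip_deficit = back_slip(coupling, convergence_rate_m_per_yr=0.06, years=3.5)
    print("slip deficit per patch (m):", np.round(slip_deficit, 3))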

Yet, it is worth remarking that in all three sequences in chapters 4, 5 and 6, the relocated hypocenters comprise only a fraction of the whole aftershock activity, and the available data contain considerable depth uncertainties. Moreover, because there are not enough focal mechanism solutions for the aftershocks, we assume that their focal mechanisms share similar rupture slip directions. However, studies have shown that for crustal earthquakes (e.g., the 2012 Ahar-Varzeghan and 2001 Coyuca sequences) many of the aftershocks might have dissimilar mechanisms (e.g., Santoyo et al., 2005). Overcoming these limitations would lead to more accurate and robust conclusions.

In chapter 6, the implication of the spatiotemporal variation of the long-term earthquake activity rate for PSHA is analysed as follows:

>>> In Yazdi et al. (2019), we apply the result from our statistical analysis (in the time domain), which shows an increase in the rate of background earthquakes. Thus, we use spatiotemporal ETAS modelling to estimate the spatial probability distribution of the background earthquakes from January 1992 up to (I) April 2014 and (II) January 2018. The resulting spatial probability distributions are used to extract the background density rate from the smoothed density rate of the total observed activity on the subduction interface (for both periods I and II). The outputs are used as spatial distributions (per unit time) of de-clustered earthquake data in periods I and II to compute the exceedance probability of the strong ground motion ψ∗ in one year. Similar calculations are also carried out for the density rate of the total observed activity (clustered earthquake data).
Applying a minimum magnitude of Mw 5.0, the PSHA results show that, for the background density rate, the changes in the ground-motion exceedance rates are very small and not significant. In contrast, using the total observed density rate, near the epicenter of the Papanoa mainshock of 18th April 2014 the peak ground acceleration (PGA) for an annual exceedance probability of 0.2% rises by up to 7%, to 0.5 g, which underlines the importance of implementing updated earthquake rates and of accounting for the influence of clustered data in PSHA.
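Schematically, the exceedance-rate calculation combines scenario rates with the probability that each scenario exceeds a given ground-motion level, Λ(PGA > a) = Σi νi P(PGA > a | i). The Python sketch below assumes a lognormal ground-motion variability around placeholder median PGA values; the rates, medians and sigma are hypothetical, and no specific ground-motion model from the thesis is reproduced.

import numpy as np
from scipy.stats import norm

def exceedance_rate(pga_level, source_rates, median_pga, sigma_ln=0.6):
    """Annual rate of exceeding a PGA level, summed over discretized scenario sources.

    Lambda(PGA > a) = sum_i nu_i * P(PGA > a | source i),
    with lognormal ground-motion variability of width sigma_ln (illustrative).
    source_rates : annual rates nu_i of the scenario earthquakes (e.g., from a
                   de-clustered spatial density model)
    median_pga   : median PGA (g) predicted at the site for each scenario by
                   some ground-motion model (placeholder values here)
    """
    source_rates = np.asarray(source_rates, dtype=float)
    median_pga = np.asarray(median_pga, dtype=float)
    prob_exceed = norm.sf(np.log(pga_level / median_pga) / sigma_ln)
    return float(np.sum(source_rates * prob_exceed))

if __name__ == "__main__":
    rates = [0.05, 0.02, 0.005]    # hypothetical annual rates (events/yr)
    medians = [0.08, 0.15, 0.40]   # hypothetical median PGA (g) at the site
    lam = exceedance_rate(0.5, rates, medians)
    print(f"annual exceedance rate of 0.5 g: {lam:.2e}")
    print(f"probability of exceedance in 1 yr: {1 - np.exp(-lam):.2e}")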

Overall, the research summarized above endeavours to shed more light on the characterization of earthquakes with different sources and to achieve reliable earthquake-rate modelling in the time and space domains. Future studies can further extend the implications of earthquake sequences for PSHA. With respect to the study in Yazdi et al. (2017), although the magnitude range of this swarm sequence is small, a time-dependent PSHA study would help update the exceedance probabilities of the ground motion in the short term and near field. Thus, we can consider estimating the exceedance rate Λ for smaller time frames of approximately constant earthquake rates and monitoring the changes in the PSHA (e.g., Convertito et al., 2012). Likewise, the ETAS modelling carried out in Yazdi et al. (2018, 2019) provides updated earthquake rates associated with background and aftershock activities, which constitute part of the source component for future updates of PSHA studies for north-western Iran and the central coast of Guerrero, and would imply modifications of the PSHA in the long term. Moreover, one interesting question that could later be addressed in the ∆CFS modelling concerns the influence of coseismic stress changes on the spatiotemporal distribution of the earthquake rate. Some studies introduce models that combine other physics-based approaches, such as rate- and state-dependent friction laws, with ∆CFS to compute the spatiotemporal variations in earthquake rates during a period of sequential events that can produce considerable coseismic stress changes (e.g., Dieterich & Conrad, 1984; Catalli et al., 2008; Chan et al., 2010). The results of such models would have direct implications for time-dependent PSHA studies and should be explored in future research. Ultimately, to provide more applicable PSHA results, hybrid models that account for different time horizons should be taken into account (e.g., Rhoades, 2013; Gerstenberger et al., 2016).
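One commonly used ingredient for such a combination is the rate-and-state seismicity response to a sudden Coulomb stress step, R(t) = r / [1 + (exp(−∆CFS/Aσ) − 1) exp(−t/ta)]. The Python sketch below evaluates this response; the constitutive parameter Aσ, the relaxation time ta and the stress step are illustrative values only, not results from this thesis.

import numpy as np

def rate_state_response(t, dcfs, background_rate=1.0, a_sigma=0.1, t_a=365.0):
    """Seismicity-rate response to a sudden Coulomb stress step (rate-and-state form).

    R(t) = r / (1 + (exp(-dCFS / (A*sigma)) - 1) * exp(-t / t_a))
    t               : time since the stress step (days)
    dcfs            : Coulomb stress change (same units as a_sigma, e.g., bars)
    background_rate : reference (background) seismicity rate r
    a_sigma, t_a    : constitutive parameter A*sigma and relaxation time (illustrative)
    """
    t = np.asarray(t, dtype=float)
    gamma = (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a)
    return background_rate / (1.0 + gamma)

if __name__ == "__main__":
    days = np.array([1.0, 10.0, 100.0, 1000.0])
    print("relative rate after a +0.5 bar step:",
          np.round(rate_state_response(days, dcfs=0.5), 2))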

I hope this effort, despite its shortcomings, will trigger future research on the analysis of earthquake sequences and their influence on seismic hazard assessments over short to long terms.