Submitted by cpassmore79 t3_10wecl8 in askscience

So my understanding is that earthquakes are a release of pressure that builds up when fault lines get "stuck" and the plates can't move.

I live in the PNW, and we're always talking about "the big one" on the Cascadia fault and how we're overdue. But are we? We have a few small quakes every year... doesn't that relieve the pressure?

120

Comments


CrustalTrudger t1_j7nrv48 wrote

It's important to remember that the scales we use for earthquakes (which in the US is typically the moment magnitude scale, i.e., Mw) are logarithmic. Thus, if we define a big earthquake as an Mw 8.0 and a little earthquake as an Mw 2.0, the Mw 8.0 is 1,000,000 times larger than the Mw 2.0 (or alternatively, if we say an Mw 3.0 is small, the Mw 8.0 is 100,000 times larger, and so on).
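(A quick sketch of that arithmetic: on a logarithmic magnitude scale, each whole magnitude step is a 10x increase in seismogram amplitude, so the ratio between two magnitudes is just 10 raised to the magnitude difference.)

```python
def amplitude_ratio(m_large, m_small):
    """Ratio of seismogram amplitudes between two magnitudes:
    each whole magnitude step is a 10x increase in amplitude."""
    return 10 ** (m_large - m_small)

print(amplitude_ratio(8.0, 2.0))  # 1,000,000x larger
print(amplitude_ratio(8.0, 3.0))  # 100,000x larger
```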

Now, this is just thinking about the magnitude as represented on a seismogram. If we want to say how many earthquakes of a given small magnitude equal a single large-magnitude earthquake, we need to consider this through the lens of radiated energy. For this purpose we can use the equation on the linked wiki page that relates Mw and radiated energy Es, specifically,

Mw = (2/3) log10(Es) - 3.2, with Es in joules

So, we can use this to calculate the amount of energy released by a single Mw 2.0 or Mw 3.0 and an Mw 8.0 earthquake, and thus just how many Mw 2.0 or 3.0 events we'd need to equal the energy of a single Mw 8.0. If you go through the math, you'll find that to equal the released energy of a single Mw 8.0, you would need ~31 million Mw 3.0 or ~1 billion Mw 2.0 events. Let's be more generous and consider a more moderate event, like an Mw 5.0; even then you'd need around 32,000 Mw 5.0 events to release the same energy as a single Mw 8.0.
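Those counts follow directly from inverting the relation above: Es = 10^(1.5·(Mw + 3.2)) joules, so the energy ratio between two magnitudes is 10^(1.5·ΔM). A quick check:

```python
def radiated_energy(mw):
    """Radiated energy Es in joules, inverting Mw = (2/3)*log10(Es) - 3.2."""
    return 10 ** (1.5 * (mw + 3.2))

# How many small events carry the energy of one Mw 8.0?
for small in (2.0, 3.0, 5.0):
    n = radiated_energy(8.0) / radiated_energy(small)
    print(f"Mw {small}: ~{n:,.0f} events")
# -> ~1 billion (Mw 2.0), ~31.6 million (Mw 3.0), ~31,600 (Mw 5.0)
```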

With this, you could play other games. Let's say the fault system in question has stored enough energy to generate an Mw 8.0, but you have 25 Mw 5.0 earthquakes over a given period; how much energy is left? Again, doing the math: enough to generate an Mw 7.9997 earthquake.
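That back-of-the-envelope can be reproduced with the same relation: convert Mw 8.0 to joules, subtract 25 Mw 5.0 events' worth, and convert the remainder back to a magnitude. It comes out to ~7.9998 when rounded (i.e., the ~7.9997 figure above, to within rounding), essentially unchanged from 8.0.

```python
import math

def energy_j(mw):
    """Radiated energy in joules from Mw = (2/3)*log10(Es) - 3.2."""
    return 10 ** (1.5 * (mw + 3.2))

def mw_from_energy(es):
    """Invert back from energy (joules) to moment magnitude."""
    return (2 / 3) * math.log10(es) - 3.2

# Energy budget of an Mw 8.0, minus 25 Mw 5.0 earthquakes
remaining = energy_j(8.0) - 25 * energy_j(5.0)
print(round(mw_from_energy(remaining), 4))  # 7.9998
```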

Suffice it to say: no, a few small quakes every year are a literal drop in the bucket toward the total strain budget of a system capable of generating a large-magnitude earthquake, so they do not really do much in terms of preventing an eventual large-magnitude event.

EDIT: Writing this answer as I was falling asleep led to me not addressing the "overdue" aspect of the original question. If you would like a deeper dive on why the concept of earthquakes being "overdue" is incredibly problematic, I'll refer you to this FAQ.

356

GalaxyGirl777 t1_j7omb3g wrote

You might like to look up ‘slow’ or ‘slow-slip’ earthquakes which occur over a much longer timescale than your typical short sharp earthquake but can release just as much energy. It doesn’t answer your question about pressure, but it is interesting to note that there are slo-mo events occurring in faults around the world.

3

gristc t1_j7osz38 wrote

It's significantly easier to build something that will withstand M5.0 earthquakes, so I'd go with that option. Especially if it happened every day. I reckon we'd get pretty good at it.

27

[deleted] t1_j7otsia wrote

Huh. Didn’t know Richter was outdated. Why does the news keep using that name when both scales are logarithmic, though? It doesn’t even help the average Joe better understand the magnitude, does it?

3

CrustalTrudger t1_j7oybui wrote

To clarify, the media isn't using the Richter scale; the media is reporting whatever magnitude a given service (e.g., the USGS or GFZ Potsdam GEOFON, etc.) reports and then calling it a "Richter" magnitude. That magnitude is typically a moment magnitude, but depending on the location and details, it might be one of several seismic magnitude scales, e.g., occasionally you'll see a body wave magnitude (mb) or a surface wave magnitude (Ms) reported for a particular earthquake. As to why calling everything a "Richter" magnitude has persisted, it's unclear. The Richter scale was the first, but it was always a local scale (i.e., it was only really calibrated to be used in one part of the world) and it hasn't effectively been used for >50 years.

8

CrustalTrudger t1_j7oz7j4 wrote

A "local" scale is specifically calibrated so that some measurable quantity (like the amplitude of seismic waves as measured on a seismometer) gives a somewhat repeatable estimate of earthquake size, but only for a specific area. This is because local scales, like the Richter scale, are effectively a measure of ground shaking. For a given magnitude of earthquake (in the moment magnitude sense, which is a measure of an intrinsic property of the earthquake, i.e., the seismic moment), the details of ground shaking will depend on distance/depth but also details of the rock that the seismic waves passed through between the source and the seismometer. So for the Richter scale and other local magnitude scales, if you try to transport it somewhere else, the magnitude won't be equivalent. I.e., a true Richter magnitude of X in one place won't actually be the same size earthquake of a Richter magnitude of X earthquake somewhere else. That's not a a very useful property for a scale to have.

6

doucheluftwaffle t1_j7ozgpr wrote

OP: the small little quakes that we have in the PNW don’t occur on/in the Cascadia subduction zone.

The little earthquakes that we have are a result of us being pushed north by the San Andreas fault. Assuming you’re in WA state, the further north you go toward Bellingham, the geology is mostly granite. So when we’re pushed northward, there’s nowhere for us to go except into the granite, and voilà, you get the occasional low-magnitude earthquake.

On land, our major faults are strike-slip and thrust faults, not subduction. Those faults aren’t going to help relieve Cascadia, nor are they foreshocks to “The Big One.” The quakes on these faults are from normal movement; occasionally they get stuck.

As for being overdue: it’s nearly impossible to predict when Cascadia will rupture. However, geologists can study the sediment layers on the coast along with the ghost forests. Look up the WA coast ghost forests; it’s really fascinating. They can also look at Native American legends along with the written records in Japan and deduce that every X amount of years the Cascadia subduction zone ruptures with some regularity.

Typically, scientists can’t say with certainty whether or not an earthquake is a foreshock. It’s only after a big one that they can say the previous one was likely a foreshock. For example, in 2002 Sumatra had a 7.3 quake, and then in 2004 they had a 9.1. It was only after the 9.1 that they said the 2002 7.3 was a foreshock, separated by two years.

If you look at the Tohoku, Japan quake (Fukushima), there were two foreshocks: a 7.3 on 3/9/2011 and a 6.4 on 3/10/2011. Then on 3/11/2011 they had a 9.1.

2

CrustalTrudger t1_j7p4jnd wrote

> and deduce that every X amount of years the Cascadia Subduction Zone ruptures with some regularity.

I guess this depends on your definition of "regular." If you look at the intervals between events reconstructed from the turbidite record (Table 12 on page 115 of Goldfinger et al., 2012), you'll see that these aren't exactly evenly spaced. E.g., the spacing in years between events is 232, 316, 446, 311, 982, 492, 415, 665, 661, 1189, 508, 715, 443, 548, 733, 195, 117, 577. From this you can calculate an average, and it tells you that generally you'd expect an event every few hundred years, but after a given event, there's not necessarily anything to indicate whether the next one is going to be in ~100 years or ~1000 years. I would not describe that as having a particularly "regular" pattern of strain release.
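One way to quantify how irregular those intervals are is their spread relative to the mean (the coefficient of variation); for the spacings listed above it comes out around 0.5, i.e., the scatter is about half the mean interval, with individual gaps ranging from ~100 to ~1200 years.

```python
import statistics

# Inter-event spacings (years) between reconstructed Cascadia
# events, as listed above (Goldfinger et al., 2012)
intervals = [232, 316, 446, 311, 982, 492, 415, 665, 661,
             1189, 508, 715, 443, 548, 733, 195, 117, 577]

mean = statistics.mean(intervals)
spread = statistics.pstdev(intervals)
print(f"mean ~{mean:.0f} yr, std ~{spread:.0f} yr, CV ~{spread / mean:.2f}")
print(f"range: {min(intervals)}-{max(intervals)} yr")
```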

7

UnamedStreamNumber9 t1_j7pdnxl wrote

While I'm not an expert in the field, one thing that is missing in your explanation is the relationship between fault displacement and stored energy. For the Cascadia fault in particular, I recall reading in SciAm about “slow earthquakes,” where sections of the fault slip/move over a period of hours or days instead of seconds, and in doing so dissipate some of the energy/tension stored on the fault without an intense release of energy to create a quake.

3

CrustalTrudger t1_j7pgzlr wrote

Yes, slow slip events, or alternatively episodic tremor and slip (ETS), and a variety of other "aseismic" processes represent long-duration versions of strain release that occur on a variety of subduction zones (Cascadia included), either completely independent of traditional seismic events or in concert (e.g., afterslip) with them. Of relevance though, they are explicitly not earthquakes in the typical definition (i.e., they are aseismic), and as the focus of the question is "do small magnitude earthquakes impact the probability of large magnitude earthquakes?", slow-slip / tremor discussions get a bit into the weeds (so leaving them out was a conscious choice).

If we consider equivalent magnitudes, most observed slow-slip or ETS events are still kind of in the ballpark of "small events", i.e., mid 5s to 6s, but some do release equivalent magnitudes of strain as an Mw 8+ if you "sum up" the total moment of the event over the days, weeks, months, etc. (e.g., Schwartz & Rokosky, 2007). Perhaps more importantly, the extent to which patches of subduction zones that experience these various aseismic types of slow/quiet/silent slip (1) restrict which patches fail seismically, (2) influence the balance between seismic slip vs aseismic afterslip in the patches that do fail at least in part seismically, or (3) can themselves rupture seismically given the right conditions are all very active areas of research, largely without clear answers, or at least answers that are easily generalized to all subduction zones (e.g., Rolandone et al., 2018, Mallick et al., 2021, Zhao et al., 2022, etc.). Thus, while it is reasonable to consider that slow slip and similar aseismic processes influence the style of seismic strain release, how they do so (both mechanistically but also in terms of actual event temporal and spatial statistics) is a large open question.

12

DanCongerAuthor t1_j7r5a2z wrote

No, they don’t. Earthquake magnitudes are on a logarithmic scale: an 8.0 releases roughly 32 times the energy of a 7.0. Anybody want to live through ~32 7.0’s to avoid a single 8.0? Following that scale, where a 9.0 releases roughly 32 times the energy of an 8.0, roughly 1,000 earthquakes with a 7.0 magnitude would be required to release the same energy as one 9.0.

2

GaiusCosades t1_j7s1zmt wrote

Everything you say is a great explanation, and I agree that things are more complex than the "overdue" concept, with its imaginary constant-energy bucket that must be emptied in some event.

But if it were true that a constant amount of energy had to be dispensed regularly, I think there would be some kind of sweet spot with semi-regular Mw 4.0 - 6.4 events whose epicenters are spread out, instead of one big Mw 8.0 event where everything gets damaged. At least then you'd see clearly which structures will crumble in the next event and which most likely won't.

1

CrustalTrudger t1_j7u2922 wrote

For the sake of argument, let's sidestep the fact that we can't effectively induce earthquakes in a controlled sense (i.e., we can't do something that we know for sure will generate an earthquake of a target magnitude), and that wholesale changing the style of strain release of a given fault zone from something like one Mw 8.0 every 100 years to one Mw 5.0 every day (which is effectively what you would need to release the same radiated energy as an Mw 8.0 in Mw 5.0 events spread out over 100 years) is impossible.
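The "one Mw 5.0 every day" figure checks out against the energy scaling from earlier in the thread: one Mw 8.0 carries the energy of ~10^(1.5×3) ≈ 31,600 Mw 5.0 events, and a century contains about 36,500 days, so you'd need just under one Mw 5.0 per day.

```python
# Mw 5.0 events per Mw 8.0, from the energy ratio 10^(1.5 * delta_M)
events_needed = 10 ** (1.5 * (8.0 - 5.0))
days_per_century = 365.25 * 100

print(round(events_needed))                        # ~31,623 events
print(round(events_needed / days_per_century, 2))  # ~0.87 per day
```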

Let's instead entertain the idea that there is some mechanism to start this process, i.e., we begin chipping away at the stored elastic strain sufficient to generate an Mw 8.0 with a carefully targeted Mw 5.0 event whose rupture we somehow arrest to keep it at an Mw 5.0. What did we accomplish? Well, we released a minuscule fraction of the total radiated energy we need, but we have also now changed the stress state on other parts of the target fault and on neighboring faults (and here we need to remember that virtually no large fault is a single fault, but rather a network of faults, i.e., a fault system) through Coulomb stress transfer. So when we move to our next "patch" to try to rupture, the stored strain (and proximity to failure, etc.) will no longer be the same, not to mention we've now loaded adjoining faults, etc. The point being, you can't just have patches of a fault fail in a vacuum; each one will impact the state of the system, and not always in the direction you want, i.e., an earthquake on one patch can increase the strain on another patch.

1

GaiusCosades t1_j807fjp wrote

In general I'm in complete agreement, because fault zones are not energy buckets that get filled and must be emptied by earthquakes. Nor can we stimulate the system to release a specific amount of energy. That is not how this works.

But if we assume, hypothetically, that it were an energy bucket that fills at a constant rate, and that we were able to trigger events of a specific magnitude, it would be economically beneficial (in repair costs) to trigger Mw 4.0 to 6.4 events regularly instead of waiting for the inevitable 8+ event.

I am just arguing a mathematical hypothetical, nothing more ;)

1