Mike Moxcey

ER450 Marine Geology Paper
Spring 1996

Fractals of Fractures:

New Mathematical Techniques for Studying Earthquakes

Earthquakes are difficult to predict. With our understanding of plate tectonics and the historical record, we can predict fairly accurately where earthquakes are likely to happen. Predicting exactly when is much more difficult. "Sometime within the next 50 years" is adequate for devising building codes, but not for evacuating a major city. While many researchers are delving into geological aspects of earthquake prediction, many others study earthquakes using fractals and chaos theory, mathematical techniques which apparently rule out the possibility of precise predictions.

An Overview of Earthquake Prediction

Earthquakes occur mainly along the boundaries of tectonic plates (although some occur in mid-plate), places where pieces of the earth collide with other pieces. Some of these collisions force one plate under another (subduction zones such as deep sea trenches), some slide sideways past each other (transform faults such as the San Andreas), and some just smash into each other and pile up huge mountains (the Himalayas). Places where the earth tears apart (rift valleys and mid-oceanic ridges) are also prone to earthquakes.

An earthquake in a populated area can kill hundreds of people, make thousands homeless, and cause billions of dollars of property damage. Even an earthquake in a completely unpopulated underwater area can cause immensely destructive tsunamis. Many of the major population centers of the world lie directly on major tectonic faults (Los Angeles, Mexico City, and Tokyo, for example), and any port city could be drowned by a tsunami, so researchers all over the world are working on predicting earthquakes and mitigating their effects. Earthquake prevention does not seem possible.

"Four kinds of geologic data are most useful for predicting potential tectonic activity: subsurface ruptures along fault zones, stratigraphic sequences related to tectonic activity, landforms related to surface rupture and regional uplift or subsidence, and relationships between seismicity and tectonic features." (Callendar 124)
Yet even with this knowledge and the immense amount of data already gathered, Callendar says efforts to predict earthquakes "have met with little success." (137)

Solid-state physicist Panayiotis Varotsos of the University of Athens has studied the electric properties of squeezed dry rocks just before they fracture. Assuming that earthquakes are larger varieties of fractures, Varotsos, Alexopoulos, and Nomicos use their own V.A.N. method of detecting seismic electric signals to predict earthquakes. They have had a success rate of 65 to 70% for their predictions, but electrical geophysicist Stephen Park of the University of California at Riverside says, "Varotsos has only issued predictions for 10% of the earthquakes that have actually occurred." And even the correct predictions have only been timed to within several weeks and located to within 100 km. (Kerr 911)

Perhaps the main problem with earthquake prediction is that the faults display fractal geometry and the final buildup and release of stress "are fast and strongly non-linear" processes. (Meissner 161) Even "the most universal formula in seismology" (Diao et al 205), the Gutenberg-Richter law relating magnitude, energy, and frequency, is a classic fractal relation of dimension 1.8 to 2. (Turcotte 7) Of course, fractals weren't quantified and explained by Mandelbrot until 30 years after Gutenberg and Richter codified their law in 1954.
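
To make that fractal reading concrete: the Gutenberg-Richter law is usually written log10 N = a - bM, where N is the number of earthquakes at or above magnitude M, and Turcotte's treatment takes the fractal dimension to be roughly twice the b-value, which is how b-values near 0.9 to 1.0 give the dimension of 1.8 to 2 quoted above. The short Python sketch below is only a toy illustration of that bookkeeping on a synthetic catalog; the magnitude cutoff, the exponential catalog, and the use of Aki's maximum-likelihood estimator are my own choices, not anything taken from the papers cited here.

    import numpy as np

    # Gutenberg-Richter law: log10 N(>=M) = a - b*M, where N is the number of
    # earthquakes at or above magnitude M. One common fractal reading (following
    # Turcotte) takes the dimension to be roughly D = 2b, so b near 0.9-1.0
    # gives the 1.8-2 quoted above.

    def b_value(mags, m_min):
        """Aki's maximum-likelihood b-value estimate for magnitudes >= m_min."""
        m = np.asarray(mags)
        m = m[m >= m_min]
        return np.log10(np.e) / (m.mean() - m_min)

    # Synthetic catalog: magnitudes above a cutoff of 3.0 drawn from an exponential
    # distribution, which is what the Gutenberg-Richter law implies (true b = 1.0).
    rng = np.random.default_rng(0)
    mags = 3.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)

    b = b_value(mags, m_min=3.0)
    print(f"estimated b-value: {b:.2f}  ->  implied fractal dimension 2b: {2 * b:.2f}")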

What Is a Fractal?

"A fractal is, by definition, an object of Hausdorff-Besicovitch dimension strictly exceeding its topological dimension." (Herzfeld 218) Perhaps one of the most vexing things about fractals is they have a fractional dimension. This bothers most people who were raised on standard Euclidean geometry of one-dimensional lines and two-dimensional planes.

An easy way to think of a fractional dimension is to picture a line on a sheet of paper. The sheet of paper represents two-dimensional space. Any normal line drawn on it would be one-dimensional. But now consider someone like my three-year-old daughter Karen getting very excited and scribbling a line all over the paper. If the line completely covered the paper, then you could say it turned into a two-dimensional line and sometime during the scribbling it was a one-and-a-half dimensional line. Sufficiently complex lines can have dimensions greater than one.

A more rigorous and classic development of fractals uses an example from marine geology: coastlines. Mandelbrot's book even has a chapter titled "How Long Is the Coast of Britain?" (25) The main problem with measuring a coastline is that no matter what base unit of measure you use, you can always use a smaller one to get a longer length. Mapping the east coast of America using a scale of 100 miles per inch (since we're in America) will display some of the larger bays and peninsulas such as Chesapeake Bay and Cape Cod but will ignore smaller ones. Dropping down an order of magnitude and drawing a map at a scale of ten miles per inch will meander around many of the smaller bays and protrusions missed in the larger mapping. This would make the total length of the coast quite a bit longer. Dropping another order of magnitude to one mile per inch, then to one-tenth mile and so on will keep locating smaller and smaller bays, coves, and inlets, each of which increases the total length of the coastline. This keeps happening down to tiny scales such as measuring the water where it winds in and out around individual rocks, pebbles, and grains of sand.

Mathematicians have used many methods to measure lengths of naturally convoluted shorelines, but "all measurement methods ultimately lead to the conclusion that the typical coastline's length is very large and so ill-determined that it is best considered infinite." (Mandelbrot 25) In real life, L.F. Richardson in Statistics of Deadly Quarrels found "notable differences in the lengths of common land frontiers claimed by Spain and Portugal (987 versus 1214 km)." (Mandelbrot 33) The length of everyday, non-Euclidean objects depends on the length of the measuring stick. Benoit Mandelbrot decided to try to quantify this dependence and turned to the Hausdorff-Besicovitch idea of dimension, which is able to distinguish between highly convoluted coastlines and less-convoluted ones made of cliffs and sea walls. This dimension measurement is independent of the length of the measuring stick and is much better than topology, which "fails to discriminate between different coastlines." (Mandelbrot 17)
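
The measuring-stick effect is easy to reproduce numerically. The Python sketch below is a toy, not Richardson's or Mandelbrot's actual procedure: it builds a jagged synthetic "coastline" by midpoint displacement (my own choice of construction and roughness), measures it with coarser and coarser rulers by keeping only every k-th point, and then estimates a dimension from the slope of the log-log plot using the relation L(r) ~ r^(1-D).

    import numpy as np

    # A toy version of Richardson's coastline measurements: build a jagged curve,
    # then "measure" it with coarser and coarser rulers by keeping every k-th point.
    # The measured length keeps growing as the ruler shrinks; the slope of
    # log(length) against log(ruler) gives a rough dimension via L(r) ~ r**(1 - D).

    rng = np.random.default_rng(1)

    def rough_curve(levels=12, h=0.7):
        """Midpoint-displacement profile; smaller h means a rougher 'coastline'."""
        y, scale = np.array([0.0, 0.0]), 0.5
        for _ in range(levels):
            mid = 0.5 * (y[:-1] + y[1:]) + rng.normal(0.0, scale, size=len(y) - 1)
            new = np.empty(2 * len(y) - 1)
            new[0::2], new[1::2] = y, mid          # interleave old points and midpoints
            y, scale = new, scale * 2.0 ** (-h)    # shrink the wiggles at each level
        return np.linspace(0.0, 1.0, len(y)), y

    x, y = rough_curve()

    rulers, lengths = [], []
    for k in (1, 2, 4, 8, 16, 32, 64):
        seg = np.hypot(np.diff(x[::k]), np.diff(y[::k]))
        rulers.append(seg.mean())                  # typical stick size at this sampling
        lengths.append(seg.sum())                  # total measured length with that stick

    slope = np.polyfit(np.log(rulers), np.log(lengths), 1)[0]
    print("measured lengths, coarse to fine ruler:", [round(L, 2) for L in lengths[::-1]])
    print(f"rough fractal dimension D = 1 - slope = {1 - slope:.2f}")

The printed lengths grow steadily as the ruler shrinks, which is Richardson's observation in miniature.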

To better study these types of curves, Mandelbrot began creating and studying other mathematical "curves." These usually consist of a basic section or polygon and then a "generator" which replaces each section of the curve. The classic Koch Island begins with a triangle and replaces each side with a new side that is 4/3 the length of the original side.

Each smaller section is then replaced with another triangular segment. At each iteration, the total length of the curve gets 1/3 longer than before (and the dimension is log 4 / log 3 ≈ 1.2618). Thus any measurement using a smaller stick will always find a larger length and the true mathematical length is infinite because the mathematical iteration can be carried out infinitely many times. Despite some reservations about using these curves to study reality, they have turned out to be useful and Mandelbrot turns the arguments around by saying the "monster" Koch curve's "irregularity is far too systematic" and can only be suggestive of a real coastline's convolutions. (35)
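
The arithmetic of that iteration is simple enough to tabulate directly. The snippet below is just a worked example of the numbers in the last paragraph, not anything taken from the sources: it tracks the segment count, segment length, and perimeter of a unit-sided triangle through the first few Koch iterations.

    import math

    # Koch-island bookkeeping: every iteration replaces each segment with four
    # segments, each one-third as long, so the perimeter is multiplied by 4/3 at
    # every step and grows without bound, while the similarity dimension is
    # log 4 / log 3.

    segments, side = 3, 1.0                        # a triangle with unit sides
    for n in range(1, 7):
        segments, side = segments * 4, side / 3.0  # four new segments, each 1/3 as long
        print(f"iteration {n}: {segments} segments, perimeter {segments * side:.4f}")

    print(f"similarity dimension: log 4 / log 3 = {math.log(4) / math.log(3):.4f}")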

Self-Similarity

The development of the Koch curve displays self-similarity, another important aspect of fractals. Mathematicians even define a similarity dimension, with its own method of measurement, to describe how similar a piece of a curve is to another piece of the curve. For example, any piece of a straight line is identical to any other piece, or to the entire line, and any piece of a Koch curve is very similar to any other piece at any scale (scaling is another concept that goes along with self-similarity).

Between certain levels of magnitude, mathematicians often use the similarity dimension to guess the Hausdorff dimension. (Mandelbrot 37) A piece of a coastline such as a small cove and outcropping is similar to a large bay and peninsula and to an indentation and projection of the edge of a single rock. Of course, "below the lower limit, the concept of coastline ceases to belong to geography." (Mandelbrot 38) However, Hausdorff showed how his dimension is useful in mathematics and Mandelbrot claims it is "vital in the natural sciences." (44)

Areas of Fractal Research in Geology

Many geologists are verifying that claim and have begun using fractal methods in their studies and in their reviews of past studies such as the Gutenberg-Richter law. Some new studies include fractals in fracture patterns, joints in rocks, formation of extensional veins, distribution of gold deposits, crystallization processes, bioactive marine sedimentation, and Pleistocene marine oxygen isotope records. Sergey Ivanov found the global relief of the earth displayed fractal geometry over four orders of magnitude, with fractal dimensions ranging from 1.3 to 2.4. Jorn Kruhl, editor of a book on fractal research in geology, says, "It is well-known that geological structures can exhibit self-similarity over several, but not infinite, orders of magnitude." (v)

Earthquake researchers have found fractals in three different ways:

  1. models of earthquake behavior
  2. the statistics of earthquake occurrence
  3. studies of fractures, especially non-linear processes.
Of course, some of the research using fractals is not well done. Ute Herzfeld says the Hausdorff-Besicovitch dimension is often derived by fitting a line to a log-log plot, which is then used as "proof that the object under study is fractal or self-similar." (219) And despite the many fractals found in studying them, "earthquakes are not always fractal." (Diao 200)

Earthquake Models

Earthquake modeling began with simple stick-slip mechanical models proposed by Burridge and Knopoff in the 1960s. These are made up of a chain of blocks connected to each other by springs and acting on a fixed base plate through friction. As pressure is slowly applied to a block, it builds up until finally the block slips (simulating an earthquake on one section of a fault) and also sends some pressure down the springs to the next block (putting pressure on another section of a fault). By altering a couple of the variables, they were able to create a chaotic model "that strongly resembled the observed sequence of earthquakes in the Nankai trough" along the coast of Japan from 684 to 1946 AD. (Turcotte 15) The stick-slip model is believed to work because "there is a continuous input of energy (strain) through the relative motion of tectonic plates. This energy is dissipated in a fractal distribution of earthquakes." (Turcotte 9)
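
A heavily simplified, quasi-static version of the spring-block idea fits in a few lines of Python. The sketch below is a caricature in the spirit of these models, not Burridge and Knopoff's actual equations of motion; the thresholds, the transfer fraction, and the loading rule are all my own assumptions.

    import numpy as np

    # A quasi-static caricature of a spring-block chain (not Burridge and Knopoff's
    # actual equations of motion): each block carries a stress that grows as the
    # driver creeps forward; a block pushed past its friction threshold slips,
    # sheds a fixed fraction of its stress to each neighbor (the rest is lost),
    # and may trigger its neighbors in turn. The number of blocks that slip in one
    # cascade is the "event size".

    rng = np.random.default_rng(2)
    N, threshold, to_neighbor = 100, 1.0, 0.4      # 0.4 to each neighbor, 0.2 dissipated
    stress = rng.uniform(0.0, threshold, N)
    event_sizes = []

    for step in range(20000):
        i0 = int(np.argmax(stress))
        stress += threshold - stress[i0]           # slow loading until one block fails
        stress[i0] = threshold                     # guard against floating-point round-off
        unstable, slipped = [i0], set()
        while unstable:
            i = unstable.pop()
            if stress[i] < threshold:
                continue
            slipped.add(i)
            for j in (i - 1, i + 1):               # pass stress down the chain
                if 0 <= j < N:
                    stress[j] += to_neighbor * stress[i]
                    if stress[j] >= threshold:
                        unstable.append(j)
            stress[i] = 0.0                        # the slipped block relaxes completely
        event_sizes.append(len(slipped))

    sizes = np.array(event_sizes)
    print("largest cascade:", sizes.max(), "of", N, "blocks")
    print("fraction of single-block events:", round(float(np.mean(sizes == 1)), 2))

Tuning the transfer fraction changes how often large cascades occur, which is the kind of sensitivity the modified models listed below explore.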

 These simple stick-slip models were modified by later researchers to include various options such as:

  1. increasing the driving force randomly in each time interval
  2. adding blocks to both sides
  3. modeling plasticity by using probabilistic slipping
  4. using a random threshold for slippage
  5. randomly distributing stress to blocks that haven't slipped
  6. dissipating some stress in various, non-slip ways (Markus et al 182-183)
In a review of these studies, Markus et al found the "essential predictions [were] quite independent of their particular assumptions." (183) Another modeling technique researchers have tried is cellular automata. This is done on computers and consists of setting up a map of neighboring cells, devising how actions on one cell create interactions with other cells, and then subjecting certain cells to initial actions. This model has been useful in many natural sciences including fluid dynamics, spin glasses, ferromagnetism, pattern recognition, precipitation processes, immunological systems, brain patternization, ecology, cytoskeleton dynamics, branching growths of fungi, coupling of mitotic cycles, and tectonics and seismology. (Markus 181)
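
A minimal seismic cellular automaton can be sketched as follows. The rules here are my own sandpile-style choices, in the spirit of the models just described rather than any particular published one: stress is added to random cells of a grid, a cell holding four or more units fails and hands one unit to each of its four neighbors, stress crossing the grid edge is lost, and the chain of failures triggered by one addition counts as one event.

    import numpy as np

    # A minimal seismic cellular automaton (my own sandpile-style rules, in the
    # spirit of the models described above): random cells of a grid receive unit
    # stress increments; a cell holding four or more units fails and hands one unit
    # to each of its four neighbors, possibly toppling them too; stress crossing
    # the grid edge is lost. One increment plus the cascade it triggers is one event.

    rng = np.random.default_rng(3)
    L = 30
    grid = np.zeros((L, L), dtype=int)
    sizes = []

    for _ in range(50000):
        i, j = rng.integers(0, L, size=2)
        grid[i, j] += 1                            # slow tectonic loading, one cell at a time
        topples, unstable = 0, ([(i, j)] if grid[i, j] >= 4 else [])
        while unstable:
            a, b = unstable.pop()
            if grid[a, b] < 4:
                continue
            grid[a, b] -= 4
            topples += 1
            if grid[a, b] >= 4:                    # a cell can fail more than once per event
                unstable.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:    # units leaving the edge are dissipated
                    grid[na, nb] += 1
                    if grid[na, nb] >= 4:
                        unstable.append((na, nb))
        sizes.append(topples)

    sizes = np.array(sizes)
    print("events of size > 0:", int((sizes > 0).sum()), " largest cascade:", int(sizes.max()))

Most additions cause nothing at all; occasionally a single one topples a large patch of the grid.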

Turcotte's first cellular-automaton model resembled seismicity in a zone of crustal deformation but didn't generate any characteristic earthquakes or fore- and after-shocks. (11) The revised model, allowing instabilities to affect other cells, not only created after-shocks, but 31% of the largest events had fore-shocks, a good correlation with reality. (12)

However, "even the most refined slider-block models are only poor replicas of natural processes." (Meissner 160) Other methods have been developed to find the fractal dimension of active faults, of ruptures in experimental rocks, of drainage systems, and dimensions related to "crustal deformation, electrical resistivity, and groundwater, where variations are concerned with earthquakes." (Diao et al 199) And it has been accepted that individual faults have individual earthquakes and that the distributions of faults and of earthquakes are each fractal, but not necessarily of the same dimension. (Turcotte 7)

Non-Linear Processes in Earthquakes

Chaos, the study of non-predictable, non-linear processes, is also being studied in earthquakes. Some researchers rightly say that chaos does not equal fractal (Herzfeld 219), but that is a shortsighted view of a mathematical technique. While Mandelbrot doesn't provide a lot of hard-and-fast examples in his "essay," he nevertheless makes "pleas for a more geometric approach to turbulence and for the use of fractals." (10) (Turbulence is the classic non-linear problem in mathematical physics). There are many instances of mathematical tools developed in one area turning out to be useful in other areas. For example, the study of unending series (Number Theory) was found to be useful in determining integrals of hyperbolic functions (Calculus), which were used to derive my favorite equation based on transcendental (Algebra) functions: e^(πi) = -1. So, while Ute Herzfeld brings up a few good points about the misuse of fractals, confusing chaos and fractals is not one of them.

Fractal curves can be constructed to model chaotic processes. These curves allow discovery of the dimension of the process. In addition, with a curve associated with a "weakly chaotic system, a limited prediction might be possible" (Meissner 159) in exactly the same way an algebraic curve of a normal calculus function is used for prediction. But a fractal curve cannot be used for exact prediction due to its inherently chaotic, non-differentiable nature (which implies fractals are perfectly suited for the study of chaos).
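
The classic picture of such a curve is the Weierstrass function, which is continuous everywhere, differentiable nowhere, and has a fractal graph. The sketch below is my own illustration, not something from the sources: it shows numerically why tangent-line extrapolation fails on such a curve, since the finite-difference slopes do not settle down as the step shrinks but keep growing.

    import numpy as np

    # The Weierstrass function W(x) = sum over n of a**n * cos(b**n * pi * x),
    # with 0 < a < 1 and a*b > 1, is continuous everywhere but differentiable
    # nowhere, and its graph is a fractal. For a smooth calculus curve the
    # finite-difference slopes settle down as the step shrinks; here they blow up,
    # which is one way to see why such curves resist tangent-line extrapolation.

    def weierstrass(x, a=0.5, b=7.0, terms=30):
        n = np.arange(terms, dtype=float)
        return np.sum(a ** n * np.cos(b ** n * np.pi * np.asarray(x)[..., None]), axis=-1)

    x = np.linspace(0.0, 1.0, 2000)
    for h in (1e-1, 1e-2, 1e-3, 1e-4):
        slopes = (weierstrass(x + h) - weierstrass(x)) / h
        print(f"step {h:g}: largest finite-difference |slope| = {np.abs(slopes).max():,.0f}")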

The slow grinding of tectonic plates produces pressure which is released in non-linear ways. Even the spring-block models lead to "examples of deterministic chaos." (Turcotte 16) Others say, "The fractal dimension of an active fault system is probably related to the stress character." (Diao et al 203) And in the final stage of stress accumulation, micro-cracks coalesce into large cracks which fill with liquids. "Both processes are fast and strongly non-linear." (Meissner 161) Rolf Meissner also offers a couple of other arguments for the non-linearity of earthquakes: the lack of a tidal cause and the loading of stress by plate creep. (163)

This stress has been thought by many to push the entire crust of the earth into a state of self-organized criticality, where any change makes unpredictable changes in other areas, much as in a game of Pick Up Sticks. (Turcotte 9) One piece of evidence is the fact that the filling of large dams always induces seismicity. But there are no governing equations, and while researchers have tried calculating various fractal characteristics and have found some chaotic attractors for certain earthquake regions in China, none "are sure whether earthquakes are chaotic." (Diao et al 207)

Where We Stand Now on Predicting Earthquakes

Although chaotic, non-linear models and fractal curves technically rule out precise predictions, they are useful for a more general sort of prediction. Yet we need to do much more studying, measuring, and monitoring to get more accurate equations and useful models. For example, on some faults the fractal dimension of small quakes drops just before a major quake; on others it rises; on still others it fluctuates. "Thus it is difficult to forecast earthquakes using the fractal dimension." (Diao et al 201) This is just one of many problems associated with earthquake prediction.
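
For concreteness, one standard way to put a number on the "fractal dimension of small quakes" is the Grassberger-Procaccia correlation dimension of epicenter locations: count the fraction of epicenter pairs closer together than r and take the slope of log C(r) against log r. The sketch below runs that recipe on synthetic epicenters scattered along an imaginary fault; the catalog, the scatter, and the fitting range are invented for illustration, and this estimator is a common choice rather than necessarily the one used in the studies cited here.

    import numpy as np

    # The Grassberger-Procaccia correlation dimension, one common way to put a
    # number on the fractal dimension of a set of epicenters: C(r) is the fraction
    # of epicenter pairs closer together than r, and for a fractal set C(r) ~ r**D,
    # so D is the slope of log C(r) against log r. The "epicenters" below are
    # synthetic, scattered along an imaginary fault, not a real catalog.

    rng = np.random.default_rng(4)
    t = rng.uniform(0.0, 100.0, 600)
    epicenters = np.column_stack([t, 0.1 * t + rng.normal(0.0, 2.0, t.size)])

    diff = epicenters[:, None, :] - epicenters[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    pairs = dist[np.triu_indices(len(epicenters), k=1)]    # each pair counted once

    radii = np.logspace(np.log10(pairs.min() * 2), np.log10(pairs.max() / 4), 12)
    corr = np.array([np.mean(pairs < r) for r in radii])   # C(r)

    D = np.polyfit(np.log(radii), np.log(corr), 1)[0]
    print(f"correlation dimension of the synthetic epicenters: D2 ~ {D:.2f}")

For this cloud the slope comes out between one and two, somewhere between a line and a plane.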

Using a very simple fractal analysis of the Los Angeles area, S. E. Hough found it reasonable to assume one M 7.4 to 7.5 earthquake every 245-325 years and six events in the M 6.6 range, fewer than have occurred in the historic record. (213) Rolf Meissner says, "The magnitude seems to be the most elusive parameter [to predict] because of the additional non-linearity of the rupture process itself which apparently can hardly be modeled by the slider-block models." (164) John Callendar says, "The most significant problem with earthquake prediction is probably the lack of acceptable models that explain why precursors work." (137)

There are probably other problems of which we are currently unaware. Yet until we understand earthquakes better, we'll have little chance of ever "predicting their fickle outbursts." (Anderson 190) The mathematical technique of fractals is one of the best approaches for studying non-linear, chaotic kinds of problems and will continue to be heavily used in earthquake research. Perhaps Donald Turcotte sums it up best. "The earth's crust resembles a thermodynamic system near a critical point. The fluctuations are earthquakes." (19)

Bibliography