Artificial intelligence is transforming the field of seismology, enabling scientists to detect thousands of previously unknown earthquakes in existing data. Experts liken the advance to "putting on glasses for the first time": it provides a much clearer picture of the Earth's tectonic activity.
These machine-learning tools automate a fundamental task that once fell to human analysts and, later, to less capable computer programs. They are particularly adept at identifying small seismic events, even in noisy urban environments, and have increased the number of cataloged earthquakes tenfold in some regions.
Key Takeaways
- AI models can detect roughly ten times as many earthquakes as previous methods, especially small-magnitude events.
- Unlike older techniques, AI is computationally efficient, allowing analysis on standard computers instead of supercomputers.
- The technology has provided clearer images of volcanic magma systems by mapping thousands of tiny quakes.
- While AI has revolutionized detection, it has not yet led to reliable earthquake prediction.
The Traditional Approach to Finding Earthquakes
Seismology is the study of earthquakes and the propagation of seismic waves through the Earth. When an earthquake occurs, it sends out vibrations, much like sound waves traveling through the air. Scientists analyze these waves to understand the planet's internal structure.
The primary tool for this work is the seismometer, an instrument that records ground motion in three directions: north-south, east-west, and up-down. The data from these devices, called seismograms, shows the distinct patterns of different seismic waves.
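To make this concrete, the short sketch below uses ObsPy, a widely used open-source Python library for seismology, to load a three-component seismogram and list its channels. With no arguments, ObsPy's read() loads a small bundled example record; in real workflows, analysts would pass a file path or query a data center instead.

```python
# A minimal sketch using ObsPy. read() with no arguments loads a small bundled
# example seismogram; in practice you would pass a miniSEED/SAC path or fetch
# data from a seismic data center.
from obspy import read

stream = read()  # three Trace objects, one per component (e.g., Z, N, E)

for trace in stream:
    stats = trace.stats
    print(f"{stats.network}.{stats.station}.{stats.channel}: "
          f"{stats.npts} samples at {stats.sampling_rate} Hz, "
          f"starting {stats.starttime}")

# stream.plot() would draw the familiar wiggly-line seismogram view.
```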
Understanding Seismic Waves
Earthquakes generate several types of waves. Two of the most important are Primary (P) waves and Secondary (S) waves. P waves are faster and arrive first, while S waves follow. Identifying the precise arrival time of these waves, a process called phase picking, is crucial for locating an earthquake's origin and understanding its characteristics.
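To see why precise phase picks matter, consider a rough back-of-the-envelope calculation: because S waves travel more slowly than P waves, the delay between the two arrivals translates into a distance from the recording station. The sketch below uses illustrative average crustal velocities, not values from any specific study.

```python
# Rough distance estimate from the S-minus-P delay (a classic textbook calculation).
# The velocities below are typical crustal averages chosen for illustration only.
VP_KM_S = 6.0   # assumed average P-wave speed
VS_KM_S = 3.5   # assumed average S-wave speed

def distance_from_sp_delay(p_arrival_s: float, s_arrival_s: float) -> float:
    """Estimate the distance (km) to an earthquake from picked P and S arrival times (s)."""
    delay = s_arrival_s - p_arrival_s
    # Both waves travel the same distance d, so d/VS - d/VP = delay.
    return delay / (1.0 / VS_KM_S - 1.0 / VP_KM_S)

# Example: the S wave is picked 8 seconds after the P wave.
print(f"{distance_from_sp_delay(0.0, 8.0):.0f} km")  # ~67 km with these speeds
```

With distances estimated at several stations, the earthquake's origin can be triangulated, which is why accurate picks are so valuable.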
Before the widespread use of advanced computing, phase picking was done by hand. Joe Byrnes, a professor at the University of Texas at Dallas, recalled that labs would employ teams of students and interns to inspect seismograms visually: "Traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms."
This manual method was slow and limited the number of earthquakes that could be cataloged. The development of computers brought early algorithms, but they struggled to distinguish small earthquakes from background noise, such as traffic in a city.
Advancements Before Modern AI
One significant improvement was a technique called template matching. Scientists created templates from known, human-verified earthquakes. A computer program would then scan seismogram data for patterns that closely matched these templates.
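At its core, template matching is a sliding correlation: the program moves a known waveform along the continuous record and flags windows where the normalized correlation is high. The minimal NumPy sketch below illustrates that idea with a made-up threshold; production systems correlate across many stations and components at once.

```python
import numpy as np

def template_matches(record: np.ndarray, template: np.ndarray, threshold: float = 0.8):
    """Return sample offsets where the normalized cross-correlation exceeds threshold.

    A bare-bones illustration of template matching; real pipelines use many
    templates, many stations, and far more careful normalization.
    """
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for start in range(len(record) - n + 1):
        window = record[start:start + n]
        std = window.std()
        if std == 0:
            continue
        w = (window - window.mean()) / std
        cc = float(np.dot(t, w))  # normalized correlation coefficient in [-1, 1]
        if cc >= threshold:
            hits.append((start, cc))
    return hits
```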
Template matching proved highly effective. In 2019, researchers at Caltech used this method on data from Southern California and identified 1.6 million new earthquakes, a tenfold increase over the existing catalog. Most of these were tiny events, with magnitudes of 1.0 or less.
The Scale of the Caltech Project
The 2019 template matching project in Southern California was a massive undertaking. It required 200 Nvidia P100 GPUs running for several days to process the data, highlighting the method's high computational cost.
However, template matching has two major drawbacks. First, it requires an extensive, pre-existing library of labeled earthquakes, which is not available for many regions of the world. Second, it is extremely computationally expensive, limiting its accessibility for many research teams.
How AI Models Changed the Game
Modern AI models, particularly those based on deep learning, have overcome these challenges. They are faster, more accessible, and can be applied to regions without comprehensive existing earthquake catalogs.
“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights newsletter.
One of the most influential models is the Earthquake Transformer, developed by a Stanford University team around 2020. This model borrows concepts from image recognition technology. While an image recognition AI like AlexNet uses two-dimensional analysis to find features in pictures, Earthquake Transformer uses one-dimensional analysis to find patterns over time in seismogram data.
The model analyzes small time segments of vibration data to identify low-level features, then combines them to recognize more complex patterns characteristic of an earthquake, such as the sequence of P and S waves.
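The sketch below is not the Earthquake Transformer itself, just a schematic PyTorch model illustrating the general idea: one-dimensional convolutions slide over a three-component seismogram and produce per-sample scores that could be trained to highlight P arrivals, S arrivals, and overall detections.

```python
import torch
import torch.nn as nn

class TinyPhasePicker(nn.Module):
    """Schematic 1-D convolutional picker, not the actual Earthquake Transformer.

    Input:  (batch, 3, n_samples)  -- three components of ground motion
    Output: (batch, 3, n_samples)  -- per-sample scores for detection, P, and S
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=11, padding=5),   # low-level waveform features
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, padding=5),  # combinations of those features
            nn.ReLU(),
            nn.Conv1d(32, 3, kernel_size=11, padding=5),   # detection / P / S score channels
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(waveform))

# Example: 60 s of 100 Hz three-component data -> 6,000 samples per channel.
scores = TinyPhasePicker()(torch.randn(1, 3, 6000))
print(scores.shape)  # torch.Size([1, 3, 6000])
```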
The Power of Data
The success of these AI models is heavily dependent on large, high-quality datasets for training. The Earthquake Transformer was trained on the Stanford Earthquake Dataset (STEAD), which contains 1.2 million labeled seismogram segments from around the world.
According to Professor Byrnes, the current generation of models is "comically good" at identifying and classifying earthquakes. They consistently find 10 or more times the number of quakes previously known in a given area. Unlike template matching, these AI models are very small and can run efficiently on consumer-grade CPUs.
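As a rough illustration of how lightweight this has become, the sketch below runs a pretrained deep-learning picker on a single machine using the open-source SeisBench package. The weight name, input file, and exact return types are assumptions here and vary between SeisBench versions.

```python
# A sketch of running a pretrained picker on ordinary hardware, assuming the
# open-source SeisBench package is installed; weight names and return types
# differ between versions, and "station_day.mseed" is a hypothetical file.
import seisbench.models as sbm
from obspy import read

model = sbm.EQTransformer.from_pretrained("original")  # downloads pretrained weights
stream = read("station_day.mseed")  # hypothetical day-long, three-component record

# annotate() returns continuous probability traces for detection, P, and S,
# which can then be scanned for peaks to build an earthquake catalog.
annotations = model.annotate(stream)
print(annotations)
```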
Practical Applications of Enhanced Detection
The ability to create vastly more detailed earthquake catalogs has unlocked new scientific possibilities, even if it hasn't solved the challenge of earthquake prediction.
One of the most striking applications is in volcanology. Volcanic activity generates numerous small earthquakes, and mapping them helps scientists understand the structure of underground magma systems. In 2022, researchers used an AI-generated earthquake catalog to create a detailed 3D image of the magma network beneath Hawaii.
The study provided direct evidence for a magma connection between two volcanic structures, Pāhala and Mauna Loa. This level of detail could improve real-time monitoring and eruption forecasting.
Managing Big Data
AI is also critical for new data-intensive monitoring techniques. Distributed Acoustic Sensing (DAS) uses fiber-optic cables to measure seismic activity with incredible resolution, capable of detecting individual footsteps. However, a single DAS array can produce hundreds of gigabytes of data per day.
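Those data volumes follow directly from the acquisition parameters. The numbers below are purely illustrative, not figures from any particular deployment, but they show how a DAS array reaches hundreds of gigabytes per day.

```python
# Back-of-the-envelope DAS data rate with illustrative (not reported) parameters.
channels = 2000          # sensing points along the fiber
sample_rate_hz = 500     # samples per second per channel
bytes_per_sample = 4     # e.g., 32-bit values

bytes_per_day = channels * sample_rate_hz * bytes_per_sample * 86_400
print(f"{bytes_per_day / 1e9:.0f} GB per day")  # ~346 GB/day with these numbers
```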
Jiaxuan Li, a professor at the University of Houston, explained that AI tools are essential for accurately timing earthquake phases in this massive volume of data. He stated that without AI, much of his work would have been "much harder."
The Road Ahead and Current Limitations
Despite these successes, the ultimate goal of accurately predicting earthquakes remains elusive. Judith Hubbard, a Cornell University professor, noted that the current applications of AI are more technical and less sensational than prediction.
There are also concerns within the scientific community about the over-application of AI. Some researchers feel pressure to incorporate AI into their work, even when it may not be the best tool for the job. Hubbard and Bradley have observed papers that use AI techniques but "reveal a fundamental misunderstanding of how earthquakes work."
Nonetheless, the impact on earthquake detection is undeniable. In just a few years, AI has almost completely replaced older methods for a core task in seismology, providing a clearer and more detailed view of the dynamic planet beneath our feet.