In the search for new sources of oil and natural gas, energy companies are guided by evidence gleaned from huge arrays of data gathered by sending sound waves deep into the ground. At MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), experts in machine learning and computer vision are developing mathematical tools that can speed up the examination of such data sets and help human analysts “see” geological structures where oil and natural gas are likely to be trapped.
As the world’s demand for oil and natural gas continues to grow, energy companies must search for deposits of those hydrocarbons in increasingly difficult locations, including complex geological structures buried many kilometers under the seafloor. The Earth’s geological history suggests where to look. Over millions of years, layers of material were deposited on the seafloor, some of them rife with decomposing organisms that under high pressure became oil or natural gas. Those hydrocarbons would diffuse upward and escape unless either an impermeable layer were deposited on top or the layers were broken and tilted, creating a space where the hydrocarbons were trapped.
To identify such geological features deep underground, energy companies often perform seismic imaging. Using air guns or explosives, they send sound waves deep into the earth. How those waves are reflected by different underground layers provides information that sophisticated signal-processing techniques can turn into a 3-dimensional data set—a “seismic volume”—that represents the subsurface.
However, identifying geological structures within a seismic volume is difficult. The image on this page provides a sense of the challenge. This cross section is a single vertical slice from a volume based on acoustic information from a region below the seafloor. It provides a good side view of the geological layers that are present.
During an analysis, experienced company geologists or engineers would hand-mark this slice to note potentially interesting features. But then they would have to examine thousands of other slices oriented in various directions, building up an idea of the shapes of the layers and structures within the 3-dimensional space. After picking out likely trapping spots, they would generate more detailed images and examine them to see if their guesses held up. Although mathematical procedures, or “algorithms,” help by generating the different views, the whole process can take months. That’s a problem. Companies pay millions of dollars to lease time-limited drilling rights in certain areas, so it’s important that analyses of the seismic data move forward quickly. And since drilling a single well can cost as much as $100 million, those analyses also must be as accurate as possible.
Performing such analyses is the focus of the Sensing, Learning, and Inference Group in CSAIL. “Our group focuses on statistical analysis of complex sensor data from sources such as high-resolution video cameras used for computer vision,” says John W. Fisher III, senior research scientist at CSAIL and head of the group. “To extract higher level information from such data sets, we develop algorithms that can identify objects, shapes, patterns, edges—all the things that are important to human analysts.” That capability is highly valuable to the analyst poring over seismic data to find geological structures where oil may hide.
“We aren’t experts on seismic data, but we collaborate with those who are in order to leverage our expertise in mathematical algorithms, machine learning, and statistical inference for their applications,” says Fisher. “We bring a different perspective to the data than the trained geologists and geoscientists do, and we are often able to adapt methods we’ve developed for processing other data types to their problems.” The MIT team incorporates the experience and insights of the human experts into their algorithms, which then continue to learn on their own by using information gained in past analyses to perform subsequent ones more efficiently.
“So how do we take this complex morass of data and pull out the structures that may hold hydrocarbons?” asks Fisher. One approach is to look for shapes—and an important shape is the salt dome. Salt domes form deep underground when salt beds from ancient oceans flow upward through heavier sedimentary layers above them. The salt extrudes upward (like globules rising inside a lava lamp, says Fisher) and in the process tilts and blocks off adjacent sedimentary layers, creating pockets where hydrocarbons can be trapped. To retrieve those hydrocarbons, an energy company must drill into the sedimentary rock but not into the salt dome itself because it will contain no oil. Knowing the shape of a given salt dome in some detail is therefore critical.
To see such a 3-dimensional shape, the MIT team studies videos made of sequences of slices from the seismic volume viewed in rapid succession—essentially a seismic video. (Think of “flip” books in which the line drawing on each page is slightly different from the ones before and after it. Flip the pages quickly and you see a movie.) In a video that moves downward through the seismic volume, a salt dome first appears as a small circle—its top—that stands out because of its fine-grained texture. As the video proceeds, the circle steadily grows larger as the slices cut through the dome’s expanding girth and then shrinks again as the slices approach the bottom.
In computer vision, a standard approach to recognizing such a shape in a video is to impose a rule saying that no pixel can change significantly from one slice to the next. Any major change may indicate the edge, or boundary, of a shape. However, given the vast amount of seismic data, monitoring changes in all of the pixels would be a huge computational task.
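The frame-differencing rule described above can be sketched in a few lines of Python with NumPy. This is only an illustration on synthetic data, not the group's actual pipeline; the threshold, array sizes, and changed region are all invented for the example.

```python
import numpy as np

def boundary_candidates(prev_slice, curr_slice, threshold=0.2):
    """Flag pixels whose value changes sharply between consecutive
    slices of a seismic volume; large changes may mark a shape boundary."""
    diff = np.abs(curr_slice - prev_slice)
    return diff > threshold  # boolean mask of candidate boundary pixels

# Illustrative 2-D slices (real seismic slices would be far larger)
rng = np.random.default_rng(0)
prev_slice = rng.random((64, 64))
curr_slice = prev_slice.copy()
curr_slice[20:40, 20:40] += 0.5  # a 20x20 region that changed abruptly

mask = boundary_candidates(prev_slice, curr_slice)
print(mask.sum())  # prints 400: only the changed region is flagged
```

Note that this naive version must examine every pixel of every slice, which is exactly the computational burden the paragraph describes.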
Jason Chang, graduate student in electrical engineering and computer science (EECS) and a Shell-MIT Energy Fellow, has made the task more efficient by assigning those changes different probabilities of being a significant boundary. His first job was to establish a model based on common sense. If this emerging shape is a salt dome, what are we likely to see over time? “Knowing how salt domes grow, you’d think there’d be a gradual change from one slice to the next. You wouldn’t think it’d spawn a new region,” says Chang. “A boundary would probably grow or shrink in the same area as in the previous slice.”
So his algorithm constantly compares current and previous observations. If a change occurs in the same region as a previous change did, those observations are assigned a high probability of being a boundary. A change occurring in a new region is assigned a low probability. But if subsequent changes occur in that new region, those changes are assigned a higher probability of being a boundary—possibly the emerging top of a different salt dome. If a given change is not followed by further changes in the same region, its assigned probability declines until the algorithm stops tracking it (see the image above). As a result, calculations are required only for changes that exhibit a growing probability of being the evolving edge of a salt dome—an approach that significantly reduces the computational load.
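The update rule just described can be caricatured in a few lines of Python. This is a hypothetical sketch, not Chang's actual algorithm (which computes probabilities over shapes rather than labeled regions): the region labels, growth and decay rates, and pruning threshold are all invented for illustration.

```python
GROW = 0.3    # boost when a change recurs in a tracked region
DECAY = 0.5   # multiplicative decay when a tracked region goes quiet
PRUNE = 0.05  # stop tracking below this probability
NEW = 0.1     # initial probability for a change in a new region

def update_tracks(tracks, changed_regions):
    """Update per-region boundary probabilities for one new slice."""
    updated = {}
    for region, p in tracks.items():
        if region in changed_regions:
            updated[region] = min(1.0, p + GROW)  # recurring change
        else:
            q = p * DECAY                          # quiet: decay
            if q >= PRUNE:
                updated[region] = q                # keep tracking for now
            # else: probability too low -- drop the region entirely
    for region in changed_regions:
        if region not in updated:
            updated[region] = NEW                  # change in a new region
    return updated

# Region "A" changes in every slice; "B" appears once and then goes quiet.
tracks = {}
for changes in [{"A"}, {"A"}, {"A", "B"}, {"A"}, {"A"}]:
    tracks = update_tracks(tracks, changes)
print(sorted(tracks))  # prints ['A']: "B" has been pruned from tracking
```

By the final slice only region "A" is still tracked, which mirrors the efficiency argument in the text: computation is spent only on changes whose boundary probability keeps growing.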
However, there’s a further complication. As a salt dome pushes upward through the sedimentary layers, it can split into several branches. In a video that moves downward through the seismic volume, the first evidence of that salt dome may therefore be several distinct circular regions, which in subsequent images gradually coalesce into the main body of the structure. In the analysis, those circular regions must be handled together as one shape.
“This sounded pretty straightforward conceptually, but mathematically it was a challenge because we had to compute probabilities over shapes, and shapes turned out to be very complicated objects in this application,” says Fisher. “For example, we couldn’t assume a commonly used notion of simply connected shapes, meaning that each shape is confined to a single region with no branching.” Chang’s work allows for such variations while still maintaining computational efficiency—a critical aspect when processing such enormous quantities of data.
In complementary work, graduate student Dahua Lin of EECS is developing a technique for analyzing seismic videos that involves tracking not shapes but motion itself. In general, a seismic video will progress smoothly because the sequential slices (the pages in our flip book) won’t differ substantially. But occasionally there may be a discontinuity in the flow. Such an abrupt change could be caused by the presence of a fault, a feature created when sections of the earth shift relative to one another, sometimes trapping hydrocarbons in the process. Detecting such discontinuities required a way to estimate motion in the seismic video—the challenge taken on by Lin.
One method of estimating motion in video is called optical flow. It follows the trajectories of individual points or objects from one frame to the next and then combines all those trajectories to produce a so-called “motion field.” Optical flow works well for, say, tracking the motion of a crowd of people walking on a New York City sidewalk, where individuals are likely to be walking in all different directions. “But that approach wasn’t really formulated as a way for estimating persistent motion, which is what we observe in our seismic video,” says Fisher. To illustrate persistent motion, he points to the movement of cars on a highway. In that case, there’s an overall pattern of motion that’s pretty organized and consistent. “There’s no single point or location that describes the motion,” he says. “You can’t look at a small section of the field—as if through a little window—and know what the overall motion is. Your perception of motion is the combined change of appearance across the entire scene.”
Describing such motion therefore requires a method that can use all the data available in the entire video. It must aggregate concurrent observations over a long period of time and infer a common motion pattern. Drawing on a mathematical discipline called differential geometry, Lin is developing an algorithm that can perform that analysis, not only to estimate the persistent motion field but also to identify anomalies in the data that are not consistent with such motion and are thus possible indicators of geological faults.
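The idea of aggregating many observations into one common motion pattern can be illustrated with a much-simplified stand-in; Lin's actual method draws on differential geometry and geometric flows, which this sketch does not attempt. Here each frame contributes a hypothetical 2-D shift estimate, the median shift serves as the persistent motion field, and frames that deviate sharply from it are flagged as possible discontinuities. All names, numbers, and thresholds are invented for the example.

```python
import numpy as np

def persistent_motion(shifts, tol=1.0):
    """Aggregate per-frame 2-D shift estimates into one persistent
    motion vector (the median shift) and flag frames whose shift
    deviates from it by more than `tol` -- candidate discontinuities
    such as geological faults."""
    shifts = np.asarray(shifts, dtype=float)
    motion = np.median(shifts, axis=0)             # common motion pattern
    resid = np.linalg.norm(shifts - motion, axis=1)
    return motion, np.flatnonzero(resid > tol)     # anomalous frame indices

# Illustrative data: a steady drift of (1, 0) per frame, except one
# frame (index 7) where the apparent motion jumps abruptly.
shifts = [(1.0, 0.0)] * 12
shifts[7] = (4.0, 3.0)
motion, anomalies = persistent_motion(shifts)
print(motion, anomalies)  # persistent motion [1. 0.]; anomaly at frame 7
```

The key design point matches the text: the persistent motion is inferred from the whole sequence at once, and anomalies are defined only relative to that aggregate pattern, not to any single pair of frames.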
The mathematics involved in these new techniques is highly sophisticated. “But there’s no need for the seismic analysts to learn differential geometry,” says Fisher. “That’s what we do in our research group.” He stresses that their job is not to replace the human experts but rather to provide statistical models that can help them do their work more quickly and easily. And he sees the relationship as “win-win.” In working with the seismic analysts and data, he and his team have an opportunity to apply and extend their techniques to the important real-world task of finding hydrocarbon resources to meet near-term energy demand—and, says Fisher, “perhaps to help buy time for other researchers who are developing alternative energy sources so that we can reduce our reliance on oil and gas.”
This research was supported by Shell, a Founding Member of the MIT Energy Initiative. Further information can be found at groups.csail.mit.edu/vision/sli and in the following publications:
J. Chang and J. Fisher. Analysis of Orientation and Scale in Smoothly Varying Textures. 2009 IEEE International Conference on Computer Vision, Kyoto, Japan, September 27–October 4, 2009.
J. Chang and J. Fisher. Efficient MCMC Sampling with Implicit Shape Representations. 2011 IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, Colorado, June 20–23, 2011.
D. Lin, E. Grimson, and J. Fisher. Modeling and Estimating Persistent Motion with Geometric Flows. 2010 IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, California, June 13–18, 2010.