May 2016
Columns

What's new in exploration

Exploration therapy: Admit who you are, and be prepared for stares
William (Bill) Head / Contributing Editor

This May, I have been attending the 47th annual Offshore Technology Conference as part of my public-private research duties. OTC is the largest offshore oil-and-gas mega-show in the world, and I am grateful that there are still enough of us left to pull it together. For explorationists, OTC features presentations organized by SEG, AAPG and SPE. I come to these meetings for therapy about what I am, not just about what I should be doing next. “Hello, my name is Bill, and I am a geologist” (or geophysicist, depending on who I am talking to).

I will wander around the exhibits, asking if anyone using all that hardware knows what explorationists do. Few will. Some actually have a sense that their future is tied by mystery glue to ours. I do see papers at OTC discussing exploration of deepwater reservoirs, with good case histories from Mexico and Brazil. But engineers won’t be attending. Pity.

I remember a seafloor project in the 1980s for Shell, offshore Spain, in deep water. The water depth was about 900 ft, and planners were thinking about going to 1,200 ft. I, the explorationist, worked with engineers at Conoco when they designed their first TLP (Hutton, 1983–1984) for deployment in about 480 ft of water. It’s okay for explorationists to talk to engineers. Today, we need to be competent when drilling at more than 12,000 ft subsea, and be prepared for reservoir pressures anywhere from over 18,000 psi to less than 4,000 psi while evaluating prospects.

A prototype robot “juggie” deploying new radio geophone.

There will be many government oil companies at OTC that appear curious about technology to develop their coastlines. All booth displays are still in English, some with subtitles in Chinese. Is the Kingdom buying assets in the Gulf of Mexico (GOM)? They have started an offshore business unit. Offshore seismic, marine and seafloor, is about the only game going right now, but not in the U.S. GOM.

But that could be changing. I witnessed one geophysical start-up company’s prototype for robotic land seismic deployment. They came to the Research Partnership to Secure Energy for America (RPSEA) more than a year ago for research funding, but RPSEA had to decline, since Speaker of the House Paul Ryan’s (R-Wis.) 2014 budget pretty much killed new project funding in our research program for E&P, safety or environmental mitigation. Go figure.

Private equity stepped in and mentored the new geo company, resulting in some cool pre-field robot manufacturing. At the least, they will change the surveyor’s role in land seismic forever. If successful, we will no longer refer to geophone hustlers as jug crews, but as broadband signal displacement specialists who, in pairs, work 15,000-to-25,000-geophone spreads. Really. While the point of the prototype (see photo on this page) is operational efficiency, I could not help but deduce that the phone housing mechanism will significantly improve coupling of the geophone or accelerometer to the subsurface. That may be worth the entire technology readiness level effort, especially in sand, snow and muck (http://geophysicaltechnology.com).

Signal processing is an area that needs improvement. I have held in my hand a three-component, fiber-optic sensor that can measure amplitudes at less than -4 on the Richter scale and has a useful bandwidth of about 2 Hz to over 12,000 Hz. Earlier versions I have bragged about were 1,200 Hz, 1,500 Hz and 2,000 Hz. We are way past that. Wait, the “however” here is how to process data containing a thousand times more information on every trace, at a minimum sample rate of 20 times per millisecond. Terabytes per what? No one has ever seen seismic data like these before. But we will.
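To put “terabytes per what?” in rough perspective, here is a back-of-the-envelope sketch in Python, assuming a hypothetical 25,000-station, three-component spread recorded continuously at 20 samples per millisecond with 4-byte samples; every number is an assumption for illustration, not a vendor spec.

# Back-of-the-envelope raw data rate for a dense 3C land spread (all values assumed).
stations = 25_000          # upper end of the 15,000-to-25,000-geophone spreads mentioned above
components = 3             # three-component sensors
sample_rate_hz = 20_000    # 20 samples per millisecond
bytes_per_sample = 4       # assumed 32-bit samples

bytes_per_second = stations * components * sample_rate_hz * bytes_per_sample
terabytes_per_hour = bytes_per_second * 3_600 / 1e12
print(f"{bytes_per_second / 1e9:.0f} GB/s, roughly {terabytes_per_hour:.0f} TB of raw samples per hour")

Under those assumptions, the spread generates about 6 GB/s, which is more than 20 TB every hour, before any processing at all.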

Seismic CDP theory may hold up, but what about the need to filter unwanted signals? Today, would we call this level of information noise? What is the interpretation protocol? NASA sign-bit and laser modulation methods are useless if you want to preserve amplitude, and we do want preservation. If you have any money left for investment, this is a possible game-changer.
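Band-limiting will still be part of any answer to the noise question. Purely as a minimal sketch, and not anyone’s published workflow, here is a zero-phase band-pass filter in Python using SciPy; zero-phase filtering is one common way to remove out-of-band energy without shifting arrival times, and the corner frequencies and toy trace below are invented for illustration.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(trace, low_hz, high_hz, fs_hz, order=4):
    # Zero-phase band-pass: filter forward and backward so arrival times are not shifted.
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs_hz, output="sos")
    return sosfiltfilt(sos, trace)

fs = 20_000.0                                # 20 samples per millisecond, as above
t = np.arange(0, 1.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 30.0 * t) + 0.3 * np.random.randn(t.size)   # toy 30-Hz signal plus noise
filtered = bandpass(trace, low_hz=5.0, high_hz=200.0, fs_hz=fs)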

Ninety-five percent of 3D/2D processing is pseudo-vector. Explorationists use regularized bins of scalar data, azimuthally directed. While vector signal recording is possible, real vector, simultaneous multi-dimensional processing is beyond us, even in wide azimuth. [Your marine cable is a scalar; your land cable is commonly processed as a single, directional channel per phone.] With improved, future-generation processing, maybe true vectors will be the norm, especially for subsalt, rock mechanics, fluid substitution, and deep vertical work. There have been decades of work on vector P-wave signals and directional shear-wave energy. Imagine with me a large areal data set of vector P and vector S data, with 1,000-fold coverage, in which any attribute in the z domain can be interpreted with properly visualized displays of amplitude.
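To make the scalar-versus-vector distinction concrete, here is a toy sketch, assuming made-up three-component traces and a simple rotation of the horizontal components into radial and transverse directions for a known source-receiver azimuth; the function name, conventions and values are mine, for illustration only.

import numpy as np

def rotate_to_radial_transverse(north, east, azimuth_deg):
    # Rotate the two horizontal components into radial/transverse for a given
    # source-receiver azimuth (measured clockwise from north). A scalar workflow
    # would simply keep one channel per bin instead.
    az = np.deg2rad(azimuth_deg)
    radial = north * np.cos(az) + east * np.sin(az)
    transverse = -north * np.sin(az) + east * np.cos(az)
    return radial, transverse

n_samples = 2_000
z, north, east = (np.random.randn(n_samples) for _ in range(3))   # stand-in 3C trace
radial, transverse = rotate_to_radial_transverse(north, east, azimuth_deg=37.0)
# A true vector workflow carries (z, radial, transverse) together through processing.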

More rock physics and fluid-content analysis breakthroughs will occur. Not just improvements on Poisson’s ratio from the square root of a log of a log, squared, minus a log of a log, squared, divided by another log of a log, squared, minus the square of the second log of a log; or maybe you used a modulus divided by a density from a log, then ... Subsurface models might just better match our seismic gathers, without stretch or hockey sticks. One can hope.
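For the record, the relation being poked at reduces, when computed from sonic-log Vp and Vs, to a short expression; here is a minimal sketch with made-up log values, purely to show the clean form.

import numpy as np

def poissons_ratio(vp, vs):
    # Dynamic Poisson's ratio from compressional and shear velocities:
    # nu = (Vp^2 - 2*Vs^2) / (2*(Vp^2 - Vs^2)); only the Vp/Vs ratio matters.
    r2 = (np.asarray(vp) / np.asarray(vs)) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

vp = np.array([14_000.0, 12_500.0, 9_800.0])   # made-up sonic-log Vp, ft/s
vs = np.array([8_000.0, 7_000.0, 5_200.0])     # made-up sonic-log Vs, ft/s
print(poissons_ratio(vp, vs))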

About the Author
William (Bill) Head
Contributing Editor
William (Bill) Head is a technologist with over 40 years of experience in U.S. and international exploration.