If you don’t enjoy working under pressure, spare a thought for the global oil industry’s electrical submersible pumps (ESPs). Powered by a high‑voltage electric motor capable of operating at temperatures of up to 300°F, the world’s fleet of 130,000 or so ESPs—located on land or under the sea bed—pumps oil to the surface at pressures of up to 5,000 pounds per square inch, from wells as deep as 12,000 feet.
A pump’s normal service life is difficult to predict. Oil reservoir operating pressures and temperatures, plus the viscosity of the oil being pumped, have an obvious bearing. But so too does the presence of natural gas in the oil and water mix that an ESP is pumping, and the abrasion caused by sand and other solids in the mix.
All of which matters a great deal to a surprisingly wide group of people, from crude oil pipeline operators to Wall Street analysts, says Mike Bahorich, CTO of Apache Corp., a Houston-based oil exploration and production company that has operations in Australia, the United States, Egypt, Argentina, and the United Kingdom’s North Sea.
A source of operational challenges for years, ESPs are even the subject of an industry collaborative effort to document and quantify the locations and operating conditions of more than 100,000 pumps. By combining data mining in this database with applied predictive analytics, Apache is becoming more proactive in its pursuit of improved reliability for the ESPs in its control, reduced production losses, and increased output through greater overall equipment uptime.
“ESPs are very important pieces of equipment in our industry, and 60 percent of the world’s oil production runs through them,” says Bahorich. “At Apache, we are very, very interested in having more reliable ESPs, because it helps us to give Wall Street more reliable numbers.”
And from an operational perspective, too, he adds, a better understanding of when a given pump might fail allows ESP maintenance to be tied in with the maintenance of the production platform with which it is associated. “It’s much more efficient to carry out maintenance on both of them at the same time, because there’s no additional loss of production, with either a platform or an ESP idled as the other is repaired,” says Bahorich. “So having an understanding of when an ESP might fail is very valuable to us.”
But how to gain that better understanding of a likely failure timescale? It’s a challenge that has eluded the industry for years, exacerbated by the facts that ESPs vary extensively in configuration and no two ESPs experience exactly the same set of operating conditions, Bahorich says.
Enter Ayata, a 10-year-old analytics specialist based in Austin, Texas. A former research laboratory now operating commercially, Ayata has seen its analytics applications implemented by companies such as Cisco, Dell and Microsoft for use cases related to market projections, customer service systems and product launches, respectively.
Synthesizing big data, mathematical sciences, business rules and machine learning, Ayata’s software is designed to predict future outcomes and prescribe decision options to respond to these outcomes, says chief executive Atanu Basu. “We call it prescriptive analytics,” he says. “The idea is to extend predictive analytics by making a prediction, and providing guidance as to how to respond to that prediction.”
It was the ability to suggest remedial action as well as predict failure that intrigued Bahorich, leading to a decision to engage with Ayata in late 2012.
“Apache didn’t know that it needed prescriptive analytics,” is how Basu characterizes the discussions. “What it did know was that it was losing around 10,000 barrels a day of oil through ESP failure, and that this loss was painful.” (This week, one barrel of crude oil cost about $100.)
First step: data mining an oil industry collaborative database, ESP-RIFTS, which contains data on around 104,000 ESPs, contributed by partners such as ConocoPhillips, BP, ExxonMobil, Chevron, Shell, and Apache itself.
“The plan was to look at what combination of factors led to more reliable ESPs, and from this to then pull out broad themes around what makes a pump fail, and how that failure occurs—which set of conditions, and which set of individual circumstances,” he explains.
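The kind of configuration mining described above can be sketched in a few lines. This is a hypothetical illustration only: the record fields, equipment options and run-life figures below are invented, not drawn from ESP-RIFTS, but the shape of the analysis (group historical runs by equipment configuration, then compare average run life) matches the plan Bahorich outlines.

```python
from collections import defaultdict
from statistics import mean

# Toy records standing in for ESP-RIFTS entries; all values are illustrative.
runs = [
    {"motor": "standard", "seal": "single", "run_days": 310},
    {"motor": "standard", "seal": "tandem", "run_days": 420},
    {"motor": "high_temp", "seal": "single", "run_days": 520},
    {"motor": "high_temp", "seal": "tandem", "run_days": 760},
    {"motor": "high_temp", "seal": "tandem", "run_days": 690},
]

# Group run lives by (motor, seal) configuration.
by_config = defaultdict(list)
for r in runs:
    by_config[(r["motor"], r["seal"])].append(r["run_days"])

# Mean run life per configuration reveals which combinations last longest.
mean_run_life = {cfg: mean(days) for cfg, days in by_config.items()}
best_config = max(mean_run_life, key=mean_run_life.get)
```

In practice the real analysis spans dozens of variables and some 104,000 pump records, but the principle is the same: surface the combinations of factors associated with longer run lives.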
And armed with this information, Apache could move from the purely predictive, to the prescriptive—figuring out what sort of conditions would precipitate failure, and what could be done about it.
“There are about 40 ‘actionable’ variables, and around 25 ‘non-actionable’ variables,” says Bahorich. “You can’t change variables such as reservoir pressure and reservoir temperature, but you can change variables such as the specific type of ESP, its motor, the type of seal assembly, and other characteristics, and then put together a pump that is more resilient in coping with those specific characteristics.”
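The prescriptive step Bahorich describes can be thought of as an optimization: hold the non-actionable variables fixed (the well's reservoir pressure and temperature) and search over the actionable ones (equipment choices) for the configuration with the lowest predicted failure risk. The sketch below uses an invented toy risk function in place of a model learned from real data; the variable names, options and coefficients are assumptions for illustration, not Ayata's method.

```python
from itertools import product

# Toy risk model standing in for one learned from historical ESP data.
# Coefficients and option names are illustrative only.
def failure_risk(pressure_psi, temp_f, motor, seal):
    base = 0.00004 * pressure_psi + 0.0008 * temp_f
    motor_penalty = {"standard": 0.12, "high_temp": 0.02}[motor]
    seal_penalty = {"single": 0.10, "tandem": 0.03}[seal]
    return min(1.0, base + motor_penalty + seal_penalty)

def prescribe(pressure_psi, temp_f):
    """Given fixed (non-actionable) well conditions, return the actionable
    equipment configuration that minimizes predicted failure risk."""
    options = product(["standard", "high_temp"], ["single", "tandem"])
    return min(options, key=lambda o: failure_risk(pressure_psi, temp_f, *o))

# A hot, high-pressure well: the conditions can't be changed, the pump can.
best = prescribe(5000, 300)
```

The point of the exercise is exactly what Bahorich describes: for a given well, the model suggests the pump build most likely to survive those specific conditions.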
Or, as Basu puts it: “We can say that a pump in well five, in field three, is going to fail in the next three months—and that you can do this, this and this to avoid or put back that predicted failure.”
To date, however, the project is still firmly at the ‘predictive’ stage: ‘prescriptive’ is the end point, but it has yet to be reached.
“We’ve loaded the data, and we’re learning from it, and getting roughly actionable ideas, and tweaking our understanding,” says Bahorich. “‘Prescriptive analytics’ is where we’re headed, but right now we’re still at the predictive stage.”
And as for a likely ROI, he remains tight-lipped, citing commercial confidentiality.
“We’re certainly learning a lot about which combination of ESP characteristics is optimal,” Bahorich says. “But what I can say is that the oil and gas industry will benefit significantly from big data and analytics, and that the chances of this project not working out at all are zero. That just won’t happen.”
Malcolm Wheatley, a freelance writer and contributing editor at Data Informed, is old enough to remember analyzing punched card datasets in batch mode, using SPSS on mainframes. He lives in Devon, England, and can be reached at firstname.lastname@example.org.