For manufacturers, utilities, transportation networks, and other asset-heavy businesses, unplanned equipment downtime can be extraordinarily expensive.
In factories, there’s the cost of lost production output, which can range from lost profit to overtime payments as managers try to catch up. There are also the costs of idled operatives to consider, and the waste of any materials passing through a production line or piece of equipment at the point of failure.
And there’s the cost of the repairs themselves—which, unlike planned maintenance, must take place urgently, again leaving the business open to overtime payments and weekend work shifts for technicians and engineers.
Unplanned maintenance is expensive. Marshall Institute, an asset management consultancy, reckons that a good rule of thumb is that breakdown or emergency repairs cost three times as much as preventive, predictive, or planned corrective maintenance. Others put the figure even higher.
Yet breakdowns—even when totally unexpected—are rarely as unpredictable as harassed managers believe. In the run-up to a breakdown, equipment often gives off warning signs that sudden failure is possible. The problem: these signals can be buried—literally—in the noise and mass of data that typifies plant floors and other asset-intensive operations.
Gauges on machinery such as gearboxes, for instance, can detect changes in vibration levels, oil temperatures, and pressures. Precise measurements of components and products being produced, too, can provide early indications of manufacturing machinery parts that are becoming worn. Special microphones can detect noises inaudible to the human ear. And sensors installed in machinery can methodically count usage- or cycle-time-based wear patterns: the revolutions of a gear, the strokes of a pump, or the number of times a press rises and falls.
But clearly, the volume of such data will be high—especially when the data collection process is extended not just across critical bottleneck pieces of plant and machinery, but to the entire factory floor or transportation fleet.
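The basic idea behind mining such sensor streams for warning signs can be illustrated with a simple sketch—not IBM’s method, just a minimal, invented example: flag any reading that drifts well outside the statistical baseline established by the readings immediately before it.

```python
# Illustrative sketch (not IBM's product): flag readings that deviate
# more than k standard deviations from a trailing baseline window.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, k=3.0):
    """Return indices of readings more than k sigma away from the
    mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Stable vibration readings, then a sudden spike at index 20.
readings = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 0.97, 1.0, 1.01,
            0.99, 1.0, 1.02, 0.98, 1.01, 1.0, 0.99, 1.02, 1.0, 0.98,
            2.5]
print(flag_anomalies(readings))  # the spike at index 20 is flagged
```

A production system would of course use far more sophisticated models, and would run them across thousands of sensor channels at once—which is exactly where the data volumes become a problem.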
Hence an announcement March 21 by IBM of a new software and consulting services capability, intended to harvest and predictively analyze such large datasets in order to minimize the threat of equipment failure. SAS, it should be noted, has a broadly similar offering, and Accenture too has capabilities in this area.
IBM’s Predictive Asset Optimization offering brings together capabilities such as maintenance-specific IBM SPSS visualization and analytics from IBM’s business analytics division, data handling technologies from the company’s information management division, and asset management expertise and consulting services.
Described as part of a larger focus within IBM on big data and analytics, the solution spans hardware, software, services, and research. It harnesses data from instrumented assets to identify irregularities in the manufacturing process and the products that stem from it, and thereby forecasts a range of asset performance risks before a problem ever arises. Alerts are displayed on an employee’s tablet, smartphone, or Web browser, “with recommended corrective actions,” the company says.
According to IBM, a rollout in Cambridge, Ontario, saw the city’s approach to maintenance evolve from a “break‑fix” repair-led mode of operation to a more proactive, preventative maintenance-led approach on sewer lines and other infrastructure projects.
Michael Hausser, director of asset management and supporting services in Cambridge’s Transportation and Public Works Department, said in a statement that analytics gives his agency insight into what to expect, adding: “We were heading toward a point where reliability of service would be reduced and we’d be beyond our resource capacity to re‑actively resolve issues in a timely manner. We are in transition to be more proactive and gain efficiencies in day to day maintenance management activities.”
And the need for a consulting aspect to the Predictive Asset Optimization offering is neatly illustrated by an anecdote related by Eric Brethenoux, IBM director of predictive analytics.
With two identical engine cylinder lines, one in Stuttgart and one in Munich, global auto manufacturer BMW was seeing discrepancies in the size of cylinders produced by one line, but not the other. Usually, such out-of-tolerance production indicates a maintenance issue. But here, the deviations from nominal showed a time-based dispersion: a shift might start normally, then see parts drift out of tolerance, and then see them return to nominal—all without apparent cause, or operator intervention.
The cause? Not failing machinery, it turned out, but sunlight streaming through a window, warming up parts that were waiting to be worked on and raising them to a temperature outside the range that had been assumed when the production process was designed, Brethenoux says.
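The analytic move that surfaces this kind of pattern can be sketched simply: group measurements by when in the shift they were taken and compare each period’s mean deviation from nominal. The sketch below is hypothetical—the dimensions, tolerance, and data are invented for illustration, not drawn from BMW’s process.

```python
# Hypothetical sketch: reveal a time-based dispersion by grouping
# measurements by hour of shift. NOMINAL and TOLERANCE are invented
# illustrative values, not real process parameters.
from collections import defaultdict
from statistics import mean

NOMINAL = 84.00    # nominal cylinder bore, mm (illustrative)
TOLERANCE = 0.02   # allowed deviation, mm (illustrative)

def deviation_by_hour(samples):
    """samples: list of (hour_of_shift, measured_mm) tuples.
    Returns {hour: mean deviation from nominal, in mm}."""
    by_hour = defaultdict(list)
    for hour, measured in samples:
        by_hour[hour].append(measured - NOMINAL)
    return {h: mean(devs) for h, devs in sorted(by_hour.items())}

# Parts measured through a shift: mid-shift readings drift high,
# then return to nominal -- the signature of an environmental cause.
samples = [(0, 84.001), (0, 83.999), (1, 84.000), (1, 84.002),
           (2, 84.025), (2, 84.031), (3, 84.028), (3, 84.024),
           (4, 84.002), (4, 83.998)]
for hour, dev in deviation_by_hour(samples).items():
    status = "OUT" if abs(dev) > TOLERANCE else "ok"
    print(f"hour {hour}: mean deviation {dev:+.3f} mm [{status}]")
```

A drift that appears and disappears at the same point in every shift points away from mechanical wear, which tends to worsen monotonically, and toward an environmental or scheduling cause—exactly the distinction the consultants were able to draw.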
Brethenoux says the offering also includes training for users in how to build maintenance-specific predictive analytics models, and in how those trained users can go on to train others. The goal is for organizations to develop a predictive analytics-based preventative maintenance center of excellence within their businesses.
Malcolm Wheatley, a freelance writer and contributing editor at Data Informed, is old enough to remember analyzing punched card datasets in batch mode, using SPSS on mainframes. He lives in Devon, England, and can be reached at email@example.com.