
An X-Ray View of Sensor Data: Reduce downtimes by predicting failure probability

Goals of our client

  • Predict the failure probability of large medical devices
  • Optimize stockholding for replacement parts and operations management
  • Improve customer service and reduce downtimes

Our approach with predictive maintenance

  • Read system logs and record sensor data centrally
  • Recognize patterns based on historical data
  • Calculate failure probability automatically
  • Merge sensor data with information from the data warehouse

Our client’s results

  • Central view of a wide range of business information for the first time
  • Fast and targeted evaluation of data for different departments
  • Increase of product and service quality throughout the company

Initial situation

Large medical devices such as CT scanners and magnetic resonance imaging systems are a major investment for doctors’ practices and hospitals. Unexpected breakdowns not only cause high costs but also jeopardize patients’ medical care.

For manufacturers, this means many spare parts must be kept in stock permanently, tying up considerable capital. If a device breaks down, technicians have to take numerous spare parts to the customer on the off chance they are needed. And the parts that aren’t needed have to be thoroughly checked before they can be restocked.

A German company decided to analyze the automatically transmitted sensor data centrally and, based on an analytical model, calculate how likely individual parts are to fail.

Big data: information from log files

The big data challenge is a familiar one, discussed again and again in recent years: in many companies, data lies idle without being put to effective use.

All the manufacturer’s large medical devices send log files with system-relevant status information to the respective development department every day. In the past, however, this data was only spot-checked and evaluated manually by experts.

The information could neither be used across departments, nor were targeted analyses on a broad data basis possible. As a consequence, forecasts of a device’s failure probability could not be made. For the manufacturer, this meant:

  • Many of the spare parts had to be kept in stock and sent to customers on spec if a device broke down.
  • Restocking the parts that weren’t needed required lengthy checks.
  • There was always the risk of high costs caused by device downtimes, because the response times in the service level agreements were tight.

To reduce costs in the long term while further increasing service quality, the company started a predictive maintenance initiative and got CBTW on board. The first step was to record all the data from the transmitted log files centrally and enrich it with information from the SAP system. In parallel, an analytical model was created to detect recurring patterns in the data.
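The case study does not disclose the model itself, so the following is only a minimal sketch of the general idea: a classifier trained on historical, aggregated sensor features that outputs a failure probability for a part. The feature names, the label definition, and the tiny synthetic data set are illustrative assumptions, not the manufacturer’s actual schema or method.

```python
# Minimal sketch: estimate failure probability from aggregated sensor
# features with a logistic regression. All column names and the synthetic
# data below are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-device aggregates, joined from log files and SAP data.
data = pd.DataFrame({
    "avg_coil_temp":     [38.1, 44.5, 39.2, 47.8, 36.9, 45.3, 40.0, 48.2],
    "error_count_7d":    [0, 4, 1, 6, 0, 5, 1, 7],
    "hours_in_use":      [2100, 5400, 2600, 6100, 1900, 5800, 3000, 6400],
    "failed_within_30d": [0, 1, 0, 1, 0, 1, 0, 1],  # historical outcome
})

X = data.drop(columns="failed_within_30d")
y = data["failed_within_30d"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The predicted probability for a new reading is what drives spare-part
# stocking and service-scheduling decisions.
new_reading = pd.DataFrame(
    [{"avg_coil_temp": 46.0, "error_count_7d": 5, "hours_in_use": 5900}]
)
print(f"Estimated failure probability: {model.predict_proba(new_reading)[0, 1]:.2f}")
```

In practice such a model would be retrained regularly on the full historical data and validated against actual breakdowns before its output is used for stockholding decisions.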

Save time and money with predictive maintenance

Knowing that a machine is about to fail, and intervening at the right moment, saves companies time and money. Gaining this knowledge depends on the analytical evaluation of the existing data.

[Figure: Predictive maintenance architecture]

Automated evaluation: added value from raw data

Step by step, the project team implemented the software components needed to extract the usable information and evaluate it.

  • The large devices send log files to the manufacturer via a file subscription system.
  • There, the relevant sensor data and events are extracted using a Hadoop cluster (a minimal parsing sketch follows this list).
  • The aggregated data is subsequently stored centrally in a Teradata Data Warehouse.
  • For the analysis and further processing of the Hadoop data, the company opted for a business intelligence platform from the software provider SAS.
  • The team generated the analytical models as the basis for predictive maintenance with SAS Enterprise Miner.
  • Thanks to the SAS Visual Analytics component’s easy-to-use web interface, business users can quickly create visualizations from the information held in in-memory storage.
  • The performance of the BI solution proved decisive for user acceptance of the predictive maintenance solution: the data volumes are huge, and the in-memory analytics answers even complex queries fast and reliably.
  • Long-term data storage was also part of the project: for the first 90 days, the data is held in 30 TB of what’s known as warm data storage.
  • After that, the system automatically moves the data to a 200 TB cold data storage area before it is finally archived (see the storage-tier sketch below).
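As a rough illustration of the extraction step, here is a minimal, self-contained parsing and aggregation sketch in plain Python. The real pipeline does this on a Hadoop cluster at far larger scale; the log format, field names, and sample lines are illustrative assumptions.

```python
# Minimal sketch of the extraction step: parse transmitted log lines and
# aggregate sensor readings and error events per device. The log format
# and all field names are illustrative assumptions.
from collections import defaultdict
from statistics import mean

raw_log = """\
2024-03-02T11:04:55 device=MRT-0042 sensor=helium_level value=71.2
2024-03-02T11:05:55 device=MRT-0042 sensor=helium_level value=70.9
2024-03-02T11:04:57 device=CT-0017 sensor=tube_current value=412.0
2024-03-02T11:06:01 device=CT-0017 event=ERROR code=E-1105
"""

readings = defaultdict(list)     # (device, sensor) -> sensor values
error_counts = defaultdict(int)  # device -> number of error events

for line in raw_log.splitlines():
    # Everything after the timestamp is a set of key=value fields.
    fields = dict(part.split("=", 1) for part in line.split()[1:])
    if "sensor" in fields:
        readings[(fields["device"], fields["sensor"])].append(float(fields["value"]))
    elif fields.get("event") == "ERROR":
        error_counts[fields["device"]] += 1

# Aggregates of this shape would be loaded into the central data
# warehouse and joined with SAP master data for further analysis.
for (device, sensor), values in sorted(readings.items()):
    print(f"{device} {sensor}: mean={mean(values):.1f}, errors={error_counts[device]}")
```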
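The 90-day warm window and the two tier sizes come from the project description; when exactly data is archived is not stated, so the 365-day threshold below is an invented assumption. A minimal sketch of such a tiering rule:

```python
# Minimal sketch of the tiered storage rule: warm for the first 90 days,
# then cold, then archive. The 90-day warm window is from the project
# description; the 365-day archive threshold is an invented assumption.
from datetime import date, timedelta

def storage_tier(created: date, today: date, archive_after_days: int = 365) -> str:
    age_days = (today - created).days
    if age_days <= 90:
        return "warm"     # fast access tier, 30 TB in this project
    if age_days <= archive_after_days:
        return "cold"     # cheaper bulk tier, 200 TB in this project
    return "archive"

today = date(2024, 3, 2)
for days_old in (10, 120, 400):
    print(f"{days_old:>3} days old -> {storage_tier(today - timedelta(days=days_old), today)}")
```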

Use cases: diverse use of the central data

The implementation of the BI platform and the development of predictive maintenance triggered a ripple effect at the manufacturer. It became clear that even vast quantities of data could be analyzed with good performance. At the same time, the evaluations contained far more valuable information overall thanks to the broader data basis. Further data sources were gradually connected to the platform, and new use cases continue to emerge.

  • Sales – The manufacturer can assess the use of the installed devices more accurately and draw up tailored offers for customers
  • Product development – If specific parts are more vulnerable to faults, they can be optimized or replaced when new products are developed
  • Marketing – The better you know a device’s capacity utilization, the more selectively you can advertise it

All this also saves the company license and maintenance fees, since the analyses run on one central platform. Furthermore, the manufacturer benefits from having data and analyses from the individual business departments available centrally and from being able to share them within the organization. The CBTW experts continue to support productive operation, the further refinement of the pattern recognition, and the ongoing expansion of the BI platform. Significant cost savings and remarkable insights are already evident in many areas.

Building a central data pool requires very detailed planning and in-depth knowledge of the individual software components.

But those who take up the challenge profit very quickly from a wealth of insights.