August 15, 2018 — Intel and Philips recently tested two healthcare use cases for deep learning inference models using Intel Xeon Scalable processors and the OpenVINO toolkit. One use case focused on X-rays of bones for bone-age-prediction modeling, the other on computed tomography (CT) scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.
Until recently, there was one prominent hardware solution for accelerating deep learning: graphics processing units (GPUs). By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models.
Central processing units (CPUs) – in this case Intel Xeon Scalable processors – do not have those same memory constraints and can accelerate complex, hybrid workloads, including larger, memory-intensive models typically found in medical imaging. For a large subset of artificial intelligence (AI) workloads, Intel Xeon Scalable processors can better meet data scientists’ needs than GPU-based systems, according to Intel. As Philips found in the two recent tests, this enables the company to offer AI solutions at lower cost to its customers.
AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritization of cases, better outcomes for more patients and reduced costs for hospitals.
Deep learning inference applications typically process workloads in small batches or in a streaming manner, meaning they do not rely on large batch sizes, and CPUs are well suited to such low-batch and streaming applications. Intel Xeon Scalable processors in particular offer an affordable, flexible platform for AI models, especially in conjunction with tools like the OpenVINO toolkit, which helps deploy pre-trained models efficiently without sacrificing accuracy.
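As an illustration of this kind of low-batch, CPU-based deployment, the sketch below uses the OpenVINO Inference Engine Python API to load a pre-trained model and run single-image (streaming-style) inference on a CPU. The model and input names are hypothetical placeholders, and this reflects the toolkit's general inference interface, not the specific pipeline Intel and Philips used in these tests.

```python
# Minimal sketch: CPU inference with the OpenVINO Inference Engine Python API.
# Model and image paths are hypothetical placeholders, not the actual Philips models.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load a model already converted to OpenVINO's Intermediate Representation (IR).
net = ie.read_network(model="lung_segmentation.xml", weights="lung_segmentation.bin")
input_name = next(iter(net.input_info))

# Compile the network for the CPU plugin (Intel Xeon Scalable in these tests).
exec_net = ie.load_network(network=net, device_name="CPU")

# Streaming-style inference: one image (batch size 1) per request.
image = np.random.rand(*net.input_info[input_name].input_data.shape).astype(np.float32)
result = exec_net.infer(inputs={input_name: image})
print({name: output.shape for name, output in result.items()})
```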
These tests show that healthcare organizations can implement AI workloads without expensive hardware investments.
The bone-age-prediction model went from an initial baseline test result of 1.42 images per second to a final tested rate of 267.1 images per second after optimizations – an increase of 188 times. The lung-segmentation model surpassed its target of 15 images per second, improving from a baseline of 1.9 images per second to 71.7 images per second after optimizations – an increase of roughly 38 times.
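For context, throughput figures like these are typically obtained by timing a fixed set of inference requests and dividing the image count by the elapsed time; the speed-up factor is the ratio of optimized to baseline throughput. The snippet below is a generic, hypothetical timing harness (`run_inference` and `images` are placeholders), not Intel's benchmarking methodology.

```python
# Illustrative throughput measurement: images per second over a set of requests.
import time

def measure_throughput(run_inference, images):
    start = time.perf_counter()
    for image in images:
        run_inference(image)          # one streaming request per image
    elapsed = time.perf_counter() - start
    return len(images) / elapsed      # images per second

# Speed-up is the ratio of optimized to baseline throughput, e.g.
# 267.1 / 1.42 ≈ 188x for bone-age prediction, 71.7 / 1.9 ≈ 38x for lung segmentation.
```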
Running healthcare deep learning workloads on CPU-based systems offers direct benefits to companies like Philips, because it allows them to offer AI-based services without driving up costs for their end customers, according to Intel. With results like these, companies like Philips can offer AI algorithms for download through an online store as a way to increase revenue and differentiate themselves from growing competition.
Multiple trends are contributing to this shift:
- As medical image resolution improves, medical image file sizes are growing – many images are 1 GB or greater;
- More healthcare organizations are using deep learning inference to more quickly and accurately review patient images; and
- Organizations are looking for ways to do this without buying expensive new infrastructure.
For more information: www.intel.com, www.usa.philips.com/healthcare