Hitachi Review

AI Technology to Innovate Diagnostic Imaging


Technological Innovations Accelerating Worldwide Digitalization

Highlight

The medical field is actively adopting AI to address challenges such as improving work efficiency, diagnostic quality, and medical care outcomes. Diagnostic imaging is considered one of the medical fields in which AI will reach practical application earliest. Although much of this work is still at the research stage, devices with FDA approval for diagnostic support are starting to appear in the USA, and moves toward their practical application are in progress. Under its hybrid learning concept, Hitachi is driving the research and development of unique AI technology for diagnostic imaging support that delivers highly accurate, medically relevant results. This article introduces two specific examples: automatic measurement technology for cardiovascular ultrasound, which enables high-precision examinations that do not depend on the skill or experience of the physician, and computer-aided detection/diagnosis technology for computed tomography lung cancer screening, which enables high-precision nodule detection.

Author introduction

Masahiro Ogino

Medical Systems Research Department, Center for Technology Innovation – Healthcare, Research & Development Group, Hitachi, Ltd. Current work and research: Development of diagnostic imaging solutions using image processing and machine learning. Society memberships: The Japanese Society of Medical Imaging Technology (JAMIT) and the Society for Imaging Informatics in Medicine (SIIM).

Masahiro Kageyama, Ph.D.

Medical Systems Research Department, Center for Technology Innovation – Healthcare, Research & Development Group, Hitachi, Ltd. Current work and research: Improvement of image quality and video quality using signal processing, and development of diagnostic imaging solutions using machine learning. Society memberships: The Institute of Image Information and Television Engineers (ITE).

Peifei Zhu

Medical Systems Research Department, Center for Technology Innovation – Healthcare, Research & Development Group, Hitachi, Ltd. Current work and research: Development of diagnostic imaging solutions using image processing and machine learning. Society memberships: The IEEE Computer Society.

1. Introduction

With the rapid development of artificial intelligence (AI) technology in recent years, there are high expectations in the medical field that AI will enable the early detection of diseases and optimal therapies, and research and development of AI for medical use is increasing rapidly. In particular, diagnostic imaging, including X-ray, ultrasound (echo), and magnetic resonance imaging (MRI), is one of the fields expected to take up the practical application of AI earliest, and to produce significant innovations such as automatic image scanning and a more efficient examination and diagnosis workflow. In the technology-leading USA, computer-aided diagnosis (CAD) software and automatic diagnostic solutions that incorporate AI technology have begun to be approved as medical devices(1), (2), and their practical application in the medical field is expected to advance worldwide in the near future.

This article describes the trends in AI relating to diagnostic imaging, and introduces diagnostic imaging support AI technology that Hitachi is working on.

2. Trends in AI Relating to Diagnostic Imaging

Figure 1—Future of Diagnostic Imaging Support Research
CADe supports the detection of lesion candidate areas; CADx goes further to determine whether a lesion is benign or malignant; and CAP, for example, makes future predictions from images using scores and probabilities.

At a lecture in 2016, Professor Hinton of the University of Toronto, Canada, a leader in the deep learning technology that sparked the third AI boom, stated, “In five years, deep learning is going to do better than radiologists.”(3) The suggestion that diagnostic imaging could be replaced by AI in the near future came as a huge shock to radiologists the world over. Since then, start-ups in the diagnostic imaging AI field have appeared one after another, including DeepMind Technologies Limited, part of Google LLC, which offers support for retinal disease diagnosis; Enlitic, Inc., which offers support for lung cancer detection/diagnosis; and Arterys, Inc., which offers support for heart disease diagnosis. Currently, there are over 100 such companies in the USA. For example, the system developed by DeepMind can detect diseases that lead to blindness with 94% accuracy(4), which is comparable to the performance of expert physicians.

Figure 1 shows the expected way forward for AI in diagnostic imaging. Based on how easily AI technology can be incorporated into the medical field and on implementation costs, its use is first expected in the automatic adjustment of diagnostic imaging equipment and in automatic measurement, aimed at reducing the workload of physicians and technicians. Following this, computer-aided detection (CADe) is expected to spread with the aim of supporting the detection of lesion candidate areas. AI is then expected to be applied to computer-aided diagnosis (CADx), which determines whether a lesion is benign or malignant. Ultimately, AI is expected to help achieve computer-aided prediction (CAP), which makes disease-state predictions using scores and probabilities. Currently, the leading research targets anomaly detection in chest X-rays and breast X-rays (mammography); however, applications supporting lung and head examination and diagnosis using X-ray computed tomography (CT) and MRI are also advancing, and deployment across all diagnostic imaging devices and areas is expected to accelerate in the future.

Although, as noted above, it may appear that AI could replace many areas of diagnostic imaging, in practice physicians base their diagnoses not only on images but also on a great deal of other information, such as patient interviews and medical histories. The work of diagnostic imaging will therefore not be completely replaced by AI; rather, by automating parts of the process with AI, radiologists will be able to focus their abilities on the complex judgments involved in diagnosis. As a result, AI will enhance the capabilities of physicians themselves, ultimately leading to the provision of higher-quality medical care.

3. Research and Development Concept of Diagnostic Imaging Support AI

Figure 2—Hybrid Learning
This combines the medical findings built up as a manufacturer of medical diagnostic imaging equipment over many years, the image processing technology accumulated across numerous industrial fields, and the machine learning represented by deep learning.

Figure 2 shows an overview of the basic concept of hybrid learning, Hitachi's unique AI technology concept for diagnostic imaging. By combining operational technology (OT) and IT, namely the medical knowledge it has built up over many years as a manufacturer of medical diagnostic imaging equipment, the image processing technology it has accumulated across several industrial fields, and the machine learning represented by deep learning, Hitachi is focused on developing unique AI technology that leads to highly accurate, medically relevant results.

In the application of AI to diagnostic imaging at clinical sites, there are two major issues in addition to performance. One is providing the basis for the AI's judgment, and the other is securing a large amount of training data. In hybrid learning, setting medical knowledge, such as image filters capable of capturing tumor features, as initial values in the deep learning network configuration can be expected to improve processing transparency and enable efficient learning from small amounts of data. This method aims to take advantage of the characteristics of data-driven machine learning while modeling and incorporating a wealth of existing knowledge.

The following introduces the automatic measurement technology and CADe technology currently being developed under the concept of hybrid learning.

4. Achieving Ultrasound Examination Using Automatic Measurement Technology Not Dependent on the Skill of a Physician

4.1 Background and Purpose

Figure 3—Echocardiography
Variations among conventional (2D) measurement values and the increased time taken for measurements were reduced using 3D modeling and AI automation.

Ultrasound diagnostic devices are widely used in heart disease examinations. However, because they require complicated probe operation techniques, variations in measurement values between operators and increases in measurement time have become major challenges. The USA in particular has a large number of heart disease patients, with approximately 610,000 people dying from the disease annually(5); heart disease is the leading cause of death regardless of gender. Reducing the on-site workload by improving examination efficiency is therefore an important issue in the quest for early diagnosis. The skill level of sonographers in the USA is high, and manual measurement supplemented by semi-automatic assist functions is the mainstream there. However, the establishment of three-dimensional (3D) measurement guidelines and automatic measurement technology is an important theme in the quest for radical improvement.

Hitachi is therefore studying a streamlined examination workflow based on automation technology for standard plane extraction and heart-pump-function measurement, which are the foundations of cardiac examinations.

The guidelines(6) for cardiac examinations define six standard planes so that uniform examinations can be performed, and it is essential to extract these planes correctly. In current practice, examiners manually locate each of the six planes one by one, which is very time-consuming. Furthermore, within the extracted standard planes, the end-diastole and end-systole images are selected, the endocardial contour of the left ventricle is extracted, and the pump function of the heart is evaluated by calculating area and volume values from the contour information obtained in each temporal phase. Conventionally, examiners manually trace the endocardial contour required for this evaluation, which is very laborious and, moreover, produces results that vary depending on the skill of the examiner.
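As a concrete illustration of the pump-function calculation described above, the following sketch (not Hitachi's implementation) computes left-ventricular volumes from per-slice contour radii by the method of disks and derives the ejection fraction; the radii and slice thickness are illustrative values only.

```python
import numpy as np

def lv_volume(radii_mm, slice_thickness_mm):
    """Left-ventricular volume by the method of disks: each endocardial
    contour cross-section is treated as a circular disk and summed."""
    radii = np.asarray(radii_mm, dtype=float)
    return float(np.sum(np.pi * radii**2 * slice_thickness_mm))  # mm^3

def ejection_fraction(edv, esv):
    """Pump function: percentage of end-diastolic volume expelled per beat."""
    return 100.0 * (edv - esv) / edv

# Illustrative per-slice radii (mm) from extracted endocardial contours
ed_radii = [18, 22, 24, 23, 20, 14]   # end-diastole
es_radii = [12, 15, 17, 16, 13, 9]    # end-systole

edv = lv_volume(ed_radii, 8.0)
esv = lv_volume(es_radii, 8.0)
ef = ejection_fraction(edv, esv)      # a physiologically plausible value
```

With automatic contour extraction, such volume and area values can be computed for every temporal phase without manual tracing.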

In recent years, two-dimensional (2D) array probes capable of acquiring 3D cardiac images at high speed have become popular, and Hitachi commercialized such products in 2018 (see Figure 3). Although these probes are expected to reduce the complexity of probe operation, challenges remain in the efficiency of standard plane extraction from 3D images and of contour extraction in pump-function evaluations. Hitachi is therefore developing guideline-based learning for automatic echocardiography measurement, which combines the aforementioned guidelines (anatomical knowledge) with machine learning. The methodology is outlined below.

4.2 Method

Figure 4 shows an overview of the automatic plane extraction technology. Planes are extracted quickly and accurately by using the positional relationships between the heart's feature points and the planes defined in the guidelines. First, local regions cut from a cardiac 3D image are input into a random-forest-based classifier, and the region most similar to each feature point, such as the cardiac apex or mitral annulus, is extracted. Then, from the extracted feature points, the reference apical four-chamber (A4C) plane is extracted using the positional relationships recommended by the guidelines: the A4C is the plane passing through the apex and the mitral annulus such that the diameter of the mitral valve is maximized. The key point is that the guideline content is incorporated into the machine learning identification process when performing plane extraction, which also contributes to the reliability of the application's output. Moreover, using the A4C plane as a reference and utilizing the anatomical characteristics described in the guidelines, it has become possible to extract the other five planes at high speed (see Figure 5)(7).
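The geometric part of this procedure can be sketched as follows. This is a simplified illustration, not Hitachi's implementation: it assumes the classifier has already returned an apex point and two mitral-annulus points, takes the A4C plane through those three landmarks, and derives a further plane by rotating about the long axis at a fixed angle.

```python
import numpy as np

def plane_from_landmarks(apex, annulus_a, annulus_b):
    """Fit the reference plane through three detected landmarks (apex and
    two mitral-annulus points). Returns a unit normal n and offset d such
    that the plane satisfies n . x = d."""
    apex, annulus_a, annulus_b = map(np.asarray, (apex, annulus_a, annulus_b))
    n = np.cross(annulus_a - apex, annulus_b - apex)
    n = n / np.linalg.norm(n)
    return n, float(n @ apex)

def rotate_plane_about_axis(n, axis, angle_rad):
    """Rotate a plane normal about the apex-to-annulus long axis
    (Rodrigues' rotation formula): further standard planes can be derived
    from the A4C reference at fixed anatomical angles."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return n * c + np.cross(axis, n) * s + axis * (axis @ n) * (1 - c)

# Illustrative landmark coordinates (mm) assumed from the classifier
apex = [0.0, 0.0, 80.0]
ann_a, ann_b = [-15.0, 0.0, 0.0], [15.0, 0.0, 0.0]

n, d = plane_from_landmarks(apex, ann_a, ann_b)          # A4C normal
mid_annulus = (np.asarray(ann_a) + np.asarray(ann_b)) / 2
long_axis = np.asarray(apex) - mid_annulus
n2 = rotate_plane_about_axis(n, long_axis, np.pi / 3)    # e.g. a rotated cut
```

Because the rotated plane still contains the long axis, it automatically passes through the apex and the mitral annulus, mirroring the anatomical constraints used when deriving the remaining planes.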

Next, Figure 6 shows an overview of the automatic contour extraction technology. Based on knowledge of the anatomical structure of the heart, Hitachi has devised a two-step contour extraction method. In step 1, an initial contour is obtained by fitting the average shape of the heart wall to detected feature points, and in step 2, the contour is refined to fit the heart wall by applying an active shape model (ASM). Most conventional contour extraction methods that use an ASM require the initial contour to be set close to the target contour and then iterate toward convergence from there(8), (9); when the initial contour is far from the target, convergence is poor, so setting an optimal initial contour is essential. Accordingly, Hitachi studied a method of setting the initial contour by detecting three anatomical feature points, one apex point and two mitral-annulus points, and fitting the average shape of the heart wall to the detected points. The ASM then enables accurate contour extraction by deforming the initial contour to fit where the gradient concentration is strong(10).
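Step 1 can be illustrated with a minimal sketch (an assumption-laden simplification, not the published method): a least-squares similarity transform aligns the landmark points of an average shape with the detected feature points, and the same transform places the full average contour in the image.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping source landmarks onto detected landmarks,
    in the style of the Umeyama method (reflection handling simplified)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dc.T @ sc)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # keep a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    s = S.sum() / (sc ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def place_initial_contour(mean_contour, mean_landmarks, detected_landmarks):
    """Step 1: map the average heart-wall shape into the image using the
    transform that aligns its landmarks (apex, two mitral-annulus points)
    with the detected ones."""
    s, R, t = fit_similarity(mean_landmarks, detected_landmarks)
    return s * (np.asarray(mean_contour) @ R.T) + t

# Illustrative 2-D landmarks: apex plus two annulus points, with the
# detected set scaled by 2 and shifted by (3, 2)
mean_landmarks = [[0.0, 1.0], [-0.5, 0.0], [0.5, 0.0]]
detected = [[3.0, 4.0], [2.0, 2.0], [4.0, 2.0]]
init = place_initial_contour(mean_landmarks, mean_landmarks, detected)
```

Starting the ASM from a contour placed this way keeps the subsequent gradient-driven deformation within its convergence range.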

Figure 4—Standard Plane Automatic Extraction Algorithm
Planes are extracted quickly and accurately by using the positional relationships between the heart's feature points and the planes defined in the guidelines.

Figure 5—Standard Six Planes Extraction
Using the A4C plane as a reference, and by utilizing the anatomical characteristics described in the guidelines, the other five planes are extracted at high speed.

Figure 6—Algorithm for Heart Wall Contour Extraction
Based on knowledge of the anatomical structure of the heart, the initial contour is extracted by fitting the average shape of the heart wall to the feature points.

4.3 Results

This method is expected to improve operability and significantly reduce the burden on examiners. In-house preliminary evaluations using clinical data achieved the following performance: a standard plane extraction accuracy of more than 80%, a contour extraction accuracy of more than 90%, and a processing time of less than one second for both functions [without using a graphics processing unit (GPU)]. In the future, Hitachi intends to obtain clinical approval with the goal of practical application. It also plans to promote cloud reporting for future global expansion and to expand sales of automatic measurement technology incorporating AI.

5. Achieving Highly Accurate Lesion Detection Functionality with CT Lung Cancer CAD Technology

5.1 Background and Purpose

Lung cancer is the world's leading cause of cancer death, which makes its early detection and diagnosis an essential challenge. The National Lung Screening Trial (NLST) conducted in the USA in 2011 demonstrated that low-dose chest CT screening (hereinafter "lung cancer CT screening") is effective in reducing the lung cancer death rate among heavy smokers(11). In Japan, population-based screening is carried out using chest X-rays, while opportunistic screening uses lung cancer CT screening in addition to chest X-rays. In examinations using CT imaging, a physician must review more than 100 images per examinee, imposing a considerable psychological and physical burden. Moreover, to ensure high-quality interpretation, the images must be reviewed by two physicians, which increases costs as well as the burden on medical staff.

In China, which accounts for 30% of the world's cancer cases and deaths, lung cancer cases are most numerous among men, with high smoking rates and air pollution such as PM2.5 cited as contributing factors(12). There is a great need for early detection measures, and China is predicted to become the world's largest market for lung imaging analysis software that incorporates AI(13).

Hitachi has been carrying out research into lung cancer CADe systems since the late 1990s(14). It has developed a technology called the modeled kernel convolutional neural network (MK-CNN), which combines deep learning with its years of experience and knowledge. The aim is to improve the accuracy of lesion detection by constructing a hybrid computer-aided detection (CAD) system that combines this with rule-based detection technology based on physicians' insights (see Figure 7).

While it is relatively easy to achieve high performance with deep learning, the learning results are a black box, creating challenges such as the difficulty of tuning performance and of explaining the AI's reasoning process. In particular, the relationship between performance and the hyperparameters that a designer determines before training starts (the number of network layers, the number of nodes in each layer, the initial values of the convolution kernels in each node, etc.) has not been systematized, so a great deal of trial and error has been necessary to tune the hyperparameters for highly accurate lesion detection.

To address this issue, Hitachi has developed a new hyperparameter design method named the modeled kernel method, achieving effective, streamlined network training while significantly reducing the amount of trial and error(15). In this design method, the functions required of each layer constituting a convolutional neural network (CNN) are modeled in advance, a convolution kernel is designed based on the frequency characteristics of each model, and these kernels are used as initial values when training starts. Specifically, general image filters such as low-pass filters (LPFs) and high-pass filters (HPFs) are combined on a rule basis, and the weighting factors of the convolution kernels of all nodes in the CNN are provisionally set before training commences. This CNN is called a modeled kernel convolutional neural network (MK-CNN). An overview is given below (see Figure 8).
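The idea of provisionally setting kernels from modeled filters can be sketched as follows; the specific 1-D filter taps are illustrative assumptions, not the published design.

```python
import numpy as np

# 1-D prototypes: a binomial low-pass filter and a Laplacian-like high-pass
lpf = np.array([1.0, 2.0, 1.0]) / 4.0
hpf = np.array([-1.0, 2.0, -1.0])

def separable_kernel(fz, fy, fx):
    """Build a 3-D convolution kernel as the outer product of three 1-D
    filters, one per axis (z, y, x), mirroring the rule-based LPF/HPF
    combination used to set initial weights."""
    return np.einsum('i,j,k->ijk', fz, fy, fx)

# Node modeling the upper boundary of the simplified lesion:
# flat in x and y (low-pass), sharp change in z (high-pass)
lesion_kernel = separable_kernel(hpf, lpf, lpf)

# Node modeling a vessel running along z: sharp changes in x and y
# (high-pass), flat along z (low-pass); here approximated as a sum of
# two separable filters
vessel_kernel = separable_kernel(lpf, hpf, lpf) + separable_kernel(lpf, lpf, hpf)
```

Arrays like these become the provisional convolution-kernel weights of individual CNN nodes before training; because each tap has a clear physical meaning, its effect on performance can be adjusted without blind trial and error.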

Figure 7—Hybrid CAD System
A hybrid CAD system that combines rule-based algorithms and deep learning achieves lesion detection based on learning results that incorporate physicians' insights.

Figure 8—Modeled Kernel Convolutional Neural Network
The functions required for each layer of a convolutional neural network are modeled in advance, a convolution kernel is designed based on the frequency characteristics of each model, and these are used as initial values when learning commences.

5.2 Method

Figure 9—Example of Input/Output Images of Lung Cancer CAD
Because lesions and blood vessels in cross-section produce similarly shaped shadows, more accurate lesion detection requires learning characteristics that distinguish the two shapes with high precision.

Images acquired from lung cancer CT screening (three-dimensional volume data) contain not only the shadows of lesions but also shadows, such as those of blood vessels in cross-section, that are similar in shape to lesions. Thus, for more accurate lesion detection, it is essential to learn characteristics that distinguish the two shapes with high precision. This is done by modeling, in the first layer of the MK-CNN, a function for detecting lesions, which are the target of detection, and a function for detecting blood vessels, which can cause false positives (see Figure 8). First, the shape of the lesion to be detected is simplified by treating it as a 26-sided polyhedron chamfered in 45° increments. On the upper boundary of this polyhedron, the change in voxel values is small in the x and y directions, which are flat, whereas the voxel values change sharply in the z direction. A three-dimensional filter is therefore designed by combining LPFs in the x and y directions with an HPF in the z direction, and its weights are provisionally set as the convolution kernel of a node. In the same way, the shapes of blood vessels, which can cause false positives, are simplified and regarded as cylinders along the z direction. At the boundary of such a cylinder, the voxel values change sharply in the x and y directions but change little in the z direction. A three-dimensional filter is therefore designed by combining a two-dimensional HPF in the x and y directions with an LPF in the z direction, and its weights are provisionally set as the convolution kernel of another node. Although these nodes alone can only detect the upper boundary of the polyhedron and the cylinder boundary in the z direction, rotating each convolution kernel geometrically in three-dimensional space makes it possible to detect boundaries in any direction. To do this, the same number of nodes as boundaries (26) are prepared, and the convolution kernel that detects the upper boundary is rotated three-dimensionally in 45° increments and provisionally set for each of the 26 nodes, thereby detecting the 26-sided lesion. The boundaries of the blood vessels (cylinders) are handled in the same way. Note that, although the characteristics of each filter (LPF, HPF) and the number of boundary directions (number of nodes) shown in Figure 8 need to be adjusted while observing the overall performance of the MK-CNN, this can be done easily because their physical meaning is clear.
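The 26-direction kernel bank can be illustrated with the following sketch. Rather than rotating one reference kernel as described above, it constructs an oriented derivative filter directly for each of the 26 neighborhood directions, which is a simplifying assumption made for illustration.

```python
import numpy as np
from itertools import product

# The 26 boundary directions: all nonzero offsets within a 3x3x3
# neighborhood, i.e. 45-degree increments in three dimensions
offsets = [o for o in product((-1, 0, 1), repeat=3) if any(o)]
directions = [np.array(o, float) / np.linalg.norm(o) for o in offsets]

def oriented_boundary_kernel(d, size=3):
    """Directional derivative filter: responds maximally to an intensity
    step whose normal points along d (one kernel per boundary node)."""
    r = size // 2
    ax = np.arange(-r, r + 1)
    grid = np.stack(np.meshgrid(ax, ax, ax, indexing='ij'), axis=-1)
    k = grid @ d                    # project each voxel offset onto d
    return k / np.abs(k).sum()      # normalize the response magnitude

# One provisional kernel per node, covering boundaries in every direction
bank = np.stack([oriented_boundary_kernel(d) for d in directions])
```

Each kernel in the bank is zero-sum, so it ignores flat regions and responds only to boundaries, just as the rotated reference kernels do.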

In the second and third layers, functions that shift boundaries in various directions, making lesions easier to detect and blood-vessel false positives harder to produce, are modeled as three-dimensional filters whose weights are provisionally set as the convolution kernels of each node. By connecting these layers to the first layer, it becomes possible to detect not only the simplified shapes outlined above but also lesions and blood vessels of various shapes. In the fourth layer, a function for suppressing the detection of false positives is modeled as a three-dimensional filter, and its convolution kernels are provisionally set in the same way.

All node parameters provisionally set by the above procedure (the convolution kernels, with biases initialized to zero) are then optimized using machine learning. That is, a large number of input images (three-dimensional volume data) are prepared together with paired training images (three-dimensional mask data) of the same size that show lesion areas as white and normal areas as black. The convolution kernels of all nodes are updated through repeated forward and backward propagation so that the CNN output images resemble the training images as closely as possible. This updating is performed a specified number of times to obtain the final MK-CNN parameters.
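The update procedure can be sketched in a greatly simplified form as follows (a single-kernel linear model rather than the full MK-CNN): a provisionally set kernel is refined by repeated forward and backward passes so that its output approaches the paired mask; the toy volume and mask are fabricated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(vol, size=3):
    """One row per voxel in the valid region: the flattened size^3
    neighborhood, so convolution becomes a matrix product."""
    r = size // 2
    Z, Y, X = vol.shape
    return np.array([vol[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1].ravel()
                     for z in range(r, Z - r)
                     for y in range(r, Y - r)
                     for x in range(r, X - r)])

# Toy input volume with one bright "lesion" and its paired mask image
vol = rng.normal(0.0, 0.1, (8, 8, 8))
vol[3:6, 3:6, 3:6] += 1.0
mask = np.zeros_like(vol)
mask[3:6, 3:6, 3:6] = 1.0

P = extract_patches(vol)             # inputs: (n_voxels, 27)
t = extract_patches(mask)[:, 13]     # targets: mask value at each center
w = rng.normal(0.0, 0.01, 27)        # stand-in for a provisional kernel

# Repeated forward/backward passes: update the kernel so the output
# resembles the mask (gradient descent on the mean squared error)
for _ in range(500):
    err = P @ w - t                  # forward pass and output error
    w -= 0.1 * (P.T @ err) / len(t)  # backward pass: MSE gradient step
final_mse = np.mean((P @ w - t) ** 2)
```

In the actual MK-CNN, the same principle applies across all layers at once, with the modeled kernels serving as the starting point instead of random weights.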

By using an MK-CNN designed and trained using machine learning in this way, it has become possible to accurately distinguish lesions from blood vessel planes (see Figure 9).

5.3 Results

In experiments to detect lung cancer lesions using only the MK-CNN, a detection rate of 93.4% was achieved for solid nodular lesions on the chest CT imaging database [the Lung Image Database Consortium image collection (LIDC-IDRI)](16) provided free of charge on the Internet by the National Cancer Institute (NCI) of the USA. The experiments used images of 816 cases (three-dimensional volume data) for machine learning and images of another 202 cases as evaluation data.
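A per-lesion detection rate of this kind can be computed as in the following sketch; the matching tolerance and the coordinates are illustrative assumptions, not the evaluation protocol used in the experiments.

```python
import numpy as np

def detection_rate(gt_centers, candidate_centers, tol_mm=10.0):
    """Per-lesion sensitivity: a ground-truth nodule counts as detected
    if any CAD candidate lies within tol_mm of its center."""
    gt = np.asarray(gt_centers, float)
    cand = np.asarray(candidate_centers, float)
    if len(gt) == 0:
        return 1.0
    if len(cand) == 0:
        return 0.0
    # Pairwise distances between every nodule and every candidate
    dist = np.linalg.norm(gt[:, None, :] - cand[None, :, :], axis=-1)
    return float(np.mean(dist.min(axis=1) <= tol_mm))

# Illustrative nodule and candidate centers (mm)
gt = [[40.0, 50.0, 60.0], [100.0, 80.0, 30.0], [70.0, 20.0, 90.0]]
cand = [[42.0, 49.0, 61.0], [69.0, 22.0, 88.0], [10.0, 10.0, 10.0]]
rate = detection_rate(gt, cand)    # two of the three nodules are hit
```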

In the future, Hitachi intends to establish this technology and accelerate its global development, covering many common and rare diseases, by taking advantage of hybrid learning's ability to learn efficiently from small amounts of data.

6. Conclusions

This article introduced automatic ultrasound measurement technology and CT lung cancer CAD technology that uses AI developed based on Hitachi's hybrid learning concept.

By combining the knowledge and insight that Hitachi has built up over many years with data-driven methods, it has achieved not only high precision and high reliability but also efficient learning. While contributing to the expansion of its solutions business in the field of diagnostic imaging, Hitachi intends, in cooperation with its overseas research and development bases in the USA, China, and elsewhere, to expand the scope of application of this technology globally while enhancing its value at clinical sites.

REFERENCES

1) US Food and Drug Administration, “Indications for Use,” (May 2017).
2) FDA News Release, “FDA Permits Marketing of Artificial Intelligence-based Device to Detect Certain Diabetes-related Eye Problems,” (Apr. 2018).
3) Creative Destruction Lab, “Geoff Hinton: On Radiology,” (Nov. 2016).
4) J. De Fauw et al., “Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease,” Nature Medicine, Vol. 24, pp. 1342–1350 (Sep. 2018).
5) Centers for Disease Control and Prevention, “Heart Disease Facts,” (Nov. 2017).
6) R. M. Lang et al., “Recommendations for Cardiac Chamber Quantification by Echocardiography in Adults: An Update from the American Society of Echocardiography and the European Association of Cardiovascular Imaging,” Journal of the American Society of Echocardiography, Vol. 28, pp. 1–39 (Jan. 2015).
7) P. Zhu et al., “Guideline-based Learning for Standard Plane Extraction in 3-D Echocardiography,” Journal of Medical Imaging, Vol. 5, No. 4, 044503 (Nov. 2018).
8) J. A. Noble et al., “Ultrasound Image Segmentation: A Survey,” IEEE Transactions on Medical Imaging, Vol. 25, pp. 987–1010 (Aug. 2006).
9) T. F. Cootes et al., “Active Shape Models – Their Training and Application,” Computer Vision and Image Understanding, Vol. 61, pp. 38–59 (Jan. 1995).
10) P. Zhu et al., “A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes,” World Academy of Science, Engineering and Technology, Vol. 10, pp. 47–51 (Nov. 2016).
11) The National Lung Screening Trial Research Team, “Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening,” The New England Journal of Medicine, Vol. 365, pp. 395–409 (Aug. 2011).
12) NLI Research Institute, “What are the Three Major Causes of Death in China? One in Four Die of ‘Cancer’,” NLI Research Institute Report (Jun. 2018), in Japanese.
13) S. Harris, “Machine Learning in Medical Imaging – World Market,” Signify Research (Jul. 2018).
14) S. Kusano et al., “Efficacy of Computer-aided Diagnosis in Lung Cancer Screening with Low-dose Spiral Computed Tomography: Receiver Operating Characteristic Analysis of Radiologists’ Performance,” Japanese Journal of Radiology, Vol. 28, pp. 649–655 (Nov. 2010).
15) M. Kageyama et al., “Nodal Detection CAD System for Lung Cancer CT Screening Using New Convolutional Neural Network Design Method,” The 75th Annual Meeting of the Japanese Society of Radiological Technology (JSRT) (Apr. 2019), in Japanese.
16) S. G. Armato III et al., “The Lung Image Database Consortium image collection (LIDC-IDRI),” The Cancer Imaging Archive, https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI