Artificial intelligence in automatic classification of invasive ductal carcinoma breast cancer in digital pathology images


Abstract

Background: Breast cancer is one of the leading causes of death in women. Early diagnosis and detection of invasive ductal carcinoma (IDC) is a key factor in its treatment. Computer-aided approaches have great potential to improve diagnostic accuracy. In this paper, we propose a deep learning-based method for the automatic classification of IDC in whole slide images (WSI) of breast cancer. Furthermore, we evaluate different types of deep neural network training, namely training from scratch and transfer learning, for classifying IDC.

Methods: In total, 277,524 image patches of 50×50 pixels, extracted from the original images, were used for model training. In the first approach, we trained a simple convolutional neural network (referred to as the baseline model) on these images. In the second approach, we used the pre-trained VGG-16 CNN model, via feature extraction and fine-tuning, to classify the breast pathology images.

Results: Our baseline model achieved better F-measure and accuracy (83%, 85%) for the automatic classification of IDC than the original paper on this dataset, and a result comparable to a newer study that introduced an accepted-rejected pooling layer. Transfer learning via feature extraction also yielded better results (81%, 81%) than handcrafted features, and achieved classification performance comparable to the baseline model in far less training time.

Conclusion: The experimental results demonstrate that deep learning approaches yielded better results than handcrafted features. Moreover, transfer learning in histopathology image analysis yielded significant results compared with training from scratch, in much less time.

Keywords: Invasive ductal carcinoma, Breast cancer, Artificial intelligence, Convolutional neural networks, Deep learning, Digital pathology

What is “already known” in this topic:

Early diagnosis and detection of invasive ductal carcinoma (IDC) is an important factor in its treatment. Computer-aided approaches have great potential to improve diagnostic accuracy. Several approaches to the automatic classification of IDC already exist, some based on deep learning and others on handcrafted features.

What this article adds:

In this study, we used a simple deep learning model and a pre-trained CNN model to classify invasive ductal carcinoma in digital pathology images. We also addressed data insufficiency in the medical field and introduced transfer learning approaches to overcome this issue.

Introduction

Breast cancer is one of the leading causes of death in women. In the United States, approximately 252,710 new cases of breast cancer were diagnosed in 2017, and 40,610 of those patients died. Among the types of breast cancer, invasive ductal carcinoma (IDC) is the most common cause of death in women (1). In this cancer type, cancer cells originate from duct cells and spread to the surrounding tissue. Over time, cancer cells may spread through the lymph system or bloodstream to other organs or bones (2). Early diagnosis of breast cancer is one of the most important factors in determining the treatment stages for women with malignant tumors. Diagnosis of breast cancer generally involves initial diagnosis through manual examination or regular screening through mammography and ultrasound imaging (3). Among these methods, biopsy is the gold standard for breast tissue examination: it allows pathologists to access the microscopic structures of the breast tissue (4). Histology images allow the differentiation between healthy, benign, and malignant tissues. However, the analysis of histopathological images is a time-consuming and challenging task, because it involves a pathologist scanning large tissue images, requires professional knowledge, and its outcome may therefore be affected by the pathologist's level of experience. Computer-aided diagnosis (CAD) systems have therefore been developed to help pathologists and physicians detect cancer and abnormalities early, and they play a major role in the early diagnosis and prognosis of breast cancer (5).

Developing CAD systems is challenging, because histopathological images of breast tissue are high-resolution images that contain rich geometric features, and because variability between and within classes makes classification difficult. In addition, inadequate feature selection methods for pathological images of breast cancer can make feature extraction difficult. Traditional feature extraction methods, such as the scale-invariant feature transform (SIFT) (6) and the gray-level co-occurrence matrix (GLCM) (7), all rely on labeled data. In recent years, artificial intelligence has been applied in a variety of medical fields (8-12). Machine learning algorithms are the most popular subset of artificial intelligence for image interpretation that relies on extracted features (13). Deep learning is a branch of machine learning in which artificial neural networks, algorithms inspired by the human brain, learn from large amounts of data (14). Deep learning algorithms try to learn high-level features from the data and to make accurate predictions on their own (15), thereby reducing the need for hand-crafted features for every problem. Among neural networks, the convolutional neural network (ConvNet or CNN) is one of the main architectures for image recognition and image classification (16). Today, digital pathology is one of the most prominent examples of big data in the medical field. Such approaches have proven useful in many studies, as they have the potential to reduce the tedium of providing accurate quantification and can act as a second reader, helping to reduce inter-reader variability among pathologists (17-19). Artificial intelligence, and especially deep learning, provides great opportunities to extract and learn hidden features that cannot be assessed in routine laboratory tests.
However, training CNNs from scratch requires large amounts of labeled data, which is a common issue in the medical field (9). Approaches such as transfer learning exist to overcome this problem (20, 21). In this study, we propose a deep learning-based method for classifying invasive ductal carcinoma into positive and negative categories from histopathology images. We also address data insufficiency in the medical field and, to overcome this issue, apply transfer learning in two different ways: first, a pre-trained CNN model was used to extract features automatically; second, the same CNN model was fine-tuned to classify the histopathology images. After training, each model was tested on the test image set separately and the results were compared.

Related Works: The application of deep neural networks to breast cancer detection and classification has been widely studied. Liu Y et al. used convolutional neural networks for the detection and segmentation of metastatic lesions in gigapixel breast histology images; detection accuracy was 92.4%, roughly a 10-percentage-point improvement over the previous 82.7% (22). In another study, by Reiazi et al., a CNN model named Faster R-CNN was used for lesion detection in mammography images. They used 102 labeled mammography images for training; the overall precision in lesion detection was 0.2 (21). In other work, Araújo T et al. detected and classified cancer cells in histology images. They used a 14-layer convolutional neural network to detect cancer cells and then classify them into carcinoma and non-carcinoma categories in 274 histology images. The sensitivity of the network was 95%, and they ultimately reached 83% accuracy (23). Romo‐Bucheli D et al. used a 9-layer CNN to classify cancer cells in breast histology images. They trained the CNN in a supervised manner on 741 histology images; the model accuracy was 83.19% on the test set (24). Romano AM et al. developed a deep learning architecture to classify IDC images as malignant or benign. They introduced a new pooling layer, called accepted-rejected pooling, with a pool size of 2×2, and used 277,524 image patches in total. After training, the test set was run through the model, with balanced accuracy and F1-measure used as performance measures. Balanced accuracy was 85.41% and the F1-score was 0.8528; compared against existing approaches, their F1-score was better and their balanced accuracy slightly higher, with improvements of 11.51% and 0.86%, respectively (25). In another study, by Xie J et al., supervised and unsupervised deep learning models were used to classify invasive ductal carcinoma in histopathology images. They used adapted Inception_V3 and Inception_ResNet_V2 architectures as a transfer learning approach for binary and multi-class breast cancer classification. They also constructed a new autoencoder network to transform the features extracted by Inception_ResNet_V2 for image analysis. Their results show that the proposed autoencoder yielded better results than feature extraction relying on Inception_ResNet_V2 alone and that, overall, the Inception_ResNet_V2 network provides a new transfer learning tool for histopathological image analysis (26). Rakhlin A et al. utilized several deep neural network architectures and a gradient boosted trees classifier to classify histopathology images into four classes. ResNet-50, InceptionV3, and VGG-16 networks were used as feature extractors, with the fully connected layers removed from each model. For ResNet-50 and InceptionV3, a global average pooling layer converted the 2048-channel feature map into a one-dimensional feature vector; for VGG-16, they applied the same global average pooling to four internal convolutional layers with 128, 256, 512, and 512 channels, and concatenated all channels into a vector of length 1408. For binary classification, they reported an accuracy of 93.8%, an AUC of 97.3%, and sensitivity/specificity of 96.5%/88.0% (27). Cruz-Roa A et al. developed a CNN framework for the visual analysis of tumor regions for diagnostic support. They used a 3-layer CNN architecture consisting of two convolution-and-pooling layers, with the remaining layers fully connected.

They used 113 whole slide images (WSI) for training and 49 WSI for independent testing. After training, their model yielded an F1-measure of 71.80% and a balanced accuracy of 84.23%. They also compared their results with an approach using handcrafted features and machine learning classification for breast cancer (66.64% F1-measure and 77.24% balanced accuracy) (28). Brancati N et al. proposed a method named SEF, based on a residual CNN and a Softmax classifier, to detect and classify IDC while avoiding handcrafted pathology features. They compared results against different CNN models (UNet, ResNet, and FusionNet) on the same dataset. Their results show that autoencoders extract features that are not useful for classification, since they learn image reconstruction; their own model yielded an improvement of 5.06% in F-measure for the detection task and of 1.09% in accuracy for the classification task (29). Alom MZ et al. proposed the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) for breast cancer classification. IRRCNN is a combination of the Inception network (Inception-V4), the residual network (ResNet), and the recurrent convolutional neural network (RCNN). They used two publicly available histopathology image datasets for training and testing. After training, model performance was evaluated based on sensitivity, area under the curve (AUC), ROC curve, and global accuracy; they also considered criteria such as magnification factor, resized samples, and augmentation. They reported 99.05% accuracy for binary classification and 98.59% accuracy for multi-class breast cancer classification (30).

Methods

Dataset

The publicly available image dataset hosted on the Kaggle website (http://Kaggle.com) was used in this study. The original dataset comes from the studies in (28) and (31). It consists of high-resolution images (2040 × 1536 pixels) from 162 women diagnosed with IDC at the Hospital of the University of Pennsylvania and The Cancer Institute of New Jersey. All slides were digitized with the same scanner at 40× magnification (0.25 µm/pixel resolution). The original images were divided into smaller 50 × 50-pixel patches, yielding a total of 277,524 image patches. Each patch was labeled as either “IDC positive” or “IDC negative” via manual delineation by an expert pathologist: 78,786 patches belong to the “IDC (+)” category and 198,738 to the “IDC (-)” category. Figure 1 shows example patches from these two classes.

Fig. 1

An example of pathology images with labels

Preprocessing

All pixel values were rescaled to the range (0, 1) to reduce computational cost and speed up training. Standardizing the data in this way, toward a mean (µ) of 0 and a standard deviation (σ) of 1, allows the optimizer to converge faster. 20% of the total images were set aside for the test phase and the rest were used for training; we also kept a validation set in order to check for overfitting. Another issue in this dataset is the imbalance between classes: the benign class contains about 2.5 times as many samples as the malignant class, which can have a detrimental impact on CNN performance (32). According to (32), oversampling emerged as the dominant approach in almost all analyzed scenarios. The Synthetic Minority Over-sampling Technique (SMOTE) was used as the oversampling approach; its basic idea is to generate synthetic data using the nearest-neighbor method. This was done using the imblearn library (https://imbalanced-learn.readthedocs.io/), as sketched below.
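A minimal sketch of this oversampling step with imbalanced-learn's SMOTE. The variable names (x_train, y_train) and the random seed are illustrative; the flattening and reshaping reflect that SMOTE operates on 2-D feature matrices rather than image tensors:

import numpy as np
from imblearn.over_sampling import SMOTE

# SMOTE works on 2-D feature matrices, so flatten each 50x50x3 patch
n, h, w, c = x_train.shape
x_flat = x_train.reshape(n, -1)

# Generate synthetic minority-class samples via nearest neighbors
x_res, y_res = SMOTE(random_state=42).fit_resample(x_flat, y_train)

# Restore the image shape for the CNN and verify the class balance
x_res = x_res.reshape(-1, h, w, c)
print(np.bincount(y_res))  # both classes now have equal counts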

Figure 2 shows the number of images in both classes before (left) and after (right) oversampling. We also shuffled the images to remove any patterns, if present; shuffling the data reduces variance and helps ensure that models remain general and overfit less (33).


Fig. 2

Class distribution. Left: Before oversampling. Right: After oversampling

Deep Convolutional Neural Network

We used two different approaches to train our classifier model: training from scratch and transfer learning. In training from scratch, a custom model is trained with random initialization; this requires a lot of data, is computationally expensive, and needs a new model architecture to be designed. In transfer learning, on the other hand, a pre-trained model that has already learned basic features such as edges, corners, and lines is reused for the new task.

Training from scratch

As shown in Figure 3, the network architecture (referred to as the baseline model) consists of 4 convolutional layers with the ReLU activation function, except for the last layer. Dropout layers were used to reduce overfitting. The input layer consists of 32 filters with a kernel size of 3 × 3 and an input size of (50×50×3), where 3 refers to the RGB color channels. Max-pooling layers were used to reduce the size of the feature maps. Finally, a flatten layer converts the feature map into a 1D array for the following fully connected layer with two neurons and a Softmax activation function (Fig. 3).

Fig. 3

Base Model Architecture
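A minimal Keras sketch of this baseline architecture, assuming a recent TensorFlow/Keras. The first convolutional layer (32 filters, 3×3), max pooling, dropout, flatten, and the 2-neuron Softmax output follow the description above; the filter counts of the remaining convolutional layers and the dropout rates are assumptions, since they are not specified here:

from tensorflow.keras import layers, models

baseline = models.Sequential([
    # 32 filters of 3x3 on 50x50 RGB patches, as described above
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(50, 50, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),   # assumed filter count
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),                           # assumed dropout rate
    layers.Conv2D(128, (3, 3), activation="relu"),  # assumed filter count
    layers.Conv2D(256, (3, 3), activation="relu"),  # assumed filter count
    layers.Dropout(0.25),                           # assumed dropout rate
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),          # two-class output
])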

The baseline model was trained end-to-end on the training images using the Adam optimizer (34) (learning rate = 0.001), with binary cross-entropy as the loss function, for 20 epochs with a batch size of 32. Several data augmentation methods were used to artificially increase the size of the dataset; this helps reduce overfitting and improves the model's generalization during training. The settings used for image augmentation are listed in Table 1.

Table 1

Settings for image Augmentation

Method            Setting
Rotation Range    40
Width Shift       0.2
Height Shift      0.2
Shearing          0.2
Zoom Range        0.2


The rotation range specifies, in degrees, the range within which images are randomly rotated; here images are rotated by up to ±40 degrees. The width shift and height shift ranges randomly translate images horizontally or vertically by up to 20% of the image width or height. Furthermore, a shearing range of 0.2 was used to shear the image in the counterclockwise direction, and a zoom range of 0.2 randomly zooms inside the images. These settings, combined with the training configuration above, are sketched below.
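The Table 1 settings expressed with Keras' ImageDataGenerator, together with compile and fit calls matching the training configuration above; the generator class and the train/validation variable names are assumptions, as they are not named in the paper:

from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical

datagen = ImageDataGenerator(
    rotation_range=40,       # random rotation within +/-40 degrees
    width_shift_range=0.2,   # horizontal shift, up to 20% of width
    height_shift_range=0.2,  # vertical shift, up to 20% of height
    shear_range=0.2,         # counterclockwise shear
    zoom_range=0.2,          # random zoom inside the image
)

# One-hot labels for the 2-neuron softmax output
y_res_cat, y_val_cat = to_categorical(y_res, 2), to_categorical(y_val, 2)

baseline.compile(optimizer=optimizers.Adam(learning_rate=0.001),
                 loss="binary_crossentropy", metrics=["accuracy"])
baseline.fit(datagen.flow(x_res, y_res_cat, batch_size=32),
             validation_data=(x_val, y_val_cat), epochs=20)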

Transfer Learning

Transfer learning is a deep learning technique in which a model trained on one task is reused: the knowledge gained while solving one problem is applied to a different but related problem. In general, there are two types of transfer learning in the context of deep learning:

  • Transfer learning via feature extraction
  • Transfer learning via fine-tuning

In deep learning models, different layers learn different features, forming a hierarchical, layered feature representation. These layers are connected to a final layer (usually a dense layer) to produce the output. In transfer learning via feature extraction, a pre-trained CNN model is used as a feature extractor, with a customized final layer. Fine-tuning, on the other hand, is a more involved technique in which some of the earlier layers are selectively retrained and the last layer is replaced. VGG-16 was used as the pre-trained network for feature extraction and for further fine-tuning (Fig. 4). This model was created by the Visual Geometry Group at the University of Oxford, which specializes in deep convolutional network architectures for large-scale visual recognition, and it is pre-trained on the ImageNet dataset (35).

Fig. 4

VGG-16 model architecture

Transfer Learning via Feature Extraction: In this method, all layers were frozen except for the last three fully connected layers, which were replaced with two fully connected layers of 512 and 2 neurons, respectively. A dropout layer was also used to avoid overfitting (Fig. 5).

Fig. 5

Transfer learning via feature extraction using VGG-16 CNN model

This model was trained on 81% of the training set for 50 epochs with a batch size of 32, and validated on the remaining images. Categorical cross-entropy was used as the loss function, and the network was trained end-to-end using the Adam optimizer with standard parameters (learning rate = 0.001), as sketched below.
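A minimal sketch of this feature-extraction setup, assuming Keras' built-in VGG16 with ImageNet weights; the dropout rate is an assumption, since it is not stated above:

from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

# Load VGG-16 without its fully connected head and freeze it
base = VGG16(weights="imagenet", include_top=False, input_shape=(50, 50, 3))
base.trainable = False

feature_extractor = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(512, activation="relu"),   # new 512-neuron FC layer
    layers.Dropout(0.5),                    # assumed dropout rate
    layers.Dense(2, activation="softmax"),  # new 2-neuron output layer
])
feature_extractor.compile(
    optimizer=optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy", metrics=["accuracy"])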

Transfer Learning via Fine-Tuning

We fine-tuned the VGG-16 architecture by freezing all layers except the last two convolutional blocks, and modified the fully connected head with 512 and 2 neurons, respectively, for classification (Fig. 6). This model was trained for 50 epochs with a batch size of 32, using categorical cross-entropy as the loss function and the Adam optimizer (learning rate = 0.001).

Fig. 6

Fine-Tune VGG-16 model for classification
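A minimal sketch of this fine-tuning variant: the last two VGG-16 convolutional blocks are unfrozen and retrained together with the new classifier head. The block names follow Keras' VGG16 layer naming; the rest mirrors the feature-extraction sketch:

from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(50, 50, 3))
for layer in base.layers:
    # Retrain only the last two convolutional blocks (block4, block5)
    layer.trainable = layer.name.startswith(("block4", "block5"))

fine_tuned = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
fine_tuned.compile(optimizer=optimizers.Adam(learning_rate=0.001),
                   loss="categorical_crossentropy", metrics=["accuracy"])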

Performance Evaluation

All models were evaluated on the test-set images using the F-measure (F1) and overall accuracy, according to equations (1) and (2), respectively. The classification results were derived from the confusion matrix obtained by applying each trained model to the test set. Furthermore, we report the training time for each model, since it is a key factor in practical deep learning applications.

F1 = (2 × Precision × Sensitivity) / (Precision + Sensitivity)   (1)

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (2)
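A small sketch of how these metrics and the confusion-matrix entries can be computed with scikit-learn; the model and test-set variable names are illustrative:

from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Predicted and true class indices (labels assumed one-hot encoded)
y_pred = feature_extractor.predict(x_test).argmax(axis=1)
y_true = y_test.argmax(axis=1)

print("Accuracy:", accuracy_score(y_true, y_pred))   # eq. (2)
print("F1:", f1_score(y_true, y_pred))               # eq. (1)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()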

Implementation Details

The proposed deep learning approaches were implemented in the Python programming language. Keras (https://github.com/keras-team/keras) (version 2.2.4) with the TensorFlow backend was used to implement the neural networks, and the Scikit-Learn (https://scikit-learn.org/) library (version 0.21.2) was used for statistical and machine learning analysis. The Matplotlib (https://matplotlib.org/) library (version 3.1.2) was used for plotting and visualization, Scikit-Image (https://scikit-image.org/) (version 0.15.0) as the image processing tool, and the Numpy (https://numpy.org/) package (version 1.16.4) for numerical computing. All models were trained on an NVIDIA® GTX 1080 GPU and an Intel Core i9 Xeon CPU with 128 GB RAM.

Results

This section presents our classification results on the histopathological test images from the dataset. After all models were trained, the test dataset was run through each model and the results were recorded.

Baseline Model

Figure 7 shows the model accuracy and loss decay over 20 epochs on the training and validation data. The highest accuracy achieved during validation was 0.85.

Fig. 7

Baseline model accuracy and loss curves on training and validation data, with prediction results on test-set images

Transfer Learning via Feature Extraction

The overall accuracy was 81% on the test set. Figure 8 shows the model's accuracy and loss curves, the confusion matrix, and the prediction results on the test-set images (Fig. 8).

Fig. 8

VGG-16 as a feature extractor for image classification. Left: model accuracy. Middle: model loss curve. Right: confusion matrix for model prediction on the test set

The extracted features were projected into a two-dimensional representation using t-SNE (Fig. 9). t-SNE is an efficient dimensionality-reduction algorithm that preserves the distances between samples (36).

Fig. 9

Extracted features by VGG-16 CNN model

As shown in Figure 9, the extracted features are well differentiated, which makes image classification an easier task. A sketch of this projection follows.
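A sketch of the t-SNE projection with scikit-learn and Matplotlib; here 'features' stands for the flattened VGG-16 activations (one row per patch) and 'labels' for the class indices, both illustrative names:

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Project the high-dimensional VGG-16 features down to 2-D
embedded = TSNE(n_components=2, random_state=42).fit_transform(features)

plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=2, cmap="coolwarm")
plt.title("t-SNE projection of VGG-16 features")
plt.show()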

Transfer Learning via Fine-Tuning

The highest accuracy achieved on the validation set was 0.51. Figure 10 shows the model's accuracy and loss decay on the training and validation sets, as well as the prediction results on the test set.

Fig. 10

Fine-tuned VGG-16 accuracy and loss during training and validation data

Table 2 provides the prediction results for all models, and Table 3 summarizes all experimental results. The highest values are shown in bold.

Table 2

Classification results for all CNN models on test dataset

Model                           True Positive (TP) (%)   False Positive (FP) (%)   True Negative (TN) (%)   False Negative (FN) (%)
Baseline model                  40.5                     6.45                      43.4                     9.7
VGG-16 as a feature extractor   40.6                     9.9                       40.3                     9.3
VGG-16 with fine-tuning         1.8                      0.03                      50                       48.7

Table 3

Summary of experimental results for all training approaches

Model                           Architecture   Training Time (min)   Training            F1-Measure   Precision   Accuracy
Baseline model                  CNN            225.3                 Scratch             0.83         0.86        0.85
VGG-16 as a feature extractor   CNN            21.6                  Transfer learning   0.81         0.81        0.81
VGG-16 with fine-tuning         CNN            200                   Transfer learning   0.35         0.74        0.51

In the original paper by Cruz et al., several state-of-the-art handcrafted-feature methods were applied to this dataset. Table 4 presents a quantitative comparison of our transfer learning via feature extraction with these handcrafted-feature methods.

Table 4

Quantitative comparison of handcrafted-feature methods with our transfer learning via feature extraction approach

Method                                                  Precision   F1       Accuracy
Fuzzy Color Histogram (40)                              0.7086      0.6753   0.7874
RGB Histogram (41)                                      0.7564      0.6664   0.7724
Gray Histogram (42)                                     0.7102      0.6031   0.7337
JPEG Coefficient Histogram (43)                         0.7570      0.5758   0.7126
MPEG7 Edge Histogram (44)                               0.7360      0.5485   0.6979
Haralick features (45)                                  0.6246      0.3915   0.6199
Local Binary Pattern Histogram (46)                     0.7575      0.3518   0.6048
Graph-based features (45)                               0.6184      0.3472   0.6009
HSV Color Histogram                                     0.7662      0.3446   0.6022
Our method (transfer learning via feature extraction)   0.81        0.81     0.81

Discussion

In this study, a CNN-based approach for the classification of histopathology breast cancer images was introduced, and we addressed data deficiency in the medical field, for which some studies suggest methods such as transfer learning (37-39). We examined two different types of training for breast cancer classification. In the first method, we trained a four-layer convolutional neural network on the training set; the model achieved 85% accuracy on the test set. Table 2 shows the quantitative results of this CNN classifier: among the test images, 40.5% were correctly classified as malignant (TP) and 43.4% were correctly classified as benign (TN), while 9.7% were classified as benign although they were malignant (FN) and 6.45% were classified as malignant although they were benign (FP). In the second method, we used transfer learning to classify the images. First, we used transfer learning via feature extraction, with the VGG-16 CNN model as the pre-trained feature extractor; after training, the overall accuracy on the test set was 81%, with TP = 40.6%, TN = 40.3%, FP = 9.9%, and FN = 9.3%. Second, a fine-tuned VGG-16 was trained and achieved 51% accuracy on the test images, with TP = 1.8%, TN = 50%, FN = 48.7%, and FP = 0.03%. According to Figures 7 and 10, the validation curves show some fluctuations, due to the small validation size and because, in these models, many of the weights were trained from scratch or retrained; in Figure 8, by contrast, the validation curve is approximately flat, since a pre-trained CNN model was used.

According to the results, transfer learning yielded significant results in comparison to training a model from scratch. Also, among the transfer learning approaches, feature extraction yielded better results than fine-tuning, and in roughly one tenth of the training time, because fine-tuning retrains some of the convolutional blocks.

Comparison against state-of-the-art handcrafted features

In order to evaluate the efficiency of our approach, we compared transfer learning via feature extraction against a set of handcrafted features for histopathology images. Table 4 shows the quantitative classification performance. Our transfer learning via feature extraction yielded the best overall performance in terms of both F-measure and accuracy (0.81, 0.81). The best handcrafted features were the Fuzzy Color Histogram (0.67, 0.78) and the RGB Histogram (0.66, 0.77). Our approach improved the F-measure and accuracy by 20.9% and 3.84%, respectively, over the best handcrafted features.

Comparison against state-of-the-art CNN model

We also evaluated the efficiency of our baseline model and the transfer learning approaches. Table 5 shows the performance of our model alongside different studies. The baseline model yielded better performance than the original paper on this dataset in terms of both accuracy and F-score (0.8562, 0.8350). Our model also yielded nearly equivalent performance compared with Romano et al. (25).

Table 5

F1-score and accuracy of our method compared with existing deep learning approaches

Method                       F1-Score   Accuracy
Original Paper (28)          0.7180     0.8423
Accept-Reject pooling (25)   0.8528     0.8541
AlexNet, Resize by (9)       0.7648     0.8468
Our Method                   0.8350     0.8562

Conclusion

In this study, we presented two deep learning approaches to classify IDC in histopathology images. According to the results, our baseline model yielded a significant improvement over the original paper on this dataset, and transfer learning via feature extraction yielded better results than handcrafted features. We also compared training from scratch with transfer learning: transfer learning via feature extraction achieved equivalent, and in some respects better, performance than training from scratch in roughly one tenth of the training time. Also, among the transfer learning methods, feature extraction using pre-trained networks produced better results than fine-tuning pre-trained models.

Conflict of Interests

The authors declare that they have no competing interests.

Notes

Cite this article as: Abdolahi M, Salehi M, Shokatian I, Reiazi R. Artificial intelligence in automatic classification of invasive ductal carcinoma breast cancer in digital pathology images. Med J Islam Repub Iran. 2020 (20 Oct);34:140. https://doi.org/10.34171/mjiri.34.140

Footnotes

Conflicts of Interest: None declared

Funding: This study was supported by Bushehr University of Medical Sciences foundation grant no. 1103.

References

1. DeSantis C, Ma J, Bryan L, Jemal A. Breast cancer statistics, 2013. CA Cancer J Clin. 2014;64(1):52-62.
2. Borst MJ, Ingold J. Metastatic patterns of invasive lobular versus invasive ductal carcinoma of the breast. Surgery. 1993;114(4):637-42.
3. Unger-Saldaña K. Challenges to the early diagnosis and treatment of breast cancer in developing countries. World J Clin Oncol. 2014;5(3):465.
4. Rehnberg G, Absetz P, Aro A. Counselling: women's satisfaction with information at breast biopsy in breast cancer screening. Patient Educ Couns. 2001;42(1):1-8.
5. Castellino RA. Computer aided detection (CAD): an overview. Cancer Imaging. 2005;5(1):17.
6. Lowe DG. Object recognition from local scale-invariant features. Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece. 1999;2:1150-1157.
7. Haralick RM. Statistical and structural approaches to texture. Proc IEEE. 1979;67(5):786-804.
8. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60-88.
9. Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform. 2018;19(6):1236-46.
10. Sidey-Gibbons JA, Sidey-Gibbons CJ. Machine learning in medicine: a practical introduction. BMC Med Res Methodol. 2019;19(1):64.
11. Chen D, Liu S, Kingsbury P, Sohn S, Storlie CB, Habermann EB, et al. Deep learning and alternative learning strategies for retrospective real-world clinical data. NPJ Digit Med. 2019;2(1):1-5.
12. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271-e97.
13. Fatima M, Pasha M. Survey of machine learning algorithms for disease diagnostic. J Intell Learn Syst Appl. 2017;9(1):1-16.
14. Yang J. Deep learning. 2016.
15. Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221-248.
16. Lan K, Wang DT, Fong S, Liu LS, Wong KK, Dey N. A survey of data mining and deep learning in bioinformatics. J Med Syst. 2018;42(8):139.
17. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, Yener B. Histopathological image analysis: a review. IEEE Rev Biomed Eng. 2009;2:147-71.
18. Veta M, Pluim JP, Van Diest PJ, Viergever MA. Breast cancer histopathology image analysis: a review. IEEE Trans Biomed Eng. 2014;61(5):1400-1411.
19. Bhargava R, Madabhushi A. Emerging themes in image informatics and molecular analysis for digital pathology. Annu Rev Biomed Eng. 2016;18:387-412.
20. Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys. 2019;29(2):102-127.
21. Chen S, Ma K, Zheng Y. Med3D: transfer learning for 3D medical image analysis. 2019.
22. Liu Y, Gadepalli K, Norouzi M, Dahl GE, Kohlberger T, Boyko A, et al. Detecting cancer metastases on gigapixel pathology images. IEEE Trans Med Imaging. 2019;38(8):1948-1958.
23. Belsare A, Mushrif M, Pangarkar M, Meshram N. Classification of breast cancer histopathology images using texture feature analysis. PLoS One. 2017;12(6):e0177544.
24. Romo‐Bucheli D, Janowczyk A, Gilmore H, Romero E, Madabhushi A. A deep learning based strategy for identifying and associating mitotic activity with gene expression derived risk categories in estrogen receptor positive breast cancers. Cytometry A. 2017;91(6):566-573.
25. Romano AM, Hernandez AA. Enhanced deep learning approach for predicting invasive ductal carcinoma from histopathology images. 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD); 2019: IEEE.
26. Xie J, Liu R, Luttrell J IV, Zhang C. Deep learning based analysis of histopathological images of breast cancer. Front Genet. 2019;10:80.
27. Rakhlin A, Shvets A, Iglovikov V, Kalinin AA. Deep convolutional neural networks for breast cancer histology image analysis. International Conference on Image Analysis and Recognition (ICIAR); 2018.
28. Cruz-Roa A, Basavanhally A, González F, Gilmore H, Feldman M, Ganesan S, et al. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. Sci Rep. 2017;7:46450.
29. Brancati N, De Pietro G, Frucci M, Riccio D. A deep learning approach for breast invasive ductal carcinoma detection and lymphoma multi-classification in histological images. IEEE Access. 2019;7:44709-44720.
30. Alom MZ, Yakopcic C, Nasrin MS, Taha TM, Asari VK. Breast cancer classification from histopathological images with inception recurrent residual convolutional neural network. BioMed Res Int. 2018;2018.
31. Janowczyk A, Madabhushi A. Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases. J Pathol Inform. 2016;7:29.
32. Buda M, Maki A, Mazurowski MA. A systematic study of the class imbalance problem in convolutional neural networks. Neural Netw. 2018;106:249-259.
33. Inoue T, Vinayavekhin P, Wang S, Wood D, Munawar A, Ko BJ, et al. Shuffling and mixing data augmentation for environmental sound classification. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019); 2019. p. 109-113.
34. Kingma DP, Ba J. Adam: a method for stochastic optimization. 3rd International Conference on Learning Representations (ICLR), San Diego; 2015. arXiv:1412.6980.
35. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR); 2015. arXiv:1409.1556.
36. van der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res. 2008;9:2579-2605.
37. Lee JG, Jun S, Cho YW, Lee H, Kim GB, Seo JB, et al. Deep learning in medical imaging: general overview. Korean J Radiol. 2017;18(4):570-84.
38. Ribeiro E, Uhl A, Wimmer G, Häfner M. Exploring deep learning and transfer learning for colonic polyp classification. Comput Math Methods Med. 2016;2016.
39. McBee MP, Awan OA, Colucci AT, Ghobadi CW, Kadom N, Kansagra AP, et al. Deep learning in radiology. Acad Radiol. 2018;25(11):1472-1480.
40. Han J, Ma KK. Fuzzy color histogram and its use in color image retrieval. IEEE Trans Image Process. 2002;11(8):944-52.
41. Deselaers T, Keysers D, Ney H. Features for image retrieval: a quantitative comparison. Joint Pattern Recognition Symposium; 2004: Springer.
42. Rueda JCC. A prototype system to archive and retrieve histopathology images by content. 2008.
43. Lux M, Chatzichristofis SA. LIRE: Lucene Image Retrieval: an extensible Java CBIR library. Proceedings of the 16th ACM International Conference on Multimedia; 2008.
44. Messing DS, Van Beek P, Errico JH. The MPEG-7 colour structure descriptor: image description using colour and local spatial information. Proceedings of the 2001 International Conference on Image Processing; 2001: IEEE.
45. Basavanhally A, Ganesan S, Feldman M, Shih N, Mies C, Tomaszewski J, et al. Multi-field-of-view framework for distinguishing tumor grade in ER+ breast cancer from entire histopathology slides. IEEE Trans Biomed Eng. 2013;60:2089-2099.
46. Ahonen T, Hadid A, Pietikainen M. Face description with local binary patterns: application to face recognition. IEEE Trans Pattern Anal Mach Intell. 2006;28:2037-2041.
