Publication: Artificial Intelligence (AI) in Pathology – A Summary and Challenges Part 2

The first part of “Artificial Intelligence (AI) in Pathology – A Summary and Challenges” is here!

We continue with…

 

  4. AI research in pathology

In this section we cover research on the origins of image analysis, the computational pathologist, machine learning in pathology, digital pathology, convolutional neural networks in pathology, and other AI applications in cancer.

 

4.1 Origins of image analysis

Meijer et al. summarized the origins of image analysis in clinical pathology as it relates to routine diagnostic cytopathology, histopathology, and research. They distinguished three areas of image analysis, namely: a) evaluating morphological features of tissues/cells/nuclei/nucleoli, b) counting tissue/cell constituents, and c) cytometry and configuration recognition. They further discussed the historical significance, in relation to image analysis, of morphometry (the quantitative description of geometric features of structures of any dimension), planimetry (the measurement of geometric features of structures in two dimensions), stereology (quantitative information about geometric features of structures obtained with a test system of lower dimension than the structure itself), and object-counting techniques. Cytometry and DNA cytometry techniques using visible light, coupled with powerful computers, have allowed the development of systems for automatic cell classification based on pattern recognition.31

 

4.2 Computational pathologist

Beck et al. developed an ML-based method for automatically analyzing cancer images and predicting prognosis, called the Computational Pathologist (C-Path). Their image-processing pipeline performed an automated, hierarchical scene segmentation that generated thousands of measurements, comprising standard morphometric descriptors of image objects as well as higher-level contextual, relational, and global image features. The pipeline comprised three phases. First, the processing steps included a) separating the tissue from its background, b) partitioning the image into smaller regions of consistent appearance known as superpixels, c) finding nuclei inside the superpixels, and d) constructing cytoplasmic and nuclear features within the superpixels. Next, for every superpixel they measured the size, shape, intensity, and texture of the superpixel and its neighbors. Then, to create more biologically meaningful features, they categorized superpixels as either epithelium or stroma using an ML approach based on L1-regularized logistic regression: they hand-annotated superpixels from 158 images and used them to train the classifier. The resulting classifier, built from 31 features, achieved a classification accuracy of 89% on held-out data. Using a series of relational features, the authors then produced a set of 6,642 features per image. They built the prognostic model by predicting survival from images of patients who were alive 5 years after surgery and of patients who had died within 5 years of surgery. After construction, the model was applied to a validation set of breast carcinoma images that were not part of model creation, to categorize patients as being at low or high risk of death at 5 years. Using a bootstrap analysis of the data set, the authors obtained a 95% confidence interval for the coefficient estimate of each of the 6,642 features.32
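As an illustration of the epithelium-versus-stroma step, the sketch below trains an L1-regularized logistic regression on superpixel feature vectors. It is a minimal sketch with hypothetical, randomly generated features and labels, not the authors' pipeline.

```python
# Minimal sketch of L1-regularized logistic regression for labelling
# superpixels as epithelium vs. stroma (illustrative only; the feature
# matrix and labels below are hypothetical placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: one row per hand-annotated superpixel; columns are
# size/shape/intensity/texture measurements of the superpixel and its neighbors.
X = rng.normal(size=(5000, 200))      # 5000 superpixels, 200 candidate features
y = rng.integers(0, 2, size=5000)     # 0 = stroma, 1 = epithelium

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The L1 penalty drives most coefficients to zero, yielding a sparse classifier
# (C-Path reported a classifier built from 31 retained features).
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_train, y_train)

n_selected = np.count_nonzero(clf.coef_)
print(f"features with non-zero coefficients: {n_selected}")
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```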

 

4.3 Machine learning in pathology

To achieve an optimal supervised machine learning model, Rashidi et al. proposed four questions: i) Does the endeavor address a real need? ii) Is enough data of the appropriate type, scrutinized by clinical specialists, available? iii) Which machine learning method should be utilized? iv) Are the trained ML models appropriate and general enough when applied to a new data set? The authors support a balanced approach that merges clinical trial data with real-world data to optimize ML training. They recommend that pathologists/laboratorians be sufficiently familiar with the available modeling options in order to make meaningful contributions within the team.33

Moxley-Wyles et al. introduce the basics of AI in pathology and discuss the discipline's future and challenges, with a focus on surgical pathology rather than cytology. The authors foresee AI's potential to derive novel biological insights by identifying subtle cellular changes, not recognized by pathologists on the haematoxylin and eosin (H&E) stain, that can predict specific mutations within the cell. Such AI-based predictions have been demonstrated for the Speckle-Type POZ Protein (SPOP) mutation in prostate cancer, BRAF in melanoma, and many mutations in lung adenocarcinoma. They observe that with robustly validated AI tools, second opinions from other pathologists could become unnecessary. The authors also expect AI to assist in predicting treatment response once regulatory approvals are in place. However, in their opinion the use of artificial intelligence in diagnostic practice remains rare because of its limitations, including regulatory and validation issues as well as high cost.34

Li et al. used fluorescence hyperspectral imaging to acquire spectral images for the early diagnosis of gastric cancer. They combined DL with spectral-spatial classification techniques, utilizing 120 fresh tissue specimens with diagnoses established by histopathological assessment. The method was used to detect and extract ‘spectral + spatial’ features to build an early cancer diagnosis model, which achieved an accuracy of 96.5%, specificity of 96%, and sensitivity of 96.3% across the non-precancerous lesion, precancerous lesion, and cancer groups.35
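To make the ‘spectral + spatial’ idea concrete, here is a minimal sketch of a small CNN that classifies hyperspectral patches into the three groups; the band count, patch size, and architecture are assumptions chosen for illustration, not the published model.

```python
# Minimal sketch of a 'spectral + spatial' three-class classifier for
# hyperspectral patches (hypothetical architecture and dimensions).
# Each patch is treated as a multi-band image whose channels are the spectral bands.
import torch
import torch.nn as nn

N_BANDS, PATCH = 60, 32          # assumed number of spectral bands and patch size
N_CLASSES = 3                    # non-precancerous, precancerous, cancer

model = nn.Sequential(
    nn.Conv2d(N_BANDS, 64, kernel_size=3, padding=1),  # spatial convolution over all bands
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, N_CLASSES),
)

dummy_batch = torch.randn(8, N_BANDS, PATCH, PATCH)     # 8 hypothetical patches
logits = model(dummy_batch)
print(logits.shape)              # torch.Size([8, 3])
```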

 

4.4 Digital Pathology (DP)

Hartman et al. enumerate how DP is more advantageous than traditional pathology based on a ‘physical slide on a physical microscope.’ This tool development has benefited from 24 publications based on public challenges in specific pathological diagnostic tasks. However, there is a real disconnect between the types of organs studied in these public challenges and the large volume of specimens typically seen in clinical practice. Even though dermatology and gastrointestinal pathology account for the majority of samples in pathology laboratories, there are so far no dermatology-focused public pathology challenges and only a few in the gastrointestinal field. This mismatch is a key reason for the limited wider adoption of AI in pathology.36

Niazi et al. developed a method for generating synthetic digital slides that can be used for educational purposes to train future pathologists. Their conditional generative adversarial network approach contains two main components, a generator and a discriminator: the generator creates fake stained images, while the discriminator tries to catch them. When three pathologists and two image analysts were asked to distinguish between 15 real and 15 synthetic images, their accuracy was only 47.3%, close to chance. The authors also see a role for AI in quality assurance, improving the pathologist’s performance through intelligent deep learning and AI tools.37
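For readers unfamiliar with the generator/discriminator interplay, the following is a minimal sketch of one conditional GAN training step in PyTorch; the architectures, image sizes, and conditioning input are simplified placeholders and not the cited network.

```python
# Minimal sketch of a conditional GAN training step for synthesizing stained
# image patches from a conditioning input (e.g., a label map). All dimensions
# and architectures are simplified placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, cond_ch=1, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),  # fake stained patch
        )
    def forward(self, cond):
        return self.net(cond)

class Discriminator(nn.Module):
    def __init__(self, cond_ch=1, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_ch + img_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))  # real/fake logit

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

cond = torch.rand(4, 1, 64, 64)            # hypothetical conditioning maps
real = torch.rand(4, 3, 64, 64) * 2 - 1    # hypothetical real stained patches in [-1, 1]

# Discriminator step: learn to separate real from generated patches.
fake = G(cond).detach()
loss_d = bce(D(cond, real), torch.ones(4, 1)) + bce(D(cond, fake), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
fake = G(cond)
loss_g = bce(D(cond, fake), torch.ones(4, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```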

The slide digitization process in DP can, in some instances, create ‘out-of-focus’ (OOF) artifacts. OOF is typically noticed only after careful review and then requires whole-slide rescanning, because manual screening for OOF that affects only parts of a slide is not feasible. Kohlberger et al. developed ConvFocus using a refined semi-synthetic OOF data generation process and evaluated it on seven slides, covering three different tissue types and three different stain types, that were then digitized. On 514 distinct regions representing 37.7K 35 μm × 35 μm image patches, and on 21 digitized “z-stack” whole-slide images containing known OOF patterns, ConvFocus achieved Spearman rank coefficients of 0.81 and 0.94 on two separate scanners and reproduced the expected OOF patterns from z-stack scanning. Importantly, the authors also observed a decrease in AI cancer-detection accuracy with increasing OOF.38
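As a small illustration of the evaluation metric, the sketch below computes a Spearman rank coefficient between known defocus levels and predicted focus grades; the numbers are made up for illustration and are not data from the study.

```python
# Minimal sketch of evaluating a patch-level focus-quality model against
# known defocus levels using Spearman rank correlation (illustrative values).
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: for each z-stack patch, the known defocus level (in z-steps)
# and the focus grade predicted by the model.
known_defocus   = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])
predicted_grade = np.array([0.1, 0.8, 1.7, 3.2, 3.9, 5.1, 5.8, 7.3, 7.9])

rho, p_value = spearmanr(known_defocus, predicted_grade)
print(f"Spearman rank coefficient: {rho:.2f} (p = {p_value:.3g})")
```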

Hartman et al. describe a US healthcare organization with more than 20 hospitals, 500 outpatient sites, and international affiliations comprising one hospital in Italy and a laboratory in China. The organization employs more than 100 pathologists, provides telepathology consultations for the Chinese laboratory, and has scanned over 40,000 slides for digital pathology. They conclude that successful DP implementation requires a combination of pre-imaging adjustments, integrated software, and post-imaging evaluations.39 Parwani observed that attaining DP in a laboratory requires a fundamental change in how tissue is handled and how the workflow is harmonized; it is more than making the workflow digital and acquiring WSI scanners. He enumerates key advantages of a digital workflow, including a reduction in errors and easier access to second opinions.40

In DP, colour variations in tissue appearance arise from disparities in tissue preparation, differences in stain reactivity between batches and manufacturers, user and/or protocol differences, and the use of scanners from diverse vendors. Khan et al. present a novel preprocessing approach to stain normalization of histopathology images using a representation derived from colour deconvolution, based on a non-linear mapping of a source image to a target image. Colour deconvolution recovers stain intensity values when the stain matrix, which describes how the colour is changed by the stain intensity, is available. Instead of using standard stain matrices, which may be unsuitable for a given image, they recommend a colour-based classifier incorporating a novel stain colour descriptor to compute an image-specific stain matrix.41
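To show what colour deconvolution with a stain matrix looks like in practice, here is a minimal sketch using commonly cited approximate H&E stain vectors rather than the image-specific matrix proposed in the cited work.

```python
# Minimal sketch of colour deconvolution for an H&E-stained RGB patch
# (approximate standard stain vectors, not an image-specific stain matrix).
import numpy as np

def colour_deconvolution(rgb, stain_matrix):
    """Convert an RGB image (uint8, HxWx3) to per-stain concentration maps."""
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)   # optical density
    pixels = od.reshape(-1, 3)
    # Solve OD = C @ M for concentrations C, with M the 2x3 stain matrix.
    concentrations = pixels @ np.linalg.pinv(stain_matrix)
    return concentrations.reshape(rgb.shape[0], rgb.shape[1], -1)

# Approximate unit stain vectors for haematoxylin and eosin (one row per stain).
he_matrix = np.array([[0.650, 0.704, 0.286],
                      [0.072, 0.990, 0.105]])
he_matrix /= np.linalg.norm(he_matrix, axis=1, keepdims=True)

patch = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # placeholder patch
stains = colour_deconvolution(patch, he_matrix)
print(stains.shape)   # (64, 64, 2): haematoxylin and eosin channels
```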

Janowczyk et al. developed a tutorial focusing on the critical components needed by DP experts to automate tasks such as grading or investigating clinical hypotheses such as prognosis prediction. The authors examined seven use cases, (i) nuclei segmentation, (ii) epithelium segmentation, (iii) tubule segmentation, (iv) lymphocyte detection, (v) mitosis detection, (vi) invasive ductal carcinoma (IDC) detection, and (vii) lymphoma classification, and demonstrated how DL can be applied to the most common image analysis tasks in DP using the open-source framework Caffe. They further grouped the seven tasks into three categories, detection (mitotic events, lymphocytes), segmentation (nuclei, epithelium, tubules), and tissue classification (IDC, lymphoma sub-types), since the approaches used are similar within each category. Evaluation on over 1,200 DP images produced the following results: (i) nuclei segmentation with an F-score of 0.83, (ii) epithelium segmentation with an F-score of 0.84, (iii) tubule segmentation with an F-score of 0.83, (iv) lymphocyte detection with an F-score of 0.90, (v) mitosis detection with an F-score of 0.53, (vi) invasive ductal carcinoma detection with an F-score of 0.77, and (vii) lymphoma classification with a classification accuracy of 0.97. In many of these cases the results compare favorably with those of modern feature-based classification approaches.42
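Since the F-score is the common yardstick across these tasks, here is a minimal sketch of computing it for a binary segmentation mask; the masks are randomly generated placeholders, not data from the tutorial.

```python
# Minimal sketch of the F-score used to evaluate detection/segmentation results.
import numpy as np

def f_score(pred, truth, beta=1.0):
    """F-score between two binary masks/label arrays."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

truth = np.random.randint(0, 2, size=(128, 128))   # placeholder ground-truth mask
pred = truth.copy()
flip = np.random.rand(128, 128) < 0.1              # corrupt 10% of pixels
pred[flip] = 1 - pred[flip]
print(f"F1 = {f_score(pred, truth):.2f}")
```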

Intraoperative frozen sections are useful for rapid pathology-based diagnosis to guide surgical decisions. However, the quality of frozen sections is lower than that of formalin-fixed, paraffin-embedded tissue,43 and they must be diagnosed within 20 min of receipt. In current clinical practice, thyroid nodule surgeries are the most common procedures requiring intraoperative consultation, yet with the traditional approach the sensitivity for diagnosing thyroid nodules from frozen sections is around 75%.44 Li et al. investigated for the first time whether a ‘patch-based diagnostic system’ built with DL methodology can diagnose thyroid nodules from intraoperative frozen sections. They framed the task as a three-category classification problem with benign, uncertain, and malignant classes. To reduce the overall time cost, they first applied tissue localization in the whole-slide diagnosis to locate thyroid tissue regions. Their rule-based system reflects the conservative manner of diagnosis used in practical thyroid frozen-section work. The computerized diagnostic technique demonstrated a precision of 96.7% for malignant and 95.3% for benign thyroid nodules, and 100% sensitivity for the uncertain category. Moreover, the methodology diagnosed a typical whole-slide image in less than one minute.45

Paeng’s presentation covers the limitations of pathology and the relative advantages of DP in reproducibility, accuracy, and workload reduction. Key applications of DP are a) tumor proliferation score prediction in breast resections, and b) Gleason score prediction in prostate biopsies. The author’s method scored best in the Tumor Proliferation Assessment Challenge. He achieved 83% core-level performance for Gleason score prediction and discussed how to handle gigapixel images, quality variation between slides, and ambiguous ground truth.46
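As a concrete illustration of how patch-level predictions can be rolled up into a conservative whole-slide call of the kind Li et al. describe above, here is a minimal sketch; the thresholds and decision rule are hypothetical, not the published rule set.

```python
# Minimal sketch of aggregating patch-level predictions into a conservative
# whole-slide call of benign / uncertain / malignant (hypothetical thresholds).
from collections import Counter

def slide_diagnosis(patch_labels, malignant_thresh=0.10, benign_thresh=0.90):
    """patch_labels: list of 'benign' / 'uncertain' / 'malignant' per tissue patch."""
    counts = Counter(patch_labels)
    total = sum(counts.values())
    malignant_frac = counts["malignant"] / total
    benign_frac = counts["benign"] / total
    if malignant_frac >= malignant_thresh:
        return "malignant"
    if benign_frac >= benign_thresh and counts["malignant"] == 0:
        return "benign"
    return "uncertain"   # conservative default defers to the pathologist

patches = ["benign"] * 180 + ["uncertain"] * 15 + ["malignant"] * 5   # placeholder slide
print(slide_diagnosis(patches))   # -> 'uncertain': any malignant patch blocks a benign call
```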

 

4.5 Convolutional Neural Network (CNN) in pathology

For histopathology images, Hegde et al. introduced SMILY (Similar Medical Images Like Yours), a DL-based reverse image search tool. The tool follows these steps: a) create a database of image patches together with a numerical representation of each patch’s image content, called the embedding; b) compute the embeddings using a CNN; c) when a query image is selected, SMILY computes its embedding and efficiently matches it against those in the database; and d) SMILY returns the k most similar patches, where k is customizable. To create the database the authors used images from TCGA, with the evaluations utilizing 127K image patches from 45 slides, while the query set included 22.5K patches from an additional 15 slides. Instead of being trained on large, pixel-annotated datasets of histopathology images, the CNN was trained on a dataset of images of people, animals, and man-made and natural objects. In the assessment of prostate specimens for retrieving similar histologic features, SMILY scored 62.1% on average, considerably higher than the random search score of 26.8% (p < 0.001). When queried across multiple organs, SMILY’s score for histologic feature match was also appreciably higher than random, 57.8% vs. 18.3% (p < 0.001). The authors argue that SMILY can be used as a general-purpose tool in multiple applications of diagnosis, research, and education, even though it will have lower accuracy than an application-specific tool.47
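The core retrieval step, comparing a query embedding against the database and returning the top-k matches, can be sketched as follows; the embedding vectors here are random placeholders rather than CNN outputs.

```python
# Minimal sketch of embedding-based reverse image search: compare a query
# patch embedding against a database of patch embeddings and return the k
# most similar patches (all vectors here are random placeholders).
import numpy as np

rng = np.random.default_rng(0)
db_embeddings = rng.normal(size=(127_000, 128))   # hypothetical database: 127K patches, 128-d embeddings
query = rng.normal(size=128)                      # embedding of the query patch

# Cosine similarity between the query and every database patch.
db_norm = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
similarity = db_norm @ q_norm

k = 5
top_k = np.argsort(similarity)[::-1][:k]          # indices of the k most similar patches
print(top_k, similarity[top_k])
```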

 

References

  31. Meijer G., Beliën J., van Diest P., et al. “Origins of … image analysis in clinical pathology.” J Clin Pathol. 1997;50(5):365-370. doi:10.1136/jcp.50.5.365.
  32. Beck A., Sangoi A., Leung S., et al. “Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival.” Science Translational Medicine. 2011;3(108):108ra113. doi:10.1126/scitranslmed.3002564.
  33. Rashidi H., Tran N., Betts E., et al. “Artificial Intelligence and Machine Learning in Pathology: The Present Landscape of Supervised Methods.” Acad Pathol. 2019;6:2374289519873088. Published 2019 Sep 3. doi:10.1177/2374289519873088.
  34. Moxley-Wyles B., Colling R., Verrill C. “Artificial intelligence in pathology: an overview.” Diagnostic Histopathology. 2020;26(11):513-520. ISSN 1756-2317. https://doi.org/10.1016/j.mpdhp.2020.08.004.
  35. Li Y., Deng L., Yang X., et al. “Early diagnosis of gastric cancer based on deep learning combined with the spectral-spatial classification method.” Biomed Opt Express. 2019;10(10):4999-5014. Published 2019 Sep 9. doi:10.1364/BOE.10.004999.
  36. Hartman D., Van Der Laak J., Gurcan M., et al. “Value of Public Challenges for the Development of Pathology Deep Learning Algorithms.” J Pathol Inform. 2020;11:7. doi:10.4103/jpi.jpi_64_19. PMID: 32318315; PMCID: PMC7147520.
  37. Niazi M., Parwani A., Gurcan M. “Digital pathology and artificial intelligence.” Lancet Oncol. 2019;20(5):e253-e261. doi:10.1016/S1470-2045(19)30154-8. PMID: 31044723.
  38. Kohlberger T., Liu Y., Moran M., et al. “Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection.” J Pathol Inform. 2019;10:39. Published 2019 Dec 12. doi:10.4103/jpi.jpi_11_19.
  39. Hartman D., Pantanowitz L., McHugh J., et al. “Enterprise Implementation of Digital Pathology: Feasibility, Challenges, and Opportunities.” J Digit Imaging. 2017;30(5):555-560. doi:10.1007/s10278-017-9946-9.
  40. Parwani A. “Next generation diagnostic pathology: use of digital pathology and artificial intelligence tools to augment a pathological diagnosis.” Diagn Pathol. 2019;14:138. https://doi.org/10.1186/s13000-019-0921-2.
  41. Khan A., Rajpoot N., Treanor D., et al. “A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution.” IEEE Trans Biomed Eng. 2014;61(6):1729-1738. doi:10.1109/TBME.2014.2303294. PMID: 24845283.
  42. Janowczyk A., Madabhushi A. “Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases.” J Pathol Inform. 2016;7:29. Published 2016 Jul 26. doi:10.4103/2153-3539.186902.
  43. Novis D., Gephardt G., Zarbo R. “Interinstitutional comparison of frozen section consultation in small hospitals: a College of American Pathologists Q-Probes study of 18532 frozen section consultation diagnoses in 233 small hospitals.” Arch Pathol Lab Med. 1996;120(12):1087. https://search.proquest.com/openview/72a14d7423df2f7ce34028c1b4e1f74f/1?pq-origsite=gscholar&cbl=42082.
  44. Kahmke R., Lee W., Puscas L., et al. “Utility of intraoperative frozen sections during thyroid surgery.” Int J Otolaryngol. 2013. doi:10.1155/2013/496138.
  45. Li Y., Chen P., Li Z., et al. “Rule-based automatic diagnosis of thyroid nodules from intraoperative frozen sections using deep learning.” Artificial Intelligence in Medicine. 2020;108:101918. ISSN 0933-3657. https://doi.org/10.1016/j.artmed.2020.101918.
  46. Paeng K. “Artificial Intelligence for Digital Pathology” [PowerPoint presentation]. GPU Technology Conference, San Jose, CA, USA, 2017. https://on-demand.gputechconf.com/gtc/2017/presentation/s7677-paeng-kyunghyun-artificial-intelligence-for-digital-pathology.pdf.
  47. Hegde N., Hipp J.D., Liu Y., et al. “Similar image search for histopathology: SMILY.” npj Digit. Med. 2019;2:56. https://doi.org/10.1038/s41746-019-0131-z.


Summary for: Artificial Intelligence (AI) in Pathology – A Summary and Challenges Part 2
First publisher:

Global Journal of Medical Research

Date published: 02/27/2021

Author

Designation: MBBS
