• Title/Summary/Keyword: Histopathology images

Search results: 24

User Interface Application for Cancer Classification using Histopathology Images

  • Naeem, Tayyaba;Qamar, Shamweel;Park, Peom
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.17 no.2
    • /
    • pp.91-97
    • /
    • 2021
  • A user interface for a cancer classification system is a software application with clinician-friendly tools and functions for diagnosing cancer from pathology images. Pathology has evolved from manual diagnosis to computer-aided diagnosis with the help of artificial intelligence tools and algorithms. In this paper, we explain each block of the project life cycle for implementing automated breast cancer classification software that uses AI and machine learning algorithms to classify normal and invasive breast histology images. The system was designed to help pathologists diagnose breast cancer automatically and efficiently. To design the classification model, Hematoxylin and Eosin (H&E) stained breast histology images were obtained from the ICIAR Breast Cancer challenge. These images were stain-normalized to minimize errors that can occur during model training due to variations in pathological staining. The normalized dataset was fed into ResNet-34 for the classification of normal and invasive breast cancer images. ResNet-34 achieved 94% accuracy, a 93% F-score, 95% recall, and 91% precision.
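
As a rough illustration of the stain-normalization step described above, here is a minimal sketch that matches each color channel's mean and standard deviation to a reference slide. This is a simplified, Reinhard-style approach; the paper does not state which normalization algorithm it actually uses, and the function name and target statistics below are illustrative only.

```python
import numpy as np

def normalize_stain(image, target_mean, target_std):
    """Match each channel's mean and std to reference statistics.

    A simplified, Reinhard-style sketch; the paper does not state
    which stain-normalization algorithm it uses."""
    image = image.astype(np.float64)
    out = np.empty_like(image)
    for c in range(image.shape[-1]):
        ch = image[..., c]
        std = ch.std() or 1.0  # guard against a constant channel
        out[..., c] = (ch - ch.mean()) / std * target_std[c] + target_mean[c]
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32, 3)).astype(np.uint8)
normalized = normalize_stain(patch, target_mean=(180, 120, 160),
                             target_std=(20, 25, 15))
```

After normalization, every patch shares the reference color statistics, which reduces stain-related variation before the images reach the classifier.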

Prostate MR and Pathology Image Fusion through Image Correction and Multi-stage Registration

  • Jung, Ju-Lip;Jo, Hyun-Hee;Hong, Helen
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.700-704
    • /
    • 2009
  • In this paper, we propose a method for fusing MR and histopathology images of the prostate using image correction and multi-stage registration. Our method consists of four steps. First, the intensity of the prostate bleeding area on the T2-weighted MR image is substituted with that on the T1-weighted MR image, and two or four tissue sections of the prostate in the histopathology image are combined into a single prostate image by manual stitching. Second, registration is performed to find the affine transformation that optimizes mutual information between the MR and histopathology images. Third, the result of affine registration is deformed by TPS warping. Finally, the aligned images are visualized by intensity intermixing. Experimental results show that the prostate tumor lesion can be properly located and clearly visualized within MR images for tissue characterization comparison, and that the registration error between the T2-weighted MR and histopathology images was 0.0815 mm.
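
The affine step above searches for the transformation that maximizes mutual information between the two modalities. A minimal sketch of the similarity measure itself, estimated from a joint intensity histogram (variable names are ours; the paper's optimizer and binning are not specified):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram -- the
    similarity measure maximized during the affine registration step."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # skip empty histogram cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
mr_slice = rng.random((64, 64))
mi_aligned = mutual_information(mr_slice, mr_slice)            # identical images
mi_random = mutual_information(mr_slice, rng.random((64, 64))) # unrelated images
```

A registration loop would apply candidate affine transforms to one image and keep the transform that yields the highest mutual information.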

Fractal dimension analysis as an easy computational approach to improve breast cancer histopathological diagnosis

  • Lucas Glaucio da Silva;Waleska Rayanne Sizinia da Silva Monteiro;Tiago Medeiros de Aguiar Moreira;Maria Aparecida Esteves Rabelo;Emílio Augusto Campos Pereira de Assis;Gustavo Torres de Souza
    • Applied Microscopy
    • /
    • v.51
    • /
    • pp.6.1-6.9
    • /
    • 2021
  • Histopathology is a well-established standard diagnosis employed for the majority of malignancies, including breast cancer. Nevertheless, despite training and standardization, it is considered operator-dependent, and errors are still a concern. Fractal dimension analysis is a computational image processing technique for assessing the degree of complexity in patterns. We aimed here at providing a robust and easily attainable method for introducing computer-assisted techniques into histopathology laboratories. Slides from two databases were used: A) Breast Cancer Histopathological; and B) Grand Challenge on Breast Cancer Histology. Set A contained 2480 images from 24 patients with benign alterations and 5429 images from 58 patients with breast cancer. Set B comprised 100 images of each type: normal tissue, benign alterations, in situ carcinoma, and invasive carcinoma. All images were analyzed with the FracLac algorithm in the ImageJ computational environment to yield box-count fractal dimension (Db) results. Images in set A at 40x magnification were statistically different (p = 0.0003), whereas images at 400x did not present differences in their means. In set B, the mean Db values presented promising statistical differences when comparing normal and/or benign images to in situ and/or invasive carcinoma (all p < 0.0001). Interestingly, there was no difference when comparing normal tissue to benign alterations. These data corroborate previous work in which fractal analysis allowed differentiating malignancies. Computer-aided diagnosis algorithms may benefit from using Db data; specific Db cut-off values may yield ~99% specificity in diagnosing breast cancer. Furthermore, because it allows assessing tissue complexity, this tool may be used to understand the progression of histological alterations in cancer.
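
The box-count fractal dimension (Db) the study obtains from FracLac can be sketched in a few lines: count the boxes of several sizes that contain any foreground, then fit the slope of log N(s) against log(1/s). The function below is a minimal illustration, not the FracLac implementation.

```python
import numpy as np

def box_count_dimension(binary, sizes=(2, 4, 8, 16)):
    """Box-counting fractal dimension Db: count boxes containing any
    foreground at several scales, then fit log N(s) against log(1/s)."""
    counts = []
    n, m = binary.shape
    for s in sizes:
        trimmed = binary[:n - n % s, :m - m % s]   # drop ragged edges
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, -1, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

db_plane = box_count_dimension(np.ones((64, 64), dtype=bool))  # filled plane, Db ~ 2
db_line = box_count_dimension(np.eye(64, dtype=bool))          # straight line, Db ~ 1
```

Sanity checks on simple shapes recover the expected dimensions, which is what makes Db a usable measure of tissue-pattern complexity in between.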

Improved Classification of Cancerous Histopathology Images using Color Channel Separation and Deep Learning

  • Gupta, Rachit Kumar;Manhas, Jatinder
    • Journal of Multimedia Information System
    • /
    • v.8 no.3
    • /
    • pp.175-182
    • /
    • 2021
  • Oral cancer is the second most diagnosed cancer in the Indian population and the sixth worldwide. It is one of the deadliest cancers, with a high mortality rate and very low 5-year survival rates even after treatment. It is therefore necessary to detect oral malignancies as early as possible so that timely treatment can be given and the patient's chances of survival increased. In recent years, many researchers have proposed deep learning-based frameworks that can detect malignancies from medical images. In this paper, we propose a deep learning-based framework that detects oral cancer from histopathology images very efficiently. Our model splits the color channels and extracts deep features from these individual channels, rather than from a single combined channel, with the help of EfficientNet-B3. The features from the different channels are fused by a feature fusion module designed as a layer and placed before the dense layers of EfficientNet. The experiments were performed on our own dataset collected from hospitals. We also performed experiments on the BreakHis and ICML datasets to evaluate our model. The results produced by our model compare very favorably with previously reported results.
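
The split-then-fuse idea above can be sketched without a deep learning framework: featurize each color channel independently and concatenate the results before a classifier head. The paper runs each channel through an EfficientNet-B3 backbone; a normalized histogram stands in for that backbone here, purely for illustration.

```python
import numpy as np

def channel_features(channel, bins=16):
    """Stand-in per-channel feature extractor; the paper uses an
    EfficientNet-B3 backbone, a histogram keeps this sketch light."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    return hist / hist.sum()

def fused_features(rgb_image):
    """Split into R, G, B, featurize each channel separately, then fuse
    by concatenation -- the role of the paper's feature fusion layer."""
    return np.concatenate([channel_features(rgb_image[..., c])
                           for c in range(rgb_image.shape[-1])])

rng = np.random.default_rng(2)
patch = rng.integers(0, 256, size=(48, 48, 3))
features = fused_features(patch)   # 3 channels x 16 bins = 48 features
```

The fused vector preserves channel-specific information that a single grayscale or combined-channel feature would blur together.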

Cerebellar Liponeurocytoma with an Unusually Aggressive Histopathology : Case Report and Review of the Literature

  • Chung, Sang-Bong;Suh, Yeon-Lim;Lee, Jung-Il
    • Journal of Korean Neurosurgical Society
    • /
    • v.52 no.3
    • /
    • pp.250-253
    • /
    • 2012
  • We report a rare case of cerebellar liponeurocytoma with an unusually aggressive histopathology. A 49-year-old man presented with a four-month history of headache, vertigo, and progressive swaying gait. Magnetic resonance imaging showed a 3 × 3.5 cm, relatively well-demarcated round mass lesion in the fourth ventricle, characterized by high signal intensity on T2-weighted images. Postcontrast images revealed strong enhancement of the solid portion and the cyst wall. The patient underwent suboccipital craniectomy and tumor removal. The pathologic diagnosis was cerebellar liponeurocytoma. Adjuvant radiotherapy was offered due to concerns related to the high proliferative index (Ki-67, 13.68%) of the tumor. At the last routine postoperative follow-up visit (12 months), the patient reported no specific symptoms and there was no evidence of tumor recurrence. However, long-term follow-up and the analysis of similar cases are necessary because of the low number of reports and the short follow-up of cases.

Breast Tumor Cell Nuclei Segmentation in Histopathology Images using EfficientUnet++ and Multi-organ Transfer Learning

  • Dinh, Tuan Le;Kwon, Seong-Geun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.8
    • /
    • pp.1000-1011
    • /
    • 2021
  • In recent years, the application of deep learning methods to medical and biomedical image analysis has seen many advances. In clinical practice, deep learning-based approaches to cancer image analysis are among the key applications for cancer detection and treatment. However, the scarcity of labeled images makes it difficult for cancer detection and analysis to reach high accuracy. In 2015, the Unet model was introduced and gained much attention from researchers in the field. The success of the Unet model lies in its ability to produce high accuracy with very few input images. Since the development of Unet, there have been many variants and modifications of Unet-related architectures. This paper proposes a new approach using Unet++ with a pretrained EfficientNet as the backbone architecture for breast tumor cell nuclei segmentation, together with a multi-organ transfer learning approach for segmenting the nuclei of breast tumor cells. We evaluate the performance of the network on the MoNuSeg training dataset and the Triple Negative Breast Cancer (TNBC) testing dataset, both of which consist of Hematoxylin and Eosin (H&E)-stained images. The results show that the EfficientUnet++ architecture and the multi-organ transfer learning approach outperformed other techniques and produced notable accuracy for breast tumor cell nuclei segmentation.
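
Nuclei segmentation results like these are typically scored by mask overlap. A minimal sketch of the Dice coefficient, a standard segmentation metric (the abstract does not name the exact metric used, and the toy masks below are illustrative):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice overlap between predicted and ground-truth nuclei masks,
    a standard segmentation score."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((32, 32), dtype=bool)
truth[8:24, 8:24] = True              # a 16x16 "nucleus"
pred = np.zeros_like(truth)
pred[8:24, 8:16] = True               # prediction covers only the left half
dice = dice_coefficient(pred, truth)  # 2*128 / (128 + 256) = 2/3
```

A perfect prediction scores 1.0; the half-covered nucleus above scores 2/3, which makes Dice an intuitive way to compare architectures such as EfficientUnet++ against baselines.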

LI-RADS Version 2018 Treatment Response Algorithm: Diagnostic Performance after Transarterial Radioembolization for Hepatocellular Carcinoma

  • Jongjin Yoon;Sunyoung Lee;Jaeseung Shin;Seung-seob Kim;Gyoung Min Kim;Jong Yun Won
    • Korean Journal of Radiology
    • /
    • v.22 no.8
    • /
    • pp.1279-1288
    • /
    • 2021
  • Objective: To assess the diagnostic performance of the Liver Imaging Reporting and Data System (LI-RADS) version 2018 treatment response algorithm (TRA) for the evaluation of hepatocellular carcinoma (HCC) treated with transarterial radioembolization. Materials and Methods: This retrospective study included patients who underwent transarterial radioembolization for HCC followed by hepatic surgery between January 2011 and December 2019. The resected lesions were determined to have either complete (100%) or incomplete (< 100%) necrosis based on histopathology. Three radiologists independently reviewed the CT or MR images of pre- and post-treatment lesions and assigned categories based on the LI-RADS version 2018 and the TRA, respectively. Diagnostic performance of the LI-RADS treatment response (LR-TR) viable and nonviable categories was assessed for each reader, using histopathology from the hepatic surgeries as the reference standard. Inter-reader agreement was evaluated using the Fleiss κ. Results: A total of 27 patients (mean age ± standard deviation, 55.9 ± 9.1 years; 24 male) with 34 lesions (15 with complete necrosis and 19 with incomplete necrosis on histopathology) were included. To predict complete necrosis, the LR-TR nonviable category had a sensitivity of 73.3-80.0% and a specificity of 78.9-89.5%. For predicting incomplete necrosis, the LR-TR viable category had a sensitivity of 73.7-79.0% and a specificity of 93.3-100%. Five (14.7%) of the 34 treated lesions were categorized as LR-TR equivocal by consensus, with two of the five demonstrating incomplete necrosis. Inter-reader agreement for the LR-TR category was 0.81 (95% confidence interval: 0.66-0.96). Conclusion: The LI-RADS version 2018 TRA can be used to predict the histopathologic viability of HCCs treated with transarterial radioembolization.
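
The per-reader performance figures above come from standard 2x2 confusion-matrix arithmetic. A minimal sketch; the confusion counts below are hypothetical, chosen only to be consistent with the reported ranges (19 incomplete- and 15 complete-necrosis lesions), not taken from the paper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Per-reader diagnostic performance: sensitivity = TP / (TP + FN),
    specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one reader rating the LR-TR viable category
# (positive = incomplete necrosis on histopathology).
sens, spec = sensitivity_specificity(tp=15, fn=4, tn=14, fp=1)
# sens = 15/19 ~ 78.9%, spec = 14/15 ~ 93.3%
```

With three readers, each reader contributes one such pair, which is why the abstract reports ranges rather than single values.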

A Study on Deep Learning Binary Classification of Prostate Pathological Images Using Multiple Image Enhancement Techniques

  • Park, Hyeon-Gyun;Bhattacharjee, Subrata;Deekshitha, Prakash;Kim, Cho-Hee;Choi, Heung-Kook
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.4
    • /
    • pp.539-548
    • /
    • 2020
  • Deep learning technology is currently being used and applied in many different fields. The convolutional neural network (CNN) is an artificial neural network method in deep learning that is commonly used for analyzing different types of images through classification. In the conventional classification of histopathology images of prostate carcinomas, the cancer grade is assigned by subjective human observation. However, this approach has led to some misdiagnosis in cancer grading. To solve this problem, a CNN-based classification method is proposed in this paper to train on the histological images and classify the prostate cancer grade into two classes, benign and malignant. The CNN architecture used in this paper is based on the VGG models, which are specialized for image classification. In addition, color normalization was performed based on a contrast enhancement technique, and the normalized images were used for CNN training to compare the classification results of the original and normalized images. In all cases, accuracy was over 90%: the original images achieved 96% accuracy, the other cases achieved higher accuracy, and the lowest loss was 9%.
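
One simple contrast-enhancement normalization of the kind described above is percentile-based contrast stretching. The paper does not name its exact technique, so the sketch below is illustrative only:

```python
import numpy as np

def contrast_stretch(channel, low_pct=2, high_pct=98):
    """Percentile contrast stretching -- one simple contrast-enhancement
    normalization (the paper's exact technique is not specified)."""
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    scaled = (channel.astype(np.float64) - lo) / max(hi - lo, 1e-7)
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

rng = np.random.default_rng(3)
dull = rng.integers(100, 156, size=(64, 64))   # low-contrast patch
enhanced = contrast_stretch(dull)              # spread over the full 0-255 range
```

Stretching the narrow intensity band across the full range increases contrast, which is the point of normalizing the images before CNN training.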

Multi-class Classification of Histopathology Images using Fine-Tuning Techniques of Transfer Learning

  • Ikromjanov, Kobiljon;Bhattacharjee, Subrata;Hwang, Yeong-Byn;Kim, Hee-Cheol;Choi, Heung-Kook
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.7
    • /
    • pp.849-859
    • /
    • 2021
  • Prostate cancer (PCa) is a fatal disease that occurs in men. In general, PCa cells are found in the prostate gland. Early diagnosis is the key to preventing cancer from spreading to other parts of the body, and deep learning-based systems can detect and distinguish histological patterns in microscopy images. The histological grades used for the analysis were benign, grade 3, grade 4, and grade 5. In this study, we use transfer learning and fine-tuning methods as well as different model architectures to develop and compare models. We implemented the MobileNet, ResNet50, and DenseNet121 models and used three different layer-freezing strategies during fine-tuning to obtain various pre-trained weights and improve accuracy. Finally, transfer learning using MobileNet with half of its layers frozen showed the best results among the nine models, obtaining 90% accuracy on the test data set.
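
The layer-freezing idea behind the fine-tuning strategies above can be sketched without a deep learning framework: frozen layers keep their pretrained weights while only the unfrozen layers receive gradient updates. This toy stack of weight matrices is purely illustrative (the paper fine-tunes MobileNet, ResNet50, and DenseNet121):

```python
import numpy as np

# Four weight matrices stand in for a pretrained backbone.
rng = np.random.default_rng(5)
layers = [rng.random((4, 4)) for _ in range(4)]
frozen = [True, True, False, False]        # the "half-layer frozen" strategy
before = [layer.copy() for layer in layers]

learning_rate = 0.1
for layer, is_frozen in zip(layers, frozen):
    if not is_frozen:                      # only unfrozen layers are updated
        layer -= learning_rate * rng.random(layer.shape)  # stand-in gradient

unchanged = [np.array_equal(now, then) for now, then in zip(layers, before)]
```

Freezing more layers preserves more of the pretrained features and reduces the number of trainable parameters; the study compares three such freezing points per architecture, hence nine models.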

ZoomISEG: Interactive Multi-Scale Fusion for Histopathology Whole Slide Image Segmentation

  • Seonghui Min;Won-Ki Jeong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.3
    • /
    • pp.127-135
    • /
    • 2023
  • Accurate segmentation of histopathology whole slide images (WSIs) is a crucial task for disease diagnosis and treatment planning. However, conventional automated segmentation algorithms are not always applicable to WSI segmentation because of the images' large size and variations in tissue appearance, staining, and imaging conditions. Recent advances in interactive segmentation, which combines human expertise with algorithms, have shown promise for improving the efficiency and accuracy of WSI segmentation, but they also present challenging issues. In this paper, we propose a novel interactive segmentation method, ZoomISEG, that leverages multi-resolution WSIs. We demonstrate the efficacy and performance of the proposed method via comparison with conventional single-scale methods and an ablation study. The results confirm that the proposed method can reduce human interaction while achieving accuracy comparable to that of the brute-force approach using the highest-resolution data.
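
The multi-resolution structure such methods exploit is an image pyramid: coarse levels support cheap interaction over the whole slide, and fine levels are consulted only where refinement is needed. A minimal block-averaging sketch (WSI formats typically store these levels precomputed; the function names are ours, not the paper's):

```python
import numpy as np

def downsample(image, factor):
    """One pyramid level by block averaging."""
    h, w = image.shape
    trimmed = image[:h - h % factor, :w - w % factor]   # drop ragged edges
    return trimmed.reshape(h // factor, factor, -1, factor).mean(axis=(1, 3))

def pyramid(image, levels=3):
    """Multi-resolution stack: level k is downsampled by a factor of 2**k."""
    return [downsample(image, 2 ** k) for k in range(levels)]

rng = np.random.default_rng(4)
region = rng.random((256, 256))
stack = pyramid(region)   # shapes (256, 256), (128, 128), (64, 64)
```

Because block averaging preserves the global mean, a coarse level is a faithful low-cost summary of the full-resolution data it replaces during interaction.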