• Title/Summary/Keyword: Image tissue feature extraction

6 search results

A Novel Model for Smart Breast Cancer Detection in Thermogram Images

  • Kazerouni, Iman Abaspur; Zadeh, Hossein Ghayoumi; Haddadnia, Javad
    • Asian Pacific Journal of Cancer Prevention / v.15 no.24 / pp.10573-10576 / 2015
  • Background: Accuracy in feature extraction is an important factor in image classification and retrieval. In this paper, a breast tissue density classification and image retrieval model is introduced for breast cancer detection based on thermographic images. The new method of thermographic image analysis for automated detection of high-tumor-risk areas, which uses the two-directional two-dimensional principal component analysis technique for feature extraction and a support vector machine for thermographic image retrieval, was tested on 400 images. The sensitivity and specificity of the model are 100% and 98%, respectively.
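
As a rough illustration of the pipeline this abstract describes (two-directional two-dimensional PCA for feature extraction followed by an SVM classifier), the sketch below shows one way to wire those pieces together in Python. It is not the authors' implementation: the number of retained components, the RBF kernel, and all function and variable names are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def two_directional_2dpca(images, k_rows=8, k_cols=8):
    """Learn row- and column-direction projections from a stack of
    equally sized 2-D images (two-directional two-dimensional PCA)."""
    X = np.asarray(images, dtype=float)              # shape (n, h, w)
    mean = X.mean(axis=0)
    A = X - mean
    G_col = np.einsum('nij,nik->jk', A, A) / len(A)  # column scatter, (w, w)
    G_row = np.einsum('nij,nkj->ik', A, A) / len(A)  # row scatter, (h, h)
    _, vec_c = np.linalg.eigh(G_col)                 # eigenvalues ascending
    _, vec_r = np.linalg.eigh(G_row)
    W = vec_c[:, -k_cols:]                           # right projection (w, k_cols)
    Z = vec_r[:, -k_rows:]                           # left projection (h, k_rows)
    return mean, Z, W

def project(images, mean, Z, W):
    """Project each image to a small k_rows x k_cols feature matrix Z^T A W."""
    A = np.asarray(images, dtype=float) - mean
    feats = np.einsum('hr,nhw,wc->nrc', Z, A, W)
    return feats.reshape(len(A), -1)

# Hypothetical usage with thermogram arrays and binary labels:
# mean, Z, W = two_directional_2dpca(train_images)
# clf = SVC(kernel='rbf').fit(project(train_images, mean, Z, W), train_labels)
# predictions = clf.predict(project(test_images, mean, Z, W))
```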

Organ Recognition in Ultrasound images Using Log Power Spectrum (로그 전력 스펙트럼을 이용한 초음파 영상에서의 장기인식)

  • 박수진; 손재곤; 김남철
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.9C / pp.876-883 / 2003
  • In this paper, we propose an algorithm for organ recognition in ultrasound images using the log power spectrum. The main procedure of the algorithm consists of feature extraction and feature classification. In the feature extraction stage, the log power spectrum, a translation-invariant feature, is used to extract information on the echoes of organ tissue from a preprocessed input image. In the feature classification stage, the Mahalanobis distance is used as a measure of the similarity between the feature of an input image and the representative feature of each class. Experimental results on real ultrasound images show that the proposed algorithm improves the recognition rate by up to 30% over a recognition algorithm using the power spectrum and Euclidean distance, and by 10-40% over a recognition algorithm using the weighted quefrency complex cepstrum.
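
The feature/classifier pairing described above (a log power spectrum as a translation-invariant feature, with Mahalanobis distance to per-class representative features) can be sketched roughly as follows. This is an illustrative sketch only, assuming grayscale ultrasound patches; the block-averaging of the spectrum, the pooled covariance, and all names are assumptions, not details from the paper.

```python
import numpy as np

def log_power_spectrum(image, bins=16):
    """Translation-invariant feature: log 2-D power spectrum, block-averaged
    to a coarse bins x bins grid so the feature stays low-dimensional."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2)
    h, w = spec.shape
    spec = spec[:h - h % bins, :w - w % bins]
    coarse = spec.reshape(bins, spec.shape[0] // bins,
                          bins, spec.shape[1] // bins).mean(axis=(1, 3))
    return coarse.ravel()

class MahalanobisClassifier:
    """Nearest representative-feature classifier under Mahalanobis distance."""
    def fit(self, features, labels):
        features, labels = np.asarray(features), np.asarray(labels)
        self.classes_ = np.unique(labels)
        self.means_ = {c: features[labels == c].mean(axis=0) for c in self.classes_}
        # Pooled covariance, regularized so the inverse always exists
        cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        self.inv_cov_ = np.linalg.inv(cov)
        return self

    def predict(self, features):
        preds = []
        for f in np.asarray(features):
            dists = {c: (f - m) @ self.inv_cov_ @ (f - m)
                     for c, m in self.means_.items()}
            preds.append(min(dists, key=dists.get))
        return np.array(preds)

# Hypothetical usage with preprocessed ultrasound patches and organ labels:
# X = np.stack([log_power_spectrum(p) for p in patches])
# organ = MahalanobisClassifier().fit(X, organ_labels).predict(X_test)
```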

EXTRACTION OF THE LEAN TISSUE BOUNDARY OF A BEEF CARCASS

  • Lee, C. H.; H. Hwang
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2000.11c / pp.715-721 / 2000
  • In this research, a rule- and neural-network-based boundary extraction algorithm was developed. Extracting the boundary of the region of interest, the lean tissue, is essential for color-machine-vision-based quality evaluation of beef. Major quality features of beef are the size and marbling state of the lean tissue, the color of the fat, and the thickness of the back fat. To evaluate beef quality, extracting the loin from the sectional image of the beef rib is the crucial first step. Since its boundary is unclear and very difficult to trace, a neural network model was developed to isolate the loin from the entire input image. At the network training stage, normalized color image data were used. The model reference of the boundary was determined by a binary feature extraction algorithm using the R (red) channel, and 100 sub-images (11×11 masks selected from the maximum extended boundary rectangle) were used as the training data set. Each mask carries information on the curvature of the boundary, and the basic rule in boundary extraction is adaptation to the known curvature of the boundary. The structured model reference and neural-network-based boundary extraction algorithm was implemented on beef images and the results were analyzed.

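As a loose sketch of two ingredients named in the beef-carcass entry above (a binary model reference derived from the R channel, and 11×11 sub-image patches fed to a small neural network), the code below shows one plausible arrangement. The thresholding rule, the boundary test, and the MLPClassifier stand-in for the paper's network are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def model_reference_boundary(rgb_image, threshold=None):
    """Rough model reference: threshold the R channel (lean tissue is the
    reddest region) and keep the mask pixels that touch the background."""
    r = rgb_image[..., 0].astype(float)
    if threshold is None:
        threshold = r.mean() + 0.5 * r.std()      # crude automatic threshold
    mask = r > threshold
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
                np.roll(mask, 1, 1) & np.roll(mask, -1, 1) & mask)
    return mask, mask & ~interior                 # (region mask, boundary pixels)

def patches_around(gray, points, size=11):
    """Collect size x size sub-images centred on the given (row, col) points."""
    half = size // 2
    padded = np.pad(gray, half, mode='edge')
    return np.stack([padded[r:r + size, c:c + size].ravel() for r, c in points])

# Hypothetical usage: train a small network to decide whether an 11x11 patch
# lies on the lean-tissue boundary, using labels taken from the model reference.
# net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
# net.fit(patches_around(red_channel, sample_points), sample_labels)
```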

Brain Tumor Detection Based on Amended Convolution Neural Network Using MRI Images

  • Mohanasundari M; Chandrasekaran V; Anitha S
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.10 / pp.2788-2808 / 2023
  • Brain tumors are one of the most threatening malignancies for humans. Misdiagnosis of brain tumors can result in false medical intervention, which ultimately reduces a patient's chance of survival. Manual identification and segmentation of brain tumors from Magnetic Resonance Imaging (MRI) scans can be difficult and error-prone because of the great range of tumor tissues that exist in various individuals and the similarity of normal tissues. To overcome this limitation, the Amended Convolutional Neural Network (ACNN) model has been introduced, a unique combination of three techniques that have not been previously explored for brain tumor detection. The three techniques integrated into the ACNN model are image tissue preprocessing using the Kalman Bucy Smoothing Filter to remove noisy pixels from the input, image tissue segmentation using the Isotonic Regressive Image Tissue Segmentation Process, and feature extraction using the Marr Wavelet Transformation. The extracted features are compared with the testing features using a sigmoid activation function in the output layer. The experimental findings show that the suggested model outperforms existing techniques concerning accuracy, precision, sensitivity, dice score, Jaccard index, specificity, Positive Predictive Value, Hausdorff distance, recall, and F1 score. The proposed ACNN model achieved a maximum accuracy of 98.8%, which is higher than other existing models, according to the experimental results.
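
The ACNN entry above names the Marr wavelet transform (the Mexican-hat / Laplacian-of-Gaussian wavelet) for feature extraction and a sigmoid activation at the output. The sketch below illustrates only those two stages, not the full ACNN; the choice of scales, the per-scale statistics, and all names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_wavelet_features(image, sigmas=(1.0, 2.0, 4.0)):
    """Multi-scale Marr (Mexican-hat / Laplacian-of-Gaussian) responses,
    summarized as simple per-scale statistics to form a feature vector."""
    image = np.asarray(image, dtype=float)
    feats = []
    for s in sigmas:
        response = gaussian_laplace(image, sigma=s)
        feats.extend([response.mean(), response.std(),
                      np.abs(response).mean(), response.max()])
    return np.array(feats)

def sigmoid_score(features, weights, bias=0.0):
    """Sigmoid output unit: maps the feature/weight match to a (0, 1) score."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Hypothetical usage on a 2-D MRI slice `slice_2d` with learned `weights`:
# tumor_probability = sigmoid_score(marr_wavelet_features(slice_2d), weights)
```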

Evaluation of Volumetric Texture Features for Computerized Cell Nuclei Grading

  • Kim, Tae-Yun; Choi, Hyun-Ju; Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1635-1648 / 2008
  • The extraction of important features in cancer cell image analysis is a key process in grading renal cell carcinoma. In this study, we applied three-dimensional (3D) texture feature extraction methods to cell nuclei images and evaluated their validity for computerized cell nuclei grading. Individual images of 2,423 cell nuclei were extracted from 80 renal cell carcinomas (RCCs) using confocal laser scanning microscopy (CLSM). First, we applied the 3D texture mapping method to render the volume of entire tissue sections. Then, we quantified the chromatin texture by calculating 3D gray-level co-occurrence matrices (3D GLCM) and 3D run-length matrices (3D GLRLM). Finally, to demonstrate the suitability of 3D texture features for grading, we performed a discriminant analysis. In addition, we conducted a principal component analysis to obtain optimized texture features. Automatic grading of cell nuclei using 3D texture features had an accuracy of 78.30%. Combining 3D textural and 3D morphological features improved the accuracy to 82.19%. As a comparative study, we also performed a stepwise feature selection; using the 4 optimized features, we obtained a further improved accuracy of 84.32%. Three-dimensional texture features have potential for use as fundamental elements in developing a new nuclear grading system with accurate diagnosis and prognosis prediction.

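A minimal sketch of the core texture computation in the entry above, a 3D gray-level co-occurrence matrix with a few Haralick-style statistics, is given below. It assumes a single displacement vector with non-negative components and a fixed number of gray levels; it is illustrative, not the authors' code.

```python
import numpy as np

def glcm_3d(volume, offset=(0, 0, 1), levels=16):
    """3-D gray-level co-occurrence matrix for one displacement vector
    (offset components are assumed non-negative for brevity)."""
    v = np.asarray(volume, dtype=float)
    q = np.floor((v - v.min()) / (np.ptp(v) + 1e-12) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)                 # quantized gray levels
    dz, dy, dx = offset
    src = q[:q.shape[0] - dz, :q.shape[1] - dy, :q.shape[2] - dx]
    dst = q[dz:, dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)
    return glcm / glcm.sum()

def haralick_features(glcm):
    """A few standard texture statistics from a normalized GLCM."""
    i, j = np.indices(glcm.shape)
    nz = glcm[glcm > 0]
    return {
        'contrast': ((i - j) ** 2 * glcm).sum(),
        'energy': (glcm ** 2).sum(),
        'homogeneity': (glcm / (1.0 + np.abs(i - j))).sum(),
        'entropy': -(nz * np.log(nz)).sum(),
    }

# Hypothetical usage on a CLSM nucleus volume `nucleus_vol` ordered (z, y, x):
# feats = haralick_features(glcm_3d(nucleus_vol, offset=(0, 0, 1)))
```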

A Classification of Breast Tumor Tissue Images Using SVM (SVM을 이용한 유방 종양 조직 영상의 분류)

  • Hwang, Hae-Gil; Choi, Hyun-Ju; Yoon, Hye-Kyoung; Choi, Heung-Kook
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.178-181 / 2005
  • The support vector machine is a powerful learning algorithm that attempts to separate points belonging to two given sets in N-dimensional real space by a nonlinear surface, often defined only implicitly by a kernel function. We describe breast tissue image analysis using texture features from Haar-wavelet-transformed images to classify breast lesions of the ductal organ as benign, DCIS, or CA. The approach for creating a classifier is composed of two steps: feature extraction and classification. In the feature extraction step, we extracted texture features from wavelet-transformed images at 10× magnification. In the classification step, we created four classifiers from the extracted features of each image using SVM (Support Vector Machines). In this study, we conclude that the best classifier for histological sections of breast tissue uses the texture features from second-level wavelet-transformed images with a polynomial kernel function.

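The entry above pairs second-level Haar wavelet texture features with an SVM and finds a polynomial kernel to work best. The sketch below shows that pairing with PyWavelets and scikit-learn; the specific sub-band statistics, the polynomial degree, and the variable names are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def haar_texture_features(image, level=2):
    """Simple texture statistics (mean absolute value, standard deviation,
    energy) of the Haar wavelet detail sub-bands up to the given level."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), 'haar', level=level)
    feats = []
    for detail_level in coeffs[1:]:               # (cH, cV, cD) per level
        for sub in detail_level:
            feats.extend([np.mean(np.abs(sub)), np.std(sub), np.mean(sub ** 2)])
    return np.array(feats)

# Hypothetical usage with grayscale tissue patches and labels in
# {benign, DCIS, CA}; the polynomial kernel mirrors the paper's finding.
# X = np.stack([haar_texture_features(p) for p in patches])
# clf = SVC(kernel='poly', degree=3).fit(X, labels)
# predictions = clf.predict(np.stack([haar_texture_features(p) for p in test_patches]))
```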