• Title/Summary/Keyword: Size Normalization

An Analytical and Experimental Study of Binary Image Normalization for Scale Invariance with Zernike Moments

  • Kim, Whoi-Yul
    • Journal of Electrical Engineering and Information Science
    • /
    • v.2 no.6
    • /
    • pp.146-155
    • /
    • 1997
  • In order to achieve scale and rotation invariance when recognizing unoccluded objects in binary images using Zernike moment features, the image of an object has often first been normalized by its zeroth-order moment (ZOM), i.e., its area. With elongated objects such as characters, however, the stroke width varies with the threshold value used, becoming one or two pixels wider or thinner. The resulting variation in the total area of the character becomes significant when the character is relatively thin with respect to its overall size, and the normalized moment features are then no longer reliable. This dilation/erosion effect is more severe when the object is not focused precisely. In this paper, we analyze the ZOM method and propose an alternative normalization method based on the maximum enclosing circle (MEC) centered at the centroid of the character. We compare the performance of the ZOM and MEC methods through various experiments.
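
A minimal NumPy sketch of the two normalization criteria contrasted above may help; the target values, function names, and bilinear resampling here are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def zom_scale_factor(binary_img, target_area=1000.0):
    """Scale factor from the zeroth-order moment (the object's area)."""
    area = binary_img.sum()                    # m00 of a binary image
    return np.sqrt(target_area / area)         # area grows with the square of scale

def mec_scale_factor(binary_img, target_radius=32.0):
    """Scale factor from the maximum enclosing circle centered at the centroid."""
    ys, xs = np.nonzero(binary_img)
    cy, cx = ys.mean(), xs.mean()              # centroid (m10/m00, m01/m00)
    radius = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()
    return target_radius / radius              # radius grows linearly with scale

def normalize(binary_img, method="mec"):
    """Resample the image so the chosen size measure hits its target value."""
    s = mec_scale_factor(binary_img) if method == "mec" else zom_scale_factor(binary_img)
    return zoom(binary_img.astype(float), s, order=1) > 0.5
```

Unlike the area, the MEC radius is insensitive to a one- or two-pixel change in stroke width, which is the paper's motivation for preferring it on thin characters.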

Ultrasonographic Features of Medullary Thyroid Carcinoma: Do they Correlate with Pre- and Post-Operative Calcitonin Levels?

  • Cho, Kyung Eun;Gweon, Hye Mi;Park, Ah Young;Yoo, Mi Ri;Kim, Jeong-Ah;Youk, Ji Hyun;Park, Young Mi;Son, Eun Ju
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.7
    • /
    • pp.3357-3362
    • /
    • 2016
  • Purpose: To correlate ultrasonographic (US) features of medullary thyroid carcinoma (MTC) with preoperative and postoperative calcitonin levels. Materials and Methods: A total of 130 thyroid nodules diagnosed as MTC were evaluated. Two radiologists retrospectively evaluated preoperative US features according to size, shape, margin, echogenicity, type of calcification, and lymph node status. Postoperative clinical and imaging follow-up (mean duration 31.9 ± 22.5 months) was performed for detection of tumor recurrence. US features, presence of LN metastasis, and tumor recurrence were compared between MTC nodules with and without elevated preoperative calcitonin (>100 pg/mL). Groups with normalized and non-normalized postoperative calcitonin levels were also compared. Results: Common US features of MTCs were solid internal content (90.8%), irregular shape (44.6%), circumscribed margin (46.2%), and hypoechogenicity (56.2%). Comparing MTC nodules with and without elevated preoperative calcitonin levels, nodule size, nodule shape, and lymph node metastasis showed statistically significant differences (p<0.05). Postoperative calcitonin normalization correlated with the US features of tumor size (p=0.002), margin (p=0.034), shape (p≤0.001), and presence of calcification (p=0.046). Tumor recurrence and LN metastasis were more prevalent in patients without normalization of postoperative calcitonin than in those with normalization (p=0.001). Conclusions: Serum calcitonin measurement is helpful for early diagnosis and for predicting prognosis. Postoperative calcitonin measurement is also important for postoperative US follow-up, especially in cases with larger nodule size, presence of calcification, irregular shape, and irregular margin.

Evaluation of Image for Phantom according to Normalization, Well Counter Correction in PET-CT

  • Choong-Woon Lee;Yeon-Wook You;Jong-Woon Mun;Yun-Cheol Kim
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.27 no.1
    • /
    • pp.47-54
    • /
    • 2023
  • Purpose: PET-CT imaging requires an appropriate quality assurance system to achieve high efficiency and reliability, and quality control is essential for improving the quality of care and patient safety. Currently, the performance evaluation methods NU 2-1994 and NU 2-2001, proposed by NEMA and the IEC, are available for PET-CT image evaluation. In this study, we compare phantom images from identical experiments before and after PET-CT 3D normalization and well counter correction, and evaluate the usefulness of these quality-control procedures. Materials and Methods: A Discovery 690 (General Electric Healthcare, USA) PET-CT system was used to perform 3D normalization and well counter correction as recommended by GE Healthcare. Recovery coefficients were assessed for the six spheres of the NEMA IEC body phantom, as recommended by EARL: 18F was injected at 20 kBq/mL into the spheres and at 2 kBq/mL into the body of the phantom, and the PET-CT scan was performed with a radioactivity ratio of 10:1. Images were reconstructed by applying TOF+PSF, TOF, OSEM+PSF, and OSEM with Gaussian filters of 4.0, 4.5, 5.0, 5.5, 6.0, and 6.5 mm, under the conditions of a 128×128 matrix, 3.75 mm slice thickness, 2 iterations, and 16 subsets. The PET images were attenuation-corrected using the CT images and analyzed with the AW 4.7 software (General Electric Healthcare, USA). ROIs were fitted to the six spheres in the CT image, and the recovery coefficient (RC) was measured after fusing PET and CT. Statistical analysis was performed with the Wilcoxon signed-rank test in R. Results: Overall, the recovery coefficients of the phantom images were higher after the quality-control items were performed. The recovery coefficient increased with the reconstruction method in the order TOF+PSF, TOF, OSEM+PSF; comparing before and after quality control, RCmax increased by 0.13 for OSEM, 0.16 for OSEM+PSF, 0.16 for TOF, and 0.15 for TOF+PSF, while RCmean increased by 0.09 for OSEM, 0.09 for OSEM+PSF, 0.106 for TOF, and 0.10 for TOF+PSF. The Wilcoxon signed-rank test showed a statistically significant difference between the two groups (p < 0.001). Conclusion: PET-CT systems require quality assurance to achieve high efficiency and reliability, and standardized intervals and procedures should be followed for quality control. We hope this study serves as a good opportunity to consider the importance of quality control in PET-CT.
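
The recovery coefficients above follow the standard definition, the measured activity concentration divided by the true one; the sketch below assumes that definition, and the ROI values it is run on are synthetic.

```python
import numpy as np

def recovery_coefficients(roi_values, true_activity_kbq_ml):
    """RCmax and RCmean for one sphere ROI of the NEMA IEC body phantom.

    roi_values: PET voxel values inside the sphere ROI, in kBq/mL.
    true_activity_kbq_ml: activity injected into the sphere (20 kBq/mL here).
    """
    rc_max = roi_values.max() / true_activity_kbq_ml
    rc_mean = roi_values.mean() / true_activity_kbq_ml
    return rc_max, rc_mean

# Synthetic voxel samples standing in for a measured sphere ROI
rng = np.random.default_rng(0)
roi = rng.normal(loc=18.0, scale=1.5, size=500)
print(recovery_coefficients(roi, true_activity_kbq_ml=20.0))
```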

3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models: Quantitative Analysis on Aligning Process

  • Shin, Dong-Won;Park, Sang-Jun;Ko, Jae-Pil
    • Korean Journal of Computational Design and Engineering
    • /
    • v.13 no.6
    • /
    • pp.403-411
    • /
    • 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same holds for facial meshes in 3D face recognition. Most 3D face recognition methods refer to 3D alignment but do not describe their approaches in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. We present a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. A rotational transform about each axis is defined with respect to the reference position. In the aligning process, these rotational transforms convert an input 3D face with large pose variation to the reference frontal view. The facial region is then cropped from the aligned face using a sphere centered at the nose tip of the 3D face. The cropped face is shifted and brought into a frame of specified size for normalization. Subsequently, interpolation is applied to the face to sample it at equal intervals and to fill holes; color interpolation is carried out at the same interval. The outputs are a normalized 2D and 3D face that can be used for face recognition. Finally, we carry out two sets of experiments to measure alignment errors and to evaluate the performance of the suggested process.
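
The paper defines a rotational transform about each axis with respect to the reference eye and mouth positions; as a compact hedged alternative, the sketch below computes one rigid rotation with the Kabsch algorithm from detected landmarks, then crops the sphere around the nose tip. The landmark layout and cropping radius are assumptions.

```python
import numpy as np

def align_to_reference(landmarks, reference):
    """Rigid rotation (Kabsch) mapping detected 3D landmarks (Nx3 array,
    e.g. eye corners and mouth) onto a reference frontal-view layout."""
    src = landmarks - landmarks.mean(axis=0)
    dst = reference - reference.mean(axis=0)
    u, _, vt = np.linalg.svd(src.T @ dst)
    d = np.sign(np.linalg.det(u @ vt))          # guard against reflections
    r = (u @ np.diag([1.0, 1.0, d]) @ vt).T     # rotation matrix R
    return r                                    # apply as: centered_vertices @ r.T

def crop_sphere(vertices, nose_tip, radius):
    """Keep mesh vertices inside a sphere centered at the nose tip."""
    mask = np.linalg.norm(vertices - nose_tip, axis=1) <= radius
    return vertices[mask]
```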

Effect of dimensionless number and analysis of gait pattern by gender -spatiotemporal variables-

  • Lee, Hyun-Seob
    • The Korean Journal of Physical Education (Humanities and Social Sciences)
    • /
    • v.53 no.5
    • /
    • pp.521-531
    • /
    • 2014
  • The purposes of this study were to evaluate the effect of normalization by the dimensionless numbers of Hof (1996) and to analyze the gait patterns of Korean males and females in their twenties. Subjects were selected in accordance with the Korean standard classification of body figure and age. Subjects walking at a self-selected normal speed were recorded with a motion capture system and analyzed using a 3D motion analysis method with OrthoTrak, Cortex, Matlab, and SPSS for statistical testing. When the normalized data were used, there were no statistically significant differences between the genders in any of the spatiotemporal variables. I conclude that gait research aimed at mutual comparison requires normalization by dimensionless numbers, to eliminate the effect of body size and to enable accurate statistical analysis.
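
Hof's (1996) scheme divides lengths by leg length l0 and times by sqrt(l0/g); the sketch below applies those published definitions to the spatiotemporal variables, with invented example values.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def hof_normalize(speed_m_s, stride_length_m, cadence_hz, leg_length_m):
    """Dimensionless spatiotemporal gait variables after Hof (1996):
    lengths are divided by l0, times by sqrt(l0/g), so speed is divided
    by sqrt(g*l0) and frequency is multiplied by sqrt(l0/g)."""
    l0 = leg_length_m
    return {
        "speed": speed_m_s / math.sqrt(G * l0),
        "stride_length": stride_length_m / l0,
        "cadence": cadence_hz * math.sqrt(l0 / G),
    }

# e.g. leg length 0.9 m, walking at 1.3 m/s with 1.4 m strides at 1.9 steps/s
print(hof_normalize(1.3, 1.4, 1.9, 0.9))
```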

Automatic Pancreas Detection on Abdominal CT Images using Intensity Normalization and Faster R-CNN

  • Choi, Si-Eun;Lee, Seong-Eun;Hong, Helen
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.396-405
    • /
    • 2021
  • In surgery to remove pancreatic cancer, it is important to determine the shape of the patient's pancreas. However, previous studies have been limited in their ability to detect the pancreas automatically in abdominal CT images, because the pancreas varies in shape, size, and location from patient to patient. Therefore, in this paper, we propose a method that learns the various shapes of the pancreas across patients and adjacent slices using Faster R-CNN based on Inception V2, and automatically detects the pancreas in abdominal CT images. Model training and testing were performed using the NIH Pancreas-CT dataset, and intensity normalization was applied to all data to improve pancreatic detection accuracy. Additionally, according to the shape of the pancreas, the test dataset was divided into top, middle, and bottom slices to evaluate the model's performance on each subset. The results show that mAP@.50IoU reached 91.7% on the top slices and 95.4% on the bottom slices, with the highest performance, 98.5% mAP@.50IoU, on the middle slices. Thus, we confirmed that the model can accurately detect the pancreas in CT images.
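
The abstract does not say how the intensities were normalized; one common choice for abdominal CT, shown below purely as an assumption, is to clip Hounsfield units to a soft-tissue window and rescale to [0, 1].

```python
import numpy as np

def normalize_ct_intensity(hu_slice, hu_min=-100.0, hu_max=240.0):
    """Clip a CT slice to a soft-tissue HU window and rescale to [0, 1].
    The window bounds are illustrative, not taken from the paper."""
    clipped = np.clip(hu_slice, hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)
```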

Evolutionary Computing Driven Extreme Learning Machine for Object-Oriented Software Aging Prediction

  • Ahamad, Shahanawaj
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.2
    • /
    • pp.232-240
    • /
    • 2022
  • To fulfill user expectations, the rapid evolution of software techniques and approaches has necessitated reliable and flawless software operations. Aging prediction for software under operation is becoming a basic and unavoidable requirement for ensuring systems' availability, reliability, and operation. In this paper, an improved evolutionary computing-driven extreme learning scheme (ECD-ELM) is suggested for object-oriented software aging prediction. To perform aging prediction, we employed a variety of metrics, including program size, McCabe complexity metrics, Halstead metrics, runtime failure event metrics, and some unique aging-related metrics (ARM). In our suggested paradigm, the extracted OOP software metrics undergo pre-processing, which includes outlier detection and normalization; this improved our proposed system's ability to deal with instances with unbalanced biases and metrics. Further, different dimensionality reduction and feature selection algorithms, such as principal component analysis (PCA), linear discriminant analysis (LDA), and t-test analysis, have been applied. We have suggested a single-hidden-layer multi-feed-forward neural network (SL-MFNN) based ELM, where an adaptive genetic algorithm (AGA) is applied to estimate the weight and bias parameters for ELM learning. Unlike traditional neural network models, the implementation of the GA-based ELM with LDA feature selection outperformed other aging prediction approaches in terms of prediction accuracy, precision, recall, and F-measure. The results affirm that the implementation of outlier detection, normalization of imbalanced metrics, LDA-based feature selection, and GA-based ELM can be a reliable solution for object-oriented software aging prediction.
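
For reference, a baseline single-hidden-layer ELM in NumPy: random hidden weights and a Moore-Penrose pseudo-inverse for the output weights. In the paper those hidden weights and biases are instead tuned by an adaptive genetic algorithm, which this sketch does not reproduce.

```python
import numpy as np

def elm_train(x, t, n_hidden=64, seed=0):
    """Baseline ELM: x is (n_samples, n_features), t is (n_samples, n_targets)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    h = np.tanh(x @ w + b)                        # hidden-layer output matrix H
    beta = np.linalg.pinv(h) @ t                  # output weights via pseudo-inverse
    return w, b, beta

def elm_predict(x, w, b, beta):
    return np.tanh(x @ w + b) @ beta
```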

Image Enhancement for Western Epigraphy Using Local Statistics (국부 통계치를 활용한 서양금석문 영상향상)

  • Hwang, Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.3
    • /
    • pp.80-87
    • /
    • 2007
  • In this paper, we investigate an enhancement method for Western epigraphic images based on local statistics. The image data are partitioned into two regions, background and information, and statistical and functional analyses are performed for image modeling. Western epigraphic images, for the most part, show a Gaussian distribution, and the two regions prove to be statistically distinguishable. A local normalization algorithm is designed on this model. A parameter is extracted and its properties are verified against the size of the moving window. The spatial gray-level distribution is modified, and the regions are differentiated, by adjusting the parameter and the size of the moving window. Local statistics are thus utilized to realize the enhancement, so that the difference between regions is enhanced while noise and speckle within a region are smoothed. Experimental results are presented to show the superiority of the proposed algorithm over conventional methods.
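
A minimal sketch of the kind of moving-window normalization the paper builds on: each pixel is standardized by the mean and standard deviation of its local neighborhood. The window size and the box filter are assumptions for this example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(img, win=15, eps=1e-6):
    """Standardize each pixel by the local mean and standard deviation
    computed over a win x win moving window."""
    img = img.astype(float)
    mean = uniform_filter(img, size=win)
    sq_mean = uniform_filter(img ** 2, size=win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return (img - mean) / (std + eps)
```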

Iris Recognition Using Ridgelets

  • Birgale, Lenina;Kokare, Manesh
    • Journal of Information Processing Systems
    • /
    • v.8 no.3
    • /
    • pp.445-458
    • /
    • 2012
  • Image feature extraction is one of the basic tasks in biometric analysis. This paper presents the novel concept of applying ridgelets to iris recognition systems. The ridgelet transform is a combination of the Radon transform and the wavelet transform, which makes it suitable for extracting the abundant textural data present in an iris. The technique proposed here uses ridgelets to form an iris signature and to represent the iris. This paper contributes towards creating an improved iris recognition system: the feature vector is reduced to a size of 1×4, the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are lowered, and the accuracy is increased. The proposed method also avoids the iris normalization process traditionally used in iris recognition systems. Experimental results indicate that the proposed method achieves an accuracy of 99.82%, with a FAR of 0.1309% and an FRR of 0.0434%.
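
Since a ridgelet transform is a Radon transform followed by a 1-D wavelet transform of the projections, a hedged sketch of such a feature extractor appears below; the wavelet family, angle count, and the four summary statistics producing the 1×4 vector are assumptions, as the abstract gives only the vector size.

```python
import numpy as np
import pywt
from skimage.transform import radon

def ridgelet_features(iris_img):
    """1x4 ridgelet-style signature: Radon projections of the iris image,
    then a 1-D wavelet decomposition along each projection."""
    theta = np.linspace(0.0, 180.0, 64, endpoint=False)
    sinogram = radon(iris_img, theta=theta, circle=False)  # one column per angle
    approx, detail = pywt.dwt(sinogram, "db4", axis=0)     # wavelet on projections
    return np.array([np.abs(approx).mean(), approx.std(),
                     np.abs(detail).mean(), detail.std()])
```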

A New Method of Selecting Cohort for Speaker Verification

  • 김성준;계영철
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.5
    • /
    • pp.383-387
    • /
    • 2003
  • This paper deals with speaker verification based on the conventional cohort of fixed size, and proposes a new cohort of variable size that makes use of the distance between speaker models: the density of neighboring speaker models within a fixed distance of each speaker is taken into account. A high density leads to a larger cohort, which improves the speaker verification rate; a low density leads to a smaller cohort, which reduces the amount of computation. The simulation results show that the proposed method outperforms the conventional one, achieving a reduction in the EER.
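
A minimal sketch of the proposed variable-size cohort: every background speaker model within a fixed distance of the claimed speaker's model joins the cohort, so a dense neighborhood yields a larger cohort and a sparse one a smaller cohort. The distance matrix and the radius value are assumed inputs; neither is specified in the abstract.

```python
import numpy as np

def select_cohort(distances, speaker_idx, radius):
    """Variable-size cohort for one claimed speaker.

    distances: square matrix of pairwise distances between speaker models.
    radius: the fixed distance threshold within which neighbors are counted.
    """
    d = distances[speaker_idx].copy()
    d[speaker_idx] = np.inf                 # a speaker is not its own cohort
    return np.where(d <= radius)[0]         # dense regions -> larger cohorts
```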