• Title/Summary/Keyword: Image Normalization

245 search results

Evaluation of Physical Correction in Nuclear Medicine Imaging : Normalization Correction (물리적 보정된 핵의학 영상 평가 : 정규화 보정)

  • Park, Chan Rok;Yoon, Seok Hwan;Lee, Hong Jae;Kim, Jin Eui
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.1
    • /
    • pp.29-33
    • /
    • 2017
  • Purpose In this study, we evaluated PET images by applying normalization factors acquired over 30 days. Materials and Methods Normalization factors were acquired daily for 30 days and compared with one another. We selected three clinical cases (PNS studies), applied each normalization factor to the PET raw data, and evaluated SUV and count (kBq/ml) by drawing ROIs on the liver and lesion. Results There was no significant difference among the normalization factors, and SUV and count did not differ across PET images according to the normalization factor applied. Conclusion Quality assurance, such as checking sinogram and detector performance, provides a great deal of information, which is why it should be performed daily.

  • PDF
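The SUV values compared in the abstract above follow the standard body-weight formula (tissue concentration divided by injected dose per unit body weight). The dose and weight values below are illustrative, not taken from the paper; a minimal sketch:

```python
def suv(roi_concentration_kbq_ml, injected_dose_mbq, body_weight_kg):
    """Standardized Uptake Value: ROI activity concentration divided by
    injected dose per unit body weight (1 ml of tissue ~ 1 g).
    Concentration in kBq/ml, dose in MBq, weight in kg."""
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg -> g
    return roi_concentration_kbq_ml / (dose_kbq / weight_g)

# Hypothetical liver ROI: 5 kBq/ml, 370 MBq injected, 70 kg patient
liver_suv = suv(5.0, 370.0, 70.0)
```

If normalization factors are stable day to day, this value should not change when different factors are applied to the same raw data, which is what the paper reports.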

Optimized Normalization for Unsupervised Learning-based Image Denoising (비지도 학습 기반 영상 노이즈 제거 기술을 위한 정규화 기법의 최적화)

  • Lee, Kanggeun;Jeong, Won-Ki
    • Journal of the Korea Computer Graphics Society
    • /
    • v.27 no.5
    • /
    • pp.45-54
    • /
    • 2021
  • Recently, deep learning-based denoising approaches have been actively studied. In particular, with advances in blind denoising techniques, it has become possible to train a deep learning-based denoising model using only noisy images, in image domains where clean images cannot be obtained; pairs of clean and noisy images are no longer required to restore a clean image from an observation. However, it is difficult to recover the target with a denoising model trained only on noisy images if the distribution of the noisy images is far from that of the clean images. To address this limitation, unpaired image denoising approaches, which learn the denoising model from unpaired sets of noisy and clean images, have recently been studied. ISCL showed performance close to that of supervised learning-based models trained on pairs of clean and noisy images. In this study, we propose normalization techniques suited to each component of the ISCL architecture (generator, discriminator, and extractor). We demonstrate that the proposed method outperforms state-of-the-art unpaired image denoising approaches, including ISCL.
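For background on the kind of normalization layer being tuned per component here, a generic instance normalization (a common choice for image-to-image generators) can be sketched in NumPy. This is illustrative only and does not reproduce the paper's per-component choices:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization over the spatial dimensions of an
    (N, C, H, W) array: each feature map is normalized independently
    per sample, removing per-image contrast/brightness statistics."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.rand(2, 3, 8, 8)
y = instance_norm(x)
```

After normalization every feature map has approximately zero mean and unit variance, regardless of the input statistics.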

Illumination Normalization Method for Robust Eye Detection in Lighting Changing Environment (조명변화에 강인한 눈 검출을 위한 조명 정규화 방법)

  • Xu, Chengzhe;Islam, Ihtesham Ul;Kim, In-Taek
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.955-956
    • /
    • 2008
  • This paper presents a new method for illumination normalization in eye detection. Based on the retinex image formation model, we employ the discrete wavelet transform to remove lighting effects from face image data. The final results show that the proposed method performs better at detecting eyes than previous work.

  • PDF
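The retinex model above treats an image as illumination times reflectance, so normalization amounts to estimating and removing the slowly varying illumination component. The paper does this with a discrete wavelet transform; as a simplified stand-in, the sketch below estimates illumination with a box blur in the log domain:

```python
import numpy as np

def retinex_normalize(img, ksize=15):
    """Retinex-style illumination normalization: estimate illumination
    as a local average in the log domain and subtract it, keeping the
    reflectance component. (A box blur stands in here for the paper's
    wavelet-based illumination estimate.)"""
    log_img = np.log1p(img.astype(np.float64))
    pad = ksize // 2
    padded = np.pad(log_img, pad, mode='edge')
    h, w = log_img.shape
    illum = np.zeros_like(log_img)
    for i in range(h):
        for j in range(w):
            # sliding-window mean approximates the low-frequency illumination
            illum[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    reflectance = log_img - illum
    rmin, rmax = reflectance.min(), reflectance.max()
    return (reflectance - rmin) / (rmax - rmin + 1e-12)  # rescale to [0, 1]
```

Subtracting in the log domain corresponds to dividing out the illumination in the original intensity domain.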

Line feature extraction in a noisy image

  • Lee, Joon-Woong;Oh, Hak-Seo;Kweon, In-So
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10a
    • /
    • pp.137-140
    • /
    • 1996
  • Finding line segments in an intensity image has been one of the most fundamental issues in computer vision. In complex scenes, it is hard to detect the locations of point features; line features are more robust and provide greater positional accuracy. In this paper we present a robust line-feature extraction algorithm that extracts line features in a single pass, without using any assumptions or constraints. Our algorithm consists of five steps: (1) edge scanning, (2) edge normalization, (3) line-blob extraction, (4) line-feature computation, and (5) line linking. Edge scanning drastically reduces the computational complexity caused by an excessive number of edge pixels. Edge normalization reduces the local quantization error induced by gradient-space partitioning and minimizes perturbations in edge orientation. We also analyze the effects of edge processing, and of the least-squares-based and principal-axis-based methods, on the computation of line orientation. We show the algorithm's efficiency on real images.

  • PDF
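The principal-axis method for line orientation mentioned in the abstract can be sketched directly: take the eigenvector of the edge-point covariance matrix with the largest eigenvalue. This is a generic illustration, not the paper's exact formulation:

```python
import numpy as np

def line_orientation_pca(points):
    """Estimate line orientation from edge points via the principal
    axis: the dominant eigenvector of the point covariance matrix.
    Unlike ordinary least squares (regressing y on x), this handles
    near-vertical lines without the slope blowing up.
    Returns the angle in radians."""
    pts = np.asarray(points, dtype=np.float64)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # direction of largest spread
    return np.arctan2(major[1], major[0])
```

Note the eigenvector sign is arbitrary, so the returned angle is only defined up to 180 degrees, which is all a line orientation needs.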

Quantitative Evaluation of Nonlinear Shape Normalization Methods for the Recognition of Large-Set Handwritten Characters (대용량 필기체 문자 인식을 위한 비선형 형태 정규화 방법의 정량적 평가)

  • 이성환;박정선
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.9
    • /
    • pp.84-93
    • /
    • 1993
  • Recently, several nonlinear shape normalization methods have been proposed to compensate for shape distortions in handwritten characters. In this paper, we review these methods from two points of view: feature projection and feature density equalization. The former builds a feature projection histogram by projecting a feature at each point of the input image onto the horizontal or vertical axis; the latter equalizes the feature densities of the input image by re-sampling that histogram. A systematic comparison of these methods was made based on the following criteria: recognition rate, processing speed, computational complexity, and measure of variation. We then present the results of a quantitative evaluation of each method against these criteria for a large variety of handwritten Hangul syllables.

  • PDF
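The projection-then-equalization scheme described above can be sketched concretely. The surveyed methods differ mainly in which feature they project; plain pixel (stroke) density is used below as an assumed stand-in:

```python
import numpy as np

def nonlinear_normalize(img, out_size=32):
    """Nonlinear shape normalization by feature-density equalization:
    project stroke density onto each axis, then pick source rows and
    columns so the cumulative density is spread evenly across the
    output grid. `img` is a binary (0/1) character image."""
    img = np.asarray(img, dtype=np.float64)
    coords = []
    for axis in (0, 1):
        proj = img.sum(axis=1 - axis) + 1e-6    # density projection histogram
        cum = np.cumsum(proj) / proj.sum()      # cumulative density
        # map output coordinate k to the input coordinate where the
        # cumulative density reaches (k + 0.5) / out_size
        targets = (np.arange(out_size) + 0.5) / out_size
        idx = np.searchsorted(cum, targets)
        coords.append(np.clip(idx, 0, img.shape[axis] - 1))
    rows, cols = coords
    return img[np.ix_(rows, cols)]
```

Dense stroke regions get stretched and empty margins get compressed, which is exactly the distortion compensation the evaluated methods aim for.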

New Shot Boundary Detection Using Local $X^2$-Histogram and Normalization (지역적 $X^2$-히스토그램과 정규화를 이용한 새로운 샷 경계 검출)

  • Shin, Seong-Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.2 s.46
    • /
    • pp.103-109
    • /
    • 2007
  • In this paper, we detect shot boundaries using a local $X^2$-histogram comparison method, which retains enough spatial information to be more robust to camera or object motion and to produce more precise results. We also present a normalization method that applies the log formula and a constant, as used for contrast enhancement in image processing, to the difference values. Finally, we present a shot boundary detection algorithm that detects boundaries based on the characteristics of general and abrupt shots.

  • PDF
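The local $X^2$-histogram comparison can be sketched as follows: split each frame into blocks, compare per-block histograms with the chi-square statistic, and sum. The paper's log-based normalization of the difference values is not reproduced here; this is a minimal illustration:

```python
import numpy as np

def chi_square_diff(frame_a, frame_b, bins=64, grid=4):
    """Local X^2-histogram difference between two grayscale frames:
    compare per-block intensity histograms with the chi-square
    statistic and sum over a grid x grid partition. Large values
    suggest a shot boundary; the block partition preserves spatial
    information that a single global histogram would discard."""
    h, w = frame_a.shape
    total = 0.0
    for bi in range(grid):
        for bj in range(grid):
            ys = slice(bi * h // grid, (bi + 1) * h // grid)
            xs = slice(bj * w // grid, (bj + 1) * w // grid)
            ha, _ = np.histogram(frame_a[ys, xs], bins=bins, range=(0, 256))
            hb, _ = np.histogram(frame_b[ys, xs], bins=bins, range=(0, 256))
            denom = ha + hb
            mask = denom > 0  # skip empty bins to avoid division by zero
            total += (((ha - hb)[mask] ** 2) / denom[mask]).sum()
    return total
```

A threshold on this value (after normalization) then separates ordinary inter-frame motion from true shot boundaries.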

Meter Numeric Character Recognition Using Illumination Normalization and Hybrid Classifier (조명 정규화 및 하이브리드 분류기를 이용한 계량기 숫자 인식)

  • Oh, Hangul;Cho, Seongwon;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.71-77
    • /
    • 2014
  • In this paper, we propose an improved numeric character recognition method that performs well under low-illuminated and shade-illuminated environments. The LN (Local Normalization) preprocessing method is used to enhance the quality of low-illuminated and shade-illuminated images. The reading area is detected using line-segment information extracted from the illumination-normalized meter images, and a three-phase procedure is then performed to segment the numeric characters in the reading area. Finally, an efficient hybrid classifier, a combination of a multi-layered feedforward neural network and a template matching module, classifies the segmented numeric characters, with robust heuristic rules applied. Experiments were conducted on a meter image database built from various kinds of meters under low-illuminated and shade-illuminated conditions. The experimental results indicate the superiority of the proposed numeric character recognition method.
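Local normalization is commonly defined as subtracting the local mean and dividing by the local standard deviation over a sliding window; assuming the paper's LN step follows this standard form, a minimal sketch:

```python
import numpy as np

def local_normalize(img, ksize=15, eps=1e-6):
    """Local normalization (LN): for each pixel, subtract the mean and
    divide by the standard deviation of a surrounding window. This
    flattens uneven illumination and shading so that dark and bright
    regions end up with comparable local contrast."""
    img = img.astype(np.float64)
    pad = ksize // 2
    p = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = p[i:i + ksize, j:j + ksize]
            out[i, j] = (img[i, j] - win.mean()) / (win.std() + eps)
    return out
```

The result is a roughly zero-mean image of local z-scores, which is far less sensitive to the absolute lighting level than raw intensities.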

Improved Haze Removal Algorithm by using Color Normalization and Haze Rate Compensation (색 정규화 및 안개량 보정을 이용한 개선된 안개 제거 알고리즘)

  • Kim, Jong-Hyun;Cha, Hyung-Tai
    • Journal of Broadcast Engineering
    • /
    • v.20 no.5
    • /
    • pp.738-747
    • /
    • 2015
  • It is difficult to apply image recognition algorithms in a foggy environment because color and edge information is degraded. A well-known defogging approach is haze removal using the Dark Channel Prior (DCP), which estimates the transmission rate from the color information of an image and then eliminates the fog. However, when the image contains factors such as sunset or yellow dust, the color of certain channels is overemphasized after haze removal. Furthermore, when the image includes objects with high values in all RGB channels, the transmission in those areas is misestimated. In this paper, we propose an enhanced fog elimination algorithm using improved color normalization and haze rate compensation, which corrects misestimated haze areas based on the color and edge information of the image. By eliminating the color distortion, we obtain a more natural, clean image from the hazy input.
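The DCP baseline this paper improves on can be sketched in two steps: compute the dark channel (per-pixel minimum over color channels, min-filtered over a patch), then estimate transmission from it. This shows the standard prior, not the paper's compensation scheme:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image with values in [0, 1]: per-pixel
    minimum over the color channels, followed by a minimum filter over
    a local patch. DCP assumes this is near zero in haze-free regions,
    so large values indicate haze."""
    mins = img.min(axis=2)
    pad = patch // 2
    p = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + patch, j:j + patch].min()
    return out

def transmission(img, atmosphere, omega=0.95, patch=15):
    """Transmission estimate t = 1 - omega * dark_channel(I / A),
    where A is the atmospheric light (one value per channel)."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```

Bright objects (high values in all RGB channels) break the near-zero assumption of the dark channel, producing the misestimated transmission that the paper's compensation step targets.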

An Improved Normalization Method for Haar-like Features for Real-time Object Detection (실시간 객체 검출을 위한 개선된 Haar-like Feature 정규화 방법)

  • Park, Ki-Yeong;Hwang, Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.8C
    • /
    • pp.505-515
    • /
    • 2011
  • This paper describes a normalization method for Haar-like features used in object detection. The previous method, which performs variance normalization of Haar-like features, requires many calculations, since it uses an additional integral image to compute the standard deviation of pixel intensities in each candidate window, and it increases the possibility of false detection in areas where the brightness variance is small. The proposed normalization method can be performed much faster by dispensing with the additional integral image, and classifiers trained with the proposed normalization show robust performance under various lighting conditions. Experimental results show that an object detector using the proposed method is 26% faster than one using the previous method. The detection rate is also improved by 5% without increasing the false alarm rate, and by 45% for samples whose brightness varies significantly.
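The conventional scheme the paper speeds up computes per-window variance from two integral images (values and squared values), so each window sum costs only four lookups. A minimal sketch of that baseline:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended, so that
    any window sum needs only four table lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] from the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def window_variance(img, y, x, h, w):
    """Variance of a candidate window via E[x^2] - E[x]^2, using
    integral images of the values and the squared values. (Built
    per call here for clarity; a detector would precompute both
    tables once per frame.)"""
    img = img.astype(np.float64)
    ii = integral_image(img)
    ii2 = integral_image(img ** 2)
    n = h * w
    mean = window_sum(ii, y, x, h, w) / n
    return window_sum(ii2, y, x, h, w) / n - mean ** 2
```

The squared-value table (`ii2`) is exactly the "additional integral image" whose construction and lookups the proposed method eliminates.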

Evaluation of Image for Phantom according to Normalization, Well Counter Correction in PET-CT (PET-CT Normalization, Well Counter Correction에 따른 팬텀을 이용한 영상 평가)

  • Choong-Woon Lee;Yeon-Wook You;Jong-Woon Mun;Yun-Cheol Kim
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.27 no.1
    • /
    • pp.47-54
    • /
    • 2023
  • Purpose PET-CT imaging requires an appropriate quality assurance system to achieve high efficiency and reliability, and quality control is essential for improving the quality of care and patient safety. Currently, the NU2-1994 and NU2-2001 performance evaluation methods proposed by NEMA and the IEC are available for PET-CT image evaluation. In this study, we compare phantom images from identical experiments before and after PET-CT 3D normalization and well counter correction, and evaluate the usefulness of quality control. Materials and Methods A Discovery 690 (General Electric Healthcare, USA) PET-CT scanner was used to perform 3D normalization and well counter correction as recommended by GE Healthcare. Following the recovery coefficients for the six spheres of the NEMA IEC Body Phantom recommended by EARL, 20 kBq/㎖ of 18F was injected into the spheres and 2 kBq/㎖ of 18F into the body of the phantom, and the PET-CT scan was performed with a radioactivity ratio of 10:1. Images were reconstructed with TOF+PSF, TOF, OSEM+PSF, and OSEM, using Gaussian filters of 4.0, 4.5, 5.0, 5.5, 6.0, and 6.5 mm, a 128×128 matrix, 3.75 mm slice thickness, 2 iterations, and 16 subsets. The PET images were attenuation-corrected using the CT images and analyzed with the AW 4.7 software (General Electric Healthcare, USA). ROIs were set to fit the six spheres in the CT image, and the RC (Recovery Coefficient) was measured after fusion of PET and CT. Statistical analysis was performed with the Wilcoxon signed rank test using R. Results Overall, the recovery coefficients of the phantom images increased after the quality control items were performed. The recovery coefficient by reconstruction method increased in the order TOF+PSF, TOF, OSEM+PSF. Before and after quality control, RCmax increased by 0.13 (OSEM), 0.16 (OSEM+PSF), 0.16 (TOF), and 0.15 (TOF+PSF), and RCmean increased by 0.09 (OSEM), 0.09 (OSEM+PSF), 0.106 (TOF), and 0.10 (TOF+PSF). The Wilcoxon signed rank test showed a statistically significant difference between the two groups (P<0.001). Conclusion PET-CT systems require quality assurance to achieve high efficiency and reliability, and standardized intervals and procedures should be followed for quality control. We hope this study serves as a good opportunity to consider the importance of quality control in PET-CT.

  • PDF
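The recovery coefficient reported above has a simple definition: the activity concentration measured in a sphere ROI divided by the true filled concentration. The values below are illustrative, not the paper's measurements:

```python
def recovery_coefficient(measured_kbq_ml, true_kbq_ml):
    """Recovery coefficient: measured activity concentration in a
    sphere ROI divided by the true filled concentration. RC < 1
    reflects partial-volume losses, which are largest for the
    smallest spheres; quality control that raises RC indicates
    better quantitative accuracy."""
    return measured_kbq_ml / true_kbq_ml

# Hypothetical sphere filled at 20 kBq/ml but measured at 15 kBq/ml
rc = recovery_coefficient(15.0, 20.0)
```

In the phantom setup above, the true sphere concentration is the injected 20 kBq/㎖, so an RC increase after normalization and well counter correction means the measured concentrations moved closer to that ground truth.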