• Title/Summary/Keyword: background noise


Facial Contour Extraction in Moving Pictures by using DCM mask and Initial Curve Interpolation of Snakes (DCM 마스크와 스네이크의 초기곡선 보간에 의한 동영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI, v.43 no.4 s.310, pp.58-66, 2006
  • In this paper, we apply a DCM (Dilation of Color and Motion information) mask and Active Contour Models (Snakes) to extract the facial outline in moving pictures with complex backgrounds. First, we propose the DCM mask, built by combining facial color and motion information through morphological dilation and an AND operation, and use it to isolate the facial region from the complex background and to remove noise from the image energy. To overcome the sensitivity of Active Contour Models to their initial curves, the initial curves are set automatically according to the rotation angle estimated from the geometric ratios of the facial features. In addition, both edge intensity and brightness are used as the image energy of the snakes so that the contour can be extracted even where edges are weak. For the experiments, we acquired a total of 480 frames covering various head poses of sixteen persons with both eyes visible, taken indoors and captured from broadcast video. The results show that a more precise facial contour is extracted, at an average processing time of 0.28 seconds, when the initial curves are interpolated according to the facial rotation angle and the combined image energy of edge intensity and brightness is used.
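The mask construction described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the square structuring element and the wrap-around dilation via `np.roll` are simplifying assumptions.

```python
import numpy as np

def dilate(mask, r=1):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element,
    implemented with shifted copies; np.roll wraps at the borders, which
    is acceptable for a sketch but not for production use."""
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def dcm_mask(color_mask, motion_mask, r=1):
    """Hypothetical DCM combination: dilate the facial-color and motion
    masks, then AND them to keep only regions supported by both cues."""
    return dilate(color_mask, r) & dilate(motion_mask, r)
```

The AND of the two dilated cues is what suppresses background pixels that match only one cue (e.g., skin-colored but static regions).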

A Depth-map Coding Method using the Adaptive XOR Operation (적응적 배타적 논리합을 이용한 깊이정보 맵 코딩 방법)

  • Kim, Kyung-Yong;Park, Gwang-Hoon
    • Journal of Broadcast Engineering, v.16 no.2, pp.274-292, 2011
  • This paper proposes an efficient coding method for depth maps, which differ from natural images. A depth map is smooth both inside objects and in the background, but has cliff-like sharp edges at object boundaries. In addition, when a depth-map block is decomposed into bit planes, the bit planes often match perfectly, or match in inverted form, around object boundaries. The proposed scheme therefore codes object-boundary areas bit-plane by bit-plane using an adaptive XOR method, while the interiors of objects and the background are coded with a conventional DCT-based scheme (e.g., H.264/AVC). Experimental results show that the proposed algorithm achieves average bit-rate savings of 11.8%-20.8% and average PSNR (Peak Signal-to-Noise Ratio) gains of 0.9 dB-1.5 dB over the H.264/AVC coding scheme, and average bit-rate savings of 7.7%-12.2% and average PSNR gains of 0.5 dB-0.8 dB over the adaptive block-based depth-map coding scheme. The proposed method also improves the subjective quality of images synthesized from the decoded depth map compared with H.264/AVC, and matches the subjective quality of the adaptive block-based scheme.
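The bit-plane matching property the abstract exploits can be illustrated with a small sketch. The classification below follows the described perfect and inverted matches; the function names and the pairwise comparison are ours, not taken from the paper.

```python
import numpy as np

def bit_planes(block):
    """Decompose an 8-bit depth block into 8 binary planes (LSB first)."""
    return [(block >> b) & 1 for b in range(8)]

def plane_relation(p_ref, p_cur):
    """Classify the relation between two bit planes, as the abstract
    describes for object boundaries: identical, inverted, or other.
    XOR makes the test trivial: all zeros = match, all ones = inverted."""
    x = np.bitwise_xor(p_ref, p_cur)
    if not x.any():
        return "match"
    if x.all():
        return "inverted"
    return "other"
```

A "match" or "inverted" plane can then be signaled with a single flag instead of being coded in full, which is where the bit-rate saving on boundary blocks comes from.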

Assessment of Magnetic Resonance Image Quality For Ferromagnetic Artifact Generation: Comparison with 1.5T and 3.0T. (강자성 인공물 발생에 대한 자기공명영상 질 평가: 1.5T와 3.0T 비교)

  • Goo, Eun-Hoe
    • Journal of the Korean Society of Radiology, v.12 no.2, pp.193-199, 2018
  • In this research, 15 patients were examined on 1.5T and 3.0T MRI systems (Philips Achieva) to minimize ferromagnetic artifacts and find the optimal field strength. Since 3.0T provides a relatively high signal-to-noise ratio (SNR) compared with 1.5T, scan time can be shortened or image resolution increased. However, with a 3.0T system, various artifacts arising from the stronger magnetic field can degrade the diagnostic information. For the analysis, the region of interest was set on the background of T1 and T2 sagittal images, and the SNR at L3, L4, and L5, the lengths of three regions with ferromagnetic artifacts, and the histogram were evaluated; validity was assessed with the independent t-test. In the SNR evaluation, only a slight difference between 1.5T and 3.0T was observed at L3, while large differences were observed at L4 and L5 (p<0.05). The three ferromagnetic artifact regions were shorter at 1.5T, suggesting that the lower field strength preserves more diagnostic information about the peripheral tissue (p<0.05). Finally, 1.5T showed higher count values in the histogram evaluation (p<0.05). Comparing 1.5T and 3.0T in terms of SNR, ferromagnetic artifact length, and histogram, we conclude that a lower field strength can provide optimal image information in spine MRI for patients with prior disc surgery such as PLIF.

Moving Object Contour Detection Using Spatio-Temporal Edge with a Fixed Camera (고정 카메라에서의 시공간적 경계 정보를 이용한 이동 객체 윤곽선 검출 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Journal of Broadcast Engineering, v.15 no.4, pp.474-486, 2010
  • In this paper, we propose a new method for detecting the contour of a moving object using spatial and temporal edges. In general, contour pixels of a moving object lie near pixels with high gradient values along both the time axis and the spatial axes, so the contour can be detected by finding pixels whose gradients are high in both. We introduce a new computation, termed the temporal edge, that measures the gradient along the time axis at each pixel; it is computed from two gray-level images at times t and t-2 using the Sobel operator. The temporal edge is used to detect a candidate region for the moving object contour, and spatial edge information is then extracted within that candidate region. The final contour is obtained by combining the two kinds of edge information, followed by post-processing (a morphological operation and background-edge removal) to eliminate noise regions. The complexity of the proposed method is very low because it requires neither a background model nor any computationally heavy operation, so it can be applied in real-time applications. Experimental results show that the proposed method outperforms conventional contour extraction methods in terms of processing effort and avoids the ghost effect that occurs with the entropy-based method.
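One plausible reading of the temporal-edge computation (the Sobel operator applied to the absolute difference of the frames at t and t-2) can be sketched as follows. The paper may combine the two frames differently, so treat the difference-then-Sobel ordering as an assumption.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def conv2_same(img, k):
    """Naive 'same' 2-D convolution with zero padding (3x3 kernel)."""
    h, w = img.shape
    pad = np.pad(img, 1)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(pad[y:y+3, x:x+3] * k[::-1, ::-1])
    return out

def temporal_edge(frame_t, frame_t2):
    """Sketch of the temporal edge: Sobel gradient magnitude of the
    difference between the gray frames at times t and t-2."""
    d = np.abs(frame_t.astype(float) - frame_t2.astype(float))
    gx, gy = conv2_same(d, SOBEL_X), conv2_same(d, SOBEL_Y)
    return np.hypot(gx, gy)
```

Pixels with a high temporal-edge response would then form the candidate region within which spatial edges are searched.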

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE: Software and Applications, v.34 no.7, pp.646-654, 2007
  • In recent years, the use of text inserted into TV content has grown to give viewers better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video frame. Video text extraction is the first step toward video information retrieval and video indexing. Most previous video text detection and extraction methods rely on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction is hampered by the low resolution of video and by complex backgrounds. To solve these problems, we propose a method for extracting text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner-map generation using the Harris corner detector, extraction of text candidates based on corner density, text region determination using labeling, and post-processing. The algorithm is language independent and can be applied to text of various colors. Text regions are also updated between frames to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
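The second step of the pipeline, extracting text candidates from corner density, might look like the following sketch. The window size and corner-count threshold are illustrative values, not taken from the paper, and the corner map itself is assumed to come from a Harris detector.

```python
import numpy as np

def corner_density(corner_map, win=5):
    """Count corners in a win x win neighborhood around each pixel
    (zero-padded box sum over a binary Harris corner map)."""
    r = win // 2
    h, w = corner_map.shape
    pad = np.pad(corner_map.astype(int), r)
    dens = np.zeros((h, w), int)
    for y in range(h):
        for x in range(w):
            dens[y, x] = pad[y:y+win, x:x+win].sum()
    return dens

def text_candidates(corner_map, win=5, min_corners=3):
    """Pixels whose neighborhood holds enough corners become text
    candidates; min_corners is an illustrative threshold."""
    return corner_density(corner_map, win) >= min_corners
```

The intuition is that character strokes produce dense corner clusters, while isolated scene corners do not pass the density threshold.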

Hangeul detection method based on histogram and character structure in natural image (다양한 배경에서 히스토그램과 한글의 구조적 특징을 이용한 문자 검출 방법)

  • Pyo, Sung-Kook;Park, Young-Soo;Lee, Gang Seung;Lee, Sang-Hun
    • Journal of the Korea Convergence Society, v.10 no.3, pp.15-22, 2019
  • In this paper, we propose a Hangeul detection method that uses histograms and the structural features of consonants and vowels to solve the problem of Hangeul characters being detected as separate consonants and vowels. The proposed method first removes the background with a DoG (Difference of Gaussians) filter to suppress unnecessary noise, then converts the background-free image into a binary image using a cumulative histogram. A horizontal projection histogram locates each text line, and a vertical projection histogram within the line is used to combine components into characters. Characters consisting of a single consonant and vowel, such as '가', '라', and '귀', are difficult to merge into one character this way, so they are combined using the structural characteristics of Hangeul. In the experiments, we tested images of alphabetic text, images of Hangeul text, and images mixing the two, all over various backgrounds. The detection rate of the proposed method is about 2% lower than that of the K-means and MSER character detection methods, but about 5% higher than that of character detection methods covering Hangeul.
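The cumulative-histogram binarization step can be sketched as below. Choosing the threshold at the 50% point of the cumulative histogram is our assumption; the abstract does not give the exact rule.

```python
import numpy as np

def cumulative_threshold(gray, fraction=0.5):
    """Pick a binarization threshold from the cumulative histogram:
    the lowest intensity at which `fraction` of all pixels is reached.
    The default fraction of 0.5 is illustrative."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cum = np.cumsum(hist) / gray.size
    return int(np.searchsorted(cum, fraction))

def binarize(gray, fraction=0.5):
    """Foreground = pixels brighter than the cumulative threshold."""
    t = cumulative_threshold(gray, fraction)
    return gray > t
```

The projection-histogram steps that follow would then operate on this binary image, summing foreground pixels along rows (to find text lines) and columns (to group characters).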

Quantitative Study of Annular Single-Crystal Brain SPECT (원형단일결정을 이용한 SPECT의 정량화 연구)

  • 김희중;김한명;소수길;봉정균;이종두
    • Progress in Medical Physics, v.9 no.3, pp.163-173, 1998
  • Nuclear medicine emission computed tomography (ECT) can be very useful for diagnosing early-stage neuronal diseases and for objectively measuring therapeutic outcomes, provided that energy metabolism, blood flow, biochemical processes, or dopamine receptors and transporters can be quantitated with ECT. However, physical factors including attenuation, scatter, the partial volume effect, noise, and the reconstruction algorithm make quantitation difficult regardless of the type of SPECT. In this study, we quantitated the effects of attenuation and scatter using brain SPECT and a three-dimensional brain phantom, with and without the corresponding correction methods. The dual-energy-window method was applied for scatter correction: the photopeak and scatter energy windows were set to 140 keV±10% and 119 keV±6%, and 100% of the scatter-window data were subtracted from the photopeak window prior to reconstruction. The projection data were reconstructed using a Butterworth filter with a cutoff frequency of 0.95 cycles/cm and an order of 10. Attenuation correction was done by Chang's method, with attenuation coefficients of 0.12/cm for the data without scatter correction and 0.15/cm for the data with scatter correction. For quantitation, regions of interest (ROIs) were drawn on three slices selected at the level of the basal ganglia. Without scatter correction, the ratio of mean ROI values between the basal ganglia and the background was 2.2 with attenuation correction and 2.1 without, i.e., very similar. With scatter correction, the corresponding ratios were 2.69 and 2.64. These results indicate that attenuation correction is necessary for quantitation.
When the true ratios between the basal ganglia and the background were 6.58, 4.68, and 1.86, the measured ratios with scatter and attenuation correction reached 76%, 80%, and 82% of the true values, respectively. The roughly 20% underestimation may be due partly to the partial volume effect and the reconstruction algorithm, which were not investigated in this study, and partly to the imperfect scatter and attenuation correction methods that we applied with clinical applicability in mind.
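The dual-energy-window subtraction described above (100% of the scatter-window counts removed from the photopeak window) reduces to a one-line correction. The clipping of negative counts to zero is a common practical choice, not stated in the abstract.

```python
import numpy as np

def dual_window_scatter_correct(photopeak, scatter, k=1.0):
    """Dual-energy-window scatter correction: subtract k (here 100%,
    k=1.0, as in the study) of the scatter-window counts from the
    photopeak-window counts, clipping negatives before reconstruction."""
    corrected = photopeak.astype(float) - k * scatter
    return np.clip(corrected, 0, None)
```

The corrected projections would then be reconstructed and attenuation-corrected (e.g., by Chang's method) before the ROI ratios are computed.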


Development of Preliminary Quality Assurance Software for GafChromic® EBT2 Film Dosimetry (GafChromic® EBT2 Film Dosimetry를 위한 품질 관리용 초기 프로그램 개발)

  • Park, Ji-Yeon;Lee, Jeong-Woo;Choi, Kyoung-Sik;Hong, Semie;Park, Byung-Moon;Bae, Yong-Ki;Jung, Won-Gyun;Suh, Tae-Suk
    • Progress in Medical Physics, v.21 no.1, pp.113-119, 2010
  • Software for GafChromic EBT2 film dosimetry was developed in this study. The software provides film calibration functions based on color channel, covering the red, green, blue, and gray channels. The correction effects for flat-bed scanner light scattering and for thickness differences in the active layer can be evaluated. Dosimetric results from EBT2 films can be compared with those from the ECLIPSE treatment planning system or the MatriXX two-dimensional ionization chamber array. Dose verification with EBT2 films proceeds through the following steps: file import, noise filtering, background and active-layer correction, dose calculation, and evaluation. Relative and absolute background corrections are selectively applied, and the calibration results and the fitted sensitometric-curve equation are exported to files. After the two dose matrices are aligned through interpolation of the pixel spacing, interactive translation, and rotation, profiles and isodose curves are compared, and the gamma index and gamma histogram are analyzed according to the chosen distance-to-agreement and dose-difference criteria. Performance was evaluated by dose verification in a 60°-enhanced dynamic wedge field and in intensity-modulated (IM) beams for prostate cancer. All pass ratios for the two types of tests exceeded 99%, using a gamma histogram with 3 mm and 3% criteria. The software was developed for routine periodic quality assurance and complex IM beam verification, and can also serve as a dedicated radiochromic film tool for analyzing dose distributions.
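The gamma-index evaluation with distance-to-agreement and dose-difference criteria can be sketched with a brute-force implementation. This is a generic global gamma, not the software developed in the paper; `spacing`, `dta`, and `dd` are the usual parameters (mm per pixel, 3 mm, 3%).

```python
import numpy as np

def gamma_index(ref, ev, spacing=1.0, dta=3.0, dd=0.03):
    """Brute-force global gamma: for each reference point, the minimum
    over all evaluated points of the combined distance/dose metric.
    dd is a fraction of the reference maximum dose."""
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dd_abs = dd * ref.max()
    g = np.empty_like(ref, dtype=float)
    for y in range(h):
        for x in range(w):
            dist2 = ((ys - y) ** 2 + (xs - x) ** 2) * spacing ** 2
            dose2 = (ev - ref[y, x]) ** 2
            g[y, x] = np.sqrt((dist2 / dta**2 + dose2 / dd_abs**2).min())
    return g

def pass_ratio(gamma):
    """Fraction of points passing the criteria (gamma <= 1)."""
    return float((gamma <= 1).mean())
```

A pass ratio above 99% with 3 mm / 3% criteria, as reported in the abstract, means almost every point finds a nearby evaluated dose within tolerance.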

PET/CT SUV Ratios in an Anthropomorphic Torso Phantom (의인화몸통팬텀에서 PET/CT SUV 비율)

  • Yeon, Joon-Ho;Hong, Gun-Chul;Kang, Byung-Hyun;Sin, Ye-Ji;Oh, Uk-Jin;Yoon, Hye-Ran;Hong, Seong-Jong
    • Journal of the Korean Society of Radiology, v.14 no.1, pp.23-29, 2020
  • Standardized uptake values (SUVs) depend strongly on the positron emission tomograph (PET) and the image reconstruction method. Various image reconstruction algorithms of the GE Discovery MIDR (DMIDR) and Discovery STE (DSte) installed at the Department of Nuclear Medicine, Samsung Medical Center, Seoul were applied to measure SUVs in an anthropomorphic torso phantom, and the measured SUVs in the heart, liver, and background were compared with the actual values. The applied reconstruction algorithms were VPFX-S (TOF+PSF), QCFX-S-350 (Q.Clear+TOF+PSF), QCFX-S-50, and VPHD-S (OSEM+PSF) for the DMIDR, and VUE Point (OSEM) and FORE-FBP for the DSte. To limit the radiation exposure of the technologists, only a small amount of the radiation source 18F-FDG was mixed with the distilled water: 2.28 MBq in the 52.5 ml heart, 20.3 MBq in the 1,290 ml liver, and 45.7 MBq in the 9,590 ml background region. The SUVs in the heart with VPFX-S, QCFX-S-350, QCFX-S-50, VPHD-S, VUE Point, and FORE-FBP were 27.1, 28.0, 27.1, 26.5, 8.0, and 7.4 against an expected SUV of 5.9, and in the background 4.2, 4.1, 4.2, 4.1, 1.1, and 1.2 against an expected SUV of 0.8. Although the SUVs in each region differed across the six reconstruction algorithms of the two PET/CTs, the heart-to-background SUV ratios were relatively consistent: 6.5, 6.8, 6.5, 6.5, 7.3, and 6.2 against an expected ratio of 7.8. The mean SNRs (signal-to-noise ratios) in the heart were 8.3, 12.8, 8.3, 8.4, 17.2, and 16.6, respectively. In conclusion, the performance of PET scanners can be checked using the SUV ratio between two regions and a relatively small amount of radioactivity.
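One reason region-to-region SUV ratios can be more stable than the SUVs themselves is that the global normalization term cancels in the ratio. A minimal sketch, assuming the usual SUV definition with 1 g of tissue ≈ 1 ml:

```python
def suv(conc_bq_ml, injected_bq, body_mass_g):
    """Standardized uptake value: measured activity concentration
    divided by injected activity per unit body mass (1 g ~ 1 ml)."""
    return conc_bq_ml / (injected_bq / body_mass_g)

def suv_ratio(suv_a, suv_b):
    """Region-to-region SUV ratio; the injected-dose normalization
    cancels, leaving the ratio of measured concentrations."""
    return suv_a / suv_b
```

Because the normalization cancels, the ratio isolates how consistently a reconstruction recovers relative activity between two regions, which matches the abstract's use of heart-to-background ratios as a performance check.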

Fast Detection of Finger-vein Region for Finger-vein Recognition (지정맥 인식을 위한 고속 지정맥 영역 추출 방법)

  • Kim, Sung-Min;Park, Kang-Roung;Park, Dong-Kwon;Won, Chee-Sun
    • Journal of the Institute of Electronics Engineers of Korea SP, v.46 no.1, pp.23-31, 2009
  • Recently, biometric techniques such as face recognition, fingerprint recognition, and iris recognition have been widely applied in areas including door access control, financial security, and electronic passports. This paper presents a method of using the finger-vein pattern for personal identification. In general, when the finger-vein image is acquired from the camera, conditions such as the amount of penetrating infrared light and camera noise make it difficult to segment the vein from the background, which in turn degrades identification performance. To solve this problem, we propose a novel, fast method for extracting the finger-vein region with two advantages over previous methods. First, we adopt a locally adaptive thresholding method for binarizing the acquired finger-vein image. Second, simple morphological opening and closing are used to remove segmentation noise before skeletonization finally yields the finger-vein region. Experimental results show that the proposed method extracts the finger-vein region quickly and accurately without resorting to various time-consuming preprocessing filters.
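The locally adaptive thresholding step can be sketched as follows. The window size, the offset `c`, and the box-mean implementation are illustrative choices of ours, not the paper's parameters.

```python
import numpy as np

def local_mean(img, win=15):
    """Mean of a win x win neighborhood around each pixel, with
    edge-replicating padding so borders are handled sensibly."""
    r = win // 2
    h, w = img.shape
    pad = np.pad(img.astype(float), r, mode='edge')
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = pad[y:y+win, x:x+win].mean()
    return out

def adaptive_binarize(img, win=15, c=5.0):
    """Locally adaptive thresholding: a pixel is labeled vein (veins
    appear dark under infrared) if it is darker than its local mean
    by more than the offset c."""
    return img < local_mean(img, win) - c
```

Comparing each pixel to its own neighborhood, rather than to one global threshold, is what makes the binarization robust to the uneven infrared illumination the abstract mentions; morphological opening and closing would then clean the result before skeletonization.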