• Title/Summary/Keyword: mixed background noise

Search results: 11

Non-Stationary/Mixed Noise Estimation Algorithm Based on Minimum Statistics and Codebook Driven Short-Term Predictor Parameter Estimation (최소 통계법과 Short-Term 예측계수 코드북을 이용한 Non-Stationary/Mixed 배경잡음 추정 기법)

  • Lee, Myeong-Seok;Noh, Myung-Hoon;Park, Sung-Joo;Lee, Seok-Pil;Kim, Moo-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.3
    • /
    • pp.200-208
    • /
    • 2010
  • In this work, the minimum statistics (MS) algorithm is combined with codebook-driven short-term predictor parameter estimation (CDSTP) to design a speech enhancement algorithm that is robust against various background noise environments. The MS algorithm performs well for stationary noise but relatively poorly for non-stationary noise. Conversely, CDSTP handles non-stationary noise efficiently, but fails for noise types that were not considered in its training stage. We therefore propose combining CDSTP and MS. Compared with using MS or CDSTP alone, the proposed method yields a better perceptual evaluation of speech quality (PESQ) score, and performs especially well for background noise that mixes stationary and non-stationary components.
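The MS half of this combination can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the smoothing factor and window length are assumed values, and the noise estimate in each frequency bin is simply the minimum of a recursively smoothed power spectrogram over a sliding window of past frames.

```python
import numpy as np

def min_statistics_noise(power, win=50):
    """Minimum-statistics noise tracking (sketch).

    power: (n_frames, n_bins) short-time power spectrogram.
    Returns a per-frame, per-bin noise power estimate: the minimum of the
    recursively smoothed periodogram over the last `win` frames."""
    alpha = 0.85                          # recursive smoothing factor (assumed)
    smoothed = np.empty_like(power, dtype=float)
    smoothed[0] = power[0]
    for t in range(1, len(power)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * power[t]
    noise = np.empty_like(smoothed)
    for t in range(len(power)):
        lo = max(0, t - win + 1)
        noise[t] = smoothed[lo:t + 1].min(axis=0)
    return noise

# stationary unit noise floor with a loud speech-like burst in frames 60-69;
# the tracked minimum should stay near the floor despite the burst
power = np.ones((200, 4))
power[60:70] += 9.0
est = min_statistics_noise(power, win=50)
```

The key property, visible in the toy run, is that short bursts of speech energy do not pull the noise floor up, because the minimum over the window still sees low-energy frames.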

A Study on the P Wave Arrival Time Determination Algorithm of Acoustic Emission (AE) Suitable for P Waves with Low Signal-to-Noise Ratios (낮은 신호 대 잡음비 특성을 지닌 탄성파 신호에 적합한 P파 도달시간 결정 알고리즘 연구)

  • Lee, K.S.;Kim, J.S.;Lee, C.S.;Yoon, C.H.;Choi, J.W.
    • Tunnel and Underground Space
    • /
    • v.21 no.5
    • /
    • pp.349-358
    • /
    • 2011
  • This paper introduces a new P-wave arrival-time determination algorithm for acoustic emission (AE), suited to identifying P waves with low signal-to-noise ratios generated in rock masses around high-level radioactive waste disposal repositories. The algorithms examined in this paper were the amplitude threshold picker, the Akaike Information Criterion (AIC), the two-step AIC, and the Hinkley criterion. Elastic waves were generated by pencil lead break tests on a granite sample and then artificially mixed with white noise to make the P-wave onset difficult to distinguish. The amplitude threshold picker, AIC, and Hinkley criterion produced relatively large errors due to the low signal-to-noise ratio. In contrast, the two-step AIC algorithm provided correct results regardless of the white noise, so the accuracy of source localization was improved and fell within the acceptable error range.
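The single-step AIC picker that the paper compares against can be written in a few lines. This is a generic sketch of the standard formula, not the authors' code; the two-step variant re-applies the same picker on a narrower window around the first pick.

```python
import numpy as np

def aic_pick(x):
    """Akaike Information Criterion onset picker.

    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
    the arrival time is the index minimizing AIC, i.e. the split point that
    best separates low-variance noise from the high-variance signal."""
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# synthetic trace: weak noise, then a strong arrival at sample 300
rng = np.random.default_rng(1)
trace = 0.05 * rng.standard_normal(1000)
trace[300:] += np.sin(2 * np.pi * 0.05 * np.arange(700))
onset = aic_pick(trace)
```

On such a clear variance contrast the picker lands at the onset; the paper's point is that as the noise level rises toward the signal level, this single global minimum becomes unreliable, which motivates the two-step refinement.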

A Study on the Characteristics of a series of Autoencoder for Recognizing Numbers used in CAPTCHA (CAPTCHA에 사용되는 숫자데이터를 자동으로 판독하기 위한 Autoencoder 모델들의 특성 연구)

  • Jeon, Jae-seung;Moon, Jong-sub
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.25-34
    • /
    • 2017
  • An autoencoder is a type of deep learning model whose input and output layers have the same size, and which effectively extracts and restores the characteristics of an input vector using the constraints of its hidden layer. In this paper, we propose autoencoder-based methods that remove the natural background image, which acts as noise in a CAPTCHA, and recover only the numeral image, by applying various autoencoder models to regions where a single CAPTCHA digit and a natural background are mixed. The suitability of the reconstructed image is verified by feeding the autoencoder output into a softmax classifier. We also compare the proposed methods with an existing method and show that ours are superior.
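As a hedged illustration of the denoising-autoencoder idea only (not the paper's architecture; the layer sizes, toy data, and learning rate below are arbitrary choices), a single-hidden-layer autoencoder can be trained to map noisy inputs back to clean targets:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: 16-dim samples lying on a 4-dim subspace, plus additive noise
basis = rng.normal(size=(4, 16))
clean = rng.normal(size=(100, 4)) @ basis
noisy = clean + 0.1 * rng.normal(size=clean.shape)

# single hidden layer, 16 -> 8 -> 16 (sizes are toy choices)
W1 = 0.1 * rng.normal(size=(16, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.normal(size=(8, 16)); b2 = np.zeros(16)

def forward(x):
    h = np.tanh(x @ W1 + b1)       # constrained hidden representation
    return h, h @ W2 + b2          # linear reconstruction

losses, lr = [], 0.01
for _ in range(500):
    h, out = forward(noisy)
    err = out - clean              # denoising target: the *clean* signal
    losses.append(float((err ** 2).mean()))
    # plain gradient descent on the mean-squared reconstruction error
    dW2 = h.T @ err / len(noisy); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = noisy.T @ dh / len(noisy); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
```

The essential trick is in the loss: the network sees the noisy mixture but is penalized against the clean signal, which is exactly how the CAPTCHA background is trained away in the paper's setting.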

Implementation of Environmental Noise Remover for Speech Signals (배경 잡음을 제거하는 음성 신호 잡음 제거기의 구현)

  • Kim, Seon-Il;Yang, Seong-Ryong
    • Journal of the Institute of Electronics Engineers of Korea IE
    • /
    • v.49 no.2
    • /
    • pp.24-29
    • /
    • 2012
  • The exhaust sounds of automobiles are independent sound sources that have nothing to do with voices, and we have no prior information about either the voice or the exhaust source. Accordingly, independent component analysis (ICA), one of the blind source separation methods, was used to segregate the two source signals from the mixed signals. Maximum likelihood estimation was applied to the signals received through a stereo microphone to separate the two sources by maximizing their independence. Since there is no clue as to which separated output is the speech signal, slope coefficients were calculated from the autocovariances of the signals in the frequency domain. The noise remover for speech signals was implemented by coupling the two algorithms.
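A minimal two-channel separation in this spirit can be sketched with a generic textbook FastICA (tanh contrast, symmetric decorrelation). This is an illustration of blind source separation in general, not the maximum-likelihood formulation or the speech-selection step used in the paper:

```python
import numpy as np

def fastica_2x2(X, n_iter=200, seed=0):
    """Separate 2 independent sources from 2 mixtures. X: (2, n_samples)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # whitening: decorrelate and normalize the mixtures
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Xw = (E * d ** -0.5) @ E.T @ X
    W = rng.normal(size=(2, 2))
    for _ in range(n_iter):
        g = np.tanh(W @ Xw)                       # contrast nonlinearity
        W = g @ Xw.T / Xw.shape[1] - np.diag((1 - g ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                                # symmetric decorrelation
    return W @ Xw

# two independent sources mixed by an unknown 2x2 matrix
t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.sign(np.sin(2.3 * t))])
Y = fastica_2x2(np.array([[1.0, 0.6], [0.4, 1.0]]) @ S)
```

As in the paper, separation alone leaves the outputs in arbitrary order and scale, which is why an extra criterion (the frequency-domain slope statistic there) is needed to decide which output is the speech.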

Robust Object Detection from Indoor Environmental Factors (다양한 실내 환경변수로부터 강인한 객체 검출)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.2
    • /
    • pp.41-46
    • /
    • 2010
  • In this paper, we propose a detection method of reduced computational complexity that separates moving objects from the background in a generic video sequence. In indoor environments it is generally difficult to detect objects accurately because of environmental factors such as lighting changes, shadows, and reflections on the floor. First, a background image is created for object detection. When an object appears in the video, the current input frame is compared with the previously created background image for similarity, and a mixture image is generated through several operations; objects are then detected using both the mixture image and the input frames. The detected objects are refined by a labeling process that removes noise components, followed by morphology operations that complete the object regions. As a result, objects are detected robustly against environment variables such as lighting changes and shadows. Because the proposed system uses mixture images, it detects object regions more effectively than existing systems under environmental factors such as lighting changes, shadows, and floor reflections.
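The background-model-plus-difference core can be illustrated with a minimal sketch. A per-pixel median background and a fixed threshold are simplifications; the paper's mixture-image generation and labeling steps are not reproduced here.

```python
import numpy as np

def detect_moving_objects(frames, thresh=30):
    """frames: (n, h, w) grayscale stack.

    Build a background image as the per-pixel median over time, then mark
    pixels that differ from it by more than `thresh` as moving-object pixels."""
    background = np.median(frames, axis=0)
    return np.abs(frames.astype(float) - background) > thresh

# synthetic sequence: static scene, with a 5x5 object present only in frame 5
frames = np.full((10, 20, 20), 100.0)
frames[5, 8:13, 8:13] = 200.0
masks = detect_moving_objects(frames)
```

The median makes the background estimate robust to objects that appear briefly, which is why frame 5's object does not corrupt the model here.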

An Illumination and Background-Robust Hand Image Segmentation Method Based on the Dynamic Threshold Values (조명과 배경에 강인한 동적 임계값 기반 손 영상 분할 기법)

  • Na, Min-Young;Kim, Hyun-Jung;Kim, Tae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.5
    • /
    • pp.607-613
    • /
    • 2011
  • In this paper, we propose a hand image segmentation method that uses dynamic threshold values on input images with various lighting and background conditions. First, a moving-hand silhouette is extracted using difference images from the camera input. Next, based on an R, G, B histogram analysis of the extracted silhouette area, a threshold interval for each of R, G, and B is calculated at run time. Finally, the hand area is segmented by thresholding, and then a morphology operation, connected-component analysis, and a flood-fill operation are performed for noise removal. Experimental results on various input images showed that our method provides highly accurate and relatively fast, stable results without requiring fixed threshold values. The proposed method can be used in the user interfaces of mixed reality applications.
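The run-time threshold-interval step can be sketched as follows. Deriving the interval from inner percentiles of the silhouette pixels is an assumption for illustration; the paper derives it from its own histogram analysis, and the morphology/flood-fill cleanup is omitted.

```python
import numpy as np

def channel_intervals(silhouette_pixels, lo=5, hi=95):
    """Per-channel [low, high] thresholds from the histogram of the extracted
    silhouette pixels (percentile choice is an assumption)."""
    return [(np.percentile(silhouette_pixels[:, c], lo),
             np.percentile(silhouette_pixels[:, c], hi)) for c in range(3)]

def segment_hand(image, intervals):
    """Keep pixels whose R, G and B values all fall inside their intervals."""
    mask = np.ones(image.shape[:2], dtype=bool)
    for c, (low, high) in enumerate(intervals):
        mask &= (image[..., c] >= low) & (image[..., c] <= high)
    return mask

# toy image: blue background with a skin-toned square as the "hand"
img = np.zeros((10, 10, 3))
img[...] = (0, 0, 200)
img[2:5, 2:5] = (180, 120, 90)
ivals = channel_intervals(img[2:5, 2:5].reshape(-1, 3))
mask = segment_hand(img, ivals)
```

Because the intervals are recomputed from each frame's silhouette, the thresholds adapt to lighting changes instead of being fixed offline, which is the point of the method.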

A Two-color Signal Processing Algorithm Using the Ratio between Two Band Signals (대역간 신호비를 이용한 two-color 신호처리 알고리듬)

  • Oh, Jeong-Su;Doo, Kyoung-Soo;Jahng, Surng-Gabb;Seo, Dong-Sun;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.6
    • /
    • pp.60-69
    • /
    • 2000
  • In this paper we propose a new two-color signal processing algorithm for efficient target tracking under complicated conditions that include interferences such as background noise and countermeasures. For efficient tracking, we adopt two detection bands and define the ratio between the two band signals, which represents the spectral distribution characteristics of a target or an interference. The proposed algorithm detects the ratio of the interference and uses it to extract only the target signal from the mixed target-and-interference signal. To evaluate its performance, we apply the algorithm to a rosette tracker and perform various simulations. The simulation results show that the proposed algorithm extracts the target signal from the mixed signal well. It is also ready to be applied to a real system, since it is simple and adapts to environmental change.
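With known band ratios, the band-ratio separation reduces to a 2x2 linear solve. This is a schematic version under an assumed signal model (the variable names and the additive two-band model are illustrative, not taken from the paper):

```python
import numpy as np

def extract_target(s1, s2, r_target, r_interf):
    """Two detection bands observe s1 = t + i and s2 = r_target*t + r_interf*i,
    where t and i are the band-1 target and interference intensities and the
    r's are the known band-2/band-1 spectral ratios. Solving the 2x2 system
    recovers the target from the mixed measurement."""
    A = np.array([[1.0, 1.0], [r_target, r_interf]])
    t, i = np.linalg.solve(A, np.array([s1, s2], dtype=float))
    return t, i

# mixed measurement of a target (t=3.0) and a flare-like interference (i=5.0)
t, i = extract_target(3.0 + 5.0, 0.4 * 3.0 + 2.0 * 5.0,
                      r_target=0.4, r_interf=2.0)
```

The separation is exact whenever the two ratios differ; when they coincide the matrix is singular, which matches the intuition that two sources with identical spectral signatures cannot be split by band ratio alone.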


Text extraction from camera based document image (카메라 기반 문서영상에서의 문자 추출)

  • 박희주;김진호
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.8 no.2
    • /
    • pp.14-20
    • /
    • 2003
  • This paper presents a text extraction method for camera-based document images. Camera-based document images are harder to recognize than scanner-based images because of segmentation problems caused by variable lighting conditions and versatile fonts. Both document binarization and character extraction are important steps in recognizing camera-based document images. After converting the color image to a gray-level image, gray-level normalization is used to extract the character region independently of the lighting conditions and background. A local adaptive binarization method is then used to separate characters from the background after noise removal. In the character extraction step, horizontal and vertical projections and connected-component information are used to extract character lines, word regions, and character regions. To evaluate the proposed method, we experimented with documents mixing Hangul, English, symbols, and digits from the ETRI database, and obtained encouraging binarization and character extraction results.
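One common form of local adaptive binarization, a mean-offset threshold computed with an integral image, can be sketched as follows. The window size and offset are assumed values, and the paper does not specify that this exact variant is the one used:

```python
import numpy as np

def adaptive_binarize(gray, win=15, offset=10):
    """Mean-based local adaptive thresholding: a pixel is foreground (text)
    when it is darker than the local window mean minus an offset, so the
    threshold tracks slow illumination changes across the page."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode='edge')
    # integral image for O(1) local window sums
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = gray.shape
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    local_mean = s / (win * win)
    return gray < local_mean - offset

# toy page: illumination gradient from 100 to 200, with three dark text pixels
gray = np.tile(np.linspace(100, 200, 40), (40, 1))
for y, x in [(20, 10), (12, 25), (30, 30)]:
    gray[y, x] -= 60
binary = adaptive_binarize(gray)
```

A single global threshold would fail on this gradient (any fixed cut either swallows the bright side or loses text on the dark side), while the local mean adapts per pixel.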


Hangeul detection method based on histogram and character structure in natural image (다양한 배경에서 히스토그램과 한글의 구조적 특징을 이용한 문자 검출 방법)

  • Pyo, Sung-Kook;Park, Young-Soo;Lee, Gang Seung;Lee, Sang-Hun
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.3
    • /
    • pp.15-22
    • /
    • 2019
  • In this paper, we propose a Hangeul detection method that uses histograms and the structural features of consonants and vowels to solve the problem of Hangeul characters being detected as separate consonants and vowels. The proposed method removes the background with a Difference of Gaussians (DoG) filter to suppress unnecessary noise during detection, and then converts the background-removed image into a binarized image using a cumulative histogram. A horizontal projection histogram is used to find the positions of text lines, and the characters within each found line are combined using a vertical projection histogram. However, characters composed of one consonant and one vowel, such as '가', '라', and '귀', are difficult to combine into single characters this way, so they are merged using the structural characteristics of Hangeul. Experiments were performed on images of alphabetic characters with various backgrounds, images of Hangeul characters, and images mixing the alphabet and Hangeul. The detection rate of the proposed method is about 2% lower than that of the K-means and MSER character detection methods, but about 5% higher than that of character detection methods that include Hangeul.
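The horizontal-projection step for locating text lines is straightforward; the sketch below is a generic version of that step (the paper's binarization and character-combination logic are not reproduced):

```python
import numpy as np

def find_text_lines(binary):
    """Return (start_row, end_row) ranges where the horizontal projection
    (per-row count of foreground pixels) is non-zero; each run of non-empty
    rows is one candidate text line."""
    non_empty = binary.sum(axis=1) > 0
    lines, start = [], None
    for y, on in enumerate(non_empty):
        if on and start is None:
            start = y
        elif not on and start is not None:
            lines.append((start, y))
            start = None
    if start is not None:
        lines.append((start, len(non_empty)))
    return lines

# two synthetic text lines on a 20x20 binary page
page = np.zeros((20, 20), dtype=bool)
page[2:5, 3:15] = True
page[10:14, 1:18] = True
lines = find_text_lines(page)
```

The vertical projection inside each detected line works the same way on columns, which is where the consonant/vowel merging problem described above arises: a vowel stroke separated by whitespace shows up as its own vertical run.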

PET/CT SUV Ratios in an Anthropomorphic Torso Phantom (의인화몸통팬텀에서 PET/CT SUV 비율)

  • Yeon, Joon-Ho;Hong, Gun-Chul;Kang, Byung-Hyun;Sin, Ye-Ji;Oh, Uk-Jin;Yoon, Hye-Ran;Hong, Seong-Jong
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.1
    • /
    • pp.23-29
    • /
    • 2020
  • Standardized uptake values (SUVs) depend strongly on the positron emission tomograph (PET) and the image reconstruction method. Various image reconstruction algorithms on the GE Discovery MIDR (DMIDR) and Discovery STE (DSte) installed at the Department of Nuclear Medicine, Seoul Samsung Medical Center were applied to measure the SUVs in an anthropomorphic torso phantom, and the measured SUVs in the heart, liver, and background were compared with the actual SUVs. The reconstruction algorithms applied were VPFX-S (TOF+PSF), QCFX-S-350 (Q.Clear+TOF+PSF), QCFX-S-50, and VPHD-S (OSEM+PSF) for the DMIDR, and VUE Point (OSEM) and FORE-FBP for the DSte. To reduce the radiation exposure of the radiation technologists, only a small amount of the radiation source 18F-FDG was mixed with distilled water: 2.28 MBq in the 52.5 ml heart, 20.3 MBq in the 1,290 ml liver, and 45.7 MBq in the 9,590 ml background region. The SUVs in the heart with VPFX-S, QCFX-S-350, QCFX-S-50, VPHD-S, VUE Point, and FORE-FBP were 27.1, 28.0, 27.1, 26.5, 8.0, and 7.4 against an expected SUV of 5.9; in the background they were 4.2, 4.1, 4.2, 4.1, 1.1, and 1.2 against an expected SUV of 0.8. Although the SUVs in each region differed across the six reconstruction algorithms on the two PET/CTs, the heart-to-background SUV ratios were relatively consistent: 6.5, 6.8, 6.5, 6.5, 7.3, and 6.2, against an expected ratio of 7.8. The mean signal-to-noise ratios (SNRs) in the heart were 8.3, 12.8, 8.3, 8.4, 17.2, and 16.6, respectively. In conclusion, the performance of PET scanners may be checked using the SUV ratio between two regions and a relatively small amount of radioactivity.
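The heart-to-background ratios reported above are simple quotients of the measured SUVs; the short check below reproduces them from the numbers in the abstract:

```python
def suv_ratio(heart, background, ndigits=1):
    """Heart-to-background SUV ratio, rounded to one decimal as reported."""
    return round(heart / background, ndigits)

# measured SUVs for VPFX-S, QCFX-S-350, QCFX-S-50, VPHD-S, VUE Point, FORE-FBP
heart = [27.1, 28.0, 27.1, 26.5, 8.0, 7.4]
background = [4.2, 4.1, 4.2, 4.1, 1.1, 1.2]
ratios = [suv_ratio(h, b) for h, b in zip(heart, background)]
```

The quotients recover the published 6.5, 6.8, 6.5, 6.5, 7.3, and 6.2, illustrating why the ratio is a more scanner-independent check than the absolute SUVs, which range from about 7.4 to 28.0 across the same reconstructions.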