
A Comparison on the Image Normalizations for Image Information Estimation

  • Kang, Hwan-Il; Lim, Seung-Chul; Kim, Kab-Il; Son, Young-I
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2005 / pp.2385-2388 / 2005
  • In this paper, we propose estimation methods for image affine information in computer vision. The first method is based on the XYS image normalization, and the second on the image normalization of Pei and Lin. The XYS normalization method turns out to perform better than the method of Pei and Lin. In addition, we show that rotation and aspect-ratio information can be obtained from the central moments of both the original image and the sensed image. Finally, we propose a modified version of the normalization method that allows the size of the image to be controlled.

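The moment-based recovery of rotation information mentioned above can be sketched in a few lines of NumPy. This is a generic illustration of estimating a rotation angle from second-order central moments (the aspect ratio follows similarly from the eigenvalue ratio of the second-moment matrix), not the paper's XYS method; the function names are our own.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D intensity image."""
    img = np.asarray(img, dtype=float)
    y, x = np.indices(img.shape)
    m00 = img.sum()
    cx = (x * img).sum() / m00  # centroid column
    cy = (y * img).sum() / m00  # centroid row
    return ((x - cx) ** p * (y - cy) ** q * img).sum()

def orientation(img):
    """Rotation angle (radians) of the image's principal axis,
    recovered from the second-order central moments."""
    mu20 = central_moment(img, 2, 0)
    mu02 = central_moment(img, 0, 2)
    mu11 = central_moment(img, 1, 1)
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

For example, a horizontal bar yields an angle of 0, and the same bar rotated by 90 degrees yields ±π/2, which is the kind of information compared between the original and the sensed image.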

고속 처리를 위한 이진 영상 정규화 하드웨어의 설계 및 구현 (Design and Implementation of Binary Image Normalization Hardware for High Speed Processing)

  • 김형구; 강선미; 김덕진
    • 전자공학회논문지B / Vol. 31B, No. 5 / pp.162-167 / 1994
  • The binary image normalization method in image processing can be used in several fields; in particular, a high-speed processing method and its hardware implementation are especially useful. Normalizing each character in character recognition requires considerable processing time. This research was therefore done as part of a high-speed OCR (optical character reader) implementation, built as a pipeline structure with the host computer to exploit temporal parallelism. A general-purpose CPU, the MC68000, was used to implement the normalization process. Experimental results show that the normalization speed of the hardware is sufficient for a high-speed OCR whose recognition speed exceeds 140 characters per second.

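The pipelined hardware itself cannot be reproduced here, but the per-character size normalization it accelerates can be sketched in software. A minimal nearest-neighbor rescaling of a binary character image to a fixed grid (the 16×16 target size is an assumption for illustration, not taken from the paper) might look like:

```python
def normalize_binary(img, out_h=16, out_w=16):
    """Nearest-neighbor scale a binary image (list of rows of 0/1)
    to a fixed out_h x out_w grid, as done per character in OCR."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```

Doing this for every character is exactly the per-character cost that motivates a hardware pipeline: the host computer can hand off each character while processing the next.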

핵의학 영상의 물리적 인공산물보정: 정규화보정 및 감쇠보정 (Physical Artifact Correction in Nuclear Medicine Imaging: Normalization and Attenuation Correction)

  • 김진수; 이재성; 천기정
    • Nuclear Medicine and Molecular Imaging / Vol. 42, No. 2 / pp.112-117 / 2008
  • Artifact corrections, including normalization and attenuation correction, are important for quantitative analysis in nuclear medicine imaging. Normalization is the process of ensuring that all lines of response (LORs) joining detectors in coincidence have the same effective sensitivity; failure to account for variations in LOR sensitivity leads to bias and high-frequency artifacts in the reconstructed images. Attenuation correction compensates for the fact that photons emitted by the radiopharmaceutical interact with tissue and other materials as they pass through the body. In this paper, we review several approaches to normalization and attenuation correction strategies.
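As a rough illustration of the two corrections reviewed above, one can apply per-LOR sensitivity normalization and multiplicative attenuation correction factors to measured counts. The array names and the simple multiplicative model are our own simplification, not a specific method from the review:

```python
import numpy as np

def correct_sinogram(counts, sensitivity, mu_line_integrals):
    """Apply normalization and attenuation correction to per-LOR counts.

    sensitivity      : relative efficiency of each LOR (normalization)
    mu_line_integrals: integral of the linear attenuation coefficient
                       along each LOR, so the attenuation correction
                       factor is exp(+integral)."""
    counts = np.asarray(counts, dtype=float)
    norm = counts / np.asarray(sensitivity, dtype=float)       # normalization
    acf = np.exp(np.asarray(mu_line_integrals, dtype=float))   # attenuation correction
    return norm * acf
```

A LOR at half sensitivity has its counts doubled by normalization, and a LOR whose line integral equals ln 2 (half the photon pairs absorbed) has its counts doubled again by the attenuation correction factor.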

TLC/HPTLC에서 측정된 자외/가시부 스펙트럼의 표준화 및 검색 (Normalization and Search of the UV/VIS Spectra Measured from TLC/HPTLC)

  • 강종성
    • 약학회지 / Vol. 38, No. 4 / pp.366-371 / 1994
  • To improve the identification power of TLC/HPTLC, in situ reflectance spectra obtained directly from the plates with a commercial scanner are used. Spectrum normalization must be carried out before comparing spectra and searching them against a library for compound identification. Because reflectance does not obey the Lambert-Beer law, some problems arise in normalization. These problems could be solved to some extent by normalizing the spectra with regression methods: the spectra are manipulated with the regression function of a curve obtained from the correlation plot. When a parabola was used as the manipulating function, the spectra were identified with 97% accuracy, a better result than that of the conventionally used point and area normalization methods.

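A sketch of the contrast between conventional area normalization and a regression-based normalization over a correlation plot, assuming the parabola is fit by ordinary least squares. This is our reading of the approach, with illustrative names, not the paper's exact procedure:

```python
import numpy as np

def area_normalize(spec):
    """Conventional area normalization: scale so the spectrum's area is 1."""
    spec = np.asarray(spec, dtype=float)
    return spec / spec.sum()

def regression_normalize(spec, ref, degree=2):
    """Map a measured spectrum onto a reference spectrum via polynomial
    regression of the correlation plot (measured vs. reference values);
    degree=2 corresponds to the parabola used as manipulating function."""
    spec = np.asarray(spec, dtype=float)
    ref = np.asarray(ref, dtype=float)
    coeffs = np.polyfit(spec, ref, degree)  # least-squares fit of the correlation plot
    return np.polyval(coeffs, spec)
```

When the true relation between measured and reference intensities is quadratic, the parabola recovers the reference exactly, while a simple area scaling cannot.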

잡음 환경에서 짧은 발화 인식 성능 향상을 위한 선택적 극점 필터링 기반의 특징 정규화 (Selective pole filtering based feature normalization for performance improvement of short utterance recognition in noisy environments)

  • 최보경; 반성민; 김형순
    • 말소리와 음성과학 / Vol. 9, No. 2 / pp.103-110 / 2017
  • The pole filtering concept has been successfully applied to cepstral feature normalization techniques for noise-robust speech recognition. In this paper, we propose applying pole filtering selectively, only to the speech intervals, to further improve recognition performance for short utterances in noisy environments. Experimental results on the AURORA 2 task with clean-condition training show that the proposed selectively pole-filtered cepstral mean normalization (SPFCMN) and selectively pole-filtered cepstral mean and variance normalization (SPFCMVN) yield error rate reductions of 38.6% and 45.8%, respectively, compared to the baseline system.
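Cepstral mean and variance normalization with statistics estimated from speech frames only, which is the "selective" part of the proposal, can be sketched as below. The pole-filtering step itself is omitted, and the interface is our own:

```python
import numpy as np

def selective_cmvn(features, speech_mask):
    """CMVN where the mean and variance are estimated only from frames
    flagged as speech (speech_mask True), then applied to all frames."""
    feats = np.asarray(features, dtype=float)
    mask = np.asarray(speech_mask, dtype=bool)
    mean = feats[mask].mean(axis=0)          # per-coefficient mean over speech frames
    std = feats[mask].std(axis=0)            # per-coefficient std over speech frames
    return (feats - mean) / np.where(std > 0, std, 1.0)
```

Restricting the statistics to speech intervals matters most for short utterances, where a few noisy non-speech frames can dominate the global mean and variance estimates.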

3-level 계층 64QAM 기법의 정규화 인수 (Normalization Factor for Three-Level Hierarchical 64QAM Scheme)

  • 유동호; 김동호
    • 한국통신학회논문지 / Vol. 41, No. 1 / pp.77-79 / 2016
  • This paper considers hierarchical modulation, which is widely used in the transmission schemes of digital broadcasting systems. Because hierarchical modulation maps multiple independent data streams onto modulation symbols while adjusting their transmit signal power, the normalization factor used in conventional M-QAM cannot be applied. This paper derives and presents a method and procedure for obtaining the exact normalization factor of the three-level hierarchical 64QAM scheme.
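For conventional square M-QAM, the normalization factor is one over the square root of the average symbol energy; a generic calculation over per-axis amplitude levels is shown below. The paper's contribution is handling the non-uniform level spacing of the 3-level hierarchical case, which this sketch does not derive:

```python
import itertools
import math

def qam_normalization_factor(levels):
    """1/sqrt(average symbol energy) for a square QAM constellation whose
    per-axis amplitude levels are given; uniform 64QAM uses
    [-7, -5, -3, -1, 1, 3, 5, 7] on each axis."""
    energies = [i * i + q * q for i, q in itertools.product(levels, repeat=2)]
    return 1.0 / math.sqrt(sum(energies) / len(energies))
```

For uniform 64QAM this reproduces the well-known factor 1/√42; a hierarchical constellation shifts the levels according to the power ratios between the data streams, changing the average energy and hence the factor.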

Word Similarity Calculation by Using the Edit Distance Metrics with Consonant Normalization

  • Kang, Seung-Shik
    • Journal of Information Processing Systems / Vol. 11, No. 4 / pp.573-582 / 2015
  • Edit distance metrics are widely used for many applications such as string comparison and spelling error correction. Hamming distance is a metric for two strings of equal length, and Damerau-Levenshtein distance is a well-known metric for making spelling corrections through string-to-string comparison. Previous distance metrics seem appropriate for alphabetic languages like English and other European languages. However, the conventional edit distance criterion is not the best method for agglutinative languages like Korean, because two or more letter units make up a Korean character, called a syllable. This syllable-based word construction in the Korean language makes a naive edit distance calculation inefficient. We therefore explore a new edit distance method using consonant normalization and a normalization factor.
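A minimal illustration of the underlying issue: decompose precomposed Hangul syllables into their jamo via the standard Unicode arithmetic, then run an ordinary Levenshtein distance over the decomposed sequences. This shows plain consonant/vowel decomposition only, not the paper's specific consonant normalization or normalization factor:

```python
def decompose(word):
    """Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into
    lead-consonant / vowel / tail-consonant index tokens; any other
    character passes through unchanged."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:
            out += [('L', code // 588), ('V', (code % 588) // 28)]
            if code % 28:                       # tail consonant is optional
                out.append(('T', code % 28))
        else:
            out.append(ch)
    return out

def edit_distance(a, b):
    """Plain Levenshtein distance over two sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]
```

At the syllable level, 사랑 vs. 사람 is one whole-character substitution, even though only the tail consonant differs; comparing the decomposed jamo sequences exposes that finer-grained difference, which is the granularity the paper's normalization works at.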

Local-Based Iterative Histogram Matching for Relative Radiometric Normalization

  • Seo, Dae Kyo; Eo, Yang Dam
    • 한국측량학회지 / Vol. 37, No. 5 / pp.323-330 / 2019
  • Radiometric normalization with multi-temporal satellite images is essential for time series analysis and change detection. Generally, relative radiometric normalization, an image-based method, is performed, and histogram matching is a representative method for normalizing non-linear properties. However, since it uses only global statistical information, local information is not considered at all. This paper therefore proposes a histogram matching method that considers local information. The proposed method divides histograms based on the density, mean, and standard deviation of the image intensities, and performs histogram matching locally on each sub-histogram. The matched histogram is then partitioned further and the process is repeated iteratively, controlled by the Wasserstein distance. Finally, the proposed method is compared to global histogram matching. The experimental results show that the proposed method is visually and quantitatively superior to the conventional method, which indicates its applicability to the radiometric normalization of multi-temporal images with non-linear properties.
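Global histogram matching, the baseline the paper improves on, can be sketched as a CDF-to-CDF remapping of intensities; the local histogram splitting and Wasserstein-controlled iteration of the proposed method are not reproduced here:

```python
import numpy as np

def histogram_match(source, reference):
    """Global histogram matching: remap source intensities so that their
    empirical CDF follows the reference image's empirical CDF."""
    src = np.asarray(source).ravel()
    ref = np.asarray(reference).ravel()
    s_vals, s_idx, s_cnt = np.unique(src, return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size      # CDF of the source image
    r_cdf = np.cumsum(r_cnt) / ref.size      # CDF of the reference image
    mapped = np.interp(s_cdf, r_cdf, r_vals)  # invert the reference CDF
    return mapped[s_idx].reshape(np.asarray(source).shape)
```

Because the mapping is built from whole-image statistics, two regions with the same intensity are always remapped identically, which is exactly the loss of local information the paper addresses by matching sub-histograms separately.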

함수에 의한 정규화를 이용한 local alignment 알고리즘 (A Local Alignment Algorithm using Normalization by Functions)

  • 이선호; 박근수
    • 한국정보과학회논문지: 시스템및이론 / Vol. 34, No. 5-6 / pp.187-194 / 2007
  • A local alignment algorithm compares two strings and finds a pair of substrings of length l with similarity score s. To find substring pairs that are both sufficiently long and highly similar, a normalization method that maximizes the similarity score per unit length, s/l, has been proposed. In this paper, we present normalization by functions: introducing increasing functions f and g, we maximize f(s)/g(l). The functions f and g are chosen through experiments on DNA sequence comparison, in which normalization by functions finds good local alignments. We also show that when the longest common subsequence is adopted as the similarity measure, the function-normalized score f(s)/g(l) can be maximized using an existing normalization algorithm without significant extra time.
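The objective can be illustrated with a brute-force search that scores every substring pair by f(s)/g(l), using LCS length as the similarity score s. The defaults f(s) = s² and g(l) = l (with l the combined substring length) are purely illustrative, not the functions chosen in the paper, and the quartic enumeration stands in for the paper's efficient algorithm:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence (similarity score s)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if ca == cb
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[-1][-1]

def best_normalized_alignment(x, y, f=lambda s: s * s, g=lambda l: l):
    """Brute-force search for the substring pair maximizing f(s)/g(l).
    f and g must be increasing; the defaults are illustrative only."""
    best = (0.0, "", "")
    for i in range(len(x)):
        for j in range(i + 1, len(x) + 1):
            for k in range(len(y)):
                for m in range(k + 1, len(y) + 1):
                    s = lcs_len(x[i:j], y[k:m])
                    l = (j - i) + (m - k)
                    score = f(s) / g(l)
                    if score > best[0]:
                        best = (score, x[i:j], y[k:m])
    return best
```

With f(s) = s² the score rewards longer exact matches over trivially short ones: a single matched character scores 1/2, while a matched triple scores 9/6, so the search prefers the longer alignment, which is the effect the normalization is designed to achieve.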