• Title/Summary/Keyword: normalization method


An Efficient Algorithm for Streaming Time-Series Matching that Supports Normalization Transform (정규화 변환을 지원하는 스트리밍 시계열 매칭 알고리즘)

  • Loh, Woong-Kee;Moon, Yang-Sae;Kim, Young-Kuk
    • Journal of KIISE:Databases, v.33 no.6, pp.600-619, 2006
  • With recent technical advances in sensors and mobile devices, processing the data streams generated by such devices has become an important research issue. A stream of real values obtained at continuous time points is called a streaming time-series. Due to the unique features of streaming time-series, which differ from those of traditional time-series, the similarity matching problem on streaming time-series must be solved in a new way. In this paper, we propose an efficient algorithm for the streaming time-series matching problem that supports the normalization transform. While existing algorithms compare streaming time-series without any transform, the proposed algorithm compares them after they are normalization-transformed. The normalization transform is useful for finding time-series that have similar fluctuation trends even though they consist of distant element values. The major contributions of this paper are as follows. (1) Using a theorem presented in the context of subsequence matching that supports normalization transform [4], we propose a simple algorithm for solving the problem. (2) To improve search performance, we extend the simple algorithm to use $k \, (\geq 1)$ indexes. (3) For a given $k$, to achieve optimal search performance of the extended algorithm, we present an approximation method for choosing the $k$ window sizes used to construct the $k$ indexes. (4) Based on the notion of continuity [8] on streaming time-series, we further extend our algorithm so that it can simultaneously obtain the search results for $m \, (\geq 1)$ time points, from the present $t_0$ to a time point $(t_0+m-1)$ in the near future, by retrieving the index only once. (5) Through a series of experiments, we compare the search performance of the proposed algorithms and show their performance trends according to the values of $k$ and $m$. Since, to the best of our knowledge, no previous algorithm solves the problem presented in this paper, we compare our algorithms against the sequential scan algorithm. The experiments showed that our algorithms outperform the sequential scan algorithm by up to 13.2 times, and their performance improves further as $k$ increases.
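
The normalization transform in this setting is commonly z-normalization (subtract the window mean, divide by the window standard deviation), which makes sequences with similar fluctuation trends comparable regardless of offset and scale. Below is a minimal sequential sketch of normalized window matching; it illustrates the transform only, and the brute-force loop stands in for the paper's index-based search, which is not reproduced here.

```python
import numpy as np

def z_normalize(window):
    """Z-normalize a window: zero mean, unit standard deviation."""
    mu, sigma = window.mean(), window.std()
    if sigma == 0.0:                      # constant window has no trend
        return np.zeros_like(window)
    return (window - mu) / sigma

def match_streaming(stream, query, threshold):
    """Yield each time point whose most recent window of len(query) values
    matches the query within `threshold` after z-normalization
    (sequential scan; the paper accelerates this step with k indexes)."""
    w = len(query)
    q = z_normalize(np.asarray(query, dtype=float))
    buf = []
    for t, value in enumerate(stream):
        buf.append(float(value))
        if len(buf) > w:
            buf.pop(0)
        if len(buf) == w:
            if np.linalg.norm(z_normalize(np.array(buf)) - q) <= threshold:
                yield t
```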

A Study on the Channel Normalized Pitch Synchronous Cepstrum for Speaker Recognition (채널에 강인한 화자 인식을 위한 채널 정규화 피치 동기 켑스트럼에 관한 연구)

  • 김유진;정재호
    • The Journal of the Acoustical Society of Korea, v.23 no.1, pp.61-74, 2004
  • In this paper, a content- and speaker-dependent cepstrum extraction method and a channel normalization method that minimizes the loss of speaker characteristics in the cepstrum are proposed for a channel-robust speaker recognition system. The proposed extraction method creates a cepstrum based on pitch synchronous analysis using the inherent pitch of the speaker. The resulting cepstrum, called the "pitch synchronous cepstrum" (PSC), represents the impulse response of the vocal tract more accurately in voiced speech. The PSC can also compensate for channel distortion, because pitch is more robust in a channel environment than the spectrum of speech. The proposed channel normalization method, the "formant-broadened pitch synchronous CMS" (FBPSCMS), applies the formant-broadened CMS to the PSC and improves the accuracy of the intra-frame processing. We compared text-independent closed-set speaker identification on 56 females and 112 males using the TIMIT and NTIMIT databases, respectively. The results show that the pitch synchronous cepstrum improves the error reduction rate by up to 7.7% compared with the conventional short-time cepstrum, and that the error rates of the FBPSCMS are lower and more stable than those of pole-filtered CMS.
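
Cepstral mean subtraction (CMS), the channel-normalization baseline that FBPSCMS extends, removes a stationary convolutional channel by subtracting the long-term cepstral mean. A minimal sketch of plain CMS follows; the formant broadening and pitch-synchronous framing of the paper's method are not reproduced here.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Plain CMS: subtract the per-coefficient mean over all frames.
    `cepstra` is a (num_frames, num_coefficients) array; a stationary
    channel adds a constant vector to every frame, which the mean removes."""
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```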

Contactless Biometric Using Thumb Image (엄지손가락 영상을 이용한 비접촉식 바이오인식)

  • Lim, Naeun;Han, Jae Hyun;Lee, Eui Chul
    • KIPS Transactions on Software and Data Engineering, v.5 no.12, pp.671-676, 2016
  • Recently, with the rise of Fintech, simple payment using biometrics on smartphones has come into wide use. In this paper, we propose a new contactless biometric method that uses a thumb image without additional sensors, unlike previous biometrics such as fingerprint, iris, and vein recognition. In our method, length, width, and skin texture information are used as features. For that purpose, illumination normalization, skin region segmentation, size normalization, and alignment procedures are performed sequentially on the captured thumb image. Then, the correlation coefficient is calculated as the similarity measure. To analyze recognition accuracy, genuine and impostor matchings are performed. As a result, we confirmed a FAR of 1.68% at a FRR of 1.55%. Because the impostor-matching distribution is close to a normal distribution, our method has the advantage of a low FAR: since a FAR of 0% can be achieved at a FRR of 15%, the proposed method is sufficient for the 1:1 matching used in payment verification.
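
The similarity stage here reduces to the Pearson correlation coefficient between two normalized thumb images. A minimal sketch, assuming both images have already been segmented and size-normalized to the same shape:

```python
import numpy as np

def correlation_similarity(img_a, img_b):
    """Pearson correlation coefficient between two equally-sized grayscale
    images; 1.0 means identical up to brightness and contrast."""
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))
```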

Combined Normalized and Offset Min-Sum Algorithm for Low-Density Parity-Check Codes (LDPC 부호의 복호를 위한 정규화와 오프셋이 조합된 최소-합 알고리즘)

  • Lee, Hee-ran;Yun, In-Woo;Kim, Joon Tae
    • Journal of Broadcast Engineering, v.25 no.1, pp.36-47, 2020
  • Improved belief-propagation-based algorithms, such as the normalized min-sum algorithm (NMSA) and the offset min-sum algorithm (OMSA), are widely used to decode LDPC (low-density parity-check) codes because they are less computationally complex and work well even at low SNR (signal-to-noise ratio). However, these algorithms perform well only when an appropriate normalization factor or offset value is used. A recently proposed method that uses a CMD (check node message distribution) chart and the least-squares method has a computational-complexity advantage over other approaches for obtaining optimal coefficients, and it can also derive coefficients for each iteration. In this paper, we apply this method and propose an algorithm that derives a combination of normalization factor and offset value for a combined normalized and offset min-sum algorithm, to further improve the decoding of LDPC codes. Simulations on ATSC 3.0 LDPC codes, the next-generation broadcasting standard, show that the combined normalized and offset min-sum algorithm using the proposed coefficients as correction coefficients achieves the best BER performance among the compared decoding algorithms.
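
In min-sum decoding, the correction is applied at the check-node update. A minimal sketch of a combined normalized/offset update for one check node follows; the coefficient values `alpha` and `beta` are illustrative placeholders, whereas the paper derives them per iteration from a CMD chart and least squares.

```python
import numpy as np

def check_node_update(incoming, alpha=0.8, beta=0.1):
    """Combined normalized and offset min-sum check-node update.
    Each outgoing message is the sign product of the other incoming
    messages times max(alpha * min|others| - beta, 0)."""
    msgs = np.asarray(incoming, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))
        out[i] = sign * max(alpha * np.min(np.abs(others)) - beta, 0.0)
    return out
```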

Hybrid Affine Registration Using Intensity Similarity and Feature Similarity for Pathology Detection

  • June-Sik Kim;Ho-Sung Kim;Jong-Min Lee;Jae-Seok Kim;In-Young Kim;Sun I. Kim
    • Journal of Biomedical Engineering Research, v.23 no.1, pp.39-47, 2002
  • The objective of this study is to provide a precise form of spatial normalization with an affine transformation. Quantitative comparison of brain architecture across different subjects requires a common coordinate system, and for such a coordinate system not only the global brain but also local regions of interest should be spatially normalized. Registration using mutual information generally matches the whole brain well; however, a region of interest may not be normalized as well as with feature-based methods that use landmarks. The hybrid method of this paper utilizes feature information of the local region as well as intensity similarity. The central gray nuclei of the brain, including the corpus callosum, which are used as features in schizophrenia detection, are appropriately normalized by the hybrid method. In the results section, our method is compared with a mutual-information-only method and with Talairach mapping on schizophrenia patients, and we show how accurately it normalizes the features.
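
The intensity-similarity term in such registration is typically histogram-based mutual information between the two images. A minimal sketch, where the bin count and implementation details are illustrative assumptions:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two intensity images, estimated from
    their joint histogram; higher values indicate better alignment."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```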

Image Signal Denoising by the Soft-Threshold Technique Using Coefficient Normalization in Multiwavelet Transform Domain (멀티웨이블릿 변환영역에서 계수정규화를 이용한 Soft-Threshold 기법의 영상신호 잡음제거)

  • Kim, Jae-Hwan;Woo, Chang-Yong;Park, Nam-Chun
    • Journal of the Institute of Convergence Signal Processing, v.8 no.4, pp.255-265, 2007
  • When wavelet coefficients are correlated, the effectiveness of wavelet shrinkage denoising for image signals is reduced. The coefficients of the multiwavelet transform are correlated because of the pre-filters. To address this degradation in the multiwavelet transform, V. Strela suggested a new pre-filter for the universal threshold, or weighting factors applied to the threshold. In this paper, to improve denoising in the multiwavelet transform domain, a coefficient-normalization method, in which each coefficient is divided by its estimated noise deviation, is applied to the transformed multiwavelet coefficients during wavelet shrinkage. The universal, SURE, and GCV thresholds are then estimated from the normalized coefficients and used for shrinkage denoising. We compared the PSNRs of the denoised images for each threshold and confirmed the effectiveness of the proposed method.
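
Soft-threshold shrinkage with coefficient normalization can be sketched in a few lines. The noise deviations would in practice be estimated per subband (e.g., via the median absolute deviation); that estimate and the use of the universal threshold are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft shrinkage: move coefficient magnitudes toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def normalized_shrinkage(coeffs, noise_std):
    """Divide coefficients by their estimated noise deviation so a single
    threshold applies, shrink, then restore the original scale."""
    u = coeffs / noise_std
    t = np.sqrt(2.0 * np.log(u.size))   # universal threshold for unit noise
    return soft_threshold(u, t) * noise_std
```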

Function Embedding and Projective Measurement of Quantum Gate by Probability Amplitude Switch (확률진폭 스위치에 의한 양자게이트의 함수 임베딩과 투사측정)

  • Park, Dong-Young
    • The Journal of the Korea institute of electronic communication sciences, v.12 no.6, pp.1027-1034, 2017
  • In this paper, we propose a new function embedding method that can measure the mathematical projections of probability amplitude, probability, average expectation, and the matrix elements of the stationary-state unit matrix at all control operation points of a quantum gate. The function embedding method embeds the orthonormalization condition of the probability amplitudes at each control operating point into a binary scalar operator, using the Dirac notation and the Kronecker delta symbol. Such a function embedding method is an effective means of controlling the arithmetic power function of a unitary gate in a unitary transformation that expresses the quantum gate function as a tensor product of single-quantum states. We present the results of the evolution operation and the projective measurement obtained by applying the proposed function embedding method to the ternary 2-qutrit cNOT gate, and compare them with existing methods.
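
The condition being embedded is the standard orthonormalization of quantum states, stated here in Dirac notation for reference (a generic statement, not the paper's specific scalar operator):

```latex
\langle i \mid j \rangle = \delta_{ij}, \qquad
\langle \psi \mid \psi \rangle = \sum_{i} \lvert a_i \rvert^{2} = 1
\quad \text{for} \quad \lvert \psi \rangle = \sum_{i} a_i \lvert i \rangle .
```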

Real-time Face Detection and Verification Method using PCA and LDA (PCA와 LDA를 이용한 실시간 얼굴 검출 및 검증 기법)

  • 홍은혜;고병철;변혜란
    • Journal of KIISE:Software and Applications, v.31 no.2, pp.213-223, 2004
  • In this paper, we propose a new face detection method for real-time applications, based on template matching and appearance-based methods. First, we apply min-max normalization with histogram equalization to the input image, according to the variation of its intensity. By applying the PCA transform to both the input image and the template, principal components are obtained, and these are fed into the LDA transform. We then estimate the distances between the input image and the template and select the region with the smallest distance. An SVM makes the final decision on whether the candidate region is a real face. Since we detect a face within a ±12 search window rather than over the full region, our method achieves a good speed and detection rate. In experiments with six categories of input video, our algorithm outperforms the existing methods that use only the PCA transform, or the PCA and LDA transforms.
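
The preprocessing step pairs min-max intensity normalization with histogram equalization. A minimal sketch for an 8-bit grayscale image; the exact order and details of the combination are assumptions for illustration:

```python
import numpy as np

def min_max_normalize(img):
    """Linearly rescale intensities to the full [0, 255] range."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12) * 255.0

def histogram_equalize(img):
    """Spread the intensity histogram via the cumulative distribution."""
    img = np.clip(img, 0, 255).astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[img]                     # map each pixel through the CDF
```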

Image Similarity Retrieval Using a Scale and Rotation Invariant Region Feature (크기 및 회전 불변 영역 특징을 이용한 이미지 유사성 검색)

  • Yu, Seung-Hoon;Kim, Hyun-Soo;Lee, Seok-Lyong;Lim, Myung-Kwan;Kim, Deok-Hwan
    • Journal of KIISE:Databases, v.36 no.6, pp.446-454, 2009
  • Among the various region detectors and shape feature extraction methods, MSER (maximally stable extremal regions), SIFT, and its variants are popular in computer vision applications. However, since SIFT is sensitive to illumination change and MSER is sensitive to scale change, they are not easy to apply to image similarity retrieval. In this paper, we present a scale and rotation invariant region feature (SRIRF) descriptor using a scale pyramid, MSER, and affine normalization. The proposed SRIRF method is robust to scale, rotation, and illumination changes of the image since it uses affine normalization and the scale pyramid. We have tested the SRIRF method on various images. Experimental results demonstrate that its retrieval performance is about 20%, 38%, 11%, and 24% better than that of traditional SIFT, PCA-SIFT, CE-SIFT, and SURF, respectively.
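
Affine normalization of a detected region is commonly done by whitening its second-moment matrix, mapping the region's ellipse to a canonical circle so descriptors become invariant to affine distortion. A minimal sketch under that standard formulation, not necessarily the paper's exact procedure:

```python
import numpy as np

def affine_normalize(points):
    """Map region pixel coordinates (N x 2), e.g. an MSER, to a canonical
    frame by whitening the region's second-moment (covariance) matrix."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    vals, vecs = np.linalg.eigh(cov)        # cov is symmetric 2 x 2
    whiten = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-12)) @ vecs.T
    return centered @ whiten.T              # unit covariance up to rotation
```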

A Normalized Loss Function of Style Transfer Network for More Diverse and More Stable Transfer Results (다양성 및 안정성 확보를 위한 스타일 전이 네트워크 손실 함수 정규화 기법)

  • Choi, Insung;Kim, Yong-Goo
    • Journal of Broadcast Engineering, v.25 no.6, pp.980-993, 2020
  • Deep-learning-based style transfer has recently attracted great attention because it provides high-quality transfer results that appropriately reflect the high-level structural characteristics of images. This paper deals with making the results of such style transfer more stable and more diverse. Based on an investigation of experimental results over a wide range of hyper-parameter settings, the paper defines the problems of stability and diversity in style transfer and proposes a partial-loss normalization method to solve them. Style transfer using the proposed normalization method not only stabilizes control over the degree of style reflection regardless of the input image characteristics, but also, unlike the existing method, yields diverse transfer results when the weights of the partial style losses are varied, and remains stable across differences in input image resolution.
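
One common way to realize partial-loss normalization is to scale each per-layer style loss by a detached reference magnitude (for example, its value at the first iteration), so the user-chosen weights control relative contributions independently of raw loss scales. The sketch below follows that generic recipe and is an assumption for illustration, not the paper's exact formulation:

```python
def normalized_style_loss(partial_losses, ref_magnitudes, weights):
    """Combine per-layer style losses after normalizing each by a fixed
    reference magnitude (e.g., its detached first-iteration value), so
    `weights` set the mix regardless of the raw scale of each term."""
    total = 0.0
    for loss, ref, w in zip(partial_losses, ref_magnitudes, weights):
        total += w * loss / (ref + 1e-12)
    return total
```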