• Title/Summary/Keyword: 정규고

An Amplitude Warping Approach to Intra-Speaker Normalization for Speech Recognition (음성인식에서 화자 내 정규화를 위한 진폭 변경 방법)

  • Kim Dong-Hyun;Hong Kwang-Seok
    • Journal of Internet Computing and Services
    • /
    • v.4 no.3
    • /
    • pp.9-14
    • /
    • 2003
  • Vocal tract normalization is a successful method for improving the accuracy of inter-speaker normalization. In this paper, we present an intra-speaker warping factor estimation based on pitch-altered utterances. The feature space distributions of untransformed speech from an intra-speaker pitch-altered utterance vary because of the acoustic differences between speech produced by the glottis and by the vocal tract. Utterance variation is of two types: frequency variation and amplitude variation. Among inter-speaker normalization methods, vocal tract normalization is a frequency normalization; therefore, we must also consider amplitude variation, and the amplitude warping factor can be determined by calculating the inverse ratio of the input pitch to the reference pitch. In the recognition results, the error rate is reduced by 0.4% to 2.3% for digit and word decoding.

  • PDF
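
As a rough illustration of the amplitude step described above, the sketch below computes a warping factor as the inverse ratio of the input pitch to a reference pitch and scales sample amplitudes by it. The function names and pitch values are illustrative, not taken from the paper.

```python
# Amplitude warping sketch: factor = reference_pitch / input_pitch
# (the inverse ratio of input to reference pitch), then scale samples.

def amplitude_warping_factor(input_pitch_hz, reference_pitch_hz):
    """Inverse ratio of the input pitch to the reference pitch."""
    return reference_pitch_hz / input_pitch_hz

def warp_amplitude(samples, factor):
    """Scale each sample's amplitude by the warping factor."""
    return [s * factor for s in samples]

factor = amplitude_warping_factor(input_pitch_hz=220.0, reference_pitch_hz=110.0)
print(factor)                               # 0.5
print(warp_amplitude([1.0, -0.5], factor))  # [0.5, -0.25]
```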

Regular Expression Matching Processor Architecture Supporting Character Class Matching (문자클래스 매칭을 지원하는 정규표현식 매칭 프로세서 구조)

  • Yun, SangKyun
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1280-1285
    • /
    • 2015
  • Many hardware-based regular expression matching architectures have been proposed for high-performance matching. In particular, regular expression processors such as ReCPU and SMPU perform pattern matching in an approach similar to that of general-purpose processors, which provides flexibility when updating patterns. However, these processors are inefficient at class matching since they do not provide character class matching capabilities. This paper proposes an instruction set and architecture for a regular expression matching processor that supports character class matching. The proposed processor can perform class matching efficiently since it includes character class, character range, and negated character class matching capabilities.
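
To make the three class-matching capabilities concrete, here is a toy software stand-in that matches one character per instruction, supporting character class, character range, and negated class operations. The instruction encoding is invented for illustration and is not the processor's actual ISA.

```python
# Toy matcher: one instruction per input character, supporting
# character class (CLASS), character range (RANGE), and negated
# class (NCLASS) matching. Encoding invented for illustration.

def run(program, text):
    """Match text against a straight-line program, one op per character."""
    if len(text) != len(program):
        return False
    for ch, (op, arg) in zip(text, program):
        if op == "CLASS" and ch not in arg:                 # e.g. [0-9] as a set
            return False
        if op == "RANGE" and not (arg[0] <= ch <= arg[1]):  # e.g. [a-z]
            return False
        if op == "NCLASS" and ch in arg:                    # e.g. [^;,]
            return False
    return True

prog = [("RANGE", ("a", "z")), ("CLASS", "0123456789"), ("NCLASS", ";,")]
print(run(prog, "a7!"))  # True
print(run(prog, "A7!"))  # False
```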

A Local Alignment Algorithm using Normalization by Functions (함수에 의한 정규화를 이용한 local alignment 알고리즘)

  • Lee, Sun-Ho;Park, Kun-Soo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.5_6
    • /
    • pp.187-194
    • /
    • 2007
  • A local alignment algorithm compares two strings and finds a substring pair with size l and similarity s. To find a pair with both sufficient size and high similarity, existing normalization approaches maximize the ratio of the similarity to the size. In this paper, we introduce normalization by functions, which maximizes f(s)/g(l), where f and g are non-decreasing functions. These functions, f and g, are determined by experiments comparing DNA sequences. In the experiments, our normalization by functions finds appropriate local alignments. For the previous algorithm, which evaluates similarity using the longest common subsequence, we show that it can also maximize the score normalized by functions, f(s)/g(l), without loss of time.
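
A brute-force sketch of the normalization-by-functions objective: enumerate substring pairs, score each by f(s)/g(l) with s the LCS length, and keep the maximum. The choices f(s) = s and g(l) = l + 4 are placeholders, since the paper determines f and g experimentally, and the real algorithm avoids this quartic enumeration.

```python
# Brute-force normalization by functions: maximize f(s)/g(l) over all
# substring pairs, where s is the LCS length and l the combined length.
# f(s) = s and g(l) = l + 4 are placeholder choices.

def lcs_len(a, b):
    """Length of the longest common subsequence of a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a)):
        for j in range(len(b)):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def best_normalized_alignment(x, y, f=lambda s: s, g=lambda l: l + 4):
    """Return (score, substring of x, substring of y) maximizing f(s)/g(l)."""
    best = (0.0, "", "")
    for i in range(len(x)):
        for j in range(i + 1, len(x) + 1):
            for p in range(len(y)):
                for q in range(p + 1, len(y) + 1):
                    a, b = x[i:j], y[p:q]
                    score = f(lcs_len(a, b)) / g(len(a) + len(b))
                    if score > best[0]:
                        best = (score, a, b)
    return best

print(best_normalized_alignment("abc", "abc"))
```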

Abnormal Work, a Bridge or a Trap? (비정규직, 가교(bridge) 인가 함정(trap) 인가?)

  • Nam, Jaeryang;Kim, Taigi
    • Journal of Labour Economics
    • /
    • v.23 no.2
    • /
    • pp.81-106
    • /
    • 2000
  • This paper examines whether 'abnormal work practices', which have increased rapidly since the mid-1990s, are a 'bridge' for workers to 'normal work practices' or a 'trap' from which it is hard to escape. It provides both static and dynamic analyses. The former shows that abnormal work practices are likely to act as a 'trap'. The latter, which investigates the transition probability over the last 24 months, supports the same result. It also finds that most part-time workers paid by an employer are contingent or daily workers and that about fifty percent of 'abnormal workers' took such jobs involuntarily.

  • PDF

Wage Differentials between Regular and Irregular Workers (데이터 매칭을 이용한 비정규직의 임금격차 분석)

  • Kim, Sunae;Kim, Jinyoung
    • Journal of Labour Economics
    • /
    • v.34 no.2
    • /
    • pp.53-77
    • /
    • 2011
  • The last decade has witnessed a surge of research interest in the differences between regular and irregular workers. Recent studies estimating wage differentials between the two types of workers have typically used linear regression analysis. Our study utilizes a new methodology to estimate these wage differentials: data matching. Our method can perform better than ordinary regression analysis because it carefully addresses the selection bias problem. Our results indicate that there is no significant wage difference between regular and irregular workers.

  • PDF
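
The data-matching idea can be sketched as nearest-neighbor matching on observed covariates, then averaging wage gaps within matched pairs. The covariates and figures below are toy values; actual studies typically match on many covariates or on a propensity score.

```python
# Nearest-neighbor matching sketch: pair each irregular worker with the
# regular worker closest in observed covariates, then average the wage
# gap within pairs. All names and figures below are toy values.

def nearest_match(treated, controls, keys=("age", "tenure")):
    """For each treated unit, pick the control with the smallest
    squared covariate distance."""
    return [(t, min(controls, key=lambda c: sum((t[k] - c[k]) ** 2 for k in keys)))
            for t in treated]

irregular = [{"age": 30, "tenure": 2, "wage": 180},
             {"age": 45, "tenure": 5, "wage": 210}]
regular   = [{"age": 31, "tenure": 2, "wage": 200},
             {"age": 44, "tenure": 6, "wage": 230},
             {"age": 25, "tenure": 1, "wage": 170}]

pairs = nearest_match(irregular, regular)
gap = sum(t["wage"] - c["wage"] for t, c in pairs) / len(pairs)
print(gap)  # -20.0
```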

Buy-Sell Strategy with Mean Trend and Volatility Indexes of Normalized Stock Price (정규화된 주식가격의 평균추세-변동성 지표를 이용한 매매전략 -KOSPI200 을 중심으로-)

  • Yoo, Seong-Mo;Kim, Dong-Hyun
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2005.05a
    • /
    • pp.277-283
    • /
    • 2005
  • In general, stock prices do not follow normal distributions, and mean trend indexes, volatility indexes, and volume indicators relating to these non-normal stock prices are widely used in buy-sell strategies. Such strategies rest on intuition rather than statistical reasoning. The non-normality problem can be solved by a normalizing process, and a statistical buy-sell strategy can be obtained by using mean trend and volatility indexes together with normalized stock prices. In this paper, a buy-sell strategy based on mean trend and volatility indexes of normalized stock prices is proposed and applied to KOSPI200 data to examine its feasibility.

  • PDF
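
A minimal sketch of the normalizing step: rolling z-scores of the price series, with a buy/sell signal when the normalized price leaves a band of k standard deviations around the mean trend. The window and threshold are illustrative, not the paper's calibrated values.

```python
# Rolling z-score normalization of a price series, plus a simple
# band signal on the normalized price. Window and threshold are toys.

from statistics import mean, pstdev

def normalize(prices, window):
    """Z-score of the last price in each rolling window."""
    out = []
    for i in range(window - 1, len(prices)):
        w = prices[i - window + 1 : i + 1]
        m, s = mean(w), pstdev(w)
        out.append((prices[i] - m) / s if s > 0 else 0.0)
    return out

def signal(z, k=1.0):
    """'buy' below -k, 'sell' above +k, otherwise 'hold'."""
    return "buy" if z < -k else "sell" if z > k else "hold"

print([signal(z) for z in normalize([100, 101, 103, 102, 99], 3)])
# ['sell', 'hold', 'buy']
```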

The Construction of Digital Terrain Models by a Triangulated Irregular Network (비정규삼각망 데이타구조에 의한 수치지형모델의 구성)

  • 이석찬;조규전;이창경;최병길
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.8 no.2
    • /
    • pp.1-8
    • /
    • 1990
  • A regular grid or a triangulated irregular network is generally used as the data structure of digital terrain models. A regular grid is simple and easy to manipulate, but it cannot describe terrain surface features well and requires vast volumes of data. In contrast, a triangulated irregular network has a more complex data structure, but it describes terrain surface features well and can achieve the accuracy required by its application with relatively little data. This paper aims at the construction of efficient digital terrain models by improving a triangulated irregular network based on Delaunay triangulation. Regular and irregular data sets are sampled from existing contour maps, and the efficiency and accuracy of the two data structures are compared.

  • PDF
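
The Delaunay criterion underlying a triangulated irregular network can be sketched by brute force: a triangle of sample points is accepted when no other point lies strictly inside its circumcircle. This O(n^4) check is for illustration only; practical TIN construction uses incremental or divide-and-conquer algorithms.

```python
# Brute-force Delaunay check: keep a triangle of sample points iff no
# other point lies strictly inside its circumcircle. Illustration only.

from itertools import combinations

def ccw(a, b, c):
    """True if a, b, c are in counter-clockwise order."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0

def in_circumcircle(a, b, c, p):
    """True if p is strictly inside the circumcircle of ccw triangle abc."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
            - (bx * bx + by * by) * (ax * cy - cx * ay)
            + (cx * cx + cy * cy) * (ax * by - bx * ay)) > 0

def delaunay_triangles(pts):
    """Index triples whose circumcircle contains no other sample point."""
    tris = []
    for i, j, k in combinations(range(len(pts)), 3):
        a, b, c = pts[i], pts[j], pts[k]
        if not ccw(a, b, c):
            a, c = c, a
        if not ccw(a, b, c):      # collinear points form no triangle
            continue
        if all(not in_circumcircle(a, b, c, p)
               for m, p in enumerate(pts) if m not in (i, j, k)):
            tris.append((i, j, k))
    return tris

# Three hull points plus one interior point -> three triangles.
print(delaunay_triangles([(0, 0), (2, 0), (1, 2), (1, 0.5)]))
```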

Analysis of normalization effect for earthquake events classification (지진 이벤트 분류를 위한 정규화 기법 분석)

  • Zhang, Shou;Ku, Bonhwa;Ko, Hansoek
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.2
    • /
    • pp.130-138
    • /
    • 2021
  • This paper presents an effective structure obtained by applying various normalizations to a Convolutional Neural Network (CNN) for seismic event classification. Normalization techniques not only improve the learning speed of neural networks but also provide robustness to noise. In this paper, we analyze the effect of input data normalization and hidden layer normalization on a deep learning model for seismic event classification. In addition, an effective model is derived through various experiments on the structure of the hidden layers to which normalization is applied. Across these experiments, the model that applied input data normalization and weight normalization to the first hidden layer showed the most stable performance improvement.
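
Two of the compared normalizations can be sketched in a few lines: z-scoring each input waveform, and weight normalization, which reparameterizes a weight vector as w = g·v/‖v‖. These are generic stand-ins, not the paper's exact layer configuration.

```python
# Generic stand-ins for two normalizations compared in the paper:
# z-scoring each input waveform, and weight normalization w = g*v/||v||.

from statistics import mean, pstdev

def zscore(waveform):
    """Scale a waveform to zero mean and (population) unit variance."""
    m, s = mean(waveform), pstdev(waveform)
    return [(x - m) / s if s > 0 else 0.0 for x in waveform]

def weight_norm(v, g=1.0):
    """Reparameterize a weight vector as w = g * v / ||v||."""
    norm = sum(x * x for x in v) ** 0.5
    return [g * x / norm for x in v]

print(weight_norm([3.0, 4.0]))  # [0.6, 0.8]
```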

Efficient Subword Segmentation for Korean Language Classification (한국어 분류를 위한 효율적인 서브 워드 분절)

  • Hyunjin Seo;Jeongjae Nam;Minseok Kim
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.535-540
    • /
    • 2022
  • The Out-of-Vocabulary (OOV) problem has frequently been raised in Neural Machine Translation (NMT). To address it, Byte Pair Encoding (BPE) [1], which compresses words efficiently, has typically been used. However, because BPE tokenizes deterministically based on frequency, it is difficult for a model to acquire a generalized segmentation ability over diverse sentences. To overcome this, subword regularization methods have recently been proposed. Subword regularization is designed to consider the various possible segmentations of the same word and has shown excellent performance in many experiments. However, subword regularization has not yet been applied to classification tasks, particularly Korean-language classification. In this paper, we therefore demonstrate the effectiveness of subword regularization for Korean classification using its two representative methods, unigram-based subword regularization [2] and BPE-Dropout [3]. Not only in NMT but also in classification, understanding the composition of words and their meanings contributes meaningfully to determining the class of each sentence. Moreover, subword regularization can develop a broad awareness of the components of Korean sentences. In our Korean classification experiments, these methods achieved up to 4.7% higher performance than standard BPE.

  • PDF
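
A toy sketch of BPE-Dropout, one of the two methods compared above: during segmentation each ranked merge is skipped with probability p, so the same word yields different subword sequences across passes. The merge table here is invented for illustration.

```python
# Toy BPE-Dropout: each ranked merge is skipped with probability p,
# producing varied segmentations of the same word. Merges invented.

import random

def bpe_dropout_segment(word, merges, p=0.1, rng=None):
    """Segment word with ranked merges; drop each merge op with prob p."""
    rng = rng or random.Random()
    tokens = list(word)
    for a, b in merges:                  # highest-priority merge first
        i = 0
        while i < len(tokens) - 1:
            if tokens[i] == a and tokens[i + 1] == b and rng.random() >= p:
                tokens[i:i + 2] = [a + b]
            else:
                i += 1
    return tokens

merges = [("l", "o"), ("lo", "w")]
print(bpe_dropout_segment("low", merges, p=0.0))  # ['low']
print(bpe_dropout_segment("low", merges, p=1.0))  # ['l', 'o', 'w']
```

With p = 0 this reduces to ordinary BPE; raising p increases segmentation diversity, which is the regularization effect.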

Effects and Evaluations of URL Normalization (URL정규화의 적용 효과 및 평가)

  • Jeong, Hyo-Sook;Kim, Sung-Jin;Lee, Sang-Ho
    • Journal of KIISE:Databases
    • /
    • v.33 no.5
    • /
    • pp.486-494
    • /
    • 2006
  • A web page can be represented by syntactically different URLs. URL normalization is the process of transforming URL strings into a canonical form. Through this process, duplicate URL representations of a web page can be reduced significantly. A number of normalization methods have been developed heuristically and used, but there has been no study analyzing them systematically. In this paper, we give a way to evaluate normalization methods in terms of the efficiency and effectiveness of web applications, and give users guidelines for selecting appropriate methods. To this end, we examine all the effects that can take place when a normalization method is adopted in web applications, and describe seven metrics for evaluating normalization methods. Lastly, evaluation results on 12 normalization methods over 25 million actual URLs are reported.
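
A few commonly used normalization steps of the kind the paper evaluates can be sketched with the standard library: lowercasing the scheme and host, dropping the default port, and removing '.'/'..' path segments. This sketch ignores userinfo and is not the paper's exact 12-method set.

```python
# A few common URL normalization steps: lowercase scheme and host,
# drop the default port, and remove '.'/'..' path segments.
# Userinfo is ignored in this sketch.

from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize_url(url):
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    netloc = parts.hostname.lower() if parts.hostname else ""
    if parts.port and parts.port != DEFAULT_PORTS.get(scheme):
        netloc += f":{parts.port}"
    segs = []
    for seg in parts.path.split("/"):   # resolve '.' and '..' segments
        if seg == "..":
            if segs:
                segs.pop()
        elif seg not in (".", ""):
            segs.append(seg)
    path = "/" + "/".join(segs)
    if parts.path.endswith("/") and path != "/":
        path += "/"
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))

print(normalize_url("HTTP://Example.COM:80/a/./b/../c"))  # http://example.com/a/c
```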