• Title/Summary/Keyword: word problem

Search results: 522

Target Word Selection for English-Korean Machine Translation System using Multiple Knowledge (다양한 지식을 사용한 영한 기계번역에서의 대역어 선택)

  • Lee, Ki-Young;Kim, Han-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.5 s.43
    • /
    • pp.75-86
    • /
    • 2006
  • Target word selection is one of the most important and difficult tasks in English-Korean machine translation, and it directly affects the translation accuracy of machine translation systems. In this paper, we present a new approach to selecting a Korean target word for an English noun with translation ambiguities, using multiple knowledge sources such as verb frame patterns, sense vectors based on collocations, statistical Korean local context information, and co-occurring POS information. Verb frame patterns, constructed from a dictionary and a corpus, play an important role in resolving the sparseness problem of collocation data. Sense vectors are sets of collocation data for the cases in which an English word with target selection ambiguities is translated into a specific Korean target word. Statistical Korean local context information is N-gram information generated from a Korean corpus. Co-occurring POS information is a statistically significant POS clue that appears with the ambiguous word. Experiments showed promising results for diverse sentences from web documents.

  • PDF
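The sense-vector idea in the abstract above can be sketched as a nearest-vector lookup. This is a minimal illustration, not the authors' system: the example senses, context words, and function names are all hypothetical, and the paper additionally combines verb frame patterns, local-context n-grams, and POS clues.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    num = sum(a[w] * b.get(w, 0) for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def select_target_word(context_words, sense_vectors):
    """Pick the Korean target word whose sense vector (collocation
    counts) is closest to the English noun's sentence context."""
    ctx = Counter(context_words)
    return max(sense_vectors, key=lambda k: cosine(ctx, sense_vectors[k]))

# Hypothetical sense vectors for the ambiguous English noun "bank"
senses = {
    "은행": Counter({"money": 5, "account": 3, "loan": 2}),  # financial bank
    "둑":   Counter({"river": 4, "water": 3, "flood": 1}),   # river bank
}
print(select_target_word(["deposit", "money", "account"], senses))  # 은행
```

In the paper's setting, these collocation vectors are what the verb frame patterns help back off to when collocation data is sparse.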

A Study on the Isolated word Recognition Using One-Stage DMS/DP for the Implementation of Voice Dialing System

  • Seong-Kwon Lee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06a
    • /
    • pp.1039-1045
    • /
    • 1994
  • Speech recognition systems using VQ typically suffer from a reduced recognition rate because MSVQ assigns dissimilar vectors to the same segment. In this paper, we apply the One-Stage DMS/DP algorithm to recognition experiments and show that it alleviates these problems to some degree. The recognition experiment was performed on Korean DDD area names with a DMS model of 20 sections and word-unit templates. We carried out both speaker-dependent and speaker-independent experiments, obtaining recognition rates of 97.7% and 81.7%, respectively.

  • PDF
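The DP matching underlying the DMS/DP approach above can be illustrated with the classic dynamic-time-warping recursion. One-stage DMS/DP extends this matching across connected section models; the sketch below shows only the basic table fill, with plain floats standing in for acoustic feature vectors.

```python
def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two feature
    sequences, using the standard three-way DP recursion."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # Best of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Time warping absorbs the repeated frame at no cost:
print(dtw_distance([1.0, 2.0, 3.0], [1.0, 2.0, 2.0, 3.0]))  # 0.0
```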

Semantic-Based Web Information Filtering Using WordNet (어휘사전 워드넷을 활용한 의미기반 웹 정보필터링)

  • Byeon, Yeong-Tae;Hwang, Sang-Gyu;O, Gyeong-Muk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.11S
    • /
    • pp.3399-3409
    • /
    • 1999
  • Information filtering for internet search, where a new information retrieval environment is given, differs from traditional methods such as bibliographic information filtering and news-group and e-mail filtering. Therefore, we cannot expect high performance from traditional information filtering models when they are applied to the new environment. To solve this problem, we inspect the characteristics of the new filtering environment and propose a semantic-based filtering model that includes a new filtering method using WordNet. For extracting keywords from documents, this model uses the SDCC (Semantic Distance for Common Category) algorithm instead of the TF/IDF method usually used by traditional approaches. The word sense ambiguity problem, one of the causes of reduced effectiveness in internet search, is solved by this method. The semantic-based filtering model can filter web pages selectively with consideration of the user's level, and we show in this paper that the proposed method makes it more convenient for users to search for information on the internet than traditional filtering methods do.

  • PDF
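The abstract does not spell out the SDCC algorithm, but a "semantic distance via a common category" can be sketched as the edge count from two words up to their nearest shared ancestor in a hypernym hierarchy. The hand-made tree below is a hypothetical stand-in for WordNet, and the function name is illustrative only.

```python
def common_category_distance(word_a, word_b, hypernyms):
    """Toy semantic distance: edges from each word up to the nearest
    shared ancestor (common category) in a hypernym tree."""
    def ancestors(w):
        chain = [w]
        while w in hypernyms:
            w = hypernyms[w]
            chain.append(w)
        return chain
    a_chain, b_chain = ancestors(word_a), ancestors(word_b)
    for i, node in enumerate(a_chain):
        if node in b_chain:
            return i + b_chain.index(node)  # shared category found
    return float("inf")                     # no common category

tree = {"dog": "canine", "canine": "mammal", "cat": "feline",
        "feline": "mammal", "mammal": "animal"}
print(common_category_distance("dog", "cat", tree))  # 4
```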

Improvements on Phrase Breaks Prediction Using CRF (Conditional Random Fields) (CRF를 이용한 운율경계추성 성능개선)

  • Kim Seung-Won;Lee Geun-Bae;Kim Byeong-Chang
    • MALSORI
    • /
    • no.57
    • /
    • pp.139-152
    • /
    • 2006
  • In this paper, we present a phrase break prediction method using CRFs (Conditional Random Fields), which perform well on classification problems. The phrase break prediction problem was mapped to a classification problem in our research. We trained the CRF on various linguistic features extracted from POS (part of speech) tags, the lexicon, word length, and word position in the sentence. Combinations of linguistic features were used in the experiments, and we identified some linguistic features that yield good phrase break prediction performance. The experimental results show that the proposed method improves on previous methods. Additionally, because the linguistic features are independent of each other in our research, the proposed method offers greater flexibility than other methods.

  • PDF
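The feature types named above (POS, lexical form, word length, position in the sentence) translate naturally into the per-token feature dictionaries that CRF toolkits consume. The sketch below shows only that feature-extraction step with made-up example words and POS tags; the CRF training itself (e.g. with a library such as sklearn-crfsuite) is omitted.

```python
def word_features(words, pos_tags, i):
    """Feature dict for word i: POS, lexical form, word length, and
    position in the sentence, plus neighboring POS context."""
    n = len(words)
    return {
        "word": words[i],
        "pos": pos_tags[i],
        "length": len(words[i]),
        "position": "begin" if i == 0 else ("end" if i == n - 1 else "mid"),
        "prev_pos": pos_tags[i - 1] if i > 0 else "BOS",
        "next_pos": pos_tags[i + 1] if i < n - 1 else "EOS",
    }

# Hypothetical tokenized sentence with composite Korean POS tags
words = ["나는", "학교에", "갔다"]
pos = ["NP+JX", "NNG+JKB", "VV+EP+EF"]
feats = [word_features(words, pos, i) for i in range(len(words))]
print(feats[1]["prev_pos"])  # NP+JX
```

Because each feature is an independent key in the dictionary, features can be added or dropped without touching the rest of the pipeline, which is the flexibility the abstract points to.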

A Study of the Equivalence Problem in the $\mathcal{E}^{\Sigma}_{0}$ Class ($\mathcal{E}^{\Sigma}_{0}$ 등급에서의 동치문제 연구)

  • Dong-Koo Choi;Sung-Hwan Kim
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.10
    • /
    • pp.1301-1308
    • /
    • 2001
  • In this paper, we observe some interesting aspects of the Grzegorczyk classes $\mathcal{E}^{\Sigma}_{n}$, $n \geq 0$, $\Sigma = \{1, 2\}$, of word-theoretic primitive recursive functions, including the classes of their corresponding predicates $(\mathcal{E}^{\Sigma}_{n})^{*}$. In particular, the small classes $\mathcal{E}^{\Sigma}_{n}$ ($n \leq 2$) are incomparable to the corresponding small number-theoretic Grzegorczyk classes $\mathcal{E}_{n}$. As one such aspect, we show that the equivalence problem in $\mathcal{E}^{\Sigma}_{0}$ is undecidable.

  • PDF

Against Pied-Piping

  • Choi, Young-Sik
    • Language and Information
    • /
    • v.6 no.2
    • /
    • pp.171-185
    • /
    • 2002
  • I claim that the asymmetry of locality effects in wh-questions involving the Complex Noun Phrase Island in Korean follows from the proposal for the asymmetric mode of scope taking between way ('why') and the other wh-words in Korean, as laid out in Choi (2002). I will show that the present proposal is superior to the LF pied-piping approach in Nishigauchi (1990) and the WH-structure pied-piping in von Stechow (1996), in that it has neither the fatal problem of wrong semantics in Nishigauchi nor the Subjacency violation problem in von Stechow. The crossed reading in examples involving the Wh-island has an interesting implication for the mechanism of unselective binding, suggesting that Heim's (1982) quantifier indexing mechanism, which requires local unselective binding of the indefinite by the unselective binder, may be too strong.

  • PDF

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.11
    • /
    • pp.441-448
    • /
    • 2019
  • Previous research on automatic word spacing of Korean sentences attempted to correct spacing errors by using n-gram based statistical techniques or a morphological analyzer to insert blanks at word boundaries. In this paper, we propose an end-to-end automatic word spacing method using a deep neural network. The automatic word spacing problem can be defined as a tag classification problem at the syllable level rather than the word level. For contextual representation between syllables, a Bi-LSTM encodes the dependency relationships between syllables into a fixed-length vector in a continuous vector space using forward and backward LSTM cells. To perform automatic word spacing of Korean sentences, the fixed-length contextual vector produced by the Bi-LSTM is classified into an auto-spacing tag (B or I), and a blank is inserted in front of each B tag. For the tag classification step, we compose three types of classification networks: a feedforward neural network, a neural network language model, and a linear-chain CRF. To compare our models, we measure the word spacing performance obtained with each of the three classification networks. Among them, the linear-chain CRF shows better performance than the other models. We used the KCC150 corpus as training and test data.
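The final decoding step described above, inserting a blank in front of each B tag, is straightforward to state in code. This sketch assumes the B/I tags have already been predicted by the Bi-LSTM classifier; the tag sequence here is hand-written for illustration.

```python
def apply_spacing(syllables, tags):
    """Reconstruct a spaced sentence from per-syllable B/I tags:
    insert a blank in front of every B tag (except at sentence start)."""
    out = []
    for syl, tag in zip(syllables, tags):
        if tag == "B" and out:
            out.append(" ")
        out.append(syl)
    return "".join(out)

syllables = list("아버지가방에들어가신다")
tags = ["B", "I", "I", "I", "B", "I", "B", "I", "I", "I", "I"]
print(apply_spacing(syllables, tags))  # 아버지가 방에 들어가신다
```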

Key-word Error Correction System using Syllable Restoration Algorithm (음절 복원 알고리즘을 이용한 핵심어 오류 보정 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.10
    • /
    • pp.165-172
    • /
    • 2010
  • There are two methods of error correction in vocabulary recognition systems: one based on error pattern matching and the other based on vocabulary meaning patterns. Both fail at error correction when the semantics of the key word are at issue. To improve on this, in this paper we propose a key-word error correction system using a syllable restoration algorithm. The system corrects key-word errors through semantic parsing of the meanings of the recognized phonemes. Syllable restoration is performed by applying the algorithm to the phonemes of the word before phonological alternation, which makes key-word parsing definite and reduces the number of unrecognized words. The error correction rate is computed using phoneme likelihood and confidence in the system's parse, and error correction is performed on vocabulary for which recognition errors are detected. As a result, the proposed system improves recognition performance by 2.3% over the methods based on error pattern learning, error pattern matching, and vocabulary meaning patterns.

An Effective Estimation method for Lexical Probabilities in Korean Lexical Disambiguation (한국어 어휘 중의성 해소에서 어휘 확률에 대한 효과적인 평가 방법)

  • Lee, Ha-Gyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1588-1597
    • /
    • 1996
  • This paper describes an estimation method for lexical probabilities in Korean lexical disambiguation. In the stochastic approach to lexical disambiguation, lexical probabilities and contextual probabilities are generally estimated on the basis of statistical data extracted from corpora. For Korean, it is desirable to compute lexical probabilities in terms of word phrases, because sentences are spaced in units of word phrases. However, Korean word phrases are so multiform that lexical probabilities often cannot be estimated directly in terms of word phrases, even when fairly large corpora are used. To overcome this problem, we define a similarity measure for word phrases from the lexical analysis point of view and propose an estimation method for Korean lexical probabilities based on this similarity. In this method, when the lexical probability for a word phrase cannot be estimated directly, it is estimated indirectly through a word phrase similar to the given one. Experimental results show that the proposed approach is effective for Korean lexical disambiguation.

  • PDF
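The indirect estimation described above is essentially a back-off scheme: estimate P(analysis | word phrase) from direct counts when the phrase was observed, and otherwise borrow the counts of a similar phrase. The sketch below is hypothetical: the similarity lookup is given as a ready-made mapping, whereas the paper computes it from the lexical analyses themselves, and the phrase/analysis strings are invented placeholders.

```python
def estimate_lexical_prob(phrase, analysis, counts, similar_to):
    """Back-off estimate of P(analysis | word phrase): use direct corpus
    counts when the phrase was observed; otherwise borrow the counts of
    a word phrase judged similar from the lexical-analysis viewpoint."""
    source = phrase if phrase in counts else similar_to.get(phrase)
    if source not in counts:
        return 0.0
    total = sum(counts[source].values())
    return counts[source].get(analysis, 0) / total

# Hypothetical corpus counts: word phrase -> {lexical analysis: count}
counts = {"school+to": {"NOUN+POSTP": 8, "VERB+END": 2}}
# Unseen phrase mapped to a similar observed one (hypothetical mapping)
similar_to = {"market+to": "school+to"}

print(estimate_lexical_prob("market+to", "NOUN+POSTP", counts, similar_to))  # 0.8
```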

Noise Robust Automatic Speech Recognition Scheme with Histogram of Oriented Gradient Features

  • Park, Taejin;Beack, SeungKwan;Lee, Taejin
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.5
    • /
    • pp.259-266
    • /
    • 2014
  • In this paper, we propose a novel technique for noise-robust automatic speech recognition (ASR). The development of ASR techniques has made it possible to recognize isolated words with a near-perfect word recognition rate. However, in a highly noisy environment, a distinct mismatch between the training speech and the test data results in significantly degraded word recognition accuracy (WRA). Unlike conventional ASR systems employing Mel-frequency cepstral coefficients (MFCCs) and a hidden Markov model (HMM), this study employs histogram of oriented gradient (HOG) features and a support vector machine (SVM) for ASR tasks to overcome this problem. Our proposed ASR system is less vulnerable to external interference noise and achieves a higher WRA than a conventional ASR system equipped with MFCCs and an HMM. The performance of our proposed ASR system was evaluated using a phonetically balanced word (PBW) set mixed with artificially added noise.
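The core of a HOG descriptor, here computed over a spectrogram patch rather than an image, is an orientation histogram of local gradients. This is a minimal single-cell sketch, not the paper's feature pipeline: real HOG implementations add cell grids and block normalization, and the resulting descriptors would then be fed to an SVM classifier.

```python
import math

def hog_cell(patch, n_bins=8):
    """Gradient-orientation histogram for one cell of a spectrogram
    patch (a 2D list of floats) -- a minimal sketch of the HOG idea."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi       # unsigned orientation
            b = min(int(ang / math.pi * n_bins), n_bins - 1)
            hist[b] += mag                           # magnitude-weighted vote
    s = sum(hist) or 1.0
    return [v / s for v in hist]                     # L1-normalized histogram

# A patch whose energy increases along the time axis: all gradient
# votes land in the 0-radian orientation bin.
patch = [[float(x) for x in range(6)] for _ in range(6)]
print(hog_cell(patch)[0])  # 1.0
```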