• Title/Summary/Keyword: Korean grammar (한국어 문법)

Search results: 345

Japanese-Korean Machine Translation System Using Connection Forms of Neighboring Words (인접 단어들의 접속정보를 이용한 일한 기계번역 시스템)

  • Kim, Jung-In
    • Journal of Korea Multimedia Society / v.7 no.7 / pp.998-1008 / 2004
  • There are many syntactic similarities between Japanese and Korean. Using these similarities, a Japanese-Korean translation system can be built without most syntactic and semantic analysis. To greatly improve the translation rate, we have been developing a Japanese-Korean translation system based on these similarities for several years. However, the system still has problems, such as the translation of inflected words and the processing of words with multiple possible translations. In this paper, we propose a new method of Japanese-Korean translation that uses the relations between two neighboring words. To solve the problems, we investigated priority-ordered connection rules for auxiliary verbs, and we designed a translation table consisting of entry tables and connection-form tables. When a word has only one translation, it can be translated into Korean by direct matching against the entry table alone; otherwise, we evaluate the connection values from the connection-form tables and then select the best translation word (a schematic sketch of this lookup follows below).

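The two-stage lookup described in this abstract can be illustrated with a small, purely hypothetical sketch; the table entries, connection forms, and scores below are invented and are not taken from the paper:

```python
# Hypothetical entry table: Japanese word -> [(Korean candidate, connection form)].
ENTRY_TABLE = {
    "本": [("책", "N")],
    "いる": [("있다", "V-plain"), ("계시다", "V-honorific")],
}

# Hypothetical connection-forms table: (left neighbor's form, candidate's form) -> value.
CONNECTION_VALUE = {
    ("N", "V-plain"): 1.0,
    ("N", "V-honorific"): 0.2,
}

def translate(word, left_form=None):
    candidates = ENTRY_TABLE[word]
    if len(candidates) == 1:          # only one translation: direct matching
        return candidates[0][0]
    # several translations: pick the candidate with the best connection value
    return max(candidates,
               key=lambda c: CONNECTION_VALUE.get((left_form, c[1]), 0.0))[0]

print(translate("本"))                    # -> 책 (direct match)
print(translate("いる", left_form="N"))   # -> 있다 (best connection value)
```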

On the Development of a Continuous Speech Recognition System Using Continuous Hidden Markov Model for Korean Language (연속분포 HMM을 이용한 한국어 연속 음성 인식 시스템 개발)

  • Kim, Do-Yeong;Park, Yong-Kyu;Kwon, Oh-Wook;Un, Chong-Kwan;Park, Seong-Hyun
    • The Journal of the Acoustical Society of Korea / v.13 no.1 / pp.24-31 / 1994
  • In this paper, we report on the development of a speaker-independent continuous speech recognition system using continuous hidden Markov models. A continuous hidden Markov model consists of mean vectors and covariance matrices and models the speech signal parameters directly, and therefore has no quantization error. Filter bank coefficients with their 1st- and 2nd-order derivatives are used as feature vectors to represent the dynamic features of the speech signal (a delta-feature sketch follows below). We use the segmental K-means algorithm for training, and the triphone as the recognition unit, to alleviate the performance degradation caused by coarticulation, which is critical in continuous speech recognition. We also use the one-pass search algorithm, which is advantageous for speeding up recognition. Experimental results show that the system attains a recognition accuracy of 83% without grammar and 94% with finite state networks in speaker-independent speech recognition.

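The 1st- and 2nd-order derivative (delta and delta-delta) features mentioned above can be computed as in the following sketch; the regression-window formulation is a common choice and an assumption here, since the abstract does not specify the exact computation:

```python
import numpy as np

def add_derivatives(feats, delta_win=2):
    """Append 1st- and 2nd-order time derivatives to a (frames x coeffs)
    filter-bank feature matrix using a simple regression window.
    Note: np.roll wraps around at the edges, a simplification here."""
    def delta(x):
        num = sum(n * (np.roll(x, -n, axis=0) - np.roll(x, n, axis=0))
                  for n in range(1, delta_win + 1))
        den = 2 * sum(n * n for n in range(1, delta_win + 1))
        return num / den
    d1 = delta(feats)      # 1st-order derivative (delta)
    d2 = delta(d1)         # 2nd-order derivative (delta-delta)
    return np.hstack([feats, d1, d2])

feats = np.random.randn(100, 20)     # 100 frames of 20 filter-bank coefficients
print(add_derivatives(feats).shape)  # (100, 60): static + delta + delta-delta
```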

Performance Improvement of Continuous Digits Speech Recognition using the Transformed Successive State Splitting and Demi-syllable pair (반음절쌍과 변형된 연쇄 상태 분할을 이용한 연속 숫자음 인식의 성능 향상)

  • Kim Dong-Ok;Park No-Jin
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.8 / pp.1625-1631 / 2005
  • This paper describes an optimization of the language model and the acoustic model to improve the recognition of Korean connected unit digits. Recognition errors of the language model are reduced by analyzing the grammatical features of Korean unit digits, and the model is then built as an FSN (finite state network) whose nodes are disyllables (a toy FSN sketch follows below). The acoustic model uses demi-syllable pairs to reduce the recognition errors caused by inaccurate segmentation of phones and syllables, which result from monosyllables, short pronunciations, and coarticulation. For efficient modeling of the features of the recognition unit, we used the k-means clustering algorithm with transformed successive state splitting at the feature level. In the experiments, the proposed language model raised the recognition rate by 10.5%, the demi-syllable-pair acoustic model raised it by 12.5%, and transformed successive state splitting improved it by a further 1.5%.
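The following toy sketch shows what a finite state network grammar for connected digit strings might look like; the topology and vocabulary are invented for illustration and are not the paper's actual disyllable-node network:

```python
# Toy FSN grammar for connected Korean digit strings: any digit may
# follow any digit, and a string may end after at least one digit.
DIGIT_WORDS = ["영", "일", "이", "삼", "사", "오", "육", "칠", "팔", "구"]

FSN = {
    "START": {w: "DIGIT" for w in DIGIT_WORDS},               # any digit starts
    "DIGIT": {**{w: "DIGIT" for w in DIGIT_WORDS}, "<end>": "END"},
}

def accepts(words):
    """A word sequence is in the grammar iff it walks START..DIGIT..END."""
    state = "START"
    for w in words:
        if w not in FSN[state]:
            return False
        state = FSN[state][w]
    return "<end>" in FSN.get(state, {})

print(accepts(["일", "이", "삼", "사"]))  # True: a four-digit string
print(accepts([]))                        # False: the empty string is rejected
```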

A Comparative Study of Chinese Translations of 『Who ate all the Shinga?』 - Focusing on the Translation strategy of 4 types of Translations (『그 많던 싱아는 누가 다 먹었을까』의 중국어 번역본 비교 연구 - 4종 번역본의 번역전략을 중심으로)

  • YANG, LEI;MOON, DAE IL
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.403-408 / 2022
  • This study analyzed the translation strategies of four Chinese translations of 『Who ate all the Shinga?』. As is well known, Park Wan-seo's works contain many psychological descriptions, abstract vocabulary, idioms, proverbs, dialects, and so on, so translating them into Chinese requires various strategies such as direct translation, interpretive translation, and creative translation. Although the four translations studied in this paper differ somewhat by translator, all of them used these strategies in a comprehensive way. The study found that all four translations used direct translation into Chinese characters when rendering geographical names and personal names. The interpretive translation strategy was used for vocabulary that requires historical, social, cultural, or geographical background to interpret. The creative translation strategy was used when translating overlapping expressions, politically and historically sensitive issues, and issues related to Korean pronunciation and grammar. Based on these results, it is expected that research on translation strategies will be extended to the various Chinese translations of modern Korean literature as well as to other Chinese translations of Park Wan-seo.

A Processing of Progressive Aspect "te-iru" in Japanese-Korean Machine Translation (일한기계번역에서 진행형 "ている"의 번역처리)

  • Kim, Jeong-In;Mun, Gyeong-Hui;Lee, Jong-Hyeok
    • The KIPS Transactions: Part B / v.8B no.6 / pp.685-692 / 2001
  • This paper describes how to disambiguate the aspectual meaning of the Japanese expression "-te iru" in Japanese-Korean machine translation. Due to the grammatical similarities of the two languages, almost all Japanese-Korean MT systems have been developed under the direct MT strategy, in which lexical disambiguation is essential to high-quality translation. Japanese has a progressive aspectual marker "-te iru" that is difficult to translate into Korean equivalents, because Korean has two different progressive aspectual markers: "-ko issta" for the action progressive and "-e issta" for the state progressive. Moreover, the aspectual systems of the two languages do not quite coincide, so the Korean progressive aspect cannot be determined from the Japanese meaning of "-te iru" alone. The progressive aspectual meaning may be partially determined by the meaning of the predicate, and the semantics of the predicate may in turn be partially restricted by adverbials, so all Japanese predicates are classified into five classes: the 1st class is used only for the action progressive; the 2nd generally for the action progressive but occasionally for the state progressive; the 3rd only for the state progressive; the 4th generally for the state progressive but occasionally for the action progressive; and the 5th for the others. Some heuristic rules are defined to disambiguate the 2nd and 4th classes on the basis of adverbs and adverbial phrases (a sketch of such rules follows below). In an experimental evaluation using more than 15,000 sentences from the Asahi newspaper, the proposed method improved the translation quality by about 5%, which proves that it is effective in disambiguating "-te iru" for Japanese-Korean machine translation.

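The five-class scheme and the adverb-based heuristics described above might be realized along the lines of this sketch; the verb classes, adverb lists, and rules are invented placeholders, not the paper's actual tables:

```python
# Hypothetical predicate classification (1..5) and adverb cue sets.
VERB_CLASS = {"食べる": 1, "着る": 2, "知る": 3, "持つ": 4}
ACTION_ADVERBS = {"今", "ずっと"}    # hint at an ongoing action
STATE_ADVERBS = {"もう", "すでに"}   # hint at a resulting state

def translate_te_iru(verb, adverbs=()):
    cls = VERB_CLASS.get(verb, 5)            # unknown verbs fall into class 5
    adv = set(adverbs)
    if cls == 1:
        return "-ko issta"                   # action progressive only
    if cls == 3:
        return "-e issta"                    # state progressive only
    if cls == 2:                             # default action; adverbs may flip
        return "-e issta" if adv & STATE_ADVERBS else "-ko issta"
    if cls == 4:                             # default state; adverbs may flip
        return "-ko issta" if adv & ACTION_ADVERBS else "-e issta"
    return "-ko issta"                       # class 5: fall back to the common case

print(translate_te_iru("着る"))              # -> -ko issta
print(translate_te_iru("持つ", ["今"]))      # -> -ko issta (adverb overrides default)
```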

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.161-177 / 2019
  • In this paper, we study how the result of sentence dependency parsing can improve answer extraction in a question-answering system. A Question-Answering (QA) system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents; various studies have been conducted on both. To improve the performance of answer extraction, the grammatical information of sentences must be reflected accurately. Because Korean has free word order and frequently omits sentence components, dependency parsing is a good way to analyze Korean syntax. Therefore, in this study, we improved answer extraction by adding features generated from the dependency parsing result to the inputs of the answer extraction model (a bidirectional LSTM-CRF). We compared the performance of the model when given only basic word features generated without dependency parsing against its performance when the Eojeol tag feature and the dependency graph embedding feature were added. Since dependency parsing operates on the Eojeol, the basic unit of a Korean sentence delimited by spaces, the tag information of each Eojeol is obtained as part of the parsing result; the Eojeol tag feature is this tag information. The dependency graph embedding is generated in two steps: building a dependency graph from the parsing result and learning an embedding of that graph. From the parsing result, a graph is built with the Eojeols as nodes, the dependencies between Eojeols as edges, and the Eojeol tags as node labels; depending on whether the direction of the dependency relation is considered, either an undirected or a directed graph is generated (a construction sketch follows below). To obtain the embedding of the graph, we used Graph2Vec, which derives a graph's embedding from the subgraphs that constitute it. The maximum path length between nodes can be specified when extracting the subgraphs: with a maximum path length of 1, the graph embedding reflects only direct dependencies between Eojeols, and larger values include increasingly indirect dependencies. In the experiments, the maximum path length between nodes was varied from 1 to 3, with and without dependency direction, and answer extraction performance was measured. The results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph was embedded with a maximum path length of 1 in the Graph2Vec subgraph extraction. From these experiments we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words.
The significance of this study is as follows. First, we improved answer extraction performance by adding features based on dependency parsing results, taking into account the characteristics of Korean, with its free word order and frequent omission of sentence components. Second, we generated the dependency parsing feature by a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows: in this study, the features generated from dependency parsing are applied only to the answer extraction model; if the performance gain is also confirmed on other natural language processing models such as sentiment analysis or named entity recognition, the validity of the features can be verified more accurately.
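The graph-construction step described above can be sketched with a toy parse; the sentence, Eojeol tags, and head indices below are invented, and networkx is assumed merely as a convenient graph container:

```python
import networkx as nx

# Toy parse of a three-Eojeol sentence ("I go to school").
eojeols = ["나는", "학교에", "간다"]
tags = ["NP_SBJ", "NP_AJT", "VP"]   # Eojeol tags from the parser
heads = [2, 2, None]                # each Eojeol's head index (None = root)

g = nx.DiGraph()                    # directed: dependency direction is kept
for i, (e, t) in enumerate(zip(eojeols, tags)):
    g.add_node(i, label=t, eojeol=e)        # one node per Eojeol, tag as label
for i, h in enumerate(heads):
    if h is not None:
        g.add_edge(i, h)                    # edge from dependent to head

# The undirected variant the abstract mentions is g.to_undirected().
# The resulting graphs would then be fed to a Graph2Vec implementation
# (e.g. the one in the karateclub package) to learn whole-graph embeddings.
print(g.nodes(data=True))
print(g.edges())
```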

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English text. In deep-learning sentiment analysis of English text, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the model. Word vectors here generally mean vector representations of words obtained by splitting sentences at space characters. There are several ways to derive word vectors; one is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data, which have been widely used in sentiment analysis of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation method to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high ratio of homonyms? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews full of grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean text. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct text of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical text of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can a satisfactory level of classification accuracy be reached when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping, and 520,000 Naver News articles arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing: only sentence splitting, or additional spelling and spacing corrections after sentence separation. Third, they differ in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of the morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, with a context window of 5 and a vector dimension of 300 (a training sketch follows below). The results suggest that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags including the incomprehensible category lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included do not seem to have any definite influence on the classification accuracy.
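The reported vector settings (CBOW, context window 5, dimension 300) map directly onto a standard toolkit; the sketch below assumes gensim's Word2Vec implementation and uses a two-sentence stand-in corpus in place of the millions of review and news sentences the paper uses:

```python
from gensim.models import Word2Vec

# Reviews pre-split into morphemes (a tiny stand-in corpus).
corpus = [
    ["예쁘", "고", "배송", "도", "빠르", "다"],
    ["향", "이", "좋", "다"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=300,   # vector dimension 300, as in the abstract
    window=5,          # context window 5, as in the abstract
    sg=0,              # sg=0 selects CBOW
    min_count=1,       # minimum frequency is one of the paper's variables
)
print(model.wv["예쁘"].shape)  # (300,): one morpheme vector
```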

A Study on Performance Evaluation of Hidden Markov Network Speech Recognition System (Hidden Markov Network 음성인식 시스템의 성능평가에 관한 연구)

  • 오세진;김광동;노덕규;위석오;송민규;정현열
    • Journal of the Institute of Convergence Signal Processing / v.4 no.4 / pp.30-39 / 2003
  • In this paper, we evaluate the performance of an HM-Net (Hidden Markov Network) speech recognition system on Korean speech databases. Acoustic models are constructed using HM-Nets, a modification of HMMs (Hidden Markov Models), which are widely used statistical modeling methods. HM-Nets perform state splitting in the contextual and temporal domains by the PDT-SSS (Phonetic Decision Tree-based Successive State Splitting) algorithm, a modification of the original SSS algorithm. In particular, it adopts a phonetic decision tree so that, during contextual state splitting, context information that does not appear in the training speech data can be expressed effectively. In temporal state splitting, the duration information of each phoneme is represented effectively, and an optimal model network of triphones is then constructed. Speech recognition was performed using the one-pass Viterbi beam search algorithm with phone-pair/word-pair grammars for phoneme/word recognition, respectively, and a multi-pass search algorithm with n-gram language models for sentence recognition. A tree-structured lexicon was used to decrease the number of nodes by sharing common prefixes among words (a prefix-sharing sketch follows below). The performance of the HM-Net speech recognition system was evaluated under various recognition conditions. Through the experiments, we verified that it has superior recognition performance compared with previously introduced recognition systems.

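The tree-structured (prefix-sharing) lexicon mentioned above can be sketched as a simple trie; the words and phone sequences are invented for illustration:

```python
# A minimal prefix-tree (trie) lexicon: words that share a prefix share
# nodes, which is how a tree-structured lexicon shrinks the search network.
def build_trie(lexicon):
    root = {}
    for word, phones in lexicon.items():
        node = root
        for p in phones:
            node = node.setdefault(p, {})   # shared prefixes reuse nodes
        node["<word>"] = word               # mark the word end at the leaf
    return root

lexicon = {
    "간다": ["k", "a", "n", "d", "a"],
    "가자": ["k", "a", "j", "a"],
}
trie = build_trie(lexicon)
# The prefix "k a" is stored once and shared by both words:
print(list(trie["k"]["a"].keys()))  # ['n', 'j']
```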

Analysis of Korean Spontaneous Speech Characteristics for Spoken Dialogue Recognition (대화체 연속음성 인식을 위한 한국어 대화음성 특성 분석)

  • 박영희;정민화
    • The Journal of the Acoustical Society of Korea / v.21 no.3 / pp.330-338 / 2002
  • Spontaneous speech is ungrammatical and shows serious phonological variations, which make it far harder to recognize than read speech. In this paper, for conversational speech recognition, we analyze transcriptions of real conversational speech and classify the characteristics of conversational speech from the speech recognition point of view. Reflecting these features, we obtain a baseline system for conversational speech recognition. The characteristics fall into long stretches of silence, disfluencies, and phonological variations, each grouped with similar features. To deal with them, we first update the silence model and add a filled-pause model and a garbage model; second, we add multiple phonetic transcriptions to the lexicon for the most frequent phonological variations (a lexicon sketch follows below). In our experiments, the baseline morpheme error rate (MER) was 31.65%; we obtained MER reductions of 2.08% from the silence and garbage models, 0.73% from the filled-pause model, and 0.73% from the phonological variations. We finally obtained a 27.92% MER for conversational speech recognition, which will serve as a baseline for further study.
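Appending multiple phonetic transcriptions to the lexicon, as described above, amounts to mapping each entry to a set of pronunciation variants; the following sketch uses invented words and variants:

```python
# Lexicon with multiple phonetic transcriptions per entry, the abstract's
# remedy for frequent phonological variations in spontaneous speech.
lexicon = {
    "그리고": [["k", "ɯ", "r", "i", "g", "o"],   # canonical form
               ["k", "ɯ", "r", "i", "g", "u"]],  # frequent reduced variant
    "<filled_pause>": [["ɯ", "m"], ["ə"]],       # e.g. "음", "어"
}

def pronunciations(word):
    """All variants are treated as equally valid paths in the decoder."""
    return lexicon.get(word, [])

for variant in pronunciations("그리고"):
    print(" ".join(variant))
```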

A Typological Study of the Linguistic Representation of Comitative and Instrumental Relations, Exemplified by German and Korean (Eine typologische Studie zur sprachlichen Repräsentation der komitativen und instrumentalen Relationen am Beispiel des Deutschen und Koreanischen / 동반관계와 도구관계의 표현양식에 대한 언어유형학적 연구 - 독일어와 한국어를 중심으로 -)

  • Shin Yong-Min
    • Koreanische Zeitschrift für Deutsche Sprachwissenschaft / v.10 / pp.201-229 / 2004
  • In the present paper, the functional domain of concomitance is examined from a functional-typological point of view, and a functional framework is developed for the investigation of the various comitative and instrumental relations. The functional domain of concomitance contains a range of participant relations, namely 'Partner', 'Companion', 'Vehicle', 'Tool', 'Material', and 'Manner'; these participants can be called concomitants. They form a continuum with respect to the empathy hierarchy and the control hierarchy. The syntactic coding of concomitants can vary according to the specific type of concomitant relation and its absolute properties. Seven different types of coding strategies are presented: concomitant predication, adpositional marking, case marking, verb derivation, incorporation, conversion, and lexical fusion. Within a given language there can of course be finer variation, connected, for example, with the degree of grammaticalization and lexicalization of these coding strategies. The distribution of the structural devices across the functional domain of concomitance is not uniform within a language. This also holds for languages (e.g. German and English) that are usually cited as typical examples of syncretism between the comitative and the instrumental. The German preposition mit expresses only a subset of the various concomitant relations: 'Vehicle' and 'Material' are not expressed by mit. In expressing concomitant relations, German consistently uses the strategy of adpositional marking; other possible strategies can be regarded as minimal special cases. Korean, by contrast, has two expression strategies, namely concomitant predication and case marking, and the choice between them depends mostly on the empathy properties of the concomitants. Within a single strategy, too, the empathy of the concomitants plays an important role in the choice of structural devices. In Korean, 'Partner' and 'Companion' on the one hand, and 'Vehicle', 'Tool', and 'Material' on the other, show syncretism: the former are expressed by the comitative case marking and the latter by the instrumental case marking. In languages that have only concomitant predication, no syncretism in the expression of the various concomitant relations is to be expected, since a concomitant predicate renders the manner in which the concomitant participates in the situation more or less explicitly in its lexical meaning.
