• Title/Summary/Keyword: 단어 문맥 (word context)


A Study on the Construction of the Automatic Summaries - on the basis of Straight News in the Web - (자동요약시스템 구축에 대한 연구 - 웹 상의 보도기사를 중심으로 -)

  • Lee, Tae-Young
    • Journal of the Korean Society for Information Management
    • /
    • v.23 no.4 s.62
    • /
    • pp.41-67
    • /
    • 2006
  • A writing frame and rules based on discourse structure and knowledge-based methods were applied to construct an automatic extract/summary system for straight news on the web. The frame contains slots and facets representing the roles of paragraphs, sentences, and clauses in news, together with rules that determine the slot type. Candidate summary sentences were rearranged (unified, separated, and synthesized) while maintaining coherence of meaning, using rules derived from similarity measurement, syntactic information, discourse structure, and knowledge-based methods, along with context plots defined by the syntactic/semantic signatures of nouns and verbs and the categories of verb suffixes. Insertion of critic sentences into the summary was also attempted.
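The extract-style pipeline above selects candidate sentences by similarity measurement. As a minimal illustration (not the paper's frame/rule system), the sketch below ranks sentences by cosine similarity of term-frequency vectors against the whole document and keeps the top-k in original order; all names are hypothetical:

```python
from collections import Counter
import math

def tf_vector(text):
    # Simple term-frequency vector over whitespace tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def extract_summary(sentences, k=2):
    # Rank each candidate sentence by similarity to the whole document
    # and keep the top-k, preserving original order for coherence.
    doc_vec = tf_vector(" ".join(sentences))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: cosine(tf_vector(sentences[i]), doc_vec),
                    reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]
```

A real system would add the paper's rearrangement rules on top of this selection step.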

Combinatory Categorial Grammar for the Syntactic, Semantic, and Discourse Analyses of Coordinate Constructions in Korean (한국어 병렬문의 통사, 의미, 문맥 분석을 위한 결합범주문법)

  • Cho, Hyung-Joon;Park, Jong-Cheol
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.4
    • /
    • pp.448-462
    • /
    • 2000
  • Coordinate constructions in natural language pose a number of difficulties to natural language processing units, due to the increased complexity of syntactic analysis, the syntactic ambiguity of the involved lexical items, and the apparent deletion of predicates in various places. In this paper, we address the syntactic characteristics of the coordinate constructions in Korean from the viewpoint of constructing a competence grammar, and present a version of combinatory categorial grammar for the analysis of coordinate constructions in Korean. We also show how to utilize a unified lexicon in the proposed grammar formalism in deriving the sentential semantics and associated information structures as well, in order to capture the discourse functions of coordinate constructions in Korean. The presented analysis conforms to the common wisdom that coordinate constructions are utilized in language not simply to reduce multiple sentences to a single sentence, but also to convey the information of contrast. Finally, we provide an analysis of sample corpora for the frequency of coordinate constructions in Korean and discuss some problematic cases.
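Forward and backward application, the core combinatory rules such a grammar relies on, can be sketched with categories encoded as nested tuples. This is a simplified toy illustrating the rule schema, not the paper's full CCG formalism:

```python
# Categories are atomic strings ("NP", "S") or tuples (slash, result, argument),
# so the transitive-verb category (S\NP)\NP is ("\\", ("\\", "S", "NP"), "NP").
def forward_apply(fn, arg):
    # Forward application (>): X/Y combined with Y yields X.
    if isinstance(fn, tuple) and fn[0] == "/" and fn[2] == arg:
        return fn[1]
    return None

def backward_apply(arg, fn):
    # Backward application (<): Y combined with X\Y yields X.
    if isinstance(fn, tuple) and fn[0] == "\\" and fn[2] == arg:
        return fn[1]
    return None

# A toy verb-final derivation, as in Korean: NP  NP  (S\NP)\NP  =>  S
verb = ("\\", ("\\", "S", "NP"), "NP")   # transitive verb category
vp = backward_apply("NP", verb)          # object NP + verb -> S\NP
s = backward_apply("NP", vp)             # subject NP + VP -> S
```

Coordination itself would need the further combinators (e.g. type raising and composition) that the paper develops for Korean coordinate constructions.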


Lip-Synch System Optimization Using Class Dependent SCHMM (클래스 종속 반연속 HMM을 이용한 립싱크 시스템 최적화)

  • Lee, Sung-Hee;Park, Jun-Ho;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.7
    • /
    • pp.312-318
    • /
    • 2006
  • The conventional lip-synch system has a two-step process: speech segmentation and recognition. However, the difficulty of the segmentation procedure and the inaccuracy of the training data it produces lead to significant performance degradation. To cope with this, a connected-vowel recognition method using a Head-Body-Tail (HBT) model is proposed. The HBT model, appropriate for relatively small vocabulary tasks, reflects co-articulation effects efficiently. Moreover, the 7 vowels are merged into 3 classes with similar lip shapes, and the system is optimized with a class-dependent SCHMM structure. Additionally, at both end sides of each word, where variation is large, an 8-component Gaussian mixture model is used directly to improve representational ability. Although the proposed method shows performance similar to a CHMM based on the HBT structure, the number of parameters is reduced by 33.92%, making the method computationally efficient and enabling real-time operation.
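The saving a semi-continuous HMM offers is that Gaussians live in a codebook shared across all states of a class, so each state stores only mixture weights. A minimal sketch under assumed vowel-class labels (the paper's actual 7-to-3 merge is not specified here):

```python
import math

# Hypothetical merge of 7 vowels into 3 lip-shape classes (labels assumed).
VOWEL_CLASS = {"a": 0, "ya": 0, "eo": 0, "i": 1, "eu": 1, "o": 2, "u": 2}

def gauss(x, mean, var):
    # 1-D Gaussian density.
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def state_likelihood(x, weights, class_codebook):
    # Semi-continuous state likelihood: the Gaussians (mean, var) belong to
    # the class codebook shared by every state of the class; each state
    # contributes only its mixture weights.
    return sum(w * gauss(x, m, v) for w, (m, v) in zip(weights, class_codebook))
```

Replacing per-state Gaussians with a shared per-class codebook is what yields the parameter reduction the abstract reports.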

Die Aktualgenese von Nominalkomposita im Deutschen (독일어 '임시복합명사'의 생성과정과 해석)

  • Oh Young-Hun
    • Koreanische Zeitschrift für Deutsche Sprachwissenschaft
    • /
    • v.6
    • /
    • pp.1-21
    • /
    • 2002
  • An 'ad-hoc nominal compound' refers to a compound noun whose constituent nouns are each stored in a speaker's mental lexicon, but whose combination is not registered in ordinary dictionaries, its meaning being formed anew according to context. This paper therefore refers to such semantically ambiguous compound nouns, absent from dictionary listings, as ad-hoc Nominalkomposita. In generating such a compound, each constituent element serves as the 'input' needed to build the new compound noun. The paper examines a variety of principles required to construct ad-hoc nominal compounds; grounded in purely linguistic arguments, these principles validate the process of producing and interpreting them. Other kinds of compounds, whose form and interpretation become possible only by incorporating general world knowledge (Weltwissen) and structures fitting the textual context, are not treated in detail. The generation and interpretation process presented here mostly describes not phenomena peculiar to particular compounds but the general process of producing and interpreting compounds, and this applies equally to compounds interpretable independently of textual context and to those interpretable only through it. How compounds that cannot be interpreted from the textual context alone are assigned a definite meaning, for example by drawing on general knowledge in communication, will clearly be a theme for further research. Likewise, the preconditions treated here for producing ad-hoc nominal compounds will serve as groundwork for studying the production of other new compounds, for example compounds involving verbs derived from nouns.


Terminology Tagging System using elements of Korean Encyclopedia (백과사전 기반 전문용어 태깅 시스템)

Korean Coreference Resolution at the Morpheme Level (형태소 수준의 한국어 상호참조해결)

  • Kyeongbin Jo;Yohan Choi;Changki Lee;Jihee Ryu;Joonho Lim
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.329-333
    • /
    • 2022
  • Coreference resolution is the natural language processing task of identifying mentions in a given document and grouping the mentions that refer to the same entity. Recent work has mainly studied end-to-end models that obtain contextual word representations with BERT and perform mention detection and coreference resolution jointly. However, the end-to-end approach must treat every span as a potential mention, which requires large amounts of memory and increases time complexity. This paper applies to Korean a word-level coreference model that maps subtokens back to word units, and, to reflect the characteristics of Korean coreference, adds named-entity features and dependency-parse features to the model's token representations. In experiments on the ETRI question-answering domain evaluation set, the model achieved an F1 of 69.55%, a 0.54% improvement over the existing end-to-end coreference model, while using 2.4 times less memory and running 1.82 times faster.
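The word-level trick of mapping subtokens back to words can be sketched as below, assuming WordPiece-style "##" continuation markers and mean-pooling of subtoken vectors; this is a simplification of the model's token representation, and all names are hypothetical:

```python
def word_level_vectors(subtokens, vectors):
    # Pool WordPiece-style subtokens ("##" marks a continuation piece)
    # back into word units, averaging each word's subtoken vectors.
    words, pooled, current, acc = [], [], None, []
    for tok, vec in zip(subtokens, vectors):
        if tok.startswith("##") and current is not None:
            current += tok[2:]
            acc.append(vec)
        else:
            if current is not None:
                words.append(current)
                pooled.append([sum(d) / len(acc) for d in zip(*acc)])
            current, acc = tok, [vec]
    if current is not None:
        words.append(current)
        pooled.append([sum(d) / len(acc) for d in zip(*acc)])
    return words, pooled
```

Scoring candidate antecedents over words instead of all subtoken spans is what shrinks the memory footprint the abstract reports.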


Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one way to handle big data in text mining. The density of the data significantly influences sentence-classification performance, and higher-dimensional data demands heavy computation, which can lead to high computational cost and overfitting; a dimension-reduction step is therefore necessary to improve model performance. Proposed methods range from simply reducing noise, such as misspellings and informal text, to incorporating semantic and syntactic information. Moreover, how text features are expressed and selected affects classifier performance in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that represents the raw data from the observation space. Existing methods use various algorithms, such as feature extraction and feature selection; word embeddings, which learn low-dimensional vector representations of words capturing semantic and syntactic information, are also used. To improve performance, recent studies have modified the word dictionary according to positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature-selection algorithm marks certain words as unimportant, words similar to them are assumed to likewise have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec.
To select words of low importance from the text, information gain measures importance and cosine similarity finds similar words. First, words with comparatively low information gain are eliminated from the raw text before forming word embeddings. Second, words similar to those low-information-gain words are additionally removed. The filtered text and word embeddings are then fed to two deep-learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. The study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews; since Yelp shows only the number of helpful votes, 100,000 reviews with more than five helpful votes were extracted by random sampling from 750,000. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, performance was compared against Word2Vec and GloVe embeddings built from all words. One of the proposed methods outperformed the all-word embeddings, showing that removing unimportant words improves performance, though removing too many words lowered it. Future research should consider diverse preprocessing and in-depth analysis of word co-occurrence for measuring similarity. The proposed method was applied only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods.
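The two elimination steps, information-gain filtering followed by removal of embedding-similar words, can be sketched as below. The thresholds and the toy data are illustrative assumptions, not the paper's settings:

```python
import math

def entropy(labels):
    # Shannon entropy of a label list, in bits.
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(docs, labels, word):
    # IG(word) = H(Y) - sum over {word present, absent} of P(split) * H(Y|split)
    with_w = [y for d, y in zip(docs, labels) if word in d]
    without = [y for d, y in zip(docs, labels) if word not in d]
    n = len(labels)
    cond = sum(len(part) / n * entropy(part) for part in (with_w, without) if part)
    return entropy(labels) - cond

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def words_to_remove(vocab, docs, labels, emb, ig_cut=0.05, sim_cut=0.9):
    # Step 1: drop words below the information-gain threshold.
    low = {w for w in vocab if information_gain(docs, labels, w) < ig_cut}
    # Step 2: also drop words whose embedding is very close to a dropped word.
    near = {w for w in vocab - low
            if any(cosine(emb[w], emb[r]) > sim_cut for r in low)}
    return low | near
```

Here "movie" survives the information-gain filter but is removed because its (toy) embedding sits next to the eliminated "film".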

Named Entity Recognition and Dictionary Construction for Korean Title: Books, Movies, Music and TV Programs (한국어 제목 개체명 인식 및 사전 구축: 도서, 영화, 음악, TV프로그램)

  • Park, Yongmin;Lee, Jae Sung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.7
    • /
    • pp.285-292
    • /
    • 2014
  • Named entity recognition is used to improve the performance of information retrieval, question answering, and machine translation systems. The usual targets of named entity recognition are PLOs (persons, locations, and organizations), which are typically proper nouns or unregistered words; traditional recognizers exploit these characteristics to find candidates. The titles of books, movies, and TV programs have different characteristics from PLO entities: they may be multi-word phrases, whole sentences, or contain special characters, which makes candidate identification difficult. This paper proposes a method to quickly extract title named entities from news articles and automatically build a named entity dictionary of titles. For candidate identification, word phrases enclosed by special symbols in a sentence are first extracted and then verified by an SVM using feature words and their distances. The extracted title candidates are then classified by an SVM using the mutual information of word contexts.
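The first stage, extracting phrases enclosed by special symbols, can be sketched with a regular expression. The symbol inventory below is an assumption, since the abstract does not list the actual symbols:

```python
import re

# Assumed enclosing symbols: <...>, 《...》, and '...' (hypothetical inventory).
TITLE_PATTERN = re.compile(r"<([^<>]+)>|《([^《》]+)》|'([^']+)'")

def title_candidates(sentence):
    # Return every symbol-enclosed phrase in order of appearance; in the
    # paper's pipeline, SVMs then verify and classify these candidates.
    return [g for m in TITLE_PATTERN.finditer(sentence)
            for g in m.groups() if g]
```

The SVM verification step is what separates genuine titles from ordinary quoted speech that the same symbols enclose.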

Phonological development of children aged 3 to 7 under the condition of sentence repetition (문장 따라말하기 과제에서 3~7세 아동의 말소리발달)

  • Kim, Soo-Jin;Park, Na rae;Chang, Moon Soo;Kim, Young Tae;Shin, Moonja;Ha, Ji-Wan
    • Phonetics and Speech Sciences
    • /
    • v.12 no.1
    • /
    • pp.85-95
    • /
    • 2020
  • Sentence repetition is a way of evaluating speech-sound production that overcomes the limitations of word tests and spontaneous-speech analysis. Speech sounds produced by children can be evaluated with several indicators. This study examined the progression of the percentage of correct consonants-revised (PCC-R) and phonological whole-word measures across age and gender groups, after setting consonants in various vowel contexts and designing sentence repetition tasks that give every phoneme the chance to appear at least three times. Eleven sentence repetition tasks were administered to 535 children aged 3 to 7 across the country, and the resulting PCC-R and whole-word measures were analyzed. All indicators improved with age, with significant differences by age but no significant differences by gender. The data were collected nationwide, with six months between adjacent age groups. This study is noteworthy because it collected a sufficient amount of data from each group, addressed the limitations of word naming and spontaneous-speech analysis, and suggests new evaluation criteria through the analysis of whole-word measures under sentence repetition, which previous studies had not applied.
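Both indicators reduce to simple proportions once the target and produced transcriptions are aligned. A minimal sketch, with alignment assumed already done and function names hypothetical:

```python
def pcc_r(target_consonants, produced_consonants):
    # PCC-R: share of target consonants produced correctly, as a percentage.
    # (In PCC-R, distortions count as correct; this transcription-level
    # comparison can only honor that if they are transcribed identically.)
    correct = sum(t == p for t, p in zip(target_consonants, produced_consonants))
    return 100.0 * correct / len(target_consonants)

def whole_word_correct(target_words, produced_words):
    # Whole-word measure: proportion of words produced entirely correctly.
    correct = sum(t == p for t, p in zip(target_words, produced_words))
    return correct / len(target_words)
```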

A Study on Regression Class Generation of MLLR Adaptation Using State Level Sharing (상태레벨 공유를 이용한 MLLR 적응화의 회귀클래스 생성에 관한 연구)

  • 오세진;성우창;김광동;노덕규;송민규;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.8
    • /
    • pp.727-739
    • /
    • 2003
  • In this paper, we propose a method for generating regression classes for adaptation in the HM-Net (Hidden Markov Network) system. The MLLR (Maximum Likelihood Linear Regression) adaptation approach is applied to the HM-Net speech recognition system to express speaker characteristics effectively and to allow HM-Net to be used in various tasks. For state-level sharing, the context-domain state splitting of the PDT-SSS (Phonetic Decision Tree-based Successive State Splitting) algorithm, which clusters in both the contextual and time domains, is adopted. In each state of the contextual domain, the desired phoneme classes are determined by splitting the context information (classes), including the target speaker's speech data. The number of adaptation parameters, such as means and variances, is controlled autonomously by the PDT-SSS context-domain state splitting, depending on the context information and the amount of adaptation utterances from a new speaker. Experiments on the KLE (The Center for Korean Language Engineering) 452 data and YNU (Yeungnam Univ.) 200 data verify the effectiveness of the proposed method. The results show that the accuracies of phone, word, and sentence recognition increased by 34∼37%, 9%, and 20%, respectively. Performance is also significantly improved even with short adaptation utterances. We can therefore argue that the proposed regression-class method applies well to an HM-Net speech recognition system employing MLLR speaker adaptation.
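MLLR's central operation is a per-regression-class affine transform of Gaussian means. A minimal sketch of applying an already-estimated transform (estimating A and b from the adaptation data, the hard part, is omitted):

```python
def mllr_adapt_means(means, A, b):
    # MLLR mean adaptation: every Gaussian mean in one regression class is
    # moved by the same affine map, mu' = A @ mu + b, estimated from the
    # new speaker's adaptation data.
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    return [[y + bi for y, bi in zip(matvec(A, mu), b)] for mu in means]
```

Because one (A, b) pair serves all Gaussians in a class, how the regression classes are generated, the topic of this paper, directly controls how well scarce adaptation data is shared.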