• Title/Summary/Keyword: Compound Word


A Reverse Segmentation Algorithm of Compound Nouns (복합명사의 역방향 분해 알고리즘)

  • Lee, Hyeon-Min;Park, Hyeok-Ro
    • The KIPS Transactions:PartB / v.8B no.4 / pp.357-364 / 2001
  • This paper proposes a new algorithm that decomposes Korean compound nouns using a unit-noun dictionary and an affix dictionary. Since the head of a Korean compound noun appears at the end, the proposed algorithm segments a compound noun from the last syllable toward the first, i.e., in the reverse direction. In an experiment on 3,230 compound nouns extracted from an ETRI-tagged corpus, the algorithm achieved a segmentation accuracy of about 96.6%; for compound nouns containing unregistered words, the accuracy was 77.5%. Most of the unregistered words in the test data were derived words containing affixes, and the proposed algorithm showed comparatively high accuracy in analyzing such affix-attached unregistered words. (A minimal reverse-segmentation sketch appears after this entry.)

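The reverse segmentation described above can be illustrated with a small sketch. This is a minimal illustration assuming a toy unit-noun dictionary and a longest-suffix-first match with backtracking; the paper's actual dictionaries, affix handling, and disambiguation are not reproduced here.

```python
# Minimal sketch of reverse (right-to-left) compound-noun segmentation.
# The toy UNIT_NOUNS dictionary and the longest-suffix-first strategy are
# illustrative assumptions, not the paper's actual resources or scoring.

UNIT_NOUNS = {"정보", "검색", "시스템", "복합", "명사"}

def segment_reverse(word):
    """Split a compound noun by matching the longest known suffix first."""
    if not word:
        return []
    for i in range(len(word)):            # i = 0 gives the longest suffix
        suffix = word[i:]
        if suffix in UNIT_NOUNS:
            rest = segment_reverse(word[:i])
            if rest is not None:           # backtrack if the prefix fails
                return rest + [suffix]
    return None  # no segmentation found with this dictionary

if __name__ == "__main__":
    print(segment_reverse("정보검색시스템"))  # ['정보', '검색', '시스템']
```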

A Study on Word Concept-based Compound Keyword Extraction (단어개념에 기반 한 한국어 복합키워드의 추출)

  • Kim, Yang-Seon;Lee, Sang-Kon
    • Proceedings of the Korea Information Processing Society Conference / 2003.11a / pp.477-480 / 2003
  • When a document is read and its content is organized conceptually, a small number of compound-word keywords that represent the document can be found. This poses no particular problem when such keywords actually appear in the document, but when they do not, suitable keywords cannot be extracted. This paper therefore builds compound-word generation rules based on the concept information of the words appearing in the document body, and further applies an importance-weighting method that keeps only the elements related to the document's meaning; the usefulness of the approach is confirmed. (A hypothetical rule-based sketch follows this entry.)

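The abstract only outlines the approach, so the sketch below is hypothetical: it tags words with toy concept labels, joins adjacent words whose concept pair matches a generation rule, and keeps candidates above a frequency-based importance threshold. The concept dictionary, the rules, and the weighting are all assumptions.

```python
# Hypothetical sketch of concept-based compound-keyword generation.
# CONCEPTS, COMPOUND_RULES, and the frequency threshold are toy assumptions.

from collections import Counter

CONCEPTS = {"정보": "OBJECT", "검색": "ACTION", "시스템": "OBJECT"}
COMPOUND_RULES = {("OBJECT", "ACTION"), ("ACTION", "OBJECT")}

def compound_candidates(tokens):
    """Join adjacent tokens whose concept pair matches a generation rule."""
    for a, b in zip(tokens, tokens[1:]):
        if (CONCEPTS.get(a), CONCEPTS.get(b)) in COMPOUND_RULES:
            yield a + b

def extract_keywords(tokens, min_count=2):
    counts = Counter(compound_candidates(tokens))
    return [kw for kw, c in counts.items() if c >= min_count]

if __name__ == "__main__":
    doc = ["정보", "검색", "시스템", "정보", "검색"]
    print(extract_keywords(doc))  # ['정보검색']
```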

Morphological Analysis of the Korean Language (한국어의 형태소해석)

  • Lee, Soo-Hyon;Ozawa, S.;Lee, Joo-Keun
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.4 / pp.53-61 / 1989
  • A morphological analysis is described for extracting the information required in syntactic and semantic analysis of Korean. Nouns and particles are separated within a noun phrase (see the sketch after this entry), selection conditions are specified for analyzing compound nouns, and a restoring rule is presented for processing irregular compound nouns. Stems and endings are separated in regular verbals, and a logical representative form is proposed for irregularly inflected words and contracted vowels; this representation consists of attribute values and an analyzing rule. Redundancy of nouns in the dictionary is reduced by processing a "noun + HA-" verb as "noun" and "HA-" separately, and the predicative "IDA" is analyzed with a Q parameter. The processing form of negation is also derived, and the morphemes and basic structure of compound predicative parts are presented.

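The noun/particle separation step can be sketched as stripping a known postpositional particle from the end of a noun phrase. The particle list and noun dictionary below are toy assumptions; the paper's selection conditions and restoring rules are omitted.

```python
# Simplified sketch of separating a noun and a postpositional particle.
# PARTICLES and NOUNS are toy data used only to illustrate the idea.

PARTICLES = ["으로", "에서", "은", "는", "이", "가", "을", "를", "의", "에"]
NOUNS = {"학교", "컴퓨터", "형태소"}

def split_noun_phrase(phrase):
    """Return (noun, particle) if a known particle ends the phrase."""
    for p in sorted(PARTICLES, key=len, reverse=True):  # longest particle first
        if phrase.endswith(p) and phrase[:-len(p)] in NOUNS:
            return phrase[:-len(p)], p
    return phrase, None  # no particle recognized

if __name__ == "__main__":
    print(split_noun_phrase("학교에서"))  # ('학교', '에서')
    print(split_noun_phrase("컴퓨터를"))  # ('컴퓨터', '를')
```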

Research on the Syntactic-Semantic Analysis System on Compound Sentence for Descriptive-type Grading (서술형 문항 채점을 위한 복합문 구문의미분석 시스템에 대한 연구)

  • Kang, WonSeog
    • The Journal of Korean Association of Computer Education / v.21 no.6 / pp.105-115 / 2018
  • Descriptive-type questions are well suited to evaluating deep thinking ability, but they are not easy to grade: even with the same grading criteria, graders produce different scores, so an objective evaluation system is needed. Such a system requires Korean-language analysis, and because descriptive-type answers are written as compound sentences, it must be able to analyze compound sentences. This paper develops a Korean syntactic-semantic analysis system for compound sentences and evaluates its performance. The system selects the modifiee of each word phrase using syntactic-semantic constraints and a semantic dictionary, and its 93% accuracy shows that it is effective. The system can be utilized in descriptive-type grading and Korean-language processing. (A simplified modifiee-selection sketch follows this entry.)
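
The modifiee selection can be illustrated with a deliberately small stand-in: among the candidate heads to the right of a modifier, pick the nearest one that satisfies a semantic compatibility constraint. The semantic classes and the compatibility table are hypothetical, not the paper's dictionary.

```python
# Hypothetical sketch of modifiee selection with a semantic constraint.
# SEM_CLASS and COMPATIBLE are illustrative assumptions.

SEM_CLASS = {"빠른": "SPEED", "자동차": "VEHICLE", "생각": "ABSTRACT"}
COMPATIBLE = {("SPEED", "VEHICLE")}  # SPEED modifiers may attach to VEHICLE heads

def select_modifiee(modifier, candidates):
    """Return the nearest right-hand candidate compatible with the modifier."""
    m_class = SEM_CLASS.get(modifier)
    for head in candidates:              # candidates are ordered left to right
        if (m_class, SEM_CLASS.get(head)) in COMPATIBLE:
            return head
    return candidates[-1] if candidates else None  # default: the last phrase

if __name__ == "__main__":
    # '빠른' (fast) should skip '생각' (thought) and attach to '자동차' (car).
    print(select_modifiee("빠른", ["생각", "자동차"]))  # 자동차
```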

A Word Dictionary Structure for the Postprocessing of Hangul Recognition (한글인식 후처리용 단어사전의 기억구조)

  • ;Yoshinao Aoki
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.9 / pp.1702-1709 / 1994
  • In the postprocessing stage of a Hangul recognition system, the storage structure of contextual information strongly affects the recognition rate and speed of the entire system. A trie is commonly used to represent this context as a word dictionary, but its memory-space efficiency is low. We therefore propose a new word-dictionary structure that has better space efficiency while keeping the merits of a trie. Because Hangul syllables are composed of phonemes, a word can be represented either by phonemes (P-mode) or by characters (C-mode): retrieval is fast but space efficiency is low in P-mode, while space efficiency is high but retrieval is slow in C-mode. In this paper the two representations are combined into a hybrid representation (H-mode). First, an optimal level for the combination is selected from two characteristic curves, node utilization and dispersion; then input words are stored in a P-mode trie from the root down to the optimal level, and the remainders are stored as C-mode sequentially linked lists. Experiments on six word sets show that the proposed structure is more efficient: H-mode retrieval is as fast as P-mode while its space efficiency is as good as C-mode. (A hybrid-dictionary sketch appears after this entry.)

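A minimal sketch of the H-mode idea follows: a trie for the first few units of each word, with the remaining suffix kept in a plain set at the node where the trie stops. Using syllables for both parts (instead of phonemes for the trie part) and fixing the cut-off level are simplifying assumptions.

```python
# Sketch of a hybrid word dictionary: trie prefix up to OPTIMAL_LEVEL,
# remaining suffixes stored flat at the node. Syllable units and the fixed
# level are simplifications of the paper's P-mode/C-mode combination.

OPTIMAL_LEVEL = 2

class HybridDict:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word[:OPTIMAL_LEVEL]:              # trie part (prefix)
            node = node.setdefault(ch, {})
        node.setdefault("#suffixes", set()).add(word[OPTIMAL_LEVEL:])

    def contains(self, word):
        node = self.root
        for ch in word[:OPTIMAL_LEVEL]:
            if ch not in node:
                return False
            node = node[ch]
        return word[OPTIMAL_LEVEL:] in node.get("#suffixes", set())

if __name__ == "__main__":
    d = HybridDict()
    for w in ["복합명사", "복합어", "명사구"]:
        d.insert(w)
    print(d.contains("복합명사"), d.contains("복합문"))  # True False
```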

Hot Keyword Extraction of Sci-tech Periodicals Based on the Improved BERT Model

  • Liu, Bing;Lv, Zhijun;Zhu, Nan;Chang, Dongyu;Lu, Mengxin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1800-1817 / 2022
  • With economic development and rising living standards, hot issues within a subject area have become a major research focus, but mining them currently suffers from large data volumes and complex algorithm structures. In response, this study proposes a method for extracting hot keywords from scientific journals based on an improved BERT model, which can also serve as a reference for researchers. The method improves the overall similarity measure of the ensemble by introducing compound-keyword word density and combining word segmentation, word-sense set distance, and density clustering to construct an improved BERT (I-BERT) framework, on which a compound-keyword heat analysis model is built. Taking the 14,420 articles published in 21 social-science management periodicals collected by CNKI (China National Knowledge Infrastructure) from 2017 to 2019 as experimental data, the superiority of the proposed method is verified with respect to word spacing, class spacing, and the extraction accuracy and recall of hot keywords. In the experiments, the proposed method extracts hot keywords more accurately than other methods, which helps scientific journals capture hot topics in their disciplines in a timely and accurate manner. (A hedged keyword-density sketch follows this entry.)
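
The abstract does not define "compound keyword word density" precisely, so the sketch below is only one rough, hypothetical reading: occurrences of a keyword per 100 tokens of a document, used to rank candidate hot keywords. The I-BERT framework itself is not reproduced.

```python
# Rough, hypothetical sketch of ranking keywords by occurrence density.
# Density = occurrences per 100 tokens; this is an assumed stand-in for the
# paper's compound-keyword density, and no BERT model is involved here.

from collections import Counter

def keyword_density(tokens, keywords):
    counts = Counter(t for t in tokens if t in keywords)
    per_100 = 100.0 / len(tokens) if tokens else 0.0
    return {kw: counts[kw] * per_100 for kw in keywords}

if __name__ == "__main__":
    doc = ["deep", "learning", "hot", "keyword", "deep", "learning"] * 10
    print(keyword_density(doc, {"deep", "keyword"}))
    # {'deep': 33.3..., 'keyword': 16.6...}
```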

Korean Head-Tail Tokenization and Part-of-Speech Tagging by using Deep Learning (딥러닝을 이용한 한국어 Head-Tail 토큰화 기법과 품사 태깅)

  • Kim, Jungmin;Kang, Seungshik;Kim, Hyeokman
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.4 / pp.199-208 / 2022
  • Korean is an agglutinative language in which one or more morphemes combine to form a single word, and part-of-speech tagging separates each morpheme from a word and attaches a tag. In this study, we propose a new Korean part-of-speech tagging method based on Head-Tail tokenization, which divides a word into a lexical-morpheme part (head) and a grammatical-morpheme part (tail) without decomposing compound words. The head and tail are split at a syllable boundary, without restoring irregular deformations or abbreviated syllables. A Korean part-of-speech tagger was implemented using Head-Tail tokenization and deep learning. To address the problem that the segmented tags generate a large number of complex tags and lower tagging accuracy, we reduced the tag set to complex tags composed of coarse classification tags, which improved accuracy. The Head-Tail tagger was evaluated with BERT, syllable-bigram, and subword-bigram embeddings, and both syllable-bigram and subword-bigram embeddings outperformed plain BERT. Integrating the Head-Tail tokenization model with the simplified tagging model achieved 98.99% word-unit accuracy and 99.08% token-unit accuracy, and performance improved further when the maximum token length was limited to twice the number of words. (A Head-Tail splitting sketch appears after this entry.)
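
Head-Tail tokenization splits an eojeol at a syllable boundary into a lexical head and a grammatical tail. The paper predicts the boundary with a deep-learning model; the sketch below instead uses a toy list of tail candidates with longest-match, so the boundary decision is an assumption.

```python
# Sketch of Head-Tail splitting at a syllable boundary. The TAILS list and the
# longest-match rule stand in for the paper's deep-learning boundary model.

TAILS = ["에서는", "으로", "에서", "에게", "은", "는", "이", "가", "을", "를"]

def head_tail(eojeol):
    for tail in sorted(TAILS, key=len, reverse=True):
        if eojeol.endswith(tail) and len(eojeol) > len(tail):
            return eojeol[:-len(tail)], tail
    return eojeol, ""  # no grammatical tail; the word is left whole

if __name__ == "__main__":
    for w in ["학교에서는", "복합명사를", "갔다"]:
        print(w, "->", head_tail(w))
    # 학교에서는 -> ('학교', '에서는')
    # 복합명사를 -> ('복합명사', '를')
    # 갔다 -> ('갔다', '')
```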

A Method Of Compound Noun Phrase Indexing for Resolving Syntactic Diversity (구문 다양성 해소를 위한 복합명사구 색인 방법)

  • Cho, Min-Hee;Jeong, Do-Heon
    • The Journal of the Korea Contents Association / v.11 no.3 / pp.467-476 / 2011
  • A compound noun phrase (CNP) is an important unit for semantic information processing because its meaning is less ambiguous than that of a single word. However, a CNP with the same meaning can be expressed in various forms; this is called syntactic diversity, and it makes it difficult for an information system to recognize that the forms are semantically identical. To resolve this syntactic diversity, we propose an indexing method for compound noun phrases whose main purpose is to produce an identical index term for the various forms of a CNP that share the same meaning. The research proceeds as follows. First, we build rule templates and use them to extract CNPs from a set of domestic research papers. Since a CNP generally carries a unique meaning, we then define index-term synthesis rules and apply them to the extracted CNPs. For objective evaluation, the HANTEC 2.0 test set was used and the result was compared with a baseline model. The experiments confirm that the proposed indexing method positively affects retrieval precision and improves the performance of information retrieval. (A simple index-term normalization sketch follows this entry.)
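
The index-term synthesis can be illustrated by normalizing several surface forms of the same compound noun phrase to one index term. The two rules used here (drop the genitive marker '의' and drop whitespace) are illustrative assumptions, not the paper's full rule set.

```python
# Minimal sketch of CNP index-term synthesis: map syntactic variants of a
# compound noun phrase to one index term. The two rules are toy assumptions.

import re

def synthesize_index_term(cnp):
    term = re.sub(r"의\s+", " ", cnp)   # "정보의 검색" -> "정보 검색"
    term = re.sub(r"\s+", "", term)     # "정보 검색"   -> "정보검색"
    return term

if __name__ == "__main__":
    variants = ["정보검색", "정보 검색", "정보의 검색"]
    print({v: synthesize_index_term(v) for v in variants})
    # all three variants map to '정보검색'
```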

An Analysis of Korean Word Spacing Errors Made by Chinese Learners (중국인 한국어 학습자의 글쓰기에 나타난 띄어쓰기 오류 양상 및 지도 방향)

  • Wang, Yuan
    • Korean Educational Research Journal / v.40 no.1 / pp.59-79 / 2019
  • The purpose of this study is to analyze, through questionnaires and interviews, spacing errors in Chinese students' Korean writing and to propose changes to the teaching methods used for Chinese learners by analyzing the causes of the errors. From the learners' writing samples, a total of 148 spacing errors were found. Errors made by writing separate words together (77.6%) were far more frequent than errors made by inserting a space within a compound word (22.4%). Among the error types, "noun + noun", "adnominal (form) + dependent noun", and postpositional-particle errors occurred most frequently. This paper proposes teaching directions for spacing, approached both deductively and inductively, for nouns and postpositional particles.


A Normalization Method of Distorted Korean SMS Sentences for Spam Message Filtering (스팸 문자 필터링을 위한 변형된 한글 SMS 문장의 정규화 기법)

  • Kang, Seung-Shik
    • KIPS Transactions on Software and Data Engineering / v.3 no.7 / pp.271-276 / 2014
  • Short message service (SMS) is a very convenient means of communication in a mobile environment, but it has caused the serious side effect of spam messages sent for advertisement. Spam senders distort or deform SMS sentences so that the messages evade automatic filtering systems. To increase the performance of a spam filtering system, the distorted sentences need to be recovered into normal sentences. This paper proposes a method of normalizing the various types of distorted sentences and extracting keywords through automatic word spacing and compound noun decomposition. (A hedged normalization-pipeline sketch follows this entry.)
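
The two normalization steps named in the abstract, automatic word spacing and compound-noun decomposition, can be sketched with a toy pipeline: strip characters inserted to evade filtering, then re-space the text by greedy longest match against a small dictionary. The noise set, the dictionary, and the greedy matching are assumptions, not the paper's actual method.

```python
# Hedged sketch of SMS normalization: remove obfuscation characters, then
# re-space by greedy longest match. NOISE and DICTIONARY are toy assumptions.

NOISE = set(",.~*-♡ ")                        # characters inserted to evade filters
DICTIONARY = {"무료", "대출", "상담", "문의"}   # toy word dictionary

def undo_distortion(text):
    return "".join(ch for ch in text if ch not in NOISE)

def respace(text):
    """Greedy longest-match automatic word spacing over the toy dictionary."""
    words, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in DICTIONARY:
                words.append(text[i:j])
                i = j
                break
        else:
            words.append(text[i])  # unknown syllable kept as a single token
            i += 1
    return " ".join(words)

if __name__ == "__main__":
    print(respace(undo_distortion("무~료.대*출 상담")))  # 무료 대출 상담
```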