• Title/Summary/Keyword: word dictionary

Search results: 276

Named Entity Recognition and Dictionary Construction for Korean Title: Books, Movies, Music and TV Programs (한국어 제목 개체명 인식 및 사전 구축: 도서, 영화, 음악, TV프로그램)

  • Park, Yongmin; Lee, Jae Sung
    • KIPS Transactions on Software and Data Engineering / v.3 no.7 / pp.285-292 / 2014
  • Named entity recognition is used to improve the performance of information retrieval, question answering and machine translation systems. Its usual targets are PLOs (persons, locations and organizations), which are mostly proper nouns or unregistered words, and traditional named entity recognizers exploit these characteristics to identify candidates. The titles of books, movies and TV programs have different characteristics from PLO entities: they can be multi-word phrases, whole sentences, or contain special characters, which makes candidate identification difficult. In this paper we propose a method to quickly extract title named entities from news articles and to automatically build a named entity dictionary of titles. For candidate identification, word phrases enclosed in special symbols are first extracted from each sentence and then verified by an SVM using feature words and their distances. For classification of the extracted title candidates, an SVM is used with the mutual information of word contexts.
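The candidate-extraction step above can be sketched in a few lines. This is a minimal illustration, not the paper's system: the regex, the feature-word list, and the distance-window rule standing in for the SVM verifier are all assumptions.

```python
import re

# Candidate titles are often enclosed in special symbols such as <...>
# or '...'. Extract such spans as candidates; the paper's SVM verifier
# is approximated here by a toy rule over nearby feature words.
CANDIDATE_PATTERN = re.compile(r"<([^<>]+)>|'([^']+)'")

# Hypothetical feature words that often appear near title mentions.
FEATURE_WORDS = {"movie", "book", "song", "drama", "program"}

def extract_candidates(sentence):
    """Return word phrases enclosed in special symbols."""
    return [a or b for a, b in CANDIDATE_PATTERN.findall(sentence)]

def verify(sentence, candidate, window=5):
    """Toy stand-in for the SVM verifier: accept a candidate when a
    feature word occurs within `window` tokens of its first word."""
    tokens = sentence.replace("<", " ").replace(">", " ").split()
    first = candidate.split()[0]
    if first not in tokens:
        return False
    idx = tokens.index(first)
    context = tokens[max(0, idx - window): idx + window]
    return any(w.strip(".,'\"") in FEATURE_WORDS for w in context)

sentence = "The movie <Gone with the Wind> was re-released."
cands = extract_candidates(sentence)
print(cands)                                   # ['Gone with the Wind']
print([c for c in cands if verify(sentence, c)])
```

In the real system the verifier is a trained SVM over feature words and their distances, not a fixed window rule.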

Utilizing Local Bilingual Embeddings on Korean-English Law Data (한국어-영어 법률 말뭉치의 로컬 이중 언어 임베딩)

  • Choi, Soon-Young; Matteson, Andrew Stuart; Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.9 no.10 / pp.45-53 / 2018
  • Recently, studies about bilingual word embedding have been gaining much attention. However, bilingual word embedding with Korean is not actively pursued due to the difficulty in obtaining a sizable, high quality corpus. Local embeddings that can be applied to specific domains are relatively rare. Additionally, multi-word vocabulary is problematic due to the lack of one-to-one word-level correspondence in translation pairs. In this paper, we crawl 868,163 paragraphs from a Korean-English law corpus and propose three mapping strategies for word embedding. These strategies address the aforementioned issues including multi-word translation and improve translation pair quality on paragraph-aligned data. We demonstrate a twofold increase in translation pair quality compared to the global bilingual word embedding baseline.
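A common baseline for mapping between two monolingual embedding spaces is a linear transform learned from seed translation pairs (Mikolov-style). The sketch below shows that idea with toy 2-d vectors; the vectors and the closed-form 2x2 least-squares solver are illustrative assumptions, not the paper's three mapping strategies.

```python
# Learn W minimizing ||XW - Y|| from seed translation pairs, then map
# a source-language vector into the target space. Toy 2-d data only.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def fit_mapping(X, Y):
    """Normal-equation least squares: W = (X^T X)^(-1) X^T Y (2-d case)."""
    Xt = transpose(X)
    return matmul(inv2(matmul(Xt, X)), matmul(Xt, Y))

# Toy source-side and target-side embeddings for three seed pairs.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # e.g. Korean vectors
Y = [[2.0, 0.0], [0.0, 3.0], [2.0, 3.0]]   # e.g. English vectors

W = fit_mapping(X, Y)
mapped = matmul([[1.0, 0.0]], W)[0]
print(mapped)  # close to [2.0, 0.0]
```

Real systems solve this over hundreds of dimensions with a numerical library; the pure-Python solver is only for self-containment.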

500+ words Isolated-word Speech Recognition System (500+ 단어 단독어 음성 인식 시스템)

  • 이강성
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06c / pp.83-86 / 1998
  • This paper gives an overview of a system designed for 500-word speech recognition. The system is based on triphone models and uses the Dynamic Multisection (DMS) technique for pattern matching. It is flexible in that the word dictionary can be changed on the fly without any retraining. The vocabulary selected for the experiments consists of 561 words: province names and district names of Seoul and Pusan. The experimental results shown here are preliminary, as only one speaker was involved, but a recognition rate of 95.1% is satisfactory. The system runs on Windows 95 and works in real time on a Pentium-133 computer.
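DMS matching is a dynamic-programming template match, which also explains why the dictionary can be swapped without retraining: recognition is a search over templates, not a trained classifier. The sketch below uses generic dynamic time warping (DTW) over 1-d feature sequences as a stand-in; it is not the paper's exact DMS algorithm, and the templates are toy values.

```python
# Generic DTW: minimal cumulative alignment cost between two sequences.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Recognize by picking the template with the lowest alignment cost;
# the template dictionary can be replaced at any time.
templates = {"seoul": [1.0, 2.0, 3.0], "pusan": [3.0, 2.0, 1.0]}
utterance = [1.1, 2.1, 2.9]
best = min(templates, key=lambda w: dtw(utterance, templates[w]))
print(best)  # seoul
```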

Morphological Processing with LR Techniques (LR 테크닉을 이용한 형태소 분석)

  • 이강혁
    • Korean Journal of Cognitive Science / v.4 no.2 / pp.115-143 / 1994
  • In this paper, I present an extended two-level model using LR parsing techniques. The LR-based two-level model not only guarantees efficient morphological processing but also achieves a higher degree of descriptive adequacy than Koskenniemi's original model. The two-level model is augmented with an independent morphosyntactic module based on a feature-based context-free (CF) word grammar. By adopting a CF word grammar, our model can handle complex words with discontinuous dependencies without duplicating lexicons. It is shown how LR predictions manifested in the parsing table can help the morphological processor minimize dictionary lookup.
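The lookup side of such a processor can be illustrated with a trie over a small lexicon: segmentation abandons a path as soon as no lexicon entry can continue it, which is analogous to pruning dictionary lookup by parse-table predictions. The English lexicon entries below are illustrative; the paper works on Korean morphology with an actual LR table.

```python
# Build a character trie over a lexicon and enumerate segmentations.
def build_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker
    return root

def segment(text, trie):
    """Return all ways to split `text` into lexicon entries."""
    results = []
    def walk(pos, parts):
        if pos == len(text):
            results.append(parts)
            return
        node, i = trie, pos
        while i < len(text) and text[i] in node:
            node = node[text[i]]
            i += 1
            if "$" in node:               # a word ends here: branch
                walk(i, parts + [text[pos:i]])
        # Loop exit = no entry continues this path: the path is pruned.
    walk(0, [])
    return results

lexicon = ["un", "do", "undo", "able"]
trie = build_trie(lexicon)
print(segment("undoable", trie))  # [['un', 'do', 'able'], ['undo', 'able']]
```

A grammar module (the CF word grammar in the paper) would then filter these segmentations for morphosyntactic well-formedness.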

A Word Semantic Similarity Measure Model using Korean Open Dictionary (우리말샘 사전을 이용한 단어 의미 유사도 측정 모델 개발)

  • Kim, Hoyong; Lee, Min-Ho; Seo, Dongmin
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.3-4 / 2018
  • Measuring semantic similarity between words is of great help in solving natural language processing problems such as information retrieval and document classification. Previous studies have used word hierarchies to measure similarity, but they do not consider word senses and therefore show unsatisfactory results. In this paper, we derive a hierarchy for Korean words from the Urimalsam dictionary, which extends the Standard Korean Language Dictionary published by the National Institute of the Korean Language with 500,000 additional entries. We train a word2vec model on usage examples to capture each word's contextual meaning, and a sent2vec model on definition sentences to capture its dictionary meaning. Using the constructed hierarchy together with the trained word2vec and sent2vec models, we propose a model for measuring semantic similarity between Korean words. Finally, an evaluation demonstrates that the proposed model outperforms existing models.
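The combination step can be sketched as a weighted sum of two cosine similarities: one over context (word2vec-style) vectors and one over definition (sent2vec-style) vectors. The toy 2-d vectors and the weight `alpha` below are assumptions; the paper additionally uses the dictionary hierarchy, which is omitted here.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity(w1, w2, context, definition, alpha=0.5):
    """Weighted sum of contextual and dictionary-definition similarity."""
    return (alpha * cosine(context[w1], context[w2])
            + (1 - alpha) * cosine(definition[w1], definition[w2]))

# Toy vectors: 강아지 (puppy) and 개 (dog) should score higher than
# 강아지 and 자동차 (car).
context = {"강아지": [0.9, 0.1], "개": [0.8, 0.2], "자동차": [0.1, 0.9]}
definition = {"강아지": [1.0, 0.0], "개": [0.9, 0.1], "자동차": [0.0, 1.0]}

print(similarity("강아지", "개", context, definition))
print(similarity("강아지", "자동차", context, definition))
```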

Judging Translated Web Document & Constructing Bilingual Corpus (웹 번역문서 판별과 병렬 말뭉치 구축)

  • Kim, Jee-hyung; Lee, Yill-byung
    • Proceedings of the Korean Information Science Society Conference / 2004.10a / pp.787-789 / 2004
  • People frequently feel the need for a search tool free from language barriers when they look for information on the internet. A multilingual parallel corpus makes it possible to search with a word in one language and retrieve documents containing its counterpart in another. Such a corpus can be built and reused effectively through several processes: judging whether web documents are translations of each other, sentence alignment, and word alignment. To build a multilingual parallel corpus, a multilingual dictionary must be constructed for each language and the HTML must be simplified. Then, using the meaning and statistics of the document structure, translated web documents are identified and the retrieved web pages are aligned at the sentence level.
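The sentence-alignment step can be sketched with a length-based dynamic program in the spirit of Gale-Church: align 1-1, 1-2 and 2-1 groups of sentences, scoring each group by length mismatch. The absolute-difference cost below is a simplifying assumption; real aligners use a probabilistic length model.

```python
# Length-based sentence alignment over 1-1, 1-2 and 2-1 beads.
def align(src, tgt):
    INF = float("inf")
    n, m = len(src), len(tgt)
    D = [[(INF, None)] * (m + 1) for _ in range(n + 1)]
    D[0][0] = (0.0, None)

    def cost(ss, ts):
        # Toy cost: character-length mismatch of the two sides.
        return abs(sum(map(len, ss)) - sum(map(len, ts)))

    for i in range(n + 1):
        for j in range(m + 1):
            for di, dj in ((1, 1), (1, 2), (2, 1)):
                if i >= di and j >= dj and D[i - di][j - dj][0] < INF:
                    c = D[i - di][j - dj][0] + cost(src[i - di:i], tgt[j - dj:j])
                    if c < D[i][j][0]:
                        D[i][j] = (c, (di, dj))

    # Backtrack into a list of (source_span, target_span) beads.
    beads, i, j = [], n, m
    while i > 0 or j > 0:
        di, dj = D[i][j][1]
        beads.append((src[i - di:i], tgt[j - dj:j]))
        i, j = i - di, j - dj
    return beads[::-1]

src = ["가나다라", "마바"]
tgt = ["abcd", "ef"]
print(align(src, tgt))  # [(['가나다라'], ['abcd']), (['마바'], ['ef'])]
```

Word alignment would then run inside each bead, typically with a bilingual dictionary.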

Intelligent Wordcloud Using Text Mining (텍스트 마이닝을 이용한 지능적 워드클라우드)

  • Kim, Yeongchang; Ji, Sangsu; Park, Dongseo; Lee, Choong Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.325-326 / 2019
  • This paper proposes an intelligent word cloud that improves on the existing method of building word clouds from noun frequencies obtained by text mining. We propose a method to visually present word clouds focused on other parts of speech, such as verbs, by effectively adding newly coined words and similar items to the dictionary used for noun extraction in text mining. In the experiments, the KoNLP package was used to extract the frequencies of existing nouns, and 80 new words it did not support were added manually after examining their frequencies.
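The dictionary-extension idea can be mimicked with a plain counter: terms from a user dictionary are matched first (even inside running text), and the rest is counted by whitespace splitting. KoNLP itself is an R package, so this Python sketch, its tokenization, and the user-dictionary entries are all illustrative assumptions.

```python
from collections import Counter

USER_DICT = {"워드클라우드", "텍스트마이닝"}  # hypothetical new words

def count_terms(text, user_dict=USER_DICT):
    """Count user-dictionary terms first, then remaining tokens."""
    counts = Counter()
    for w in user_dict:
        n = text.count(w)
        if n:
            counts[w] = n
            text = text.replace(w, " ")  # remove so it isn't re-counted
    counts.update(tok for tok in text.split())
    return counts

c = count_terms("워드클라우드 분석과 텍스트마이닝 기반 워드클라우드 시각화")
print(c.most_common(1))  # [('워드클라우드', 2)]
```

The resulting frequency table is what a word-cloud renderer would consume; sizing words by these counts is the standard final step.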

A Study on Harmful Word Filtering System for Education of Information Communication Ethics (정보통신 윤리교육을 위한 유해단어필터링 시스템에 관한 연구)

  • 김응곤; 김치민; 임창균
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.334-343 / 2003
  • This paper proposes an approach to information communication ethics education based on harmful-word filtering on web bulletin boards, as a way to address the misuse that arises when users actively create and share information. The harmful-word filtering system builds a harmful-word dictionary by extracting words related to inappropriate postings, sexual insults, abusive language and expressions criticizing others found on web boards. Applying the filtering system to school home pages reduced postings containing harmful words and inappropriate writing by more than 90%.
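The core mechanism is a dictionary lookup over each post. A minimal sketch, with hypothetical dictionary entries and a simple masking policy (the paper builds its dictionary from real board writings):

```python
HARMFUL_DICT = {"바보", "멍청이"}  # hypothetical harmful-word entries

def filter_post(text, dictionary=HARMFUL_DICT):
    """Mask harmful words with '*' and report whether the post was flagged."""
    flagged = False
    for word in dictionary:
        if word in text:
            flagged = True
            text = text.replace(word, "*" * len(word))
    return text, flagged

masked, flagged = filter_post("이 바보 같은 글")
print(masked, flagged)  # 이 ** 같은 글 True
```

A deployed filter would also handle spacing tricks and character substitutions that users employ to evade exact matching.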

A Method of Intonation Modeling for Corpus-Based Korean Speech Synthesizer (코퍼스 기반 한국어 합성기의 억양 구현 방안)

  • Kim, Jin-Young; Park, Sang-Eon; Eom, Ki-Wan; Choi, Seung-Ho
    • Speech Sciences / v.7 no.2 / pp.193-208 / 2000
  • This paper describes a multi-step method of intonation modeling for a corpus-based Korean speech synthesizer. We selected 1833 sentences covering various syntactic structures and built a corresponding speech corpus uttered by a female announcer. We detected pitch using laryngograph signals, manually marked prosodic boundaries on the recorded speech, and performed part-of-speech tagging and syntactic analysis on the text. The detected pitch was separated into three frequency bands (low, mid and high) corresponding to the baseline, the word tone, and the syllable tone, which we predicted using the CART method and a Viterbi search with a word-tone dictionary. Of the collected sentences, 1500 were used for training and 333 for testing. For word tone modeling we compared two methods: predicting the mid-frequency component directly, and predicting it as the baseline multiplied by the ratio of the word tone to the baseline. The former gave a mean error of 12.37 Hz and the latter 12.41 Hz, which are similar. Syllable tone modeling yielded a mean error rate of less than 8.3% relative to the announcer's mean pitch of 193.56 Hz, a relatively good performance.
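The three-band decomposition can be illustrated with simple moving averages as the band separator; the paper's actual filtering and its CART/Viterbi prediction steps are not reproduced here, and the window sizes and pitch values below are toy assumptions. The key property shown is that the three components sum back to the original contour.

```python
# Split a pitch contour into slow (baseline), mid (word tone) and
# fast (syllable tone) components using moving averages.
def moving_average(xs, k):
    half = k // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def decompose(pitch, k_base=7, k_word=3):
    baseline = moving_average(pitch, k_base)                 # low band
    smooth = moving_average(pitch, k_word)
    word_tone = [s - b for s, b in zip(smooth, baseline)]    # mid band
    syllable_tone = [p - s for p, s in zip(pitch, smooth)]   # high band
    return baseline, word_tone, syllable_tone

pitch = [190.0, 200.0, 210.0, 205.0, 195.0, 185.0, 190.0]
base, word, syl = decompose(pitch)
recon = [b + w + s for b, w, s in zip(base, word, syl)]
print(all(abs(r - p) < 1e-9 for r, p in zip(recon, pitch)))  # True
```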

A Study on the Analysis of Disaster Safety Lexicon Patterns in Social Media (소셜미디어를 통해 본 재난안전 분야 어휘 사용 양상 분석)

  • Kim, Tae-Young; Lee, Jung-Eun; Oh, Hyo-Jung
    • The Journal of the Korea Contents Association / v.17 no.10 / pp.85-93 / 2017
  • Standardization of the disaster safety lexicon is important as the most basic step toward successful accident prevention and response. A poor understanding of this lexicon leads to a lack of communication and information sharing, which can prevent an appropriate response when a disaster occurs. Currently, disaster and safety control agencies produce and manage heterogeneous information, and they also develop and use their own word dictionaries individually. To address this problem, identifying how disaster safety lexicon patterns differ by user is essential for standardization. In this paper, we analyze lexicon patterns in social media and characterize the resulting pattern types. Based on these results, we propose methods for standardizing and constructing a disaster safety word dictionary.