• Title/Summary/Keyword: Word

Search Results: 6,322

Word Sense Disambiguation Using Embedded Word Space

  • Kang, Myung Yun;Kim, Bogyum;Lee, Jae Sung
    • Journal of Computing Science and Engineering / v.11 no.1 / pp.32-38 / 2017
  • Determining the correct sense of an ambiguous word is essential for semantic analysis. One model for word sense disambiguation is the word space model, which is structurally simple and effective. However, when the context word vectors of the word space model are merged into sense vectors in a sense inventory, the resulting vectors typically become very large yet still suffer from lexical scarcity. In this paper, we propose a word sense disambiguation method that uses word embedding, whose additive compositionality makes the sense inventory vectors compact and efficient. Experiments on a Korean sense-tagged corpus show that our method is highly effective.
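
The additive compositionality the abstract relies on can be sketched as follows: sense-inventory vectors are built by averaging the embeddings of sense-tagged context words, and a new context is assigned the sense with the highest cosine similarity. All words, vectors, and sense labels below are toy values for illustration, not the paper's data.

```python
import numpy as np

# Toy embeddings standing in for vectors trained on a large corpus;
# the words and values are invented for illustration.
EMB = {
    "money": np.array([0.8, 0.2]),
    "loan":  np.array([0.7, 0.3]),
    "river": np.array([0.1, 0.9]),
    "water": np.array([0.2, 0.8]),
}

def sense_vector(context_words):
    # Additive compositionality: merge context embeddings into one
    # compact vector (here, their mean) instead of a huge sparse vector.
    return np.mean([EMB[w] for w in context_words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_words, sense_inventory):
    # Choose the sense whose inventory vector is closest to the context.
    ctx = sense_vector(context_words)
    return max(sense_inventory, key=lambda s: cosine(ctx, sense_inventory[s]))

# Sense-inventory vectors built offline from sense-tagged contexts.
senses = {
    "bank/finance": sense_vector(["money", "loan"]),
    "bank/river":   sense_vector(["river", "water"]),
}
```

With these toy vectors, `disambiguate(["money"], senses)` returns `"bank/finance"`: the one-word context sits closest to the finance sense vector.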

Word Embedding using word position information (단어의 위치정보를 이용한 Word Embedding)

  • Hwang, Hyunsun;Lee, Changki;Jang, HyunKi;Kang, Dongho
    • Annual Conference on Human and Language Technology / 2017.10a / pp.60-63 / 2017
  • Word embedding, used to apply deep learning to natural language processing, represents words in a vector space; besides reducing dimensionality, it has the advantage that semantically similar words receive similar vectors. Because good embeddings require training on a very large corpus, the widely used word2vec model was simplified for large-scale training: it focuses mainly on word co-occurrence rates and does not exploit word position information. In this paper, we modify the existing word embedding training model so that it can learn from word position information. Experiments show that training word embeddings with position information greatly improves syntactic performance on word-analogy tasks, with especially large gains for Korean, where word order can vary.

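
The change the abstract reports, making the context representation sensitive to word order, can be illustrated with a minimal sketch. The per-offset weight vectors (`pos_w`) are an assumption for illustration, not the authors' exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, WIN = 100, 16, 2                      # vocab size, dimension, half-window

emb = rng.normal(0.0, 0.1, (V, D))          # word vectors
pos_w = rng.normal(0.0, 0.1, (2 * WIN, D))  # one weight vector per context offset

def bow_context(ids):
    # Plain CBOW-style context: a bag-of-words average, order-blind.
    return emb[ids].mean(axis=0)

def positional_context(ids):
    # Position-aware variant: each context slot is reweighted by its
    # offset-specific vector, so reordering the context changes the result.
    return (emb[ids] * pos_w).mean(axis=0)

ordered   = positional_context([3, 7, 7, 3])
reordered = positional_context([7, 3, 3, 7])
```

The order-blind average is identical for both orderings, while the positional variant distinguishes them, which is the property the paper exploits for free-word-order Korean.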
A Methodology for Urdu Word Segmentation using Ligature and Word Probabilities

  • Khan, Yunus;Nagar, Chetan;Kaushal, Devendra S.
    • International Journal of Ocean System Engineering / v.2 no.1 / pp.24-31 / 2012
  • This paper introduces a word segmentation technique for handwriting recognition of Urdu script. Word segmentation, or word tokenization, is a primary step in understanding sentences written in Urdu. Several techniques are available for word segmentation in other languages, but little work has been done on word segmentation for Urdu Optical Character Recognition (OCR) systems. The proposed method finds word boundaries in a sequence of ligatures using probabilistic formulas that exploit knowledge of the collocation of ligatures and words in the corpus. The word identification rate of this technique is 97.10%, with a 66.63% identification rate for unknown words.
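
The probabilistic boundary search the abstract describes can be sketched as a maximum-likelihood segmentation over a ligature sequence. The unigram probabilities below are invented placeholders (real Urdu ligatures and collocation statistics would replace them; each Latin letter stands in for one ligature).

```python
import math
from functools import lru_cache

# Hypothetical word probabilities; a real system would estimate these
# (and ligature collocation statistics) from an Urdu corpus.
WORD_P = {"ab": 0.4, "cd": 0.3, "abcd": 0.05, "c": 0.1, "d": 0.15}

def segment(ligatures):
    """Return the most probable word sequence covering the ligatures."""
    @lru_cache(maxsize=None)
    def best(i):
        if i == len(ligatures):
            return 0.0, ()
        candidates = []
        for j in range(i + 1, len(ligatures) + 1):
            word = ligatures[i:j]
            if word in WORD_P:
                logp, rest = best(j)
                if rest is not None:
                    candidates.append((math.log(WORD_P[word]) + logp,
                                       (word,) + rest))
        if not candidates:
            return float("-inf"), None      # no segmentation from here
        return max(candidates, key=lambda c: c[0])
    return list(best(0)[1] or ())
```

Here `segment("abcd")` yields `['ab', 'cd']`, since that split's joint probability beats both the single word `abcd` and the finer split `['ab', 'c', 'd']`.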

A Methodological Approach to Building the Korean-German WordNet (한독 워드넷 구축을 위한 기본 방법론 고찰)

  • Nam Yu-Sun
    • Koreanische Zeitschrift für Deutsche Sprachwissenschaft / v.9 / pp.217-236 / 2004
  • The aim of this paper is to present, as a methodological basis for building the Korean-German WordNet, fundamental knowledge about WordNet and some previous WordNet studies. As a first step, several basic aspects of WordNet are considered within the framework of the English WordNet: the lexical hierarchy, and the semantic relations between synsets (sets of synonymous words) such as synonymy, antonymy, hyponymy, meronymy, troponymy, and entailment. EuroNet and GermaNet, both based on the Princeton WordNet, are then briefly introduced. EuroNet is a multilingual database of WordNets for several European languages (Dutch, Italian, Spanish, German, French, Czech, and Estonian); its German component can provide important guidance for building the Korean-German WordNet. In Korea, various studies on a WordNet for Korean have also been undertaken. Among them, the KORTERM WordNet for Korean deserves particular mention as a comprehensive system in which nouns, verbs, adjectives, and adverbs interact. It is a multilingual database of WordNets for several Asian languages (Korean, Japanese, and Chinese) and aims to bring further languages into this database. The Korean-German WordNet will be built on the basis of this WordNet.

  • PDF

Effects of Orthographic Knowledge and Phonological Awareness on Visual Word Decoding and Encoding in Children Aged 5-8 Years (5~8세 아동의 철자지식과 음운인식이 시각적 단어 해독과 부호화에 미치는 영향)

  • Na, Ye-Ju;Ha, Ji-Wan
    • Journal of Digital Convergence / v.14 no.6 / pp.535-546 / 2016
  • This study examined the relations among orthographic knowledge, phonological awareness, and visual word decoding and encoding abilities. Children aged 5 to 8 years took a letter knowledge test, a phoneme-grapheme correspondence test, an orthographic representation test (regular and irregular word representation), a phonological awareness test (word, syllable, and phoneme awareness), a word decoding test (regular and irregular word reading), and a word encoding test (regular and irregular word dictation). Performance on all tasks differed significantly among groups, and the tasks were positively correlated. In the word decoding and encoding tests, the variables with the most predictive power were letter knowledge and orthographic representation ability. At these ages, orthographic knowledge influenced visual word decoding and encoding more than phonological awareness did.

KR-WordRank : An Unsupervised Korean Word Extraction Method Based on WordRank (KR-WordRank : WordRank를 개선한 비지도학습 기반 한국어 단어 추출 방법)

  • Kim, Hyun-Joong;Cho, Sungzoon;Kang, Pilsung
    • Journal of Korean Institute of Industrial Engineers / v.40 no.1 / pp.18-33 / 2014
  • A word is the smallest unit of text analysis, and most text-mining algorithms presuppose that the words in the given documents can be perfectly recognized. However, newly coined words, spelling and spacing errors, and domain adaptation problems make it difficult to recognize words correctly. To make matters worse, obtaining a sufficient amount of training data usable in every situation is not only unrealistic but also inefficient. An automatic word extraction method that requires no training is therefore sorely needed. WordRank, the most widely used unsupervised word extraction algorithm for Chinese and Japanese, performs poorly on Korean because of differences in language structure. In this paper, we first discuss why WordRank performs poorly on Korean, and then propose a Korean-specific variant, KR-WordRank, that accounts for Korean's linguistic characteristics and improves robustness to noise in text documents. Experiments show that KR-WordRank significantly outperforms the original WordRank on Korean. Moreover, the proposed algorithm not only extracts proper words but also identifies candidate keywords for effective document summarization.
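
The graph-based scoring behind WordRank-style extraction can be sketched with plain PageRank over a toy candidate graph. The nodes, edges, and damping settings here are illustrative; KR-WordRank's actual graph is built from left/right subword co-occurrences, which this sketch skips.

```python
import numpy as np

# Toy graph of word candidates extracted from text; an edge (a, b)
# means candidate a passes ranking score to candidate b.
nodes = ["데이터", "분석", "데이", "이터", "석"]
edges = [(0, 1), (0, 2), (0, 3), (1, 0), (1, 4)]

def rank(n, edges, d=0.85, iters=50):
    # Plain PageRank power iteration as a stand-in for WordRank scoring.
    M = np.zeros((n, n))
    for a, b in edges:
        M[b, a] = 1.0
    out = M.sum(axis=0)
    M[:, out > 0] /= out[out > 0]          # column-normalize outgoing mass
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)
    return r

scores = rank(len(nodes), edges)
best = nodes[int(np.argmax(scores))]       # highest-scoring candidate word
```

In this toy graph the mutually linked candidates ("데이터" and "분석") accumulate more score than fragments that only receive mass, mirroring how proper words outrank spurious substrings.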

The Korean Word Length Effect on Auditory Word Recognition (청각 단어 재인에서 나타난 한국어 단어길이 효과)

  • Choi Wonil;Nam Kichun
    • Proceedings of the KSPS conference / 2002.11a / pp.137-140 / 2002
  • This study examined Korean word length effects in auditory word recognition. Linguistically, word length can be defined over several sublexical units such as letters, phonemes, and syllables. To investigate which units are used in auditory word recognition, a lexical decision task was employed. Experiments 1 and 2 showed that syllable length affected response time and interacted with word frequency. Thus, syllable length is an important variable in auditory word recognition.

Korean Named Entity Recognition and Classification using Word Embedding Features (Word Embedding 자질을 이용한 한국어 개체명 인식 및 분류)

  • Choi, Yunsu;Cha, Jeongwon
    • Journal of KIISE / v.43 no.6 / pp.678-685 / 2016
  • Named Entity Recognition and Classification (NERC) is the task of recognizing and classifying named entities such as person names, locations, and organizations. Various studies have been carried out on Korean NERC, but they lack some of the features used in English NERC. In this paper, we propose a method that uses word embeddings as features for Korean NERC. We generate word vectors with a Continuous Bag-of-Words (CBOW) model from a POS-tagged corpus, and word cluster symbols by applying a K-means algorithm to the word vectors. The word vectors and cluster symbols are then used as word embedding features in Conditional Random Fields (CRFs). In our experiments, performance improved by 1.17%, 0.61%, and 1.19% over the baseline system in the TV, Sports, and IT domains, respectively. These results, which surpass other NERC systems, demonstrate the effectiveness and efficiency of the proposed method.
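
The feature pipeline described, CBOW vectors plus K-means cluster symbols fed to a CRF, can be sketched as follows. The vectors are random stand-ins for trained embeddings, the vocabulary is invented, and the feature dictionary only illustrates the shape such features might take before being handed to a CRF toolkit.

```python
import numpy as np

rng = np.random.default_rng(42)

# Random stand-ins for CBOW vectors trained on a POS-tagged corpus.
vocab = ["서울", "부산", "삼성", "LG", "선수"]
vecs = rng.normal(size=(len(vocab), 8))

def kmeans_labels(X, k, iters=20, seed=0):
    # Minimal k-means; each word's cluster id becomes its discrete
    # "word cluster symbol" feature.
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

labels = kmeans_labels(vecs, k=2)

def crf_features(word):
    # One token's feature map: surface form, cluster symbol, and the
    # dense embedding values, combined with ordinary lexical features.
    i = vocab.index(word)
    feats = {"w": word, "cluster": f"C{labels[i]}"}
    feats.update({f"emb_{j}": float(v) for j, v in enumerate(vecs[i])})
    return feats
```

The cluster symbol gives the CRF a discrete, generalizing feature (words in the same cluster share it), while the raw vector components supply fine-grained similarity information.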

The exploration of the effects of word frequency and word length on Korean word recognition (한국어 단어재인에 있어서 빈도와 길이 효과 탐색)

  • Lee, Changhwan;Lee, Yoonhyoung;Kim, Tae Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.1 / pp.54-61 / 2016
  • Because the word is the basic unit of language processing, studies of word recognition and of the variables that contribute to it are very important. Word frequency and word length are recognized as important factors in word recognition. This study examined the effects of these two variables on Korean word recognition. In Experiment 1, two types of Hangul words, pure Hangul words and Hangul words with Hanja counterparts, were used to explore frequency effects; no frequency effect was observed for Hangul words with Hanja counterparts. In Experiment 2, word length was manipulated to determine whether a word length effect appears for Hangul words. Contrary to expectation, one-syllable words were processed more slowly than two-syllable words. Possible explanations for these results and future research directions are discussed.