• Title/Abstract/Keywords: Word space

Search results: 264

Word Sense Disambiguation Using Embedded Word Space

  • Kang, Myung Yun; Kim, Bogyum; Lee, Jae Sung
    • Journal of Computing Science and Engineering, Vol. 11, No. 1, pp. 32-38, 2017
  • Determining the correct word sense among ambiguous senses is essential for semantic analysis. One model for word sense disambiguation is the word space model, which is structurally simple and effective. However, when the context word vectors in the word space model are merged into sense vectors in a sense inventory, they typically become very large yet still suffer from lexical scarcity. In this paper, we propose a word sense disambiguation method using word embeddings, which makes the sense inventory vectors compact and efficient thanks to their additive compositionality. Experiments with a Korean sense-tagged corpus show that our method is very effective.
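
The key idea above, that additive compositionality lets a sense inventory be stored as one compact dense vector per sense, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' exact system; `embed` is an assumed word-to-vector lookup such as a trained Word2Vec model.

```python
import numpy as np
from collections import defaultdict

def build_sense_inventory(tagged_contexts, embed):
    """tagged_contexts: iterable of (sense_id, [context word, ...]) pairs."""
    inventory = defaultdict(lambda: 0.0)
    for sense, context in tagged_contexts:
        # Additive compositionality: a sense vector is the sum of the
        # embeddings of all context words tagged with that sense.
        inventory[sense] = inventory[sense] + np.sum([embed(w) for w in context], axis=0)
    return dict(inventory)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def disambiguate(context, inventory, embed):
    # Compose the query context the same way and pick the closest sense.
    q = np.sum([embed(w) for w in context], axis=0)
    return max(inventory, key=lambda s: cosine(q, inventory[s]))
```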

Word Sense Disambiguation Using a Korean Word Space Model

  • 박용민; 이재성
    • The Journal of the Korea Contents Association, Vol. 12, No. 6, pp. 41-47, 2012
  • Various methods for resolving the sense ambiguity of Korean words have been proposed, mostly computing entropy, conditional probability, mutual information, and the like from small sense-tagged corpora or dictionary information and applying them to disambiguation. In this paper, we propose a word space model method that extracts Korean word vectors from a large-scale sense-tagged corpus and resolves word sense ambiguity by computing the similarity between these vectors. Trained on the Sejong morpho-semantically tagged corpus and evaluated on 200 arbitrary sentences (583 word types), the method achieved an accuracy of 94%, which is far superior to existing methods.
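
For contrast with the embedding-based method above, a classical word space model of the kind this paper uses can be sketched with sparse context-word count vectors and cosine similarity. The weighting here is plain counts; the paper's exact scheme is not reproduced.

```python
from collections import Counter, defaultdict
import math

def build_sense_vectors(tagged_contexts):
    """Accumulate one sparse context-word count vector per sense."""
    vectors = defaultdict(Counter)
    for sense, context in tagged_contexts:
        vectors[sense].update(context)
    return vectors

def cosine(c1, c2):
    dot = sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2 + 1e-12)

def pick_sense(context, vectors):
    q = Counter(context)
    return max(vectors, key=lambda s: cosine(q, vectors[s]))
```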

Word Sense Classification Using Support Vector Machines

  • 박준혁; 이성욱
    • KIPS Transactions on Software and Data Engineering, Vol. 5, No. 11, pp. 563-568, 2016
  • The word sense disambiguation problem is that of identifying, for a word in a sentence, its correct sense among the several senses listed for it in a dictionary. We regard this as a multi-class classification problem and classify with support vector machines. Context words of ambiguous words, extracted from the Sejong sense-tagged corpus, are represented in two vector spaces: the first is a vector space over the context words themselves with binary weights; the second is a vector space into which the context words, within a given window size, are mapped by a word embedding model. Experiments yielded an accuracy of about 87.0% with the context-word vectors and about 86.0% with the word embeddings.
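
A minimal sketch of the first feature space, binary context-word vectors fed to an SVM, using scikit-learn; the toy data, window handling, and classifier settings are illustrative assumptions rather than the authors' configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Each training instance: the context words around one ambiguous target,
# joined into a string; the label is the target's sense id.
train_contexts = ["bank river water flow", "bank money account loan"]
train_senses = ["bank_1", "bank_2"]

clf = make_pipeline(
    CountVectorizer(binary=True),  # binary weights, as in the first space
    LinearSVC(),                   # multi-class via one-vs-rest
)
clf.fit(train_contexts, train_senses)
print(clf.predict(["deposit money bank teller"]))
```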

A Modified Viterbi Algorithm for Word Boundary Detection Error Compensation

  • 정훈; 정익주
    • The Journal of the Acoustical Society of Korea, Vol. 26, No. 1E, pp. 21-26, 2007
  • In this paper, we propose a modified Viterbi algorithm that compensates for endpoint detection errors during the decoding phase of an isolated word recognition task. Since the conventional Viterbi algorithm explores only the search space whose boundaries are fixed to the endpoints produced by the endpoint detector, recognition performance depends heavily on endpoint detection accuracy: inaccurately segmented word boundaries lead directly to recognition errors. To mitigate the degradation caused by endpoint detection errors, we describe an unconstrained search over word boundaries and present an algorithm that explores this enlarged search space efficiently. The proposed algorithm was evaluated on an isolated word recognition task under a variety of simulated endpoint detection error conditions. It reduced the word error rate (WER) considerably, from 84.4% to 10.6%, while requiring only slightly more computation.
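
A deliberately naive sketch of the boundary-relaxation idea: rather than the paper's efficient single-pass exploration of the enlarged search space, it simply re-scores every candidate (start, end) pair within a margin of the detected endpoints and keeps the best length-normalized Viterbi score. All names, the margin, and the normalization are illustrative.

```python
import numpy as np

def viterbi_score(log_pi, log_A, log_B):
    """Best-path log-likelihood of one word HMM over emissions log_B (T x S)."""
    delta = log_pi + log_B[0]
    for t in range(1, len(log_B)):
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[t]
    return delta.max()

def score_with_relaxed_boundaries(log_pi, log_A, log_B, start, end, margin=5):
    """Try every (start, end) within +/-margin of the detected endpoints."""
    T, best = len(log_B), -np.inf
    for s in range(max(0, start - margin), min(T - 2, start + margin) + 1):
        for e in range(max(s + 2, end - margin), min(T, end + margin) + 1):
            # Length-normalize so segments of different lengths compare fairly.
            best = max(best, viterbi_score(log_pi, log_A, log_B[s:e]) / (e - s))
    return best  # evaluate per word model; the highest-scoring word wins
```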

Korean Document Classification Using an Extended Vector Space Model

  • 이상곤
    • The KIPS Transactions: Part B, Vol. 18B, No. 2, pp. 93-108, 2011
  • In this paper, we propose an extended vector space model that uses information about ambiguous words and disambiguating words to improve the precision of Korean document classification. A vector in the vector space model can contain additional axes carrying equal weights; because existing methods did nothing with such axes, problems arise when comparing vectors. We define the words forming equally weighted axes as ambiguous words and identify them by computing the mutual information between words and categories. We define words that resolve the ambiguity created by an ambiguous word as disambiguating words, and we determine the strength of each disambiguating word by computing mutual information over the words that occur in the same documents as the ambiguous word. Using the ambiguous and disambiguating words, we extend the dimensions of the vectors and thereby improve the precision of document classification.
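
The selection of ambiguous words rests on word-category mutual information; below is a minimal pointwise sketch from document counts. The paper's exact MI formulation and the "equally weighted axes" test are not reproduced, only suggested in a comment.

```python
import math

def mutual_information(n_wc, n_w, n_c, n_total):
    """Pointwise MI of word w and category c, from document counts:
    n_wc = docs of category c containing w, n_w = docs containing w,
    n_c = docs of category c, n_total = all docs."""
    if n_wc == 0:
        return float("-inf")
    return math.log((n_wc / n_total) / ((n_w / n_total) * (n_c / n_total)))

# Hypothetical ambiguity test: a word whose MI is nearly equal for two or
# more categories forms an "equally weighted axis", i.e. an ambiguous word.
```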

Ternary Decomposition and Dictionary Extension for Khmer Word Segmentation

  • Sung, Thaileang; Hwang, Insoo
    • Journal of Information Technology Applications and Management, Vol. 23, No. 2, pp. 11-28, 2016
  • In this paper, we propose a dictionary extension and a ternary decomposition technique to improve the effectiveness of Khmer word segmentation. Most word segmentation approaches depend on a dictionary; however, the dictionaries in use are not fully reliable and cannot cover all the words of the Khmer language, which causes the problem of unknown (out-of-vocabulary) words. Our approach extends the original dictionary with new words to make it more reliable. In addition, we use ternary decomposition for the segmentation process. We also exploit the invisible space of Khmer Unicode (the zero-width space, U+200B) to segment our training corpus. With our segmentation algorithm, based on ternary decomposition and the invisible space, we can extract new words from the training text and add them to the dictionary. We then used the extended wordlist and a segmentation algorithm that does not rely on the invisible space to test unannotated text. Our results remarkably outperformed other approaches, achieving precision, recall, and F-measure of 88.8%, 91.8%, and 90.6%, respectively.
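
The dictionary-extension step driven by the zero-width space can be sketched directly; the function below is an assumption-laden simplification that treats every U+200B-delimited token of the training text as a candidate new word.

```python
ZWSP = "\u200b"  # Khmer Unicode "invisible space" (zero-width space)

def extend_dictionary(training_text, dictionary):
    """Add every ZWSP-delimited token not already in the dictionary."""
    for line in training_text.splitlines():
        for token in line.split(ZWSP):
            token = token.strip()
            if token and token not in dictionary:
                dictionary.add(token)
    return dictionary
```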

Empirical Comparison of Word Similarity Measures Based on Co-Occurrence, Context, and a Vector Space Model

  • Kadowaki, Natsuki; Kishida, Kazuaki
    • Journal of Information Science Theory and Practice, Vol. 8, No. 2, pp. 6-17, 2020
  • Word similarity is often measured to enhance system performance in information retrieval and related fields. This paper reports an experimental comparison of word similarity measures computed for 50 intentionally selected words from a Reuters corpus. Three families of measures were compared: (1) co-occurrence-based similarity measures (with co-occurrence frequency counted over documents or over sentences), (2) context-based distributional similarity measures obtained from latent Dirichlet allocation (LDA), nonnegative matrix factorization (NMF), and the Word2Vec algorithm, and (3) similarity measures computed from the tf-idf weights of each word under a vector space model (VSM). The Pearson correlation coefficient was highest between the VSM-based measures and the co-occurrence-based measures counted over documents. Group-average agglomerative hierarchical clustering was also applied to the similarity matrices computed by the individual measures; evaluating the resulting cluster sets against an answer set revealed that the VSM- and LDA-based similarity measures performed best.
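
A small sketch of two of the compared measure families, the tf-idf VSM measure and document-level co-occurrence, and of the Pearson correlation between them, using scikit-learn and SciPy; the toy corpus and word list are placeholders for the Reuters data.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["oil prices rose", "grain exports fell", "oil exports rose",
        "oil grain trade", "oil exports boom"]
words = ["oil", "exports", "grain"]

X = TfidfVectorizer(vocabulary=words).fit_transform(docs)  # docs x words

# (3) VSM-based: cosine similarity between tf-idf weight vectors of words.
vsm_sim = cosine_similarity(X.T)

# (1) Co-occurrence-based: number of documents containing both words.
binary = (X.toarray() > 0).astype(float)
cooc_sim = binary.T @ binary

# Correlate the two measures over distinct word pairs, as in the paper.
iu = np.triu_indices(len(words), k=1)
r, _ = pearsonr(vsm_sim[iu], cooc_sim[iu])
print(f"Pearson r between VSM and co-occurrence measures: {r:.3f}")
```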

SOC Verification Based on WGL

  • Du, Zhen-Jun; Li, Min
    • Journal of Korea Multimedia Society, Vol. 9, No. 12, pp. 1607-1616, 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions in SoCs, yet the common verification models are not well suited to SoCs. A novel model, the WGL (Weighted Generalized List), is proposed. It is based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity; timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model that linearly represents the common word-level functions, and that the bit-level WGL is especially suitable for arithmetic-intensive circuits; the model is thus a uniform and efficient model for both bit-level and word-level functions. Based on the WGL model, a backward-construction logic-verification approach is presented, which reduces the time and space complexity for multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented to support complex high-level verification; it combines order computation and coefficient solving and adopts an efficient backward approach. The construction complexity is much lower than that of existing methods, e.g., the construction time for multipliers grows at a power of less than 1.6 in the size of the input word without increasing the maximal space required. The WGL model and the verification methods based on it show theoretical and practical significance for SoC design.


Development of the Korean Handwriting Assessment for Children Using Digital Image Processing

  • Lee, Cho Hee; Kim, Eun Bin; Lee, Onseok; Kim, Eun Young
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 8, pp. 4241-4254, 2019
  • Adopting digital image processing can improve the efficiency and accuracy of handwriting measurement. This study developed a computer-based Korean handwriting assessment tool. Second graders participated by performing writing tasks with consonants, vowels, words, and sentences. We extracted boundary parameters for each letter using digital image processing and computed the variables of size, size coefficient of variation (CV), misalignment, inter-letter space, inter-word space, and the ratio of inter-letter space to inter-word space. The children were also administered traditional handwriting and visuomotor tests, and the digital variables were correlated with these tests. Using these correlations, we established a three-point scoring system that computes a score for each variable. We analyzed inter-rater reliability between the computer rater and a human rater, and test-retest reliability between the first and second performances. Validity was examined by analyzing the relationship between the Korean Handwriting Assessment and the previous handwriting and visuomotor tests. The proposed Korean Handwriting Assessment measures size, size consistency, misalignment, inter-letter space, inter-word space, and space ratio using digital image processing; it was shown to be reliable and valid, and is expected to be useful for assessing children's handwriting.
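
The boundary-parameter extraction can be sketched with standard OpenCV calls: binarize the scanned image, take external contours as letter candidates, and derive size and spacing statistics from the bounding boxes. The thresholding choice and the statistics are illustrative assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

def letter_boxes(gray):
    """Bounding boxes of letter candidates in a grayscale scan, left to right."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return sorted(cv2.boundingRect(c) for c in contours)  # (x, y, w, h) by x

def spacing_stats(boxes):
    """Size, size CV, and mean inter-letter space from sorted bounding boxes."""
    heights = np.array([h for _, _, _, h in boxes], dtype=float)
    gaps = np.array([boxes[i + 1][0] - (boxes[i][0] + boxes[i][2])
                     for i in range(len(boxes) - 1)], dtype=float)
    return {"size": heights.mean(),
            "size_cv": heights.std() / heights.mean(),
            "inter_letter_space": gaps.mean()}
```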

Automatic Correction of Word-Spacing Errors Using Syllable Bigrams

  • 강승식
    • Speech Sciences, Vol. 8, No. 2, pp. 83-90, 2001
  • We propose a probabilistic approach to the word-spacing problem using syllable bigrams. Syllable bigrams and their frequencies were extracted from a large corpus of 12 million words. Based on the syllable bigrams, we performed three experiments: (1) automatic word spacing, (2) detection and correction of word-spacing errors for a spelling checker, and (3) automatic insertion of a space at the end of a line in a character recognition system. Experimental results show accuracies of 97.7%, 82.1%, and 90.5%, respectively.
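
A toy sketch of experiment (1), automatic word spacing by syllable bigrams: count how often each syllable pair appears with and without an intervening space in a correctly spaced corpus, then insert a space wherever the spaced variant is more frequent. The paper's smoothing and thresholds are omitted.

```python
from collections import Counter

def train(corpus_lines):
    """Count syllable pairs seen with and without an intervening space."""
    spaced, unspaced = Counter(), Counter()
    for line in corpus_lines:
        for i in range(len(line) - 1):
            if line[i + 1] == " ":
                if i + 2 < len(line):
                    spaced[(line[i], line[i + 2])] += 1
            elif line[i] != " ":
                unspaced[(line[i], line[i + 1])] += 1
    return spaced, unspaced

def respace(text, spaced, unspaced):
    """Insert a space wherever the spaced variant of a bigram is more frequent."""
    if not text:
        return text
    out = [text[0]]
    for a, b in zip(text, text[1:]):
        if spaced[(a, b)] > unspaced[(a, b)]:
            out.append(" ")
        out.append(b)
    return "".join(out)
```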
