• Title/Abstract/Keyword: Korean word

Search results: 4,023 items (processing time 0.031 seconds)

한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상 (Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex))

  • 이정훈;조상현;권혁철
    • 한국멀티미디어학회논문지 / Vol. 25, No. 3 / pp.493-501 / 2022
  • This paper studies context-sensitive spelling error correction and uses the Korean WordNet (KorLex)[1], which defines the relationships between words as a graph, to improve the performance of a correction technique[2] based on the vector information of the embedded words. The Korean WordNet was constructed for Korean on the basis of WordNet[3] developed at Princeton University. To learn a semantic network in graph form, or to use its learned vector information, the network must be transformed into vector form by embedding learning. For this transformation, the nodes of the network are listed in a line, like the words of a sentence, by walks of limited length before being fed to training; DeepWalk[4] is one learning technique that uses this strategy, and it is used here to learn the graph of words in the Korean WordNet. The graph embedding is concatenated with the word vectors of the language model used for correction, and the final correction word is determined by the cosine distance between the vectors. To test whether the graph embedding information improves the performance of context-sensitive spelling error correction, confused word pairs were constructed and evaluated from the perspective of Word Sense Disambiguation (WSD). In the experiments, the average correction performance over all confused word pairs improved by 2.24% over the baseline.
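
The pipeline described in this abstract can be illustrated with a minimal sketch: random walks over a word graph are fed to a Skip-Gram model (the DeepWalk strategy), the resulting graph vectors are concatenated with language-model word vectors, and the correction candidate closest in cosine terms is chosen. The toy graph, the `lm_vectors` lookup, and the confusion set below are hypothetical stand-ins, not KorLex or the authors' models.

```python
# Illustrative sketch of a DeepWalk-style graph embedding plus cosine-based
# candidate selection. The graph, lm_vectors, and candidates are hypothetical.
import random
import numpy as np
from gensim.models import Word2Vec

# Toy semantic graph (adjacency lists); KorLex itself is far larger.
graph = {
    "사과": ["과일", "배"],
    "배": ["과일", "사과", "선박"],
    "과일": ["사과", "배"],
    "선박": ["배"],
}

def random_walks(graph, walks_per_node=10, walk_len=8):
    """List each node's neighborhood as 'sentences' of node names."""
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                node = random.choice(graph[node])
                walk.append(node)
            walks.append(walk)
    return walks

# Skip-Gram (sg=1) over the walks = DeepWalk-style graph embedding.
graph_model = Word2Vec(random_walks(graph), vector_size=50, window=4,
                       min_count=1, sg=1, epochs=20)

# Hypothetical language-model word vectors to concatenate with.
lm_vectors = {w: np.random.rand(100) for w in graph}

def combined(word):
    return np.concatenate([lm_vectors[word], graph_model.wv[word]])

def correct(context_vec, candidates):
    """Pick the confusion-set candidate closest (cosine) to the context vector."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda w: cos(context_vec, combined(w)))

print(correct(combined("과일"), ["사과", "선박"]))
```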

단어 의미와 자질 거울 모델을 이용한 단어 임베딩 (A Word Embedding used Word Sense and Feature Mirror Model)

  • 이주상;신준철;옥철영
    • 정보과학회 컴퓨팅의 실제 논문지 / Vol. 23, No. 4 / pp.226-231 / 2017
  • Word representation is important in natural language processing based on machine learning. Word representation is a way of expressing a word not as text but as a symbol that a computer can distinguish. Conventional word embeddings are learned from a large corpus, using the words surrounding each target word in a sentence. However, corpus-based word embeddings require a very large corpus in order to increase the occurrence frequency of words or the number of words to be learned. This paper presents a method that represents words as vectors without a corpus, using word definitions and semantic relations (hypernyms, antonyms) together with a feature mirror model, a modification of the Skip-Gram of Word2Vec. Compared with conventional Word2Vec, many more words could be represented as vectors from less data, and semantically similar words were confirmed to form similar vectors. It was also confirmed that the vectors of two words in an antonym relation are distinguished from each other.
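
As a rough illustration of the data setup only, the sketch below trains a plain Skip-Gram model on contexts built from dictionary glosses and hypernym/antonym links instead of a corpus. This is a generic stand-in, not the authors' feature mirror model (in particular it does not show how that model separates antonyms), and the tiny dictionary is invented for the example.

```python
# Generic Skip-Gram over gloss- and relation-derived contexts (illustration only;
# not the authors' feature mirror model). The tiny dictionary is invented.
from gensim.models import Word2Vec

# headword -> (gloss tokens, hypernyms, antonyms)
dictionary = {
    "기쁨": (["즐거운", "마음", "느낌"], ["감정"], ["슬픔"]),
    "슬픔": (["서러운", "마음", "느낌"], ["감정"], ["기쁨"]),
    "감정": (["마음", "상태"], [], []),
}

# One pseudo-sentence per headword: the word plus its gloss and related words.
sentences = []
for word, (gloss, hypernyms, antonyms) in dictionary.items():
    sentences.append([word] + gloss + hypernyms + antonyms)

model = Word2Vec(sentences, vector_size=50, window=10,
                 min_count=1, sg=1, epochs=50)
print(model.wv.similarity("기쁨", "슬픔"))
```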

The Analysis of a Causal Relationship of Traditional Korean Restaurant's Well-Being Attribute Selection on Customers' Re-Visitation and Word-of-Mouth

  • Baek, Hang-Sun;Shin, Chung-Sub;Lee, Sang-Youn
    • 동아시아경상학회지 / Vol. 4, No. 2 / pp.48-60 / 2016
  • This study analyzes the effects of a traditional Korean restaurant's well-being attribute selection on re-visitation and word-of-mouth intention. Based on the results, the study aims to provide basic data for establishing Korean restaurants' service and marketing strategies. The researchers surveyed 350 customers who visited a Korean restaurant located in Kangbook, Seoul; the collected data were coded and analyzed with the SPSS 17.0 statistical package. The results are as follows. First, under hypothesis 1, that a Korean restaurant's well-being attribute selection has a positive influence on re-visitation intention, sufficiency, healthiness, and steadiness were shown to have similar influences on re-visitation intention. Second, under hypothesis 2, that a Korean restaurant's well-being attribute selection has a positive influence on word-of-mouth intention, sufficiency, healthiness, environment, and steadiness were shown to have similar influences on word-of-mouth intention. Third, under hypothesis 3, that re-visitation intention has a positive influence on word-of-mouth intention, eliciting customers' re-visitation intention was found to influence word-of-mouth intention as well. It will be necessary to examine how to elicit customers' re-visitation or word-of-mouth intention by considering the factors that customers of traditional Korean restaurants value.

The influence of task demands on the preparation of spoken word production: Evidence from Korean

  • Choi, Tae-Hwan;Oh, Sujin;Han, Jeong-Im
    • 말소리와 음성과학 / Vol. 9, No. 4 / pp.1-7 / 2017
  • It has been shown in speech production studies that the preparation unit of spoken word production is language particular: onset phonemes for English and Dutch, syllables for Mandarin Chinese, and morae for Japanese. However, results have been inconsistent on whether the onset phoneme is a planning unit of spoken word production in Korean. In this study, two sets of experiments investigated possible influences of task demands on phonological preparation in native Korean adults, namely implicit priming and word naming with the form preparation paradigm. Only the word naming task, but not the implicit priming task, showed a significant onset priming effect, even though there were significant syllable priming effects in both tasks. Following the attentional theory (O'Séaghdha & Frazer, 2014), these results suggest that task demands might play a role in the absence or presence of onset priming effects in Korean. Native Korean speakers could maintain their attention to the shared onset phonemes in word naming, which is not very demanding, while they have difficulty allocating their attention to such units in the more cognitively demanding implicit priming task, even though both tasks involve accessing phonological codes. These findings demonstrate that there are cross-linguistic differences in the first selectable unit in the preparation of spoken word production, but that within a single language the preparation unit might not be immutable.

말소리 산출에서 단어빈도효과의 위치 : 그림-단어간섭과제에서 나온 증거 (The Locus of the Word Frequency Effect in Speech Production: Evidence from the Picture-word Interference Task)

  • 구민모;남기춘
    • 대한음성학회지:말소리 / No. 62 / pp.51-68 / 2007
  • Two experiments were conducted to determine the exact locus of the frequency effect in speech production. Experiment 1 addressed the question of whether the word frequency effect arises at the stage of lemma selection. A picture-word interference task was performed to test the significance of the interactions between the effects of target frequency, distractor frequency, and semantic relatedness. There were significant interactions between distractor frequency and semantic relatedness and between target and distractor frequency. Experiment 2 examined whether the word frequency effect is attributable to the lexeme level, which represents the phonological information of words. The methodological logic applied in Experiment 2 was the same as that of Experiment 1. There was no significant interaction between distractor frequency and phonological relatedness. These results demonstrate that word frequency influences the processes involved in selecting the correct lemma corresponding to an activated lexical concept in speech production.


단어 분류에 기반한 텍스트 영상 워터마킹 알고리즘 (An Algorithm for Text Image Watermarking based on Word Classification)

  • 김영원;오일석
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 32, No. 8 / pp.742-751 / 2005
  • This paper proposes a new text image watermarking algorithm based on word classification. Words are classified into K classes using simple features. A few neighboring words are combined into a segment, and each segment is also classified according to the classes of the words it contains. The same amount of signal is embedded in every segment. The signal is embedded by manipulating the statistics of the inter-word spaces belonging to the segment's class. A subjective comparison with existing word-shift algorithms is presented according to several criteria.
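
A minimal sketch of the idea: words are bucketed into K classes by a simple feature (here, hypothetical word widths), neighboring words form segments, and one bit is embedded per segment by nudging the inter-word spaces of one class up or down. The widths, class count, space step, and the choice of the segment's class are invented for illustration; the actual algorithm works on pixel gaps measured in the scanned text image.

```python
# Sketch of segment-wise watermark embedding via inter-word spaces.
# Word widths, K, and the space step are hypothetical; the real algorithm
# manipulates pixel gaps measured in the text image.
K = 3          # number of word classes
STEP = 1.0     # amount (in pixels) by which a class's spaces are shifted
WORDS_PER_SEGMENT = 4

def classify(word_width):
    """Class of a word from a simple feature (its width), bucketed into K classes."""
    return int(word_width // 20) % K

def embed(word_widths, spaces, bits):
    """Embed one bit per segment by shifting the spaces that follow words
    of the segment's class (here, the class of its first word, for simplicity)."""
    spaces = list(spaces)
    n_segments = len(word_widths) // WORDS_PER_SEGMENT
    for seg, bit in zip(range(n_segments), bits):
        lo = seg * WORDS_PER_SEGMENT
        seg_class = classify(word_widths[lo])
        for i in range(lo, lo + WORDS_PER_SEGMENT):
            if i < len(spaces) and classify(word_widths[i]) == seg_class:
                spaces[i] += STEP if bit else -STEP
    return spaces

widths = [35, 52, 18, 44, 61, 27, 39, 50]   # two segments of four words
spaces = [8.0] * len(widths)                # original inter-word gaps
print(embed(widths, spaces, bits=[1, 0]))
```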

Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소 (Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model)

  • 김광호;이동현;임민규;김지환
    • 말소리와 음성과학 / Vol. 7, No. 4 / pp.3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors are generated with Google's Word2Vec from a large training corpus so as to satisfy the distributional hypothesis. The 1-of-|V| coded discrete word vectors are then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time on the Wall Street Journal corpus (corpus length: 37M words) was reduced from 30 days to 14 days.
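
The input-layer replacement can be sketched as follows: instead of concatenating 1-of-|V| codes for the history words of a tri-gram context, the network receives the concatenation of their pre-trained continuous vectors. The vocabulary, the 300-dimensional vectors, and the two-word history layout are assumptions chosen so that the reduced input is 600-dimensional; the abstract itself only states the overall 20,000-to-600 reduction, and may count the baseline dimensionality per history word.

```python
# Sketch: replacing 1-of-|V| inputs with pre-trained continuous word vectors.
# Vocabulary, vector dimension, and the 2-word history layout are assumptions.
import numpy as np

V = 20_000      # vocabulary size
D = 300         # continuous word-vector dimension (assumed)
HISTORY = 2     # tri-gram LM: two history words predict the next word

# Stand-in for vectors trained with Word2Vec on a large corpus.
word_vectors = np.random.rand(V, D).astype(np.float32)

def one_hot_input(history_ids):
    """Baseline input: concatenated 1-of-|V| codes (HISTORY * V dims)."""
    x = np.zeros(HISTORY * V, dtype=np.float32)
    for slot, idx in enumerate(history_ids):
        x[slot * V + idx] = 1.0
    return x

def continuous_input(history_ids):
    """Reduced input: concatenated continuous vectors (HISTORY * D dims)."""
    return np.concatenate([word_vectors[idx] for idx in history_ids])

hist = [17, 4093]                        # hypothetical word indices
print(one_hot_input(hist).shape)         # (HISTORY * V,)
print(continuous_input(hist).shape)      # (HISTORY * D,) = (600,)
```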

영어어구의 위치에 따른 단어의 음향 변수 측정 (Measuring Acoustical Parameters of English Words by the Position in the Phrases)

  • 양병곤
    • 음성과학 / Vol. 14, No. 4 / pp.115-128 / 2007
  • The purposes of this paper were to develop an automatic script to collect acoustic parameters, namely duration, intensity, pitch, and the first two formant values, of English words produced by two native Canadian speakers, either alone or in a two-word phrase at normal speed, and to compare those values by position in the phrase. A Praat script was proposed to obtain comparable parameters at evenly divided time points of the target word. Results showed that the total duration of a word in a phrase was shorter than that of the same word produced alone. This was attributed to the native speakers' pronunciation style of generally placing the primary word stress on the first word of the phrase. The reduction ratio of the male speaker depended on the word's position in the phrase, while that of the female speaker did not. Moreover, intensity and pitch showed different contours depending on the position of the target word in the phrase, while the formant patterns remained almost the same. Further studies would be desirable to examine these parameters for words in authentic speech materials.
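
A small sketch of how such measurements can be collected programmatically, here using the praat-parselmouth Python wrapper rather than a native Praat script. The file name and the five evenly spaced measurement points are assumptions; the original study's script may differ.

```python
# Sketch: duration, pitch, intensity, and F1/F2 at evenly spaced time points,
# via the praat-parselmouth wrapper. File name and number of points are assumed.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("word.wav")      # hypothetical recording of one word
duration = snd.get_total_duration()

pitch = snd.to_pitch()
intensity = snd.to_intensity()
formant = snd.to_formant_burg()

n_points = 5
for i in range(1, n_points + 1):
    t = duration * i / (n_points + 1)    # evenly divided time points
    f0 = pitch.get_value_at_time(t)
    db = call(intensity, "Get value at time", t, "cubic")
    f1 = call(formant, "Get value at time", 1, t, "Hertz", "Linear")
    f2 = call(formant, "Get value at time", 2, t, "Hertz", "Linear")
    print(f"t={t:.3f}s  F0={f0}  dB={db}  F1={f1}  F2={f2}")
```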


레벤스타인 거리에 기초한 위치 정확도를 이용한 고립 단어 인식 결과의 비유사 후보 단어 제외 (Exclusion of Non-similar Candidates using Positional Accuracy based on Levenstein Distance from N-best Recognition Results of Isolated Word Recognition)

  • 윤영선;강점자
    • 말소리와 음성과학 / Vol. 1, No. 3 / pp.109-115 / 2009
  • Many isolated word recognition systems may generate non-similar words as recognition candidates because they use only acoustic information. In this paper, we investigate several techniques that can exclude non-similar words from the N-best candidate words by applying the Levenstein distance measure. First, word distance methods based on phone and syllable distances are considered: these use the Levenstein distance over the phones, or a double Levenstein distance algorithm over the syllables, of the candidates. Next, word similarity approaches are presented that use the positional information of the characters of the candidate words. After alignment between the source and target strings, each character position is labeled as inserted, deleted, or correct. The word similarities are then obtained from the characters' positional probabilities, that is, the frequency ratio of observing the same character at that position. Experimental results show that the proposed methods are effective at removing non-similar words from the N-best recognition candidates without loss of system performance.
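
The basic filtering step can be sketched as follows: a standard Levenstein (edit) distance is computed between the top recognition hypothesis and each other N-best candidate, and candidates beyond a threshold are discarded. The threshold, the choice of the top hypothesis as reference, and the syllable-level comparison are illustrative assumptions; the paper's positional-probability weighting is not reproduced here.

```python
# Sketch: dropping non-similar words from an N-best list with edit distance.
# Threshold and reference choice (top hypothesis) are assumptions; the paper's
# positional-probability weighting is not reproduced here.
def levenstein(a, b):
    """Standard dynamic-programming edit distance between two strings."""
    m, n = len(a), len(b)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[m][n]

def filter_nbest(nbest, max_dist=2):
    """Keep only candidates within max_dist edits of the top hypothesis."""
    top = nbest[0]
    return [w for w in nbest if levenstein(top, w) <= max_dist]

nbest = ["강남역", "강남대", "강서구청", "남대문"]   # hypothetical N-best list
print(filter_nbest(nbest))
```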


핵심어 인식기에서 단어의 음소레벨 로그 우도 비율의 패턴을 이용한 발화검증 방법 (Utterance Verification using Phone-Level Log-Likelihood Ratio Patterns in Word Spotting Systems)

  • 김정현;권석봉;김회린
    • 말소리와 음성과학 / Vol. 1, No. 1 / pp.55-62 / 2009
  • This paper proposes an improved method to verify keyword segments produced by a word spotting system. First, a baseline word spotting system is implemented. To improve its performance, we use a two-pass structure consisting of the word spotting system and an utterance verification system. When a basic likelihood ratio test (LRT) based utterance verification system is used to verify the keywords, certain problems lead to performance degradation. We therefore propose a method that uses phone-level log-likelihood ratio (PLLR) patterns in computing confidence measures for each keyword. The proposed method generates weights according to the PLLR patterns and assigns a different weight to each phone when generating the confidence measure of a keyword. The proposed method is shown to be better suited to word spotting systems, and it improves the final word spotting accuracy.
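
The confidence computation can be sketched as follows: per-phone log-likelihood ratios are turned into a keyword-level confidence by a weighted average whose weights depend on the shape of the PLLR pattern rather than being uniform. The softmax-style weighting and the acceptance threshold below are assumptions for illustration, not the exact scheme of the paper.

```python
# Sketch: keyword confidence from phone-level log-likelihood ratios (PLLRs).
# The softmax-style weighting of low-scoring phones and the threshold are
# illustrative assumptions, not the exact scheme of the paper.
import numpy as np

def keyword_confidence(pllr, temperature=1.0):
    """Weighted average of PLLRs; phones with low ratios (likely mismatches)
    get larger weights, so a single bad phone can pull the score down."""
    pllr = np.asarray(pllr, dtype=np.float64)
    weights = np.exp(-pllr / temperature)        # emphasize low-PLLR phones
    weights /= weights.sum()
    return float(np.dot(weights, pllr))

def accept(pllr, threshold=-0.5):
    """Accept the spotted keyword if the weighted confidence clears a threshold."""
    return keyword_confidence(pllr) >= threshold

print(accept([0.8, 0.5, 0.9, 0.7]))    # well-matched phones -> accepted
print(accept([0.8, -2.5, 0.9, 0.7]))   # one badly matched phone -> rejected
```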
