• Title/Summary/Keyword: word vector generation (단어 벡터 생성)

Understanding the semantic change of Hangeul using word embedding (단어 임베딩 기법을 이용한 한글의 의미 변화 파악)

  • Sun, Hyunseok;Lee, Yung-Seop;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.34 no.3 / pp.295-308 / 2021
  • In recent years, as many people post about their interests on social media or store documents in digital form thanks to the development of internet and computer technologies, the amount of text data generated has exploded. Accordingly, the demand for techniques that extract valuable information from this mass of documents is also increasing. In this study, we use statistical techniques to investigate how the meanings of Korean words change over time, based on public data consisting of presidential speech records and newspaper articles, and we present a strategy that can be utilized in the study of the diachronic change of Hangeul. The purpose of this study is to move beyond theoretical accounts of Korean language phenomena based on the intuition of linguists or native speakers, and instead to derive numerical evidence from publicly available documents that anyone can use, in order to explain changes in word meaning.
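
The time-sliced embedding strategy above can be illustrated with a minimal sketch: train a separate Word2Vec model per period and compare a target word's nearest neighbors across periods. The toy corpora below are invented stand-ins for the paper's presidential speeches and newspaper articles.

```python
# A minimal sketch of the time-sliced embedding idea (not the paper's code):
# train separate Word2Vec models on corpora from two periods and compare
# a target word's nearest neighbors to see whether its meaning shifted.
from gensim.models import Word2Vec

# Hypothetical tokenized corpora; the paper uses presidential speeches
# and newspaper articles instead.
corpus_1990s = [["economy", "growth", "export", "factory"],
                ["virus", "hospital", "infection", "patient"]] * 50
corpus_2020s = [["economy", "platform", "startup", "data"],
                ["virus", "computer", "malware", "security"]] * 50

model_a = Word2Vec(corpus_1990s, vector_size=50, window=3, min_count=1, seed=1)
model_b = Word2Vec(corpus_2020s, vector_size=50, window=3, min_count=1, seed=1)

# Low neighbor overlap between periods is a crude signal of semantic change.
for model, label in [(model_a, "1990s"), (model_b, "2020s")]:
    print(label, [w for w, _ in model.wv.most_similar("virus", topn=3)])
```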

An Implementation of Embedded Speaker Identifier for PDA (PDA를 위한 내장형 화자인증기의 구현)

  • Kim, Dong-Ju;Roh, Yong-Wan;Kim, Dong-Gyu;Chung, Kwang-Woo;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.286-289 / 2005
  • Conventional authentication based on physical tokens or passwords is vulnerable to loss, theft, and hacking. Biometric technologies using fingerprints, signatures, irises, voices, and faces are therefore being studied as security technologies, and some have reached practical use. In this paper, we implement an embedded speaker verifier, based on speech technology, on a PDA, one of the most widely deployed embedded systems. The verifier employs vector quantization and hidden Markov models, both widely used in speech processing, and, considering the hardware constraints of the PDA, two variants were implemented with different vector codebooks. In the first, the codebook is built only from the utterances spoken at speaker enrollment and is then used for verification; in the second, the codebook is built in advance from a large speech database and used at verification time. Performance was evaluated with five speakers each uttering five words ten times: the verifier using the speaker-dependent codebook achieved a verification rate of 88.8% and a rejection rate of 99.5%, the verifier using the speaker-independent codebook achieved 85.6% and 95.5%, and the average probabilities were 93.5% and 90.0%, respectively. The experiments showed that the speaker-independent verifier has a lower recognition rate than the speaker-dependent one, but it avoids the memory problem that arises when training the codebook in the speaker-dependent case.

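The vector quantization half of such a verifier can be sketched roughly as follows: a k-means codebook is built from a speaker's enrollment features, and a test utterance is accepted when its average quantization distortion is small. Everything here (random stand-in features, the 16-codeword size, the threshold rule) is an assumption for illustration, not the paper's implementation.

```python
# A toy sketch of the VQ side of a speaker verifier (assumed, not the
# paper's code): build a codebook with k-means over enrollment features,
# then accept/reject a test utterance by average quantization distortion.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
enroll = rng.normal(0.0, 1.0, size=(500, 12))    # stand-in for MFCC frames
test_same = rng.normal(0.0, 1.0, size=(100, 12))
test_other = rng.normal(2.0, 1.0, size=(100, 12))

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(enroll)

def distortion(frames, km):
    # mean distance from each frame to its nearest codeword
    return km.transform(frames).min(axis=1).mean()

# Crude acceptance threshold derived from the enrollment data itself.
threshold = 1.2 * distortion(enroll, codebook)
for name, frames in [("same speaker", test_same), ("impostor", test_other)]:
    print(name, "accept" if distortion(frames, codebook) < threshold else "reject")
```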

Categorization of Korean News Articles Based on Convolutional Neural Network Using Doc2Vec and Word2Vec (Doc2Vec과 Word2Vec을 활용한 Convolutional Neural Network 기반 한국어 신문 기사 분류)

  • Kim, Dowoo;Koo, Myoung-Wan
    • Journal of KIISE / v.44 no.7 / pp.742-747 / 2017
  • In this paper, we propose a novel approach that improves the performance of a Convolutional Neural Network (CNN) document classifier built on word2vec embeddings by making it behave like doc2vec in the document classification task. The Word Piece Model (WPM) is shown empirically to outperform other tokenization methods such as phrase-unit segmentation and part-of-speech tagging (classification rate: 79.5%). We then classified Korean news articles into ten categories by feeding the word and document vectors generated with WPM into the baseline and the proposed model. In the experiment, the proposed model achieved a higher classification rate (89.88%) than the baseline (86.89%), a 22.80% relative reduction in classification error. These results demonstrate that applying doc2vec to the document classification task is effective because doc2vec produces similar document vector representations for documents belonging to the same category.
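
A minimal sketch of the doc2vec-to-classifier pipeline, assuming gensim's Doc2Vec; the paper's CNN classifier is replaced with a logistic-regression head to keep the example short, and the tokenized snippets are invented.

```python
# Feed doc2vec document vectors to a classifier. The paper uses a CNN;
# logistic regression is substituted here purely for brevity.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

# Hypothetical tokenized news snippets with category labels.
docs = [(["stocks", "market", "rise"], "economy"),
        (["team", "wins", "final"], "sports")] * 30
tagged = [TaggedDocument(words, [i]) for i, (words, _) in enumerate(docs)]

d2v = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=40, seed=1)
X = [d2v.infer_vector(words) for words, _ in docs]
y = [label for _, label in docs]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([d2v.infer_vector(["market", "rise"])]))
```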

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers rely on voice for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records fall into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation; for each method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This goes beyond simple word-based similarity calculation to explore a new way of detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns of misinterpretation-error causes. The results show that extracting the initial consonants of syllables was the most effective method for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are hard to detect because nothing is flagged as unrecognized, it proposed diverse text separation methods and identified one that improves performance remarkably. Second, applied to conversational agents or voice recognition services that require neologism detection, it makes it possible to characterize error patterns arising from the voice recognition stage, so that services matching the user's intended result can be provided even for interactions not categorized as errors.
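
The syllable separation underlying the initial-consonant idea can be sketched directly from the Unicode layout of Hangul: each composed syllable decomposes into initial, medial, and final jamo by arithmetic on its code point. The similarity measure below (difflib's SequenceMatcher over jamo strings) is an illustrative assumption, not the paper's metric.

```python
# Syllable-level decomposition for Korean (assumed implementation, not the
# paper's code): split each Hangul syllable into jamo, then score word
# similarity on the resulting jamo sequences.
from difflib import SequenceMatcher

CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_jamo(word):
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:          # a composed Hangul syllable
            out.append(CHOSEONG[code // 588])       # initial consonant
            out.append(JUNGSEONG[(code % 588) // 28])
            if code % 28:
                out.append(JONGSEONG[code % 28])
        else:
            out.append(ch)
    return "".join(out)

def jamo_similarity(a, b):
    return SequenceMatcher(None, to_jamo(a), to_jamo(b)).ratio()

print(jamo_similarity("신조어", "신조아"))  # near-homophones score high
```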

Multiple Cause Model-based Topic Extraction and Semantic Kernel Construction from Text Documents (다중요인모델에 기반한 텍스트 문서에서의 토픽 추출 및 의미 커널 구축)

  • Chang, Jeong-Ho;Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.31 no.5 / pp.595-604 / 2004
  • Automatic analysis of concepts or semantic relations in text documents enables not only efficient acquisition of relevant information but also comparison of documents at the concept level. We present a multiple cause model-based approach to text analysis, in which latent topics are automatically extracted from document sets and similarity between documents is measured with semantic kernels constructed from the extracted topics. In our approach, a document is assumed to be generated by various combinations of underlying topics. A topic is defined by a set of words that relate to the same theme or frequently co-occur within a document. In a network representing a multiple cause model, each topic is identified by the group of words with high connection weights from a latent node. Because learning and inference in multiple cause models require approximation methods, we utilize an approximation based on Helmholtz machines. In an experiment on the TDT-2 data set, we extract sets of meaningful words in which each set contains theme-specific terms. Using semantic kernels constructed from the latent topics extracted by multiple cause models, we also achieve significant improvements in retrieval effectiveness over the basic vector space model.
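
The semantic kernel construction can be shown schematically: once a word-to-topic loading matrix is available (however it was learned), document similarity is an inner product in topic space. The tiny hand-set matrix below is an assumption; the paper learns such loadings with a Helmholtz-machine approximation.

```python
# Schematic topic-based semantic kernel (assumed form, not the paper's
# code): given a word-by-topic loading matrix T, document similarity is
# measured after projecting bag-of-words vectors through T,
# i.e. k(d1, d2) = (T^T d1) . (T^T d2).
import numpy as np

vocab = ["game", "score", "team", "election", "vote", "party"]
T = np.array([[0.9, 0.0],   # "game"     loads on topic 0 (sports)
              [0.8, 0.0],   # "score"
              [0.7, 0.1],   # "team"
              [0.0, 0.9],   # "election" loads on topic 1 (politics)
              [0.1, 0.8],   # "vote"
              [0.2, 0.7]])  # "party"

def kernel(d1, d2):
    return float((T.T @ d1) @ (T.T @ d2))

doc_sports = np.array([2, 1, 1, 0, 0, 0])    # term counts
doc_politics = np.array([0, 0, 1, 2, 1, 1])
print(kernel(doc_sports, doc_sports), kernel(doc_sports, doc_politics))
```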

Multi-Document Summarization Method Based on Semantic Relationship using VAE (VAE를 이용한 의미적 연결 관계 기반 다중 문서 요약 기법)

  • Baek, Su-Jin
    • Journal of Digital Convergence / v.15 no.12 / pp.341-347 / 2017
  • As the amount of document data grows, users need summarized information to understand documents. However, existing document summarization methods rely on overly simple statistics, and research on summarizing multiple documents, handling sentence ambiguity, and generating meaningful sentences remains insufficient. In this paper, we investigate semantic connections between sentences and a preprocessing step that removes unnecessary information. Based on lexical semantic pattern information, we propose a multi-document summarization method that uses a VAE to strengthen semantic connectivity between sentences. Using sentence word vectors, sentences are reconstructed after learning from the compressed information and attribute discriminators generated as latent variables, and semantic connection processing yields a natural summary sentence. Compared with other document summarization methods, the proposed method showed a small but consistent performance improvement, demonstrating that semantic sentence generation and connectivity can be increased. In future work, we will study how to extend semantic connections by experimenting with various attribute settings.
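
A compact sketch of the VAE ingredient, using a generic Gaussian-latent VAE in PyTorch on stand-in sentence vectors; the paper's attribute discriminators and lexical semantic patterns are not reproduced.

```python
# A generic VAE over sentence-vector inputs (assumed stand-in, not the
# paper's model): encode to a latent Gaussian, reparameterize, decode,
# and train on reconstruction loss plus a KL term.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, dim=64, latent=8):
        super().__init__()
        self.enc = nn.Linear(dim, 32)
        self.mu = nn.Linear(32, latent)
        self.logvar = nn.Linear(32, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 64)              # stand-in for sentence vectors
for _ in range(100):
    recon, mu, logvar = model(x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = nn.functional.mse_loss(recon, x) + kl
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```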

e-Learning Course Reviews Analysis based on Big Data Analytics (빅데이터 분석을 이용한 이러닝 수강 후기 분석)

  • Kim, Jang-Young;Park, Eun-Hye
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.2 / pp.423-428 / 2017
  • These days, a large amount of varied educational information is rapidly increasing and spreading due to Internet and smart device usage. As e-Learning use grows, many instructors and students (learners) want to maximize learning outcomes and the efficiency of the education system through big data analytics over online recorded educational data. In this paper, the author applies the Word2Vec algorithm (a neural network algorithm) to find similarities among education-related words, and a clustering algorithm to classify them, in order to recognize and analyze online recorded educational data objectively. When Word2Vec is applied to education-related words, words with related meanings acquire similar vector values through repeated training and can thus be found and classified. In addition, the experimental results show that words of the same part of speech (noun, verb, adjective, and adverb) share the shortest distance from the centroid under the clustering algorithm.
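
A toy version of this pipeline: learn word vectors from review-like sentences with gensim's Word2Vec, then cluster the vocabulary with k-means so related-meaning words fall in the same group. The three-sentence corpus is invented.

```python
# Word2Vec + clustering over a vocabulary (assumed sketch, not the
# authors' code): words that co-occur in similar review contexts get
# similar vectors and land in the same k-means cluster.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

reviews = [["lecture", "clear", "helpful"],
           ["video", "slow", "boring"],
           ["quiz", "hard", "confusing"]] * 40

w2v = Word2Vec(reviews, vector_size=30, window=2, min_count=1, seed=1)
words = list(w2v.wv.index_to_key)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(w2v.wv[words])
for word, label in zip(words, km.labels_):
    print(label, word)
```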

Hierarchical and Incremental Clustering for Semi Real-time Issue Analysis on News Articles (준 실시간 뉴스 이슈 분석을 위한 계층적·점증적 군집화)

  • Kim, Hoyong;Lee, SeungWoo;Jang, Hong-Jun;Seo, DongMin
    • The Journal of the Korea Contents Association / v.20 no.6 / pp.556-578 / 2020
  • There has been much research on analyzing issues from real-time news streams, but little of it analyzes issues hierarchically from news articles, and a previous hierarchical issue-analysis method slows down as the number of news articles grows. In this paper, we propose hierarchical and incremental clustering for semi-real-time issue analysis on news articles. We train a Siamese neural network-based weighted cosine similarity model, apply it to the k-means algorithm to build word clusters, and convert news articles into document vectors using these word clusters. Finally, we initialize an issue cluster tree from the document vectors, update the tree whenever new articles arrive, and analyze issues in semi-real time. Experiments and evaluation show an improvement of up to about 0.26 in terms of NMI, and the incremental clustering runs about 10 times faster than before.
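
The incremental step can be sketched as follows: each arriving document vector joins its most similar centroid under cosine similarity, or opens a new cluster below a threshold. The Siamese-network weighted cosine and the cluster tree from the paper are omitted; plain cosine and a flat cluster list stand in for them.

```python
# Minimal incremental clustering (assumed sketch): assign each incoming
# document vector to the most similar centroid, updating it as a running
# mean, or start a new cluster when similarity is below a threshold.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

centroids, counts = [], []

def add_document(vec, threshold=0.8):
    if centroids:
        sims = [cosine(vec, c) for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            # running-mean centroid update
            centroids[best] = (centroids[best] * counts[best] + vec) / (counts[best] + 1)
            counts[best] += 1
            return best
    centroids.append(vec.astype(float)); counts.append(1)
    return len(centroids) - 1

rng = np.random.default_rng(0)
for _ in range(10):
    print(add_document(rng.normal(size=8)))
```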

Lossless Coding Scheme for Lattice Vector Quantizer Using Signal Set Partitioning Method (Signal Set Partitioning을 이용한 격자 양자화의 비 손실 부호화 기법)

  • Kim, Won-Ha
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.6 / pp.93-105 / 2001
  • In the lossless stage of lattice vector quantization (LVQ), the lattice codewords produced in the quantization stage are enumerated into a radius sequence and an index sequence. The radius sequence is run-length coded and then entropy coded, while the index sequence is represented by fixed-length binary bits. As the bit rate increases, the index bits grow linearly and degrade coding performance. To reduce the index bits across a wide range of bit rates, we developed a novel lattice enumeration algorithm that adopts the set partitioning method. The proposed enumeration shifts large index values down to smaller ones, thereby reducing the index bits. When the proposed lossless coding scheme is applied to wavelet-based image coding, it saves more than 10% over the conventional lossless coding method at bit rates above 0.3 bits/pixel, and the improvement grows as the bit rate increases.

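The lossless stage described above can be illustrated in outline: run-length code the radius sequence (which then feeds an entropy coder), and spend fixed-length bits per index. The paper's set-partitioning enumeration, which is the actual contribution, is not reproduced here.

```python
# Outline of the lossless stage only (assumed sketch): run-length code
# the radius sequence; each index is then written with fixed-length bits.
import math
from itertools import groupby

radii = [0, 0, 0, 1, 1, 2, 0, 0, 3, 3, 3, 3]   # hypothetical shell radii
runs = [(value, len(list(group))) for value, group in groupby(radii)]
print(runs)  # [(0, 3), (1, 2), (2, 1), (0, 2), (3, 4)] -> entropy coder input

# Fixed-length index cost for a shell with n codewords: ceil(log2(n)) bits.
print(math.ceil(math.log2(24)))  # e.g. 5 bits per index for a 24-point shell
```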

Image Compression Using DCT Map FSVQ and Single - side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society / v.4 no.10 / pp.2615-2628 / 1997
  • In this paper, a new codebook design algorithm is proposed. It uses a DCT map based on the two-dimensional discrete cosine transform (2D DCT) and a finite state vector quantizer (FSVQ) when the vector quantizer is designed for image transmission. The map is made by dividing the input image according to edge quantity, and the significant features of the training image are then extracted with the 2D DCT according to the map. A master codebook for the FSVQ is generated by partitioning the training set with a binary tree; the state codebook is constructed from the master codebook, and the index of an input image is then searched in the state codebook rather than the master codebook. Because index coding is an important part of high-speed digital transmission, fixed-length codes are converted to variable-length codes following the entropy coding rule, with Huffman coding assigning transmission codes to the codebook entries. This paper proposes a single-side growing Huffman tree to speed up the code-generation process of the Huffman tree. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm improves picture quality by 2.04 dB and 2.48 dB over PNN, and by 1.75 dB and 0.99 dB over CVQ, respectively.

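For reference, a standard Huffman construction over symbol frequencies is sketched below; the paper's single-side growing tree is a speed-oriented variant of this process and is not reproduced.

```python
# Standard Huffman code construction (for reference only; not the paper's
# single-side growing variant): repeatedly merge the two least frequent
# subtrees, prefixing their codes with 0 and 1.
import heapq
from collections import Counter

def huffman_codes(symbols):
    # heap entries: [frequency, tie-breaker id, {symbol: code-so-far}]
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, next_id, merged])
        next_id += 1
    return heap[0][2]

print(huffman_codes("aaaabbbccd"))  # frequent symbols get shorter codes
```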