• Title/Summary/Keyword: class embedding (클래스 임베딩)

Search Results: 9

Automatic Classification of Frequently Asked Questions Using Class Embedding and Attentive Recurrent Neural Network (클래스 임베딩과 주의 집중 순환 신경망을 이용한 자주 묻는 질문의 자동 분류)

  • Jang, Youngjin;Kim, Harksoo;Kim, Sebin;Kang, Dongho;Jang, Hyunki
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.367-370
    • /
    • 2018
  • Web and mobile users obtain the services they want through the frequently asked questions (FAQ) built into customer centers. However, FAQs have the drawback that users must type keywords themselves and then sift through the retrieved results for the information they need. To address this problem, this paper proposes a sentence classification model that takes a user query as input and classifies it into the corresponding class. For robustness against the typos and spelling errors common in web and mobile environments, the proposed model uses a grapheme(jaso)-level convolutional neural network, together with a sentence classification system that combines class embedding with the attention mechanism, which has shown large performance gains in natural language processing beyond machine translation. In experiments on 457-class and 769-class classification, the model achieved Micro F1 scores of 81.32% and 61.11%, respectively.

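The following is a hypothetical PyTorch sketch (not the authors' code) of the class-embedding idea in the entry above: a grapheme(jaso)-level CNN encodes the query for robustness to typos, and the sentence vector is scored against one learnable embedding per FAQ class. The attentive recurrent component of the paper's model is omitted, and all names and dimensions are illustrative.

```python
# Hypothetical sketch: jaso-level CNN encoder scored against class embeddings.
import torch
import torch.nn as nn

class JasoCNNClassEmbedding(nn.Module):
    def __init__(self, num_jaso, num_classes, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.jaso_emb = nn.Embedding(num_jaso, emb_dim, padding_idx=0)
        # Convolutions over grapheme (jaso) sequences for robustness to typos
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, hidden_dim, k, padding=k // 2) for k in (3, 5, 7)]
        )
        # One learnable embedding per FAQ class
        self.class_emb = nn.Embedding(num_classes, hidden_dim * 3)

    def forward(self, jaso_ids):                      # (batch, seq_len)
        x = self.jaso_emb(jaso_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        sent = torch.cat(feats, dim=1)                # (batch, hidden_dim * 3)
        # Score the query representation against every class embedding
        return sent @ self.class_emb.weight.T         # (batch, num_classes)

model = JasoCNNClassEmbedding(num_jaso=70, num_classes=457)
logits = model(torch.randint(1, 70, (2, 50)))
print(logits.shape)  # torch.Size([2, 457])
```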

Class Language Model based on Word Embedding and POS Tagging (워드 임베딩과 품사 태깅을 이용한 클래스 언어모델 연구)

  • Chung, Euisok;Park, Jeon-Gue
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.7
    • /
    • pp.315-319
    • /
    • 2016
  • Recurrent neural network based language models (RNN LMs) have shown improved results in language modeling research. However, RNN LMs are limited to post-processing stages, such as the N-best rescoring step of wFST-based speech recognition, and they suffer from considerable vocabulary problems that require large computing power for LM training. In this paper, we explore a first-pass N-gram model built with word embeddings, i.e., a simplified deep neural network. A class-based language model (LM) is one way to approach this issue. We build a class-based vocabulary through word embeddings and combine the class LM with a word N-gram LM to evaluate LM performance. In addition, we show that a part-of-speech (POS) tagging based LM improves perplexity in all types of LM tests.
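
As a rough illustration of the class-based vocabulary idea in the entry above, here is a hypothetical sketch (not the authors' pipeline): word embedding vectors are clustered with k-means, and each word's cluster ID then serves as its class in a class-based N-gram LM. The vocabulary and vectors are toy stand-ins.

```python
# Hypothetical sketch: derive word classes for a class-based LM by clustering
# word embedding vectors.
import numpy as np
from sklearn.cluster import KMeans

vocab = ["seoul", "busan", "monday", "friday", "run", "walk"]
rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(len(vocab), 50))   # stand-in for trained embeddings

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(word_vectors)
word_to_class = {w: int(c) for w, c in zip(vocab, kmeans.labels_)}
print(word_to_class)   # with real embeddings, similar words share a class ID
```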

An Embedding Similarity-based Deep Learning Model for Detecting Displacement in Cultural Asset Images (목조 문화재 영상에서의 크랙을 감지하기 위한 임베딩 유사도 기반 딥러닝 모델)

  • Kang, Jaeyong;Kim, Inki;Lim, Hyunseok;Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.133-135
    • /
    • 2021
  • This paper proposes an embedding similarity-based model for detecting regions where cracks, one type of displacement phenomenon, occur in images of wooden cultural assets. First, training images consisting only of normal (displacement-free) samples are passed through a pre-trained convolutional neural network to extract embedding vectors. The parameters of the normal-class distribution are then estimated from these embedding vectors. At inference time, an embedding vector is likewise computed for each test image. The distance between the test image's embedding vector and the previously estimated Gaussian distribution representing the normal class is then computed to produce an anomaly map, from which the regions containing displacement are finally detected. As the dataset, we used images of wooden cultural assets collected at heritage sites near Chungju, labeled as normal or abnormal. Experimental results confirm that the proposed embedding similarity-based model detects crack-induced displacement regions in wooden cultural assets well, showing that the proposed method is well suited to detecting displacement regions caused by cracks in wooden cultural assets.

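A minimal numpy sketch of the embedding-similarity scoring described above (assumptions, not the authors' code): a Gaussian is fitted to embedding vectors extracted from normal images, and test embeddings are scored by Mahalanobis distance, where a large distance suggests a displacement (crack) region. The random arrays stand in for features from a pre-trained CNN.

```python
# Hypothetical sketch: Gaussian fit on normal embeddings + Mahalanobis scoring.
import numpy as np

def fit_normal_distribution(normal_embeddings):
    """normal_embeddings: (n_samples, dim) features from a pre-trained CNN."""
    mean = normal_embeddings.mean(axis=0)
    cov = np.cov(normal_embeddings, rowvar=False) + 1e-5 * np.eye(normal_embeddings.shape[1])
    return mean, np.linalg.inv(cov)

def anomaly_score(test_embedding, mean, cov_inv):
    diff = test_embedding - mean
    return float(np.sqrt(diff @ cov_inv @ diff))   # Mahalanobis distance

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 64))        # stand-in for embeddings of normal images
mean, cov_inv = fit_normal_distribution(normal)
print(anomaly_score(rng.normal(size=64) + 3.0, mean, cov_inv))  # large -> likely crack
```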

An Efficient Deep Learning Ensemble Using a Distribution of Label Embedding

  • Park, Saerom
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.27-35
    • /
    • 2021
  • In this paper, we propose a new stacking ensemble framework for deep learning models that reflects the distribution of label embeddings. Our ensemble framework consists of two phases: training the baseline deep learning classifier, and training sub-classifiers based on the clustering results of the label embeddings. The framework aims to divide a multi-class classification problem into small sub-problems based on these clustering results. Clustering is conducted on the label embeddings obtained from the weights of the last layer of the baseline classifier. After clustering, sub-classifiers are constructed to classify the sub-classes in each cluster. From the experimental results, we found that the label embeddings reflect the relationships between classification labels well, and that our ensemble framework can improve classification performance on the CIFAR-100 dataset.
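
The clustering step above can be illustrated with a small hypothetical sketch (not the paper's implementation): the rows of the baseline classifier's last-layer weight matrix are treated as label embeddings and clustered, and each cluster defines the sub-classes handled by one sub-classifier.

```python
# Hypothetical sketch: cluster label embeddings taken from last-layer weights;
# each cluster becomes the class set of one sub-classifier in the ensemble.
import numpy as np
from sklearn.cluster import KMeans

num_classes, feat_dim = 100, 512                  # e.g. a CIFAR-100-like setting
rng = np.random.default_rng(0)
last_layer_weights = rng.normal(size=(num_classes, feat_dim))  # stand-in for trained weights

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(last_layer_weights)
sub_problems = {c: np.where(clusters == c)[0].tolist() for c in range(10)}
print(sub_problems[0][:5])   # class indices assigned to sub-classifier 0
```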

Word Sense Classification Using Support Vector Machines (지지벡터기계를 이용한 단어 의미 분류)

  • Park, Jun Hyeok;Lee, Songwook
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.563-568
    • /
    • 2016
  • The word sense disambiguation problem is to determine, for an ambiguous word with multiple dictionary senses, the correct sense in a given sentence. We regard this problem as a multi-class classification problem and classify the ambiguous word using Support Vector Machines. Context words of the ambiguous word, extracted from the Sejong sense-tagged corpus, are represented in two kinds of vector space. One vector space consists of context word vectors with binary weights; in the other, the context words are mapped by a word embedding model. In experiments, we achieved an accuracy of 87.0% with the binary context word vectors and 86.0% with the word embedding model.
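
Below is a hypothetical scikit-learn sketch (random toy data, not the Sejong corpus) of the two feature spaces compared above: binary bag-of-context-words vectors versus averaged word-embedding vectors, each fed to a linear SVM.

```python
# Hypothetical sketch: SVM word sense classification with two feature spaces.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_samples, vocab_size, emb_dim = 200, 300, 100
senses = rng.integers(0, 3, size=n_samples)                  # 3 senses of one ambiguous word

binary_context = (rng.random((n_samples, vocab_size)) < 0.05).astype(float)  # binary context words
avg_embeddings = rng.normal(size=(n_samples, emb_dim))       # stand-in for averaged word vectors

for name, X in [("binary context words", binary_context), ("word embeddings", avg_embeddings)]:
    clf = LinearSVC(max_iter=5000).fit(X[:150], senses[:150])
    print(name, clf.score(X[150:], senses[150:]))
```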

Detection of M:N corresponding class group pairs between two spatial datasets with agglomerative hierarchical clustering (응집 계층 군집화 기법을 이용한 이종 공간정보의 M:N 대응 클래스 군집 쌍 탐색)

  • Huh, Yong;Kim, Jung-Ok;Yu, Ki-Yun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.2
    • /
    • pp.125-134
    • /
    • 2012
  • In this paper, we propose a method for analyzing M:N corresponding relations in semantic matching, focusing in particular on feature class matching. The similarity between any pair of classes is measured from the spatial objects that coexist in both classes, and corresponding classes are obtained by clustering on these pairwise similarities. We apply a graph embedding method that constructs a global configuration of the classes in a low-dimensional Euclidean space while preserving the pairwise similarities, so that the distances between embedded classes are proportional to the overall degree of similarity along the edge paths in the graph. The clustering problem can then be solved by applying a general clustering algorithm to the embedded coordinates. We applied the proposed method to polygon object layers in a topographic map and land parcel categories in a cadastral map of the Suwon area, and evaluated the results using F-measures of the detected class pairs. Some class pairs that would not be detected by an analysis of nominal class names alone were detected by the proposed method.
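
A hypothetical scikit-learn sketch of the embed-then-cluster idea above (spectral embedding as one possible graph embedding; not the authors' implementation): a precomputed pairwise similarity matrix between classes is embedded into a low-dimensional Euclidean space, and agglomerative hierarchical clustering on the embedded coordinates yields candidate M:N class groups.

```python
# Hypothetical sketch: embed classes from a pairwise similarity matrix, then
# find corresponding class groups by agglomerative clustering.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import AgglomerativeClustering

# Toy symmetric similarity matrix between 6 feature classes from two datasets
S = np.array([
    [1.0, 0.9, 0.1, 0.1, 0.0, 0.0],
    [0.9, 1.0, 0.2, 0.1, 0.0, 0.0],
    [0.1, 0.2, 1.0, 0.8, 0.1, 0.0],
    [0.1, 0.1, 0.8, 1.0, 0.1, 0.1],
    [0.0, 0.0, 0.1, 0.1, 1.0, 0.7],
    [0.0, 0.0, 0.0, 0.1, 0.7, 1.0],
])

coords = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(S)
labels = AgglomerativeClustering(n_clusters=3).fit_predict(coords)
print(labels)   # each cluster is a candidate M:N corresponding class group
```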

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been active and has shown remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification (one label from two classes), multi-class classification (one label from several classes), and multi-label classification (multiple labels from several classes). Multi-label classification in particular requires a training method different from binary and multi-class classification because each instance can carry multiple labels. Moreover, as the number of labels and classes grows, the number of labels to be predicted also grows, so prediction becomes harder and performance improvement becomes difficult. To overcome these limitations, label embedding research has been active: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the original high-dimensional label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels by random transformation, they cannot capture non-linear relationships between labels and therefore cannot build a latent label space that sufficiently preserves the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding, most notably label embedding with an autoencoder, a deep learning model that is effective for data compression and reconstruction. However, traditional autoencoder-based label embedding suffers from large information loss when a high-dimensional label space with a huge number of classes is compressed into a low-dimensional latent label space, which is related to the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to address this problem: by adding a layer's input to its output, they prevent gradients from vanishing during backpropagation and allow efficient training even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies that use them in autoencoders or in the label embedding process are still lacking. Therefore, this study proposes an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that faithfully reflects the information of the high-dimensional label space. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space.
Using this, we conducted an experiment in which the compressed keyword vector in the latent label space is predicted from the paper abstract, and multi-label classification is evaluated by restoring the predicted keyword vector to the original label space. As a result, multi-label classification based on the proposed methodology far outperformed traditional multi-label classification methods in accuracy, precision, recall, and F1 score. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves multi-label classification performance itself. In addition, the utility of the proposed methodology was examined by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
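
A minimal, hypothetical PyTorch sketch of a label-embedding autoencoder with skip connections, in the spirit of the methodology above (not the authors' architecture; layer sizes are illustrative): residual blocks add each layer's input back to its output in both the encoder and the decoder, and the bottleneck plays the role of the low-dimensional latent label space.

```python
# Hypothetical sketch: autoencoder for multi-hot label vectors with skip
# (residual) connections; the bottleneck is the latent label space.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.fc(x))      # skip connection: add input to output

class LabelAutoEncoder(nn.Module):
    def __init__(self, num_labels=1000, hidden=256, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_labels, hidden), nn.ReLU(),
            ResidualBlock(hidden),
            nn.Linear(hidden, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            ResidualBlock(hidden),
            nn.Linear(hidden, num_labels),
        )

    def forward(self, y):                       # y: multi-hot label vector
        z = self.encoder(y)                     # low-dimensional latent label space
        return torch.sigmoid(self.decoder(z)), z

model = LabelAutoEncoder()
y = (torch.rand(4, 1000) < 0.01).float()        # toy multi-hot label vectors
y_hat, z = model(y)
print(y_hat.shape, z.shape)                     # torch.Size([4, 1000]) torch.Size([4, 32])
```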

A Study on Patent Literature Classification Using Distributed Representation of Technical Terms (기술용어 분산표현을 활용한 특허문헌 분류에 관한 연구)

  • Choi, Yunsoo;Choi, Sung-Pil
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.53 no.2
    • /
    • pp.179-199
    • /
    • 2019
  • In this paper, we propose optimal methodologies for classifying patent literature by examining various feature extraction methods and machine learning and deep learning models, and we identify the best-performing configuration through experiments. As feature extraction, we compared the traditional BoW method with a distributed representation (word embedding) method, and we compared morphological analysis with multi-grams as the way of constructing the document collection. Classification performance was verified using traditional machine learning models and a deep learning model. Experimental results show that the best performance is achieved when the deep learning model is applied with distributed-representation, morphological-analysis-based feature extraction. In the Section, Class, and Subclass classification experiments, we improved performance by 5.71%, 18.84%, and 21.53%, respectively, compared with traditional classification methods.
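
To make the feature comparison above concrete, here is a small hypothetical scikit-learn sketch (toy documents, not the patent corpus or the authors' setup): TF-IDF bag-of-words features versus mean word-embedding features, each used with a simple classifier.

```python
# Hypothetical sketch: BoW (TF-IDF) features vs. averaged word-embedding features.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["battery charging circuit", "wireless charging coil",
        "gene editing method", "protein expression vector"]
labels = [0, 0, 1, 1]                              # toy IPC-like classes

# (a) Bag-of-words features
bow = TfidfVectorizer().fit_transform(docs)
print(LogisticRegression().fit(bow, labels).score(bow, labels))

# (b) Averaged word-embedding features (random vectors stand in for trained ones)
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for d in docs for w in d.split()}
X = np.array([np.mean([emb[w] for w in d.split()], axis=0) for d in docs])
print(LogisticRegression().fit(X, labels).score(X, labels))
```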

A Deep Learning-based Depression Trend Analysis of Korean on Social Media (딥러닝 기반 소셜미디어 한글 텍스트 우울 경향 분석)

  • Park, Seojeong;Lee, Soobin;Kim, Woo Jung;Song, Min
    • Journal of the Korean Society for information Management
    • /
    • v.39 no.1
    • /
    • pp.91-117
    • /
    • 2022
  • The number of depressed patients in Korea and around the world is rapidly increasing every year. However, most mentally ill patients are not aware that they are suffering from the disease, so adequate treatment is not provided. Neglected depressive symptoms can lead to suicide, anxiety, and other psychological problems, so early detection and treatment of depression are very important for improving mental health. To address this problem, this study presents a deep learning-based depression tendency model using Korean social media text. After collecting data from Naver KnowledgeiN, Naver Blog, Hidoc, and Twitter, the DSM-5 major depressive disorder diagnostic criteria were used to classify and annotate classes according to the number of depressive symptoms. TF-IDF analysis and word co-occurrence analysis were then performed to examine the characteristics of each class in the constructed corpus. In addition, word embedding, dictionary-based sentiment analysis, and LDA topic modeling were performed to build a depression tendency classification model using various text features; from these, the embedded text, sentiment score, and topic number of each document were computed and used as text features. As a result, the highest accuracy of 83.28% was achieved when depression tendency was classified with the KorBERT-based model by combining both the sentiment score and the topic of the document with the embedded text. This study is significant in that it builds a classification model for Korean depression tendency with improved performance using various text features, and can detect potential depressive patients early among Korean online community users, enabling rapid treatment and prevention and thereby helping to promote the mental health of Korean society.
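
As a rough illustration of how the text features above might be combined (hypothetical; the study itself classifies with a KorBERT-based model), the sketch below concatenates a document's text embedding with its sentiment score and a one-hot topic indicator before classification.

```python
# Hypothetical sketch: concatenate text embedding, sentiment score, and topic
# indicator, then classify depression tendency.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_docs, emb_dim, n_topics = 100, 768, 10
text_emb = rng.normal(size=(n_docs, emb_dim))        # stand-in for text embeddings
sentiment = rng.uniform(-1, 1, size=(n_docs, 1))     # dictionary-based sentiment score
topics = np.eye(n_topics)[rng.integers(0, n_topics, n_docs)]  # one-hot LDA topic

X = np.hstack([text_emb, sentiment, topics])
y = rng.integers(0, 2, size=n_docs)                  # toy depression-tendency labels
print(LogisticRegression(max_iter=1000).fit(X[:80], y[:80]).score(X[80:], y[80:]))
```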