• Title/Summary/Keyword: text embedding

Search results: 146 (processing time: 0.031 s)

영상과 문자정보의 통합 부호화에 관한 연구 (A Study on the Integrated Coding of Image and Document Data)

  • 이헌주;박구만;박규태
    • 대한전자공학회논문지
    • /
    • Vol. 26, No. 7
    • /
    • pp.42-49
    • /
    • 1989
  • In this study, we propose a new integrated coding method that can embed document information composed of Hangul and alphanumeric characters into an image. A gray-scale image is requantized to an arbitrary number of levels, each pixel is assigned a corresponding micro-pattern, and the reconstructed image can then be displayed on a binary output device. Character information can be embedded by assigning it to each micro-pattern. Based on this concept, fast encoding and decoding algorithms were implemented and tested. The experiments showed that a $64{\times}64$-pixel image binarized with micro-patterns can carry an average of about 8.5 bits of character information per pixel, i.e., more than 2,000 Hangul characters or 4,000 alphanumeric characters. Using this method, an integrated personal record system for images and documents was implemented.
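The abstract only summarizes the scheme, but its core idea can be sketched: several binary micro-patterns share the same ink density, so the *choice* among them can carry data bits. The 2x2 patterns and bit budget below are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch: embed data bits in the choice of micro-pattern.
# Each gray level maps to several 2x2 binary patterns with the same
# ink density, so the pattern index can carry extra bits.

PATTERNS = {  # gray level -> equivalent 2x2 micro-patterns (same density)
    0: [((0, 0), (0, 0))],
    1: [((1, 0), (0, 0)), ((0, 1), (0, 0)), ((0, 0), (1, 0)), ((0, 0), (0, 1))],
    2: [((1, 1), (0, 0)), ((0, 0), (1, 1)), ((1, 0), (0, 1)), ((0, 1), (1, 0))],
}

def quantize(pixel, levels=3):
    """Requantize an 8-bit pixel to `levels` gray levels."""
    return min(pixel * levels // 256, levels - 1)

def embed(pixel, bits):
    """Pick the micro-pattern whose index encodes the data bits."""
    level = quantize(pixel)
    choices = PATTERNS[level]
    index = bits % len(choices)   # levels with 4 variants carry 2 bits
    return level, choices[index]

def extract(level, pattern):
    """Recover the embedded bits from the chosen pattern."""
    return PATTERNS[level].index(pattern)

level, pat = embed(200, 0b10)
assert extract(level, pat) == 0b10
```

With larger micro-patterns the number of density-equivalent variants grows, which is how a per-pixel budget on the order of the reported 8.5 bits becomes plausible.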


한국도로공사 VOC 데이터를 이용한 토픽 모형 적용 방안 (Application of a Topic Model on the Korea Expressway Corporation's VOC Data)

  • 김지원;박상민;박성호;정하림;윤일수
    • 한국IT서비스학회지
    • /
    • Vol. 19, No. 6
    • /
    • pp.1-13
    • /
    • 2020
  • Recently, about 80% of big data has consisted of unstructured text data. In particular, documents of various types are stored as large-scale unstructured data through social network services (SNS), blogs, news, and the like, which highlights the importance of unstructured data. As the potential uses of unstructured data grow, analysis techniques such as text mining have attracted attention. In this study, topic modeling was therefore applied to the Korea Expressway Corporation's voice of customer (VOC) data, which contains customer opinions and complaints. Currently, VOC data are divided according to the Korea Expressway Corporation's business areas, but the assigned categories are often inaccurate, and ambiguous items are classified as "other". A more systematic and efficient classification method is thus required if VOC data are to be used for service improvement. To this end, this study proposes two approaches: a method using only latent Dirichlet allocation (LDA), the most representative topic modeling technique, and a new method combining LDA with the word embedding technique Word2vec. The results confirmed that VOC categories are classified comparatively well by the new method. Based on these results, it should be possible to derive implications for the Korea Expressway Corporation and use them for service improvement.
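The abstract does not spell out how LDA and Word2vec are combined; one common pairing (a sketch with toy hand-made vectors and topics, not the authors' pipeline) averages word vectors over each LDA topic's top words and assigns a document to the most similar topic:

```python
import math

# Toy word vectors standing in for trained Word2vec embeddings.
VEC = {
    "toll":    [0.9, 0.1, 0.0],
    "payment": [0.8, 0.2, 0.1],
    "pothole": [0.0, 0.9, 0.2],
    "asphalt": [0.1, 0.8, 0.3],
    "card":    [0.7, 0.1, 0.0],
}

def mean_vec(words):
    """Average the vectors of the known words in a bag of words."""
    vecs = [VEC[w] for w in words if w in VEC]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Top words per topic, as LDA would output them (hypothetical topics).
topics = {"billing": ["toll", "payment", "card"],
          "road_damage": ["pothole", "asphalt"]}

def assign_topic(doc_words):
    """Assign a VOC item to the topic with the closest mean vector."""
    doc = mean_vec(doc_words)
    return max(topics, key=lambda t: cosine(doc, mean_vec(topics[t])))

print(assign_topic(["toll", "card"]))  # billing
```

The embedding step is what distinguishes the combined method from plain LDA: items whose exact words never co-occur with a topic's top words can still land in the right category via vector similarity.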

Question Similarity Measurement of Chinese Crop Diseases and Insect Pests Based on Mixed Information Extraction

  • Zhou, Han;Guo, Xuchao;Liu, Chengqi;Tang, Zhan;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 11
    • /
    • pp.3991-4010
    • /
    • 2021
  • Question Similarity Measurement of Chinese Crop Diseases and Insect Pests (QSM-CCD&IP) aims to judge a user's intent when asking questions about input problems. This measurement underlies agricultural knowledge question-and-answering (Q&A) systems, information retrieval, and other tasks. However, the corpora and measurement methods available in this field have deficiencies, and error propagation may occur when general embedding methods ignore word boundary features and local context information, which makes the task challenging. To address these problems, a corpus of Chinese crop diseases and insect pests (CCDIP) containing 13 categories was established. Taking the CCDIP as the research object, this study then proposes a Chinese agricultural text similarity matching model, AgrCQS, based on mixed information extraction. Specifically, a hybrid embedding layer enriches character information and improves the model's recognition of word boundaries; multi-scale local information is extracted by a multi-weight, multi-core convolutional neural network (MM-CNN); and a self-attention mechanism strengthens the fusion of global information. The performance of AgrCQS is verified on the CCDIP and on three benchmark datasets: AFQMC, LCQMC, and BQ. The accuracy rates are 93.92%, 74.42%, 86.35%, and 83.05%, respectively, higher than those of baseline systems that use no external knowledge. Additionally, the proposed modules can be extracted separately and applied to other models, providing a reference for related research.

Size-Independent Caption Extraction for Korean Captions with Edge Connected Components

  • Jung, Je-Hee;Kim, Jaekwang;Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 12, No. 4
    • /
    • pp.308-318
    • /
    • 2012
  • Captions carry information related to the images they accompany, and text extraction methods have been developed to recover it. However, most existing methods apply only to captions with a fixed height or stroke width, because they rely on fixed pixel-size or block-size operators derived from morphological assumptions. We propose an edge-connected-component-based method that can extract Korean captions of various sizes and fonts. We analyze the properties of edge connected components that contain captions and build a decision tree that discriminates caption-bearing components from the rest. The experimental images were collected from broadcast programs, such as documentaries and news, whose captions vary in height and font. We evaluate the proposed method by comparing performance on latent caption area extraction, and the experiments show that it efficiently extracts Korean captions of various sizes.
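The decision-tree features are only named in the abstract; the first stage, labeling edge connected components, can be sketched as a flood fill over a binary edge map. The feature set and the tiny edge map below are illustrative, not the paper's.

```python
def connected_components(grid):
    """Label 4-connected components of 1-pixels in a binary edge map."""
    h, w = len(grid), len(grid[0])
    seen, comps = set(), []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and (sy, sx) not in seen:
                stack, comp = [(sy, sx)], []
                seen.add((sy, sx))
                while stack:                       # iterative flood fill
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def bbox_features(comp):
    """Size-independent features a decision tree might consume."""
    ys = [y for y, _ in comp]
    xs = [x for _, x in comp]
    hgt = max(ys) - min(ys) + 1
    wid = max(xs) - min(xs) + 1
    fill = len(comp) / (hgt * wid)   # edge-pixel density in the box
    return {"aspect": wid / hgt, "fill": fill}

edge_map = [[0, 1, 1, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 1]]
comps = connected_components(edge_map)
print(len(comps), bbox_features(comps[0]))
```

Because ratios such as aspect and fill are invariant to scaling, a classifier built on them does not depend on a fixed caption height, which is the point the abstract emphasizes.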

Discrete Wavelet Transform for Watermarking Three-Dimensional Triangular Meshes from a Kinect Sensor

  • Wibowo, Suryo Adhi;Kim, Eun Kyeong;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 14, No. 4
    • /
    • pp.249-255
    • /
    • 2014
  • We present a simple method for watermarking three-dimensional (3D) triangular meshes generated from the depth data of a Kinect sensor. In contrast to previous methods, which preserve the shape of 3D triangular meshes and choose the embedding location by computing over vertices and their neighbors, our method is based on selecting one of the coordinate axes. To maintain shape, we use the discrete wavelet transform and constant regularization. Because a watermarking system needs information to embed, we use text as the payload. We apply geometry attacks such as rotation, scaling, and translation to test the performance of the watermarking system. The performance parameters are the vertex error rate (VER) and bit error rate (BER), and the results indicate that applying a correction term before the extraction process makes our system robust to geometry attacks.
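The general DWT-domain embedding idea can be illustrated in one dimension with a Haar transform: embed a bit by forcing the sign of a detail coefficient, then invert. This is a minimal sketch, not the paper's method, which operates on a selected axis of 3D mesh coordinates and adds a correction term that is omitted here.

```python
# Minimal 1-D Haar DWT watermark sketch (illustrative only).

def haar(signal):
    """One-level Haar DWT: (approximation, detail) coefficients."""
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def ihaar(approx, detail):
    """Inverse of the transform above."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed_bit(signal, bit, strength=0.5):
    """Force the sign of the first detail coefficient to carry one bit."""
    approx, detail = haar(signal)
    detail[0] = strength if bit else -strength
    return ihaar(approx, detail)

def extract_bit(signal):
    _, detail = haar(signal)
    return 1 if detail[0] > 0 else 0

z = [1.0, 1.2, 0.8, 0.9]        # e.g. one coordinate axis of vertices
marked = embed_bit(z, 1)
assert extract_bit(marked) == 1
```

Even this toy version survives a translation attack, since Haar detail coefficients are differences and a constant offset cancels out; robustness to rotation and scaling is what the paper's correction term addresses.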

A Deep Learning-based Article- and Paragraph-level Classification

  • Kim, Euhee
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 23, No. 11
    • /
    • pp.31-41
    • /
    • 2018
  • Text classification has long been studied in the natural language processing field. In this paper, we propose an article- and paragraph-level genre classification system using Word2Vec-based LSTM, GRU, and CNN models for large-scale English corpora. At both the article and paragraph levels, LSTM achieved the best accuracy, followed by GRU and then CNN. These results confirm that, when pre-trained Word2Vec embeddings are used in both deep learning-based classification tasks, modeling the word sequence information of articles works better than extracting word features from paragraphs.

Analysis of Hip-hop Fashion Codes in Contemporary Chinese Fashion

  • Sen, Bin;Haejung, Yum
    • 패션비즈니스
    • /
    • Vol. 26, No. 6
    • /
    • pp.1-13
    • /
    • 2022
  • The purpose of this study was to determine which fashion codes hip-hop fashion carries in contemporary Chinese fashion, along with the frequency and characteristics of each code. Text mining, the most basic analysis method in big data analytics, was used rather than traditional design element analysis. The specific results were as follows. First, hip-hop initially entered China in the late 1970s, and the most significant historical turning point was the American film "Breakin". Second, frequency and word cloud analyses showed that the "national tide" fashion code was the most notable. Third, word embedding analysis divided the fashion codes into "original hip-hop codes", "trendy hip-hop codes", and "hip-hop codes grafted with traditional Chinese culture".

Fine-tuning BERT Models for Keyphrase Extraction in Scientific Articles

  • Lim, Yeonsoo;Seo, Deokjin;Jung, Yuchul
    • 한국정보기술학회 영문논문지
    • /
    • Vol. 10, No. 1
    • /
    • pp.45-56
    • /
    • 2020
  • Despite extensive research, improving the performance of keyphrase (KP) extraction remains a challenging problem in modern informatics. Recently, deep learning-based supervised approaches have exhibited state-of-the-art accuracy on this problem, and several previously proposed methods utilize Bidirectional Encoder Representations from Transformers (BERT)-based language models. However, few studies have investigated the effective application of BERT-based fine-tuning techniques to KP extraction. In this paper, we consider the problem in the context of scientific articles by investigating the fine-tuning characteristics of two distinct BERT models: BERT (the base model by Google) and SciBERT (a BERT model trained on scientific text). Three datasets from the computer science domain (WWW, KDD, and Inspec) are used to compare the KP-extraction results obtained by fine-tuning BERT and SciBERT.

Profane or Not: Improving Korean Profane Detection using Deep Learning

  • Woo, Jiyoung;Park, Sung Hee;Kim, Huy Kang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 1
    • /
    • pp.305-318
    • /
    • 2022
  • Abusive behavior has become a common issue on many online social media platforms, and profanity is one of its most common forms. Social media platforms operate filtering systems built on lists of popular profane words, but this method can be bypassed with altered spellings and can flag normal sentences as profanity. In Korean in particular, a syllable is composed of graphemes and words are composed of multiple syllables, so a word can be decomposed into graphemes without losing its meaning, and the form of a profane word can take on a different meaning within a sentence. This work focuses on the problem of filtering systems mis-detecting normal phrases as profane. To that end, we propose a deep learning framework comprising grapheme- and syllable-separation-based word embedding and an appropriate CNN structure. The proposed model was evaluated on chat logs from one of the most famous online games in South Korea and achieved 90.4% accuracy.
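The grapheme-separation step the abstract mentions can be illustrated with the standard Unicode arithmetic for precomposed Hangul syllables (this is the well-known decomposition formula, not the authors' code; the embedding and CNN stages are omitted):

```python
# Jamo tables in Unicode order: 19 initials, 21 vowels, 27 finals.
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_graphemes(text):
    """Split precomposed Hangul syllables into their graphemes (jamo)."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:              # precomposed syllable block
            out.append(CHO[code // 588])   # 588 = 21 vowels * 28 finals
            out.append(JUNG[(code % 588) // 28])
            if code % 28:                  # final consonant, if any
                out.append(JONG[code % 28])
        else:
            out.append(ch)                 # pass non-Hangul through
    return out

print("".join(to_graphemes("안녕")))  # ㅇㅏㄴㄴㅕㅇ
```

Feeding the model grapheme sequences like this is what lets it match a profane word even when the writer has re-spelled it with altered syllables.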

Noisy 텍스트 임베딩을 이용한 한국어 감정 분석 (Korean Sentiment Analysis by using Noisy Text Embedding)

  • 이현영;강승식
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리)
    • /
    • 한국정보과학회언어공학연구회 2019년도 제31회 한글 및 한국어 정보처리 학술대회
    • /
    • pp.506-509
    • /
    • 2019
  • Unlike informative text such as newspaper articles or Wikipedia, text expressing human emotion and intent contains various forms of noise. In this paper, we propose a data-driven model that uses an LSTM to summarize the relationship between noise and words into a single vector. Two models were used to represent noisy sentence vectors, a unidirectional LSTM encoder and a bidirectional LSTM encoder, and sentiment analysis experiments were conducted on noisy movie review data. The results showed that the bidirectional LSTM encoder outperformed the unidirectional one.
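The unidirectional/bidirectional contrast the abstract tests can be shown with a toy recurrent cell standing in for the LSTM encoders (the fixed scalar weights below are assumptions for illustration; a real model learns weight matrices over embedded tokens):

```python
import math

def rnn_pass(seq, w_in=0.5, w_rec=0.3):
    """One recurrent pass over a sequence, returning the final state."""
    h = 0.0
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def unidirectional(seq):
    """Sentence vector from a single left-to-right pass."""
    return [rnn_pass(seq)]

def bidirectional(seq):
    """Concatenate forward and backward summaries: the backward pass
    lets early tokens be read in light of late context, which the
    paper credits for the accuracy gain on noisy reviews."""
    return [rnn_pass(seq), rnn_pass(seq[::-1])]

tokens = [0.2, -1.0, 0.7]   # e.g. embedded (noisy) review tokens
assert len(bidirectional(tokens)) == 2 * len(unidirectional(tokens))
```

The downstream sentiment classifier then consumes the (twice as wide) bidirectional sentence vector in place of the unidirectional one.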
