• Title/Summary/Keyword: 텍스트 데이터 (text data)

Search results: 1,750 (processing time: 0.028 seconds)

100 K-Poison: Poisonous Texts Resistance Test Dataset For Korean Generative Models (100 K-Poison: 한국어 생성 모델을 위한 독성 텍스트 저항력 검증 데이터셋)

  • Li Fei;Yejee Kang;Seoyoon Park;Yeonji Jang;Hansaem Kim
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.149-154
    • /
    • 2023
  • To test the toxic-text resistance of Korean generative models, this paper pilots the construction of the '100 K-Poison' dataset, built from 100 high-difficulty toxic question-answer pairs extracted from the 'CVALUE' dataset. Using this dataset, we conducted 'Zero-Shot Text Classification' and 'Text Generation' experiments on four representative Korean generative models, comprehensively examining the current models' ability to identify and respond to toxic text and analyzing the gap in toxic-text resistance between models. We also propose a new 'fight poison with poison' (以毒攻毒) training strategy to further strengthen the toxic-text identification and response performance of Korean generative models.


Interplay of Text Mining and Data Mining for Classifying Web Contents (웹 컨텐츠의 분류를 위한 텍스트마이닝과 데이터마이닝의 통합 방법 연구)

  • 최윤정;박승수
    • Korean Journal of Cognitive Science
    • /
    • v.13 no.3
    • /
    • pp.33-46
    • /
    • 2002
  • Recently, unstructured data such as website logs, free text, and tables have been flooding the internet. Among them are potentially very useful sources, such as bulletin boards and e-mails used for customer service, and the output of search engines. Various text mining tools have been introduced to deal with such data, but most of them lack accuracy compared to traditional data mining tools that handle structured data. Hence, a way to apply data mining techniques to text data has been sought. In this paper, we propose a text mining system that can incorporate existing data mining methods. We use text mining as a preprocessing tool to generate formatted data that serves as input to the data mining system, whose output is in turn fed back to the text mining stage to guide further categorization. This feedback cycle improves the accuracy of the text mining. We apply the method to categorizing web sites containing adult and illegal content, and the results show improved categorization performance on previously ambiguous data.

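The feedback cycle described in this abstract can be sketched as follows. All feature names, keywords, and thresholds below are invented for illustration: text mining turns a raw page into a structured record, a classifier labels it with a confidence score, and low-confidence results feed candidate keywords back for the next mining pass.

```python
# Minimal sketch (hypothetical lexicon and thresholds) of the
# text-mining / data-mining feedback cycle described in the abstract.
from collections import Counter

ADULT_KEYWORDS = {"casino", "adult", "gamble"}  # hypothetical seed lexicon

def text_mine(page: str) -> dict:
    """Text-mining step: raw page -> structured feature record."""
    tokens = page.lower().split()
    counts = Counter(tokens)
    return {
        "n_tokens": len(tokens),
        "n_flagged": sum(counts[k] for k in ADULT_KEYWORDS),
        "tokens": tokens,
    }

def classify(record: dict) -> tuple[str, float]:
    """Data-mining step: label with a confidence score."""
    ratio = record["n_flagged"] / max(record["n_tokens"], 1)
    label = "illegal" if ratio > 0.2 else "normal"
    confidence = abs(ratio - 0.2) * 5  # distance from the decision boundary
    return label, min(confidence, 1.0)

def feedback(record: dict, label: str, confidence: float) -> None:
    """Feedback step: low-confidence 'illegal' pages suggest new keywords."""
    if label == "illegal" and confidence < 0.5:
        for tok in record["tokens"]:
            if tok not in ADULT_KEYWORDS and record["tokens"].count(tok) > 1:
                ADULT_KEYWORDS.add(tok)  # candidate for the next mining pass

page = "casino casino bonus bonus play now win big"
rec = text_mine(page)
label, conf = classify(rec)
feedback(rec, label, conf)
```

In this toy run, the page is flagged as "illegal" with low confidence, so the repeated word "bonus" is added to the lexicon, mirroring the paper's idea that classifier output guides further categorization.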

Design and implementation of workbench for spoken language data acquisition (음성 언어 자료 확보를 위한 Workbench의 설계 및 구현)

  • 김태환
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.375-379
    • /
    • 1998
  • Acquiring and exploiting spoken-language data requires support from a variety of software tools. This paper describes a PC-based Workbench designed and developed in our laboratory. The Workbench consists of text-processing modules for acquiring spoken-language data and signal-processing modules for handling speech data. Its modules include a grapheme-to-phoneme converter that automatically converts text to its spoken form, an utterance-list selection module, a speech-data editing module based on an endpoint detector, a multi-level labeling system, and a phonetic-environment search tool that can retrieve, under various conditions, strings in a text containing a desired phonetic environment.


Text summarization of dialogue based on BERT

  • Nam, Wongyung;Lee, Jisoo;Jang, Beakcheol
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.8
    • /
    • pp.41-47
    • /
    • 2022
  • In this paper, we show how to implement text summarization for colloquial data that is not clearly organized. For this study, the SAMSum dataset, which consists of colloquial data, was used, and the BERTSumExtAbs model proposed in previous work on automatic summarization was applied. More than 70% of the SAMSum dataset consists of conversations between two people, and the remaining 30% of conversations between three or more. Applying the automatic text summarization model to this colloquial data yielded a ROUGE R-1 score of 42.43 or higher, and fine-tuning the previously proposed BERTSum model yielded a high score of 45.81. This study demonstrates the performance of abstractive summarization on colloquial data, and we hope it will serve as a basis for enabling computers to understand natural human language as it is and to solve various tasks.
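The ROUGE R-1 figures reported above (42.43, 45.81) measure unigram overlap between a candidate summary and a reference. A minimal illustration of how R-1 recall, precision, and F1 are computed:

```python
# Illustrative sketch of the ROUGE-1 metric referenced in the abstract:
# clipped unigram overlap between candidate and reference summaries.
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return {"recall": recall, "precision": precision, "f1": f1}

scores = rouge_1("amanda baked cookies", "amanda baked cookies for jerry")
```

Here the candidate covers 3 of the reference's 5 unigrams, giving a recall of 0.6; production evaluations typically use a library implementation with stemming and bootstrapping on top of this core computation.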

Text Classification using Cloze Question based on KorBERT (KorBERT 기반 빈칸채우기 문제를 이용한 텍스트 분류)

  • Heo, Jeong;Lee, Hyung-Jik;Lim, Joon-Ho
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.486-489
    • /
    • 2021
  • This paper introduces a prompt-based classification model that, building on the KorBERT Korean language model, converts text classification into a cloze (fill-in-the-blank) problem and predicts the word that best fits the blank. Head-based classification using the [CLS] token and prompt-based classification reflect the characteristics of the NSP and MLM pre-training objectives, respectively; we compared their performance on text classification tasks divided into semantic/structural analysis and semantic inference. For the semantic/structural analysis experiments we used the KLUE semantic textual similarity and topic classification datasets, and for the semantic inference experiments the KLUE natural language inference dataset. The experiments show that prompt-based classification, which reflects the MLM objective, performs better on the semantic similarity and topic classification tasks, while head-based classification, which reflects the NSP objective, performs better on the natural language inference task.

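The cloze formulation described above can be sketched as follows. The template, label-to-word "verbalizer" mapping, and scoring function are all assumptions for illustration: a real system would score verbalizer words with KorBERT's MLM head, whereas here the scorer is a stand-in keyword heuristic so the sketch stays self-contained.

```python
# Hypothetical sketch of prompt/cloze-based classification: wrap the
# input in a template with a [MASK] slot, map each label to a verbalizer
# word, and pick the label whose word scores highest at the mask.
TEMPLATE = "{text} 주제: [MASK]"                      # assumed template form
VERBALIZER = {"sports": "스포츠", "economy": "경제"}  # hypothetical mapping

def mlm_score(prompt: str, word: str) -> float:
    """Stand-in for KorBERT's MLM probability of `word` at [MASK]."""
    return 1.0 if word == "스포츠" and "축구" in prompt else 0.1

def classify_cloze(text: str) -> str:
    prompt = TEMPLATE.format(text=text)
    scores = {label: mlm_score(prompt, w) for label, w in VERBALIZER.items()}
    return max(scores, key=scores.get)

label = classify_cloze("한국 축구 대표팀이 승리했다")
```

The design point is that classification reuses the MLM pre-training objective directly, which is why the paper finds this formulation strongest on semantically oriented tasks.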

Feature selection for text data via sparse principal component analysis (희소주성분분석을 이용한 텍스트데이터의 단어선택)

  • Won Son
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.6
    • /
    • pp.501-514
    • /
    • 2023
  • When analyzing high dimensional data such as text data, if we input all the variables as explanatory variables, statistical learning procedures may suffer from over-fitting problems. Furthermore, computational efficiency can deteriorate with a large number of variables. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. The sparse principal component analysis (SPCA) is one of the regularized least squares methods which employs an elastic net-type objective function. The SPCA can be used to remove insignificant principal components and identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on the SPCA. Applying the proposed procedure to real data, we find that the reduced feature set maintains sufficient information in text data while the size of the feature set is reduced by removing redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for some classifiers such as the k-nearest neighbors algorithm.
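The selection step at the heart of this procedure can be sketched without a full SPCA solver: once the elastic-net penalty has driven many loadings exactly to zero, the retained vocabulary is simply the set of words with a nonzero loading on some sparse principal component. The loading values below are invented for illustration; a real pipeline would obtain them from an SPCA fit.

```python
# Sketch of SPCA-based word selection: keep every word that carries a
# nonzero loading on at least one sparse principal component.
vocab = ["price", "market", "goal", "match", "the", "of"]
# rows = sparse principal components, columns = words (hypothetical values)
loadings = [
    [0.7, 0.6, 0.0, 0.0, 0.0, 0.0],   # an "economy-like" component
    [0.0, 0.0, 0.8, 0.5, 0.0, 0.0],   # a "sports-like" component
]

def select_words(vocab, loadings, tol=1e-8):
    keep = set()
    for component in loadings:
        for j, weight in enumerate(component):
            if abs(weight) > tol:
                keep.add(vocab[j])
    return sorted(keep)

selected = select_words(vocab, loadings)
```

Uninformative words such as "the" receive zero loadings and drop out, which is exactly how the procedure shrinks the feature set while retaining the components' information.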

Data Transition Minimization Algorithm for Text Image (텍스트 영상에 대한 데이터 천이 최소화 알고리즘)

  • Hwang, Bo-Hyun;Park, Byoung-Soo;Choi, Myung-Ryul
    • Journal of Digital Convergence
    • /
    • v.10 no.11
    • /
    • pp.371-376
    • /
    • 2012
  • In this paper, we propose a new data coding method and circuits for minimizing data transitions in text images. The proposed circuits solve the synchronization problem between input and output data in the modified LVDS algorithm. The proposed algorithm transmits two data signals through an additional serial data coding method in order to minimize data transitions in text images, and can reduce the operating frequency by half. Thus, it can mitigate EMI (electromagnetic interference) and reduce power consumption. Simulation results show that the proposed algorithm and circuits provide enhanced data transition minimization for text images and solve the input-output synchronization problem.
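The abstract does not spell out its coding scheme, so the following is a generic data-bus-inversion-style sketch of the transition-minimization idea only, not the paper's modified-LVDS method: if sending the next byte would toggle more than half of the 8 data lines relative to the previous byte, the complement is sent instead along with a 1-bit inversion flag.

```python
# Generic transition-minimization sketch (data-bus-inversion style):
# invert a byte when it would cause more than 4 of 8 lines to toggle.
def transitions(prev: int, curr: int) -> int:
    """Number of bit positions that toggle between two bytes."""
    return bin((prev ^ curr) & 0xFF).count("1")

def encode(stream):
    prev = 0
    out = []  # (byte_on_bus, inverted_flag) pairs
    for byte in stream:
        if transitions(prev, byte) > 4:
            byte = ~byte & 0xFF     # inversion caps transitions at 4
            out.append((byte, 1))
        else:
            out.append((byte, 0))
        prev = byte
    return out

coded = encode([0b00000000, 0b11111111, 0b11110000])
```

In the example, the all-ones byte would toggle all 8 lines, so it is sent inverted (as 0x00 with flag 1), and no step on the bus ever exceeds 4 transitions.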

A Machine Learning Based Facility Error Pattern Extraction Framework for Smart Manufacturing (스마트제조를 위한 머신러닝 기반의 설비 오류 발생 패턴 도출 프레임워크)

  • Yun, Joonseo;An, Hyeontae;Choi, Yerim
    • The Journal of Society for e-Business Studies
    • /
    • v.23 no.2
    • /
    • pp.97-110
    • /
    • 2018
  • With the advent of the fourth industrial revolution, manufacturing companies have a growing interest in realizing smart manufacturing by utilizing their accumulated facility data. However, most previous research has dealt with structured data such as sensor signals, and little has focused on unstructured data such as text, which actually comprises a large portion of the accumulated data. Therefore, we propose an association rule mining based facility error pattern extraction framework in which text data written by operators is analyzed. Specifically, phrases were extracted and used as the unit of text analysis, since a single word, the unit normally used, cannot convey the technical meaning of a facility error. The performance of the proposed framework was evaluated on a real-world case, and we expect that the productivity of manufacturing companies will be enhanced by adopting it.
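The association-rule step over extracted phrases can be sketched as follows. The log records and phrases are invented for illustration: each log is treated as a transaction of phrases, and a rule {phrase} → {phrase} is scored by its support and confidence.

```python
# Hedged sketch of association rule mining over operator-log phrases:
# support = P(antecedent and consequent), confidence = P(consequent | antecedent).
logs = [
    {"conveyor jam", "belt misaligned"},
    {"conveyor jam", "belt misaligned"},
    {"conveyor jam", "sensor fault"},
    {"sensor fault"},
]

def rule_metrics(antecedent: str, consequent: str, transactions):
    n = len(transactions)
    both = sum(1 for t in transactions if antecedent in t and consequent in t)
    ante = sum(1 for t in transactions if antecedent in t)
    support = both / n
    confidence = both / ante if ante else 0.0
    return support, confidence

support, confidence = rule_metrics("conveyor jam", "belt misaligned", logs)
```

Here the rule "conveyor jam → belt misaligned" holds in 2 of 4 logs (support 0.5) and in 2 of the 3 jam reports (confidence ≈ 0.67); a full miner such as Apriori enumerates and thresholds many such rules.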

Text Augmentation Using Hierarchy-based Word Replacement

  • Kim, Museong;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.57-67
    • /
    • 2021
  • Recently, multi-modal deep learning techniques that combine heterogeneous data for analysis have been widely used. In particular, studies on text-to-image synthesis, which automatically generates images from text, are being actively conducted. Deep learning for image synthesis requires a vast amount of data consisting of pairs of images and text describing them, so various data augmentation techniques have been devised to generate large datasets from small ones. A number of text augmentation techniques based on synonym replacement have been proposed, but they share a limitation: replacing a noun with a synonym may yield text that is inconsistent with the content of the paired image. In this study, we propose a text augmentation method that replaces noun words using word hierarchy information. We evaluated the performance of the proposed method in experiments on the MSCOCO dataset.
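The replacement idea can be sketched as follows. The toy hierarchy below is invented; the point is that moving a noun to its parent (hypernym) generalizes the sentence rather than risking a wrong synonym, so the text stays consistent with the paired image.

```python
# Sketch of hierarchy-based word replacement: swap a noun for its
# hypernym in a (hypothetical) word hierarchy instead of a synonym.
HYPERNYM = {          # child -> parent (invented fragment)
    "beagle": "dog",
    "dog": "animal",
    "sedan": "car",
}

def augment(sentence: str) -> str:
    """Replace every token that has a hypernym with that hypernym."""
    return " ".join(HYPERNYM.get(tok, tok) for tok in sentence.split())

augmented = augment("a beagle chases a sedan")
```

A caption about a beagle remains true of the image when generalized to "dog", whereas a synonym-level swap (e.g. to a different breed) could contradict it.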

Multi-modal Image Processing for Improving Recognition Accuracy of Text Data in Images (이미지 내의 텍스트 데이터 인식 정확도 향상을 위한 멀티 모달 이미지 처리 프로세스)

  • Park, Jungeun;Joo, Gyeongdon;Kim, Chulyun
    • Database Research
    • /
    • v.34 no.3
    • /
    • pp.148-158
    • /
    • 2018
  • Optical character recognition (OCR) is a technique for extracting and recognizing text in images. It is an important preprocessing step in data analysis, since much real-world text information is embedded in images. Many OCR engines achieve high recognition accuracy on images where the text is clearly separable from the background, such as black lettering on a white background, but accuracy drops on images where the text is not easily separable from a complex background. To address this, the input image must be transformed to make the text more salient. In this paper, we propose a method that segments the input image into text lines so that OCR engines can recognize each line more efficiently, and that determines the final output by comparing the recognition rates of a CLAHE module and a Two-step module, both of which separate text from background using image processing techniques. In thorough experiments against the well-known OCR engines Tesseract and Abbyy, the proposed method achieves the best recognition accuracy on images with complex backgrounds.
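The line-segmentation idea can be sketched with a horizontal projection profile. This is a generic illustration, not the paper's exact pipeline, and the CLAHE / Two-step contrast modules are not reproduced: in a binarized image (1 = text pixel), rows whose pixel sum is zero separate text lines, so runs of non-empty rows give the line regions handed to the OCR engine.

```python
# Sketch of projection-profile line segmentation on a binarized image.
def segment_lines(image):
    """Return (start_row, end_row) pairs for each text line."""
    profile = [sum(row) for row in image]     # horizontal projection
    lines, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i                         # a text line begins
        elif v == 0 and start is not None:
            lines.append((start, i - 1))      # the line just ended
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # line 1
    [0, 0, 0, 0],
    [1, 1, 0, 0],   # line 2
    [1, 0, 1, 0],   # line 2 (continued)
]
regions = segment_lines(img)
```

Feeding each region to the OCR engine separately, as the paper proposes, keeps complex background elsewhere in the image from degrading recognition of a given line.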