• Title/Summary/Keyword: 대규모 언어모델 (large language models)

Search results: 90 (processing time: 0.04 seconds)

A Named Entity Recognition Platform Based on Semi-Automatically Built NE-annotated Corpora and KoBERT (반자동구축된 개체명 주석코퍼스 DecoNAC과 KoBERT를 이용한 개체명인식 플랫폼 DecoNERO)

  • Kim, Shin-Woo;Hwang, Chang-Hoe;Yoon, Jeong-Woo;Lee, Seong-Hyeon;Choi, Soo-Won;Nam, Jee-Sun
    • Annual Conference on Human and Language Technology / 2020.10a / pp.304-309 / 2020
  • This study introduces DecoNERO, a named entity recognition platform built on DecoNAC, an NE-annotated corpus constructed semi-automatically from the Korean electronic dictionary DECO (Dictionnaire Electronique du COreen) and a Local-Grammar Graph (LGG) framework that describes multi-word-expression (MWE) named entities as partial patterns. The corpus is used both for named entity analysis and as domain-specific training data for machine learning. Machine learning methods recently reported to perform well require large-scale training data spanning diverse domains, and this study proposes a methodology for semi-automatically generating such data from a carefully designed named entity dictionary and language resources for multi-word named entity sequences. To test the performance of the proposed DecoNAC-based approach, experiments were conducted on online news article text. Applying DecoNAC yielded a performance improvement of about 7.49% over recognizing named entities with the KoBERT model alone.
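
The semi-automatic annotation step above, projecting dictionary and local-grammar matches onto raw text as labels, can be sketched roughly as follows. This is a minimal illustration of dictionary-driven BIO pre-annotation, not the DECO/LGG implementation; the entity dictionary, tags, and tokenization are invented for the example.

```python
# Minimal sketch of dictionary-based BIO pre-annotation.
# The entity dictionary and tags are hypothetical examples, not DECO entries.
entity_dict = {
    "서울대학교": "ORG",
    "이순신": "PER",
}

def annotate_bio(tokens):
    """Project longest dictionary matches onto a token list as BIO labels."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        # Try the longest span starting at i that matches a dictionary entry.
        for j in range(len(tokens), i, -1):
            surface = "".join(tokens[i:j])
            if surface in entity_dict:
                tag = entity_dict[surface]
                labels[i] = f"B-{tag}"
                for k in range(i + 1, j):
                    labels[k] = f"I-{tag}"
                i = j
                break
        else:
            i += 1
    return list(zip(tokens, labels))

print(annotate_bio(["이순신", "장군", "은", "서울", "대학교", "에서"]))
```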

Vocabulary Recognition Retrieval Optimized System using MLHF Model (MLHF 모델을 적용한 어휘 인식 탐색 최적화 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.217-223 / 2009
  • The vocabulary recognition system of a mobile terminal performs vocabulary recognition with statistical methods, using a statistical grammar recognition system based on N-grams. As the vocabulary grows, limited arithmetic processing capacity and memory make processing infeasible: the vocabulary recognition algorithm becomes complicated and requires a large search space and long processing times. This study proposes vocabulary recognition optimization using the MLHF system. MLHF separates acoustic search from lexical search using FLaVoR: acoustic search extracts feature vectors from the speech signal using HMMs, and lexical search performs recognition using the Levenshtein distance algorithm. The resulting system achieved a vocabulary-dependent recognition rate of 98.63%, a vocabulary-independent recognition rate of 97.91%, and a recognition speed of 1.61 seconds.
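
The lexical search stage described above ranks vocabulary entries by edit distance against the acoustic hypothesis. A minimal sketch of Levenshtein-based lookup follows; the vocabulary and query are made-up examples, not the paper's system.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def lexical_search(hypothesis: str, vocabulary):
    """Return the vocabulary entry closest to the acoustic hypothesis."""
    return min(vocabulary, key=lambda w: levenshtein(hypothesis, w))

print(lexical_search("recogniton", ["recognition", "navigation", "dictation"]))
```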

Large-Scale Hangul Font Recognition Using Deep Learning (딥러닝을 이용한 대규모 한글 폰트 인식)

  • Yang, Jin-Hyeok;Kwak, Hyo-Bin;Kim, In-Jung
    • Korean Language Information Society: Conference Proceedings / 2017.10a / pp.8-12 / 2017
  • In this study, deep learning was used to recognize as many as 3,300 different Hangul fonts. Fonts are an essential element of design and are culturally important. Because Hangul contains far more characters than English-family scripts, Hangul font recognition is harder than font recognition for English. This study performs Hangul font recognition with a CNN, which has recently shown strong performance across many image recognition tasks. Most past font recognition research targeted only a few dozen fonts; results on large-scale recognition of more than 2,000 fonts have appeared only recently, and they mainly target English-family scripts with small character sets. Here, font recognizers were built with two CNN architectures and compared experimentally. In particular, the model was refined to recognize the 3,300 Hangul fonts effectively while reducing training time and parameter count and simplifying the structure. The proposed model achieved a top-1 recognition rate of 94.55% and a top-5 recognition rate of 99.91% on 3,300 Hangul fonts.
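
As a rough illustration of a compact CNN classifier over a very large number of font classes, a sketch follows. The layer sizes are placeholders; only the 3,300-way output reflects the abstract, and this is not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class FontCNN(nn.Module):
    """A deliberately small CNN for many-class font recognition (sketch only)."""
    def __init__(self, num_fonts=3300):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # keeps parameter count independent of input size
        )
        self.classifier = nn.Linear(128, num_fonts)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = FontCNN()(torch.randn(8, 1, 64, 64))  # 8 grayscale glyph images
print(logits.shape)  # torch.Size([8, 3300])
```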

A Comparative Research on End-to-End Clinical Entity and Relation Extraction using Deep Neural Networks: Pipeline vs. Joint Models (심층 신경망을 활용한 진료 기록 문헌에서의 종단형 개체명 및 관계 추출 비교 연구 - 파이프라인 모델과 결합 모델을 중심으로 -)

  • Sung-Pil Choi
    • Journal of the Korean Society for Library and Information Science / v.57 no.1 / pp.93-114 / 2023
  • Information extraction can facilitate intensive analysis of documents by providing semantic triples that consist of named entities and the relations recognized between them in text. However, most research so far has treated named entity recognition and relation extraction as separate, individual studies, so entire information extraction systems have not been evaluated properly end to end. This paper introduces two models of end-to-end information extraction that can extract various entity names from clinical records together with their relationships in the form of semantic triples, namely a pipeline model and a joint model, and compares their performance in depth. The pipeline model consists of an entity recognition sub-system based on bidirectional GRU-CRFs and a relation extraction module using a multiple encoding scheme, whereas the joint model was implemented as a single bidirectional GRU-CRF equipped with a multi-head labeling method. In experiments using i2b2/VA 2010, the performance of the pipeline model was 5.5% higher (F-measure). In addition, a comparative experiment with existing state-of-the-art systems that use large-scale neural language models and manually constructed features established the objective performance level of the end-to-end models implemented in this paper.
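
A skeletal bidirectional GRU-CRF tagger, the component both models above are built from, might look like the following sketch. It assumes the third-party pytorch-crf package; dimensions and tag counts are placeholders, and the paper's multi-head labeling and multiple-encoding details are not reproduced here.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party: pip install pytorch-crf

class BiGRUCRF(nn.Module):
    """Bidirectional GRU encoder with a CRF output layer (sketch)."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions = self.proj(self.gru(self.emb(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def predict(self, tokens, mask):
        emissions = self.proj(self.gru(self.emb(tokens))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequences
```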

An Analysis of Fuzzy Survey Data Based on the Maximum Entropy Principle (최대 엔트로피 분포를 이용한 퍼지 관측데이터의 분석법에 관한 연구)

  • 유재휘;유동일
    • Journal of the Korea Society of Computer and Information / v.3 no.2 / pp.131-138 / 1998
  • In typical statistical data analysis, we describe statistical data with exact values. In modern complex, large-scale systems, however, it is difficult to treat the systems using exact data alone. In this paper, we define such data as fuzzy data (i.e., linguistic variables used to construct membership functions) and propose a new method for analyzing fuzzy survey data based on the maximum entropy principle. We also propose a discrimination method that measures the distance between the distribution of the stable state and the estimated distribution of the present state using Kullback-Leibler information. Furthermore, we investigate the validity of our method through computer simulations under realistic situations.
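
The Kullback-Leibler discrimination step can be illustrated with a small numeric example; the two distributions below are invented for demonstration and are not the paper's data.

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

stable = [0.25, 0.25, 0.25, 0.25]   # reference (maximum-entropy) state
current = [0.40, 0.30, 0.20, 0.10]  # estimated present state

# A larger divergence means the present state is farther from the stable one.
print(kl_divergence(current, stable))
```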

Resampling Feedback Documents Using Overlapping Clusters (중첩 클러스터를 이용한 피드백 문서의 재샘플링 기법)

  • Lee, Kyung-Soon
    • The KIPS Transactions: Part B / v.16B no.3 / pp.247-256 / 2009
  • Typical pseudo-relevance feedback methods assume the top-retrieved documents are relevant and use these pseudo-relevant documents to expand terms. The initial retrieval set can, however, contain a great deal of noise. In this paper, we present a cluster-based resampling method to select better pseudo-relevant documents based on the relevance model. The main idea is to use document clusters to find dominant documents for the initial retrieval set, and to repeatedly feed the documents to emphasize the core topics of a query. Experimental results on large-scale web TREC collections show significant improvements over the relevance model. For justification of the resampling approach, we examine relevance density of feedback documents. The resampling approach shows higher relevance density than the baseline relevance model on all collections, resulting in better retrieval accuracy in pseudo-relevance feedback. This result indicates that the proposed method is effective for pseudo-relevance feedback.
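
The resampling idea, scoring the initially retrieved documents by how often they fall into overlapping nearest-neighbor clusters and feeding the dominant ones back, can be sketched as follows; the vectors, cluster size, and feedback count are illustrative, not the paper's configuration.

```python
import numpy as np

def resample_feedback(doc_vectors, cluster_size=3, num_feedback=2):
    """Form one overlapping cluster per document from its nearest neighbors,
    then pick the documents appearing in the most clusters as feedback docs."""
    X = np.asarray(doc_vectors, float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # for cosine similarity
    sims = X @ X.T
    counts = np.zeros(len(X), int)
    for i in range(len(X)):
        cluster = np.argsort(-sims[i])[:cluster_size]  # doc i plus neighbors
        counts[cluster] += 1
    return np.argsort(-counts)[:num_feedback]  # most "dominant" documents

vecs = np.random.rand(10, 50)  # stand-in for top-10 retrieved doc vectors
print(resample_feedback(vecs))
```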

Fine-tuning Method to Improve Sentiment Classification Performance of Review Data (리뷰 데이터 감성 분류 성능 향상을 위한 Fine-tuning 방법)

  • Jung Il Park;Myung Jin Lim;Pan Koo Kim
    • Smart Media Journal / v.13 no.6 / pp.44-53 / 2024
  • Companies in modern society increasingly recognize sentiment classification as a crucial task, since accurately understanding consumer opinions across platforms such as social media, product reviews, and customer feedback is important for competitive success. Sentiment classification is studied extensively because it helps improve products or services by identifying the diverse opinions and emotions of consumers. In sentiment classification, fine-tuning with large-scale datasets and pre-trained language models is essential for enhancing performance. Recent advancements in artificial intelligence have led to high-performing sentiment classification models, with the ELECTRA model standing out due to its efficient learning methods and minimal computing resource requirements. Therefore, this paper proposes a method to enhance sentiment classification performance through efficient fine-tuning of various datasets using KoELECTRA, a model trained specifically for Korean.
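
A minimal fine-tuning step with KoELECTRA via Hugging Face transformers might look like the sketch below; the model id, toy reviews, and hyperparameters are assumptions based on the public KoELECTRA release, not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Model id is an assumption based on the public KoELECTRA release.
name = "monologg/koelectra-base-v3-discriminator"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy review batch; real fine-tuning would iterate over a labeled dataset.
texts = ["배송이 빠르고 좋아요", "품질이 너무 실망스럽습니다"]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # cross-entropy over 2 classes
loss.backward()
optimizer.step()
print(float(loss))
```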

Development of Dental Consultation Chatbot using Retrieval Augmented LLM (검색 증강 LLM을 이용한 치과 상담용 챗봇 개발)

  • Jongjin Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.2 / pp.87-92 / 2024
  • In this paper, a RAG system was implemented using an existing Large Language Model (LLM) and the LangChain library to develop a dental consultation chatbot. For this purpose, we collected content from the web bulletin boards of domestic dental university hospitals and constructed consultation data under the advice and supervision of dental specialists. To split the input consultation data into appropriately sized pieces, the chunk size and the size of the overlapping text in each chunk were set to 1001 and 100, respectively. In the simulation, the retrieval-augmented LLM retrieved and output the consultation content most similar to the user input. It was confirmed that the chatbot can improve both the accessibility of dental consultation and the accuracy of consultation content.
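
The chunking-and-retrieval side of such a RAG pipeline can be sketched with LangChain roughly as follows, using the chunk settings reported above. The input file, embedding model, and query are placeholders, and package layout varies across LangChain versions.

```python
# Sketch of the retrieval side of a RAG pipeline with the chunking values
# reported in the abstract; import paths vary across LangChain versions.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings

splitter = RecursiveCharacterTextSplitter(chunk_size=1001, chunk_overlap=100)
# "consultations.txt" is a placeholder for the collected consultation data.
chunks = splitter.split_text(open("consultations.txt", encoding="utf-8").read())

# Embed chunks and index them; the embedding model is a placeholder choice.
store = FAISS.from_texts(chunks, HuggingFaceEmbeddings())
docs = store.similarity_search("임플란트 시술 후 주의사항은?", k=3)
for d in docs:
    print(d.page_content[:80])
```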

A Study on the Web Building Assistant System Using GUI Object Detection and Large Language Model (웹 구축 보조 시스템에 대한 GUI 객체 감지 및 대규모 언어 모델 활용 연구)

  • Hyun-Cheol Jang;Hyungkuk Jang
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.830-833 / 2024
  • As Large Language Models (LLMs) like OpenAI's ChatGPT[1] continue to grow in popularity, new applications and services are expected to emerge. This paper introduces an experimental study on a smart web-builder application assistance system that combines computer vision for GUI object recognition with ChatGPT (an LLM). The research strategy employed computer vision technology in conjunction with the design strategy of Microsoft's "ChatGPT for Robotics: Design Principles and Model Abilities"[2]. The research also explores the capabilities of LLMs like ChatGPT across various application design tasks, specifically in assisting with web-builder tasks. The study examines the ability of ChatGPT to synthesize code through both directed prompts and free-form conversation strategies, and explores its ability to perform various tasks within the builder domain, including function and closed-loop inference and basic logical and mathematical reasoning. Overall, this research proposes an efficient way to perform application system tasks by combining natural language commands with computer vision technology and an LLM (ChatGPT), allowing users to build applications through natural language interaction.
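
One plausible shape for the combination described above, detected GUI objects plus a natural-language command sent to an LLM, is sketched below; the detector function, model name, and prompt are hypothetical, not the paper's implementation.

```python
# Sketch: pass detected GUI objects plus a natural-language command to an LLM.
# The detector function and model name are placeholders, not from the paper.
from openai import OpenAI

def detect_gui_objects(screenshot_path):
    """Placeholder for a computer-vision GUI object detector."""
    return [{"type": "button", "label": "Submit", "bbox": [40, 300, 120, 330]}]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
objects = detect_gui_objects("builder_screen.png")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You generate web-builder actions as JSON."},
        {"role": "user", "content": f"Detected objects: {objects}\n"
                                    f"Command: move the Submit button to the footer"},
    ],
)
print(response.choices[0].message.content)
```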

An Automated Approach for Exception Suggestion in Python-based AI Projects (Python 기반 AI 프로젝트에서 예외 제안을 위한 자동화 접근 방식)

  • Kang, Mingu;Kim, Suntae;Ryu, Duksan
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.4 / pp.73-79 / 2022
  • The Python language, widely used in artificial intelligence (AI) projects, is an interpreted language, and errors occur at runtime. To prevent project failure due to errors, code that can produce exceptional situations needs exception handling in advance. In particular, in AI projects that require substantial resources, exceptions that occur after long execution waste a large amount of them. However, since exception handling depends on the developer's experience, developers have difficulty determining the appropriate exceptions to catch. To address this need, we propose an approach that learns from existing exception handling statements and recommends exceptions to catch during development. The proposed method receives the source code of a try block as input and recommends exceptions to be handled in the except block. We evaluate our approach on a large project consisting of two frameworks. According to our evaluation results, the average AUPRC is 0.92 or higher for exception recommendation. The results show that the proposed method can support developers' exception handling with recommendation performance that outperforms the comparative models.
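
The training signal for such a recommender can be mined from existing code. The sketch below (Python 3.9+ for ast.unparse) collects pairs of try-block source and the exception types handled, the kind of data the approach above learns from; the example snippet is invented.

```python
import ast

def mine_exception_pairs(source: str):
    """Collect (try-block source, handled exception names) pairs from code."""
    tree = ast.parse(source)
    pairs = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Try):
            body = "\n".join(ast.unparse(stmt) for stmt in node.body)
            handled = [ast.unparse(h.type) for h in node.handlers if h.type]
            pairs.append((body, handled))
    return pairs

# Invented example input; a real run would walk a project's source files.
example = """
try:
    model = torch.load(path)
except (FileNotFoundError, RuntimeError) as e:
    log(e)
"""
print(mine_exception_pairs(example))
```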