• Title/Summary/Keyword: large-language model

266 search results

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata, v.7 no.2, pp.11-29, 2022
  • Pre-trained language models are among the most important and widely used tools in natural language processing. Because they have been pre-trained on a large corpus, high performance can be expected even after fine-tuning on a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and duration of natural language processing projects have been greatly reduced. Transformer variants are the most representative pre-trained language models that provide these advantages, and they are also being actively applied in other fields such as computer vision and audio. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper defines the language model and the pre-trained language model, and reviews the development of pre-trained language models, focusing on representative Transformer variants.
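
As a hedged illustration of the workflow this survey describes, the sketch below fine-tunes a pre-trained Transformer with the Hugging Face transformers library, where the distributed tokenizer and pre-trained weights come as a pair. The model name and two-example "dataset" are placeholders, not taken from the paper.

```python
# A minimal fine-tuning sketch: pre-trained tokenizer and weights are loaded
# together, and only a small labeled set is needed on top of them.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"   # any Transformer variant works here
tokenizer = AutoTokenizer.from_pretrained(model_name)        # pre-trained tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)                 # pre-trained weights + new task head

# Tiny illustrative "fine-tuning set"; real tasks use a few thousand examples.
texts, labels = ["great movie", "terrible movie"], [1, 0]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=torch.tensor(labels)).loss
loss.backward()                               # one gradient step of fine-tuning
optimizer.step()
```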

Similar Contents Recommendation Model Based On Contents Meta Data Using Language Model (언어모델을 활용한 콘텐츠 메타 데이터 기반 유사 콘텐츠 추천 모델)

  • Donghwan Kim
    • Journal of Intelligence and Information Systems, v.29 no.1, pp.27-40, 2023
  • With the increased spread of smart devices and the impact of COVID-19, media consumption through smart devices has grown significantly. Along with this trend, the amount of media viewed through OTT platforms is increasing, which makes content recommendation on these platforms more important. Previous content-based recommendation research has mostly used metadata describing the characteristics of the contents, and few studies have exploited the contents' own descriptive metadata. In this paper, various text data describing the contents, including titles and synopses, were used to recommend similar contents. KLUE-RoBERTa-large, a high-performing Korean language model, was trained on the text data. A dataset of over 20,000 items of content metadata, including titles, synopses, composite genres, directors, actors, and hash tags, was used as training data. To feed the various text features into the language model, the features were concatenated using special tokens that indicate each feature. The test set was designed for relative and objective evaluation of the model's similarity classification ability, using a three-contents comparison method and multiple rounds of inspection during labeling. Genre classification and hash tag classification prediction tasks were used to fine-tune the embeddings of the content meta-text data. As a result, the hash tag classification model achieved over 90% accuracy on the similarity test set, more than 9 percentage points above the baseline language model. Hash tag classification training thus improved the language model's ability to identify similar contents, demonstrating the value of using a language model for content-based filtering.
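
The feature-concatenation scheme the abstract mentions can be sketched as follows. The marker tokens ([TITLE], [SYNOPSIS], [GENRE], [TAG]) are hypothetical stand-ins, since the paper does not list the exact special tokens it used, and the mean-pooled embedding is one common choice rather than the paper's stated method.

```python
# Sketch: prefix each metadata field with a marker token, encode the joined
# text with KLUE-RoBERTa, and compare contents by embedding similarity.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
model = AutoModel.from_pretrained("klue/roberta-large")
markers = {"additional_special_tokens": ["[TITLE]", "[SYNOPSIS]", "[GENRE]", "[TAG]"]}
tokenizer.add_special_tokens(markers)
model.resize_token_embeddings(len(tokenizer))   # make room for the new tokens

def embed(meta: dict) -> torch.Tensor:
    """Concatenate metadata fields with marker tokens and mean-pool the output."""
    text = (f"[TITLE] {meta['title']} [SYNOPSIS] {meta['synopsis']} "
            f"[GENRE] {meta['genre']} [TAG] {meta['tags']}")
    batch = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

a = embed({"title": "...", "synopsis": "...", "genre": "drama", "tags": "family"})
b = embed({"title": "...", "synopsis": "...", "genre": "drama", "tags": "romance"})
similarity = torch.cosine_similarity(a, b, dim=0)  # content-based similarity score
```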

Domain Adaptive Fruit Detection Method based on a Vision-Language Model for Harvest Automation (작물 수확 자동화를 위한 시각 언어 모델 기반의 환경적응형 과수 검출 기술)

  • Changwoo Nam;Jimin Song;Yongsik Jin;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications, v.19 no.2, pp.73-81, 2024
  • Recently, mobile manipulators have been used in the agriculture industry for weed removal and harvest automation. This paper proposes a domain adaptive fruit detection method for harvest automation that utilizes OWL-ViT, an open-vocabulary object detection model. Because this vision-language model detects objects from text prompts, it can be extended to objects of undefined categories. In developing deep learning models for real-world problems, constructing a large-scale labeled dataset is time-consuming and relies heavily on human effort. To reduce this labor-intensive workload, we used a large-scale public dataset as source domain data and employed a domain adaptation method. Adversarial learning was conducted between a domain discriminator and the feature extractor to reduce the gap between the feature distributions of the source domain and our target domain data. We collected a target domain dataset in a realistic environment and conducted experiments to demonstrate the effectiveness of the proposed method. In the experiments, domain adaptation improved the AP50 metric from 38.88% to 78.59% for objects within a range of 2 m, and we achieved a manipulation success rate of 81.7%.
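
The adversarial step described above is commonly implemented with a gradient-reversal layer; the sketch below shows that mechanism on toy feature vectors. The layer sizes and random batches are illustrative and unrelated to the OWL-ViT architecture or the paper's datasets.

```python
# Adversarial domain adaptation sketch: a gradient-reversal layer trains the
# feature extractor to fool a domain discriminator, aligning source- and
# target-domain feature distributions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # flip gradients flowing to the extractor

feature_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
domain_discriminator = nn.Sequential(nn.Linear(64, 1))
criterion = nn.BCEWithLogitsLoss()

src, tgt = torch.randn(8, 128), torch.randn(8, 128)  # stand-in feature batches
feats = feature_extractor(torch.cat([src, tgt]))
domain_logits = domain_discriminator(GradReverse.apply(feats, 1.0))
domain_labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])  # 0=src, 1=tgt
loss = criterion(domain_logits, domain_labels)
loss.backward()  # discriminator learns domains; extractor learns to hide them
```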

Development of a Regulatory Q&A System for KAERI Utilizing Document Search Algorithms and Large Language Model (거대언어모델과 문서검색 알고리즘을 활용한 한국원자력연구원 규정 질의응답 시스템 개발)

  • Hongbi Kim;Yonggyun Yu
    • Journal of Korea Society of Industrial Information Systems, v.28 no.5, pp.31-39, 2023
  • The evolution of natural language processing (NLP) and the rise of large language models (LLMs) like ChatGPT have paved the way for specialized question-answering (QA) systems tailored to specific domains. This study describes a system that harnesses an LLM in conjunction with document search algorithms to interpret and answer user inquiries using documents from the Korea Atomic Energy Research Institute (KAERI). The system first refines multiple documents for optimized search and analysis, breaking the content into manageable paragraphs suited to the language model's processing. Each paragraph is converted into a vector via an embedding model and archived in a database. Upon receiving a user query, the system matches the vector extracted from the question against the stored vectors and pinpoints the most pertinent content. The chosen paragraphs, combined with the user's query, are then passed to the language generation model to formulate a response. Tests covering a spectrum of questions verified the system's proficiency in discerning question intent, understanding diverse documents, and delivering rapid and precise answers.
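
The embed-store-retrieve-generate loop the abstract describes can be sketched as below. The embedding model name and sample paragraphs are illustrative stand-ins, the "database" is an in-memory array, and the final LLM call is left as a returned prompt rather than an actual API call.

```python
# Retrieval-then-generation sketch: embed paragraphs offline, embed the query,
# retrieve the closest paragraphs, and build a prompt for the answer model.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
paragraphs = ["Regulation 1: ...", "Regulation 2: ...", "Travel rules: ..."]
index = embedder.encode(paragraphs)            # offline: archive paragraph vectors

def answer(query: str, top_k: int = 2) -> str:
    q = embedder.encode([query])[0]
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    best = np.argsort(scores)[::-1][:top_k]    # most pertinent paragraphs
    context = "\n".join(paragraphs[i] for i in best)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # in the real system, this prompt goes to the generation model

print(answer("What are the travel rules?"))
```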

Enhancement of a language model using two separate corpora of distinct characteristics

  • Cho, Sehyeong;Chung, Tae-Sun
    • Journal of the Korean Institute of Intelligent Systems, v.14 no.3, pp.357-362, 2004
  • Language models are essential in predicting the next word in a spoken sentence, thereby enhancing speech recognition accuracy, among other things. However, spoken language domains are numerous, and developers suffer from a lack of corpora of sufficient size. This paper proposes a method of combining two n-gram language models, one constructed from a very small corpus of the domain of interest, the other from a large but less adequate corpus, resulting in a significantly enhanced language model. The method is based on the observation that a small corpus from the right domain yields high-quality n-grams but suffers severely from sparseness, while a large corpus from a different domain offers richer n-gram statistics that are incorrectly biased. In our approach, the two sets of n-gram statistics are combined by extending the idea of Katz's backoff, hence the name dual-source backoff. We ran experiments with 3-gram language models constructed from newspaper corpora of several million to tens of millions of words, together with models from smaller broadcast news corpora; the target domain was broadcast news. We obtained a significant improvement (30%) by incorporating a small corpus about one-thirtieth the size of the newspaper corpus.
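
A toy sketch of the dual-source idea, shown for bigrams: prefer estimates from the small in-domain corpus, and back off with a discount to the large out-of-domain corpus when the in-domain count is missing. The fixed discount `alpha` and the floor term are simplifications; the paper redistributes probability mass properly in the spirit of Katz's backoff.

```python
# Dual-source backoff sketch on toy corpora (bigram case).
from collections import Counter

def bigram_counts(words):
    return Counter(zip(words, words[1:]))

in_domain = "the news anchor reported the news tonight".split()
out_domain = "the reporter reported a story about the anchor".split()
small_bi, small_uni = bigram_counts(in_domain), Counter(in_domain)
large_bi, large_uni = bigram_counts(out_domain), Counter(out_domain)

def prob(w1, w2, alpha=0.4):
    """P(w2 | w1): in-domain estimate if seen there, else discounted out-of-domain."""
    if small_bi[(w1, w2)] > 0:
        return small_bi[(w1, w2)] / small_uni[w1]
    if large_bi[(w1, w2)] > 0:
        return alpha * large_bi[(w1, w2)] / large_uni[w1]
    return alpha * alpha / max(len(small_uni) + len(large_uni), 1)  # crude floor

print(prob("reported", "the"))  # seen in-domain: plain relative frequency
print(prob("reported", "a"))    # unseen in-domain: backed off with discount
```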

Deep Learning-based Target Masking Scheme for Understanding Meaning of Newly Coined Words

  • Nam, Gun-Min;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information, v.26 no.10, pp.157-165, 2021
  • Recently, studies using deep learning to analyze large amounts of text have been actively conducted. In particular, pre-trained language models, which transfer the results of learning from a large amount of text to the analysis of text in a specific domain, are attracting attention. Among the various pre-trained language models, BERT (Bidirectional Encoder Representations from Transformers)-based models are the most widely used, and research on improving analysis performance through further pre-training with BERT's MLM (Masked Language Model) is ongoing. However, the traditional MLM has difficulty clearly capturing the meaning of sentences containing new words such as newly coined words. Therefore, in this study we propose NTM (Newly coined words Target Masking), which performs masking only on new words. Applying the proposed methodology to about 700,000 movie reviews from portal 'N' confirmed that NTM achieves higher sentiment analysis accuracy than conventional random masking.
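
The contrast with random masking can be sketched as below: only tokens found in a new-word lexicon are replaced with [MASK] before further pre-training. The lexicon and sentence are hypothetical (and in English for readability); the paper works with Korean movie reviews.

```python
# Target-masking sketch: mask only newly coined words, not random tokens.
MASK = "[MASK]"                                   # BERT's mask token
newly_coined = {"bingeworthy", "clickbait"}       # hypothetical new-word lexicon

def target_mask(sentence: str):
    tokens = sentence.split()
    masked = [MASK if t.lower() in newly_coined else t for t in tokens]
    targets = [t for t in tokens if t.lower() in newly_coined]
    return " ".join(masked), targets              # MLM input and its labels

inputs, targets = target_mask("This bingeworthy series is pure clickbait")
print(inputs)    # -> This [MASK] series is pure [MASK]
print(targets)   # -> ['bingeworthy', 'clickbait']
```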

Compressing intent classification model for multi-agent in low-resource devices (저성능 자원에서 멀티 에이전트 운영을 위한 의도 분류 모델 경량화)

  • Yoon, Yongsun;Kang, Jinbeom
    • Journal of Intelligence and Information Systems, v.28 no.3, pp.45-55, 2022
  • Recently, large-scale pre-trained language models (LPLMs) have shown state-of-the-art performance in various natural language processing tasks, including intent classification. However, fine-tuning an LPLM requires a high computational cost for training and inference, which is not appropriate for dialog systems. In this paper, we propose a compressed intent classification model for multi-agent operation on low-resource devices such as CPUs. Our method consists of two stages. First, we train a sentence encoder from the LPLM and compress it through knowledge distillation. Second, we train an agent-specific adapter for intent classification. Results on three intent classification datasets show that our method achieves 98% of the LPLM's accuracy with only 21% of its size.
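
Stage one of the method, distilling a small sentence encoder from the large model, might look like the sketch below, which matches student and teacher embeddings with an MSE loss. The dimensions, the MSE objective, and the random stand-in batches are illustrative assumptions; the agent-specific adapters of stage two would be trained on top of the frozen student.

```python
# Knowledge-distillation sketch: a small student encoder learns to reproduce
# the frozen teacher's sentence embeddings.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(512, 1024), nn.Tanh(), nn.Linear(1024, 256))
student = nn.Sequential(nn.Linear(512, 128), nn.Tanh(), nn.Linear(128, 256))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):                  # distillation loop on unlabeled text
    x = torch.randn(32, 512)             # stand-in for a batch of encoded sentences
    with torch.no_grad():
        target = teacher(x)              # teacher embedding (frozen)
    loss = nn.functional.mse_loss(student(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```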

Design of a Korean Speech Recognition Platform (한국어 음성인식 플랫폼의 설계)

  • Kwon Oh-Wook;Kim Hoi-Rin;Yoo Changdong;Kim Bong-Wan;Lee Yong-Ju
    • MALSORI, no.51, pp.151-165, 2004
  • For educational and research purposes, a Korean speech recognition platform is designed. It is based on an object-oriented architecture and can be easily modified so that researchers can readily evaluate the performance of a recognition algorithm of interest. This platform will save development time for many who are interested in speech recognition. The platform includes the following modules: noise reduction, end-point detection, mel-frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP)-based feature extraction, hidden Markov model (HMM)-based acoustic modeling, n-gram language modeling, n-best search, and Korean language processing. The platform's decoder can handle both lexical search trees for large-vocabulary speech recognition and finite-state networks for small-to-medium-vocabulary speech recognition. It performs a word-dependent n-best search with a bigram language model in the first, forward search stage, then extracts a word lattice and rescores each lattice path with a trigram language model in the second stage.
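
The two-stage decoding can be illustrated with an n-best list standing in for the word lattice: a cheap first pass proposes candidate paths, and a trigram LM rescores them. All scores below are made up for the example; this is a conceptual sketch, not the platform's decoder.

```python
# Toy two-stage decoding: the bigram first pass yields candidates; a trigram
# LM rescores each path and the best-scoring path is selected.
nbest = [  # (first-pass log-score, word sequence)
    (-12.0, ["recognize", "speech"]),
    (-11.5, ["wreck", "a", "nice", "beach"]),
]
trigram_logprob = {("<s>", "recognize", "speech"): -1.0,
                   ("<s>", "wreck", "a"): -4.0,
                   ("wreck", "a", "nice"): -3.5,
                   ("a", "nice", "beach"): -3.0}

def rescore(first_pass, words, lm_weight=1.0):
    ctx = ["<s>"] + words
    lm = sum(trigram_logprob.get(tuple(ctx[i:i + 3]), -10.0)  # backoff penalty
             for i in range(len(words) - 1))
    return first_pass + lm_weight * lm

best = max(nbest, key=lambda p: rescore(*p))
print(" ".join(best[1]))   # -> recognize speech
```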


A Technique for Improving Relation Extraction Performance using Entity Information in Language Model (언어모델에서 엔티티 정보를 이용한 관계 추출 성능 향상 기법)

  • Hur, Yuna;Oh, Dongsuk;Whang, Taesun;Lee, Seolhwa;Lim, Heuiseok
    • Annual Conference on Human and Language Technology, 2020.10a, pp.124-127, 2020
  • Relation extraction is the task of classifying the relation between two entities given in a sentence, based on a semantic understanding of both entities, so information about the two entities is required to classify the relation. In this study, we compared relation extraction performance under different entity representations in the sentence: first, a Standard representation that classifies the relation with the [CLS] token, and second, an Entity-Markers representation that classifies the relation after adding special tokens before and after each entity. On this basis, we conducted experiments with BERT-Large and ALBERT-Large, pre-trained models that capture the contextual information of sentences. The experimental results show that the Entity-Markers representation with added special tokens performed better, with BERT-Large achieving the higher performance.
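
A sketch of the Entity-Markers representation compared in the paper: special tokens are wrapped around each entity and the [CLS] vector feeds a relation classifier. The marker names ([E1], [/E1], ...) follow common practice and are not necessarily the paper's exact tokens, and the example sentence and relation count are invented.

```python
# Entity-Markers sketch: wrap entities in special tokens, encode, and classify
# the relation from the [CLS] representation.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased")
markers = {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]}
tokenizer.add_special_tokens(markers)
model.resize_token_embeddings(len(tokenizer))

sentence = "[E1] Marie Curie [/E1] was born in [E2] Warsaw [/E2] ."
batch = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    cls_vec = model(**batch).last_hidden_state[:, 0]   # [CLS] representation

relation_classifier = torch.nn.Linear(model.config.hidden_size, 10)  # 10 relations
logits = relation_classifier(cls_vec)                  # scores per relation type
```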


Design and Implementation of a Network Programming Language (네트워크를 고려한 프로그래밍언어의 설계와 구현)

  • Won, Yu-Hun;Han, Tae-Suk
    • Journal of KIISE: Software and Applications, v.26 no.11, pp.1359-1371, 1999
  • Some studies have brought up the concept of code mobility as an innovative way to access network resources when developing distributed systems that work on a large-scale network, and many languages have since been proposed to support this concept. In these languages, constructs for mobile code and for the particular application domain give the programmer a problem-oriented view while providing reasonable performance to the system. In this thesis, we extend the client-server model, currently the most popular model for developing distributed systems. We propose a model that offers another method of accessing a server's resources, and we extend the C language to implement the proposed model for large-scale networks. The new language makes it possible to build software that runs on a large-scale network by supporting mobile code, and it gives the programmer a consistent view of network programming by adopting distributed scope rules, so that mobile code can be written as plainly as an ordinary function. The language also eases network programming by providing network-related primitives at the language level. Beyond the theoretical design, we implemented a prototype run-time system to support the language. The run-time system consists of two major parts: a code interpreter that interprets and executes the language, and a network daemon that supports mobile code; with them, real network applications can be written and tested in the designed language.
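
As a rough illustration of the mobile-code idea (in Python rather than the paper's C extension), the sketch below ships a code fragment to a "network daemon" process that interprets it next to the server's resources, so only the result crosses the connection. The pipe, resource dictionary, and shipped fragment are all invented for the example.

```python
# Mobile-code sketch: the client sends code, not a fixed request; the daemon
# executes it near the server's data and returns only the result.
import multiprocessing as mp

def network_daemon(conn, resources):
    """Stand-in for the run-time system: interpret received code near the data."""
    code = conn.recv()
    scope = {"resources": resources, "result": None}
    exec(code, scope)                    # the code-interpreter role
    conn.send(scope["result"])

if __name__ == "__main__":
    server_end, client_end = mp.Pipe()
    daemon = mp.Process(target=network_daemon,
                        args=(server_end, {"log": [3, 1, 4, 1, 5]}))
    daemon.start()
    # The "mobile code": computes on the server's resource; only the sum returns.
    client_end.send("result = sum(resources['log'])")
    print(client_end.recv())             # -> 14
    daemon.join()
```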