• Title/Summary/Keyword: large-language model

Large Language Models: A Comprehensive Guide for Radiologists (대형 언어 모델: 영상의학 전문가를 위한 종합 안내서)

  • Sunkyu Kim;Choong-kun Lee;Seung-seob Kim
    • Journal of the Korean Society of Radiology / v.85 no.5 / pp.861-882 / 2024
  • Large language models (LLMs) have revolutionized the global landscape of technology beyond the field of natural language processing. Owing to their extensive pre-training using vast datasets, contemporary LLMs can handle tasks ranging from general functionalities to domain-specific areas, such as radiology, without the need for additional fine-tuning. Importantly, LLMs are on a trajectory of rapid evolution, addressing challenges such as hallucination, bias in training data, high training costs, performance drift, and privacy issues, along with the inclusion of multimodal inputs. The concept of small, on-premise open source LLMs has garnered growing interest, as fine-tuning to medical domain knowledge, addressing efficiency and privacy issues, and managing performance drift can be effectively and simultaneously achieved. This review provides conceptual knowledge, actionable guidance, and an overview of the current technological landscape and future directions in LLMs for radiologists.

Application of Domain-specific Thesaurus to Construction Documents based on Flow Margin of Semantic Similarity

  • Youmin PARK;Seonghyeon MOON;Jinwoo KIM;Seokho CHI
    • International conference on construction engineering and project management / 2024.07a / pp.375-382 / 2024
  • Large Language Models (LLMs) still encounter challenges in comprehending domain-specific expressions within construction documents. Analogous to humans acquiring unfamiliar expressions from dictionaries, language models could assimilate domain-specific expressions through the use of a thesaurus. Numerous prior studies have developed construction thesauri; however, a practical issue arises in effectively leveraging these resources for instructing language models. Given that a thesaurus primarily outlines relationships between terms without indicating their relative importance, language models may struggle to discern which terms to retain or replace. This research aims to establish a robust framework for guiding language models using the information in the thesaurus. For instance, a term would be associated with a list of similar terms while also being included in the lists of other related terms. The relative significance among terms could be ascertained by employing similarity scores normalized according to relevance ranks. Consequently, a term exhibiting a positive margin of normalized similarity scores (termed a pivot term) could semantically replace other related terms, thereby enabling LLMs to comprehend domain-specific terms through these pivotal terms. The outcome of this research presents a practical methodology for utilizing domain-specific thesauri to train LLMs and analyze construction documents. Ongoing evaluation involves validating the accuracy of the thesaurus-applied LLM (e.g., S-BERT) in identifying similarities within construction specification provisions. This outcome holds potential for the construction industry by enhancing LLMs' understanding of construction documents and subsequently improving text mining performance and project management efficiency.
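The pivot-term idea in the abstract above can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: the thesaurus contents, the rank-based normalization (1/rank), and the margin definition (outgoing minus incoming similarity mass) are all assumptions made for the sketch.

```python
# Toy sketch of the "flow margin" idea: each term lists rank-ordered
# similar terms; scores are normalized by relevance rank, and a term
# whose outgoing similarity exceeds the similarity it receives from
# other terms (positive margin) is treated as a pivot term.
from collections import defaultdict

thesaurus = {  # hypothetical construction thesaurus fragment
    "girder": ["beam", "member"],
    "beam":   ["girder", "member"],
    "member": ["beam"],
}

def normalized_scores(similar_terms):
    # Rank-based normalization: rank 1 -> 1.0, rank 2 -> 0.5, ...
    return {t: 1.0 / (rank + 1) for rank, t in enumerate(similar_terms)}

outgoing = {term: sum(normalized_scores(sims).values())
            for term, sims in thesaurus.items()}
incoming = defaultdict(float)
for term, sims in thesaurus.items():
    for other, score in normalized_scores(sims).items():
        incoming[other] += score

# Positive margin => pivot term that can stand in for its neighbors.
margins = {t: outgoing[t] - incoming[t] for t in thesaurus}
pivots = [t for t, m in margins.items() if m > 0]
```

With this toy data, "girder" ends up as the only pivot term, so related provisions could be rewritten in terms of it before feeding them to an LLM.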

Building Specialized Language Model for National R&D through Knowledge Transfer Based on Further Pre-training (추가 사전학습 기반 지식 전이를 통한 국가 R&D 전문 언어모델 구축)

  • Yu, Eunji;Seo, Sumin;Kim, Namgyu
    • Knowledge Management Research / v.22 no.3 / pp.91-106 / 2021
  • With the recent rapid development of deep learning technology, the demand for analyzing huge text documents in the national R&D field from various perspectives is rapidly increasing. In particular, interest in applying a BERT (Bidirectional Encoder Representations from Transformers) language model pre-trained on a large corpus is growing. However, the terminology used frequently in highly specialized fields such as national R&D is often not sufficiently learned by basic BERT, which is a noted limitation of using BERT to understand documents in specialized fields. Therefore, this study proposes a method to build an R&D KoBERT language model that transfers national R&D domain knowledge to basic BERT through further pre-training. To evaluate the performance of the proposed model, we performed classification analysis on about 116,000 R&D reports in the health care and information and communication fields. Experimental results showed that the proposed model achieved higher accuracy than the pure KoBERT model.

An Empirical Study of Topic Classification for Korean Newspaper Headlines (한국어 뉴스 헤드라인의 토픽 분류에 대한 실증적 연구)

  • Park, Jeiyoon;Kim, Mingyu;Oh, Yerim;Lee, Sangwon;Min, Jiung;Oh, Youngdae
    • Annual Conference on Human and Language Technology / 2021.10a / pp.287-292 / 2021
  • A good natural language understanding system should, like a human, not only recognize the surface forms of words and sentences in text but also accurately infer what the text actually means. In this paper, we compare the performance of various publicly available large-scale Korean language models that have not previously been benchmarked on KLUE (Korean Language Understanding Evaluation), an open benchmark for classifying news topics from news headlines, and empirically analyze the causes behind the results. Testing four baseline models (KoBERT, KoBART, KoELECTRA, and KcELECTRA) on the KLUE-TC benchmark, which classifies a given news headline into one of seven classes, we found that KoBERT performed best with an accuracy of 86.7.

Data Augmentation using Large Language Model for English Education (영어 교육을 위한 거대 언어 모델 활용 말뭉치 확장 프레임워크)

  • Jinwoo Jung;Sangkeun Jung
    • Annual Conference on Human and Language Technology / 2023.10a / pp.698-703 / 2023
  • Recently, pre-trained generative models such as ChatGPT have shown strong performance in natural language understanding. They are also being used in diverse areas, such as assisting with coding tasks and solving or helping with problems at the level of the College Scholastic Ability Test and middle- and high-school exams. This paper presents a framework that uses a pre-trained generative model to expand a corpus for English education. To this end, we expand the corpus with ChatGPT and then verify the educational value of the generated sentences using semantic similarity, situational similarity, and sentence difficulty for education.

A Study on the Evaluation of LLM's Gameplay Capabilities in Interactive Text-Based Games (대화형 텍스트 기반 게임에서 LLM의 게임플레이 기능 평가에 관한 연구)

  • Dongcheul Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.87-94 / 2024
  • We investigated the feasibility of using Large Language Models (LLMs) to play text-based games without prior training on game data. We adopted ChatGPT-3.5 and its state-of-the-art successor, ChatGPT-4, as the LLM systems, and added the persistent memory feature proposed in this paper to ChatGPT-4, yielding three game player agents in total. We used Zork, one of the most famous text-based games, to see whether the agents could navigate complex locations, gather information, and solve puzzles. The results showed that the agent with persistent memory had the widest range of exploration and the best score among the three agents. However, all three agents were limited in solving puzzles, indicating that LLMs are vulnerable to problems that require multi-level reasoning. Nevertheless, the proposed agent was still able to visit 37.3% of the total locations and collect all the items in the locations it visited, demonstrating the potential of LLMs.
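The persistent-memory idea can be illustrated with a minimal sketch. The class, field names, and serialization format below are hypothetical (the abstract does not publish the implementation); the point is only that observations are stored outside the LLM's context window and re-injected into each new prompt.

```python
# Minimal sketch of persistent memory for a text-game agent: facts from
# earlier turns survive the LLM's limited context by living in an
# external store that is serialized into every new prompt.

class PersistentMemory:
    def __init__(self):
        self.visited = {}    # location name -> last seen description
        self.inventory = []  # items collected so far

    def observe(self, location, description, items=()):
        # Record (or update) a location and any items picked up there.
        self.visited[location] = description
        self.inventory.extend(items)

    def to_prompt(self):
        # Serialize the memory so it can be prepended to the next query.
        lines = [f"Visited {loc}: {desc}" for loc, desc in self.visited.items()]
        lines.append("Inventory: " + ", ".join(self.inventory or ["empty"]))
        return "\n".join(lines)

memory = PersistentMemory()
memory.observe("West of House", "There is a small mailbox here.",
               items=["leaflet"])
memory.observe("Forest", "You hear in the distance the chirping of a song bird.")
```

Each turn, `memory.to_prompt()` would be concatenated with the game's latest output before querying the model, so previously visited rooms are never forgotten.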

Exploring the feasibility of developing an education tool for pattern identification using a large language model: focusing on the case of a simulated patient with fatigue symptom and dual deficiency of the heart-spleen pattern (거대언어모델을 활용한 변증 교육도구 개발 가능성 탐색: 피로주증의 심비양허형 모의환자에 대한 사례구축을 중심으로)

  • Won-Yung Lee;Sang Yun Han;Seungho Lee
    • Herbal Formula Science / v.32 no.1 / pp.1-9 / 2024
  • Objective : This study aims to assess the potential of utilizing large language models in pattern identification education by developing a simulated patient with fatigue and dual deficiency of the heart-spleen pattern. Methods : A simulated patient dataset was constructed using the clinical practice examination module provided by the National Institute for Korean Medicine Development. The dataset was divided into patient characteristics, sample questions, and responses, which were used to design the system, assistant, and user prompts, respectively. A web-based interface was developed using the Django framework and WebSocket. Results : We developed a simulated fatigue patient representing dual deficiency of the heart-spleen pattern through prompt engineering. To make the tool practical, we further implemented web-based interfaces for the examinee's and evaluator's roles. The interface for examinees allows them to examine the simulated patient and provides a personalized number for future access. In addition, the interface for evaluators includes a page that provides an overview of each examinee's chat history and evaluation criteria in real time. Conclusion : This study is the first to develop an educational tool integrated with a large language model for pattern identification education, and it is expected to be widely applied to Korean medicine education.

Analysis Method Study of Film Text using Word Vectors of Language Model (언어모델의 단어벡터를 이용한 영화 텍스트 분석 기법 연구)

  • Kwangho Ko;Juryeon Paik
    • The Journal of the Convergence on Culture Technology / v.10 no.6 / pp.703-708 / 2024
  • LSTM, a deep learning technique for building language models, can be easily trained on systems with small computing resources, unlike large language models. In this paper, we propose a convergent technique that trains LSTM-based language models on small-scale texts and performs objective semantic and relational analysis of a text's main topic words using the word vectors of its vocabulary. Using the word vectors of a small language model trained on the English script of "The Green Knight", the 2021 film directed by David Lowery, we propose a technique for analyzing the meaning of and relationships among the main topic words. Through word-vector similarity operations, the meaning and symbolism of each theme word can be objectively analyzed with similarity scores between words. The relationships between theme words can be intuitively recognized by displaying the dimensionality-reduced two-dimensional word vectors. By using a small-scale LSTM language model, we thus propose a method to analyze complex texts with word vectors while minimizing the cost of training.
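The similarity operation at the heart of this approach can be sketched in a few lines. The three-dimensional vectors below are invented for illustration (real embeddings would come from the trained LSTM model and have far more dimensions); only the cosine-similarity computation itself is the standard technique.

```python
# Sketch: compare theme words by cosine similarity of their word
# vectors, as the analysis above describes (toy 3-d vectors).
import math

vectors = {  # hypothetical embeddings for three theme words
    "knight": [0.9, 0.1, 0.2],
    "honor":  [0.8, 0.2, 0.3],
    "green":  [0.1, 0.9, 0.4],
}

def cosine(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_knight_honor = cosine(vectors["knight"], vectors["honor"])
sim_knight_green = cosine(vectors["knight"], vectors["green"])
```

A higher score between two theme words is then read as a closer semantic or symbolic relationship; projecting the same vectors to 2-D (e.g., with PCA) gives the visual layout the paper describes.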

Design and Implementation of BADA-IV/XML Query Processor Supporting Efficient Structure Querying (효율적 구조 질의를 지원하는 바다-IV/XML 질의처리기의 설계 및 구현)

  • 이명철;김상균;손덕주;김명준;이규철
    • The Journal of Information Technology and Database / v.7 no.2 / pp.17-32 / 2000
  • As XML is emerging as the next-generation standard language for electronic documents on the Internet, the number of XML documents containing vast amounts of information is increasing substantially, through both the transformation of existing documents into XML and the appearance of new XML documents. Consequently, an XML document retrieval system becomes essential for searching through the large quantity of XML documents stored in and managed by a DBMS. In this paper we describe the design and implementation of the BADA-IV/XML query processor, which supports content-based, structure-based, and attribute-based retrieval. We design an XML query language based upon XQL (XML Query Language) of the W3C, tightly coupled with OQL (a query language for object-oriented databases). XML documents are stored and maintained in BADA-IV, an object-oriented database management system developed by ETRI (Electronics and Telecommunications Research Institute). The storage data model is based on DOM (Document Object Model), so XML document retrieval is basically executed by DOM tree traversal. We improve search performance using a Node ID that encodes a node's hierarchy information within an XML document. Assuming that the DOM tree is a complete k-ary tree, we show that the Node ID technique is superior to DOM tree traversal in terms of node fetch counts.
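The Node ID idea can be sketched with the standard labeling for a complete k-ary tree (this is the general scheme the abstract's assumption implies, not necessarily BADA-IV's exact encoding): with the root numbered 1, a node's parent, and hence all its ancestors, follow from ID arithmetic alone, so structural predicates need no per-node fetches.

```python
# Sketch: in a complete k-ary tree with root ID 1, the children of node
# i are k*(i-1)+2 .. k*(i-1)+k+1, so the parent of node j is
# (j - 2) // k + 1.  Ancestor tests then use IDs only.

def parent(node_id, k):
    # The root (ID 1) has no parent.
    return (node_id - 2) // k + 1 if node_id > 1 else None

def is_ancestor(a, b, k):
    # Decide "a is an ancestor of b" purely by walking b's ID toward
    # the root -- no tree-node fetches, unlike a DOM traversal.
    while b is not None and b > a:
        b = parent(b, k)
    return b == a
```

Evaluating a structure query like `chapter//title` this way touches only the candidate nodes' IDs, which is where the fetch-count advantage over full DOM traversal comes from.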

Context-Based Prompt Selection Methodology to Enhance Performance in Prompt-Based Learning

  • Lib Kim;Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.9-21 / 2024
  • Deep learning has been developing rapidly in recent years, and many researchers are working to utilize large language models in various domains. However, there is a practical difficulty: developing and utilizing language models requires massive data and high-performance computing resources. In-context learning, which uses prompts to learn efficiently, has therefore been introduced, but clear criteria for what makes an effective prompt are still lacking. In this study, we propose a methodology for enhancing prompt-based learning performance by improving the PET (Pattern-Exploiting Training) technique, one of the contextual learning methods, to select PVPs (pattern-verbalizer pairs) that are similar to the context of existing data. To evaluate the proposed methodology, we conducted experiments with a dataset of 30,100 restaurant reviews collected from Yelp, an online business review platform. We found that the proposed methodology outperforms traditional PET in accuracy, stability, and learning efficiency.
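The selection step, choosing the prompt pattern whose wording best matches the data's context, can be sketched with a toy similarity measure. The patterns, sample reviews, and token-overlap (Jaccard) scoring below are illustrative assumptions; the paper's actual similarity computation is not specified in the abstract.

```python
# Toy sketch: score each candidate prompt pattern (PVP) against sample
# reviews with token-overlap similarity and keep the best-matching one.

def jaccard(a, b):
    # Similarity of two texts as overlap of their lowercase token sets.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

pvps = [  # hypothetical candidate patterns; [MASK] is the verbalizer slot
    "All in all the food was [MASK]",
    "The movie review sounds [MASK]",
]
reviews = [  # toy stand-ins for the Yelp restaurant-review context
    "The food and the service were great",
    "Loved the food terrible parking",
]

def context_score(pvp, texts):
    # Average similarity of a pattern to the dataset's context.
    return sum(jaccard(pvp, t) for t in texts) / len(texts)

best_pvp = max(pvps, key=lambda p: context_score(p, reviews))
```

Here the restaurant-flavored pattern wins over the movie-flavored one, which is the intuition behind context-based PVP selection: patterns phrased like the data give the masked-language model a more natural cloze to fill.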