• Title/Summary/Keyword: LLMs (Large Language Models)


A Survey on Open Source based Large Language Models (오픈 소스 기반의 거대 언어 모델 연구 동향: 서베이)

  • Ha-Young Joo;Hyeontaek Oh;Jinhong Yang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.4 / pp.193-202 / 2023
  • In recent years, the outstanding performance of large language models (LLMs) trained on extensive datasets has become a hot topic. Because many LLMs are released through open-source approaches, the ecosystem is expanding rapidly. Task-specific, lightweight, high-performing models are being actively disseminated by applying additional training techniques to pre-trained LLMs used as foundation models. On the other hand, the performance of existing LLMs on Korean is subpar because English comprises a significant proportion of their training data. Research is therefore being carried out on Korean-specific LLMs that are further trained on Korean-language data. This paper identifies trends in open-source-based LLMs, introduces research on Korean-specific large language models, and describes the applications and limitations of large language models.

Large Language Models: A Guide for Radiologists

  • Sunkyu Kim;Choong-kun Lee;Seung-seob Kim
    • Korean Journal of Radiology / v.25 no.2 / pp.126-133 / 2024
  • Large language models (LLMs) have revolutionized the global landscape of technology beyond natural language processing. Owing to their extensive pre-training on vast datasets, contemporary LLMs can handle tasks ranging from general functionalities to domain-specific areas, such as radiology, without additional fine-tuning. General-purpose chatbots based on LLMs can optimize the efficiency of radiologists in terms of their professional work and research endeavors. Importantly, these LLMs are on a trajectory of rapid evolution, wherein challenges such as "hallucination," high training cost, and efficiency issues are addressed, along with the inclusion of multimodal inputs. In this review, we aim to offer conceptual knowledge and actionable guidance to radiologists interested in utilizing LLMs through a succinct overview of the topic and a summary of radiology-specific aspects, from the beginning to potential future directions.

The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions

  • Sangzin Ahn
    • The Korean Journal of Physiology and Pharmacology / v.28 no.5 / pp.393-401 / 2024
  • Large language models (LLMs) are rapidly transforming medical writing and publishing. This review article focuses on experimental evidence to provide a comprehensive overview of the current applications, challenges, and future implications of LLMs at various stages of the academic research and publishing process. Global surveys reveal a high prevalence of LLM usage in scientific writing, with both potential benefits and challenges associated with its adoption. LLMs have been successfully applied in literature search, research design, writing assistance, quality assessment, citation generation, and data analysis. LLMs have also been used in peer review and publication processes, including manuscript screening, generating review comments, and identifying potential biases. To ensure the integrity and quality of scholarly work in the era of LLM-assisted research, responsible artificial intelligence (AI) use is crucial. Researchers should prioritize verifying the accuracy and reliability of AI-generated content, maintain transparency in the use of LLMs, and develop collaborative human-AI workflows. Reviewers should focus on higher-order reviewing skills and be aware of the potential use of LLMs in manuscripts. Editorial offices should develop clear policies and guidelines on AI use and foster open dialogue within the academic community. Future directions include addressing the limitations and biases of current LLMs, exploring innovative applications, and continuously updating policies and practices in response to technological advancements. Collaborative efforts among stakeholders are necessary to harness the transformative potential of LLMs while maintaining the integrity of medical writing and publishing.

Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang;Joon Seo Lim;Ro Woon Lee;Yusuke Matsui;Toshihiro Iguchi;Takao Hiraki;Hyungwoo Ahn
    • Korean Journal of Radiology / v.24 no.10 / pp.952-959 / 2023
  • Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.

Application of Domain-specific Thesaurus to Construction Documents based on Flow Margin of Semantic Similarity

  • Youmin PARK;Seonghyeon MOON;Jinwoo KIM;Seokho CHI
    • International conference on construction engineering and project management / 2024.07a / pp.375-382 / 2024
  • Large Language Models (LLMs) still encounter challenges in comprehending domain-specific expressions within construction documents. Analogous to humans acquiring unfamiliar expressions from dictionaries, language models could assimilate domain-specific expressions through the use of a thesaurus. Numerous prior studies have developed construction thesauri; however, a practical issue arises in effectively leveraging these resources for instructing language models. Given that a thesaurus primarily outlines relationships between terms without indicating their relative importance, language models may struggle to discern which terms to retain or replace. This research aims to establish a robust framework for guiding language models using the information in the thesaurus. For instance, a term would be associated with a list of similar terms while also being included in the lists of other related terms. The relative significance among terms could be ascertained by employing similarity scores normalized according to relevance ranks. Consequently, a term exhibiting a positive margin of normalized similarity scores (termed a pivot term) could semantically replace other related terms, thereby enabling LLMs to comprehend domain-specific terms through these pivotal terms. The outcome of this research is a practical methodology for utilizing domain-specific thesauri to train LLMs and analyze construction documents. Ongoing evaluation involves validating the accuracy of the thesaurus-applied LLM (e.g., S-BERT) in identifying similarities within construction specification provisions. This outcome holds potential for the construction industry by enhancing LLMs' understanding of construction documents and subsequently improving text mining performance and project management efficiency.
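The pivot-term idea described in this abstract can be illustrated with a small sketch. This is not the authors' code: the thesaurus structure, the rank-based normalization (score divided by relevance rank), and the margin definition (incoming minus outgoing normalized similarity) are all assumptions made for illustration, with hypothetical construction terms.

```python
# Hedged sketch: selecting "pivot terms" from a toy thesaurus by the margin
# between incoming and outgoing rank-normalized similarity scores.
thesaurus = {
    "rebar": [("reinforcing bar", 0.95), ("steel bar", 0.80)],
    "reinforcing bar": [("rebar", 0.96)],
    "steel bar": [("rebar", 0.85)],
}

def rank_normalized(entries):
    # Normalize each similarity score by its relevance rank (1-indexed),
    # so lower-ranked neighbors contribute less.
    ranked = sorted(entries, key=lambda e: e[1], reverse=True)
    return {term: score / (i + 1) for i, (term, score) in enumerate(ranked)}

def margins(thesaurus):
    outgoing = {t: sum(rank_normalized(n).values()) for t, n in thesaurus.items()}
    incoming = {t: 0.0 for t in thesaurus}
    for t, nbrs in thesaurus.items():
        for nbr, s in rank_normalized(nbrs).items():
            incoming[nbr] = incoming.get(nbr, 0.0) + s
    # Positive margin: the term is cited more strongly than it cites others.
    return {t: incoming[t] - outgoing[t] for t in thesaurus}

pivots = [t for t, m in margins(thesaurus).items() if m > 0]
```

Under this toy data, "rebar" emerges as the pivot term that could semantically stand in for its neighbors.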

Framework for evaluating code generation ability of large language models

  • Sangyeop Yeo;Yu-Seung Ma;Sang Cheol Kim;Hyungkook Jun;Taeho Kim
    • ETRI Journal / v.46 no.1 / pp.106-117 / 2024
  • Large language models (LLMs) have revolutionized various applications in natural language processing and exhibited proficiency in generating programming code. We propose a framework for evaluating the code generation ability of LLMs and introduce a new metric, pass-ratio@n, which captures the granularity of accuracy according to the pass rate of test cases. The framework is intended to be fully automatic to handle the repetitive work involved in generating prompts, conducting inferences, and executing the generated codes. A preliminary evaluation focusing on the prompt detail, problem publication date, and difficulty level demonstrates the successful integration of our framework with the LeetCode coding platform and highlights the applicability of the pass-ratio@n metric.
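One plausible reading of a pass-ratio@n style metric, sketched for illustration. The exact formula in the paper may differ; here it is taken as the mean, over n generated solutions, of the fraction of test cases each solution passes, contrasted with a binary all-or-nothing score.

```python
# Hedged sketch of a pass-ratio@n style metric (assumed definition, not
# necessarily the paper's): credit partial test-case passes per solution.
def pass_ratio_at_n(case_results):
    """case_results: n lists of booleans, one list per generated solution,
    one boolean per test case."""
    n = len(case_results)
    return sum(sum(cases) / len(cases) for cases in case_results) / n

def pass_at_n(case_results):
    # Binary counterpart: a solution counts only if it passes every case.
    return sum(all(cases) for cases in case_results) / len(case_results)

results = [[True, True, False], [True, True, True]]
# pass_ratio_at_n(results) -> (2/3 + 3/3) / 2, a finer-grained score than
# pass_at_n(results) -> 1/2.
```

The granularity the abstract mentions shows up here: a solution passing two of three test cases still contributes 2/3 rather than zero.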

Large Language Models: A Comprehensive Guide for Radiologists (대형 언어 모델: 영상의학 전문가를 위한 종합 안내서)

  • Sunkyu Kim;Choong-kun Lee;Seung-seob Kim
    • Journal of the Korean Society of Radiology / v.85 no.5 / pp.861-882 / 2024
  • Large language models (LLMs) have revolutionized the global landscape of technology beyond the field of natural language processing. Owing to their extensive pre-training using vast datasets, contemporary LLMs can handle tasks ranging from general functionalities to domain-specific areas, such as radiology, without the need for additional fine-tuning. Importantly, LLMs are on a trajectory of rapid evolution, addressing challenges such as hallucination, bias in training data, high training costs, performance drift, and privacy issues, along with the inclusion of multimodal inputs. The concept of small, on-premise open source LLMs has garnered growing interest, as fine-tuning to medical domain knowledge, addressing efficiency and privacy issues, and managing performance drift can be effectively and simultaneously achieved. This review provides conceptual knowledge, actionable guidance, and an overview of the current technological landscape and future directions in LLMs for radiologists.

A Proposal of Evaluation of Large Language Models Built Based on Research Data (연구데이터 관점에서 본 거대언어모델 품질 평가 기준 제언)

  • Na-eun Han;Sujeong Seo;Jung-ho Um
    • Journal of the Korean Society for Information Management / v.40 no.3 / pp.77-98 / 2023
  • Large Language Models (LLMs) are becoming the major trend in the natural language processing field. These models are built on research data, but information such as the types, limitations, and risks of the research data used is unknown. This research presents how to analyze and evaluate, from the perspective of research data, LLMs built with research data: LLaMA and LLaMA-based models such as Stanford's Alpaca, Vicuna from the Large Model Systems Organization, and ChatGPT from OpenAI. The quality evaluation focuses on the validity, functionality, and reliability criteria of Data Quality Management (DQM). Furthermore, we adopted the Holistic Evaluation of Language Models (HELM) framework to understand its evaluation criteria and then discussed its limitations. This study presents quality evaluation criteria for LLMs built on research data and future development directions.

Instruction Fine-tuning and LoRA Combined Approach for Optimizing Large Language Models (대규모 언어 모델의 최적화를 위한 지시형 미세 조정과 LoRA 결합 접근법)

  • Sang-Gook Kim;Kyungran Noh;Hyuk Hahn;Boong Kee Choi
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.2 / pp.134-146 / 2024
  • This study introduces and experimentally validates a novel approach that combines Instruction fine-tuning and Low-Rank Adaptation (LoRA) fine-tuning to optimize the performance of Large Language Models (LLMs). These models have become revolutionary tools in natural language processing, showing remarkable performance across diverse application areas. However, optimizing their performance for specific domains necessitates fine-tuning of the foundation models (FMs), which is often limited by challenges such as data complexity and resource costs. The proposed approach aims to overcome these limitations by enhancing the performance of LLMs, particularly in the precision and efficiency of analyses of national Research and Development (R&D) data. The study provides theoretical foundations and technical implementations of Instruction fine-tuning and LoRA fine-tuning. Through rigorous experimental validation, it is demonstrated that the proposed method significantly improves the precision and efficiency of data analysis, outperforming traditional fine-tuning methods. This enhancement is not only beneficial for national R&D data but also suggests potential applicability in various other data-centric domains, such as medical data analysis, financial forecasting, and educational assessments. The findings highlight the method's broad utility and significant contribution to advancing data analysis techniques in specialized knowledge domains, offering new possibilities for leveraging LLMs in complex and resource-intensive tasks. This research underscores the transformative potential of combining Instruction fine-tuning with LoRA fine-tuning to achieve superior performance in diverse applications, paving the way for more efficient and effective utilization of LLMs in both academic and industrial settings.
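The resource-cost argument behind LoRA can be made concrete with a minimal numerical sketch. This is not the paper's implementation: the dimensions, rank, and scaling below are illustrative assumptions; only the core idea (a frozen weight plus a trainable low-rank update, scaled by alpha/r) reflects the standard LoRA formulation.

```python
import numpy as np

# Minimal sketch of low-rank adaptation: instead of updating a frozen weight
# matrix W (d_out x d_in), train only a rank-r update B @ A, r << d_out, d_in.
d_out, d_in, r, alpha = 512, 512, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init
                                           # so the adapted layer starts out
                                           # identical to the base layer

def forward(x):
    # Adapted layer: W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size           # 512 * 512 = 262,144
lora_params = A.size + B.size  # 8 * 512 + 512 * 8 = 8,192 (~3% of full)
```

The trainable-parameter count drops by roughly 97% at rank 8, which is the efficiency gain the abstract appeals to when combining LoRA with instruction fine-tuning.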

Technical Trends in On-device Small Language Model Technology Development (온디바이스 소형언어모델 기술개발 동향)

  • G. Kim;K. Yoon;R. Kim;J. H. Ryu;S. C. Kim
    • Electronics and Telecommunications Trends / v.39 no.4 / pp.82-92 / 2024
  • This paper introduces technological development trends in on-device Small Language Models (SLMs). Large Language Models (LLMs) based on the transformer model have gained global attention with the emergence of ChatGPT, providing detailed and sophisticated responses across various knowledge domains and thereby increasing their impact across society. While major global tech companies continue to announce new LLMs or enhance their capabilities, the development of SLMs, which are lightweight versions of LLMs, is progressing rapidly. SLMs have the advantage of being able to run as on-device AI on smartphones or edge devices with limited memory and computing resources, enabling their application in various fields from a commercialization perspective. This paper examines the technical features of SLM development, lightweight technologies, semiconductor technology development trends for on-device AI, and potential applications across various industries.