• Title/Abstract/Keyword: LLMs (Large Language Models)

Search results: 33 items

A Survey on Open Source based Large Language Models

  • 주하영;오현택;양진홍
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / Vol. 16, No. 4 / pp.193-202 / 2023
  • The recent public release of large language models trained on massive datasets, and their outstanding performance, has attracted wide attention. However, because training and using such models requires enormous computing and memory resources, most research had been conducted in closed environments centered on big-tech companies. With the release of Meta's large language model LLaMA, however, research on large language models moved out of these closed environments and became open source, and the surrounding ecosystem has been expanding rapidly. Against this background, models that further train pre-trained large language models to specialize in particular tasks, or that are lightweight yet high-performing, are being actively shared. Meanwhile, because English accounts for most of the training data of pre-trained large language models, their Korean-language performance is comparatively weak; to overcome this limitation, studies on Korean-specialized language models that perform additional training on Korean data are under way. This paper surveys trends in the open-source large language model ecosystem, introduces research on English- and Korean-specialized large language models, and examines the applications and limitations of large language models.

Large Language Models: A Guide for Radiologists

  • Sunkyu Kim;Choong-kun Lee;Seung-seob Kim
    • Korean Journal of Radiology / Vol. 25, No. 2 / pp.126-133 / 2024
  • Large language models (LLMs) have revolutionized the global landscape of technology beyond natural language processing. Owing to their extensive pre-training on vast datasets, contemporary LLMs can handle tasks ranging from general functionalities to domain-specific areas, such as radiology, without additional fine-tuning. General-purpose chatbots based on LLMs can optimize the efficiency of radiologists in terms of their professional work and research endeavors. Importantly, these LLMs are on a trajectory of rapid evolution, wherein challenges such as "hallucination," high training cost, and efficiency issues are addressed, along with the inclusion of multimodal inputs. In this review, we aim to offer conceptual knowledge and actionable guidance to radiologists interested in utilizing LLMs through a succinct overview of the topic and a summary of radiology-specific aspects, from the beginning to potential future directions.

The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions

  • Sangzin Ahn
    • The Korean Journal of Physiology and Pharmacology / Vol. 28, No. 5 / pp.393-401 / 2024
  • Large language models (LLMs) are rapidly transforming medical writing and publishing. This review article focuses on experimental evidence to provide a comprehensive overview of the current applications, challenges, and future implications of LLMs in various stages of academic research and publishing process. Global surveys reveal a high prevalence of LLM usage in scientific writing, with both potential benefits and challenges associated with its adoption. LLMs have been successfully applied in literature search, research design, writing assistance, quality assessment, citation generation, and data analysis. LLMs have also been used in peer review and publication processes, including manuscript screening, generating review comments, and identifying potential biases. To ensure the integrity and quality of scholarly work in the era of LLM-assisted research, responsible artificial intelligence (AI) use is crucial. Researchers should prioritize verifying the accuracy and reliability of AI-generated content, maintain transparency in the use of LLMs, and develop collaborative human-AI workflows. Reviewers should focus on higher-order reviewing skills and be aware of the potential use of LLMs in manuscripts. Editorial offices should develop clear policies and guidelines on AI use and foster open dialogue within the academic community. Future directions include addressing the limitations and biases of current LLMs, exploring innovative applications, and continuously updating policies and practices in response to technological advancements. Collaborative efforts among stakeholders are necessary to harness the transformative potential of LLMs while maintaining the integrity of medical writing and publishing.

Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang;Joon Seo Lim;Ro Woon Lee;Yusuke Matsui;Toshihiro Iguchi;Takao Hiraki;Hyungwoo Ahn
    • Korean Journal of Radiology / Vol. 24, No. 10 / pp.952-959 / 2023
  • Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.

Application of Domain-specific Thesaurus to Construction Documents based on Flow Margin of Semantic Similarity

  • Youmin PARK;Seonghyeon MOON;Jinwoo KIM;Seokho CHI
    • International Conference Proceedings / The 10th International Conference on Construction Engineering and Project Management / pp.375-382 / 2024
  • Large Language Models (LLMs) still encounter challenges in comprehending domain-specific expressions within construction documents. Analogous to humans acquiring unfamiliar expressions from dictionaries, language models could assimilate domain-specific expressions through the use of a thesaurus. Numerous prior studies have developed construction thesauri; however, a practical issue arises in effectively leveraging these resources for instructing language models. Given that the thesaurus primarily outlines relationships between terms without indicating their relative importance, language models may struggle in discerning which terms to retain or replace. This research aims to establish a robust framework for guiding language models using the information from the thesaurus. For instance, a term would be associated with a list of similar terms while also being included in the lists of other related terms. The relative significance among terms could be ascertained by employing similarity scores normalized according to relevance ranks. Consequently, a term exhibiting a positive margin of normalized similarity scores (termed a pivot term) could semantically replace other related terms, thereby enabling LLMs to comprehend domain-specific terms through these pivotal terms. The outcome of this research presents a practical methodology for utilizing domain-specific thesauri to train LLMs and analyze construction documents. Ongoing evaluation involves validating the accuracy of the thesaurus-applied LLM (e.g., S-BERT) in identifying similarities within construction specification provisions. This outcome holds potential for the construction industry by enhancing LLMs' understanding of construction documents and subsequently improving text mining performance and project management efficiency.
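The exact scoring scheme is not spelled out in the abstract; the sketch below is one hypothetical reading of the pivot-term idea, in which each thesaurus entry's similarity scores are normalized by relevance rank and, for a pair of related terms, the one with the larger normalized score toward the other is kept as the pivot that can replace its counterpart. All terms, scores, and function names here are invented for illustration.

```python
# Toy thesaurus: term -> list of (related term, raw similarity),
# ordered by relevance rank (all entries are invented examples).
thesaurus = {
    "rebar": [("steel bar", 0.9), ("reinforcing bar", 0.8)],
    "steel bar": [("reinforcing bar", 0.7), ("rebar", 0.6)],
    "reinforcing bar": [("rebar", 0.8), ("steel bar", 0.7)],
}

def norm_scores(entries):
    # Normalize each similarity score by its 1-based relevance rank,
    # so lower-ranked relations contribute less.
    return {term: score / rank
            for rank, (term, score) in enumerate(entries, start=1)}

def pivot(a, b):
    # Compare the normalized similarity each term assigns the other;
    # the term with the non-negative margin is kept as the pivot.
    sa = norm_scores(thesaurus[a]).get(b, 0.0)
    sb = norm_scores(thesaurus[b]).get(a, 0.0)
    return a if sa >= sb else b
```

Under this reading, "rebar" emerges as the pivot for "steel bar" regardless of argument order, so an LLM preprocessing step could rewrite occurrences of "steel bar" to the pivot term before similarity analysis.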

Framework for evaluating code generation ability of large language models

  • Sangyeop Yeo;Yu-Seung Ma;Sang Cheol Kim;Hyungkook Jun;Taeho Kim
    • ETRI Journal / Vol. 46, No. 1 / pp.106-117 / 2024
  • Large language models (LLMs) have revolutionized various applications in natural language processing and exhibited proficiency in generating programming code. We propose a framework for evaluating the code generation ability of LLMs and introduce a new metric, pass-ratio@n, which captures the granularity of accuracy according to the pass rate of test cases. The framework is intended to be fully automatic to handle the repetitive work involved in generating prompts, conducting inferences, and executing the generated codes. A preliminary evaluation focusing on the prompt detail, problem publication date, and difficulty level demonstrates the successful integration of our framework with the LeetCode coding platform and highlights the applicability of the pass-ratio@n metric.
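The abstract does not give the formula for pass-ratio@n; one natural reading, sketched below, averages the per-solution fraction of passed test cases over the n generated solutions (the function name and input shape are assumptions, not the paper's code).

```python
def pass_ratio_at_n(test_results):
    """Assumed reading of pass-ratio@n.

    test_results: list of length n, one entry per generated solution;
    each entry is the list of booleans for that solution's test-case
    outcomes. Returns the mean fraction of test cases passed.
    """
    ratios = [sum(cases) / len(cases) for cases in test_results]
    return sum(ratios) / len(ratios)

# e.g. two sampled solutions passing 3/4 and 4/4 of the test cases:
score = pass_ratio_at_n([[True, True, True, False],
                         [True, True, True, True]])   # 0.875
```

Unlike the usual binary pass@n, this reading gives partial credit to solutions that pass some but not all test cases, which matches the abstract's claim of capturing "the granularity of accuracy."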

Large Language Models: A Comprehensive Guide for Radiologists

  • 김선규;이충근;김승섭
    • Journal of the Korean Society of Radiology / Vol. 85, No. 5 / pp.861-882 / 2024
  • Large language models have brought about global innovation, not only in natural language processing but across nearly every sector of the technology industry and into daily life. Thanks to extensive pre-training on vast datasets, modern large language models can perform not only general tasks but also tasks in specialized domains such as medical imaging. Vendors are announcing version updates and new model releases at a rapid pace, and many of the problems and limitations pointed out early on are being resolved one by one. In addition, moving away from the early scaling-up trajectory, the concept of smaller, on-premises, open-source large language models has recently drawn attention, helping to address issues such as fine-tuning on specialized medical knowledge, improving training efficiency, resolving privacy concerns, and managing performance variability. This review aims to provide radiology professionals who wish to use large language models with an integrated account of the conceptual knowledge and practical guidance for the relevant technology, along with the current technical landscape and future directions.

A Proposal of Evaluation of Large Language Models Built Based on Research Data

  • 한나은;서수정;엄정호
    • Journal of the Korean Society for Information Management / Vol. 40, No. 3 / pp.77-98 / 2023
  • Focusing on the data quality of large language models that use research data as their main pre-training data, such as LLaMA and LLaMA-based models, this study analyzes current evaluation criteria and proposes quality evaluation criteria from a research data perspective. To this end, quality evaluation is discussed around validity, functionality, and reliability among the data quality factors, and the LLaMA, Alpaca, Vicuna, and ChatGPT models are compared to understand the characteristics and limitations of large language models. To analyze the evaluation criteria of currently widely used large language models, the criteria are examined with a focus on the Holistic Evaluation of Language Models (HELM), and their limitations are discussed. On this basis, the study presents quality evaluation criteria for large language models that use research data as their main pre-training data and discusses future development directions; its significance lies in providing a knowledge base for the advancement of large language models.

Instruction Fine-tuning and LoRA Combined Approach for Optimizing Large Language Models

  • 김상국;노경란;한혁;최붕기
    • Journal of Society of Korea Industrial and Systems Engineering / Vol. 47, No. 2 / pp.134-146 / 2024
  • This study introduces and experimentally validates a novel approach that combines Instruction fine-tuning and Low-Rank Adaptation (LoRA) fine-tuning to optimize the performance of Large Language Models (LLMs). These models have become revolutionary tools in natural language processing, showing remarkable performance across diverse application areas. However, optimizing their performance for specific domains necessitates fine-tuning of the base models (FMs), which is often limited by challenges such as data complexity and resource costs. The proposed approach aims to overcome these limitations by enhancing the performance of LLMs, particularly in the analysis precision and efficiency of national Research and Development (R&D) data. The study provides theoretical foundations and technical implementations of Instruction fine-tuning and LoRA fine-tuning. Through rigorous experimental validation, it is demonstrated that the proposed method significantly improves the precision and efficiency of data analysis, outperforming traditional fine-tuning methods. This enhancement is not only beneficial for national R&D data but also suggests potential applicability in various other data-centric domains, such as medical data analysis, financial forecasting, and educational assessments. The findings highlight the method's broad utility and significant contribution to advancing data analysis techniques in specialized knowledge domains, offering new possibilities for leveraging LLMs in complex and resource-intensive tasks. This research underscores the transformative potential of combining Instruction fine-tuning with LoRA fine-tuning to achieve superior performance in diverse applications, paving the way for more efficient and effective utilization of LLMs in both academic and industrial settings.
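As a rough illustration of the LoRA side of this approach (not the paper's implementation), the sketch below shows the core idea in plain Python: the base weight matrix W stays frozen while only two low-rank factors B and A are trained, so the effective weight is W + (α/r)·BA and the trainable parameter count falls from d·k to r·(d + k). All shapes and values are toy assumptions.

```python
def matmul(X, Y):
    # Plain-Python matrix multiply for the toy example below.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, k, r = 4, 4, 1      # full weight dims and LoRA rank (toy values)
alpha = 2.0            # LoRA scaling factor

# Frozen base weight (identity here, purely for illustration).
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]

# Trainable low-rank factors: B (d x r) starts at zero, A (r x k)
# would normally be randomly initialized.
B = [[0.0] for _ in range(d)]
A = [[0.1, 0.2, 0.3, 0.4]]

# Effective weight: W + (alpha / r) * B @ A. With B = 0 the adapted
# model initially matches the frozen base model exactly.
delta = matmul(B, A)
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(k)]
         for i in range(d)]

full_params = d * k        # parameters a full fine-tune would update
lora_params = r * (d + k)  # parameters LoRA actually trains
```

Even in this toy case the trainable count drops from 16 to 8; at realistic dimensions (e.g. d = k = 4096, r = 8) the reduction is what makes fine-tuning feasible on modest hardware, which is the resource-cost limitation the abstract targets.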

Technical Trends in On-device Small Language Model Technology Development

  • 김근용;윤기하;김량수;류지형;김성창
    • Electronics and Telecommunications Trends / Vol. 39, No. 4 / pp.82-92 / 2024
  • This paper introduces the technological development trends in on-device SLMs (Small Language Models). Large Language Models (LLMs) based on the transformer model have gained global attention with the emergence of ChatGPT, providing detailed and sophisticated responses across various knowledge domains, thereby increasing their impact across society. While major global tech companies are continuously announcing new LLMs or enhancing their capabilities, the development of SLMs, which are lightweight versions of LLMs, is intensely progressing. SLMs have the advantage of being able to run as on-device AI on smartphones or edge devices with limited memory and computing resources, enabling their application in various fields from a commercialization perspective. This paper examines the technical features for developing SLMs, lightweight technologies, semiconductor technology development trends for on-device AI, and potential applications across various industries.