• Title/Summary/Keyword: Large Language Models (LLM)

The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions

  • Sangzin Ahn
    • The Korean Journal of Physiology and Pharmacology / v.28 no.5 / pp.393-401 / 2024
  • Large language models (LLMs) are rapidly transforming medical writing and publishing. This review article focuses on experimental evidence to provide a comprehensive overview of the current applications, challenges, and future implications of LLMs in various stages of academic research and publishing process. Global surveys reveal a high prevalence of LLM usage in scientific writing, with both potential benefits and challenges associated with its adoption. LLMs have been successfully applied in literature search, research design, writing assistance, quality assessment, citation generation, and data analysis. LLMs have also been used in peer review and publication processes, including manuscript screening, generating review comments, and identifying potential biases. To ensure the integrity and quality of scholarly work in the era of LLM-assisted research, responsible artificial intelligence (AI) use is crucial. Researchers should prioritize verifying the accuracy and reliability of AI-generated content, maintain transparency in the use of LLMs, and develop collaborative human-AI workflows. Reviewers should focus on higher-order reviewing skills and be aware of the potential use of LLMs in manuscripts. Editorial offices should develop clear policies and guidelines on AI use and foster open dialogue within the academic community. Future directions include addressing the limitations and biases of current LLMs, exploring innovative applications, and continuously updating policies and practices in response to technological advancements. Collaborative efforts among stakeholders are necessary to harness the transformative potential of LLMs while maintaining the integrity of medical writing and publishing.

Predicting Steel Structure Product Weight Ratios using Large Language Model-Based Neural Networks (대형 언어 모델 기반 신경망을 활용한 강구조물 부재 중량비 예측)

  • Jong-Hyeok Park;Sang-Hyun Yoo;Soo-Hee Han;Kyeong-Jun Kim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.119-126 / 2024
  • In building information model (BIM), it is difficult to train an artificial intelligence (AI) model due to the lack of sufficient data about individual projects in an architecture firm. In this paper, we present a methodology to correctly train an AI neural network model based on a large language model (LLM) to predict the steel structure product weight ratios in BIM. The proposed method, with the aid of the LLM, can overcome the inherent problem of limited data availability in BIM and handle a combination of natural language and numerical data. The experimental results showed that the proposed method demonstrated significantly higher accuracy than methods based on a smaller language model. The potential for effectively applying large language models in BIM is confirmed, leading to expectations of preventing building accidents and efficiently managing construction costs.
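The abstract above describes predicting steel member weight ratios from mixed natural-language and numeric BIM data with a language-model backbone. A minimal sketch of one way such a model could be wired up is given below; the encoder checkpoint, the feature layout, and the regression head size are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch, not the authors' architecture: encode a textual BIM member
# description with a pretrained encoder and regress the weight ratio jointly
# with a few numeric features. Checkpoint name and feature layout are assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class WeightRatioRegressor(nn.Module):
    def __init__(self, encoder_name="klue/bert-base", n_numeric=4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden + n_numeric, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, input_ids, attention_mask, numeric_feats):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        text_vec = out.last_hidden_state[:, 0]           # [CLS] token embedding
        features = torch.cat([text_vec, numeric_feats], dim=-1)
        return self.head(features).squeeze(-1)           # predicted weight ratio

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
batch = tokenizer(["H-beam column, SS275 steel, 6 m span"],
                  return_tensors="pt", padding=True, truncation=True)
model = WeightRatioRegressor()
pred = model(batch["input_ids"], batch["attention_mask"],
             torch.tensor([[6.0, 0.3, 0.3, 275.0]]))     # assumed numeric features
```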

Application of ChatGPT text extraction model in analyzing rhetorical principles of COVID-19 pandemic information on a question-and-answer community

  • Hyunwoo Moon;Beom Jun Bae;Sangwon Bae
    • International journal of advanced smart convergence / v.13 no.2 / pp.205-213 / 2024
  • This study uses a large language model (LLM) to identify Aristotle's rhetorical principles (ethos, pathos, and logos) in COVID-19 information on Naver Knowledge-iN, South Korea's leading question-and-answer community. The research analyzed the differences of these rhetorical elements in the most upvoted answers with random answers. A total of 193 answer pairs were randomly selected, with 135 pairs for training and 58 for testing. These answers were then coded in line with the rhetorical principles to refine GPT 3.5-based models. The models achieved F1 scores of .88 (ethos), .81 (pathos), and .69 (logos). Subsequent analysis of 128 new answer pairs revealed that logos, particularly factual information and logical reasoning, was more frequently used in the most upvoted answers than the random answers, whereas there were no differences in ethos and pathos between the answer groups. The results suggest that health information consumers value information including logos while ethos and pathos were not associated with consumers' preference for health information. By utilizing an LLM for the analysis of persuasive content, which has been typically conducted manually with much labor and time, this study not only demonstrates the feasibility of using an LLM for latent content but also contributes to expanding the horizon in the field of AI text extraction.
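The study refines GPT-3.5-based models on manually coded answers to detect ethos, pathos, and logos. The sketch below shows one plausible way such coded answers could be formatted for an OpenAI fine-tuning job; the prompt wording, label encoding, and toy records are assumptions rather than the study's actual pipeline.

```python
# Illustrative sketch only, not the study's pipeline: format manually coded
# answers as chat examples and launch a GPT-3.5 fine-tuning job. The prompt
# wording, label encoding, and toy records are assumptions.
import json
from openai import OpenAI

LABELS = ["ethos", "pathos", "logos"]

coded_training_answers = [   # toy stand-ins for the 135 manually coded pairs
    {"text": "As a physician, I can confirm that masks reduce transmission.",
     "labels": {"ethos": True}},
    {"text": "Trials showed vaccination cut severe cases by about 90%.",
     "labels": {"logos": True}},
]

def to_example(answer_text, present):
    """One JSONL record: the model learns to name the principles that appear."""
    labels = ",".join(l for l in LABELS if present.get(l)) or "none"
    return {"messages": [
        {"role": "system",
         "content": "Label the rhetorical principles (ethos, pathos, logos) used in the answer."},
        {"role": "user", "content": answer_text},
        {"role": "assistant", "content": labels},
    ]}

with open("train_pairs.jsonl", "w", encoding="utf-8") as f:
    for row in coded_training_answers:
        f.write(json.dumps(to_example(row["text"], row["labels"]), ensure_ascii=False) + "\n")

client = OpenAI()
train_file = client.files.create(file=open("train_pairs.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=train_file.id, model="gpt-3.5-turbo")
```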

LLaMA2 Models with Feedback for Improving Document-Grounded Dialogue System (피드백 기법을 이용한 LLama2 모델 기반의 Zero-Shot 문서 그라운딩된 대화 시스템 성능 개선)

  • Min-Kyo Jung;Beomseok Hong;Wonseok Choi;Youngsub Han;Byoung-Ki Jeon;Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.275-280 / 2023
  • We propose a methodology for improving the response performance of a document-grounded dialogue system. Zero-shot in-context learning is applied to Llama2, a pretrained large language model (LLM), to perform the task of generating a response to the last user question in a dialogue. In the proposed approach, an initial response is generated with reference to the top-1 retrieved document and the dialogue history, the retrieved documents are then re-ranked based on this initial response, and a final response is generated from the documents at the top of the new ranking. We compared the proposed method against a baseline that generates responses directly from the top retrieved documents. The results confirm that, in experiments based on retrieved results, the proposed method improves F1, BLEU, ROUGE, and METEOR scores over the baseline.
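A minimal sketch of the feedback loop described in this abstract (initial response from the top-1 document, re-ranking by the initial response, final generation from the re-ranked documents) follows; the `llm` and `embed` callables stand in for the Llama2 generator and a sentence encoder and are assumptions, not the authors' code.

```python
# A minimal sketch of the feedback loop, assuming `llm(prompt)` wraps Llama2
# generation, `embed(text)` returns a sentence embedding, and `retrieved_docs`
# come from an external retriever; none of this is the authors' code.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def respond(dialogue_history, retrieved_docs, llm, embed, k=3):
    # 1) zero-shot initial response grounded on the top-1 retrieved document
    initial = llm(f"Document:\n{retrieved_docs[0]}\n\nDialogue:\n{dialogue_history}\nAnswer:")
    # 2) re-rank the retrieved documents by similarity to the initial response
    query_vec = embed(initial)
    ranked = sorted(retrieved_docs, key=lambda d: cosine(embed(d), query_vec), reverse=True)
    # 3) generate the final response from the top-k re-ranked documents
    context = "\n\n".join(ranked[:k])
    return llm(f"Documents:\n{context}\n\nDialogue:\n{dialogue_history}\nAnswer:")
```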

Technical Trends in On-device Small Language Model Technology Development (온디바이스 소형언어모델 기술개발 동향)

  • G. Kim;K. Yoon;R. Kim;J. H. Ryu;S. C. Kim
    • Electronics and Telecommunications Trends / v.39 no.4 / pp.82-92 / 2024
  • This paper introduces the technological development trends in on-device SLMs (Small Language Models). Large Language Models (LLMs) based on the transformer model have gained global attention with the emergence of ChatGPT, providing detailed and sophisticated responses across various knowledge domains, thereby increasing their impact across society. While major global tech companies are continuously announcing new LLMs or enhancing their capabilities, the development of SLMs, which are lightweight versions of LLMs, is intensely progressing. SLMs have the advantage of being able to run as on-device AI on smartphones or edge devices with limited memory and computing resources, enabling their application in various fields from a commercialization perspective. This paper examines the technical features for developing SLMs, lightweight technologies, semiconductor technology development trends for on-device AI, and potential applications across various industries.

Inducing Harmful Speech in Large Language Models through Korean Malicious Prompt Injection Attacks (한국어 악성 프롬프트 주입 공격을 통한 거대 언어 모델의 유해 표현 유도)

  • Ji-Min Suh;Jin-Woo Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.3 / pp.451-461 / 2024
  • Recently, various AI chatbots based on large language models have been released. Chatbots have the advantage of providing users with quick and easy information through interactive prompts, making them useful in various fields such as question answering, writing, and programming. However, a vulnerability in chatbots called "prompt injection attacks" has been proposed. This attack involves injecting instructions into the chatbot to violate predefined guidelines. Such attacks can be critical as they may lead to the leakage of confidential information within large language models or trigger other malicious activities. However, the vulnerability of Korean prompts has not been adequately validated. Therefore, in this paper, we aim to generate malicious Korean prompts and perform attacks on the popular chatbot to analyze their feasibility. To achieve this, we propose a system that automatically generates malicious Korean prompts by analyzing existing prompt injection attacks. Specifically, we focus on generating malicious prompts that induce harmful expressions from large language models and validate their effectiveness in practice.

Instruction Fine-tuning and LoRA Combined Approach for Optimizing Large Language Models (대규모 언어 모델의 최적화를 위한 지시형 미세 조정과 LoRA 결합 접근법)

  • Sang-Gook Kim;Kyungran Noh;Hyuk Hahn;Boong Kee Choi
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.2 / pp.134-146 / 2024
  • This study introduces and experimentally validates a novel approach that combines Instruction fine-tuning and Low-Rank Adaptation (LoRA) fine-tuning to optimize the performance of Large Language Models (LLMs). These models have become revolutionary tools in natural language processing, showing remarkable performance across diverse application areas. However, optimizing their performance for specific domains necessitates fine-tuning of the base models (FMs), which is often limited by challenges such as data complexity and resource costs. The proposed approach aims to overcome these limitations by enhancing the performance of LLMs, particularly in the analysis precision and efficiency of national Research and Development (R&D) data. The study provides theoretical foundations and technical implementations of Instruction fine-tuning and LoRA fine-tuning. Through rigorous experimental validation, it is demonstrated that the proposed method significantly improves the precision and efficiency of data analysis, outperforming traditional fine-tuning methods. This enhancement is not only beneficial for national R&D data but also suggests potential applicability in various other data-centric domains, such as medical data analysis, financial forecasting, and educational assessments. The findings highlight the method's broad utility and significant contribution to advancing data analysis techniques in specialized knowledge domains, offering new possibilities for leveraging LLMs in complex and resource-intensive tasks. This research underscores the transformative potential of combining Instruction fine-tuning with LoRA fine-tuning to achieve superior performance in diverse applications, paving the way for more efficient and effective utilization of LLMs in both academic and industrial settings.
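A minimal sketch of combining instruction-style supervised fine-tuning with LoRA, in the spirit of the approach described above, is shown below using Hugging Face PEFT; the base checkpoint, LoRA rank, target modules, and the single toy instruction example are assumptions rather than the paper's configuration.

```python
# A minimal sketch of instruction-style supervised fine-tuning combined with
# LoRA via Hugging Face PEFT; the base checkpoint, LoRA rank, target modules,
# and the toy instruction example are assumptions, not the paper's setup.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"            # assumed base foundation model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)          # only the low-rank adapters train

def format_example(ex):                      # instruction/response template
    text = f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}"
    enc = tok(text, truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()
    return enc

data = Dataset.from_list([{"instruction": "Summarize this national R&D project description.",
                           "output": "The project develops ..."}]).map(format_example)

Trainer(model=model,
        args=TrainingArguments("lora-out", per_device_train_batch_size=1,
                               num_train_epochs=1, learning_rate=2e-4),
        train_dataset=data).train()
```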

Proposal for the Utilization and Refinement Techniques of LLMs for Automated Research Generation (관련 연구 자동 생성을 위한 LLM의 활용 및 정제 기법 제안)

  • Seung-min Choi;Yu-chul Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.4 / pp.275-287 / 2024
  • Research on the integration of Knowledge Graphs (KGs) and Language Models (LMs) has been consistently explored over the years. However, studies focusing on the automatic generation of text using the structured knowledge from KGs have not been as widely developed. In this study, we propose a methodology for automatically generating specific domain-related research items (Related Work) at a level comparable to existing papers. This methodology involves: 1) selecting optimal prompts, 2) extracting triples through a four-step refinement process, 3) constructing a knowledge graph, and 4) automatically generating related research. The proposed approach utilizes GPT-4, one of the large language models (LLMs), and is designed to automatically generate related research by applying the four-step refinement process. The model demonstrated performance metrics of 17.3, 14.1, and 4.2 in triple extraction across #Supp, #Cont, and Fluency, respectively. According to the GPT-4 automatic evaluation criteria, the model's performance improved from 88.5 points before refinement to 96.5 points after refinement out of 100, indicating a significant capability to automatically generate related research at a level similar to that of existing papers.
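Steps 2 and 3 of the pipeline above (triple extraction and knowledge-graph construction) could look roughly like the sketch below; the prompt wording, the assumption that GPT-4 returns clean JSON, and the networkx representation are illustrative, not the paper's implementation.

```python
# A hedged sketch of steps 2-3 (triple extraction and knowledge-graph
# construction); the prompt wording, the assumption that GPT-4 returns clean
# JSON, and the networkx representation are illustrative, not the paper's code.
import json
import networkx as nx
from openai import OpenAI

client = OpenAI()

def extract_triples(text):
    """Ask GPT-4 for (subject, relation, object) triples as a JSON list."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Extract (subject, relation, object) triples from the text "
                              "and return only a JSON list of 3-element lists.\n\n" + text}],
    )
    return json.loads(resp.choices[0].message.content)  # assumes well-formed JSON

kg = nx.MultiDiGraph()
for s, r, o in extract_triples("LoRA adapts large language models with low-rank updates."):
    kg.add_edge(s, o, relation=r)            # entities as nodes, relations on edges
```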

Evaluation of Large Language Models' Korean-Text to SQL Capability (대형 언어 모델의 한국어 Text-to-SQL 변환 능력 평가)

  • Jooyoung Choi;Kyungkoo Min;Myoseop Sim;Haemin Jung;Minjun Park;Stanley Jungkyu Choi
    • Annual Conference on Human and Language Technology / 2023.10a / pp.171-176 / 2023
  • Recently introduced natural language generation models pretrained on large-scale data have shown impressive performance in dialogue and code generation tasks. In this paper, we evaluate the ability of large language models (LLMs) to convert Korean questions into SQL queries (Text-to-SQL). We first built a Korean Text-to-SQL dataset by translating the English questions of an English Text-to-SQL benchmark into Korean. We evaluate large generative models (GPT-3 davinci, GPT-3 turbo) in a few-shot setting and confirm that LLMs achieve competitive Korean Text-to-SQL performance without fine-tuning. We also conduct an error analysis of the issues that arise when converting Korean sentences into database queries and suggest possible solutions using prompting techniques.
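A few-shot Text-to-SQL prompt of the kind evaluated above can be assembled roughly as in the sketch below; the schema, the example question/SQL pairs, and the model name are illustrative assumptions (the paper evaluated GPT-3 davinci and turbo on a translated benchmark).

```python
# Illustrative few-shot prompt construction for Korean Text-to-SQL; the schema,
# the example question/SQL pairs, and the model name are assumptions (the paper
# evaluated GPT-3 davinci and turbo on a translated benchmark).
from openai import OpenAI

client = OpenAI()

SCHEMA = "Table employees(id, name, department, salary)"
FEW_SHOT = [
    ("급여가 5000 이상인 직원의 이름을 보여줘",
     "SELECT name FROM employees WHERE salary >= 5000;"),
    ("부서별 직원 수를 세어줘",
     "SELECT department, COUNT(*) FROM employees GROUP BY department;"),
]

def build_prompt(question):
    shots = "\n\n".join(f"Q: {q}\nSQL: {s}" for q, s in FEW_SHOT)
    return f"{SCHEMA}\n\n{shots}\n\nQ: {question}\nSQL:"

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": build_prompt("이름이 '김'으로 시작하는 직원을 찾아줘")}],
)
print(resp.choices[0].message.content)
```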

Comparing the performance of Supervised Fine-tuning, Reinforcement Learning, and Chain-of-Hindsight with Llama and OPT models (Llama, OPT 모델을 활용한 Supervised Fine Tuning, Reinforcement Learning, Chain-of-Hindsight 성능 비교)

  • Hyeon Min Lee;Seung Hoon Na;Joon Ho Lim;Tae Hyeong Kim;Hwi Jung Ryu;Du Seong Chang
    • Annual Conference on Human and Language Technology / 2023.10a / pp.217-221 / 2023
  • In recent years, advances in large language models (LLMs) have driven major breakthroughs in artificial intelligence research. These models show outstanding performance on complex natural language processing tasks. In particular, language models aligned with human preferences using Supervised Fine-Tuning, Reinforcement Learning, and Chain-of-Hindsight have attracted attention. In this paper, we apply these three instruction-learning methods, Supervised Fine-Tuning, Reinforcement Learning, and Chain-of-Hindsight, to Llama and OPT models and measure and compare their performance.
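Of the three methods compared above, Chain-of-Hindsight is the least familiar, so a hedged sketch of how its training text is commonly formatted is given below; the feedback phrasing and the example follow the general Chain-of-Hindsight idea of chaining preferred and dispreferred responses with natural-language feedback and are assumptions, not this paper's exact setup.

```python
# A hedged sketch of Chain-of-Hindsight-style training text: a preferred and a
# dispreferred response are chained behind natural-language feedback so the model
# learns from both. Wording and example are assumptions, not this paper's setup.
def chain_of_hindsight_example(prompt, good, bad):
    return (f"{prompt}\n"
            f"A helpful answer: {good}\n"
            f"An unhelpful answer: {bad}")

sample = chain_of_hindsight_example(
    prompt="Explain overfitting in one sentence.",
    good="Overfitting is when a model memorizes its training data and fails to generalize.",
    bad="Overfitting means the model is too small.",
)
# The formatted string is then used for standard causal-LM fine-tuning, typically
# with the loss masked so it is computed only on the response spans.
print(sample)
```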
