• Title/Abstract/Keywords: Language Models

Search results: 868 items

초거대 언어모델과 수학추론 연구 동향 (Research Trends in Large Language Models and Mathematical Reasoning)

  • 권오욱;신종훈;서영애;임수종;허정;이기영
    • 전자통신동향분석 / Vol. 38, No. 6 / pp. 1-11 / 2023
  • Large language models seem promising for handling reasoning problems, but their underlying solving mechanisms remain unclear. Large language models will establish a new paradigm in artificial intelligence and in society as a whole. However, a major challenge with large language models is the massive amount of resources required for training and operation. To address this issue, researchers are actively exploring compact large language models that retain the capabilities of large language models while notably reducing the model size. These research efforts are mainly focused on improving pretraining, instruction tuning, and alignment. On the other hand, chain-of-thought prompting is a technique aimed at enhancing the reasoning ability of large language models: given a problem, it elicits an answer through a series of intermediate reasoning steps. By guiding the model through a multistep problem-solving process, chain-of-thought prompting may improve the model's reasoning skills. Mathematical reasoning, a fundamental aspect of human intelligence, has played a crucial role in advancing large language models toward human-level performance. As a result, mathematical reasoning is being widely explored in the context of large language models, with research extending to domains such as geometry problem solving, tabular mathematical reasoning, visual question answering, and other areas.
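
The chain-of-thought technique described above is easy to illustrate. Below is a minimal sketch (not from the surveyed work), assuming a generic text-generation API behind a placeholder `call_llm` function: a worked exemplar with explicit intermediate steps is prepended to the question so the model imitates the step-by-step style before stating its answer.

```python
# Minimal chain-of-thought prompting sketch for a math word problem.
# The few-shot exemplar spells out intermediate steps so the model
# imitates them; `call_llm` is a placeholder for any generation API.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(question: str) -> str:
    """Prepend a worked example so the model answers step by step."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError

if __name__ == "__main__":
    print(cot_prompt("A library lends 17 books on Monday and 25 on "
                     "Tuesday. How many books were lent in total?"))
```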

Towards a small language model powered chain-of-reasoning for open-domain question answering

  • Jihyeon Roh;Minho Kim;Kyoungman Bae
    • ETRI Journal / Vol. 46, No. 1 / pp. 11-21 / 2024
  • We focus on open-domain question-answering tasks that involve a chain-of-reasoning, which are primarily implemented using large language models. With an emphasis on cost-effectiveness, we designed EffiChainQA, an architecture centered on the use of small language models. We employed a retrieval-based language model to address the limitations of large language models, such as the hallucination issue and the lack of updated knowledge. To enhance reasoning capabilities, we introduced a question decomposer that leverages a generative language model and serves as a key component in the chain-of-reasoning process. To generate training data for our question decomposer, we leveraged ChatGPT, which is known for its data augmentation ability. Comprehensive experiments were conducted using the HotpotQA dataset. Our method outperformed several established approaches, including the Chain-of-Thought approach, which is based on large language models. Moreover, our results are on par with those of state-of-the-art Retrieve-then-Read methods that utilize large language models.
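
The chain-of-reasoning flow described in this abstract can be sketched as a simple pipeline. The following is a hedged illustration with hypothetical function names (`decompose`, `retrieve`, and `read` stand in for the paper's components, whose actual interfaces are not given here): the decomposer emits sub-questions, answers from earlier hops are substituted into later ones, and the final hop's answer is returned.

```python
# Hedged sketch of a small-LM chain-of-reasoning QA pipeline.
# All three components are stubs; names and interfaces are illustrative.

from typing import List

def decompose(question: str) -> List[str]:
    """Small generative LM (trained, e.g., on ChatGPT-augmented data)
    that emits sub-questions; stubbed here."""
    raise NotImplementedError

def retrieve(query: str) -> List[str]:
    """Retrieval component returning supporting passages."""
    raise NotImplementedError

def read(query: str, passages: List[str]) -> str:
    """Small reader LM that extracts an answer from passages."""
    raise NotImplementedError

def answer(question: str) -> str:
    hop_answers: List[str] = []
    for sub_q in decompose(question):
        # Substitute answers from earlier hops (e.g., "#1") into later ones.
        for i, prev in enumerate(hop_answers, 1):
            sub_q = sub_q.replace(f"#{i}", prev)
        hop_answers.append(read(sub_q, retrieve(sub_q)))
    return hop_answers[-1]
```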

거대언어모델 기반 로봇 인공지능 기술 동향 (Technical Trends in Artificial Intelligence for Robotics Based on Large Language Models)

  • 이준기;박상준;김낙우;김에덴;고석갑
    • 전자통신동향분석 / Vol. 39, No. 1 / pp. 95-105 / 2024
  • In natural language processing, large language models such as GPT-4 have recently been in the spotlight. The performance of natural language processing has advanced dramatically, driven by increases in model size and in the number of input tokens a model can accept. Research on multimodal models that can simultaneously process natural language and image data is being actively conducted. Moreover, the natural-language and image-based reasoning capabilities of large language models are being explored in robot artificial intelligence technology. We discuss research and related patent trends in robot task planning and code generation for robot control using large language models.
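
As a rough illustration of LLM-based task planning for robot control (an assumption-laden sketch, not taken from the surveyed papers), the prompt below constrains the model to a small fixed skill API so its output can be parsed and executed as a plan:

```python
# Illustrative LLM robot-planning prompt. The skill names are
# hypothetical; the point is constraining generation to a parseable API.

SKILLS = ["move_to(location)", "pick(object)", "place(object, location)"]

def planning_prompt(instruction: str) -> str:
    skill_list = "\n".join(f"- {s}" for s in SKILLS)
    return ("You control a robot arm. Respond only with one skill call "
            f"per line, chosen from:\n{skill_list}\n\n"
            f"Task: {instruction}\nPlan:")

def parse_plan(llm_output: str) -> list[str]:
    """Keep only lines that look like calls to known skills."""
    names = tuple(s.split("(")[0] for s in SKILLS)
    return [ln.strip() for ln in llm_output.splitlines()
            if ln.strip().startswith(names)]

print(planning_prompt("Put the red block in the bin."))
```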

정보검색에서의 언어모델 적용에 관한 분석 (An Analysis of the Applications of the Language Models for Information Retrieval)

  • 김희섭;정영미
    • 한국도서관정보학회지 / Vol. 36, No. 2 / pp. 49-68 / 2005
  • The purpose of this study is to survey research trends in the application of language models to information retrieval and to analyze the results of prior studies in this field. The prior studies were analyzed in two groups: (1) first-generation language-modeling information retrieval (LMIR), which focused on experiments comparing the retrieval performance of traditional models with that of language-modeling retrieval, and (2) second-generation LMIR, which focused on identifying superior extension techniques by comparing the performance of basic language-modeling retrieval with that of extended language models. Analysis of the experimental results of these studies shows, first, that language-modeling information retrieval outperforms probabilistic-model and vector-model retrieval and, second, that the extended language models outperform basic language-modeling retrieval.

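The basic language-modeling retrieval approach compared in this study scores each document by the likelihood of the query under a unigram model of that document, smoothed against the collection. A minimal sketch (tokenization and the λ value are illustrative):

```python
# Query-likelihood retrieval with Jelinek-Mercer smoothing:
# P(w|D) = lam * P_ml(w|D) + (1 - lam) * P(w|C).

import math
from collections import Counter

def score(query, doc_tokens, collection_counts, collection_len, lam=0.5):
    """Return log P(Q|D); assumes every query term occurs somewhere
    in the collection (otherwise P(w|C) would be zero)."""
    doc_counts = Counter(doc_tokens)
    doc_len = len(doc_tokens)
    log_p = 0.0
    for w in query.split():
        p_doc = doc_counts[w] / doc_len if doc_len else 0.0
        p_col = collection_counts[w] / collection_len
        log_p += math.log(lam * p_doc + (1 - lam) * p_col)
    return log_p

docs = {"d1": "language models for retrieval".split(),
        "d2": "vector space retrieval".split()}
coll = Counter(w for toks in docs.values() for w in toks)
coll_len = sum(coll.values())
ranked = sorted(docs, reverse=True,
                key=lambda d: score("language retrieval", docs[d],
                                    coll, coll_len))
print(ranked)  # ['d1', 'd2']: d1 matches both query terms
```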

생성형 거대언어모델의 의학 적용 현황과 방향 - 동아시아 의학을 중심으로 - (Current Status and Direction of Generative Large Language Model Applications in Medicine - Focusing on East Asian Medicine -)

  • 강봉수;이상연;배효진;김창업
    • 동의생리병리학회지 / Vol. 38, No. 2 / pp. 49-58 / 2024
  • The rapid advancement of generative large language models has revolutionized various real-life domains, emphasizing the importance of exploring their applications in healthcare. This study examines how generative large language models are being applied in the medical domain, with the specific objective of exploring the possibility and potential of integrating generative large language models with East Asian medicine. Through a comprehensive analysis of the current state, we identified limitations in the deployment of generative large language models within East Asian medicine and proposed directions for future research. Our findings highlight the essential need for accumulating and generating structured data to improve the capabilities of generative large language models in East Asian medicine. We also address the issue of hallucination and the need for a robust model-evaluation framework. Despite these challenges, the application of generative large language models in East Asian medicine has demonstrated promising results. Techniques such as model augmentation, multimodal structures, and knowledge distillation have the potential to significantly enhance accuracy, efficiency, and accessibility. In conclusion, we expect generative large language models to play a pivotal role in facilitating precise diagnostics and personalized treatment in clinical fields and in fostering innovation in education and research within East Asian medicine.

Comparative study of text representation and learning for Persian named entity recognition

  • Pour, Mohammad Mahdi Abdollah;Momtazi, Saeedeh
    • ETRI Journal / Vol. 44, No. 5 / pp. 794-804 / 2022
  • Transformer models have had a great impact on natural language processing (NLP) in recent years by realizing outstanding and efficient contextualized language models. Recent studies have used transformer-based language models for various NLP tasks, including Persian named entity recognition (NER). However, in complex tasks such as NER, it is difficult to determine which contextualized embedding will produce the best representation for the task. Given the lack of comparative studies investigating the use of different contextualized pretrained models with sequence-modeling classifiers, we conducted a comparative study of different classifiers and embedding models. In this paper, we use different transformer-based language models tuned with different classifiers, and we evaluate these models on the Persian NER task. We perform a comparative analysis to assess the impact of text representation and text classification methods on Persian NER performance. We train and evaluate the models on three different Persian NER datasets, that is, MoNa, Peyma, and Arman. Experimental results demonstrate that XLM-R with a linear layer and a conditional random field (CRF) layer exhibited the best performance. This model achieved phrase-based F-measures of 70.04, 86.37, and 79.25 and word-based F-measures of 78, 84.02, and 89.73 on the MoNa, Peyma, and Arman datasets, respectively. These results represent state-of-the-art performance on the Persian NER task.
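
The best-performing configuration reported above, an XLM-R encoder with a linear layer and a CRF, can be sketched as follows. This is an illustration rather than the authors' released code, and it assumes the `transformers` and `pytorch-crf` packages:

```python
from torch import nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

class XlmrCrfTagger(nn.Module):
    """XLM-R encoder -> linear emission scores -> CRF decoding."""

    def __init__(self, num_tags: int,
                 encoder_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.linear = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(
            input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.linear(hidden)  # per-token tag scores
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask)
        # Inference: Viterbi decoding of the best tag sequence.
        return self.crf.decode(emissions, mask=mask)
```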

Transformer-based reranking for improving Korean morphological analysis systems

  • Jihee Ryu;Soojong Lim;Oh-Woog Kwon;Seung-Hoon Na
    • ETRI Journal / Vol. 46, No. 1 / pp. 137-153 / 2024
  • This study introduces a new approach in Korean morphological analysis combining dictionary-based techniques with Transformer-based deep learning models. The key innovation is the use of a BERT-based reranking system, significantly enhancing the accuracy of traditional morphological analysis. The method generates multiple suboptimal paths, then employs BERT models for reranking, leveraging their advanced language comprehension. Results show remarkable performance improvements, with the first-stage reranking achieving over 20% improvement in error reduction rate compared with existing models. The second stage, using another BERT variant, further increases this improvement to over 30%. This indicates a significant leap in accuracy, validating the effectiveness of merging dictionary-based analysis with contemporary deep learning. The study suggests future exploration in refined integrations of dictionary and deep learning methods as well as using probabilistic models for enhanced morphological analysis. This hybrid approach sets a new benchmark in the field and offers insights for similar challenges in language processing applications.
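
The reranking step described above can be sketched as scoring each candidate analysis path against the input sentence with a BERT cross-encoder. The snippet below is illustrative (the checkpoint name is a stand-in, and a fine-tuned scoring head would be needed in practice):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NAME = "bert-base-multilingual-cased"  # stand-in; the paper uses Korean BERTs
tokenizer = AutoTokenizer.from_pretrained(NAME)
scorer = AutoModelForSequenceClassification.from_pretrained(NAME,
                                                            num_labels=1)

def rerank(sentence: str, candidate_paths: list[str]) -> str:
    """Score each (sentence, candidate-analysis) pair; return the best.

    The scorer must first be fine-tuned so that higher logits mean more
    plausible analyses; an off-the-shelf head is untrained.
    """
    inputs = tokenizer([sentence] * len(candidate_paths), candidate_paths,
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = scorer(**inputs).logits.squeeze(-1)
    return candidate_paths[int(torch.argmax(scores))]
```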

An XML-Based Modeling Language for the Open Trading of Decision Models

  • Kim, Hyoung-Do
    • 경영과학 / Vol. 17, No. 3 / pp. 147-160 / 2000
  • These days, a modeling tool or environment has to know about the other tools on the market and build bridges to those with which its customers insist on sharing models and data. When each tool is based on a closed architecture, a tangle of pairwise import/export translators is required. Using an exchange standard, we can instead design an open architecture for the interchange of models and data. XML (Extensible Markup Language) provides a framework for describing the syntax for creating and exchanging data structures, and the explosive growth of XML-based business proposals and standards reflects both the urgency of these requirements and the strength of XML. This paper proposes an XML-based language for sharing decision models within the MS/OR/DSS community. The language allows applications and on-line analytic processing tools to access models obtained from multiple sources without having to deal with the individual differences between those sources. It is expected to serve as a medium for B2B integration by supporting flexible interchange of decision models.

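The core idea, a declarative XML encoding that any tool can parse instead of pairwise translators, can be illustrated with a toy document. The element names below are hypothetical, since the paper's actual schema is not reproduced here:

```python
# Toy decision-model document built with the standard library.
# Element names are invented for illustration, not the paper's schema.

import xml.etree.ElementTree as ET

model = ET.Element("DecisionModel", name="product_mix")
variables = ET.SubElement(model, "Variables")
for v in ("x1", "x2"):
    ET.SubElement(variables, "Variable", name=v,
                  type="continuous", lower="0")
ET.SubElement(model, "Objective", sense="maximize",
              expression="3*x1 + 5*x2")
constraints = ET.SubElement(model, "Constraints")
ET.SubElement(constraints, "Constraint", expression="x1 + 2*x2 <= 14")

# Any consuming tool can now read variables, objective, and constraints
# from the serialized document without a tool-specific translator.
print(ET.tostring(model, encoding="unicode"))
```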

사전 학습된 한국어 BERT의 전이학습을 통한 한국어 기계독해 성능개선에 관한 연구 (A Study of Fine Tuning Pre-Trained Korean BERT for Question Answering Performance Development)

  • 이치훈;이연지;이동희
    • 한국IT서비스학회지 / Vol. 19, No. 5 / pp. 83-91 / 2020
  • Language models such as BERT have become an important component of deep learning-based natural language processing. Pre-training transformer-based language models is computationally expensive, since they consist of deep and wide attention-based architectures and require huge amounts of training data. Hence, it has become standard practice to fine-tune large pre-trained language models released by Google or other organizations that can afford the resources and cost. There are various techniques for fine-tuning such language models, and this paper examines three of them: data augmentation, hyperparameter tuning, and partial reconstruction of the neural network. For data augmentation, we use no-answer augmentation and back-translation. Useful combinations of hyperparameters are identified through a series of experiments. Finally, we add GRU and LSTM layers on top of the pre-trained BERT model to boost performance. By fine-tuning the pre-trained Korean language model with the methods above, we push the F1 score from the baseline up to 89.66. Moreover, some failed attempts provide important lessons and point to promising directions for further work.
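
The network-reconstruction technique mentioned above, adding a recurrent layer between the pre-trained encoder and the span-prediction head, can be sketched as follows (an illustration assuming the `transformers` package, not the authors' code):

```python
from torch import nn
from transformers import AutoModel

class BertGruQA(nn.Module):
    """BERT encoder -> bidirectional GRU -> start/end span logits."""

    def __init__(self, encoder_name: str = "bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.gru = nn.GRU(hidden, hidden // 2, batch_first=True,
                          bidirectional=True)  # output size is `hidden` again
        self.qa_outputs = nn.Linear(hidden, 2)  # start and end logits

    def forward(self, input_ids, attention_mask):
        seq = self.encoder(
            input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.gru(seq)  # recontextualize BERT's token states
        start_logits, end_logits = self.qa_outputs(seq).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```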

Chain-of-Thought와 Program-aided Language Models을 이용한 전제-가설-라벨 삼중항 자동 생성 (Generating Premise-Hypothesis-Label Triplet Using Chain-of-Thought and Program-aided Language Models)

  • 조희진;이창기;배경만
    • 한국정보과학회 언어공학연구회: 학술대회논문집(한글 및 한국어 정보처리) / 2023년도 제35회 한글 및 한국어 정보처리 학술대회 / pp. 352-357 / 2023
  • Natural language inference (NLI) is the task of understanding and inferring the relationship between two sentences (a premise and a hypothesis) and classifying it into one of three categories: entailment, contradiction, or neutral; NLI models are trained on premise-hypothesis-label (PHL) datasets. However, when applying natural language inference to a new domain, training data often does not exist, and building it requires considerable time and resources. In this paper, instead of the sentence-transformation rules proposed in [1], we propose a method for automatically generating premise-hypothesis-label triplets as NLI training data, using a large language model with prompting methods such as Chain-of-Thought (CoT) and Program-aided Language Models (PAL). Experimental results show that the data generated automatically with CoT and PAL prompting is of higher quality than data generated with the existing rules or with basic prompting.

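The triplet-generation approach can be illustrated with a toy prompt template. The wording below is hypothetical (the paper's prompts are in Korean and not reproduced), but it shows the CoT pattern: the model is asked to reason about what the premise entails, rules out, and leaves open before emitting one hypothesis per label:

```python
# Hypothetical CoT prompt for generating premise-hypothesis-label
# triplets, plus a parser that turns labeled output lines into rows.

def phl_cot_prompt(premise: str) -> str:
    return (
        "Premise: " + premise + "\n"
        "Think step by step about what the premise states, what it rules "
        "out,\nand what it leaves undetermined. Then write three "
        "hypotheses:\n"
        "entailment: <a sentence that must be true given the premise>\n"
        "contradiction: <a sentence that cannot be true given the premise>\n"
        "neutral: <a sentence the premise neither confirms nor denies>\n"
    )

def parse_triplets(premise: str, llm_output: str):
    """Turn the model's labeled lines into (premise, hypothesis, label)."""
    rows = []
    for line in llm_output.splitlines():
        for label in ("entailment", "contradiction", "neutral"):
            if line.lower().startswith(label + ":"):
                rows.append((premise, line.split(":", 1)[1].strip(), label))
    return rows

print(phl_cot_prompt("A man is playing a guitar on stage."))
```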