• Title/Summary/Keyword: Generative Model


A Study on Evaluating Summarization Performance using Generative AI Model (생성형 AI 모델을 활용한 요약 성능 평가 연구)

  • Gyuri Choi;Seoyoon Park;Yejee Kang;Hansaem Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.228-233 / 2023
  • Manual human evaluation involves unavoidable limitations, including the time and cost it consumes, disagreement between annotators, and the quality of the evaluation results. This paper examines whether evaluating Korean summaries with ChatGPT, which can take context into account and handle long inputs and outputs, can replace or assist human evaluation. To this end, summaries generated by ChatGPT were evaluated quantitatively and qualitatively, using BERTScore as the quantitative metric and consistency, relevance, grammaticality, and fluency as the qualitative criteria. The results show that ChatGPT-4 has the potential to assist manual human evaluation. Considering that ChatGPT is trained primarily on English, an additional evaluation was conducted on erroneous Korean summaries to verify its error-detection performance. The error-evaluation performance of ChatGPT-3.5 and ChatGPT-4 proved unstable, indicating that they are not yet reliable enough to assist humans in this respect. (A minimal BERTScore sketch follows this entry.)

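As a point of reference for the quantitative metric above, here is a minimal sketch of computing BERTScore with the open-source bert-score package; the example sentences and the lang="ko" setting are illustrative assumptions, not the authors' actual evaluation setup.

```python
# Minimal sketch of the quantitative metric used above (BERTScore), via the
# open-source `bert-score` package. The example sentences are hypothetical.
from bert_score import score

candidates = ["모델이 생성한 요약문 예시입니다."]      # hypothetical system summaries
references = ["사람이 작성한 참조 요약문 예시입니다."]  # hypothetical reference summaries

# lang="ko" lets bert-score pick a multilingual backbone suitable for Korean.
P, R, F1 = score(candidates, references, lang="ko", verbose=True)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```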

Generative Model Utilizing Multi-Level Attention for Persona-Grounded Long-Term Conversations (페르소나 기반의 장기 대화를 위한 다각적 어텐션을 활용한 생성 모델)

  • Bit-Na Keum;Hong-Jin Kim;Jin-Xia Huang;Oh-Woog Kwon;Hark-Soo Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.281-286 / 2023
  • To build more human-like dialogue models, research on generating responses with the help of persona memory has been actively pursued. Many existing studies use a separate retrieval model to find the relevant persona in memory, but this slows down the overall system and makes it heavier. In addition, prior work focuses only on how well the persona is reflected in the response, whereas the ability to decide whether the persona needs to be referenced at all should come first. Our proposed model therefore determines whether the persona memory should be consulted through the generative model's own internal computation, without a retrieval model. When a reference is judged necessary, the model responds by reflecting the relevant persona; otherwise, it generates a response focused on the dialogue context. Experimental results confirm that the proposed model works effectively in long-term conversations. (An illustrative gating sketch follows this entry.)

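The gating idea described above, deciding inside the generative model whether persona memory should be consulted, could be sketched roughly as follows in PyTorch. This is an illustrative assumption, not the authors' actual multi-level attention architecture; the module, dimensions, and names are hypothetical.

```python
# Illustrative PyTorch sketch (not the paper's architecture): a scalar gate,
# computed from the dialogue context, decides how much the persona memory
# contributes to the representation fed to the response decoder.
import torch
import torch.nn as nn

class PersonaGate(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, context: torch.Tensor, persona: torch.Tensor) -> torch.Tensor:
        # Attend from dialogue context tokens to persona memory entries.
        persona_view, _ = self.attn(context, persona, persona)
        # Gate in [0, 1]: near 0 ignores the persona, near 1 uses it fully.
        g = torch.sigmoid(self.gate(context.mean(dim=1, keepdim=True)))
        return context + g * persona_view

# Toy usage: batch of 2, 10 context tokens, 5 persona entries, hidden size 128.
fuse = PersonaGate(128)
out = fuse(torch.randn(2, 10, 128), torch.randn(2, 5, 128))
print(out.shape)  # torch.Size([2, 10, 128])
```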

Predictive Model for Real Estate Prices Using Sentiment Index of news articles based on Generative AI (생성 AI 기반 뉴스 기사 심리지수를 활용한 부동산 가격 예측 모델)

  • Kim Sua;Kwon Miju;Cho Soobin;Kim Eunsoo;Hyon Hee Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1198-1199 / 2023
  • Real estate prices are determined by a variety of factors and are influenced not only by macroeconomic variables but also by unstructured data such as news articles and social media. News articles in particular reflect the economic sentiment of the public and are therefore regarded as a variable that strongly affects real estate prices. In this study, we built a real estate price prediction model that yields more meaningful results than traditional approaches by applying fine-grained sentiment analysis to news articles, and we used generative AI to derive a sentiment index from those articles. The proposed sales price index prediction model helps clarify the relationship between the real estate market and news coverage and is expected to predict price movements that reflect social and economic trends. (A minimal sketch of deriving such an index follows this entry.)
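
A rough sketch of how a generative AI model might be used to score news sentiment and aggregate it into an index, as described above; the OpenAI client usage, model name, prompt, and 0-100 rescaling are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only (not the paper's pipeline): score each article's
# real-estate sentiment with a generative AI model and aggregate the scores
# into a monthly index. The OpenAI client usage, model name, prompt, and the
# 0-100 rescaling are all assumptions made for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def article_sentiment(text: str) -> int:
    """Ask the model for a score from -2 (very negative) to +2 (very positive)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "부동산 시장 관점에서 기사의 심리를 -2에서 2 사이의 정수로만 답하세요."},
            {"role": "user", "content": text},
        ],
    )
    return int(resp.choices[0].message.content.strip())

def monthly_sentiment_index(scores: list[int]) -> float:
    """Rescale the mean article score to a 0-100 index (illustrative convention)."""
    return 50.0 + 25.0 * (sum(scores) / len(scores))
```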

Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang;Joon Seo Lim;Ro Woon Lee;Yusuke Matsui;Toshihiro Iguchi;Takao Hiraki;Hyungwoo Ahn
    • Korean Journal of Radiology / v.24 no.10 / pp.952-959 / 2023
  • Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.

Best Practice on Automatic Toon Image Creation from JSON File of Message Sequence Diagram via Natural Language based Requirement Specifications

  • Hyuntae Kim;Ji Hoon Kong;Hyun Seung Son;R. Young Chul Kim
    • International journal of advanced smart convergence / v.13 no.1 / pp.99-107 / 2024
  • In AI image generation tools, most general users must craft an effective prompt to elicit the desired image from the model. As software engineers who focus on the software process, however, we work with informal and formal requirement specifications at the early stages of that process, and we adapt this natural language approach to requirement engineering and toon engineering. Most generative AI tools do not produce the same image for the same query, because the same data assets are not reused across queries. To address this, we use informal requirement engineering and linguistics to create a toon. We therefore propose a mechanism that analyzes and applies key objects and attributes from an informal natural-language requirement analysis: morphemes and semantic roles are identified through linguistic analysis, a message sequence diagram is generated from the results, and an image is generated from that diagram. Through the proposed mechanism, we expect consistent image generation based on the same image element assets. (A toy sketch of the requirement-to-diagram step follows this entry.)
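
To illustrate the general shape of the pipeline described above, here is a toy sketch that turns a simple requirement sentence into message-sequence-diagram JSON; the sentence pattern, field names, and rule-based parsing are hypothetical simplifications standing in for the paper's morpheme and semantic-role analysis.

```python
# Toy sketch: derive a message-sequence-diagram JSON from a requirement sentence.
# A crude subject-verb-object split stands in for the paper's morpheme and
# semantic-role analysis; all field names here are hypothetical.
import json
import re

def requirement_to_msd(sentence: str) -> dict:
    # Expect a pattern like "The <actor> <verb> <payload> to the <receiver>."
    m = re.match(r"The (\w+) (\w+) (.+?) to the (\w+)\.?", sentence)
    if not m:
        raise ValueError("unsupported sentence pattern in this toy example")
    actor, verb, payload, receiver = m.groups()
    return {
        "participants": [actor, receiver],
        "messages": [{"from": actor, "to": receiver, "action": verb, "payload": payload}],
    }

msd = requirement_to_msd("The customer sends an order request to the cashier.")
print(json.dumps(msd, indent=2))  # JSON that a diagram/toon generator could consume
```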

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.15-30 / 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have been gaining significant attention. The Transformer architecture has been applied to various neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers) sentence generation model. In this paper, a method is proposed to create a text binary classification model that determines whether a comment on a Korean movie review is positive or negative. To accomplish this, a pre-trained multilingual BERT sentence generation model is fine-tuned and transfer learned on a new Korean training dataset. Specifically, a pre-trained BERT-Base model for multilingual sentence generation covering 104 languages, with 12 layers, a hidden size of 768, 12 attention heads, and 110M parameters, is used. To turn the pre-trained BERT-Base model into a text classification model, the input and output layers are fine-tuned, resulting in a new model with 178 million parameters. Using the fine-tuned model with a maximum word count of 128, a batch size of 16, and 5 epochs, transfer learning is conducted on 10,000 training and 5,000 test examples, yielding a Korean movie-review sentiment binary classification model with an accuracy of 0.9582, a loss of 0.1177, and an F1 score of 0.81. Performing transfer learning on a dataset five times larger produces a model with an accuracy of 0.9562, a loss of 0.1202, and an F1 score of 0.86. (A minimal fine-tuning sketch follows this entry.)
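
A minimal Hugging Face Transformers sketch of the kind of setup described above: loading multilingual BERT-Base with a two-label classification head and tokenizing at a maximum length of 128. The dataset wiring and training loop are omitted, and the example comment is hypothetical.

```python
# Minimal sketch: multilingual BERT-Base with a binary classification head,
# tokenizing at max length 128, as in the setup described above.
# Dataset loading and the training loop are omitted; the example text is hypothetical.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

model_name = "bert-base-multilingual-cased"  # 104 languages, 12 layers, hidden size 768
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

batch = tokenizer(
    ["이 영화 정말 재미있어요."],          # hypothetical movie-review comment
    padding="max_length", truncation=True, max_length=128, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**batch).logits         # shape (1, 2): positive/negative scores
print(logits.softmax(dim=-1))
```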

A comparison of synthetic data approaches using utility and disclosure risk measures (유용성과 노출 위험성 지표를 이용한 재현자료 기법 비교 연구)

  • Seongbin An;Trang Doan;Juhee Lee;Jiwoo Kim;Yong Jae Kim;Yunji Kim;Changwon Yoon;Sungkyu Jung;Dongha Kim;Sunghoon Kwon;Hang J Kim;Jeongyoun Ahn;Cheolwoo Park
    • The Korean Journal of Applied Statistics / v.36 no.2 / pp.141-166 / 2023
  • This paper investigates synthetic data generation methods and their evaluation measures. There have been increasing demands for releasing various types of data to the public for different purposes. At the same time, there are also unavoidable concerns about leaking critical or sensitive information. Many synthetic data generation methods have been proposed over the years in order to address these concerns and implemented in some countries, including Korea. The current study aims to introduce and compare three representative synthetic data generation approaches: Sequential regression, nonparametric Bayesian multiple imputations, and deep generative models. Several evaluation metrics that measure the utility and disclosure risk of synthetic data are also reviewed. We provide empirical comparisons of the three synthetic data generation approaches with respect to various evaluation measures. The findings of this work will help practitioners to have a better understanding of the advantages and disadvantages of those synthetic data methods.
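
To give a concrete flavor of the utility measures reviewed above, here is a small sketch of one commonly used global utility metric, the propensity-score mean-squared error (pMSE), computed with scikit-learn; the choice of classifier and the random placeholder data are assumptions for illustration only.

```python
# Sketch of one global utility measure, propensity-score MSE (pMSE):
# fit a classifier to distinguish real from synthetic rows; predicted
# probabilities near the synthetic share c mean the datasets are hard to separate.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pmse(real: np.ndarray, synth: np.ndarray) -> float:
    X = np.vstack([real, synth])
    y = np.r_[np.zeros(len(real)), np.ones(len(synth))]   # 1 = synthetic
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    c = len(synth) / len(X)
    return float(np.mean((p - c) ** 2))                    # 0 means indistinguishable

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 3))     # placeholder "real" data
synth = rng.normal(size=(500, 3))    # placeholder "synthetic" data
print(pmse(real, synth))
```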

Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression (그라운드-롤 제거를 위한 CNN과 GAN 기반 딥러닝 모델 비교 분석)

  • Sangin Cho;Sukjoon Pyun
    • Geophysics and Geophysical Exploration / v.26 no.2 / pp.37-51 / 2023
  • The ground roll is the most common coherent noise in land seismic data and has an amplitude much larger than the reflection event we usually want to obtain. Therefore, ground roll suppression is a crucial step in seismic data processing. Several techniques, such as f-k filtering and curvelet transform, have been developed to suppress the ground roll. However, the existing methods still require improvements in suppression performance and efficiency. Various studies on the suppression of ground roll in seismic data have recently been conducted using deep learning methods developed for image processing. In this paper, we introduce three models (DnCNN (De-noiseCNN), pix2pix, and CycleGAN), based on convolutional neural network (CNN) or conditional generative adversarial network (cGAN), for ground roll suppression and explain them in detail through numerical examples. Common shot gathers from the same field were divided into training and test datasets to compare the algorithms. We trained the models using the training data and evaluated their performances using the test data. When training these models with field data, ground roll removed data are required; therefore, the ground roll is suppressed by f-k filtering and used as the ground-truth data. To evaluate the performance of the deep learning models and compare the training results, we utilized quantitative indicators such as the correlation coefficient and structural similarity index measure (SSIM) based on the similarity to the ground-truth data. The DnCNN model exhibited the best performance, and we confirmed that other models could also be applied to suppress the ground roll.
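
The two quantitative indicators mentioned above could be computed roughly as follows with NumPy and scikit-image; the random arrays stand in for a predicted shot gather and its f-k-filtered ground truth and are assumptions for illustration.

```python
# Sketch: the two similarity indicators mentioned above, computed between a
# model output and its f-k-filtered ground truth (random arrays as stand-ins).
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
ground_truth = rng.normal(size=(256, 128))                # placeholder shot gather
prediction = ground_truth + 0.1 * rng.normal(size=(256, 128))

corr = np.corrcoef(ground_truth.ravel(), prediction.ravel())[0, 1]
ssim_val = ssim(ground_truth, prediction,
                data_range=prediction.max() - prediction.min())
print(f"correlation: {corr:.3f}, SSIM: {ssim_val:.3f}")
```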

Users' Attachment Styles and ChatGPT Interaction: Revealing Insights into User Experiences

  • I-Tsen Hsieh;Chang-Hoon Oh
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.21-41 / 2024
  • This study explores the relationship between users' attachment styles and their interactions with ChatGPT (Chat Generative Pre-trained Transformer), an advanced language model developed by OpenAI. As artificial intelligence (AI) becomes increasingly integrated into everyday life, it is essential to understand how individuals with different attachment styles engage with AI chatbots in order to build a better user experience that meets specific user needs and interacts with users in the most ideal way. Grounded in attachment theory from psychology, we explore the influence of attachment style on users' interaction with ChatGPT, bridging a significant gap in the understanding of human-AI interaction. Contrary to expectations, attachment styles did not have a significant impact on ChatGPT usage or reasons for engagement. Regardless of their attachment styles, users hesitated to fully trust ChatGPT with critical information, emphasizing the need to address trust issues in AI systems. Additionally, this study uncovers complex patterns of attachment styles, demonstrating their influence on interaction patterns between users and ChatGPT. By focusing on the distinctive dynamics between users and ChatGPT, we aim to uncover how attachment styles influence these interactions, guiding the development of AI chatbots for personalized user experiences. The introduction of the Perceived Partner Responsiveness Scale serves as a valuable tool to evaluate users' perceptions of ChatGPT's role, shedding light on the anthropomorphism of AI. This study contributes to the wider discussion on human-AI relationships, emphasizing the significance of incorporating emotional intelligence into AI systems for a user-centered future.

What Concerns Does ChatGPT Raise for Us?: An Analysis Centered on CTM (Correlated Topic Modeling) of YouTube Video News Comments (ChatGPT는 우리에게 어떤 우려를 초래하는가?: 유튜브 영상 뉴스 댓글의 CTM(Correlated Topic Modeling) 분석을 중심으로)

  • Song, Minho;Lee, Soobum
    • Informatization Policy / v.31 no.1 / pp.3-31 / 2024
  • This study aimed to examine public concerns in South Korea, considering the country's unique context, triggered by the advent of generative artificial intelligence such as ChatGPT. To achieve this, comments from 102 YouTube video news items related to ethical issues were collected using a Python scraper, and morphological analysis and preprocessing were carried out using Textom on 15,735 comments. These comments were then analyzed using a Correlated Topic Model (CTM). The analysis identified six primary topics within the comments: "Legal and Ethical Considerations"; "Intellectual Property and Technology"; "Technological Advancement and the Future of Humanity"; "Potential of AI in Information Processing"; "Emotional Intelligence and Ethical Regulations in AI"; and "Human Imitation." Structuring these topics based on a correlation coefficient value of over 10% revealed three main categories: "Legal and Ethical Considerations"; "Issues Related to Data Generation by ChatGPT" (Intellectual Property and Technology, Potential of AI in Information Processing, and Human Imitation); and "Fear for the Future of Humanity" (Technological Advancement and the Future of Humanity, and Emotional Intelligence and Ethical Regulations in AI). The study confirmed the coexistence of various concerns along with the growing interest in generative AI like ChatGPT, including worries specific to the historical and social context of South Korea. These findings suggest the need for national-level efforts to ensure data fairness. (A minimal topic-modeling sketch follows this entry.)
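
For orientation, here is a minimal sketch of fitting a correlated topic model on preprocessed comment tokens with the tomotopy library; the tiny toy corpus, k=6, and training settings are assumptions for illustration, not the study's Textom-based pipeline over 15,735 comments.

```python
# Minimal sketch: a Correlated Topic Model over tokenized comments with tomotopy.
# The toy corpus and the number of topics (k=6, matching the study) are
# illustrative assumptions only.
import tomotopy as tp

docs = [
    ["chatgpt", "저작권", "윤리", "규제"],
    ["인공지능", "일자리", "미래", "인간"],
    ["데이터", "정보", "처리", "chatgpt"],
]  # hypothetical tokenized comments

mdl = tp.CTModel(k=6, seed=42)
for tokens in docs:
    mdl.add_doc(tokens)

for _ in range(10):
    mdl.train(100)                       # 100 Gibbs-sampling iterations per step

for t in range(mdl.k):
    words = [w for w, _ in mdl.get_topic_words(t, top_n=5)]
    print(f"Topic {t}: {', '.join(words)}")
```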