• Title/Summary/Keyword: text-generation

Search Result: 358

Development of ChatGPT-based Medical Text Augmentation Tool for Synthetic Text Generation (합성 텍스트 생성을 위한 ChatGPT 기반 의료 텍스트 증강 도구 개발)

  • Jin-Woo Kong;Gi-Youn Kim;Yu-Seop Kim;Byoung-Doo Oh
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.3-4 / 2023
  • Natural language processing has great potential: it can extract meaningful information and patterns from the unstructured data of electronic medical records, in which vast amounts of information are collected, supporting clinicians' decision-making and enabling better diagnosis and treatment for patients. However, electronic medical records contain a great deal of sensitive information, such as personal data, which makes them difficult to access and therefore makes it difficult to secure a sufficient amount of data. In this paper, we therefore developed a ChatGPT-based medical text augmentation tool to generate reliable synthetic medical text. The tool generates synthetic medical data from real medical text entered by the user. To this end, we explored suitable prompts and preprocessing methods for medical text. We confirmed that the ChatGPT-based medical text augmentation tool preserved the key keywords of the input text well and could generate fact-based synthetic medical text.

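A minimal sketch of the kind of augmentation call the abstract describes, i.e. prompting ChatGPT with a real (de-identified) note to obtain synthetic variants. The client library version, model name, prompt wording, and function names are assumptions for illustration; the paper's actual prompts and preprocessing are not published here.

```python
# Illustrative sketch only: openai>=1.0 Python client, "gpt-3.5-turbo" model
# name, and the prompt text are assumptions, not the paper's configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def augment_medical_text(source_text: str, n_variants: int = 3) -> list[str]:
    """Generate paraphrased synthetic variants of a de-identified clinical note."""
    prompt = (
        "Rewrite the following de-identified clinical note as a synthetic note. "
        "Keep all medically relevant keywords and facts, but change the wording:\n\n"
        f"{source_text}"
    )
    variants = []
    for _ in range(n_variants):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        )
        variants.append(response.choices[0].message.content)
    return variants
```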

Automatic Weblog Generation from Mobile Context using Bayesian Network and Petri Net (베이지안 네트워크와 페트리넷을 이용한 모바일 상황정보로부터의 블로그 자동 생성)

  • Lee, Young-Seol;Cho, Sung-Bae
    • Journal of KIISE: Computing Practices and Letters / v.16 no.4 / pp.467-471 / 2010
  • The weblog is one of the most widespread web services. Its content typically covers daily events and emotions. If personal information is collected with mobile devices and turned into a weblog, users can create their own weblogs easily. Several researchers have already developed systems that create weblogs in a mobile environment. In this paper, the user's activity is inferred from personal information on the mobile device. The inferred activities and a story-generation engine are used to generate the text for a weblog. Finally, the text, photographs, and the user's movement on Google Maps are integrated into a weblog.
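The sketch below illustrates only the overall pipeline the abstract outlines (map mobile context to an inferred activity, then render it as weblog text). The context variables, lookup table, and sentence template are invented for illustration; the paper's Bayesian network and Petri net models are not reproduced.

```python
# Toy stand-in for activity inference: P(activity | location, time_of_day)
# reduced to a lookup table. All entries are invented examples.
ACTIVITY_TABLE = {
    ("restaurant", "evening"): "having dinner",
    ("office", "morning"): "working",
    ("park", "afternoon"): "taking a walk",
}

def infer_activity(location: str, time_of_day: str) -> str:
    return ACTIVITY_TABLE.get((location, time_of_day), "spending time")

def generate_entry(location: str, time_of_day: str) -> str:
    activity = infer_activity(location, time_of_day)
    return f"In the {time_of_day}, I was at the {location}, {activity}."

print(generate_entry("park", "afternoon"))
# -> "In the afternoon, I was at the park, taking a walk."
```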

Prosodic Contour Generation for Korean Text-To-Speech System Using Artificial Neural Networks

  • Lim, Un-Cheon
    • The Journal of the Acoustical Society of Korea / v.28 no.2E / pp.43-50 / 2009
  • To get more natural synthetic speech from a Korean TTS (Text-To-Speech) system, we have to know all the possible prosodic rules of spoken Korean. These rules should be derived from linguistic and phonetic information or from real speech. In general, all of these rules are integrated into a prosody-generation algorithm in a TTS system. But such an algorithm cannot cover all the possible prosodic rules of a language and is not perfect, so the naturalness of the synthesized speech is not as good as we expect. ANNs (Artificial Neural Networks) can be trained to learn the prosodic rules of spoken Korean. To train and test the ANNs, we need to prepare the prosodic patterns of all the phonemic segments in a prosodic corpus. A prosodic corpus includes meaningful sentences that represent all the possible prosodic rules. Sentences in the corpus were made by picking series of words from a list of PB (phonetically balanced) isolated words. These sentences were read by speakers, recorded, and collected as a speech database. By analyzing the recorded speech, we can extract the prosodic pattern of each phoneme and assign these patterns as target and test patterns for the ANNs. The ANNs learn the prosody from natural speech and generate the prosodic pattern of the central phonemic segment in a phoneme string as the output response when the phoneme string of a sentence is given as the input stimulus.
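A minimal PyTorch sketch of the kind of ANN the abstract describes: a network mapping a phoneme context window to the prosodic pattern (e.g. F0 and duration) of the central phoneme. The layer sizes, context width, and feature encoding are assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn

N_PHONEMES = 60   # assumed size of the phoneme inventory (one-hot encoded)
CONTEXT = 5       # central phoneme plus two neighbours on each side
PROSODY_DIM = 2   # e.g. [log F0, duration] for the central phoneme

model = nn.Sequential(
    nn.Linear(N_PHONEMES * CONTEXT, 128),
    nn.Tanh(),
    nn.Linear(128, PROSODY_DIM),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One illustrative training step on random stand-in data.
x = torch.randn(32, N_PHONEMES * CONTEXT)   # encoded phoneme strings
y = torch.randn(32, PROSODY_DIM)            # prosodic targets from real speech
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```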

Deep Learning-based Text Summarization Model for Explainable Personalized Movie Recommendation Service (설명 가능한 개인화 영화 추천 서비스를 위한 딥러닝 기반 텍스트 요약 모델)

  • Chen, Biyao;Kang, KyungMo;Kim, JaeKyeong
    • Journal of Information Technology Services / v.21 no.2 / pp.109-126 / 2022
  • The number and variety of products and services offered by companies have increased dramatically, giving customers more choices to meet their needs. As a solution to this information-overload problem, providing services tailored to individuals has become increasingly important, and personalized recommender systems have been widely studied and used in both academia and industry. Existing recommender systems face an important problem in practical applications: they cannot clearly explain why they recommend particular items. In recent years, researchers have found that explanations from recommender systems can be very useful; when users are told why particular items are recommended, conversion rates, satisfaction, and trust in the recommender system generally increase. Therefore, this study presents a methodology for adding an explanatory function to a recommender system using the review text left by users. The basic idea is not to use all of a user's reviews, but to present, in summarized form, only the reviews left by similar users (the neighbors involved in recommending the item) as the explanation accompanying each recommended item. To achieve this goal, the study produces a recommendation list with user-based collaborative filtering, combines the reviews that neighboring users left for each product, and builds a model that applies a deep learning-based text summarization method to them. Using the IMDb movie database, the text reviews that the target user's neighbors wrote for each movie are collected and summarized to present descriptions of the recommended movies. Among the various text summarization methods, this study evaluates whether the review summaries are well produced by training the sequence-to-sequence + attention model, a representative abstractive summarization method, and the BertSum model, an extractive summarization model.
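A minimal sketch of the user-based collaborative filtering step the abstract mentions: find the target user's nearest neighbors by cosine similarity of rating vectors, then score unseen items from the neighbors' ratings. The toy rating matrix and neighborhood size are illustrative, not the paper's setup.

```python
import numpy as np

ratings = np.array([          # rows = users, columns = movies, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(target: int, k: int = 1) -> np.ndarray:
    sims = np.array([cosine_sim(ratings[target], ratings[u])
                     for u in range(len(ratings))])
    sims[target] = -1.0                        # exclude the user themselves
    neighbors = np.argsort(sims)[-k:]          # top-k most similar users
    scores = ratings[neighbors].mean(axis=0)   # neighbors' average ratings
    scores[ratings[target] > 0] = -1.0         # drop already-rated items
    return np.argsort(scores)[::-1]            # item indices, best first

print(recommend(target=0))
# The neighbors' reviews of the top-ranked item would then be summarized
# and shown as the explanation for that recommendation.
```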

Text summarization of dialogue based on BERT

  • Nam, Wongyung;Lee, Jisoo;Jang, Beakcheol
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.41-47 / 2022
  • In this paper, we propose how to produce text summaries for colloquial data that are not clearly organized. For this study, the SAMSum dataset, which consists of colloquial data, was used, and the BERTSumExtAbs model proposed in previous work on automatic summarization was applied. More than 70% of the SAMSum dataset consists of conversations between two people, and the remaining 30% consists of conversations among three or more people. As a result of applying the automatic text summarization model to the colloquial data, a ROUGE-1 score of 42.43 or higher was obtained. In addition, a higher score of 45.81 was obtained by fine-tuning the BERTSum model, which had previously been proposed as a text summarization model. This study demonstrates the performance of abstractive summarization on colloquial data, and we hope it will serve as basic groundwork for tasks in which computers understand human natural language as it is.
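A minimal sketch of the ROUGE evaluation reported above, using the rouge-score package. The reference and generated summaries are invented stand-ins; only the metric computation is shown.

```python
# pip install rouge-score
from rouge_score import rouge_scorer

reference = "Amanda baked cookies and will bring Jerry some tomorrow."
generated = "Amanda will bring Jerry some of the cookies she baked tomorrow."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, generated)

print(scores["rouge1"].fmeasure)  # R-1 F-score, comparable to the 42.43 reported above
```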

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee;Sangwoo Kang
    • Korean Journal of Cognitive Science / v.34 no.3 / pp.197-226 / 2023
  • Abstractive text summarization is a natural language processing task that generates a short summary while preserving the content of a long source text. ROUGE, a lexical-overlap-based metric, is widely used to evaluate text summarization models in abstractive summarization benchmarks. Although models score very highly on it, studies report that about 30% of generated summaries are still inconsistent with the source text. This paper proposes a methodology for evaluating the performance of a summarization model without using a gold-standard summary. AggreFact is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among all the evaluated settings, the two cases of abstractive summaries and of errors occurring throughout the summary showed the highest correlation. We also observed that the proposed evaluation score correlated highly for models fine-tuned from BART and PEGASUS, which are pretrained with large-scale Transformer architectures.
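A minimal sketch of the kind of meta-evaluation the abstract describes: correlating a reference-free evaluation score against human error annotations. The scores and labels below are invented stand-ins; the paper's actual metric and the AggreFact annotations are not reproduced.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical reference-free scores for five generated summaries
metric_scores = [0.91, 0.40, 0.75, 0.22, 0.68]
# Hypothetical human judgements (1 = factually consistent, 0 = inconsistent)
human_labels = [1, 0, 1, 0, 1]

print(pearsonr(metric_scores, human_labels))   # linear correlation
print(spearmanr(metric_scores, human_labels))  # rank correlation
```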

Studies on the Character of Silkworm, Bombyx mori L., Which Bred from Double Copulation. (About the effect of copulating time and sperm activity in the double copulating) (동품종 교배와 이품종 교배를 교번한 이중교배의 차대잠 형질에 관한 연구(II) (교미시간과 정자의 활동성이 이중교배에 미치는 영향))

  • 김윤식
    • Journal of Sericultural and Entomological Science / v.6 / pp.9-17 / 1966
  • The ratio of forms and characters in the next generation of silkworms that were double copulated, alternating homo-race copulation and hetero-race copulation by crossing one female with two males of different races (double crossing), differs according to the copulating time, the copulating order, and sperm activity. The general tendencies are as follows. 1. During two hours of double copulation, a sufficient ejaculating time, the fertilization percentage of hetero-race copulation is higher than that of homo-race copulation; however, double copulation with plain and normal-marked silkworms showed the opposite result, with the fertilization percentage of homo-race copulation equal to or higher than that of hetero-race copulation. 2. The forms and characters of the next generation were largely affected by copulating order, so the first-copulating moths affected the next generation more than the second-copulating moths. 3. Active sperm fertilized more eggs than non-active sperm in double copulation.


Video Captioning with Visual and Semantic Features

  • Lee, Sujin;Kim, Incheol
    • Journal of Information Processing Systems / v.14 no.6 / pp.1318-1330 / 2018
  • Video captioning refers to the process of extracting features from a video and generating video captions using the extracted features. This paper introduces a deep neural network model and its learning method for effective video captioning. In this study, semantic features that effectively express the video are used in addition to visual features. The visual features of the video are extracted using convolutional neural networks, such as C3D and ResNet, while the semantic features are extracted using a semantic feature extraction network proposed in this paper. Further, an attention-based caption generation network is proposed for effective generation of video captions using the extracted features. The performance and effectiveness of the proposed model are verified through various experiments using two large-scale video benchmarks, the Microsoft Video Description (MSVD) and the Microsoft Research Video-To-Text (MSR-VTT) datasets.
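A minimal sketch of the attention step in an attention-based caption decoder: at each word position, frame-level visual features are weighted by their relevance to the decoder's hidden state. The feature dimensions are assumptions, and the paper's semantic-feature branch is omitted.

```python
import torch
import torch.nn.functional as F

T, D_VIS, D_HID = 20, 2048, 512       # frames, visual dim (e.g. ResNet), hidden dim

visual_feats = torch.randn(T, D_VIS)  # per-frame CNN features of one video
hidden = torch.randn(D_HID)           # decoder hidden state at one time step

proj = torch.nn.Linear(D_VIS, D_HID)  # project visual features into the hidden space
keys = proj(visual_feats)             # (T, D_HID)

scores = keys @ hidden                # (T,) relevance of each frame
alpha = F.softmax(scores, dim=0)      # attention weights over frames
context = alpha @ visual_feats        # (D_VIS,) weighted visual context

# `context` would be concatenated with the previous word embedding and fed to
# the decoder RNN to predict the next caption word.
```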

Subtitle Automatic Generation System using Speech to Text (음성인식을 이용한 자막 자동생성 시스템)

  • Son, Won-Seob;Kim, Eung-Kon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.1 / pp.81-88 / 2021
  • Recently, many videos, such as the online lecture videos prompted by COVID-19, have been produced. However, due to limited working hours and cost, only a fraction of these videos have subtitles, which is emerging as an obstacle to the acquisition of information by the deaf. In this paper, we develop a system that automatically generates subtitles using speech recognition and splits them into sentence-level captions using sentence endings and timing, in order to reduce the time and labor required for subtitle generation.
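A minimal sketch of the subtitle-generation step: turning timed transcript segments, as a speech-to-text engine would return them, into an SRT file. The segments below are invented; the paper's speech recognizer and its sentence-splitting rules are not reproduced.

```python
# Invented example segments: (start seconds, end seconds, recognized text)
segments = [
    (0.0, 2.4, "Hello, everyone."),
    (2.4, 6.1, "Today we will cover automatic subtitle generation."),
]

def srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

with open("lecture.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(segments, start=1):
        f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")
```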