• Title/Summary/Keyword: text generation

Search Results: 353

Using Text Network Analysis for Analyzing Academic Papers in Nursing (간호학 학술논문의 주제 분석을 위한 텍스트네트워크분석방법 활용)

  • Park, Chan Sook
    • Perspectives in Nursing Science
    • /
    • v.16 no.1
    • /
    • pp.12-24
    • /
    • 2019
  • Purpose: This study examined the suitability of using text network analysis (TNA) methodology for topic analysis of academic papers related to nursing. Methods: TNA background theories, software programs, and research processes are described in this paper. Additionally, the research methodology of applying TNA to the topic analysis of academic nursing papers was analyzed. Results: As background theories for the study, we explained information theory, word co-occurrence analysis, graph theory, network theory, and social network analysis. The TNA procedure was described as follows: 1) collection of academic articles, 2) text extraction, 3) preprocessing, 4) generation of word co-occurrence matrices, 5) social network analysis, and 6) interpretation and discussion. Conclusion: TNA using author-keywords has several advantages. It can utilize recognized terms such as MeSH headings or terms chosen by professionals, and it saves time and effort. Additionally, the study emphasizes the necessity of developing a sophisticated research design that explores nursing research trends in a multidimensional way by applying TNA methodology.
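
The co-occurrence and network-analysis steps of the TNA procedure above (steps 4-6) can be illustrated with a short Python sketch using the networkx library; the keyword lists below are invented placeholders, not data from the paper.

```python
# Minimal sketch of TNA steps 4-6: co-occurrence counting, network analysis,
# and interpretation. Keyword lists are hypothetical stand-ins for the
# author keywords of nursing articles.
from itertools import combinations
from collections import Counter

import networkx as nx

# Each inner list stands for the author keywords of one article (hypothetical).
articles = [
    ["nursing education", "simulation", "self-efficacy"],
    ["nursing education", "clinical practice", "self-efficacy"],
    ["simulation", "clinical practice", "patient safety"],
]

# Step 4: word co-occurrence counts (keywords appearing in the same article).
co_occurrence = Counter()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        co_occurrence[(a, b)] += 1

# Step 5: social network analysis on the co-occurrence graph.
graph = nx.Graph()
for (a, b), weight in co_occurrence.items():
    graph.add_edge(a, b, weight=weight)

degree = nx.degree_centrality(graph)
betweenness = nx.betweenness_centrality(graph, weight="weight")

# Step 6: interpretation, e.g. ranking keywords by centrality.
for word in sorted(degree, key=degree.get, reverse=True):
    print(f"{word}: degree={degree[word]:.2f}, betweenness={betweenness[word]:.2f}")
```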

Design of Image Generation System for DCGAN-Based Kids' Book Text

  • Cho, Jaehyeon;Moon, Nammee
    • Journal of Information Processing Systems
    • /
    • v.16 no.6
    • /
    • pp.1437-1446
    • /
    • 2020
  • For the last few years, smart devices have begun to occupy an essential place in children's lives, giving them access to a variety of language activities and books. Various studies are being conducted on using smart devices for education. Our study extracts images and text from kids' books with smart devices and matches the extracted images and text to create new images that are not represented in these books. The proposed system will enable the use of smart devices as educational media for children. A deep convolutional generative adversarial network (DCGAN) is used for generating a new image. Three steps are involved in training the DCGAN. First, 1,164 ImageNet images covering 11 titles are learned. Second, Tesseract, an optical character recognition engine, is used to extract images and text from the kids' books, and the text is classified using a morpheme analyzer. Third, the classified word class is matched with the latent vector of the image. The learned DCGAN creates an image associated with the text.
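
The image-generation component described above is a standard DCGAN; the sketch below shows a generator of that family in PyTorch. The 100-dimensional latent vector, layer widths, and 64x64 output are common DCGAN defaults assumed for illustration, not values reported in the paper.

```python
# Minimal DCGAN generator sketch: maps a latent vector (which the paper matches
# with word classes from the OCR'd book text) to an image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, feature_maps=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector z -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            nn.ConvTranspose2d(feature_maps, channels, 4, 2, 1, bias=False),
            nn.Tanh(),  # 64x64 RGB image scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# A random z stands in here for a text-conditioned latent vector.
z = torch.randn(1, 100, 1, 1)
image = Generator()(z)  # shape: (1, 3, 64, 64)
```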

A Study on Exceptional Pronunciations For Automatic Korean Pronunciation Generator (한국어 자동 발음열 생성 시스템을 위한 예외 발음 연구)

  • Kim Sunhee
    • MALSORI
    • /
    • no.48
    • /
    • pp.57-67
    • /
    • 2003
  • This paper presents a systematic description of exceptional pronunciations for automatic Korean pronunciation generation. An automatic pronunciation generator in Korean is an essential part of a Korean speech recognition system and a TTS (Text-To-Speech) system. It is composed of a set of regular rules and an exceptional pronunciation dictionary. The exceptional pronunciation dictionary is created by extracting words that have exceptional pronunciations, based on the characteristics of such words identified through phonological research and a systematic analysis of the entries of Korean dictionaries. Thus, the method contributes to improving the performance of the automatic pronunciation generator in Korean, as well as that of Korean speech recognition and TTS systems.
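
The generator architecture described above (regular rules plus an exceptional pronunciation dictionary) can be sketched as a simple lookup-then-rules pipeline; the dictionary entries and the placeholder rule below are illustrative assumptions, not the paper's data.

```python
# Minimal sketch: consult the exception dictionary first, then fall back to the
# regular pronunciation rules. Entries and rules are hypothetical placeholders.

# Words whose pronunciation cannot be derived by the regular rules.
EXCEPTION_DICT = {
    "맛있다": "마싣따",  # hypothetical entry
    "밟다": "밥따",      # hypothetical entry
}

def apply_regular_rules(word: str) -> str:
    """Stand-in for the rule component; a real generator applies an ordered set
    of phonological rules (e.g. nasalization, tensification)."""
    return word  # placeholder: return the orthographic form unchanged

def generate_pronunciation(word: str) -> str:
    # 1) exception dictionary lookup, 2) fall back to regular rules.
    if word in EXCEPTION_DICT:
        return EXCEPTION_DICT[word]
    return apply_regular_rules(word)

print(generate_pronunciation("맛있다"))  # from the exception dictionary
print(generate_pronunciation("학교"))    # handled by the regular rules
```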

A Study on the Contour Machining of Text using CNC Laser Machine (CNC레이저 가공기를 이용한 활자체 가공에 관한 연구)

  • 구영회
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1999.10a
    • /
    • pp.554-559
    • /
    • 1999
  • The purpose of this study is the machining of text shapes from contour-fitting data. The hardware of the system comprises a PC, a scanning system, and a CO2 laser machine. There are four steps: (1) loading the text image from scanned shapes or 2D image files, (2) generation of contour-fitting data using lines, arcs, and cubic Bezier curves, (3) generation of NC code from the contour-fitting data, and (4) machining by the DNC system. A software package was developed that provides a micro CAM system for the CNC laser machine on a PC without an economic burden.
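
Step (3), generating NC code from contour-fitting data, can be sketched as follows; the segment coordinates, feed rate, and G-code dialect (G01/G02 moves) are assumptions for illustration, and Bezier segments would first be flattened into lines and arcs.

```python
# Minimal sketch: turn contour-fitting data (line and arc segments) into NC code.
# Segments: ("line", x, y) or ("arc", x, y, i, j) with I/J as arc-center offsets;
# the values below are hypothetical, approximating one character outline.
contour = [
    ("line", 10.0, 0.0),
    ("arc", 20.0, 10.0, 0.0, 10.0),
    ("line", 20.0, 30.0),
]

def contour_to_nc(segments, feed=300):
    lines = ["G21 ; metric units", "G90 ; absolute coordinates", "G00 X0 Y0"]
    lines.append("M03 ; laser on")
    for seg in segments:
        if seg[0] == "line":
            _, x, y = seg
            lines.append(f"G01 X{x:.3f} Y{y:.3f} F{feed}")
        elif seg[0] == "arc":
            _, x, y, i, j = seg
            lines.append(f"G02 X{x:.3f} Y{y:.3f} I{i:.3f} J{j:.3f} F{feed}")
    lines.append("M05 ; laser off")
    return "\n".join(lines)

print(contour_to_nc(contour))  # NC program ready to send to the DNC system
```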

Instruction Tuning for Controlled Text Generation in Korean Language Model (Instruction Tuning을 통한 한국어 언어 모델 문장 생성 제어)

  • Jinhee Jang;Daeryong Seo;Donghyeon Jeon;Inho Kang;Seung-Hoon Na
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.289-294
    • /
    • 2023
  • Large language models, built on vast data and parameters, have achieved strong performance in contextual understanding, but controlling sentence generation for human alignment remains an active research challenge. In this paper, we conduct sentence-generation control experiments via instruction tuning. Using natural language processing tools, we automatically build an instruction dataset containing single or multiple constraints, fine-tune the Korean language model Polyglot-Ko on it, and verify whether the model's generations satisfy the constraints. Experimental results show an average accuracy of 0.88 over four constraint types, confirming that effective control of sentence generation is possible.
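
A minimal sketch of the approach, assuming a HuggingFace Transformers setup: one automatically built constraint-bearing instruction example and a Polyglot-Ko checkpoint prepared for causal-LM fine-tuning. The checkpoint name, prompt template, and constraint wording are illustrative assumptions, not the paper's actual data.

```python
# Minimal sketch: a constraint-bearing instruction example plus a Polyglot-Ko
# model ready for standard causal-LM fine-tuning (labels = input_ids).
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical constraints: "write exactly two sentences", "include the word '학교' (school)".
constraints = ["문장을 두 문장으로 작성할 것", "'학교'라는 단어를 포함할 것"]
instruction = (
    "다음 제약 조건을 만족하는 문장을 생성하세요.\n"  # "Generate sentences satisfying these constraints."
    + "\n".join(f"- {c}" for c in constraints)
)
target = "나는 오늘 학교에 갔다. 친구들과 함께 공부했다."
example = f"### 지시문:\n{instruction}\n\n### 응답:\n{target}"

# A publicly released Polyglot-Ko base model; the exact checkpoint used in the
# paper is an assumption here.
model_name = "EleutherAI/polyglot-ko-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

batch = tokenizer(example, return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # one fine-tuning step's loss
```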

Text-To-Vision Player - Converting Text to Vision Based on TVML Technology -

  • Hayashi, Masaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.799-802
    • /
    • 2009
  • We have been studying the next generation of video creation solutions based on TVML (TV program Making Language) technology. TVML is a well-known scripting language for computer animation, and a TVML Player interprets the script to create video content using real-time 3DCG and synthesized voices. TVML has a long history, having been proposed by NHK back in 1996; however, for years the only available player has been the one made by NHK. We have developed a new TVML Player from scratch and named it T2V (Text-To-Vision) Player. Because it was developed from scratch, the code is compact, light, fast, extensible, and portable. Moreover, the new T2V Player not only plays back TVML scripts but also performs Text-To-Vision conversion of input written in XML format, or even plain text, into video by using 'Text-filters' that can be added as plug-ins to the Player. We plan to release it as freeware in early 2009 in order to stimulate user-generated content and various kinds of services on the Internet and in the media industry. We believe that our T2V Player will be a key technology for this upcoming new movement.
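
The plug-in 'Text-filter' idea can be sketched as a function that maps plain text to a player script; the script lines below are schematic and do not reproduce the actual TVML command syntax.

```python
# Minimal sketch of a plug-in text filter: take plain text and emit a schematic,
# TVML-like dialogue script for the player to render.

def plain_text_filter(text: str, character: str = "PRESENTER") -> str:
    """Convert plain text into a schematic dialogue script, one line per sentence."""
    lines = [f"// generated from plain text, spoken by {character}"]
    for sentence in filter(None, (s.strip() for s in text.split("."))):
        lines.append(f'talk(name={character}, text="{sentence}.")')
    return "\n".join(lines)

# A T2V-style player would register such filters as plug-ins and select one
# according to the input format (plain text, XML, ...).
FILTERS = {"text/plain": plain_text_filter}

script = FILTERS["text/plain"]("TVML is a scripting language. It creates video with 3DCG")
print(script)
```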
