• Title/Summary/Keyword: Text to Image Synthesis

Text Augmentation Using Hierarchy-based Word Replacement

  • Kim, Museong;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information / v.26 no.1 / pp.57-67 / 2021
  • Recently, multi-modal deep learning techniques that combine heterogeneous data for analysis have been widely adopted. In particular, text-to-image synthesis, which automatically generates images from text, is being actively studied. Deep learning for image synthesis requires a vast amount of data consisting of pairs of images and text describing them, so various data augmentation techniques have been devised to generate a large amount of data from small datasets. A number of text augmentation techniques based on synonym replacement have been proposed, but they share a common limitation: replacing a noun with a synonym can produce text that no longer matches the content of the image. In this study, we propose a text augmentation method that replaces noun words using word hierarchy information, as sketched below. Additionally, we performed experiments on the MSCOCO dataset in order to evaluate the performance of the proposed methodology.
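
To make the idea concrete, here is a minimal sketch of hierarchy-based noun replacement using WordNet hypernyms in place of synonyms. It illustrates the general approach, not the authors' exact implementation; the example sentence is invented, and NLTK resource names may vary slightly across versions.

```python
# Hierarchy-based word replacement: swap each noun for its WordNet hypernym
# (one level up the hierarchy) instead of a synonym, so the new word stays
# consistent with the image content. Illustrative sketch only.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def replace_nouns_with_hypernyms(sentence: str) -> str:
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    out = []
    for word, tag in tagged:
        if tag.startswith("NN"):
            synsets = wn.synsets(word, pos=wn.NOUN)
            if synsets and synsets[0].hypernyms():
                # climb one level in the noun hierarchy
                out.append(synsets[0].hypernyms()[0].lemma_names()[0].replace("_", " "))
                continue
        out.append(word)
    return " ".join(out)

print(replace_nouns_with_hypernyms("a dog sits on the grass near a tree"))
```

Because a hypernym generalizes rather than substitutes, the augmented caption is less likely to contradict the image than a synonym swap that picks the wrong word sense.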

Knowledge based Text to Facial Sequence Image System for Interaction of Lecturer and Learner in Cyber Universities (가상대학에서 교수자와 학습자간 상호작용을 위한 지식기반형 문자-얼굴동영상 변환 시스템)

  • Kim, Hyoung-Geun;Park, Chul-Ha
    • The KIPS Transactions:PartB / v.15B no.3 / pp.179-188 / 2008
  • In this paper, a knowledge-based text-to-facial-sequence-image system for interaction between lecturer and learner in cyber universities is studied. The system synthesizes a facial image sequence whose lip movements are synchronized with the text information, based on the grammatical characteristics of Hangul. To implement the system, we propose a method for transforming the text information into phoneme codes, deformation rules for the mouth shapes associated with each phoneme code, and a method for synthesizing the facial image sequence using those deformation rules. In the proposed method, all Hangul syllables are represented by 10 principal mouth shapes and 78 compound mouth shapes, according to the pronunciation characteristics of the basic consonants and vowels and the articulation rules; the phoneme decomposition step is sketched below. To synthesize the facial image sequence in real time on a PC, the 88 mouth shapes stored in a database are used instead of synthesizing a mouth shape for each frame. To verify the validity of the proposed method, various facial image sequences were synthesized from text information, and a PC-based system was implemented using the proposed method.
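
As a concrete illustration of the first step, the sketch below decomposes composed Hangul syllables into their initial, medial, and final phonemes using Unicode arithmetic; the romanized phoneme labels are placeholders, and the mapping from phonemes to the 88 stored mouth shapes is omitted.

```python
# Decompose a Hangul syllable (U+AC00..U+D7A3) into its initial consonant,
# vowel, and final consonant, which can then index mouth-shape entries.
CHOSEONG = ["g","kk","n","d","tt","r","m","b","pp","s","ss","","j","jj","ch","k","t","p","h"]
JUNGSEONG = ["a","ae","ya","yae","eo","e","yeo","ye","o","wa","wae","oe","yo",
             "u","wo","we","wi","yu","eu","ui","i"]
JONGSEONG = ["","g","kk","gs","n","nj","nh","d","r","rg","rm","rb","rs","rt","rp","rh",
             "m","b","bs","s","ss","ng","j","ch","k","t","p","h"]

def decompose(syllable: str):
    code = ord(syllable) - 0xAC00          # composed syllables start at U+AC00
    if not 0 <= code < 11172:
        return (syllable,)                 # pass through non-Hangul characters
    cho, rest = divmod(code, 21 * 28)      # 21 vowels x 28 finals per initial
    jung, jong = divmod(rest, 28)
    return CHOSEONG[cho], JUNGSEONG[jung], JONGSEONG[jong]

for s in "한글":
    print(s, decompose(s))                 # ('h','a','n') and ('g','eu','r')
```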

Synchronization of Synthetic Facial Image Sequences and Synthetic Speech for Virtual Reality (가상현실을 위한 합성얼굴 동영상과 합성음성의 동기구현)

  • 최장석;이기영
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.7 / pp.95-102 / 1998
  • This paper proposes a method for synchronizing synthetic facial image sequences with synthetic speech. LP-PSOLA synthesizes the speech for each demi-syllable, and 3,040 demi-syllables are provided for unlimited synthesis of Korean speech. For synthesis of the facial image sequences, the paper defines 11 fundamental patterns for the lip shapes of the Korean consonants and vowels; these fundamental lip shapes allow all Korean sentences to be pronounced. The image synthesis method assigns the fundamental lip shapes to key frames according to the initial, medial, and final sound of each syllable in the Korean input text, and interpolates naturally changing lip shapes in the in-between frames. The number of in-between frames is estimated from the duration of each syllable of the synthetic speech, which accomplishes the synchronization of the facial image sequences and the speech; this estimation is sketched below. Speech synthesis requires disk storage for the 3,040 demi-syllables, whereas synthesis of the facial image sequences requires storage for only one image, because all frames are synthesized from the neutral face. The above method realizes a synchronized system that can read Korean sentences with synthetic speech and synthetic facial image sequences.
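
A minimal sketch of the synchronization step follows: the number of in-between frames for each syllable is derived from its synthesized duration, and lip-shape parameters are linearly interpolated between key frames. The frame rate, dimensions, and data are illustrative assumptions.

```python
import numpy as np

FPS = 30  # assumed video frame rate

def interpolate_lip_frames(key_shapes, durations):
    """key_shapes: (N+1, D) lip-shape vectors at syllable boundaries;
    durations: N per-syllable durations (seconds) from the speech synthesizer."""
    frames = []
    for i, dur in enumerate(durations):
        n = max(1, round(dur * FPS))        # frames allotted to this syllable
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            frames.append((1 - t) * key_shapes[i] + t * key_shapes[i + 1])
    frames.append(key_shapes[-1])
    return np.array(frames)

key_shapes = np.random.rand(4, 8)           # 3 syllables -> 4 key lip shapes (toy)
durations = [0.18, 0.22, 0.25]              # seconds per syllable (toy)
video = interpolate_lip_frames(key_shapes, durations)
print(video.shape)                          # frame count now matches speech length
```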

Injection of Cultural-based Subjects into Stable Diffusion Image Generative Model

  • Amirah Alharbi;Reem Alluhibi;Maryam Saif;Nada Altalhi;Yara Alharthi
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.1-14 / 2024
  • While text-to-image models have made remarkable progress in image synthesis, certain models, particularly generative diffusion models, have exhibited a noticeable bias towards generating images related to the culture of some developing countries. This paper presents an empirical investigation aimed at mitigating the bias of an image generative model. We achieve this by incorporating symbols representing Saudi culture into a Stable Diffusion model using the DreamBooth technique. The CLIP score metric is used to assess the outcomes in this study; its computation is sketched below. The paper also explores the impact of varying parameters such as the quantity of training images and the learning rate. The findings reveal a substantial reduction in bias-related concerns, and the paper proposes a novel metric for evaluating cultural relevance.
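
For reference, here is a minimal sketch of computing a CLIP score (image-text cosine similarity), the metric used here, with the Hugging Face transformers CLIP model; the checkpoint name, file name, and prompt are assumptions for illustration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, prompt: str) -> float:
    """Cosine similarity between CLIP embeddings of an image and a prompt."""
    inputs = processor(text=[prompt], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

# e.g. clip_score("generated.png", "a traditional Saudi coffee pot")
```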

Keypoints-Based 2D Virtual Try-on Network System

  • Pham, Duy Lai;Nguyen, Nhat Tan;Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.186-203 / 2020
  • Image-based virtual try-on systems, which fit a target clothing item onto the image of a model person, are among the most promising solutions for virtual fitting and have attracted considerable research effort. Current solutions often fail to achieve a natural-looking fitted image in which the target clothes are transferred onto the body area of a model person of arbitrary shape and pose while preserving clothing context such as texture, text, and logos without distortion or artifacts. In this paper, we propose an improved image-based virtual try-on network based on keypoints, which we name KP-VTON. KP-VTON first detects keypoints in the target clothes and reliably predicts the corresponding keypoints in the clothes of the model person image by utilizing dense human pose estimation. Then, through a TPS transformation calculated with these keypoints as control points, the warped target-clothes image, matched to the body area where the clothes are to be worn, is obtained; this warping step is sketched below. Finally, a new try-on module adopting Attention U-Net handles the more detailed synthesis of the fitted image. Extensive experiments on a well-known dataset show that KP-VTON outperforms state-of-the-art virtual try-on systems.
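
A minimal sketch of the keypoint-driven TPS warp using OpenCV's thin-plate-spline transformer (from opencv-contrib-python); the keypoints and file name are placeholders, and this classical warp stands in for, rather than reproduces, KP-VTON's learned geometric matching.

```python
import cv2
import numpy as np

def tps_warp(clothes_img, clothes_kpts, person_kpts):
    """Warp the clothes image so its keypoints land on those predicted
    on the model person. Keypoint arrays have shape (N, 2)."""
    src = np.asarray(clothes_kpts, np.float32).reshape(1, -1, 2)
    dst = np.asarray(person_kpts, np.float32).reshape(1, -1, 2)
    matches = [cv2.DMatch(i, i, 0.0) for i in range(src.shape[1])]
    tps = cv2.createThinPlateSplineShapeTransformer()
    # OpenCV estimates the inverse map for image warping, hence (dst, src)
    tps.estimateTransformation(dst, src, matches)
    return tps.warpImage(clothes_img)

clothes = cv2.imread("clothes.png")          # placeholder input image
c_kpts = [(30, 40), (200, 45), (115, 150), (40, 300), (190, 310)]
p_kpts = [(35, 60), (190, 70), (110, 170), (50, 320), (180, 330)]
warped = tps_warp(clothes, c_kpts, p_kpts)
```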

Text-to-Face Generation Using Multi-Scale Gradients Conditional Generative Adversarial Networks (다중 스케일 그라디언트 조건부 적대적 생성 신경망을 활용한 문장 기반 영상 생성 기법)

  • Bui, Nguyen P.;Le, Duc-Tai;Choo, Hyunseung
    • Annual Conference of KIPS / 2021.11a / pp.764-767 / 2021
  • While generative adversarial networks (GANs) have seen huge success in image synthesis tasks, synthesizing high-quality images from text descriptions remains a challenging problem in computer vision. This paper proposes a method named Text-to-Face Generation Using Multi-Scale Gradients for Conditional Generative Adversarial Networks (T2F-MSGGANs) that combines GANs with a natural language processing model to create human faces that have the features described in the input text. The proposed method addresses two problems of GANs, mode collapse and training instability, by investigating how gradients at multiple scales can be used to generate high-resolution images; the idea is sketched below. We show that T2F-MSGGANs converge stably and generate good-quality images.
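
A minimal PyTorch sketch of the multi-scale gradients idea: the generator exposes an RGB output at every resolution so the discriminator (omitted here, as is the text conditioning) provides gradients to every block directly. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MSGGenerator(nn.Module):
    def __init__(self, z_dim=128, ch=64):
        super().__init__()
        self.stem = nn.ConvTranspose2d(z_dim, ch * 4, 4)      # 1x1 -> 4x4
        self.blocks = nn.ModuleList([
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1),      # 4x4 -> 8x8
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1),          # 8x8 -> 16x16
        ])
        # one RGB head per scale, so gradients reach every block directly
        self.to_rgb = nn.ModuleList([
            nn.Conv2d(ch * 4, 3, 1), nn.Conv2d(ch * 2, 3, 1), nn.Conv2d(ch, 3, 1)
        ])

    def forward(self, z):
        x = torch.relu(self.stem(z.view(z.size(0), -1, 1, 1)))
        outs = [torch.tanh(self.to_rgb[0](x))]
        for block, head in zip(self.blocks, self.to_rgb[1:]):
            x = torch.relu(block(x))
            outs.append(torch.tanh(head(x)))
        return outs                                           # 4x4, 8x8, 16x16 images

imgs = MSGGenerator()(torch.randn(2, 128))
print([tuple(t.shape) for t in imgs])
```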

Learning-based Super-resolution for Text Images (글자 영상을 위한 학습기반 초고해상도 기법)

  • Heo, Bo-Young;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.175-183 / 2015
  • The proposed algorithm consists of two stages: a learning stage and a synthesis stage. At the learning stage, we first collect various high-resolution (HR) and low-resolution (LR) text image pairs, quantize the LR images, and extract HR-LR block pairs. Based on the quantized LR blocks, the HR-LR block pairs are clustered into a pre-determined number of classes. For each class, an optimal 2D FIR filter is computed and stored in a dictionary together with the corresponding LR block for indexing; this per-class filter learning is sketched below. At the synthesis stage, each quantized LR block in an input LR image is compared with every LR block in the dictionary, and the FIR filter of the best-matched LR block is selected. Finally, an HR block is synthesized with the chosen filter, and a final HR image is produced. To cope with noisy environments, we also generate multiple dictionaries according to noise level at the learning stage; the dictionary corresponding to the noise level of the input image is then chosen, and the final HR image is produced using it. Experimental results show that the proposed algorithm outperforms previous works for noisy as well as noise-free images.
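
A minimal sketch of the learn-a-filter-per-class idea, with k-means standing in for the paper's quantization-based clustering and a single HR pixel predicted per LR block; sizes and data are toy assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_dictionary(lr_patches, hr_pixels, n_classes=16):
    """lr_patches: (M, k*k) flattened LR blocks; hr_pixels: (M,) HR targets."""
    km = KMeans(n_clusters=n_classes, n_init=10).fit(lr_patches)
    filters = np.zeros((n_classes, lr_patches.shape[1]))
    for c in range(n_classes):
        mask = km.labels_ == c
        # least-squares optimal linear (FIR-like) filter for this class
        filters[c], *_ = np.linalg.lstsq(lr_patches[mask], hr_pixels[mask], rcond=None)
    return km, filters

def synthesize_pixel(km, filters, lr_patch):
    c = km.predict(lr_patch.reshape(1, -1))[0]   # best-matched class
    return filters[c] @ lr_patch                 # apply its filter

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 25))                   # 5x5 LR blocks (toy data)
y = X @ rng.normal(size=25)                      # toy HR center pixels
km, filters = train_dictionary(X, y)
print(synthesize_pixel(km, filters, X[0]), y[0])
```

A separate dictionary per noise level, as the paper describes, would simply repeat this training on inputs degraded at each level.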

From Multimedia Data Mining to Multimedia Big Data Mining

  • Constantin, Gradinaru Bogdanel;Mirela, Danubianu;Luminita, Barila Adina
    • International Journal of Computer Science & Network Security / v.22 no.11 / pp.381-389 / 2022
  • With the collection of huge volumes of text, image, audio, and video data, or combinations of these (in a word, multimedia data), the need arose to explore them in order to discover new, unexpected, and potentially valuable information for decision making. Starting from existing data mining, but not merely as its extension, multimedia mining emerged as a distinct field with increased complexity and many characteristic aspects. Later, the concept of big data was extended to multimedia, resulting in multimedia big data, which in turn gave rise to the multimedia big data mining process. This paper surveys multimedia data mining, starting from the general concept and following the transition from multimedia data mining to multimedia big data mining, through an up-to-date synthesis of works in the field, which is, to the best of our knowledge, a novelty.

Synthesis of Expressive Talking Heads from Speech with Recurrent Neural Network (RNN을 이용한 Expressive Talking Head from Speech의 합성)

  • Sakurai, Ryuhei;Shimba, Taiki;Yamazoe, Hirotake;Lee, Joo-Ho
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.16-25 / 2018
  • A talking head (TH) is an utterance face animation generated from text and voice input. In this paper, we propose a method for generating a TH with facial expression and intonation from speech input alone. The problem of generating a TH from speech can be regarded as a regression from the acoustic feature sequence to the facial code sequence, a low-dimensional vector representation that can efficiently encode and decode a face image. This regression was modeled by a bidirectional RNN, sketched below, and trained using the SAVEE database of frontal utterance face animations as training data. The proposed method generates a TH with facial expression and intonation using acoustic features such as MFCCs, the dynamic elements of MFCCs, energy, and F0. According to the experiments, a configuration with BLSTM layers in the first and second layers of the bidirectional RNN predicted the face codes best. For evaluation, a questionnaire survey was conducted with 62 people who watched TH animations generated by the proposed method and a previous method. As a result, 77% of the respondents answered that the proposed method generated THs that match the speech well.
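
A minimal PyTorch sketch of the regression described here: a two-layer bidirectional LSTM maps an acoustic feature sequence (e.g. MFCCs plus dynamic features) to a face-code sequence. All dimensions are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class Speech2FaceCode(nn.Module):
    def __init__(self, n_acoustic=39, n_face_code=32, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_acoustic, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_face_code)  # both directions concatenated

    def forward(self, acoustic_seq):                    # (batch, frames, n_acoustic)
        h, _ = self.rnn(acoustic_seq)
        return self.head(h)                             # (batch, frames, n_face_code)

model = Speech2FaceCode()
mfcc = torch.randn(4, 120, 39)      # 4 clips, 120 frames, 39-dim features (toy)
face_codes = model(mfcc)
loss = nn.functional.mse_loss(face_codes, torch.randn_like(face_codes))
loss.backward()                      # trained as a simple regression
print(face_codes.shape)
```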

Study on Fashion Illustration as Viewed from the Allegorical - Based on the theory of Craig Owens - (알레고리 관점의 패션 일러스트레이션에 관한 연구 - 크렉 오웬스의 이론을 중심으로 -)

  • Kim, Mi-Hyun
    • Journal of the Korean Society of Costume / v.62 no.4 / pp.81-90 / 2012
  • The contents of this study are as follows. First, an academic understanding was achieved by exploring the theoretical concept of "allegory," and a new theoretical methodology was sought. Second, an analysis index for fashion illustration cases was suggested based on the allegory theory of Craig Owens. Third, in order to identify the characteristics of fashion illustration viewed from the allegorical standpoint and examine its feasibility, case studies were consulted, and the internal and external significances combining different characteristics were extracted. Literature review and case studies were conducted in parallel, in the following sequence: establishment of the study system, derivation of allegory-related concepts, and discovery of the characteristics of aesthetic expression. The results on the expressive characteristics of fashion illustration viewed from Craig Owens' allegorical standpoint are as follows. First, the borrowing of images, a characteristic of allegory, carries the meaning of uncertainty in fashion illustration: it expresses image synthesis, forms a completely different meaning as the fixed meaning is dissolved, and is utilized as a photomontage technique. Second, the inference of pictograms is a mixture of linguistic and visual media; fashion illustration utilizes characters and transmits fashion information visually and immanently, turning information into pictograms with the internal significance of inter-text serving a communicative function. Third, the uniqueness of location in fashion illustration reflects the special nature of the media employed, as it is used for advertising or publicity; fashion illustration from the allegorical viewpoint has the impermanence of existing only for a limited time and reflects the contingency that gives meaning to the location of use according to the season's trend. Fourth, cross-breeding is expressed as the mixture of various materials in fashion illustration: expressions made by mixing media, such as the use of computer graphics programs together with various materials, showed a trend toward diversity and the dissolution of genre.