• Title/Summary/Keyword: Style Transfer Model

27 search results

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.5
    • /
    • pp.663-672
    • /
    • 2023
  • Most vehicle detection methods extract vehicle features poorly at night, reducing their robustness; hence, this study proposes a nighttime vehicle detection method based on style transfer image enhancement. First, a style transfer model is constructed using a cycle-consistent generative adversarial network (CycleGAN). The daytime images in the BDD100K dataset are converted into nighttime images to form a style dataset, which is then divided using its labels. Finally, a YOLOv5s network detects vehicles in the nighttime images for reliable recognition of vehicle information in complex environments. Experimental results on the BDD100K dataset show that the transferred nighttime vehicle images are clear and meet the requirements. The precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
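The CycleGAN underlying this method is trained with a cycle-consistency constraint: a day image translated to night and back should reconstruct the original. A minimal NumPy sketch of that loss, with toy stand-in functions in place of the paper's generator networks (the dataset split and YOLOv5s detector are not reproduced here):

```python
import numpy as np

def cycle_consistency_loss(day_img, g_day2night, g_night2day):
    """L1 cycle loss: translating day -> night -> day should
    reconstruct the original day image."""
    night = g_day2night(day_img)
    cycled = g_night2day(night)
    return np.mean(np.abs(day_img - cycled))

# Toy stand-in generators (darken / un-darken), not the paper's networks.
day = np.random.rand(64, 64, 3)
loss = cycle_consistency_loss(day, lambda x: x * 0.3, lambda x: x / 0.3)
```

With these invertible toy generators the loss is near zero; in training, minimizing it constrains the two real generators to be approximate inverses.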

Optimization of attention map based model for improving the usability of style transfer techniques

  • Junghye Min
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.8
    • /
    • pp.31-38
    • /
    • 2023
  • Style transfer is one of the deep learning-based image processing techniques that has been actively researched recently, and these efforts have led to significant improvements in the quality of result images. Style transfer takes a content image and a style image as inputs and generates a transformed result image by applying the characteristics of the style image to the content image; it is becoming increasingly important for exploiting the diversity of digital content. To improve the usability of style transfer technology, ensuring stable performance is crucial. Recently, the Transformer has been actively utilized in the field of natural language processing, and the attention map, which forms the basis of the Transformer, is also being actively applied and researched in the development of style transfer techniques. In this paper, we analyze the representative techniques SANet and AdaAttN and propose a novel attention map-based structure that can generate improved style transfer results. The results demonstrate that the proposed technique effectively preserves the structure of the content image while applying the characteristics of the style image.
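The attention map that SANet-style methods build on lets each content position attend over all style positions and aggregate style features accordingly. A NumPy toy of that core operation on flattened feature maps — an illustration only, not either paper's full model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_style_transfer(content_feat, style_feat):
    """content_feat: (Nc, C), style_feat: (Ns, C) flattened feature maps.
    Each content position attends over all style positions, then
    aggregates style features with the attention weights."""
    # mean-variance normalize before computing similarity, as in SANet
    norm = lambda f: (f - f.mean(0)) / (f.std(0) + 1e-5)
    attn = softmax(norm(content_feat) @ norm(style_feat).T)  # (Nc, Ns)
    return attn @ style_feat                                 # (Nc, C)

out = attention_style_transfer(np.random.rand(16, 8), np.random.rand(32, 8))
```

The output keeps the content map's spatial layout (one row per content position) while its values are convex combinations of style features.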

Learning Domain Invariant Representation via Self-Regularization (자기 정규화를 통한 도메인 불변 특징 학습)

  • Hyun, Jaeguk;Lee, ChanYong;Kim, Hoseong;Yoo, Hyunjung;Koh, Eunjin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.24 no.4
    • /
    • pp.382-391
    • /
    • 2021
  • Unsupervised domain adaptation often gives impressive solutions for handling domain shift. Most current approaches assume that unlabeled target data for training are abundant, but this assumption does not always hold in practice. To tackle this issue, we propose a general solution to the domain gap minimization problem that requires no target data. Our method consists of two regularization steps. The first is pixel regularization by arbitrary style transfer. Recently, some methods have brought style transfer algorithms into the domain adaptation and domain generalization process, using them to remove texture bias in source domain data. We also use style transfer algorithms to remove texture bias, but our method depends on neither the domain adaptation nor the domain generalization paradigm. The second regularization step is feature regularization by feature alignment: adding a feature alignment loss term to the model loss lets the model learn a domain invariant representation more efficiently. We evaluate our regularization methods in several experiments on both small and large datasets, and show that our model can learn domain invariant representations as well as unsupervised domain adaptation methods.
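The feature-alignment step can be sketched as an extra loss term that pulls together the features of an image and of its arbitrarily stylized copy. The weighting `lam` below is a hypothetical hyperparameter; the paper's exact formulation may differ:

```python
import numpy as np

def feature_alignment_loss(feat_original, feat_stylized):
    """Penalize the distance between features of an image and of its
    stylized copy, discouraging texture-dependent representations."""
    return np.mean((feat_original - feat_stylized) ** 2)

def total_loss(task_loss, feat_original, feat_stylized, lam=0.1):
    # lam is a hypothetical weighting, not the paper's value
    return task_loss + lam * feature_alignment_loss(feat_original, feat_stylized)

loss = total_loss(1.0, np.zeros(4), np.ones(4))
```

When the two feature vectors coincide, the alignment term vanishes and only the task loss remains, so the regularizer never fights a model that is already texture-invariant.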

A label-free high precision automated crack detection method based on unsupervised generative attentional networks and swin-crackformer

  • Shiqiao Meng;Lezhi Gu;Ying Zhou;Abouzar Jafari
    • Smart Structures and Systems
    • /
    • v.33 no.6
    • /
    • pp.449-463
    • /
    • 2024
  • Automated crack detection is crucial for structural health monitoring and post-earthquake rapid damage detection. However, realizing high-precision automatic crack detection in the absence of manual labeling presents a formidable challenge. This paper presents a novel crack segmentation transfer learning method and a novel crack segmentation model called Swin-CrackFormer. The proposed method facilitates efficient crack image style transfer through a meticulously designed data preprocessing technique, followed by the use of a GAN model for image style transfer. Moreover, the proposed Swin-CrackFormer combines the advantages of Transformer and convolution operations to achieve effective local and global feature extraction. To verify its effectiveness, this study validates the proposed method on three unlabeled crack datasets and evaluates the Swin-CrackFormer model on the METU dataset. Experimental results demonstrate that the crack transfer learning method significantly improves crack segmentation performance on unlabeled crack datasets. Moreover, the Swin-CrackFormer model achieved the best detection result on the METU dataset, surpassing existing crack segmentation models.

Few-Shot Content-Level Font Generation

  • Majeed, Saima;Hassan, Ammar Ul;Choi, Jaeyoung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.4
    • /
    • pp.1166-1186
    • /
    • 2022
  • Artistic font design has become an integral part of visual media. However, without prior knowledge of the font domain, it is difficult to create distinct font styles. When the character set is small (e.g., only Latin characters), this task is easier; designing CJK (Chinese, Japanese, and Korean) characters, however, presents a challenge due to the large character sets and the complexity of the glyph components in these languages. Numerous studies have been conducted on automating the font design process using generative adversarial networks (GANs). Existing methods rely heavily on reference fonts and perform font style conversions between different fonts. Additionally, most methods capture style information for a target font from a single font image rather than from multiple style images. In this paper, we propose a network architecture for generating multilingual font sets that uses geometric structures as content. Additionally, to acquire sufficient style information, we employ multiple style images belonging to a single font style simultaneously to extract global font style-specific information. By utilizing the geometric structural information of content and a few stylized images, our model can generate an entire font set while maintaining the style. Extensive experiments demonstrate the proposed model's superiority over several baseline methods, and ablation studies validate the proposed network architecture.
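The "multiple style images" idea amounts to encoding several glyphs of one font and pooling them into a single global style vector. A minimal sketch with a toy encoder standing in for the paper's style encoder network (the pooling choice here, mean-pooling, is an assumption):

```python
import numpy as np

def global_style_embedding(style_glyphs, encoder):
    """Encode several glyph images of one font and mean-pool the
    results into one global style vector."""
    feats = np.stack([encoder(g) for g in style_glyphs])
    return feats.mean(axis=0)

# toy encoder standing in for a CNN: row-wise average of the glyph
glyphs = [np.random.rand(32, 32) for _ in range(5)]
emb = global_style_embedding(glyphs, lambda g: g.mean(axis=1))
```

Pooling over several glyphs averages out per-character quirks, so the embedding reflects the font's shared stroke style rather than any single character's shape.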

Text Steganography Based on Ci-poetry Generation Using Markov Chain Model

  • Luo, Yubo;Huang, Yongfeng;Li, Fufang;Chang, Chinchen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.9
    • /
    • pp.4568-4584
    • /
    • 2016
  • Steganography based on text generation has become a hot research topic in recent years. However, current text-generation methods, which generate texts of normal style, have either semantic or syntactic flaws. Note that texts of a special genre, such as poems, have a much simpler language model, fewer grammar rules, and a lower demand for naturalness. Motivated by this observation, in this paper we propose a text steganography that utilizes a Markov chain model to generate Ci-poetry, a classic Chinese poem style. Since all Ci poems have fixed tone patterns, the generation process selects proper words based on a chosen tone pattern. The Markov chain model obtains a state transfer matrix that simulates the language model of Ci-poetry by learning from a given corpus. Beginning with an initial word, we hide the secret message by using the state transfer matrix to choose each next word, iterating until the end of the Ci poem. Extensive experiments are conducted, and both machine and human evaluation results show that our method can generate Ci-poetry with higher naturalness than previous work while achieving a competitive embedding rate.
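The hiding step can be sketched as follows: at each position, rank the candidate successors from the transition matrix and let the next secret bit pick among the top candidates (the real system additionally filters candidates by the chosen tone pattern, which this toy omits; the tiny transition table is fabricated for illustration):

```python
def embed_bits(transfer, start, bits):
    """Hide bits by word choice: at each step the secret bit selects
    among the successors ranked by transition probability."""
    poem, word = [start], start
    for b in bits:
        cands = sorted(transfer[word], key=transfer[word].get, reverse=True)
        word = cands[b]          # bit 0 -> most probable, bit 1 -> next
        poem.append(word)
    return poem

def extract_bits(transfer, poem):
    """Recover the bits by re-ranking successors at each step."""
    bits = []
    for w, nxt in zip(poem, poem[1:]):
        cands = sorted(transfer[w], key=transfer[w].get, reverse=True)
        bits.append(cands.index(nxt))
    return bits

# toy transition matrix (illustrative probabilities, not learned)
transfer = {"明": {"月": 0.6, "日": 0.4},
            "月": {"明": 0.5, "日": 0.5},
            "日": {"月": 0.7, "明": 0.3}}
poem = embed_bits(transfer, "明", [0, 1, 0])
```

Because sender and receiver share the transition matrix, extraction simply replays the ranking at each step and reads off which candidate was chosen.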

Improved Cycle GAN Performance By Considering Semantic Loss (의미적 손실 함수를 통한 Cycle GAN 성능 개선)

  • Tae-Young Jeong;Hyun-Sik Lee;Ye-Rim Eom;Kyung-Su Park;Yu-Rim Shin;Jae-Hyun Moon
    • Annual Conference of KIPS
    • /
    • 2023.11a
    • /
    • pp.908-909
    • /
    • 2023
  • Recently, several generative models have emerged and are being used in various industries. Among them, CycleGAN is still used in various fields such as style transfer, medical care, and autonomous driving. In this paper, we propose two methods to improve the performance of the CycleGAN model. First, the ReLU activation function previously used in the generator is replaced with Leaky ReLU. Second, a new loss function is proposed that considers the semantic level rather than focusing only on the pixel level, using a VGG feature extractor. The proposed model showed quality improvement on the test set in the art domain, and it can be expected to improve performance in other domains in the future.
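The semantic loss idea is to compare images in feature space rather than pixel space, so perceptually similar outputs are not over-penalized for pixel-level shifts. A NumPy sketch with a toy pooling function standing in for the VGG feature extractor:

```python
import numpy as np

def semantic_loss(gen_img, target_img, feature_extractor):
    """MSE in feature space (e.g. VGG activations) instead of
    pixel space."""
    f_gen = feature_extractor(gen_img)
    f_tgt = feature_extractor(target_img)
    return np.mean((f_gen - f_tgt) ** 2)

# toy extractor standing in for VGG features: 4x4 average pooling
pool = lambda x: x.reshape(16, 4, 16, 4).mean(axis=(1, 3))
loss = semantic_loss(np.zeros((64, 64)), np.ones((64, 64)), pool)
```

In practice this term is added to the standard CycleGAN adversarial and cycle losses with some weighting, which the abstract does not specify.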

Makeup transfer by applying a loss function based on facial segmentation combining edge with color information (에지와 컬러 정보를 결합한 안면 분할 기반의 손실 함수를 적용한 메이크업 변환)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services
    • /
    • v.23 no.4
    • /
    • pp.35-43
    • /
    • 2022
  • Makeup is the most common way to improve a person's appearance. However, since makeup styles are very diverse, applying makeup directly to oneself costs considerable time and money, so the need for makeup automation is increasing. Makeup transfer is being studied for this purpose: it applies a makeup style to a face image without makeup. Makeup transfer methods can be divided into traditional image processing-based methods and deep learning-based methods; among the latter, many studies based on generative adversarial networks have been performed. However, both approaches have disadvantages: the resulting image is unnatural, the makeup transfer result is not clear, and it is smeared or heavily influenced by the makeup-style face image. To express clear makeup boundaries and to alleviate the influence of the makeup-style face image, this study segments the makeup area and computes a loss function using HOG (Histogram of Oriented Gradients). HOG extracts image features from the magnitude and orientation of the edges present in an image. Through this, we propose a makeup transfer network that learns robustly on edges. Comparing images generated by the proposed model with those generated by BeautyGAN, used as the base model, confirmed that the proposed model performs better; exploiting additional facial information is suggested as future work.
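The HOG-based loss can be sketched as comparing magnitude-weighted gradient-orientation histograms of the generated and reference face regions. This coarse whole-image version omits the cell/block structure of full HOG and the facial segmentation step:

```python
import numpy as np

def hog_histogram(img, bins=9):
    """Coarse HOG descriptor: histogram of unsigned gradient
    orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def hog_loss(generated, reference):
    """L1 distance between the two orientation histograms."""
    return np.abs(hog_histogram(generated) - hog_histogram(reference)).sum()

loss = hog_loss(np.random.rand(32, 32), np.random.rand(32, 32))
```

Because the histograms only describe edge statistics, this term pushes the generator to reproduce makeup boundaries without dictating exact pixel colors.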

Recognition of Multi Label Fashion Styles based on Transfer Learning and Graph Convolution Network (전이학습과 그래프 합성곱 신경망 기반의 다중 패션 스타일 인식)

  • Kim, Sunghoon;Choi, Yerim;Park, Jonghyuk
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.1
    • /
    • pp.29-41
    • /
    • 2021
  • Recently, there have been increasing attempts to apply deep learning methodology in the fashion industry. Accordingly, studies dealing with various fashion-related problems have been proposed and have achieved superior performance. However, studies on fashion style classification have not reflected the characteristic of fashion styles that one outfit can include multiple styles simultaneously. Therefore, we aim to solve the resulting multi-label classification problem by utilizing the dependencies between styles. A multi-label recognition model based on a graph convolution network is applied to detect and exploit these dependencies. Furthermore, we accelerate model training and improve performance through transfer learning. The proposed model was verified on a dataset collected from social network services and outperformed the baselines.
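The graph convolution over style dependencies propagates label embeddings along a co-occurrence graph. A one-layer NumPy sketch of the standard GCN update (the adjacency values and embedding sizes below are fabricated for illustration):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: normalized adjacency (with
    self-loops) x node features x weight matrix, then ReLU."""
    a_hat = adj + np.eye(len(adj))            # add self-loops
    d_inv = np.diag(1.0 / a_hat.sum(axis=1))  # row-normalize
    return np.maximum(d_inv @ a_hat @ feats @ weight, 0)

# 4 fashion-style labels with a toy co-occurrence adjacency
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
out = gcn_layer(adj, np.random.rand(4, 8), np.random.rand(8, 8))
```

Each label's updated embedding mixes in its co-occurring labels' embeddings, which is how dependencies between styles reach the classifier.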

Effects of Childhood Attachment on Attachment Transition and Loneliness in Adolescence: An Examination of Attachment Transfer Process Model (유아기 부모와의 애착이 청소년기 애착전이와 외로움에 미치는 영향: 애착전이모형 검증을 중심으로)

  • 전효정;이귀옥
    • Journal of the Korean Home Economics Association
    • /
    • v.38 no.1
    • /
    • pp.185-198
    • /
    • 2000
  • Recently, many researchers have been interested in attachment processes in adolescence. However, the nature of attachment formation and transfer during this developmental period has not yet been fully answered. This study examines the mechanism of attachment transfer from parents to friends and the effects of childhood attachment styles on the level of attachment transfer and on loneliness in adolescence. The results show that the majority of participants (70%) used their parents as primary attachment figures but were in the process of transferring attachment-related functions from parents to peers. There were significant effects of attachment style on the level of transfer and on state and trait loneliness in adolescence. This study provides implications and limitations for the conceptualization and measurement of adolescent attachment formation and loneliness.
