• Title/Summary/Keyword: generative art

Analysis of Fashion Design Reflected Visual Properties of the Generative Art (제너러티브 아트(Generative Art)의 시각적 속성이 반영된 패션디자인 분석)

  • Kim, Dong Ok; Choi, Jung Hwa
    • Journal of the Korean Society of Clothing and Textiles / v.41 no.5 / pp.825-839 / 2017
  • Generative Art (also called the art of the algorithm) creates unexpected results by moving autonomously according to rules or algorithms. The evolution of digital media in art, which seeks novelty, increases the possibility of new artistic fields; accordingly, this study establishes a basis for new design approaches by analyzing visual cases of Generative Art that have emerged since the 20th century and the characteristics they express in fashion. For the methodology, the study analyzes fashion designs that have appeared since 2000, based on theoretical research covering the literature and research papers on Generative Art. According to the study, the expressive characteristics shown in fashion, based on the visual properties of Generative Art, are as follows. First, abstract randomness is expressed through unexpected, coincidental forms that use the movements of a creator and the properties of materials as variables in accordance with rules or algorithms. Second, endlessly repeated patterns express an emergent shape through repetition created by a rule-based modular system or by 3D printing driven by a computer algorithm. Third, systematic variability expresses constantly changing images by combining a system with digital media through the method of wearing. It is expected that design by algorithm will become a significant method for producing further creative ideas and expressions in modern fashion.
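
As a toy illustration of the "design by algorithm" idea described above, the sketch below combines a deterministic rule with a random variable to produce an emergent repeated pattern; it is not taken from the paper, and all names and parameters are illustrative.

```python
# Illustrative rule-plus-randomness pattern generator; not the paper's method.
import random

def generate_pattern(rows=8, cols=16, seed=2017):
    """Build a text-mosaic pattern from a deterministic rule plus a random variable."""
    random.seed(seed)
    glyphs = ["/", "\\"]                           # a two-orientation module
    lines = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if (r + c) % 4 == 0:                   # deterministic rule: periodic accent
                row.append("o")
            else:
                row.append(random.choice(glyphs))  # stochastic "coincidental" variable
        lines.append("".join(row))
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_pattern())
```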

Methodology for Visual Communication Design Based on Generative AI

  • Younjung Hwang; Yi Wu
    • International Journal of Advanced Smart Convergence / v.13 no.3 / pp.170-175 / 2024
  • Generative AI (Artificial Intelligence) refers to technology that autonomously comprehends user intentions from commands and learns from provided data to generate new content, such as images or text. This capability, which enables autonomous creation even from design keywords alone, is anticipated to play a significant role in the domain of visual communication design. This article examines the generative AI tools applicable to visual design and the methodology for creating designs with these tools. Furthermore, it discusses how designers can interact visually with AI technology in the era of generative AI.
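
A hedged sketch of the keyword-to-image workflow the abstract describes, using the open-source Hugging Face diffusers library as one possible tool; the paper does not name specific tools, and the model checkpoint and prompt below are placeholders.

```python
# Hypothetical text-to-image design exploration; library choice and model ID are assumptions.
import torch
from diffusers import StableDiffusionPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"      # placeholder checkpoint
PROMPT = "minimal poster layout, geometric shapes, pastel palette"  # design keywords

def generate_design_draft(prompt: str = PROMPT) -> None:
    """Generate one draft image from design keywords and save it for review."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID).to(device)
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save("draft.png")

if __name__ == "__main__":
    generate_design_draft()
```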

SkelGAN: A Font Image Skeletonization Method

  • Ko, Debbie Honghee; Hassan, Ammar Ul; Majeed, Saima; Choi, Jaeyoung
    • Journal of Information Processing Systems / v.17 no.1 / pp.1-13 / 2021
  • In this research, we study the problem of font image skeletonization using an end-to-end deep adversarial network, in contrast with state-of-the-art methods that rely on mathematical algorithms. Several studies have addressed skeletonization, but few have utilized deep learning, and no study has considered deep generative models for skeletonizing font characters, which are more delicate than natural objects. In this work, we take a step closer to producing realistic synthesized skeletons of font characters with SkelGAN, an end-to-end deep adversarial network for font-image skeletonization. The proposed skeleton generator proves superior to well-known mathematical skeletonization methods in terms of character structure, including delicate strokes, serifs, and even special styles. Experimental results also demonstrate the dominance of our method over the state-of-the-art supervised image-to-image translation method in the font character skeletonization task.
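
The abstract frames skeletonization as adversarial image-to-image translation (font image in, skeleton image out). Below is a minimal PyTorch sketch of one conditional-GAN training step in that general style; the tiny networks and loss weights are placeholders, not the SkelGAN architecture.

```python
# Minimal adversarial image-to-image training step (pix2pix-style); illustrative only,
# not the SkelGAN architecture described in the paper.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder encoder-decoder: font image -> skeleton image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Placeholder patch discriminator over (input, output) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, font_img, skel_img):
        return self.net(torch.cat([font_img, skel_img], dim=1))

def train_step(G, D, opt_g, opt_d, font, skel, l1_weight=100.0):
    """One conditional-GAN step: D separates real/fake pairs, G fools D plus L1 to target."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake = G(font)
    # Discriminator update
    opt_d.zero_grad()
    d_real = D(font, skel)
    d_fake = D(font, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()
    # Generator update
    opt_g.zero_grad()
    d_fake = D(font, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, skel)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    G, D = TinyGenerator(), TinyDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    font = torch.rand(4, 1, 64, 64)              # stand-in font images
    skel = torch.rand(4, 1, 64, 64)              # stand-in skeleton targets
    print(train_step(G, D, opt_g, opt_d, font, skel))
```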

Restoration of Ghost Imaging in Atmospheric Turbulence Based on Deep Learning

  • Chenzhe Jiang; Banglian Xu; Leihong Zhang; Dawei Zhang
    • Current Optics and Photonics / v.7 no.6 / pp.655-664 / 2023
  • Ghost imaging (GI) technology is developing rapidly, but it inevitably faces limitations such as the influence of atmospheric turbulence. In this paper, we study a ghost imaging system in atmospheric turbulence and use a gamma-gamma (GG) model to simulate the medium-to-strong range of the turbulence distribution. With a compressed sensing (CS) algorithm and a generative adversarial network (GAN), the image can be restored well. We analyze the performance of correlation imaging, the influence of atmospheric turbulence, and the effects of the restoration algorithm. The restored image's peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) increased to 21.9 dB and 0.67, respectively. This demonstrates that deep learning (DL) methods can restore a distorted image well, which has particular significance for computational imaging in noisy and blurred environments.
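
The restoration quality above is reported in terms of PSNR and SSIM. The sketch below shows how those two metrics are commonly computed between a ground-truth and a restored image, using scikit-image as an assumed tool (the abstract does not give the paper's implementation details).

```python
# How PSNR and SSIM are commonly computed for restored images; scikit-image is an
# assumed tool here, not necessarily the one used in the paper.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def restoration_quality(ground_truth: np.ndarray, restored: np.ndarray):
    """Return (PSNR in dB, SSIM in [0, 1]) for two grayscale images scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=1.0)
    ssim = structural_similarity(ground_truth, restored, data_range=1.0)
    return psnr, ssim

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((128, 128))
    noisy = np.clip(truth + 0.05 * rng.standard_normal(truth.shape), 0.0, 1.0)
    psnr, ssim = restoration_quality(truth, noisy)
    print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.2f}")
```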

A Study on User Experience through Analysis of the Creative Process of Using Image Generative AI: Focusing on User Agency in Creativity (이미지 생성형 AI의 창작 과정 분석을 통한 사용자 경험 연구: 사용자의 창작 주체감을 중심으로)

  • Daeun Han; Dahye Choi; Changhoon Oh
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.667-679 / 2023
  • The advent of image generative AI has made it possible for people who are not experts in art or design to create finished artworks through text input. With the increasing availability of generated images and their impact on the art industry, there is a need for research on how users perceive the process of co-creating with AI. In this study, we conducted an experiment to investigate the expected and experienced processes of creating with image generative AI among general users and to identify which parts of the process affect users' sense of creative agency. The results showed a gap between the expected and experienced creative processes, and users tended to perceive a low sense of creative agency. We recommend eight ways in which AI can act as an enabler that supports users' creative intentions so that they experience a higher sense of creative agency. This study can contribute to the future development of image generative AI by considering user-centered creative experiences.

Aesthetic Potential and Value Characteristics of the Game (게임의 미학적 잠재성과 가치 특성)

  • Lee, Jangwon; Yoon, Joonsung
    • Journal of Korea Game Society / v.16 no.5 / pp.131-148 / 2016
  • The cultural status of games has constantly been compared with that of the various areas of art, a discussion that has continued from the appearance of games to the present. Over the same period, the cultural influence of games has gradually increased, but public opinion has significantly undervalued their artistic value. A change in this evaluation is needed for games to maintain their cultural influence and to exert a positive influence on society, which requires research that demonstrates their aesthetic value from the game's own viewpoint. This paper explores the new aesthetic potential of games through a discussion of the various similarities and relationships between games and art. We examine artistic perspectives on games through game art and art games, and identify the aesthetic subject and value of games. Finally, an approach through the features of information aesthetics and generative aesthetics aids the understanding of game aesthetics.

Enhancing Gene Expression Classification of Support Vector Machines with Generative Adversarial Networks

  • Huynh, Phuoc-Hai; Nguyen, Van Hoa; Do, Thanh-Nghi
    • Journal of Information and Communication Convergence Engineering / v.17 no.1 / pp.14-20 / 2019
  • Microarray gene expression data currently enable the effective classification of cancers, which addresses problems relating to cancer causes and treatment regimens. However, the sample size of gene expression datasets is often limited because microarray technology is expensive to apply to human studies. We propose enhancing the gene expression classification of support vector machines with generative adversarial networks (GAN-SVMs). We implement a GAN that generates new data from the original training datasets and use it in conjunction with nonlinear SVMs that efficiently classify gene expression data. Numerical test results on 20 low-sample-size and very high-dimensional microarray gene expression datasets from the Kent Ridge Biomedical and Array Expression repositories indicate that the model is more accurate than state-of-the-art classification models.
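
A minimal sketch of the "augment with synthetic samples, then classify with a nonlinear SVM" idea, using scikit-learn; the GAN itself is out of scope here, so synthetic samples are stubbed with a placeholder jitter function and the data are random stand-ins.

```python
# Sketch of augment-then-classify with a nonlinear SVM; the generator is stubbed with
# Gaussian jitter around real samples, standing in for a trained GAN.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def fake_gan_samples(X: np.ndarray, y: np.ndarray, n_new: int, rng):
    """Placeholder for GAN sampling: jitter existing rows to mimic synthetic data."""
    idx = rng.integers(0, len(X), size=n_new)
    X_new = X[idx] + 0.05 * rng.standard_normal((n_new, X.shape[1]))
    return X_new, y[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Stand-in for a small, high-dimensional gene-expression dataset.
    X = rng.standard_normal((60, 2000))
    y = rng.integers(0, 2, size=60)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    X_syn, y_syn = fake_gan_samples(X_tr, y_tr, n_new=100, rng=rng)
    X_aug = np.vstack([X_tr, X_syn])
    y_aug = np.concatenate([y_tr, y_syn])

    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_aug, y_aug)
    print("test accuracy:", clf.score(X_te, y_te))
```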

A Novel Cross Channel Self-Attention based Approach for Facial Attribute Editing

  • Xu, Meng; Jin, Rize; Lu, Liangfu; Chung, Tae-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2115-2127 / 2021
  • Although significant progress has been made in synthesizing visually realistic face images with Generative Adversarial Networks (GANs), effective approaches that provide fine-grained control over the generation process for semantic facial attribute editing are still lacking. In this work, we propose a novel cross channel self-attention based generative adversarial network (CCA-GAN), which weights the importance of multiple feature channels and achieves pixel-level feature alignment and conversion, reducing the impact on irrelevant attributes while editing the target attributes. Evaluation results show that CCA-GAN outperforms state-of-the-art models on the CelebA dataset, reducing Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) by 15~28% and 25~100%, respectively. Furthermore, visualization of generated samples confirms the disentanglement effect of the proposed model.
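
A hedged PyTorch sketch of a channel-wise self-attention block in the spirit of the mechanism described above (re-weighting feature channels by their affinities); the abstract does not give the exact CCA-GAN layer design, so the projections and dimensions here are placeholders.

```python
# Illustrative channel self-attention block; not the exact CCA-GAN layer from the paper.
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Re-weight each feature channel by its attention affinity to the other channels."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, c, h * w)                    # (B, C, HW)
        k = self.key(x).view(b, c, h * w)                      # (B, C, HW)
        attn = torch.softmax(q @ k.transpose(1, 2), dim=-1)    # (B, C, C) channel affinities
        out = attn @ x.view(b, c, h * w)                       # re-mix channels
        return self.gamma * out.view(b, c, h, w) + x

if __name__ == "__main__":
    block = ChannelSelfAttention(channels=64)
    feats = torch.randn(2, 64, 32, 32)
    print(block(feats).shape)                                  # torch.Size([2, 64, 32, 32])
```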

Impact of Artificial Intelligence on the Development of Art Projects: Opportunities and Limitations

  • Zheng, Xiang; Xiong, Jinghao; Cao, Xiaoming; Nazarov, Y.V.
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.343-347 / 2022
  • To date, the use of artificial intelligence has already produced notable results in such areas of art as poetry, painting, and music. The development of AI and its application in the creative process opens up new perspectives, expanding the capabilities of authors and attracting new audiences. The purpose of the article is to analyze the essential, artistic, and technological limitations of AI art. The article discusses methods of involving AI in artistic practice, carries out a comparative analysis of how AI is used in visual art and in the process of writing music, and identifies typical features of the creative interaction between the author of a work of art and AI. Based on this analysis, the basic principles of working with AI are determined, as is the importance of neurobiological mechanisms in the course of working with AI. The authors conclude that art remains an area in which AI cannot yet replace humans, but that AI contributes to the further formation of methods for modifying and rethinking the obtained data into innovative art projects.

HiGANCNN: A Hybrid Generative Adversarial Network and Convolutional Neural Network for Glaucoma Detection

  • Alsulami, Fairouz; Alseleahbi, Hind; Alsaedi, Rawan; Almaghdawi, Rasha; Alafif, Tarik; Ikram, Mohammad; Zong, Weiwei; Alzahrani, Yahya; Bawazeer, Ahmed
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.23-30 / 2022
  • Glaucoma is a chronic neuropathy of the optic nerve that can lead to blindness. Detecting and predicting glaucoma has become possible using deep neural networks; however, detection performance relies on the availability of a large amount of data. We therefore propose several frameworks, including a hybrid of a generative adversarial network and a convolutional neural network, to automate and improve the performance of glaucoma detection. The proposed frameworks are evaluated using five public glaucoma datasets. The framework that uses a Deep Convolutional Generative Adversarial Network (DCGAN) and a pre-trained DenseNet model achieves classification accuracies of 99.6%, 99.08%, 99.4%, 98.69%, and 92.95% on the RIMONE, Drishti-GS, ACRIMA, ORIGA-light, and HRF datasets, respectively. Based on the experimental results and evaluation, the proposed framework competes closely with state-of-the-art methods on the five public glaucoma datasets without requiring any manual preprocessing step.
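
A minimal sketch of the classification half of such a pipeline: an ImageNet-pretrained DenseNet with its head replaced for two-class glaucoma/normal output (PyTorch and torchvision assumed; the GAN-based augmentation stage and the datasets named above are omitted).

```python
# Sketch of a pre-trained DenseNet adapted for binary glaucoma classification;
# the GAN-based augmentation stage from the paper is not shown here.
import torch
import torch.nn as nn
from torchvision import models

def build_glaucoma_classifier(num_classes: int = 2) -> nn.Module:
    """Load ImageNet-pretrained DenseNet-121 and swap in a small classification head."""
    model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
    in_features = model.classifier.in_features
    model.classifier = nn.Linear(in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_glaucoma_classifier()
    fundus_batch = torch.randn(4, 3, 224, 224)   # stand-in fundus images
    logits = model(fundus_batch)
    print(logits.shape)                          # torch.Size([4, 2])
```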