• Title/Summary/Keyword: Artificial Intelligence Art


Hybrid Fuzzy Association Structure for Robust Pet Dog Disease Information System

  • Kim, Kwang Baek;Song, Doo Heon;Park, Hyun Jun
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.4
    • /
    • pp.234-240
    • /
    • 2021
  • As the number of pet dog-related businesses rises rapidly, there is an increasing need for reliable pet dog health information systems for casual pet owners, especially those caring for older dogs. Our goal is to implement a mobile pre-diagnosis system that can provide a first-hand pre-diagnosis and an appropriate coping strategy when the pet owner observes abnormal symptoms. Our previous attempt, based on the fuzzy C-means family of inference methods, performs well when only relevant symptoms are provided in the query, but this assumption is not realistic. Thus, in this paper, we propose a hybrid inference structure that combines fuzzy associative memory and a double-layered fuzzy C-means algorithm to infer the probable disease robustly, even when noisy symptoms are present in the query provided by the user. In the experiment, it is verified that our proposed system is more robust when noisy (irrelevant) input symptoms are provided, and the inferred results (probable diseases) are more cohesive than those generated by the single-phase fuzzy C-means inference engine.
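
For readers unfamiliar with the fuzzy C-means family mentioned above, the sketch below shows the plain (single-phase) algorithm in Python as an illustrative baseline on toy data; it is not the authors' hybrid FAM + double-layered structure, and the symptom/disease interpretation in the comments is an assumption.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)          # memberships per sample sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Distance of every sample to every center, shape (n_clusters, n_samples)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2 / (m - 1))
        U_new = 1.0 / ((dist[:, None, :] / dist[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage: rows are symptom vectors; memberships over clusters would be read
# as graded evidence for the corresponding (hypothetical) disease groups.
X = np.random.default_rng(1).random((30, 5))
centers, U = fuzzy_c_means(X, n_clusters=3)
print(U.sum(axis=0)[:5])   # each column (one sample) sums to ~1
```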

A Remote Sensing Scene Classification Model Based on EfficientNetV2L Deep Neural Networks

  • Aljabri, Atif A.;Alshanqiti, Abdullah;Alkhodre, Ahmad B.;Alzahem, Ayyub;Hagag, Ahmed
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.10
    • /
    • pp.406-412
    • /
    • 2022
  • Scene classification of very high-resolution (VHR) imagery can attribute semantics to land cover in a variety of domains. Real-world application requirements have not been addressed by conventional techniques for remote sensing image classification. Recent research has demonstrated that deep convolutional neural networks (CNNs) are effective feature extractors, and these approaches rely primarily on semantic information to improve classification performance. However, because abstract, global semantic information makes it difficult for a network to correctly classify scene images with similar structures and high inter-class similarity, classification accuracy suffers. We propose a VHR remote sensing image classification model that extracts global features from the original VHR image using an EfficientNetV2-L CNN pre-trained to distinguish similar classes. The image is then classified using a multilayer perceptron (MLP). This method was evaluated on two benchmark remote sensing datasets: the 21-class UC Merced and the 38-class PatternNet. Compared with other state-of-the-art models, the proposed model significantly improves performance.
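
The backbone-plus-MLP pipeline described above can be sketched in Keras as follows; the input size, layer widths, dropout rate, and training setup are assumptions for illustration, not the paper's reported configuration.

```python
import tensorflow as tf

# Pre-trained EfficientNetV2-L as a global feature extractor:
# include_top=False with average pooling yields one feature vector per image.
backbone = tf.keras.applications.EfficientNetV2L(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
backbone.trainable = False          # feature extraction only, as assumed here

num_classes = 21                    # e.g., UC Merced; 38 for PatternNet

inputs = tf.keras.Input(shape=(224, 224, 3))
features = backbone(inputs, training=False)
# Simple MLP classification head on top of the global features
x = tf.keras.layers.Dense(512, activation="relu")(features)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```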

Trends of Compiler Development for AI Processor (인공지능 프로세서 컴파일러 개발 동향)

  • Kim, J.K.;Kim, H.J.;Cho, Y.C.P.;Kim, H.M.;Lyuh, C.G.;Han, J.;Kwon, Y.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.2
    • /
    • pp.32-42
    • /
    • 2021
  • The rapid growth of deep-learning applications has spurred the R&D of artificial intelligence (AI) processors. A dedicated software framework, such as a compiler and runtime APIs, is required to achieve maximum processor performance. There are various compilers and frameworks for AI training and inference. In this study, we present the features and characteristics of AI compilers, training frameworks, and inference engines. In addition, we focus on the internals of compiler frameworks, which are based on either basic linear algebra subprograms or intermediate representation. For an in-depth insight, we present the compiler infrastructure, internal components, and operation flow of ETRI's "AI-Ware." The software framework's significant role is evidenced by the optimized neural processing unit code produced by the compiler after various optimization passes, such as scheduling, architecture-aware optimization, schedule selection, and power optimization. We conclude the study with thoughts about the future of state-of-the-art AI compilers.
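
As a rough illustration of how a compiler framework strings optimization passes together, the toy pass pipeline below mirrors the kinds of stages named in the abstract; the pass names, IR structure, and transformations are hypothetical and do not represent ETRI's AI-Ware.

```python
from typing import Callable, List

# A "pass" is any function that transforms an intermediate representation (IR).
Pass = Callable[[dict], dict]

def run_pipeline(ir: dict, passes: List[Pass]) -> dict:
    """Apply compiler passes in order, as a typical pass manager would."""
    for p in passes:
        ir = p(ir)
    return ir

# Hypothetical passes mirroring the stages mentioned in the abstract.
def schedule_ops(ir):              # operator scheduling
    ir["schedule"] = sorted(ir["ops"])      # placeholder ordering
    return ir

def tile_for_architecture(ir):     # architecture-aware optimization
    ir["tile_size"] = 64                    # placeholder tiling choice
    return ir

def optimize_power(ir):            # power-oriented optimization
    ir["clock_gating"] = True
    return ir

ir = {"ops": ["conv2d", "relu", "matmul"]}
ir = run_pipeline(ir, [schedule_ops, tile_for_architecture, optimize_power])
print(ir)
```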

Comparing automated and non-automated machine learning for autism spectrum disorders classification using facial images

  • Elshoky, Basma Ramdan Gamal;Younis, Eman M.G.;Ali, Abdelmgeid Amin;Ibrahim, Osman Ali Sadek
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.613-623
    • /
    • 2022
  • Autism spectrum disorder (ASD) is a developmental disorder associated with cognitive and neurobehavioral impairments. It affects a person's behavior and performance, as well as verbal and non-verbal communication in social interactions. Early screening and diagnosis of ASD are essential and helpful for early educational planning and treatment, the provision of family support, and the provision of appropriate and timely medical support for the child. Thus, developing automated methods for diagnosing ASD is becoming an essential need. Herein, we investigate various machine learning methods for building predictive models that diagnose ASD in children using facial images. To achieve this, we used an autistic children dataset containing 2936 facial images of children with autism and typically developing children. We applied classical machine learning methods, such as support vector machines and random forests, deep-learning methods, and a state-of-the-art approach, automated machine learning (AutoML), and compared the results obtained from these techniques. AutoML achieved the highest performance, approximately 96% accuracy, via Hyperopt and the tree-based pipeline optimization tool (TPOT). Furthermore, AutoML methods enabled us to easily find the best parameter settings without manual effort for feature engineering.
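
A minimal sketch of the AutoML route described above, using the TPOT library mentioned in the abstract; the feature files, label encoding, and search budget are placeholders rather than the authors' settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

# Hypothetical pre-extracted facial-image features and labels
# (assumed encoding: 0 = typically developing, 1 = ASD).
X = np.load("features.npy")
y = np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# TPOT searches over preprocessing + model pipelines with genetic programming.
automl = TPOTClassifier(generations=5, population_size=20,
                        cv=5, scoring="accuracy",
                        random_state=42, verbosity=2)
automl.fit(X_train, y_train)
print("Held-out accuracy:", automl.score(X_test, y_test))
automl.export("best_pipeline.py")   # writes the winning pipeline as Python code
```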

Generation of Super-Resolution Benchmark Dataset for Compact Advanced Satellite 500 Imagery and Proof of Concept Results

  • Yonghyun Kim;Jisang Park;Daesub Yoon
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.4
    • /
    • pp.459-466
    • /
    • 2023
  • In the last decade, the dramatic advancement of artificial intelligence and the development of various deep learning techniques have contributed significantly to remote sensing and satellite image applications. Among many prominent areas, super-resolution research has seen substantial growth with the release of several benchmark datasets and the rise of generative adversarial network-based studies. However, most previously published remote sensing benchmark datasets represent spatial resolutions of approximately 10 meters, which limits their direct use for super-resolution of small objects at centimeter-level spatial resolution. Furthermore, if a dataset lacks a global spatial distribution and is specialized in particular land covers, the consequent lack of feature diversity can directly impact quantitative performance and prevent the formation of robust foundation models. To overcome these issues, this paper proposes a method for generating benchmark datasets by simulating the modulation transfer functions of the sensor. The proposed approach leverages a simulation method with a solid theoretical foundation that is well recognized in image fusion. Additionally, the generated benchmark dataset is applied to state-of-the-art super-resolution base models for quantitative and visual analysis, and the shortcomings of existing datasets are discussed. Through these efforts, we anticipate that the proposed benchmark dataset will facilitate a variety of super-resolution research in Korea in the near future.
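
The degradation step behind this kind of benchmark generation can be sketched as below, where a Gaussian blur stands in for the sensor's modulation-transfer-function roll-off before decimation; the kernel width, scale factor, and data are assumptions, not the paper's MTF model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_lr_hr_pair(hr: np.ndarray, scale: int = 4, sigma: float = 1.2):
    """Blur the high-resolution image, then decimate it to produce the LR input."""
    blurred = gaussian_filter(hr, sigma=sigma)   # approximate MTF roll-off
    lr = blurred[::scale, ::scale]               # simple decimation
    return lr, hr

hr = np.random.rand(512, 512).astype(np.float32)  # placeholder HR tile
lr, hr = make_lr_hr_pair(hr, scale=4)
print(lr.shape, hr.shape)                         # (128, 128) (512, 512)
```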

A Study on Character Design Using [Midjourney] Application

  • Chen Xi;Jeanhun Chung
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.2
    • /
    • pp.409-414
    • /
    • 2023
  • In recent years, the emergence of AI image generation software, represented by [Midjourney], has given great impetus to the field of AI-assisted art creation. Compared with traditional hand-painted digital painting aided by electronic equipment, these tools break the traditional logic of animation character creation. This paper analyzes the application of AI technology to animation character design through the practice of designing two-dimensional animation characters, which is having a significant impact on the productivity and innovation of animation design and character modeling. The key results of the analysis indicate that AI technology, particularly through the use of "Midjourney," enables the automation of certain design tasks, provides innovative approaches, and generates visually appealing and realistic characters. In conclusion, the integration of AI technology, specifically the application of "Midjourney," brings a new dimension to animation character design. AI image generation software facilitates streamlined workflows, sparks creativity, and improves the overall quality of animated characters. As the animation industry continues to evolve, AI-assisted tools like "Midjourney" hold great potential for further advancement and innovation.

Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1794-1806
    • /
    • 2023
  • This study presents a method for capturing photographs of users as input and converting them into 2D character animation sprites using a generative adversarial network-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the 2D sprite generation process that uses the proposed technique, a sequence of images is extracted from real-life footage captured by the user, and these are combined with character images from within the game. Our research aims to leverage cutting-edge deep learning-based image manipulation techniques, such as the GAN-based motion transfer network (impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites from a single image. The proposed technique enables the creation of diverse animations and motions from just one image. By utilizing these advancements, we focus on enhancing productivity in the game and animation industry through improved efficiency and streamlined production processes. By employing state-of-the-art techniques, our research enables the generation of 2D sprite images with various motions, offering significant potential for boosting productivity and creativity in the industry.
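
The first stage described above, turning a user-captured motion video into a frame sequence, can be sketched with OpenCV as below; the motion-transfer (impersonator) and U2-Net matting stages are separate models and appear only as comments, and the file name is a placeholder.

```python
import cv2

def extract_frames(video_path: str, every_n: int = 2):
    """Read a video and return every n-th frame as an RGB array."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames

frames = extract_frames("user_motion.mp4")   # hypothetical input clip
# Each frame would then drive the motion-transfer network to pose the game
# character, and the result would be matted with a U2-Net-style mask.
```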

A Research on 3D Texture Production Using Artificial Intelligence Software

  • Ke Ma;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.4
    • /
    • pp.178-184
    • /
    • 2023
  • AI image generation technology has become a popular research direction in the field of AI; it is widely used in digital art and conceptual design and can also be used in the process of 3D texture mapping. This paper introduces a production process for 3D texture maps using AI image technology and discusses whether it can serve as a new way to enrich the 3D texture mapping workflow. Two AI deep learning models, Stable Diffusion and Midjourney, were combined to generate high-quality AI textures. Finally, the Image-to-Material function of Substance 3D Sampler was used to convert the AI-generated textures into PBR 3D texture maps, which were then applied in a 3D environment. This study shows that 3D texture maps generated by AI image generation technology can be used in 3D environments; the process is fast and efficient, and the map styles are richly varied and can be quickly adjusted and modified according to the design scheme. However, some AI texture maps need to be manually modified before they can be used. With the continuous development of AI technology, AI-generated image technology holds great potential for further development and innovation in the 3D content production process.

Implementation of 3D mobile game using radiosity model and AI algorithm (Radiosity model과 AI 알고리즘을 이용한 모바일 게임 구현)

  • Kim, Seongdong;Chin, Seonga;Cho, Teresa
    • Journal of Korea Game Society
    • /
    • v.17 no.1
    • /
    • pp.7-16
    • /
    • 2017
  • 3D game graphics technology has become an important factor in the content field as game content has developed. In particular, game character technology provides realistic rendering and visual pleasure, as well as an intermediate step toward immersion, in which the game creates an optical illusion that lets the player enjoy heroic adventures. A high level of character expressiveness in 3D games is a key factor in the development process, requiring detail and care in character setting work [3]. In this paper, we propose a character representation technique for mobile games that uses a mathematical model of radiosity energy, a spectral radiance model, and a ray-tracing model, implemented with the Unity 3D game engine and a sensible AI algorithm. As a practical application to game content, we found that the surface projection in the rendering process and the game simulation change according to the lighting conditions of the game environment, so that high-quality game characters were simulated.
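
The radiosity model referenced above follows the classical relation B_i = E_i + rho_i * sum_j F_ij * B_j; a minimal iterative solver on placeholder patch data is sketched below as an illustration, not the paper's Unity implementation.

```python
import numpy as np

def solve_radiosity(E, rho, F, n_iter=100):
    """Iteratively solve B = E + rho * (F @ B) for patch radiosities B,
    given emission E, reflectivity rho, and form-factor matrix F."""
    B = E.copy()
    for _ in range(n_iter):
        B = E + rho * (F @ B)
    return B

n = 4                                               # toy scene with 4 patches
E = np.array([1.0, 0.0, 0.0, 0.0])                  # only patch 0 emits light
rho = np.array([0.5, 0.7, 0.7, 0.3])                # diffuse reflectivities
F = np.full((n, n), 0.2); np.fill_diagonal(F, 0.0)  # placeholder form factors
print(solve_radiosity(E, rho, F))
```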

Analysis of Two-Way Communication Virtual Being Technology and Characteristics in the Content Industry (콘텐츠 산업에서 나타난 양방향 소통 가상존재 기술 및 특성 분석)

  • Kim, Jungho;Park, Jin Wan;Yoo, Taekyung
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.507-517
    • /
    • 2020
  • Along with the development of computer graphics, real-time rendering, motion capture, and artificial intelligence technology, virtual beings that enable two-way communication have emerged in the content industry. Although the commercialization of these technologies and platforms is producing two-way communication virtual beings, there is a lack of analysis of what characteristics such virtual beings have and how they can be used in each field. Therefore, through a survey of the technical background and case studies of virtual being production, this study analyzes two-way communication virtual beings with respect to the characteristics necessary for emotional exchange. These characteristics were divided into interaction, individuality, and autonomy, and cases were classified by which characteristic they emphasize and by how two-way communication virtual beings are used in the content field. As a basic study of virtual beings that analyzes the technical background and characteristics required for two-way communication, this work is expected to provide significant implications for research on content production and utilization using virtual beings.