• Title/Summary/Keyword: Image generation

Search results: 1,745

A Study on the Direction of Department of Contents, University Curriculum Introduction According to the Development Status of Image-generating AI

  • Sung Won Park; Jae Yun Park
    • Journal of Information Technology Applications and Management / v.30 no.5 / pp.107-120 / 2023
  • In this study, we investigate the changes in the content production process driven by image-generation AI systems such as Stable Diffusion, Midjourney, and DALL-E, and examine the current status of related university departments and their curricula. Based on this, we argue that content-related departments in art colleges need to re-establish their core competencies and promptly introduce new curricula in order to produce AI-adaptive content talent. Such graduates can be placed directly into the efficient AI content development pipelines already used in industry, and there is a growing need for people who can perform both managerial and technical roles using various AI systems. In conclusion, this study lays the groundwork for establishing the university's role as a source of talent that can lead the content industry in the era of AI content production, and it suggests a direction for curriculum design that focuses on convergence capabilities and hands-on experience, with the goal of cultivating convergent, AI-adaptive content talent for value creation.

The Generation of SPOT True Color Image Using Neural Network Algorithm

  • Chen, Chi-Farn; Huang, Chih-Yung
    • Proceedings of the KSRS Conference / 2003.11a / pp.940-942 / 2003
  • In an attempt to enhance the visual effect of SPOT imagery, this study develops a neural network algorithm to transform SPOT false-color images into simulated true-color images. The method has been tested using Landsat TM and SPOT images. Qualitative and quantitative comparisons indicate a striking similarity between the true and simulated true-color images in terms of both visual appearance and statistical analysis.

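The abstract above describes a per-pixel neural mapping from false-color bands to true color. As a minimal sketch of that idea (not the authors' implementation), the snippet below trains a tiny one-hidden-layer network by gradient descent to map three input band values to three RGB outputs; the linear band-mixing target `M` is a synthetic stand-in for co-registered Landsat TM training pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, b1, W2, b2):
    """One hidden tanh layer, linear output."""
    H = np.tanh(X @ W1 + b1)
    return H @ W2 + b2, H

# Synthetic "training pixels": 3 false-color band values in [0, 1].
X = rng.random((500, 3))
M = np.array([[0.1, 0.7, 0.2],   # hypothetical band-mixing target
              [0.6, 0.3, 0.1],
              [0.2, 0.2, 0.6]])
Y = X @ M

# Tiny network trained by plain full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)
lr = 0.1
for _ in range(2000):
    P, H = forward(X, W1, b1, W2, b2)
    dP = 2 * (P - Y) / len(X)            # gradient of mean squared error
    dW2 = H.T @ dP; db2 = dP.sum(0)
    dH = dP @ W2.T * (1 - H**2)          # backprop through tanh
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred, _ = forward(X, W1, b1, W2, b2)
mse = float(np.mean((pred - Y) ** 2))
```

After training, the network reproduces the band-to-color mapping with an error well below the variance of the target values, which is the qualitative behavior the paper's statistical comparison reports.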

A Development for Web-based Name-plate Production System by using Image Processing

  • Kim, Gibom; Youn, Cho-Jin
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.60.2-60 / 2001
  • In this paper, a manufacturing system and the Internet are combined so that an NC milling machine engraves images and text on a nameplate. The image and text are input through the Internet, the NC tool path is obtained by a thinning algorithm, and the NC part program is generated. The thinning algorithm detects center lines in the image and text using connectivity, and the tool path is obtained along the center line. Experiments were performed, and the thinning algorithm and the G-code generation module were verified.

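Once the thinning step above has reduced a glyph to an ordered centerline, generating the part program is essentially a plunge, a sequence of linear moves, and a retract. The sketch below is an illustrative assumption, not the paper's actual post-processor: feed rate, depths, and the G-code dialect are hypothetical.

```python
def centerline_to_gcode(path_mm, cut_depth=-0.2, safe_z=2.0, feed=120):
    """Emit simple G-code that traces one centerline polyline (mm units)."""
    x0, y0 = path_mm[0]
    lines = [
        "G21",                              # millimetre units
        "G90",                              # absolute coordinates
        f"G0 Z{safe_z:.3f}",                # retract to safe height
        f"G0 X{x0:.3f} Y{y0:.3f}",          # rapid to the start point
        f"G1 Z{cut_depth:.3f} F{feed}",     # plunge into the plate
    ]
    for x, y in path_mm[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}")  # cut along centerline
    lines.append(f"G0 Z{safe_z:.3f}")       # retract when the stroke is done
    return lines

# One hypothetical L-shaped stroke from the thinned image.
program = centerline_to_gcode([(0.0, 0.0), (5.0, 0.0), (5.0, 3.0)])
```

Each disconnected stroke from the thinning output would be converted the same way and the per-stroke programs concatenated.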

Compound Image Identifier Based on Linear Component and Luminance Area (직선요소와 휘도영역 기반 복합 정지영상 인식자)

  • Park, Je-Ho
    • IEMEK Journal of Embedded Systems and Applications / v.6 no.1 / pp.48-54 / 2011
  • As personal, compact devices with image acquisition functionality become readily available to ordinary users, the large volumes of images that must be managed by image-related services and systems demand efficient and effective methods of image identification. The objective of image identification is to associate an image with a unique identifier; moreover, whenever an image identifier is regenerated, the newly generated identifier should be consistent with the previous one. In this paper, we propose three image-identifier generation methods that utilize image features: linear components, luminance areas, and a combination of both. The linear-component-based method exploits the distribution of partial lines over an image, while the luminance-area-based method partitions an image into a number of small areas of the same luminance degree. The third method is proposed to take advantage of both. We also present experimental evaluations of uniqueness and similarity, which show favorable results.
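A minimal sketch of the luminance-area idea follows; the details (quantization levels, hex encoding) are our assumptions for illustration, not the paper's exact algorithm. The image is partitioned by quantized luminance degree, the area (pixel count) of each partition is measured, and the normalized area vector is encoded as a compact, regenerable identifier.

```python
import numpy as np

def luminance_area_id(gray, levels=16):
    """gray: 2-D array of luminance values in [0, 255]. Returns a hex id."""
    # Quantize luminance into a fixed number of degrees.
    q = np.clip((gray.astype(int) * levels) // 256, 0, levels - 1)
    # Area (fraction of pixels) belonging to each luminance degree.
    areas = np.bincount(q.ravel(), minlength=levels) / q.size
    # Coarsely quantize areas so regeneration yields the same identifier.
    code = (areas * 255).astype(np.uint8)
    return bytes(code).hex()

# A horizontal luminance ramp: every degree covers an equal area.
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
ident = luminance_area_id(img)
```

Because the identifier is derived deterministically from the area vector, regenerating it for the same image yields the same string, matching the consistency requirement stated in the abstract.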

Mosaic image generation of AISA Eagle hyperspectral sensor using SIFT method (SIFT 기법을 이용한 AISA Eagle 초분광센서의 모자이크영상 생성)

  • Han, You Kyung; Kim, Yong Il; Han, Dong Yeob; Choi, Jae Wan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.2 / pp.165-172 / 2013
  • In this paper, a high-quality mosaic image is generated from high-resolution hyperspectral strip images using the scale-invariant feature transform (SIFT) algorithm, one of the representative image matching methods. The experiments are applied to AISA Eagle images geo-referenced using GPS/INS information acquired during flight. Matching points between three strips of hyperspectral images are extracted using SIFT, and transformation models between the images are constructed from these points. A mosaic image is then generated using the transformation models constructed from the corresponding images. The optimal band for matching-point extraction is determined by selecting representative bands of the hyperspectral data and analyzing the matching results for each band. The mosaic image generated by the proposed method is visually compared with the mosaic generated from the initial geo-referenced AISA hyperspectral images. From this comparison, we estimate the geometric accuracy of the generated mosaic image and analyze the efficiency of our methodology.
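The "transformation model from matched points" step can be sketched as a least-squares fit. An affine model is assumed here purely for illustration; the abstract does not state which model family the authors fitted, and the matched points below are synthetic stand-ins for SIFT output.

```python
import numpy as np

def fit_affine(src, dst):
    """Solve dst ≈ A @ [x, y, 1]^T for the 2x3 affine matrix A."""
    src = np.asarray(src, float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                 # (N, 3) design matrix
    sol, *_ = np.linalg.lstsq(X, np.asarray(dst, float), rcond=None)
    return sol.T                               # (2, 3)

def apply_affine(A, pts):
    """Warp points from the source strip into the reference strip."""
    pts = np.asarray(pts, float)
    return pts @ A[:, :2].T + A[:, 2]

# Hypothetical matches consistent with a pure translation of (+10, -5).
src = [(0, 0), (100, 0), (0, 80), (100, 80)]
dst = [(10, -5), (110, -5), (10, 75), (110, 75)]
A = fit_affine(src, dst)
warped = apply_affine(A, [(50, 40)])
```

With the model estimated per strip pair, each strip is resampled into the reference frame and the overlaps blended to form the mosaic; a robust estimator such as RANSAC would normally wrap this fit to reject SIFT mismatches.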

Image Synthesis and Multiview Image Generation using Control of Layer-based Depth Image (레이어 기반의 깊이영상 조절을 이용한 영상 합성 및 다시점 영상 생성)

  • Seo, Young-Ho; Yang, Jung-Mo; Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.8 / pp.1704-1713 / 2011
  • This paper proposes a method to generate multiview images from a synthesized image consisting of layered objects. A camera system consisting of a depth camera and an RGB camera is used to capture the objects and extract 3-dimensional information. Considering the position and distance of the image to be synthesized, the objects are composited into a layered image. The synthesized image is then expanded into multiview images using multiview generation tools. In this paper, we synthesized two images consisting of objects and a human, and multiview images with 37 viewpoints were generated from the synthesized images.
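The layer-based compositing step can be sketched with a per-pixel depth test: each layer carries RGB and a depth map, and at every pixel the compositor keeps the color of the nearest valid layer. The depth convention (smaller = closer) and the invalid-depth sentinel below are our assumptions for illustration, not the paper's specification.

```python
import numpy as np

INVALID = np.inf  # sentinel for pixels a layer does not cover

def composite(layers):
    """layers: list of (rgb HxWx3, depth HxW). Returns (rgb, depth)."""
    rgb = np.zeros_like(layers[0][0])
    depth = np.full(layers[0][1].shape, INVALID)
    for layer_rgb, layer_depth in layers:
        closer = layer_depth < depth          # per-pixel depth test
        rgb[closer] = layer_rgb[closer]
        depth[closer] = layer_depth[closer]
    return rgb, depth

# Background at depth 10, a 2x2 foreground object at depth 3.
bg = (np.full((4, 4, 3), 50.0), np.full((4, 4), 10.0))
fg_rgb = np.zeros((4, 4, 3)); fg_rgb[1:3, 1:3] = 200.0
fg_depth = np.full((4, 4), INVALID); fg_depth[1:3, 1:3] = 3.0
out_rgb, out_depth = composite([bg, (fg_rgb, fg_depth)])
```

The merged depth map is exactly what a depth-image-based multiview generator needs: each of the 37 viewpoints is rendered by shifting pixels horizontally in proportion to their depth.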

The Discontinuity Information Extraction of Rock Slope using the 3D Digital Image (3차원 수치영상을 이용한 암반사면의 불연속면 정보 추출)

  • Um, Dae Yong; Lee, Sung Soon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.22 no.3 / pp.233-244 / 2004
  • Recently, digital imagery has seen greatly increasing practical use in several industrial fields, including construction, and interest in 3D digital images that can express real objects realistically has grown considerably. In this study, we developed a 3D digital image generation system based on digital photogrammetry and created 3D digital images of target objects. We verified the 3D digital images through comparative analysis with results produced by a digital photogrammetry system widely used for acquiring 3D information. We also applied the system to surface-information acquisition for a rock slope and investigated discontinuities such as joints. As a result, we were able to create 3D digital images of objects with the system developed in this study and to acquire surface information about the rock slope efficiently.
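Once 3D coordinates are measured on a joint surface, the discontinuity's orientation is conventionally summarized as dip and dip direction, which follow from the normal of the plane through three measured points. The axis conventions below (z up, y pointing north) are illustrative assumptions; a real survey would use the project's coordinate system.

```python
import math

def plane_normal(p1, p2, p3):
    """Upward-pointing normal of the plane through three 3-D points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    if n[2] < 0:                      # make the normal point upward
        n = [-c for c in n]
    return n

def dip_and_direction(p1, p2, p3):
    """Dip angle (degrees from horizontal) and dip direction (azimuth)."""
    nx, ny, nz = plane_normal(p1, p2, p3)
    horiz = math.hypot(nx, ny)
    dip = math.degrees(math.atan2(horiz, nz))           # 0 = horizontal
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360    # azimuth from N
    return dip, dip_dir

# A plane dropping 1 m per metre eastward: dip 45 toward azimuth 090.
dip, ddir = dip_and_direction((0, 0, 0), (0, 1, 0), (1, 0, -1))
```

In practice the plane would be fitted by least squares to many surface points from the 3D digital image rather than to exactly three.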

Image Super-Resolution for Improving Object Recognition Accuracy (객체 인식 정확도 개선을 위한 이미지 초해상도 기술)

  • Lee, Sung-Jin; Kim, Tae-Jun; Lee, Chung-Heon; Yoo, Seok Bong
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.6 / pp.774-784 / 2021
  • Object detection and recognition is a very important task in the field of computer vision, and related research is being actively conducted. In practice, however, recognition accuracy is often degraded by the resolution mismatch between the training image data and the test image data. To solve this problem, we designed and developed an integrated object recognition and super-resolution framework, proposing an image super-resolution technique to improve object recognition accuracy. In detail, 11,231 license-plate training images were built through web crawling and artificial data generation, and the image super-resolution neural network was trained with an objective function defined to be robust to image flips. To verify the performance of the proposed algorithm, we ran the trained super-resolution and recognition models on 1,999 test images and confirmed that the proposed super-resolution technique improves character-recognition accuracy.
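The abstract mentions an objective function that is robust to image flips but does not define it. One plausible form, shown below purely as an assumption, averages the reconstruction error of an image and of its horizontally flipped version (super-resolve the flip, flip back, compare); the nearest-neighbor upsampler is a stand-in for the trained network.

```python
import numpy as np

def flip_robust_loss(sr_fn, lr_img, hr_img):
    """Mean of the plain MSE and the flip/SR/unflip MSE for one pair."""
    direct = np.mean((sr_fn(lr_img) - hr_img) ** 2)
    flipped = np.flip(lr_img, axis=1)          # horizontal flip
    undone = np.flip(sr_fn(flipped), axis=1)   # flip back after SR
    return 0.5 * (direct + np.mean((undone - hr_img) ** 2))

# Stand-in "super-resolver": nearest-neighbour 2x upsampling.
def sr_fn(img):
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

lr = np.arange(16.0).reshape(4, 4)
hr = sr_fn(lr)                  # perfect target for the stand-in model
loss = flip_robust_loss(sr_fn, lr, hr)
```

A model that treats an image and its mirror inconsistently is penalized by the second term, which is useful for license plates whose characters must survive either orientation of the augmentation.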

Facial Feature Based Image-to-Image Translation Method

  • Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4835-4848 / 2020
  • The recent expansion of the digital content market is increasing the technical demand for various facial image transformations within virtual environments. Recent image translation technology enables changes between various domains. However, current image-to-image translation techniques do not provide stable performance through unsupervised learning, especially for shape learning in face transformation. This is because the face is a highly sensitive feature, and the quality of the resulting image is significantly degraded if the transitions in the eyes, nose, and mouth are not performed effectively. We herein propose a new unsupervised method that can transform an in-the-wild face image into another face style through radical transformation. Specifically, the proposed method applies two face-specific feature loss functions to a generative adversarial network. The proposed technique achieves stable conversion to other domains while maintaining the image characteristics of the eyes, nose, and mouth.
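The abstract says two face-specific feature losses are added to the GAN objective but does not define them. The sketch below is an illustrative assumption, not the paper's method: an LSGAN-style adversarial term combined with a masked L1 term that weights the eye/nose/mouth regions, which is one common way to protect sensitive facial features during translation.

```python
import numpy as np

def generator_loss(d_fake, gen_img, ref_img, mask, w_adv=1.0, w_face=10.0):
    """Adversarial term plus a masked L1 term on face-feature regions."""
    adv = np.mean((d_fake - 1.0) ** 2)          # LSGAN generator term
    face = np.sum(np.abs(gen_img - ref_img) * mask) / max(mask.sum(), 1)
    return w_adv * adv + w_face * face

d_fake = np.array([0.5])                        # discriminator score on a fake
gen = np.zeros((4, 4)); ref = np.ones((4, 4))
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0   # hypothetical "eye" region
loss = generator_loss(d_fake, gen, ref, mask)
```

The heavy `w_face` weight expresses the abstract's point: errors inside the eye/nose/mouth mask dominate the objective, so the generator cannot trade them away for a slightly better adversarial score.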

Application of Deep Learning to Solar Data: 1. Overview

  • Moon, Yong-Jae; Park, Eunsu; Kim, Taeyoung; Lee, Harim; Shin, Gyungin; Kim, Kimoon; Shin, Seulki; Yi, Kangwoo
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.51.2-51.2 / 2019
  • Multi-wavelength observations have become very popular in astronomy. Even though there are correlations among images from different sensors, it is not easy to translate from one to another. In this study, we apply a deep learning method for image-to-image translation, based on conditional generative adversarial networks (cGANs), to solar images. To examine the validity of the method for scientific data, we consider several types of image pairs: (1) generation of SDO/EUV images from SDO/HMI magnetograms, (2) generation of backside magnetograms from STEREO/EUVI images, (3) generation of EUV and X-ray images from Carrington sunspot drawings, and (4) generation of solar magnetograms from Ca II images. It is very impressive that the AI-generated images are quite consistent with the actual ones. In addition, we apply a convolutional neural network to the forecasting of solar flares and find that our method outperforms the conventional one. Our study also shows that forecasting solar proton flux profiles with a Long Short-Term Memory (LSTM) network is better than the autoregressive method. We discuss several applications of these methodologies for scientific research.
