• Title/Summary/Keyword: image synthesizing


Development of an Image Data Augmentation Apparatus to Evaluate CNN Model (CNN 모델 평가를 위한 이미지 데이터 증강 도구 개발)

  • Choi, Youngwon;Lee, Youngwoo;Chae, Heung-Seok
    • Journal of Software Engineering Society
    • /
    • v.29 no.1
    • /
    • pp.13-21
    • /
    • 2020
  • As CNN models are applied to various domains such as image classification and object detection, the performance of CNN models used in safety-critical systems such as autonomous vehicles should be reliable. To evaluate whether a CNN model can sustain its performance in various environments, we developed an image data augmentation apparatus that generates images with changed backgrounds. When an image containing an object is entered into the apparatus, it extracts the object from the entered image and generates composed images by synthesizing the object image with collected background images. As a method of evaluating a CNN model, the apparatus generates new test images from the original test images, and we evaluate the CNN model with the new test images. As a case study, we generated new test images from Pascal VOC2007 and evaluated a YOLOv3 model with the new images. As a result, the mAP on the new test images was about 0.11 lower than the mAP on the original test images.
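
A minimal sketch of this kind of background-replacement augmentation, assuming a binary object mask is available; the function name and the OpenCV/NumPy pipeline are illustrative, not the paper's actual tool:

```python
import cv2
import numpy as np

def compose_on_background(object_img, object_mask, background_img):
    """Paste a masked foreground object onto a new background image.

    object_img:     H x W x 3 BGR image containing the object
    object_mask:    H x W uint8 mask (255 = object pixels, 0 = background)
    background_img: BGR image of any size used as the new background
    """
    h, w = object_img.shape[:2]
    # Resize the background to the object image resolution.
    background = cv2.resize(background_img, (w, h))
    mask = (object_mask > 0)[..., None]  # H x W x 1 boolean
    # Keep object pixels, take everything else from the new background.
    return np.where(mask, object_img, background).astype(np.uint8)

# Hypothetical usage: build a new test image for re-evaluating a detector.
# obj  = cv2.imread("voc_object.jpg")
# mask = cv2.imread("voc_object_mask.png", cv2.IMREAD_GRAYSCALE)
# bg   = cv2.imread("collected_background.jpg")
# new_test_image = compose_on_background(obj, mask, bg)
```

The composited images can then be fed to the trained detector to compare its mAP against the original test set.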

Stereo Image Composition Using Poisson Object Editing (포아송 객체 편집을 이용한 스테레오 영상 합성)

  • Baek, Eu-Tteum;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.8
    • /
    • pp.453-458
    • /
    • 2014
  • In this paper, we propose a stereo image composition method based on Poisson image editing. If we synthesize images without considering their depth values, the result may be unnatural. When we segment an image into its background and foreground regions using GrabCut, we take their geometric positions into account to mix color tones, so the image is composited more naturally. After synthesizing the images, we apply a blurring operation around the object boundaries so that the foreground object and background are composited more seamlessly. In addition, we can adjust the distance of the object by setting arbitrary depth values and automatically generating the right-view color and depth images. Experimental results show that the proposed stereo image composition method produces naturally synthesized stereo images, and the improvements were also confirmed subjectively.
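
A rough OpenCV sketch of the monoscopic part of such a pipeline, assuming GrabCut segmentation followed by Poisson (seamless) cloning; the stereo and depth handling described in the paper is not shown, and the parameters are illustrative:

```python
import cv2
import numpy as np

def composite_foreground(src, dst, rect, center):
    """Cut a foreground object out of `src` with GrabCut and blend it
    into `dst` at `center` using Poisson (seamless) cloning.

    rect:   (x, y, w, h) rough bounding box of the object in `src`
    center: (x, y) position in `dst` where the object centre should land
            (the object is assumed to fit inside `dst` at this position)
    """
    mask = np.zeros(src.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(src, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Probable/definite foreground -> 255, everything else -> 0.
    fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                       255, 0).astype(np.uint8)
    # Poisson blending mixes gradients so colour tones match the background.
    return cv2.seamlessClone(src, dst, fg_mask, center, cv2.NORMAL_CLONE)
```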

Dual-Sensitivity Mode CMOS Image Sensor for Wide Dynamic Range Using Column Capacitors

  • Lee, Sanggwon;Bae, Myunghan;Choi, Byoung-Soo;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology
    • /
    • v.26 no.2
    • /
    • pp.85-90
    • /
    • 2017
  • A wide dynamic range (WDR) CMOS image sensor (CIS) was developed with a specialized readout architecture that realizes high-sensitivity (HS) and low-sensitivity (LS) reading modes. The proposed pixel is basically a three-transistor (3T) active pixel sensor (APS) structure with an additional transistor. In the developed WDR CIS, only one of the two modes, the HS mode for relatively weak light intensity or the LS mode for strong light intensity, is activated by an external control signal, and the selected signal is then read through each column-parallel readout circuit. The LS mode is implemented with column capacitors and a feedback structure that adjusts the column capacitor size. In particular, the feedback circuit makes it possible to change the column node capacitance automatically according to the incident light intensity. As a result, the proposed CIS achieved a wide dynamic range of 94 dB by synthesizing the output signals of both modes. The prototype CIS was implemented with 0.18-μm 1-poly 6-metal (1P6M) standard CMOS technology, and the number of effective pixels is 176 (H) × 144 (V).
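
The column-capacitor circuitry itself cannot be expressed in software, but as a hedged illustration of how HS and LS readouts might be synthesized into one wide-dynamic-range signal, a simple fusion rule could look like the sketch below; the saturation threshold and gain ratio are assumptions, not values from the paper:

```python
import numpy as np

def fuse_dual_sensitivity(hs_signal, ls_signal, gain_ratio,
                          hs_saturation=0.95):
    """Combine high-sensitivity (HS) and low-sensitivity (LS) readouts
    into a single wide-dynamic-range signal.

    hs_signal, ls_signal: normalized readouts in [0, 1]
    gain_ratio:           sensitivity ratio between HS and LS modes
    """
    # Where the HS pixel is not saturated, trust it; otherwise use the
    # LS reading rescaled by the known sensitivity ratio.
    use_hs = hs_signal < hs_saturation
    return np.where(use_hs, hs_signal, ls_signal * gain_ratio)
```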

A Color Study of the Sky Area Focused on the Van Gogh's Paintings

  • Xiaodi, Cui;Xinyi, Shan;Jeanhun, Chung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.113-119
    • /
    • 2023
  • This research analyzes the importance and influence of color expression on the psychological and emotional changes of visual perception in the creation of artworks. It takes the sky element in the works of Vincent Willem van Gogh, a representative Dutch post-Impressionist painter, as its basic research object, to show how the color expression of the same subject reflects the creator's inner emotions. After synthesizing previous research and investigations on Van Gogh, this research summarizes the works containing sky elements according to his four creative stages and selects representative works for color analysis and comparison. By comparing the colors of the same sky elements, we can trace how Van Gogh conveyed psychological and emotional changes through his expression of color, which can serve as an inspiration for the creation of painting art.
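
The abstract does not specify the color analysis procedure; one common way to carry out this kind of comparison is to extract the dominant colors of a cropped sky region with k-means clustering, as in the hypothetical sketch below:

```python
import cv2
import numpy as np

def dominant_colors(sky_region_bgr, k=5):
    """Extract the k dominant colours of an image region (e.g. a cropped
    sky area) with k-means clustering, returned with their pixel ratios."""
    pixels = sky_region_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    ratios = np.bincount(labels.flatten(), minlength=k) / len(labels)
    return centers.astype(np.uint8), ratios
```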

View synthesis with sparse light field for 6DoF immersive video

  • Kwak, Sangwoon;Yun, Joungil;Jeong, Jun-Young;Kim, Youngwook;Ihm, Insung;Cheong, Won-Sik;Seo, Jeongil
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.24-37
    • /
    • 2022
  • Virtual view synthesis, which generates novel views with characteristics similar to those of actually acquired images, is an essential technical component for delivering immersive video with realistic binocular disparity and smooth motion parallax. It is typically achieved by warping the given images to the designated viewing position, blending the warped images, and filling the remaining holes. For 6DoF use cases with large motion, patch-based warping is preferable to conventional per-pixel methods. In that case, the quality of the synthesized image depends strongly on how the warped candidates are blended. Based on this observation, we propose a novel blending architecture that exploits the similarity of ray directions and the distribution of depth values. Results show that the proposed method synthesizes better views than the well-designed synthesizers used within the Moving Picture Experts Group immersive video activity (MPEG-I). Moreover, we describe a GPU-based implementation that synthesizes and renders views in real time, considering the applicability to immersive video services.
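
The exact weighting used by the proposed blender is not given in the abstract; a hedged sketch of blending warped candidates by ray-direction similarity and depth distribution might look like this, where the array shapes and parameters are assumptions:

```python
import numpy as np

def blend_warped_views(warped_colors, warped_depths, ray_cosines,
                       depth_sigma=0.1, alpha=4.0):
    """Blend several warped candidate views into one novel view.

    warped_colors: (N, H, W, 3) colours warped from N source views
    warped_depths: (N, H, W)    depth of each warped pixel
    ray_cosines:   (N, H, W)    cosine between source and target ray directions
    """
    # Favour sources whose rays point nearly the same way as the target ray...
    w_ray = np.clip(ray_cosines, 0.0, 1.0) ** alpha
    # ...and whose warped depth is close to the front-most candidate depth.
    z_min = warped_depths.min(axis=0, keepdims=True)
    w_depth = np.exp(-((warped_depths - z_min) ** 2) / (2.0 * depth_sigma ** 2))
    weights = w_ray * w_depth
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights[..., None] * warped_colors).sum(axis=0)
```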

Non-Homogeneous Haze Synthesis for Hazy Image Depth Estimation Using Deep Learning (불균일 안개 영상 합성을 이용한 딥러닝 기반 안개 영상 깊이 추정)

  • Choi, Yeongcheol;Paik, Jeehyun;Ju, Gwangjin;Lee, Donggun;Hwang, Gyeongha;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.28 no.3
    • /
    • pp.45-54
    • /
    • 2022
  • Image depth estimation is a technology that underlies various kinds of image analysis. As deep learning-based analysis methods have emerged, studies applying deep learning to image depth estimation are being actively conducted. Currently, most deep learning-based depth estimation models are trained with clean, ideal images. However, due to the lack of data for adverse conditions such as haze or fog, depth estimation may not work well in such environments. It is hard to collect enough images under these conditions, and obtaining non-homogeneous haze data in particular is very difficult. To solve this problem, we propose a method for synthesizing non-homogeneous haze images and a training scheme for a monocular depth estimation deep learning model that uses them. Considering that haze mainly occurs outdoors, the datasets are constructed mainly from outdoor images. Experimental results show that the model trained with the proposed method estimates depth well on both synthesized and real haze data.
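
A minimal sketch of non-homogeneous haze synthesis under the standard atmospheric scattering model I = J·t + A·(1−t) with a spatially varying scattering coefficient; the low-frequency noise model and parameter ranges below are assumptions, not necessarily the paper's:

```python
import cv2
import numpy as np

def synthesize_nonhomogeneous_haze(image, depth, beta_range=(0.5, 2.0),
                                   airlight=0.9, noise_scale=64):
    """Add spatially varying haze to a clean image using the atmospheric
    scattering model  I = J * t + A * (1 - t),  t = exp(-beta * depth).

    image: H x W x 3 float image in [0, 1]
    depth: H x W depth map (larger = farther), roughly normalized to [0, 1]
    """
    h, w = depth.shape
    # Low-frequency random field that makes the scattering coefficient
    # beta vary across the image (the "non-homogeneous" part).
    noise = np.random.rand(h // noise_scale + 1,
                           w // noise_scale + 1).astype(np.float32)
    noise = cv2.resize(noise, (w, h), interpolation=cv2.INTER_CUBIC)
    beta = beta_range[0] + (beta_range[1] - beta_range[0]) * noise
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission
    return image * t + airlight * (1.0 - t)
```

The hazy image and the original depth map then form a training pair for the monocular depth estimation model.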

An Ambient Light Control System using The Image Difference between Video Frames (인접한 동영상 프레임의 차영상을 이용한 디스플레이 주변 조명효과의 제어)

  • Shin, Su-Chul;Han, Soon-Hun
    • Journal of the Korea Society for Simulation
    • /
    • v.19 no.3
    • /
    • pp.7-16
    • /
    • 2010
  • In this paper, we propose an ambient light control method based on the difference between adjacent frames of a video. The proposed method consists of three steps: 1) extract the dominant color of the current frame; 2) compute the amount of change and the representative color of the changed region from the difference image; 3) produce a new representative color. The difference image is created from two frames transformed into the YUV color space, and the summed color difference of each pixel is used as the amount of change. The new representative color is created by synthesizing the current color and the changed-region color in proportion to the amount of change. We compare the variation of the light effect over time with and without the proposed method for the same video. The results show that the new method generates more dynamic light effects.
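
A hedged sketch of the three steps, using the mean color as a stand-in for dominant-color extraction and a simple mean threshold for the changed region; the paper's exact thresholds are not specified in the abstract:

```python
import cv2
import numpy as np

def ambient_color(prev_frame, cur_frame):
    """Compute the next ambient-light colour from two adjacent BGR frames:
    1) dominant colour of the current frame (mean colour here, for brevity),
    2) amount of change and representative colour of the changed region,
       taken from the YUV difference image,
    3) blend the two colours in proportion to the amount of change.
    """
    cur_color = cur_frame.reshape(-1, 3).mean(axis=0)
    prev_yuv = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2YUV).astype(np.float32)
    cur_yuv = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2YUV).astype(np.float32)
    diff = np.abs(cur_yuv - prev_yuv).sum(axis=2)   # per-pixel colour change
    changed = diff > diff.mean()                    # changed region
    change_ratio = changed.mean()                   # amount of change
    if changed.any():
        changed_color = cur_frame[changed].mean(axis=0)
    else:
        changed_color = cur_color
    # New representative colour: more change -> follow the changed region.
    return (1.0 - change_ratio) * cur_color + change_ratio * changed_color
```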

Trend of Technologies and Standardizations for Mobile Augmented Reality (모바일 증강현실 기술 및 표준화 동향)

  • Lee, Yong-Hwan;Lee, Yukyong;Park, Je-Ho;Yoon, Kyoungro;Kim, Cheong Ghil;Kim, Youngseop
    • Journal of Satellite, Information and Communications
    • /
    • v.8 no.1
    • /
    • pp.83-88
    • /
    • 2013
  • Recently, as the number of smartphone users has increased, applications for product brochures and advertising services using augmented reality technology have also grown rapidly. Augmented reality refers to providing a composite view of the real world and a virtual world, synthesizing information so that virtual content appears to exist in the actual real-world environment. In this paper, we present the trends of core technologies and standardization related to augmented reality in the mobile environment, and discuss the necessity of standards for image-based augmented reality.

Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection (강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool)

  • Jeon, MyungHwan;Lee, Yeongjun;Shin, Young-Sik;Jang, Hyesu;Yeu, Taekyeong;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.2
    • /
    • pp.139-149
    • /
    • 2019
  • In this paper, we present an auto-annotation tool and a synthetic dataset built from 3D CAD models for deep learning-based object detection. To be used as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of the object are needed. We propose an automated annotation tool together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the synthetic dataset, we use Mask R-CNN, a state-of-the-art deep learning-based object detection model. For the experiments, we built an environment reflecting actual underwater conditions. We show that the object detection model trained on our dataset produces significantly accurate and robust results in the underwater environment. Lastly, we verify that our synthetic dataset is suitable for training deep learning models for underwater environments.
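
A minimal sketch of how a rendered RGBA object (e.g. from a 3D CAD model) could be composited onto a background with its mask and bounding box derived automatically; the function and annotation layout are illustrative, not the paper's actual tool, and the object is assumed to fit inside the background at the given offset:

```python
import numpy as np

def composite_and_annotate(background, object_rgba, top_left, class_id):
    """Paste a rendered RGBA object onto a background image and derive
    its annotations (class, binary mask, bounding box) automatically."""
    h, w = object_rgba.shape[:2]
    y, x = top_left
    image = background.copy()
    alpha = object_rgba[..., 3:4] / 255.0
    roi = image[y:y + h, x:x + w]
    # Alpha-blend the rendered object over the background patch.
    image[y:y + h, x:x + w] = (alpha * object_rgba[..., :3] +
                               (1.0 - alpha) * roi).astype(np.uint8)
    # Annotations come "for free" from the alpha channel of the rendering.
    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = (object_rgba[..., 3] > 0).astype(np.uint8)
    ys, xs = np.nonzero(mask)
    bbox = [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]
    return image, {"class_id": class_id, "mask": mask, "bbox": bbox}
```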

An Artificial Intelligence Research for Maritime Targets Identification based on ISAR Images (ISAR 영상 기반 해상표적 식별을 위한 인공지능 연구)

  • Kim, Kitae;Lim, Yojoon
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.45 no.2
    • /
    • pp.12-19
    • /
    • 2022
  • Artificial intelligence is driving the Fourth Industrial Revolution and is in the spotlight as a general-purpose technology. As data collection from the battlefield increases rapidly, the need to use artificial intelligence is growing in the military, but its adoption is still in the early stages. To identify maritime targets, the Republic of Korea Navy acquires images with the ISAR (Inverse Synthetic Aperture Radar) of maritime patrol aircraft, and human operators interpret them. A radar image is formed by synthesizing the signals reflected from the target after radiating radar waves, and day/night and all-weather observation is possible. In this study, artificial intelligence is used to identify maritime targets based on radar images. ISAR images of 24 maritime targets from the Republic of Korea and North Korea were pre-processed, and an artificial intelligence algorithm (ResNet-50) was applied. The accuracy of maritime target identification was about 99%. Of the 81 warship types, 75 were identified in less than 5 seconds, and 6 took 15 to 163 seconds.
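
A minimal PyTorch sketch of a ResNet-50 classifier adapted to a 24-class identification task; the pretrained weights, optimizer, and learning rate are assumptions, as the abstract only names the architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 24  # maritime target classes, as described in the abstract

# ImageNet-pretrained ResNet-50 with its final fully connected layer
# replaced for the 24-class identification task.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of pre-processed ISAR images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```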