• Title/Summary/Keyword: Layer Depth Image


A Study on Super Resolution Image Reconstruction for Acquired Images from Naval Combat System using Generative Adversarial Networks (생성적 적대 신경망을 이용한 함정전투체계 획득 영상의 초고해상도 영상 복원 연구)

  • Kim, Dongyoung
    • Journal of Digital Contents Society
    • /
    • v.19 no.6
    • /
    • pp.1197-1205
    • /
    • 2018
  • In this paper, we perform Single Image Super Resolution (SISR) on images acquired from the EOTS or IRST of a naval combat system. To perform super resolution, we use Generative Adversarial Networks (GANs), which consist of a generative model that creates a super-resolution image from a given low-resolution image and a discriminative model that determines whether the generated super-resolution image qualifies as a high-resolution image. We adjust various learning parameters: the crop size of the input image, the depth of the sub-pixel layer, and the type of training images. For evaluation, we apply not only general image-quality metrics but also feature-descriptor methods. As a result, a larger crop size, a deeper sub-pixel layer, and high-resolution training images yield better performance.
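The "sub-pixel layer" tuned above is the standard pixel-shuffle upscaling step used in SISR networks. A minimal NumPy sketch of that rearrangement (shapes and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (H, W, C*r*r) feature map into an (H*r, W*r, C) image."""
    h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(h, w, r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)   # interleave the r x r sub-pixel grid
    return x.reshape(h * r, w * r, c)

lr = np.random.rand(8, 8, 3 * 2 * 2)   # low-resolution feature map
sr = pixel_shuffle(lr, 2)              # upscale by factor 2
print(sr.shape)                        # (16, 16, 3)
```

Stacking such layers deeper, as the paper's experiments vary, increases the network capacity before the final upscale.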

3D conversion of 2D video using depth layer partition (Depth layer partition을 이용한 2D 동영상의 3D 변환 기법)

  • Kim, Su-Dong;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.16 no.1
    • /
    • pp.44-53
    • /
    • 2011
  • In this paper, we propose a 3D conversion algorithm for 2D video using a depth-layer-partition method. The proposed algorithm first divides the video into frame groups using a cut-detection algorithm; this division reduces the possibility of error propagation during motion estimation. Because depth-image generation is the core technique in 2D/3D conversion, we use two depth-map generation algorithms: one based on segmentation and motion information, the other on an edge-directional histogram. After applying the depth-layer-partition algorithm, which separates objects (foreground) from the background in the original image, the two extracted depth maps are merged. Experiments verify that the proposed algorithm generates reliable depth maps and good conversion results.
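The final merge step above can be sketched as follows: use the motion-based depth map where the partition marks foreground objects and the edge-histogram depth map elsewhere. The weighting scheme is an illustrative assumption, not the authors' exact blend:

```python
import numpy as np

def merge_depth_maps(motion_depth, edge_depth, fg_mask):
    """Foreground takes motion-based depth; background takes
    edge-directional-histogram depth."""
    return np.where(fg_mask, motion_depth, edge_depth)

motion_depth = np.full((4, 4), 200, dtype=np.uint8)   # near (foreground)
edge_depth = np.full((4, 4), 50, dtype=np.uint8)      # far (background)
fg_mask = np.zeros((4, 4), dtype=bool)
fg_mask[1:3, 1:3] = True            # object region from segmentation
print(merge_depth_maps(motion_depth, edge_depth, fg_mask))
```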

Scalable Coding of Depth Images with Synthesis-Guided Edge Detection

  • Zhao, Lijun;Wang, Anhong;Zeng, Bing;Jin, Jian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.4108-4125
    • /
    • 2015
  • This paper presents a scalable coding method for depth images by considering the quality of synthesized images in virtual views. First, we design a new edge detection algorithm that is based on calculating the depth difference between two neighboring pixels within the depth map. By choosing different thresholds, this algorithm generates a scalable bit stream that puts larger depth differences in front, followed by smaller depth differences. A scalable scheme is also designed for coding depth pixels through a layered sampling structure. At the receiver side, the full-resolution depth image is reconstructed from the received bits by solving a partial-differential-equation (PDE). Experimental results show that the proposed method improves the rate-distortion performance of synthesized images at virtual views and achieves better visual quality.
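The edge detector described above declares an edge wherever the depth difference between neighboring pixels exceeds a threshold; lowering the threshold admits smaller differences, which is what makes the resulting bit stream scalable. A minimal sketch under those assumptions (details are illustrative, not the paper's exact scheme):

```python
import numpy as np

def depth_edges(depth, threshold):
    """Mark pixels whose depth differs from a horizontal or vertical
    neighbor by more than `threshold`."""
    dx = np.abs(np.diff(depth.astype(int), axis=1))
    dy = np.abs(np.diff(depth.astype(int), axis=0))
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, 1:] |= dx > threshold
    edges[1:, :] |= dy > threshold
    return edges

depth = np.array([[10, 10, 90, 90],
                  [10, 10, 90, 90]], dtype=np.uint8)
print(depth_edges(depth, 40).sum())   # 2 edge pixels at the depth step
```

Running the detector repeatedly with decreasing thresholds orders the edges from large depth differences to small, matching the layered bit stream described in the abstract.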

A Study on H.264/AVC Video Compression Standard of Multi-view Image Expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 영상에 대한 H.264/AVC 비디오 압축 표준에 관한 연구)

  • Jee, Innho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.1
    • /
    • pp.113-120
    • /
    • 2020
  • A multi-view video is a collection of videos capturing the same scene from different viewpoints, which has the advantage of providing user-oriented viewpoint video. This paper shows that the compression performance of the layered depth image representation can be improved with an enhanced method. We measure the data size of the layered depth image after H.264 encoding and the quality of each reconstructed image. The H.264/AVC standard extends easily to this video content, and we suggest that the layered depth structure can be applied efficiently to new image content. We show that the huge data size of multi-view video is decreased, higher image quality is provided, and error resilience is improved.
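A layered depth image (LDI) stores, for each pixel, a list of (depth, color) samples along the viewing ray, so occluded surfaces from several views fit in one structure. This minimal sketch is a generic LDI, not the paper's exact format:

```python
from collections import defaultdict

class LDI:
    """Layered depth image: each pixel holds depth-sorted samples."""
    def __init__(self):
        self.samples = defaultdict(list)    # (x, y) -> [(depth, color)]

    def insert(self, x, y, depth, color):
        self.samples[(x, y)].append((depth, color))
        self.samples[(x, y)].sort()         # keep front-to-back order

    def front(self, x, y):
        return self.samples[(x, y)][0][1]   # color of the nearest sample

ldi = LDI()
ldi.insert(0, 0, 5.0, "red")    # sample visible from one view
ldi.insert(0, 0, 2.0, "blue")   # occluding sample from another view
print(ldi.front(0, 0))          # blue
```

Serializing the per-pixel sample lists layer by layer is what allows a standard codec such as H.264/AVC to compress the structure.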

Illumination Compensation Algorithm based on Segmentation with Depth Information for Multi-view Image (깊이 정보를 이용한 영역분할 기반의 다시점 영상 조명보상 기법)

  • Kang, Keunho;Ko, Min Soo;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.4
    • /
    • pp.935-944
    • /
    • 2013
  • In this paper, a new illumination compensation algorithm based on segmentation with depth information is proposed to improve the coding efficiency of multi-view images. In the proposed algorithm, a reference image is first segmented into several layers, each composed of objects with similar depth values. Objects within the same layer are then separated from each other by labeling each distinct region in the layered image. The labeled reference depth image is then converted to the viewpoint of the distorted image using a 3D warping algorithm. Finally, an illumination compensation algorithm is applied to each pair of matched regions in the converted reference view and the distorted view. Occlusion regions produced by 3D warping are compensated by a global compensation method. Experimental results confirm that the proposed algorithm improves coding efficiency.
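Per-region illumination compensation is commonly done by matching the first and second moments of a region between the two views. The sketch below assumes the region matching (3D warping and labeling) is already done; the mean/std matching rule is an illustrative assumption, not necessarily the authors' formula:

```python
import numpy as np

def compensate_region(distorted, reference, mask):
    """Match mean and standard deviation of the masked region in the
    distorted view to its counterpart in the warped reference view."""
    d = distorted[mask].astype(float)
    r = reference[mask].astype(float)
    scale = r.std() / (d.std() + 1e-8)              # match contrast
    out = distorted.astype(float)
    out[mask] = (d - d.mean()) * scale + r.mean()   # match brightness
    return np.clip(out, 0, 255).astype(np.uint8)

ref = np.full((4, 4), 120, dtype=np.uint8)    # warped reference region
dist = np.full((4, 4), 60, dtype=np.uint8)    # same region, darker view
mask = np.ones((4, 4), dtype=bool)
out = compensate_region(dist, ref, mask)
print(out[0, 0])   # 120
```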

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2013.10a
    • /
    • pp.763-764
    • /
    • 2013
  • A three-dimensional image capturing device and its signal processing algorithm and apparatus are presented. Three-dimensional information is one of the emerging differentiators that provides consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It carries the depth information of a scene together with the conventional color image, so that the full information of real life that human eyes experience can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system uses the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The suggested novel optical resonator enables the capture of a full-HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full-HD depth images simultaneously (Figures 2, 3). The resulting high-definition color/depth image and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, fabrication, the 3D camera system prototype, and signal processing algorithms.
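The TOF principle at a 20 MHz modulation frequency (the figure from the paper) can be sketched with the generic phase-depth formula; this is textbook TOF, not the authors' signal chain:

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency, Hz (from the paper)

def tof_depth(phase_rad):
    """Depth from measured phase shift: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4 * math.pi * F_MOD)

def max_unambiguous_range():
    """Phase wraps at 2*pi, giving a range limit of c / (2 * f_mod)."""
    return C / (2 * F_MOD)

print(round(max_unambiguous_range(), 2))   # 7.49
```

At 20 MHz the unambiguous range is about 7.5 m, which is why higher modulation frequencies improve depth accuracy at the cost of range.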


The Change in Diffusion Coefficient and Wear Characteristic in Carbonitriding Layer of SCM415 Steel (침탄질화 처리된 SCM415강의 깊이에 따른 확산 및 마모특성 변화)

  • Lee, Su-Yeon;Youn, Kuk-Tea;Huh, Seok-Hwan;Lee, Chan-Gyu
    • Journal of the Korean institute of surface engineering
    • /
    • v.44 no.5
    • /
    • pp.207-212
    • /
    • 2011
  • In this study, the change in diffusion coefficient and wear characteristics with depth in the carbonitrided layer of SCM415 steel was investigated. To determine the diffusion coefficient, the depth profile of carbon was measured from the surface using a glow discharge spectrometer. In addition, the carbide fraction and the micro-Vickers hardness of the surface were measured, and the microstructure was observed through SEM images. The Fe₃(C,N) layer and the effective depth increased with longer carbonitriding time. Wear experiments showed that wear resistance was improved by the Fe₃(C,N) layer and the effective depth.
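A diffusion coefficient is typically extracted from such a depth profile by fitting the error-function solution of Fick's second law, C(x, t) = Cs − (Cs − C0)·erf(x / (2√(Dt))). The sketch below inverts that solution at one depth point; the surface/core concentrations and the data are synthetic assumptions, not the paper's GDS measurements:

```python
import math

def profile(x, D, t, Cs=0.9, C0=0.2):
    """Fick's second-law carbon profile at depth x (m) after time t (s)."""
    return Cs - (Cs - C0) * math.erf(x / (2.0 * math.sqrt(D * t)))

def fit_D(x, c_measured, t, Cs=0.9, C0=0.2):
    """Recover D from one measured concentration by inverting erf."""
    z = (Cs - c_measured) / (Cs - C0)   # equals erf(x / (2 sqrt(Dt)))
    lo, hi = 1e-9, 10.0                 # bisect the inverse of erf
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < z:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)                 # u = x / (2 sqrt(Dt))
    return (x / (2.0 * u)) ** 2 / t

D_true = 1e-11                          # m^2/s, synthetic value
c = profile(200e-6, D_true, 3600.0)     # carbon at 200 um after 1 h
print(abs(fit_D(200e-6, c, 3600.0) - D_true) < 1e-13)   # True
```

In practice D is fitted over the whole measured profile rather than a single point, but the inversion is the same.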

Image Synthesis and Multiview Image Generation using Control of Layer-based Depth Image (레이어 기반의 깊이영상 조절을 이용한 영상 합성 및 다시점 영상 생성)

  • Seo, Young-Ho;Yang, Jung-Mo;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.8
    • /
    • pp.1704-1713
    • /
    • 2011
  • This paper proposes a method to generate multiview images from a synthesized image consisting of layered objects. A camera system consisting of a depth camera and an RGB camera is used to capture the objects and extract 3-dimensional information. Considering position and distance, the objects are composited into a layered image. The synthesized image is then expanded into multiview images using multiview generation tools. In this paper, we synthesized two images consisting of objects and a human, and multiview images with 37 viewpoints were generated from the synthesized images.
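Layer-based view generation can be sketched as back-to-front compositing in which nearer layers receive a larger horizontal disparity for a new viewpoint. The layer depths and the disparity model below are illustrative assumptions, not the paper's tool chain:

```python
import numpy as np

def render_view(layers, baseline):
    """layers: list of (image, mask, depth) ordered back-to-front.
    Shift each layer by a disparity inversely proportional to depth."""
    h, w = layers[0][0].shape[:2]
    out = np.zeros((h, w), dtype=np.uint8)
    for img, mask, depth in layers:
        d = int(round(baseline / depth))        # simple disparity model
        s_img = np.roll(img, d, axis=1)
        s_mask = np.roll(mask, d, axis=1)
        out[s_mask] = s_img[s_mask]             # nearer layers overwrite
    return out

bg = np.full((1, 6), 10, dtype=np.uint8)        # background layer
fg = np.full((1, 6), 200, dtype=np.uint8)       # foreground layer
fg_mask = np.zeros((1, 6), dtype=bool)
fg_mask[0, 1] = True
view = render_view([(bg, np.ones((1, 6), dtype=bool), 100.0),
                    (fg, fg_mask, 2.0)], baseline=2.0)
print(view)   # foreground pixel shifted right by one position
```

Repeating the render with different baselines yields the set of viewpoints (37 in the paper's experiment).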

Depth perception enhancement based on chromostereopsis in a 3D display

  • Hong, JiYoung;Lee, HoYoung;Park, DuSik;Kim, ChangYeong
    • Journal of Information Display
    • /
    • v.13 no.3
    • /
    • pp.101-106
    • /
    • 2012
  • This study was conducted to enhance the cubic (stereoscopic) effect by representing an image with a sense of three-dimensional (3D) depth using chromostereopsis, one of the characteristics of human visual perception. Based on the theory that the chromostereoptic effect and the chromostereoptic reversal effect depend on the lightness of the background, the proposed algorithm classifies the layers of the input 3D image into foreground, middle, and background layers according to the image depth. It suits the characteristics of human visual perception because it adaptively controls a color factor, determined through experiments, for each layer, and it achieves an enhanced cubic effect that is suitable for the characteristics of the input image.
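The depth-based layer split described above can be sketched as binning pixels into foreground, middle, and background by two depth thresholds (the threshold values here are illustrative assumptions):

```python
import numpy as np

def split_layers(depth, near=170, far=85):
    """Bin a depth map (larger = nearer) into three layer masks."""
    fg = depth >= near
    bg = depth <= far
    mid = ~(fg | bg)
    return fg, mid, bg

depth = np.array([[200, 120, 50]], dtype=np.uint8)
fg, mid, bg = split_layers(depth)
print(fg.sum(), mid.sum(), bg.sum())   # 1 1 1
```

A per-layer color adjustment (e.g. warming the foreground, cooling the background) would then be applied to each mask.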

DOI Detector Design using Different Sized Scintillators in Each Layer (각 층의 서로 다른 크기의 섬광체를 사용한 반응 깊이 측정 검출기 설계)

  • Seung-Jae, Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.1
    • /
    • pp.11-16
    • /
    • 2023
  • In preclinical positron emission tomography (PET), spatial-resolution degradation occurs outside the field of view (FOV). To solve this problem, depth-of-interaction (DOI) detectors have been developed that measure the position where gamma rays interact with the scintillator. Existing approaches include composing the scintillation pixel array of multiple layers, arranging photosensors at both ends of a single layer, and constituting the scintillation pixel array in several layers with a photosensor in each layer. In this study, a new type of DOI detector was designed by analyzing the characteristics of these previously developed detectors. In the two-layer detector, scintillation pixels of different sizes were used in each layer, and the array sizes were configured differently. In this configuration, the scintillation pixel positions of the two layers are shifted relative to each other, so they are imaged at different positions in a flood image. A DETECT2000 simulation was performed to confirm that the designed detector can measure the depth of interaction. A flood image was reconstructed from the light signals of gamma-ray events generated at the center of each scintillation pixel. The result confirmed that all scintillation pixels of each layer were separated in the reconstructed flood image, so the interaction depth can be measured. When this detector is applied to preclinical PET, excellent images are expected through improved spatial resolution.
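The geometric idea above (different pixel sizes and array sizes per layer make the crystal grids mutually shifted) can be checked with a small sketch: if the two layers' crystal centers never coincide, every crystal maps to a distinct peak in the flood image. The pitch and offset values are illustrative assumptions, not the DETECT2000 setup:

```python
def crystal_centres(n, pitch, offset):
    """Centres of an n x n crystal grid with the given pitch and offset."""
    return {(round(offset + i * pitch, 3), round(offset + j * pitch, 3))
            for i in range(n) for j in range(n)}

layer1 = crystal_centres(4, 3.0, 0.0)   # 4x4 array, 3 mm pitch
layer2 = crystal_centres(3, 3.0, 1.5)   # 3x3 array, shifted half a pitch
print(len(layer1 & layer2))             # 0 -> all flood-image peaks distinct
```

Because no center is shared, the layer of interaction can be identified directly from the peak position, which is the DOI measurement the detector is designed for.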