• Title/Summary/Keyword: Downsampling


Fast Multiple-Image-Based Deblurring Method (다중 영상 기반의 고속 처리용 디블러링 기법)

  • Son, Chang-Hwan; Park, Hyung-Min
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.4 / pp.49-57 / 2012
  • This paper presents a fast multiple-image-based deblurring method that reduces the computational load of image deblurring while enhancing the sharpness of textures and edges in the restored images. First, two blurred images containing blurring artifacts and one noisy image containing severe noise are captured consecutively, under relatively long and short exposures, respectively. To improve processing speed, the captured images are downsampled by a factor of two, and the point spread function (PSF) is estimated from image or edge patches extracted from the whole images; this effectively reduces the computation time of PSF estimation. Next, a texture-enhanced deblurring method is developed and applied to compensate for the texture detail degraded by downsampling the input images. Finally, to restore the original input image size, an upsampling method that exploits the sharp edges of the captured noisy image is applied. The proposed method shortens the processing time of image deblurring, which is the main obstacle to its use in digital cameras, while recovering the fine details of textures and edges.
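The two speed-critical steps of the abstract above — downsampling the captured images by a factor of two and estimating the PSF only from high-gradient patches — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the 2x2 block averaging, the patch size, and the gradient-energy criterion are illustrative assumptions.

```python
import numpy as np

def downsample2(img):
    """Downsample a grayscale image by a factor of two via 2x2 block averaging
    (illustrative stand-in for the paper's downsampling step)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def select_edge_patches(img, patch=8, k=4):
    """Return the k non-overlapping patches with the strongest gradient energy.
    Running PSF estimation on a few such patches instead of the whole image
    is what cuts the computation time."""
    gy, gx = np.gradient(img)
    energy = gx ** 2 + gy ** 2
    patches, scores = [], []
    for i in range(0, img.shape[0] - patch + 1, patch):
        for j in range(0, img.shape[1] - patch + 1, patch):
            patches.append(img[i:i + patch, j:j + patch])
            scores.append(energy[i:i + patch, j:j + patch].sum())
    order = np.argsort(scores)[::-1][:k]
    return [patches[t] for t in order]
```

A quarter-size image with a handful of edge patches is a far smaller input to any PSF estimator than the full capture, which is where the claimed speed-up comes from.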

SHVC-based Texture Map Coding for Scalable Dynamic Mesh Compression (스케일러블 동적 메쉬 압축을 위한 SHVC 기반 텍스처 맵 부호화 방법)

  • Naseong Kwon; Joohyung Byeon; Hansol Choi; Donggyu Sim
    • Journal of Broadcast Engineering / v.28 no.3 / pp.314-328 / 2023
  • In this paper, we propose a texture map compression method based on the hierarchical coding scheme of SHVC to support scalability in dynamic mesh compression. The proposed method downsamples a high-resolution texture map to generate multiple-resolution texture maps and encodes them with SHVC, effectively removing the redundancy among the resolutions. The dynamic mesh decoder supports scalability by decoding the texture map at the resolution appropriate to the receiver's performance and the network environment. To evaluate the proposed method, it is applied to the V-DMC (Video-based Dynamic Mesh Coding) reference software TMMv1.0, and the proposed scalable encoder/decoder is compared with a TMMv1.0-based simulcast method. Experimental results show average point cloud-based BD-rate (Luma PSNR) gains of -7.7% and -5.7% under the AI and LD conditions, respectively, compared to the simulcast method, confirming that the proposed method can effectively support texture map scalability for dynamic mesh data.
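The layered input that the abstract describes — a pyramid of texture maps produced by repeated downsampling, where a decoder picks the highest resolution it can afford — can be sketched as follows. This is a schematic of the multi-resolution preparation step only, not of SHVC coding itself; the layer count and the block-averaging filter are assumptions.

```python
import numpy as np

def build_texture_pyramid(texture, num_layers=3):
    """Generate the multiple-resolution texture maps that would feed the
    SHVC layers: index 0 is the lowest resolution (base layer) and the
    last entry is the original map (top enhancement layer)."""
    layers = [texture]
    for _ in range(num_layers - 1):
        t = layers[0]
        h, w = t.shape[0] // 2 * 2, t.shape[1] // 2 * 2
        t = t[:h, :w]
        half = 0.25 * (t[0::2, 0::2] + t[1::2, 0::2]
                       + t[0::2, 1::2] + t[1::2, 1::2])
        layers.insert(0, half)
    return layers

def pick_layer(layers, max_pixels):
    """Decoder-side choice: the largest layer whose pixel count fits the
    receiver's capability (a stand-in for the receiver/network check)."""
    best = layers[0]
    for layer in layers:
        if layer.shape[0] * layer.shape[1] <= max_pixels:
            best = layer
    return best
```

Encoding the pyramid as one scalable bitstream (rather than one independent simulcast stream per resolution) is what removes the inter-resolution redundancy the abstract refers to.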

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo; Dae-Yeol Kang; Seung-Jun Hwang; Sung-Jun Park; Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.190-196 / 2023
  • Recently, with the development of LiDAR technology, which can measure the distance to objects, interest in LiDAR-based 3D object detection networks has grown. Previous networks produce inaccurate localization results because of the spatial information lost during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LiDAR fusion system to obtain high-level features with high positional accuracy. First, by introducing attention into Voxel-RCNN, a grid-based 3D object detection network, multi-scale sparse 3D convolution features are fused effectively, improving 3D detection performance. Additionally, we propose a late-fusion mechanism that combines the outputs of the 3D and 2D object detection networks to remove false positives. Comparative experiments against existing algorithms are performed on the KITTI dataset, which is widely used in autonomous driving research. The proposed method improves both 2D object detection on the BEV and 3D object detection; in particular, precision improves by about 0.54% for the car moderate class compared to Voxel-RCNN.
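The core idea of the abstract above — weighting multi-scale feature maps with attention before summing them, instead of a plain sum or concatenation — can be sketched in numpy. This is a generic softmax-attention fusion under stated assumptions (scales already resampled to a common grid, a random vector standing in for the learned attention projection); it is not the authors' network.

```python
import numpy as np

def attention_fuse(features, seed=0):
    """Fuse multi-scale feature maps with softmax attention weights.
    Each map (shape (C, H, W), already brought to a common grid) is
    globally average-pooled to a descriptor, projected to a scalar score
    by `w` (a random stand-in for the learned attention parameters),
    and the softmax over scores weights the sum across scales."""
    feats = np.stack(features)                  # (S, C, H, W)
    pooled = feats.mean(axis=(2, 3))            # (S, C) global average pool
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(pooled.shape[1])    # stand-in learned projection
    scores = pooled @ w                         # (S,) one score per scale
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                        # softmax attention weights
    return np.tensordot(alpha, feats, axes=(0, 0))  # weighted sum: (C, H, W)
```

Because the weights sum to one, the fused map stays in the range spanned by its inputs, while scales carrying stronger responses contribute more, which is the intuition behind attention over multi-scale sparse convolution features.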