• Title/Summary/Keyword: Video denoising


A Study on the Video Inpainting Performance using Denoising Technique (잡음 제거 기술 기반의 비디오 인페인팅 성능 연구)

  • Seo, Jeong-yun;Baek, Han-gyul;Park, Sang-hyo
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.6 / pp.329-335 / 2022
  • In this paper, we study the effect of noise on video inpainting, a technique that fills in missing regions of video. Since a video may contain noise, the quality of the result may suffer when the video inpainting technique is applied. Therefore, in this paper, we compare inpainting performance with and without a denoising technique on the DAVIS dataset. For that, we conducted two experiments: 1) denoising the noisy video first and then applying the inpainting technique, and 2) applying the inpainting technique first and then denoising the video. Through the experiments, we observe the effect of the denoising technique on the quality of video inpainting and conclude that inpainting after denoising improves the quality of the video both subjectively and objectively.

A Kalman Filter based Video Denoising Method Using Intensity and Structure Tensor

  • Liu, Yu;Zuo, Chenlin;Tan, Xin;Xiao, Huaxin;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.8 / pp.2866-2880 / 2014
  • We propose a video denoising method based on the Kalman filter to reduce the noise in video sequences. Firstly, exploiting the strong spatiotemporal correlations between neighboring frames, motion estimation is performed on video frames consisting of the previously denoised frames and the current noisy frame, based on intensity and structure tensor. The current noisy frame is processed in the temporal domain by using the motion estimation result as the parameter of the Kalman filter, while it is also processed in the spatial domain using a Wiener filter. Finally, by weighting the denoised frames from the Kalman and Wiener filtering, a satisfactory result is obtained. Experimental results show that the performance of our proposed method is competitive when compared with state-of-the-art video denoising algorithms based on both peak signal-to-noise ratio and structural similarity evaluations.
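As a rough illustration of the temporal half of such a scheme, the recursion below sketches a per-pixel Kalman update followed by a weighted blend with a spatially denoised estimate. The function names, the scalar per-pixel model, and the fixed blend weight are illustrative assumptions, not the authors' implementation (which derives its parameters from motion estimation on intensity and structure tensor):

```python
def kalman_temporal(prev_est, prev_var, noisy_px, process_var, meas_var):
    """One Kalman step for a single pixel tracked across frames."""
    # Predict: carry the previous estimate forward; motion/process
    # noise inflates the error covariance.
    pred_est = prev_est
    pred_var = prev_var + process_var
    # Update: the Kalman gain weighs the prediction against the
    # noisy observation from the current frame.
    gain = pred_var / (pred_var + meas_var)
    new_est = pred_est + gain * (noisy_px - pred_est)
    new_var = (1.0 - gain) * pred_var
    return new_est, new_var

def blend(temporal_px, spatial_px, weight=0.5):
    """Weighted combination of the temporally (Kalman) and spatially
    (Wiener) denoised estimates for one pixel."""
    return weight * temporal_px + (1.0 - weight) * spatial_px
```

With a previous estimate of 10.0 (variance 1.0), a noisy observation of 12.0, process variance 0.5, and measurement variance 1.0, the update yields an estimate of 11.2 with variance 0.6.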

High-frame-rate Video Denoising for Ultra-low Illumination

  • Tan, Xin;Liu, Yu;Zhang, Zheng;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4170-4188 / 2014
  • In this study, we present a denoising algorithm for high-frame-rate videos in an ultra-low illumination environment on the basis of a Kalman filtering model and a new motion segmentation scheme. The Kalman filter removes temporal noise from signals by propagating error covariance statistics. Regarded as the process noise for imaging, motion is important in Kalman filtering. We propose a new motion estimation scheme that is suitable for severe noise. This scheme exploits the small-motion-vector characteristic of high-frame-rate videos: patches with small changes are intentionally neglected, because distinguishing fine details from large-scale noise is both difficult and of little benefit. Finally, a spatial bilateral filter is used to improve denoising capability in the motion area. Experiments are performed on videos with both synthetic and real noise. Results show that the proposed algorithm outperforms other state-of-the-art methods in both peak signal-to-noise ratio objective evaluation and visual quality.
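The "neglect small changes" idea can be sketched as a patch classifier that labels a patch as moving only when its frame difference clearly exceeds the expected noise level. The threshold rule and parameter names here are illustrative assumptions, not the paper's actual segmentation scheme:

```python
def classify_patch(prev_patch, cur_patch, noise_sigma, k=2.0):
    """Label a patch 'motion' only when its mean absolute frame
    difference clearly exceeds the expected noise level (k sigmas);
    smaller changes are treated as static, matching the
    small-motion-vector assumption for high-frame-rate video."""
    mad = sum(abs(a - b) for a, b in zip(prev_patch, cur_patch)) / len(prev_patch)
    return "motion" if mad > k * noise_sigma else "static"
```

Static patches would then be filtered temporally (Kalman), while patches labeled "motion" would receive the spatial bilateral filter instead.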

A New Denoising Method for Time-lapse Video using Background Modeling

  • Park, Sanghyun
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.125-138 / 2020
  • Due to advances in camera technology, the cost of producing time-lapse video has fallen, and time-lapse videos are being applied in many fields. A time-lapse video is created from images captured at long intervals over a long period. In this paper, we propose a method to improve the quality of time-lapse videos that monitor changes in plants. Considering the characteristics of time-lapse video, we propose a method of separating desired objects from unnecessary ones and removing the unnecessary elements. The characteristic of time-lapse videos that we exploit is that unnecessary elements appear only intermittently in the captured images. In the proposed method, noise is removed by applying a codebook background modeling algorithm that uses this characteristic. Experimental results show that the proposed method finds and removes unnecessary elements in time-lapse videos simply and accurately.
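A minimal sketch of the codebook idea, assuming a grayscale per-pixel model (the class name, tolerance parameter, and intensity-range codewords are simplifications of the full codebook algorithm, which typically also models color and access frequency):

```python
class PixelCodebook:
    """Per-pixel codebook: each codeword stores an intensity range
    observed during training; a test pixel is background if it falls
    within (a tolerance of) any codeword's range.  Intermittent
    foreground objects never form a stable codeword, so they are
    flagged and can be removed."""

    def __init__(self, tol=10):
        self.words = []   # list of [low, high] intensity ranges
        self.tol = tol

    def train(self, value):
        # Extend a matching codeword, or start a new one.
        for w in self.words:
            if w[0] - self.tol <= value <= w[1] + self.tol:
                w[0] = min(w[0], value)
                w[1] = max(w[1], value)
                return
        self.words.append([value, value])

    def is_background(self, value):
        return any(w[0] - self.tol <= value <= w[1] + self.tol
                   for w in self.words)
```

Training on the long-interval capture sequence builds the stable background model; frames whose pixels fall outside it contain the intermittent "unnecessary elements".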

Signal Synchronization Using a Flicker Reduction and Denoising Algorithm for Video-Signal Optical Interconnect

  • Sangirov, Jamshid;Ukaegbu, Ikechi Augustine;Lee, Tae-Woo;Cho, Mu-Hee;Park, Hyo-Hoon
    • ETRI Journal / v.34 no.1 / pp.122-125 / 2012
  • A video signal transmitted through a high-density optical link has been demonstrated to show the reliability of optical links for high-data-rate transmission. To reduce the number of optical point-to-point links, an electrical link has been utilized for control and clock signaling. Latency and flicker with background noise occur while transferring data across the optical link, due to the electrical-to-optical and optical-to-electrical conversions. The proposed synchronization technology, combined with a flicker-reduction and denoising algorithm, has given good results and can be applied in high-definition serial data interface (HD-SDI), ultra-HD-SDI, and HD multimedia interface transmission system applications.

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems / v.16 no.1 / pp.6-29 / 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors to automatically detect images of different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in 99.17% and 97.14% Rank-1 recognition rates, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even in the situation of missing modalities.
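The robustness to missing modalities rests on the score-level fusion step: each available modality classifier votes, and absent modalities simply drop out of the average. A minimal sketch, assuming dictionaries of per-class scores and plain averaging (the actual fusion rule in the paper may be weighted differently):

```python
def fuse_scores(modality_scores):
    """Score-level fusion over available modalities: average the
    per-class scores of whichever modality classifiers produced
    output; a missing modality (None) contributes nothing."""
    fused = {}
    n = 0
    for scores in modality_scores:
        if scores is None:          # modality missing at test time
            continue
        n += 1
        for cls, s in scores.items():
            fused[cls] = fused.get(cls, 0.0) + s
    return {cls: s / n for cls, s in fused.items()} if n else {}
```

For example, fusing a frontal-face classifier and a right-ear classifier while the left-ear modality is missing still yields a ranked score per identity.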

Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

  • Bang, Gun;Hur, Namho;Lee, Seong-Whan
    • ETRI Journal / v.36 no.2 / pp.242-252 / 2014
  • Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multi-view video plus depth format. In 3D video applications such as multi-view and free-viewpoint applications, synthesized views are generated using the coded texture video and coded depth video. Such synthesized views can be distorted by quantization noise and by inaccuracy of the 3D warping positions, so it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, we propose an edge noise suppression filtering process to preserve the edges of the depth video, together with a method based on a total variation approach to maximum a posteriori probability estimation for reducing the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared to a synthesized view without the post-processing methods.
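As a generic illustration of the total variation idea (not the paper's MAP formulation), the sketch below denoises a 1-D signal by gradient descent on a smoothed TV-regularized objective, 0.5·||u − f||² + λ·Σ|u[i+1] − u[i]|; the parameter values and the smoothing of |·| are illustrative choices:

```python
import math

def tv_denoise_1d(f, lam=1.0, step=0.05, iters=500, eps=0.01):
    """Gradient descent on 0.5*||u - f||^2 + lam * smoothed TV(u).
    TV penalizes differences between neighbors, which suppresses
    quantization noise while tolerating genuine edges."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        # Data-fidelity gradient keeps u close to the observation f.
        grad = [u[i] - f[i] for i in range(n)]
        # Smoothed TV gradient: d/du of sqrt(d^2 + eps) ~ |d|.
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        for i in range(n):
            u[i] -= step * grad[i]
    return u
```

Applied to a signal with a single noisy spike, the spike is pulled toward its neighbors while the flat regions stay flat, which is the qualitative behavior exploited for texture-video cleanup.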

Spatio-temporal Denoising Algorithm based on Nonlocal Means (비지역적 평균 기반 시공간 잡음 제거 알고리즘)

  • Park, Sang-Wook;Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.24-31 / 2011
  • This paper proposes a spatio-temporal denoising algorithm based on nonlocal means. Though conventional denoising algorithms based on nonlocal means perform well at noise removal, they are difficult to implement in hardware due to their heavy computational load and the need for several frame buffers. Therefore, we adopt an infinite impulse response temporal noise reduction algorithm in the proposed method. In motionless regions, the proposed algorithm produces fewer artifacts in the denoised result. In motion regions, a spatial filter based on an efficiently improved nonlocal means algorithm performs noise removal with less motion blur. Experimental results, including comparisons with conventional algorithms for various noise levels and test images, show that the proposed algorithm performs well under both visual and quantitative criteria.
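For reference, the nonlocal means principle that the spatial filter builds on: a pixel is estimated as a weighted average over a search window, where the weights come from patch similarity rather than spatial distance. A minimal single-pixel sketch (window sizes and the filtering parameter h are illustrative; the paper's "efficiently improved" variant restructures this computation for hardware):

```python
import math

def nlm_pixel(img, y, x, patch=1, search=2, h=10.0):
    """Nonlocal-means estimate of pixel (y, x) in a 2-D grayscale
    image (list of lists): average over a search window, weighted by
    the similarity of surrounding patches."""
    H, W = len(img), len(img[0])

    def patch_at(cy, cx):
        # Flattened patch around (cy, cx), clamped at the borders.
        return [img[min(max(cy + dy, 0), H - 1)][min(max(cx + dx, 0), W - 1)]
                for dy in range(-patch, patch + 1)
                for dx in range(-patch, patch + 1)]

    ref = patch_at(y, x)
    num = den = 0.0
    for cy in range(max(0, y - search), min(H, y + search + 1)):
        for cx in range(max(0, x - search), min(W, x + search + 1)):
            cand = patch_at(cy, cx)
            dist2 = sum((a - b) ** 2 for a, b in zip(ref, cand)) / len(ref)
            w = math.exp(-dist2 / (h * h))  # similar patches weigh more
            num += w * img[cy][cx]
            den += w
    return num / den
```

The quadratic cost per pixel (search window times patch size) is exactly the computational load the paper's IIR temporal stage is introduced to avoid in motionless regions.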

Denoising 3D Skeleton Frames using Intersection Over Union

  • Chuluunsaikhan, Tserenpurev;Kim, Jeong-Hun;Choi, Jong-Hyeok;Nasridinov, Aziz
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.474-475 / 2021
  • The accuracy of a real-time video analysis system based on 3D skeleton data depends highly on the quality of the data. This study proposes a methodology to distinguish noise in 3D skeleton frames using the Intersection over Union (IoU) method. IoU is a metric that measures how much two rectangles (i.e., bounding boxes) overlap. Simply put, the method decides whether a frame is noise by comparing it against a set of valid frames. Our proposed method distinguished noise in 3D skeleton frames with an accuracy of 99%. According to this result, our proposed method can be used to track noise in 3D skeleton frames.
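The IoU metric itself is standard; the sketch below computes it for axis-aligned boxes and shows one plausible noise rule (the `is_noise_frame` helper and its threshold are assumptions for illustration; the paper does not spell out its exact decision rule in this abstract):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_noise_frame(frame_box, valid_boxes, threshold=0.5):
    """Hypothetical rule: flag a skeleton frame as noise when its
    bounding box overlaps no valid frame's box above the threshold."""
    return all(iou(frame_box, v) < threshold for v in valid_boxes)
```

Identical boxes give IoU 1.0, disjoint boxes give 0.0, and partial overlap lands in between, which is what makes the score usable as a similarity threshold.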

Low-light Image Enhancement Based on Frame Difference and Tone Mapping (프레임 차와 톤 매핑을 이용한 저조도 영상 향상)

  • Jeong, Yunju;Lee, Yeonghak;Shim, Jaechang;Jung, Soon Ki
    • Journal of Korea Multimedia Society / v.21 no.9 / pp.1044-1051 / 2018
  • In this paper, we propose a new method to improve low-light images. In order to raise the quality of a night image with a moving object toward that of a daytime image, the following tasks were performed. Firstly, we reduce the noise of the input night image and improve it with a tone mapping method. Secondly, we segment the input night image into a foreground with motion and a background without motion. Motion is detected using both the difference between the current frame and the previous frame and the difference between the current frame and the night background image. The background region of the result takes pixels from the corresponding positions in the daytime image, while the foreground regions take pixels from the corresponding positions of the image improved by the tone mapping method. Experimental results show that the proposed method improves visual quality more clearly than the existing methods.
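The segmentation-and-compositing step described above can be sketched as follows for grayscale frames stored as lists of rows; the double-difference rule mirrors the abstract, while the threshold value and function names are illustrative assumptions:

```python
def motion_mask(cur, prev, background, thresh=25):
    """Mark a pixel as foreground (moving) only when it differs from
    BOTH the previous frame and the night background image, as the
    method's motion detection requires."""
    return [[1 if abs(c - p) > thresh and abs(c - b) > thresh else 0
             for c, p, b in zip(cr, pr, br)]
            for cr, pr, br in zip(cur, prev, background)]

def composite(mask, day, tone_mapped):
    """Take background pixels from the daytime image and foreground
    (moving) pixels from the tone-mapped night image."""
    return [[t if m else d
             for m, d, t in zip(mr, dr, tr)]
            for mr, dr, tr in zip(mask, day, tone_mapped)]
```

Requiring both differences to exceed the threshold keeps slowly varying illumination out of the foreground, so only genuinely moving objects are taken from the tone-mapped night frame.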