• Title/Summary/Keyword: flickering artifacts


An Efficient Video Dehazing without Flickering Artifacts (비디오에서 플리커 현상이 없는 효율적인 안개제거)

  • Kim, Young Min;Park, Ki Tae;Lee, Dong Seok;Choi, Wonju;Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.8 / pp.51-57 / 2014
  • In this paper, we propose a novel method to effectively eliminate flickering artifacts caused by dehazing in video sequences. When a dehazing technique is applied independently to each frame of a video sequence, flickering artifacts may occur because atmospheric light values are calculated without considering the relation between adjacent frames. Although some existing methods reduce flickering artifacts by calculating highly correlated transmission values between adjacent frames, flickering artifacts may still occur. Therefore, in order to effectively reduce flickering artifacts, we propose a novel approach that considers temporal averages of the atmospheric light values calculated from adjacent frames. Experimental results show that the proposed method achieves better video dehazing performance with fewer flickering artifacts than existing methods.
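The idea of temporally averaging atmospheric light can be sketched as below. The brightest-pixel estimator, the three-frame window, and the `top_percent` parameter are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def temporal_atmospheric_light(frames, top_percent=0.1):
    """Estimate per-frame atmospheric light as the mean of the brightest
    pixels, then average over adjacent frames to suppress flicker."""
    per_frame = []
    for f in frames:
        flat = f.reshape(-1)
        k = max(1, int(len(flat) * top_percent / 100))  # top 0.1% of pixels
        per_frame.append(np.sort(flat)[-k:].mean())
    per_frame = np.asarray(per_frame)
    # sliding 3-frame average, normalized at the sequence boundaries
    window = np.ones(3)
    sums = np.convolve(per_frame, window, mode="same")
    counts = np.convolve(np.ones_like(per_frame), window, mode="same")
    return sums / counts
```

Smoothing the scalar atmospheric light value rather than the full transmission map is what keeps the per-frame cost of the stabilization negligible.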

Robust Method of Video Contrast Enhancement for Sudden Illumination Changes (급격한 조명 변화에 강건한 동영상 대조비 개선 방법)

  • Park, Jin Wook;Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.55-65 / 2015
  • Contrast enhancement methods designed for a single image may cause flickering artifacts when applied to videos, because they do not consider the continuity of the video. On the other hand, methods that do consider this continuity can reduce flickering artifacts but may cause unnecessary fade-in/out artifacts when the intensity of the video changes abruptly. In this paper, we propose a video contrast enhancement method that is robust to sudden illumination changes. The proposed method enhances each frame by Fast Gray-Level Grouping (FGLG) and maintains the continuity of the video with an exponential smoothing filter. The smoothing factor of the filter is calculated with a sigmoid function and applied to each frame to reduce unnecessary fade-in/out effects. In the experiments, six measures are used to analyze the performance of the proposed and traditional methods. The experiments show that the proposed method achieves the best quantitative performance in MSSIM and flickering score, and visual quality comparisons confirm its adaptive enhancement under sudden illumination changes.
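The sigmoid-controlled smoothing factor can be illustrated on per-frame mean intensities: small frame-to-frame changes get strong smoothing (suppressing flicker), while an abrupt illumination change pushes the factor toward 1 so the output follows the new level immediately instead of fading. The slope `k` and threshold `t0` are assumed values for illustration:

```python
import math

def sigmoid_alpha(delta, k=0.5, t0=10.0):
    """Map the intensity change between frames to a smoothing factor:
    |delta| << t0 -> alpha near 0 (heavy smoothing),
    |delta| >> t0 -> alpha near 1 (follow the abrupt change)."""
    return 1.0 / (1.0 + math.exp(-k * (abs(delta) - t0)))

def smooth_sequence(frame_means):
    """Exponential smoothing with an adaptive, sigmoid-derived factor."""
    out = [frame_means[0]]
    for m in frame_means[1:]:
        a = sigmoid_alpha(m - out[-1])
        out.append(a * m + (1 - a) * out[-1])
    return out
```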

Measurement of Flickering Artifact for H.264 with Periodic I-Frame Structure (주기적 I-프레임 구조의 H.264 부호화 동영상을 위한 플리커링 측정 알고리즘)

  • Lim, Jong-Min;Kang, Dong-Wook;Jung, Kyeong-Hoon
    • Journal of Broadcast Engineering / v.15 no.3 / pp.321-331 / 2010
  • Most multimedia video coding algorithms are lossy, so several kinds of spatial and temporal artifacts are inevitable. Flickering, the most typical temporal coding artifact, is mainly due to the fact that the quality of the coded sequence fluctuates as the quantization parameter is adjusted for rate control. In this paper, we analyze the effect of quality variation according to the characteristics of the video sequence when I-frames are inserted periodically, and we propose a full-reference (FR) assessment algorithm to measure the amount of flickering artifacts in the coded video. We find that flickering becomes critical when the quality level is intermediate, and that it is affected by the amount of detail or movement, the size of objects, and camera parameters. The proposed measurement algorithm is well consistent with the human visual system (HVS).
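A minimal full-reference flicker estimate in the spirit of the above can be written as the frame-to-frame change of the coding error, measured only where the reference itself is static so genuine motion is not counted as flicker. The static-region threshold and the averaging scheme are assumptions, not the paper's algorithm:

```python
import numpy as np

def flicker_score(coded, reference, static_thresh=2.0):
    """FR flicker estimate: mean absolute change of the coding error
    between consecutive frames, restricted to static reference regions."""
    score = 0.0
    for t in range(1, len(coded)):
        err_change = (coded[t] - reference[t]) - (coded[t - 1] - reference[t - 1])
        static = np.abs(reference[t] - reference[t - 1]) < static_thresh
        if static.any():
            score += np.abs(err_change[static]).mean()
    return score / (len(coded) - 1)
```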

An Efficient Video Dehazing Algorithm Based on Spectral Clustering

  • Zhao, Fan;Yao, Zao;Song, Xiaofang;Yao, Yi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.3239-3267 / 2018
  • Image and video dehazing is a popular topic in the field of computer vision and digital image processing. A fast, optimized dehazing algorithm was recently proposed that enhances contrast and reduces flickering artifacts in a dehazed video sequence by minimizing a cost function that makes transmission values spatially and temporally coherent. However, its fixed-size block partitioning leads to block effects, its temporal cost function suffers from the temporal non-coherence of newly appearing objects in a scene, and the weak edges in a hazy image are not addressed. Hence, a video dehazing algorithm based on well-designed spectral clustering is proposed. To avoid block artifacts, the spectral clustering is customized to segment static scenes so that the same target has the same transmission value. Assuming that edge images dehazed with optimized transmission values have richer detail than before restoration, an edge intensity function is added to the spatial consistency cost model. Atmospheric light is estimated using a modified quadtree search. Different temporal transmission models are established for newly appearing objects, static backgrounds, and moving objects. The experimental results demonstrate that the new method provides higher dehazing quality and lower time complexity than the previous technique.
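All of the transmission-based dehazing methods above rest on the standard atmospheric scattering model, I = J·t + A·(1 − t), which is inverted per pixel once t and A are estimated. A minimal sketch of that inversion, with an assumed lower clamp `t_min` to avoid amplifying noise in dense haze:

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1-t)
    to recover the scene radiance J."""
    t = np.maximum(t, t_min)  # clamp to limit noise amplification
    return (I - A) / t + A
```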

Application of the Annealing Method to the Three Dimensional Layout Design (3차원배치설계에 대한 어닐링법의 적용)

  • 장승호;최명진
    • Journal of the Korea Society for Simulation / v.10 no.2 / pp.1-14 / 2001
  • The layout design of components plays an important role in the design and usability of many engineering products, and the engineering artifacts of today are becoming increasingly complicated. The simulated annealing method has been applied effectively to layout and packing problems such as wafer layout. Its main characteristic is that an optimum can be found among many local optima by controlling the temperature and introducing statistical fluctuations. The objective of this study is to apply the simulated annealing method to the three-dimensional layout design of a submersible boat with multiple constraint conditions and evaluation criteria. We describe an approach to defining the cost function and constraints and to generating layouts with a computer. In this research, a three-dimensional LAYout Design Optimization Program (LAYDOP ver.2) has been developed. Using this program, the layout result (the total value of the evaluation criteria) designed by a human layout expert has been improved by 31.0%.
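The acceptance rule that lets annealing escape local optima is standard and can be sketched generically; the geometric cooling schedule, step count, and neighbor move here are illustrative choices, not the paper's configuration:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500):
    """Accept worse solutions with probability exp(-dC/T), so the
    search can climb out of local optima while T is high; the cooling
    schedule gradually reduces that statistical fluctuation."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    T = t0
    for _ in range(steps):
        y = neighbor(x)
        cy = cost(y)
        # always accept improvements; accept worsenings probabilistically
        if cy < c or random.random() < math.exp(-(cy - c) / T):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        T *= cooling
    return best, best_c
```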


Fast Content-preserving Seam Estimation for Real-time High-resolution Video Stitching (실시간 고해상도 동영상 스티칭을 위한 고속 콘텐츠 보존 시접선 추정 방법)

  • Kim, Taeha;Yang, Seongyeop;Kang, Byeongkeun;Lee, Hee Kyung;Seo, Jeongil;Lee, Yeejin
    • Journal of Broadcast Engineering / v.25 no.6 / pp.1004-1012 / 2020
  • We present a novel content-preserving seam estimation algorithm for real-time high-resolution video stitching. Seam estimation is one of the fundamental steps in image/video stitching; its goal is to minimize visual artifacts in the transition areas between images. Typical seam estimation algorithms are based on optimization methods that demand intensive computation and large memory. These algorithms, however, often fail to avoid objects, resulting in cropped or duplicated objects, and they lack temporal consistency, inducing flickering between frames. Hence, we propose an efficient and temporally consistent seam estimation algorithm that utilizes a straight line, together with convolutional neural network-based instance segmentation to place the seam outside of objects. Experimental results demonstrate that the proposed method produces visually plausible stitched videos with minimal visual artifacts in real time.
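A much-simplified version of the idea, assuming a vertical straight-line seam in the overlap region: score each candidate column by the photometric difference between the two images along it, and add a large penalty wherever an instance-segmentation mask marks an object, so the seam stays outside of objects. This is an illustrative sketch, not the paper's method:

```python
import numpy as np

def straight_seam(overlap_a, overlap_b, object_mask, penalty=1e6):
    """Pick the vertical seam column with minimal photometric difference,
    heavily penalizing columns that cut through segmented objects."""
    diff = np.abs(overlap_a - overlap_b).sum(axis=0)  # per-column cost
    diff = diff + penalty * object_mask.sum(axis=0)   # keep seam off objects
    return int(np.argmin(diff))
```

Restricting the seam to a straight line reduces the per-frame search to a single argmin over columns, which is what makes real-time operation plausible.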

Depth Acquisition Techniques for 3D Contents Generation (3차원 콘텐츠 제작을 위한 깊이 정보 획득 기술)

  • Jang, Woo-Seok;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.15-21 / 2012
  • Depth information is necessary for generating various three-dimensional contents. Depth acquisition techniques can be categorized broadly into two approaches, active and passive sensing, depending on how the depth information is obtained. In this paper, we survey several depth acquisition methods in both categories, as well as hybrid methods that combine the two approaches to compensate for the drawbacks of each. Furthermore, we introduce several matching cost functions and post-processing techniques that enhance temporal consistency, reducing the flickering artifacts and viewer discomfort caused by inaccurate depth estimation in 3D video.


Depth Video Post-processing for Immersive Teleconference (원격 영상회의 시스템을 위한 깊이 영상 후처리 기술)

  • Lee, Sang-Beom;Yang, Seung-Jun;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.6A / pp.497-502 / 2012
  • In this paper, we present an immersive videoconferencing system that enables gaze correction between users in the Internet protocol TV (IPTV) environment. The proposed system synthesizes gaze-corrected images using depth estimation and virtual view synthesis, two of the most important techniques of 3D video systems. The conventional process, however, causes several problems, most notably the temporal inconsistency of the depth video, which leads to flickering artifacts that discomfort viewers. Therefore, in order to reduce the temporal inconsistency, we exploit a joint bilateral filter extended to the temporal domain, and we additionally apply an outlier reduction operation in the temporal domain. Experimental results verify that the proposed system generates natural gaze-corrected images and realizes immersive videoconferencing.
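A joint bilateral filter extended to the temporal domain can be sketched as below: each frame in a temporal window contributes to the middle frame's depth according to color similarity (the joint/cross term) and temporal distance. The window handling and the `sigma_c`/`sigma_t` values are assumptions for illustration:

```python
import math
import numpy as np

def temporal_joint_bilateral(depths, colors, sigma_c=10.0, sigma_t=1.0):
    """Temporally filter the middle frame's depth, weighting neighboring
    frames by color similarity and temporal distance, so static regions
    get consistent depth while true scene changes survive."""
    T = len(depths)
    ref = T // 2
    num = np.zeros_like(depths[ref], dtype=float)
    den = np.zeros_like(num)
    for t in range(T):
        w_color = np.exp(-((colors[t] - colors[ref]) ** 2) / (2 * sigma_c ** 2))
        w_time = math.exp(-((t - ref) ** 2) / (2 * sigma_t ** 2))
        num += w_color * w_time * depths[t]
        den += w_color * w_time
    return num / den
```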

Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin;Han, Bok Gyu;Yang, Hyeon Seok;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3092-3107 / 2019
  • Visible-infrared image fusion synthesizes an infrared image and a visible image into a single fused image, combining the complementary advantages of both. An infrared image can capture a target object in dark or foggy environments, but the blurry appearance of objects hinders its utility; a visible image, on the other hand, clearly shows an object under normal lighting conditions but is not ideal in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method based on it. The proposed multi-guided filter is a modification of the guided filter for multiple guidance images, and the fusion method built on it is much faster than conventional image fusion methods. In experiments, we compare the proposed method and conventional methods in terms of quantitative and qualitative results, fusion speed, and flickering artifacts. The proposed method synthesizes 57.93 frames per second for an image size of 320×270, which confirms that it is able to perform real-time processing; in addition, it synthesizes flicker-free video.
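The fusion principle, i.e. letting the source with more signal at each pixel dominate, can be shown with a deliberately simplified per-pixel saliency weighting. This is not the paper's multi-guided filter; the deviation-from-mean saliency and `eps` stabilizer are illustrative assumptions:

```python
import numpy as np

def fuse(visible, infrared, eps=1e-6):
    """Weight each pixel by its absolute deviation from the image mean,
    so the source carrying more local signal dominates the fusion."""
    sv = np.abs(visible - visible.mean())
    si = np.abs(infrared - infrared.mean())
    w = sv / (sv + si + eps)          # per-pixel weight for the visible image
    return w * visible + (1 - w) * infrared
```

For example, a hot target visible only in the infrared channel deviates strongly from the infrared mean, so the fused image inherits it from the infrared input.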

Depth Estimation and Intermediate View Synthesis for Three-dimensional Video Generation (3차원 영상 생성을 위한 깊이맵 추정 및 중간시점 영상합성 방법)

  • Lee, Sang-Beom;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10B / pp.1070-1075 / 2009
  • In this paper, we propose new depth estimation and intermediate view synthesis algorithms for three-dimensional video generation. In order to improve the temporal consistency of the depth map sequence, we add a temporal weighting function to the conventional matching function when computing the matching cost for depth estimation. In addition, we propose a boundary noise removal method for the view synthesis operation: after finding boundary noise areas using the depth map, we replace them with the corresponding texture information from the other reference image. Experimental results show that the proposed algorithm improves the temporal consistency of the depth sequence, reduces flickering artifacts in the virtual view, and improves the visual quality of the synthesized virtual views by removing the boundary noise.
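Adding a temporal term to the matching cost can be sketched as below: each disparity candidate is penalized by its distance from the previous frame's depth, biasing the winner-take-all decision toward temporally consistent depths. The linear penalty and the weight `lam` are assumed forms, not the paper's exact weighting function:

```python
import numpy as np

def temporal_matching_cost(data_cost, prev_depth, d, lam=0.2):
    """Data cost plus a temporal penalty on deviating from the
    previous frame's depth at each pixel."""
    return data_cost + lam * np.abs(d - prev_depth)

def estimate_depth(costs, prev_depth, lam=0.2):
    """Winner-take-all over disparity candidates; costs[d] holds the
    per-pixel data cost for disparity d."""
    total = np.stack([temporal_matching_cost(costs[d], prev_depth, d, lam)
                      for d in range(len(costs))])
    return total.argmin(axis=0)
```

When the data term alone cannot decide (near-equal costs), the temporal term breaks the tie toward the previous depth, which is exactly the flicker-suppressing behavior described above.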