• Title/Summary/Keyword: Video Enhancement

Flickering Effect Reduction Based on the Modified Transformation Function for Video Contrast Enhancement

  • Yang, Hyeonseok;Park, Jinwook;Moon, Youngshik
    • IEIE Transactions on Smart Processing and Computing / v.3 no.6 / pp.358-365 / 2014
  • This paper proposes a method that reduces the flickering effect caused by A-GLG (Adaptive Gray-Level Grouping) during video contrast enhancement. Among the GLG family, A-GLG shows the best contrast enhancement performance. The GLG methods are based on histogram grouping, which is computed differently even between consecutive frames with similar histograms and therefore causes subtle changes in the transformation function; this is the cause of the flickering effect when video contrast is enhanced by A-GLG. To reduce this flickering, the proposed method calculates a modified transformation function as a weighted combination of the previous and current transformation functions. The proposed method was compared with A-GLG in terms of flickering reduction and video contrast enhancement, and the experimental results show that it not only reduces flickering but also preserves contrast enhancement.
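
The temporal-weighting idea in the abstract above can be illustrated with a short sketch (not the authors' implementation): the transformation function for each frame, a 256-entry look-up table produced by A-GLG or any other histogram-based method, is blended with the previous frame's function before it is applied, which damps the frame-to-frame changes that cause flicker. The weight `alpha` and the helper `compute_transform` are assumptions for illustration.

```python
import numpy as np

def smoothed_transform(prev_lut, curr_lut, alpha=0.7):
    """Blend the previous and current transformation functions (LUTs).

    A larger alpha keeps more of the previous frame's mapping, which
    suppresses the subtle LUT changes that cause flicker.
    """
    blended = alpha * prev_lut + (1.0 - alpha) * curr_lut
    return np.clip(np.round(blended), 0, 255).astype(np.uint8)

def enhance_video(frames, compute_transform, alpha=0.7):
    """Apply a temporally smoothed transformation function to each frame.

    `compute_transform(frame)` stands in for the per-frame contrast
    enhancement method (e.g., A-GLG) that returns a 256-entry uint8 LUT.
    """
    prev_lut = None
    for frame in frames:                      # frame: uint8 grayscale array
        curr_lut = compute_transform(frame)
        lut = curr_lut if prev_lut is None else smoothed_transform(prev_lut, curr_lut, alpha)
        # Keep the applied LUT as the reference for the next frame (recursive smoothing).
        prev_lut = lut.astype(np.float64)
        yield lut[frame]                      # LUT lookup applies the mapping
```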

Low-Light Invariant Video Enhancement Scheme Using Zero Reference Deep Curve Estimation (Zero Deep Curve 추정방식을 이용한 저조도에 강인한 비디오 개선 방법)

  • Choi, Hyeong-Seok;Yang, Yoon Gi
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.991-998 / 2022
  • Recently, object recognition using image and video signals has been spreading rapidly in autonomous driving and mobile phones. However, the actual input signals are often captured under poor illumination. Recent research on illumination improvement estimates and compensates for the illumination parameters. In this study, we propose VE-DCE (video enhancement zero-reference deep curve estimation) to improve the illumination of low-light video. The proposed VE-DCE uses an unsupervised, zero-reference deep curve, one of the latest learning-based estimation techniques. Experimental results show that the proposed method achieves quality comparable to the previous method on low-light video as well as images, while reducing computational complexity compared with the existing method.
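
For context, the zero-reference deep curve estimation that VE-DCE builds on (Zero-DCE) brightens an image by repeatedly applying a quadratic curve whose per-pixel coefficients are predicted by a small network. A minimal sketch of the curve-application step is shown below; the network that predicts `alpha_maps` is omitted, and the iteration count and toy values are assumptions.

```python
import numpy as np

def apply_zero_dce_curves(image, alpha_maps):
    """Apply the iterative light-enhancement curve LE(x) = x + a * x * (1 - x).

    image      : float array in [0, 1], shape (H, W, 3)
    alpha_maps : list of per-pixel coefficient maps in [-1, 1], same shape,
                 normally predicted by the curve-estimation network
                 (assumed to be given here).
    """
    x = image
    for a in alpha_maps:          # typically 8 curve iterations
        x = x + a * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

# Toy usage: a constant positive coefficient brightens dark regions most.
frame = np.random.rand(4, 4, 3) * 0.2            # a dark frame
alphas = [np.full_like(frame, 0.5)] * 8
bright = apply_zero_dce_curves(frame, alphas)
```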

Contrast Enhancement with Bin Underflow and Bin Overflow (Bin Underflow Bin Overflow를 이용한 Contrast Enhancement)

  • Oh, Jae-Hwan;Kang, Hyun;Yang, Seung-Joon
    • Proceedings of the IEEK Conference / 2003.07e / pp.1719-1722 / 2003
  • Contrast enhancement, one of the image processing algorithms used for image enhancement, has been difficult to apply to real video such as TV because of side effects such as screen flickering and the difficulty of implementing an adjustable contrast enhancement rate. This paper proposes an efficient algorithm based on Bin Underflow and Bin Overflow (BUBO) that avoids side effects such as flickering when applied to video and allows the contrast enhancement rate to be adjusted. In addition, we propose a black/white level stretch algorithm that improves the gradation of the dark and bright regions of the luminance range, and an algorithm that adjusts brightness while maintaining the dynamic range of the output luminance of the entire screen.
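
The bin-limiting idea can be sketched as a clipped histogram equalization: bounding each bin from below (underflow) and above (overflow) before building the cumulative mapping limits the slope of the transfer function and hence the enhancement rate. This is a minimal sketch of that general idea, not the paper's exact BUBO formulation; the bound fractions are assumptions.

```python
import numpy as np

def bubo_transform(gray, low_frac=0.5, high_frac=2.0):
    """Sketch of a bin-underflow / bin-overflow limited equalization.

    gray : uint8 grayscale frame.  Histogram bins are clipped to
    [low_frac, high_frac] times the mean bin height (assumed bounds),
    so the slope of the resulting transfer function stays within a
    controllable range.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    mean_count = hist.mean()
    clipped = np.clip(hist, low_frac * mean_count, high_frac * mean_count)
    cdf = np.cumsum(clipped)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[gray]
```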

3D video coding for e-AG using spatio-temporal scalability (e-AG를 위한 시공간적 계위를 이용한 3차원 비디오 압축)

  • 오세찬;이영호;우운택
    • Proceedings of the IEEK Conference / 2003.11a / pp.199-202 / 2003
  • In this paper, we propose a new 3D coding method using spatio-temporal scalability for heterogeneous systems with 3D displays over the enhanced Access Grid (e-AG). The proposed encoder produces four bit-streams: a base layer and enhancement layers 1, 2, and 3. The base layer represents a video sequence for the left eye at lower spatial resolution. Enhancement layer 1 provides the additional bit-stream needed to reproduce the base-layer frames at full resolution. Similarly, enhancement layer 2 represents a video sequence for the right eye at lower spatial resolution, and enhancement layer 3 provides the additional bit-stream needed to reproduce it at full resolution. Temporal resolution reduction is obtained by dropping B-frames in the receiver according to network conditions. The receiver can select the spatial and temporal resolution of the video sequence according to its display conditions by properly combining the bit-streams.
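
The receiver-side adaptation described above amounts to choosing which of the four bit-streams to decode and whether to drop B-frames. A hypothetical selection routine is sketched below; the stream names and the bandwidth threshold are illustrative assumptions, not values from the paper.

```python
def select_streams(stereo_display, full_resolution, available_bandwidth_kbps):
    """Pick a bit-stream combination for a receiver (illustrative only).

    base : left view, low spatial resolution
    enh1 : refines the left view to full resolution
    enh2 : right view, low spatial resolution
    enh3 : refines the right view to full resolution
    Temporal scaling is handled separately by dropping B-frames when the
    network is congested.
    """
    streams = ["base"]
    if full_resolution:
        streams.append("enh1")
    if stereo_display:
        streams.append("enh2")
        if full_resolution:
            streams.append("enh3")
    drop_b_frames = available_bandwidth_kbps < 1000   # assumed threshold
    return streams, drop_b_frames

# Example: a full-resolution 3D display on a constrained link.
print(select_streams(stereo_display=True, full_resolution=True,
                     available_bandwidth_kbps=800))
```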

Gradient Fusion Method for Night Video Enhancement

  • Rao, Yunbo;Zhang, Yuhong;Gou, Jianping
    • ETRI Journal / v.35 no.5 / pp.923-926 / 2013
  • To address night video enhancement, a novel gradient-domain fusion method is proposed in which gradient-domain frames of the daytime background are fused with nighttime video frames. To verify its superiority, the proposed method is compared with conventional techniques, and its output is shown to offer enhanced visual quality.
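
A gradient-domain fusion of this kind can be sketched as: blend the gradients of the daytime background with those of the nighttime frame, then reconstruct an image whose gradients match the blend by solving a Poisson equation. The fixed blending weight and the simple Jacobi solver below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def fuse_gradients(night, day_bg, w=0.6, iters=200):
    """Blend nighttime and daytime-background gradients, then reconstruct.

    night, day_bg : float grayscale images in [0, 1] of equal shape.
    w             : weight given to the daytime-background gradients (assumed).
    Reconstruction runs Jacobi iterations on laplacian(u) = div(g),
    initialized with the nighttime frame.
    """
    def grad(img):
        gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    nx, ny = grad(night)
    dx, dy = grad(day_bg)
    gx, gy = w * dx + (1 - w) * nx, w * dy + (1 - w) * ny

    # Divergence of the fused gradient field (backward differences).
    div = (np.diff(gx, axis=1, prepend=gx[:, :1])
           + np.diff(gy, axis=0, prepend=gy[:1, :]))

    u = night.copy()
    for _ in range(iters):                       # Jacobi relaxation
        u_pad = np.pad(u, 1, mode="edge")
        neighbors = (u_pad[:-2, 1:-1] + u_pad[2:, 1:-1]
                     + u_pad[1:-1, :-2] + u_pad[1:-1, 2:])
        u = (neighbors - div) / 4.0
    return np.clip(u, 0.0, 1.0)
```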

8K Programmable Multimedia Platform based on SRP (SRP 를 기반으로 하는 8K 프로그래머블 멀티미디어 플랫폼)

  • Lee, Wonchang;Kim, Minsoo;Song, Joonho;Kim, Jeahyun;Lee, Shihwa
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.06a / pp.163-165 / 2014
  • In this paper, we propose the world's first programmable video processing platform for the video quality enhancement of 8K (7680×4320) UHD (Ultra High Definition) TV at 60 frames per second. To support the huge computation and memory bandwidth required for 8K video quality enhancement, the proposed platform has several distinctive features: a symmetric multi-cluster architecture for data partitioning, a ring data-path between clusters to support data pipelining, an on-the-fly processing architecture to reduce DDR bandwidth, and flexible hardware for accelerating common kernels in video enhancement algorithms. In addition, the general programmability of the SRP (Samsung reconfigurable processor), the main core of the proposed platform, makes it possible to continuously upgrade the video enhancement algorithms even after the platform is fixed. This ability is important because algorithms for 8K DTV are still under development. The proposed sub-system has been embedded into an SoC (System on Chip), and a new 8K UHD TV using the programmable SoC is expected at CES 2015 for the first time in the world.
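
To give a sense of the "huge computation and memory bandwidth" mentioned above, a back-of-the-envelope calculation for 8K at 60 fps is shown below; the bytes-per-pixel figure and the number of full-frame memory passes per enhancement stage are assumptions, not the platform's actual numbers.

```python
# Rough pixel-rate and bandwidth estimate for 8K UHD at 60 fps (illustrative).
width, height, fps = 7680, 4320, 60
bytes_per_pixel = 3            # assumed 8-bit 4:4:4; real pipelines differ
frame_passes = 4               # assumed read/write passes per enhancement stage

pixels_per_second = width * height * fps                  # ~1.99e9 pixels/s
raw_rate_gb = pixels_per_second * bytes_per_pixel / 1e9   # ~6.0 GB/s per pass
total_rate_gb = raw_rate_gb * frame_passes                # ~23.9 GB/s of DDR traffic

print(f"{pixels_per_second/1e9:.2f} Gpixel/s, "
      f"{raw_rate_gb:.1f} GB/s per pass, "
      f"{total_rate_gb:.1f} GB/s with {frame_passes} passes")
```

This kind of arithmetic is what motivates the on-the-fly processing architecture that avoids writing intermediate frames back to DDR.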

Video Watermarking Algorithm for H.264 Scalable Video Coding

  • Lu, Jianfeng;Li, Li;Yang, Zhenhua
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.1 / pp.56-67 / 2013
  • Because H.264/SVC can meet the needs of different networks and user terminals, it has become increasingly popular. In this paper, we focus on the spatial resolution scalability of H.264/SVC and propose a blind video watermarking algorithm for the copyright protection of H.264/SVC coded video. The watermark is embedded before H.264/SVC encoding, and only the original enhancement layer sequence is watermarked. However, because the watermark is embedded into the average matrix of each macroblock, it can be detected in both the enhancement layer and the base layer after downsampling, video encoding, and video decoding. The proposed algorithm is evaluated using JSVM, and experimental results show that it is robust to H.264/SVC coding and has little influence on video quality.
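
The embedding idea described above, hiding one bit in each macroblock's average, can be illustrated with a generic mean-modulation scheme such as quantization index modulation (QIM). The block size, quantization step, and the QIM choice itself are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def embed_bit_in_block(block, bit, step=8.0):
    """Shift a block's mean onto an even/odd quantization lattice (QIM sketch).

    Because downsampling roughly preserves block averages, a mean-based mark
    can survive spatial downscaling to the base layer and re-encoding.
    """
    mean = block.mean()
    target = np.floor(mean / step) * step + (step / 2.0) * bit + step / 4.0
    return np.clip(block + (target - mean), 0, 255)

def extract_bit_from_block(block, step=8.0):
    """Recover the bit by checking which half of the lattice cell the mean falls in."""
    mean = block.mean()
    offset = mean - np.floor(mean / step) * step
    return int(offset > step / 2.0)

# Toy round trip on one 16x16 luma macroblock.
mb = np.random.randint(0, 256, (16, 16)).astype(np.float64)
marked = embed_bit_in_block(mb, bit=1)
assert extract_bit_from_block(marked) == 1
```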

Adaptive Regularized Enhancement of Wavelet Compressed Video (웨이블릿 압축 동영상의 정칙화 기반 적응적 개선에 관한 연구)

  • 정정훈;기현종;이성원;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.39-44 / 2004
  • The three-dimensional (3D) wavelet transform with motion compensation is suitable for very high quality video coding because of both spatial and temporal decorrelation. However, it still suffers from degradation such as ringing artifacts and afterimages caused by the loss of high-frequency components during quantization. This paper proposes an iterative regularized enhancement of motion-compensated 3D wavelet coded video, together with an adaptive implementation of the regularization constraints that selectively suppresses high-frequency components only along the corresponding edge directions.
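
A generic form of such a regularized restoration iteration is sketched below: each step pulls the estimate toward the decoded frame while a high-pass penalty suppresses ringing. The isotropic Laplacian stands in for the paper's adaptive, edge-direction-selective constraint, and the step sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def regularized_enhance(decoded, lam=0.05, beta=0.25, iters=30):
    """Iterative regularized enhancement sketch: x <- x + beta*((y - x) - lam*C(x)).

    decoded : float grayscale frame in [0, 1] from the wavelet decoder (y).
    C(x)    : a high-pass (Laplacian) penalty, a simplified stand-in for the
              paper's adaptive directional constraint.
    """
    laplacian = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=np.float64)
    x = decoded.copy()
    for _ in range(iters):
        penalty = convolve(x, laplacian, mode="nearest")
        x = x + beta * ((decoded - x) - lam * penalty)
    return np.clip(x, 0.0, 1.0)
```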

High-definition Video Enhancement Using Color Constancy Based on Scene Unit and Modified Histogram Equalization (장면단위 색채 항상성과 변형 히스토그램 평활화 방법을 이용한 고선명 동영상의 화질 향상 방법)

  • Cho, Dong-Chan;Kang, Hyung-Sub;Kim, Whoi-Yul
    • Journal of Broadcast Engineering / v.15 no.3 / pp.368-379 / 2010
  • As high-definition video is widely used in systems such as broadcasting and digital camcorders, a suitable method for improving its quality is needed. In this paper, we propose an efficient method to improve the color and contrast of high-definition video. To apply the image enhancement methods to high-definition video, a scaled-down version of the video is used, and the enhancement parameters are computed from this small-size video. To enhance color, we apply a color constancy method: the video is first divided into scenes by cut detection, and color constancy is then applied to each scene with the same parameters. To improve contrast, we blend the original image with its histogram-equalized image, where the blending weight is calculated from the sorted histogram bins. Finally, the performance of the proposed method is demonstrated experimentally.
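
The contrast step described above, blending the original frame with its histogram-equalized version using a weight derived from sorted histogram bins, can be sketched as follows. The specific weight formula (based on how concentrated the histogram is) and its bounds are assumptions for illustration, not the paper's rule.

```python
import numpy as np

def modified_histogram_equalization(gray):
    """Blend the original and equalized images with a histogram-derived weight.

    gray : uint8 grayscale frame.  The equalization weight shrinks when a few
    bins dominate the histogram, a rough stand-in for the sorted-bin rule.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = np.cumsum(hist).astype(np.float64)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    equalized = lut[gray]

    # Weight from sorted bins: share of pixels in the ~10% largest bins.
    sorted_bins = np.sort(hist)[::-1]
    concentration = sorted_bins[:26].sum() / hist.sum()
    w = np.clip(1.0 - concentration, 0.2, 0.8)   # assumed bounds

    blended = w * equalized.astype(np.float64) + (1.0 - w) * gray.astype(np.float64)
    return np.round(blended).astype(np.uint8)
```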

Adaptive Video Enhancement Algorithm for Military Surveillance Camera Systems (국방용 감시카메라를 위한 적응적 영상화질 개선 알고리즘)

  • Shin, Seung-Ho;Park, Youn-Sun;Kim, Yong-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.1 / pp.28-35 / 2014
  • Surveillance cameras along national borders and coastlines often suffer video distortion because of rapidly changing weather and lighting conditions, and enhancing the distorted video is essential for maintaining surveillance. In this paper, we propose an adaptive video enhancement algorithm for these varying environments. To address the unstable performance of existing methods, the proposed method is based on the Retinex algorithm and uses enhancement curves adapted to foggy and low-light conditions. In addition, we apply a weighted HSV color model to maintain color constancy and reduce noise, producing clear images. As a result, the proposed algorithm achieves well-balanced contrast enhancement and effective color restoration without quality loss compared with the existing algorithm. We expect this method to be used in surveillance camera systems and to contribute to reliable national defense.
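
As a reference point for the Retinex stage mentioned above, a minimal single-scale Retinex is sketched below; the paper's adaptive curves, fog handling, and weighted HSV step are not reproduced, and the Gaussian scale and normalization are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(gray, sigma=80.0):
    """Minimal single-scale Retinex: log(image) - log(estimated illumination).

    gray  : float grayscale frame in (0, 1].
    sigma : scale of the Gaussian illumination estimate (assumed value).
    """
    eps = 1e-6
    illumination = gaussian_filter(gray, sigma=sigma) + eps
    retinex = np.log(gray + eps) - np.log(illumination)
    # Stretch the result back to [0, 1] for display.
    r_min, r_max = retinex.min(), retinex.max()
    return (retinex - r_min) / (r_max - r_min + eps)
```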