• Title/Summary/Keyword: 비디오 복원 (Video Restoration)

A Method for Recovering Text Regions in Video using Extended Block Matching and Region Compensation (확장적 블록 정합 방법과 영역 보상법을 이용한 비디오 문자 영역 복원 방법)

  • 전병태;배영래
    • Journal of KIISE:Software and Applications / v.29 no.11 / pp.767-774 / 2002
  • Conventional research on image restoration has focused on restoring degraded images resulting from image formation, storage and communication, mainly in the signal processing field. Related research on recovering the original image information of caption regions includes a method using the BMA (block matching algorithm). That method suffers from frequent incorrect matches and from propagation of the resulting matching errors. Moreover, it cannot recover the frames between two scene changes when scene changes occur more than twice. In this paper, we propose a method for recovering original images using an EBMA (Extended Block Matching Algorithm) and a region compensation method. To support the recovery, the method first extracts a priori knowledge such as scene-change, camera-motion and caption-region information. It then decides the direction of recovery using the extracted caption information (the start and end frames of a caption) and the scene-change information. According to the direction of recovery, the recovery is performed in units of character components using the EBMA and the region compensation method. Experimental results show that the EBMA achieves good recovery regardless of the speed of moving objects and the complexity of the background in the video. The region compensation method successfully recovers original images even when there is no information about the original image to refer to.
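
The block-matching idea underlying BMA/EBMA can be illustrated with a minimal sketch. The following Python example performs a basic SAD (sum of absolute differences) block search over a small window on grayscale frames; it is a generic illustration with made-up function and parameter names, not the extended EBMA or the region compensation method proposed in the paper.

```python
import numpy as np

def sad_block_search(ref_frame, cur_frame, top, left, block=16, radius=8):
    """Find the displacement in ref_frame that best matches the block at
    (top, left) in cur_frame, using the sum of absolute differences (SAD).
    Generic block-matching sketch for 2D grayscale arrays, not the paper's EBMA."""
    h, w = cur_frame.shape
    target = cur_frame[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_vec = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls outside the reference frame
            cand = ref_frame[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec, best_sad

# Toy usage: match a block of a synthetic frame against a shifted copy.
cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, shift=(2, -3), axis=(0, 1))
vec, sad = sad_block_search(ref, cur, top=24, left=24)
print("estimated displacement:", vec, "SAD:", sad)
```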

Video Augmentation of Virtual Object by Uncalibrated 3D Reconstruction from Video Frames (비디오 영상에서의 비보정 3차원 좌표 복원을 통한 가상 객체의 비디오 합성)

  • Park Jong-Seung;Sung Mee-Young
    • Journal of Korea Multimedia Society / v.9 no.4 / pp.421-433 / 2006
  • This paper proposes a method to insert virtual objects into a real video stream based on feature tracking and camera pose estimation from a set of single-camera video frames. To insert or modify 3D shapes in target video frames, the transformation from the 3D objects to the projection of the objects onto the video frames must be recovered. It is shown that, without a camera calibration process, 3D reconstruction is possible using multiple images from a single camera under fixed internal camera parameters. The proposed approach is based on a simplification of the matrix of intrinsic camera parameters and the use of projective geometry. The method is particularly useful for augmented reality applications that insert or modify models in a real video stream. The proposed method is based on a linear parameter estimation approach for the auto-calibration step, which enhances stability and reduces execution time. Several experimental results are presented on real-world video streams, demonstrating the usefulness of the method for augmented reality applications.
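
A minimal sketch of the pinhole projection relation that such uncalibrated augmentation ultimately relies on is shown below, assuming a simplified intrinsic matrix with a single focal length and a known principal point. All names and values are illustrative; this is not the paper's auto-calibration procedure.

```python
import numpy as np

def project_points(points_3d, f, cx, cy, R, t):
    """Project Nx3 world points onto the image plane with a simplified
    intrinsic matrix K = [[f, 0, cx], [0, f, cy], [0, 0, 1]].
    Illustrates the projective relation x ~ K [R | t] X used when
    augmenting video with virtual 3D objects (sketch, not the paper's code)."""
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    X_cam = points_3d @ R.T + t          # world -> camera coordinates
    x_hom = X_cam @ K.T                  # camera -> homogeneous image coords
    return x_hom[:, :2] / x_hom[:, 2:3]  # perspective division

# Toy usage: project the corners of a unit cube placed in front of the camera.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (4, 5)], float)
R = np.eye(3)                  # identity rotation (camera looks down +Z)
t = np.array([0.0, 0.0, 0.0])  # no translation
print(project_points(cube, f=800.0, cx=320.0, cy=240.0, R=R, t=t))
```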

Reversible Watermarking based Video Contents Management and Control technique using Biological Organism Model (생물학적 유기체 모델을 이용한 가역 워터마킹 기반 비디오 콘텐츠 관리 및 제어 기법)

  • Jang, Bong-Joo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.16 no.7 / pp.841-851 / 2013
  • The infectious information hiding system (IIHS) is proposed for the secure distribution of high-quality video contents by applying optimized watermark embedding and detection algorithms to video codecs, where the watermark, as infectious information, is transmitted while the target video is displayed or edited by the codecs. This paper proposes a fast and effective reversible watermarking and infectious-information generation scheme for IIHS. The reversible watermarking scheme enables the video decoder to actively control video quality and watermark strength by adding a control code and an expiration date to the watermark. The scheme is also designed with low computational complexity to satisfy real-time processing in a video codec, and to prevent time or frame delay during watermark detection and video restoration, one watermark and one piece of side information are embedded within a macroblock. Experimental results verify that the scheme satisfies real-time watermark embedding and detection and that the watermark error is 0% after reversible watermark detection. Finally, we confirm that the quality of the restored video contents is almost the same as that of video compressed without the watermarking algorithm.
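
The paper's scheme is codec-specific, but the reversibility property it relies on (exact recovery of the original samples once the watermark is extracted) can be illustrated with classic difference expansion on pixel pairs. The sketch below is a standard textbook technique with illustrative names, not the authors' macroblock-level algorithm, and it omits overflow/underflow handling.

```python
import numpy as np

def de_embed(pixels, bits):
    """Embed one bit per pixel pair by difference expansion (Tian's method).
    Reversible: de_extract() returns both the bits and the exact original
    pixels.  Overflow/underflow handling is omitted for brevity."""
    out = pixels.astype(np.int64)
    for i, b in enumerate(bits):
        x, y = out[2 * i], out[2 * i + 1]
        l, h = (x + y) // 2, x - y
        h2 = 2 * h + int(b)                     # expand the difference, hide the bit
        out[2 * i] = l + (h2 + 1) // 2
        out[2 * i + 1] = l - h2 // 2
    return out

def de_extract(marked, n_bits):
    """Recover the embedded bits and restore the original pixel values."""
    restored = marked.astype(np.int64)
    bits = []
    for i in range(n_bits):
        x2, y2 = restored[2 * i], restored[2 * i + 1]
        l, h2 = (x2 + y2) // 2, x2 - y2
        bits.append(int(h2 % 2))                # the hidden bit
        h = h2 // 2                             # original difference
        restored[2 * i] = l + (h + 1) // 2
        restored[2 * i + 1] = l - h // 2
    return bits, restored

# Toy usage: embed 4 bits into 8 samples and verify exact restoration.
original = np.array([120, 118, 64, 70, 200, 199, 35, 40])
watermark = [1, 0, 1, 1]
marked = de_embed(original, watermark)
bits, restored = de_extract(marked, len(watermark))
print(bits, np.array_equal(restored, original))   # -> [1, 0, 1, 1] True
```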

A Method of Estimating Distortion in Pixel-Domain Wyner-Ziv Residual Video Coding (화면 간 차이신호의 화소영역 위너-지브 비디오 부호화 기법에서 왜곡 예측방법)

  • Kim, Jin-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.4 / pp.891-898 / 2014
  • DVC (Distributed Video Coding) provides a theoretical basis for implementing a lightweight video encoder. Most previous studies have focused on the Stanford University codec scheme, which uses a feedback channel to finely control the bit rate. However, that codec scheme cannot evaluate the quality of the frames reconstructed from the received parity bits at the decoder side. This paper presents an efficient method of estimating distortion by correcting the virtual channel noise in the side information, thereby facilitating measurement of the visual quality. Several simulations show that the proposed method is very efficient in estimating the visual quality of the reconstructed WZ frames.
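
A common ingredient of such decoder-side estimates is a Laplacian model of the virtual channel noise between the original WZ frame and its side information. The sketch below fits the Laplacian parameter from a residual and converts it to an estimated MSE/PSNR; it is a generic DVC-style illustration on synthetic data, not the specific estimator proposed in the paper.

```python
import numpy as np

def laplacian_alpha(residual):
    """Fit the Laplacian virtual-channel-noise parameter alpha from a residual:
    for f(n) = (alpha/2) * exp(-alpha*|n|), E[|n|] = 1/alpha."""
    mad = np.abs(residual).mean()
    return 1.0 / max(mad, 1e-6)

def estimated_mse_and_psnr(residual, peak=255.0):
    """Estimate the distortion implied by the Laplacian model (E[n^2] = 2/alpha^2)
    and convert it to an estimated PSNR.  Generic sketch, not the paper's method."""
    alpha = laplacian_alpha(residual)
    mse = 2.0 / alpha ** 2
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr

# Toy usage: synthesize a Laplacian "virtual channel" residual between the
# original WZ frame and its side information, then check the estimate.
rng = np.random.default_rng(0)
residual = rng.laplace(loc=0.0, scale=4.0, size=(144, 176))  # scale b -> Var = 2*b^2
mse_est, psnr_est = estimated_mse_and_psnr(residual)
print(f"estimated MSE: {mse_est:.2f} (true: {2 * 4.0**2:.2f}), PSNR: {psnr_est:.2f} dB")
```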

Phase-only Hologram Video Compression Method Using Deep Learning-Based Restoration Network (딥러닝 기반의 복원 네트워크을 사용한 위상 홀로그램 비디오 압축 방법)

  • Kim, Woosuk;Kang, Ji-Won;Oh, Kwan-Jung;Kim, Jin-Woong;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.93-94 / 2021
  • This study proposes a method that uses a deep learning-based restoration model to recover the quality of phase-only holograms degraded by video compression. For compression efficiency, the resolution of the phase hologram is reduced before compression. The hologram, upscaled back to its original resolution, is then restored using the deep learning model. The restored phase hologram shows a higher PSNR at the same BPP than compressing the original hologram directly.
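
The comparison above is made at equal BPP (bits per pixel) using PSNR. A minimal sketch of how those two quantities are computed when comparing the direct-compression and downscale-restore pipelines is given below; the arrays, noise levels, and bitstream size are placeholders standing in for real codec and network output.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two images (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bits_per_pixel(compressed_size_bytes, width, height, n_frames=1):
    """Bits per pixel of a compressed bitstream at a given resolution."""
    return compressed_size_bytes * 8.0 / (width * height * n_frames)

# Hypothetical comparison at equal BPP: direct compression of the full-resolution
# hologram vs. the downscale -> compress -> upscale -> restore pipeline described
# above.  The noise models below are placeholders, not actual codec output.
rng = np.random.default_rng(1)
original = rng.integers(0, 256, (1024, 1024), dtype=np.uint8)   # 8-bit phase hologram
direct_decoded = np.clip(original + rng.normal(0, 12, original.shape), 0, 255)
restored_decoded = np.clip(original + rng.normal(0, 8, original.shape), 0, 255)

bpp = bits_per_pixel(180_000, 1024, 1024)
print("direct   :", round(psnr(original, direct_decoded), 2), "dB at", round(bpp, 3), "bpp")
print("restored :", round(psnr(original, restored_decoded), 2), "dB at", round(bpp, 3), "bpp")
```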

3D Human Reconstruction from Video using Quantile Regression (분위 회귀 분석을 이용한 비디오로부터의 3차원 인체 복원)

  • Han, Jisoo;Park, In Kyu
    • Journal of Broadcast Engineering / v.24 no.2 / pp.264-272 / 2019
  • In this paper, we propose a 3D human body reconstruction and refinement method that operates on the frames extracted from a video to obtain natural and smooth motion in the temporal domain. Individual frames extracted from the video are fed into a convolutional neural network to estimate the locations of the joints and the silhouette of the human body. This is done by projecting a parameter-based 3D deformable model onto the 2D image and estimating the optimal parameter values. If the reconstruction process for each frame is performed independently, the temporal consistency of human pose and shape cannot be guaranteed, yielding inaccurate results. To alleviate this problem, the proposed method analyzes and interpolates the principal component parameters of the 3D morphable model reconstructed from each individual frame. Experimental results show that erroneous frames are corrected and refined by utilizing the relationship between the previous and next frames, producing an improved 3D human reconstruction.
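
The temporal refinement step (fitting a robust trend to the per-frame principal-component parameters so that outlier frames are corrected) can be sketched with ordinary quantile regression. The example below uses statsmodels' QuantReg to fit a median-regression polynomial to one synthetic parameter track; the polynomial model, degree, and data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
import statsmodels.api as sm

def smooth_parameter_track(values, q=0.5, degree=3):
    """Fit a polynomial quantile regression (median, q=0.5) to a per-frame
    parameter track and return the fitted trend.  Median regression is robust
    to the occasional badly reconstructed frame (outlier)."""
    frames = np.arange(len(values), dtype=np.float64)
    # Polynomial design matrix: [1, t, t^2, ..., t^degree] on normalized time.
    X = np.vander(frames / len(values), N=degree + 1, increasing=True)
    fit = sm.QuantReg(values, X).fit(q=q)
    return fit.predict(X)

# Toy usage: a smoothly varying PCA shape coefficient with two outlier frames.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 60)
track = np.sin(2 * np.pi * t) + rng.normal(0, 0.05, t.size)
track[[20, 41]] += 2.5                      # simulate erroneous reconstructions
refined = smooth_parameter_track(track)
print("max deviation from clean signal after refinement:",
      np.abs(refined - np.sin(2 * np.pi * t)).max())
```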

A Study on Video Analysis using Re-constructing of Motion Vector on MPEG Compressed Domain (MPEG 에서의 움직임 벡터 재구성을 이용한 비디오 해석 기법 연구)

  • 김낙우;김태용;강응관;최종수
    • Proceedings of the IEEK Conference / 2001.09a / pp.685-688 / 2001
  • This paper presents a method that re-estimates the motion vectors of the various prediction types appearing in MPEG video so that, regardless of frame type, they all have a single prediction direction, and then uses them directly for analyzing video content. It also proposes object extraction and tracking techniques within a video sequence that use the re-estimated motion vectors of each frame. The proposed algorithm operates on macroblock regions that can easily be extracted from the compressed video domain, without going through an image reconstruction (decoding) process, and the experimental results demonstrate the high performance of the proposed method.
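
A simple way to map motion vectors of different prediction types onto a single forward direction is to flip backward-predicted vectors and scale all vectors by their temporal reference distance. The sketch below shows this generic compressed-domain normalization with hypothetical inputs; it is not the paper's exact re-estimation scheme.

```python
import numpy as np

def normalize_motion_vector(mv, ref_dist, prediction, target_dist=1):
    """Convert a macroblock motion vector of any prediction type into a forward
    vector over one frame interval, by scaling with the temporal distance to its
    reference frame and flipping backward-predicted vectors.
    Generic normalization sketch, not the paper's exact scheme."""
    mv = np.asarray(mv, dtype=np.float64)
    if prediction == "backward":
        mv = -mv                       # backward reference -> flip direction
    return mv * (target_dist / float(ref_dist))

# Toy usage: macroblocks with different prediction types and reference distances.
print(normalize_motion_vector((6, -2), ref_dist=3, prediction="forward"))   # [ 2.  -0.67]
print(normalize_motion_vector((-4, 8), ref_dist=2, prediction="backward"))  # [ 2.  -4. ]
```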

Non-rigid 3D Shape Recovery from Stereo 2D Video Sequence (스테레오 2D 비디오 영상을 이용한 비정형 3D 형상 복원)

  • Koh, Sung-shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.2 / pp.281-288 / 2016
  • Natural moving objects are mostly non-rigid shapes with randomly time-varying deformation, and their types are also very diverse. Methods for non-rigid shape reconstruction have been widely applied in the movie and game industries in recent years. However, realistic approaches require many beacon markers to be attached to the moving object. To resolve this drawback, non-rigid shape reconstruction from input video without beacon sets has been investigated in multimedia application fields. In this regard, this paper proposes a novel CPSRF (Chained Partial Stereo Rigid Factorization) algorithm that can reconstruct a non-rigid 3D shape. The method is focused on the per-frame, real-time reconstruction of non-rigid 3D shape and motion from stereo 2D video sequences, and it does not constrain the deformation of the time-varying non-rigid shape to follow a Gaussian distribution. The experimental results show that the 3D reconstruction performance of the proposed CPSRF method is superior to that of a previous method which does not consider the random deformation of shape.
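
CPSRF chains partial rigid factorizations; the classical rank-3 rigid factorization baseline that such methods build on (Tomasi-Kanade style, orthographic and rigid-only) can be sketched as follows. This is background illustration only, not the proposed CPSRF algorithm.

```python
import numpy as np

def rigid_factorization(W):
    """Classical rank-3 factorization of a measurement matrix W (2F x P) of
    2D point tracks into motion (2F x 3) and shape (3 x P), up to an affine
    ambiguity.  Orthographic, rigid-only baseline; the paper's CPSRF extends
    the idea to chained partial stereo factorizations of non-rigid shapes."""
    W_centred = W - W.mean(axis=1, keepdims=True)   # remove per-row translation
    U, s, Vt = np.linalg.svd(W_centred, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                   # motion (camera) matrix
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]         # 3D shape in an affine frame
    return M, S

# Toy usage: orthographic projections of a rigid point cloud over 5 frames.
rng = np.random.default_rng(3)
points = rng.normal(size=(3, 20))                   # 20 rigid 3D points
W = np.vstack([np.linalg.qr(rng.normal(size=(3, 3)))[0][:2] @ points
               for _ in range(5)])                  # 2 rows (x, y) per frame
M, S = rigid_factorization(W)
print("reprojection error:",
      np.abs(M @ S - (W - W.mean(axis=1, keepdims=True))).max())
```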

Raindrop Removal and Background Information Recovery in Coastal Wave Video Imagery using Generative Adversarial Networks (적대적생성신경망을 이용한 연안 파랑 비디오 영상에서의 빗방울 제거 및 배경 정보 복원)

  • Huh, Dong;Kim, Jaeil;Kim, Jinah
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.1-9 / 2019
  • In this paper, we propose a video enhancement method using generative adversarial networks to remove raindrops and restore the background information in the removed regions of coastal wave video imagery distorted by raindrops during rainfall. Two experimental models are implemented: the Pix2Pix network, widely used for image-to-image translation, and Attentive GAN, which currently performs well for raindrop removal on single images. The models are trained on a public dataset of paired natural images with and without raindrops, and the trained models are evaluated on raindrop removal and background information recovery for rain-distorted coastal wave video imagery. To improve performance, we acquired a paired video dataset with and without raindrops at a real coast and applied transfer learning to the pre-trained models with this new dataset. The fine-tuned models show improved performance compared with the pre-trained models. Performance is evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), and the Pix2Pix network fine-tuned by transfer learning shows the best performance in reconstructing coastal wave video imagery distorted by raindrops.
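
The PSNR and SSIM evaluation mentioned above can be reproduced with scikit-image's standard metrics. The sketch below compares a restored frame against its clean reference using synthetic arrays as placeholders for real imagery (it assumes scikit-image >= 0.19 for the channel_axis argument).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_restoration(clean, restored):
    """Compare a restored video frame against its clean reference using the
    two metrics reported in the paper: PSNR and SSIM (higher is better)."""
    psnr = peak_signal_noise_ratio(clean, restored, data_range=255)
    ssim = structural_similarity(clean, restored, channel_axis=-1, data_range=255)
    return psnr, ssim

# Toy usage with synthetic frames standing in for clean / restored imagery.
rng = np.random.default_rng(4)
clean = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
noise = rng.integers(-6, 7, clean.shape)
restored = np.clip(clean.astype(np.int16) + noise, 0, 255).astype(np.uint8)
psnr, ssim = evaluate_restoration(clean, restored)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```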

Spatiotemporal Removal of Text in Image Sequences (비디오 영상에서 시공간적 문자영역 제거방법)

  • Lee, Chang-Woo;Kang, Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.113-130 / 2004
  • Most multimedia data contain text to emphasize the meaning of the data, to present additional explanations about the situation, or to translate between languages. However, this text makes it difficult to reuse the images and distorts not only the original images but also their meanings. Accordingly, this paper proposes a support vector machine (SVM) and spatiotemporal restoration-based approach for automatic text detection and removal in video sequences. Given two consecutive frames, text regions in the current frame are first detected by an SVM-based texture classifier. Second, two stages are performed to restore the regions occluded by the detected text regions: temporal restoration across consecutive frames and spatial restoration within the current frame. Utilizing text motion and background differences, an input video sequence is classified and a suitable temporal restoration scheme is applied to it. Such a combination of temporal and spatial restoration shows great potential for the automatic detection and removal of objects of interest in various kinds of video sequences, and is applicable to many applications such as the translation of captions and the replacement of indirect advertisements in videos.
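
The temporal restoration stage described above, i.e., filling pixels occluded by text in the current frame with co-located pixels from neighbouring frames where no text was detected, can be sketched as follows. The frames and masks are synthetic placeholders, and the SVM-based detection stage is assumed to have already produced the text masks.

```python
import numpy as np

def temporal_restore(cur_frame, text_mask, neighbor_frames, neighbor_masks):
    """Fill pixels covered by text in the current frame using the co-located
    pixels of neighbouring frames where no text was detected.  Pixels that
    remain covered in every neighbour are left for spatial restoration.
    Simplified sketch of the temporal stage; the SVM detector that produces
    text_mask is assumed to have run already."""
    restored = cur_frame.copy()
    unfilled = text_mask.copy()
    for frame, mask in zip(neighbor_frames, neighbor_masks):
        usable = unfilled & ~mask            # occluded here, visible in the neighbour
        restored[usable] = frame[usable]
        unfilled &= mask                     # still occluded in every frame seen so far
    return restored, unfilled                # unfilled -> handled by spatial inpainting

# Toy usage: a caption occupies different rows in consecutive synthetic frames.
h, w = 48, 64
background = np.full((h, w), 128, dtype=np.uint8)
cur = background.copy();  cur[30:40, :] = 255          # caption in current frame
prev = background.copy(); prev[5:15, :] = 255          # caption elsewhere in previous frame
cur_mask = np.zeros((h, w), bool);  cur_mask[30:40, :] = True
prev_mask = np.zeros((h, w), bool); prev_mask[5:15, :] = True
restored, leftover = temporal_restore(cur, cur_mask, [prev], [prev_mask])
print("pixels still unfilled:", int(leftover.sum()))   # 0 -> fully restored temporally
```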