• Title/Summary/Keyword: Video media


VIDEO INPAINTING ALGORITHM FOR A DYNAMIC SCENE

  • Lee, Sang-Heon;Lee, Soon-Young;Heu, Jun-Hee;Lee, Sang-Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.114-117 / 2009
  • A new video inpainting algorithm is proposed for removing unwanted objects or source errors from video data. In the first step, block bundles are defined from the motion information of the video data to preserve temporal consistency. Next, the block bundles are arranged in a 3-dimensional graph constructed from spatial and temporal correlation. Finally, we pose the inpainting problem as a discrete global optimization and minimize the objective function to find the best temporal bundles for the grid points. Extensive simulation results demonstrate that the proposed algorithm yields visually pleasing video inpainting results even in a dynamic scene.

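The final step of the abstract above, choosing one source bundle per grid point by minimizing a discrete global objective, can be sketched roughly as follows. This is an illustrative toy (exhaustive search over a 1-D chain of grid points, made-up data and smoothness costs), not the authors' implementation:

```python
# Hypothetical sketch: pick, for each grid point in the missing region,
# the candidate block bundle minimizing data cost plus a smoothness cost
# between neighboring grid points. All names and costs are illustrative.
from itertools import product

def inpaint_assign(grid_points, candidates, data_cost, smooth_cost):
    """Exhaustively minimize the sum of per-point data costs plus pairwise
    smoothness between consecutive grid points (a 1-D chain for brevity)."""
    best_labels, best_energy = None, float("inf")
    for labels in product(range(len(candidates)), repeat=len(grid_points)):
        energy = sum(data_cost(g, candidates[l])
                     for g, l in zip(grid_points, labels))
        energy += sum(smooth_cost(candidates[a], candidates[b])
                      for a, b in zip(labels, labels[1:]))
        if energy < best_energy:
            best_labels, best_energy = labels, energy
    return [candidates[l] for l in best_labels], best_energy

# Toy run: candidates are mean intensities of block bundles; the data cost
# prefers bundles near a target intensity, smoothness prefers similar
# neighbors.
bundles = [10.0, 50.0, 52.0]
assign, energy = inpaint_assign(
    grid_points=[0, 1],
    candidates=bundles,
    data_cost=lambda g, c: abs(c - 50.0),
    smooth_cost=lambda a, b: abs(a - b))
```

A real system would replace the exhaustive search with a graph-based discrete optimizer, since the label space grows exponentially in the number of grid points.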

IMPLEMENTATION EXPERIMENT OF VTP BASED ADAPTIVE VIDEO BIT-RATE CONTROL OVER WIRELESS AD-HOC NETWORK

  • Ujikawa, Hirotaka;Katto, Jiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.668-672 / 2009
  • In wireless ad-hoc networks, knowing the available bandwidth of the time-varying channel is imperative for live video streaming applications, because the available bandwidth varies constantly and is tightly constrained relative to the large data volumes of video streaming. Additionally, adapting the encoding rate to a bit-rate suitable for the network reduces loss and delay, since an overlarge encoding rate induces congestion loss and playback delay. While some effective rate control methods such as VTP (Video Transport Protocol) [1] have been proposed and perform well in simulation, implementing them to cooperate with the encoder and tuning their parameters remain challenging tasks. In this paper, we present the results of an implementation experiment of a VTP-based encoding rate control method and introduce some of our parameter-tuning techniques for a video streaming application over a wireless environment.

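The adaptation loop described above, matching the encoder's target bit-rate to an estimated available bandwidth while backing off on loss, might be sketched like this. The margin, back-off, and step constants are illustrative assumptions, not VTP's actual parameters:

```python
# Illustrative sketch (not the VTP specification): keep the encoder target
# a safety margin below the bandwidth estimate, probe upward additively,
# and cut the rate multiplicatively when congestion loss is observed.
def next_bitrate(current_kbps, est_bandwidth_kbps, loss_seen,
                 margin=0.85, backoff=0.5, step_kbps=100.0):
    if loss_seen:
        return current_kbps * backoff              # congestion: cut sharply
    target = est_bandwidth_kbps * margin           # stay below the estimate
    return min(current_kbps + step_kbps, target)   # additive upward probe

rate = 1000.0
rate = next_bitrate(rate, 2000.0, loss_seen=False)   # 1100.0
rate = next_bitrate(rate, 2000.0, loss_seen=True)    # 550.0
```

In practice the new target would be handed to the encoder's rate controller each estimation interval; the tuning difficulty the authors mention lies precisely in choosing such constants for a real encoder and channel.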

Graphical Video Representation for Scalability

  • Jinzenji, Kumi;Kasahara, Hisashi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1996.06b / pp.29-34 / 1996
  • This paper proposes a new concept in video called Graphical Video, a content-based and scalable video representation. A video consists of several elements such as moving images, still images, graphics, characters, and charts. All of these elements can be represented graphically except moving images, so it is desirable to transform moving images into graphical elements that can be treated in the same way as the others. To achieve this, we propose a new graphical representation of moving images using spatio-temporal clusters, which consist of texture and contours. The texture is described by three-dimensional fractal coefficients, while the contours are described by polygons. We propose a method that gives the domain-pool location and size as a means to describe cluster texture within or near a region of clusters. Results of an experiment on texture quality confirm that the method provides a sufficiently high SNR compared with the original three-dimensional fractal approximation.


A Novel Selective Frame Discard Method for 3D Video over IP Networks

  • Chung, Young-Uk
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.6 / pp.1209-1221 / 2010
  • Three dimensional (3D) video is expected to be an important application for broadcast and IP streaming services. One of the main limitations for the transmission of 3D video over IP networks is network bandwidth mismatch due to the large size of 3D data, which causes fatal decoding errors and mosaic-like damage. This paper presents a novel selective frame discard method to address the problem. The main idea of the proposed method is the symmetrical discard of the two dimensional (2D) video frame and the depth map frame. Also, the frames to be discarded are selected after additional consideration of the playback deadline, the network bandwidth, and the inter-frame dependency relationship within a group of pictures (GOP). It enables the efficient utilization of the network bandwidth and high quality 3D IPTV service. The simulation results demonstrate that the proposed method enhances the media quality of 3D video streaming even in the case of bad network conditions.
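The symmetric-discard idea above can be sketched as a small policy: when a GOP's total size exceeds the bandwidth budget, drop the same frame index from both the 2-D and depth-map streams, preferring frames nothing depends on (B frames first, then P, never I). The frame types and sizes below are illustrative, not the paper's exact scheduling:

```python
# A minimal sketch of selective, symmetric frame discard for 3D video.
DISCARD_ORDER = {"B": 0, "P": 1, "I": 2}   # lower value = discard first

def select_discards(gop, budget):
    """gop: list of (frame_type, size_2d, size_depth). Returns the frame
    indices to drop from BOTH streams so the remainder fits the budget."""
    total = sum(s2 + sd for _, s2, sd in gop)
    # Candidate order: B frames before P before I; later frames first.
    order = sorted(range(len(gop)),
                   key=lambda i: (DISCARD_ORDER[gop[i][0]], -i))
    dropped = []
    for i in order:
        if total <= budget or gop[i][0] == "I":
            break  # fits already, or only I frames remain as candidates
        total -= gop[i][1] + gop[i][2]   # symmetric: 2-D + depth together
        dropped.append(i)
    return sorted(dropped)

# Toy GOP: one I, one P, two B frames; budget forces dropping both Bs.
dropped = select_discards(
    [("I", 10, 10), ("B", 2, 2), ("P", 5, 5), ("B", 2, 2)], budget=30)
```

Dropping the 2-D frame and its depth-map counterpart together is what keeps the left/right view synthesis consistent; an asymmetric drop would leave a depth map with no matching texture.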

Object segmentation and object-based surveillance video indexing

  • Kim, Jin-Woong;Kim, Mun-Churl;Lee, Kyu-Won;Kim, Jae-Gon;Ahn, Chie-Teuk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.165.1-170 / 1999
  • Object segmentation from natural video scenes has recently become a very active research topic due to the object-based video coding standard MPEG-4. Object detection and isolation is also useful for object-based indexing and search of video content, which is a goal of the emerging standard MPEG-7. In this paper, an automatic segmentation method for moving objects in image sequences is presented which is applicable to multimedia content authoring for MPEG-4, and two different segmentation approaches suitable for surveillance applications are addressed, in the raw data and compressed bitstream domains. We also propose an object-based video description scheme built on object segmentation for video indexing purposes.

Flowing Water Editing and Synthesis Based on a Dynamic Texture Model

  • Zhang, Qian;Lee, Ki-Jung;WhangBo, Taeg-Keun
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.729-736 / 2008
  • Using video synthesis to depict flowing water is useful in virtual reality, computer games, digital movies, and scientific computing. This paper presents a novel algorithm for synthesizing dynamic water scenes from a sample video based on a dynamic texture model. We treat the video sample as a 2-D texture image and analyze it automatically, based on the dynamic texture model, to obtain textons. We then use a linear dynamic system (LDS) to describe the characteristics of each texton. Using these textons, we synthesize a new, prolonged video of dynamic flowing water that is visually non-fuzzy. Tests on several video samples demonstrate the effectiveness and efficiency of our method compared with classical approaches.

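The linear dynamic system mentioned above evolves a hidden state x and emits one observation per synthesized frame: x_{t+1} = A x_t (plus noise), y_t = C x_t. A hedged sketch, with tiny hand-picked matrices rather than parameters learned from a real water video, and noise omitted for clarity:

```python
# Toy LDS roll-out: state update x_{t+1} = A x_t, observation y_t = C x_t.
# A and C here are illustrative constants, not fitted texton parameters.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def synthesize(A, C, x0, n_frames):
    """Roll the LDS forward deterministically, emitting one observation
    vector per synthesized frame."""
    frames, x = [], x0
    for _ in range(n_frames):
        frames.append(matvec(C, x))   # observation y_t = C x_t
        x = matvec(A, x)              # state update x_{t+1} = A x_t
    return frames

A = [[0.0, 1.0], [-1.0, 0.0]]   # 90-degree rotation: oscillating dynamics
C = [[1.0, 0.0]]                # observe the first state component
frames = synthesize(A, C, x0=[1.0, 0.0], n_frames=4)
# frames cycles through [[1.0], [0.0], [-1.0], [0.0]]
```

In the dynamic texture setting, A and C are learned per texton from the sample video (e.g. via subspace identification), and driving the system with noise is what makes the synthesized water prolonged yet non-repetitive.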

Standardization Trend of 3DoF+ Video for Immersive Media

  • Lee, G.S.;Jeong, J.Y.;Shin, H.C.;Seo, J.I.
    • Electronics and Telecommunications Trends / v.34 no.6 / pp.156-163 / 2019
  • As a primitive immersive video technology, a three degrees of freedom (3DoF) 360° video can currently render viewport images that depend on the rotational movements of the viewer. However, rendering a flat 360° video, that is, one supporting head rotations only, may cause visual discomfort, especially when objects close to the viewer are rendered. 3DoF+ extends head rotation for a seated viewer with horizontal, vertical, and depth translations. The 3DoF+ 360° video is positioned between 3DoF and six degrees of freedom, and can realize motion parallax with relatively simple virtual reality software in head-mounted displays. This article introduces the standardization trends for 3DoF+ video in the MPEG-I visual group.

Quality of Experience Experiment Method and Statistical Analysis for 360-degree Video with Sensory Effect

  • Jin, Hoe-Yong;Kim, Sang-Kyun
    • Journal of Broadcast Engineering / v.25 no.7 / pp.1063-1072 / 2020
  • This paper proposes a quality-of-experience experiment method for measuring how applying sensory effects to 360-degree video influences participants' immersion, satisfaction, and presence. Participants watch 360-degree videos on an HMD while receiving sensory effects from scent-diffusing and wind devices, and then complete a questionnaire on their immersion, satisfaction, and sense of presence for the videos watched. Correlation analysis of the survey results showed that providing sensory effects increases satisfaction with 360-degree video viewing, and that the experimental method was appropriate. In addition, the P.910 method was found to be unsuitable for measuring the immersion and presence of 360-degree video when sensory effects are provided.
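The correlation analysis of paired questionnaire scores mentioned above amounts to computing, e.g., the Pearson coefficient between two sets of Likert ratings. A small sketch with made-up ratings (the variable names and values are illustrative, not the study's data):

```python
# Pearson correlation between two hypothetical sets of 5-point Likert
# ratings (e.g. immersion vs. satisfaction per participant).
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

immersion    = [3, 4, 5, 4, 2]   # hypothetical ratings, one per participant
satisfaction = [2, 4, 5, 5, 1]
r = pearson(immersion, satisfaction)   # strongly positive correlation
```

With ordinal Likert data a rank-based coefficient (Spearman) is often preferred; Pearson is shown here only because it is the simplest to state self-contained.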

Implementing VVC Tile Extractor for 360-degree Video Streaming Using Motion-Constrained Tile Set

  • Jeong, Jong-Beom;Lee, Soonbin;Kim, Inae;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.25 no.7 / pp.1073-1080 / 2020
  • 360-degree video streaming technologies have been widely developed to provide immersive virtual reality (VR) experiences. However, high computational power and bandwidth are required to transmit and render high-quality 360-degree video through a head-mounted display (HMD). One way to overcome this problem is to transmit only the viewport areas in high quality. This paper therefore proposes a motion-constrained tile set (MCTS)-based tile extractor for versatile video coding (VVC). The proposed extractor extracts high-quality viewport tiles, which are simulcast with the low-quality whole video to respond to unexpected movements by the user. The experimental results demonstrate a 24.81% Bjøntegaard delta rate (BD-rate) saving for the luma peak signal-to-noise ratio (PSNR) compared with a VVC anchor without tiled streaming.
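The viewport-tile selection underlying such an extractor can be sketched geometrically: split the equirectangular frame into a tile grid and mark as high quality the tiles whose centers fall inside the viewer's current field of view, with the whole frame simulcast in low quality as a fallback. The grid size and field of view below are illustrative assumptions:

```python
# Simplified viewport-dependent tile selection on an equirectangular grid.
def viewport_tiles(cols, rows, yaw_deg, pitch_deg, fov_deg=90.0):
    """Return (col, row) pairs of tiles whose centers lie in the viewport."""
    tile_w, tile_h = 360.0 / cols, 180.0 / rows
    selected = []
    for r in range(rows):
        for c in range(cols):
            cx = -180.0 + (c + 0.5) * tile_w   # tile-center longitude
            cy = 90.0 - (r + 0.5) * tile_h     # tile-center latitude
            # Wrap longitude difference into [-180, 180) for the seam.
            dyaw = (cx - yaw_deg + 180.0) % 360.0 - 180.0
            if abs(dyaw) <= fov_deg / 2 and abs(cy - pitch_deg) <= fov_deg / 2:
                selected.append((c, r))
    return selected

# Viewer looking straight ahead on a 4x2 grid: the central tiles qualify.
high_quality = viewport_tiles(cols=4, rows=2, yaw_deg=0.0, pitch_deg=0.0)
```

A real extractor would additionally include a guard ring of neighboring tiles, since head motion between requests otherwise exposes the low-quality fallback at the viewport edge.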

Video Palmprint Recognition System Based on Modified Double-line-single-point Assisted Placement

  • Wu, Tengfei;Leng, Lu
    • Journal of Multimedia Information System / v.8 no.1 / pp.23-30 / 2021
  • Palmprint has become a popular biometric modality; however, palmprint recognition has not previously been conducted on video media. Video palmprint recognition (VPR) has some advantages that are absent in image palmprint recognition: registration and recognition can be implemented automatically without users' manual manipulation, and a good-quality image can be selected from the video frames or generated from the fusion of multiple frames. VPR in contactless mode overcomes several problems caused by contact mode; however, contactless mode, especially mobile mode, encounters several severe challenges. The double-line-single-point (DLSP) assisted placement technique can overcome these challenges and effectively reduce localization error and computational complexity. This paper modifies the DLSP technique to reduce the invalid area in the frames. In addition, valid frames, in which users place their hands correctly, are selected according to a finger-gap judgement, and key frames of good quality are then selected from the valid frames as gallery samples to be matched against query samples for the authentication decision. The VPR algorithm runs on a system designed and developed for a mobile device.
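The frame-selection stage described above, keep only valid frames (hand placed correctly per the finger-gap judgement), then take the best-quality ones as gallery key frames, can be sketched as follows. The boolean validity flags and scalar quality scores stand in for the paper's actual finger-gap and image-quality measures:

```python
# Illustrative key-frame selection for video palmprint recognition.
def select_key_frames(frames, k=2):
    """frames: list of (frame_id, gap_ok, quality). Returns the ids of the
    k valid frames with the highest quality scores."""
    valid = [f for f in frames if f[1]]              # finger-gap judgement
    valid.sort(key=lambda f: f[2], reverse=True)     # best quality first
    return [f[0] for f in valid[:k]]

# Toy run: frame 1 fails the gap check; frames 2 and 3 are the sharpest
# of the remaining valid frames.
frames = [(0, True, 0.4), (1, False, 0.9), (2, True, 0.8), (3, True, 0.6)]
gallery = select_key_frames(frames)
```

In the real system both checks would be computed from the frame pixels (gap detection on the DLSP guide lines, a sharpness or exposure metric for quality) before matching the gallery against query samples.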