• Title/Summary/Keyword: Video media


Video Quality for DTV Essential Hidden Area Utilization

  • Han, Chan-Ho
    • Journal of Multimedia Information System / v.4 no.1 / pp.19-26 / 2017
  • The compression of full HD and UHD video requires extra vertical lines to be added to every video frame, termed the DTV essential hidden area (DEHA), for the MPEG-2/4/H encoder, stream, and decoder to function effectively. Although the encoding/decoding process depends on the DEHA, it is conventionally regarded as redundancy in terms of channel utilization and storage efficiency. This paper proposes a block mode DEHA method to utilize the DEHA more effectively. Partitioning the video into block images and then evenly filling the representative DEHA macroblocks with the average DC coefficient of the adjacent active video macroblock minimizes the amount of DEHA data entering the compressed video stream. Theoretically, this process results in less DEHA data entering the video stream. Experimental testing of the proposed block mode DEHA method revealed a slight improvement in the quality of the active video. Beyond this improvement in video quality, the proposed DEHA method is also attractive because of the ease with which it can be implemented with existing video encoders.
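
The DC-fill idea in the abstract can be illustrated with a small sketch (illustrative only, not the paper's implementation; the 16×16 macroblock size and luma-only frames are assumptions): the hidden rows appended below the active picture are filled with the mean luma of the adjacent active macroblock, so each padded block compresses to little more than a single DC coefficient.

```python
MB = 16  # macroblock size assumed in this sketch

def pad_hidden_area(frame, active_rows):
    """frame: list of rows (lists of luma samples); rows at index
    >= active_rows form the hidden area (e.g. 1080 -> 1088 lines)."""
    width = len(frame[0])
    for x0 in range(0, width, MB):
        # mean luma of the bottom-most active macroblock in this column
        block = [frame[y][x0:x0 + MB] for y in range(active_rows - MB, active_rows)]
        mean = sum(sum(r) for r in block) // (MB * MB)
        # fill the hidden rows below with that flat value
        for y in range(active_rows, len(frame)):
            for x in range(x0, min(x0 + MB, width)):
                frame[y][x] = mean
    return frame
```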

A Personal Videocasting System with Intelligent TV Browsing for a Practical Video Application Environment

  • Kim, Sang-Kyun;Jeong, Jin-Guk;Kim, Hyoung-Gook;Chung, Min-Gyo
    • ETRI Journal / v.31 no.1 / pp.10-20 / 2009
  • In this paper, a video broadcasting system connecting a home-server-type device and a mobile device is proposed. The home-server-type device can automatically extract semantic information from video content such as news, soccer matches, and baseball games. The indexing results are used to convert the original video content into a digested or arranged format. From the mobile device, a user can send recording requests to the home-server-type device and then watch and navigate the recorded video content in digested form. The novelty of this study is the actual implementation of the proposed system, which combines the available IT environment with indexing algorithms. The implementation of the system is demonstrated along with experimental results of the automatic video indexing algorithms. The overall performance of the developed system is compared with existing state-of-the-art personal video recording products.


A Study on Cross-Association between UCI Identification System and Content-based Identifier for Copyright Identification and Management of Broadcasting Content (방송콘텐츠 저작권 식별관리를 위한 UCI 표준식별체계와 내용기반 식별정보의 상호연계 연구)

  • Kim, Joo-Sub;Nam, Je-Ho
    • Journal of Broadcast Engineering / v.14 no.3 / pp.288-298 / 2009
  • In this paper, we propose a scheme that associates a content-based video signature with the Universal Content Identifier (UCI) system for copyright identification and management of broadcast content. Note that a content-based video signature can identify previously distributed content because it is extracted directly from the content itself, without an identifier-allocation process such as that of UCI. We therefore design a schema for UCI application metadata that carries the video signature, maintaining a consistent, systematic link between the UCI and the signature. We also present scenarios for copyright identification, management, and additional services based on the transmission and management of video signatures within the UCI identification system.
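
The abstract does not give the signature algorithm itself, so the following is only a toy illustration of the content-based video signature concept: a bit string derived from the sign of mean-luma changes between consecutive frames, which could be stored in UCI application metadata alongside the identifier.

```python
def video_signature(frame_means):
    """frame_means: mean luma per frame; returns a bit-string signature.
    Each bit records whether brightness rose between consecutive frames."""
    return "".join("1" if b > a else "0"
                   for a, b in zip(frame_means, frame_means[1:]))
```

Because the signature is computed from the content, the same bits can be re-derived from an unlabeled copy and matched back to the registered UCI entry.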

Geometry Padding for Segmented Sphere Projection (SSP) in 360 Video (360 비디오의 SSP를 위한 기하학적 패딩)

  • Kim, Hyun-Ho;Myeong, Sang-Jin;Yoon, Yong-Uk;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.24 no.1 / pp.25-31 / 2019
  • 360 video is attracting attention as an immersive medium and is also considered in Versatile Video Coding (VVC), the post-HEVC video coding standard being developed by the Joint Video Expert Team (JVET). A 2D image projected from 360 video for compression may have discontinuities between the projected faces as well as inactive regions, which can cause visual artifacts in the reconstructed video and decrease coding efficiency. In this paper, we propose an efficient geometry padding method that reduces these discontinuities and inactive regions in the Segmented Sphere Projection (SSP) format. Experimental results show that the proposed method improves subjective quality compared with the existing copy padding of SSP, with only a minor loss of coding gain.
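
The geometry padding idea can be illustrated in one dimension (a simplification, not the paper's SSP method): samples outside a projected region are taken from the samples that are geometrically adjacent on the sphere, rather than copied from the edge. For an equirectangular scanline, longitude wraps, so padding beyond one edge samples from the opposite edge.

```python
def geometry_pad_row(row, pad):
    """Pad both ends of one equirectangular scanline by wrapping
    longitude: index i of the padded row maps back to (i - pad) mod width."""
    return [row[(i - pad) % len(row)] for i in range(len(row) + 2 * pad)]
```

Copy padding would repeat the edge samples instead; wrapping keeps the padded samples continuous with what a viewer actually sees across the seam.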

The User Interface of Button Type for Stereo Video-See-Through (Stereo Video-See-Through를 위한 버튼형 인터페이스)

  • Choi, Young-Ju;Seo, Young-Duek
    • Journal of the Korea Computer Graphics Society / v.13 no.2 / pp.47-54 / 2007
  • This paper proposes a user interface for a video see-through environment that shows images from stereo cameras so that the user can easily control computer systems or other processes. We apply AR technology to synthesize virtual buttons; the graphic images are overlaid in real time on the frames captured by the camera. We locate the hand position in the frames to judge whether the user has selected a button, and the result of this judgment is visualized by changing the button color. The user can easily interact with the system by watching the screen and moving a finger in the air to select a virtual button on the screen.
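
A minimal sketch of the button-selection test implied by the abstract (the hand tracker itself is assumed to exist; the rectangle geometry and the color names are illustrative): a button counts as pressed when the detected fingertip falls inside its screen rectangle, and the returned color provides the visual feedback.

```python
def button_state(finger, button):
    """finger: (x, y) fingertip position in screen coordinates;
    button: (x, y, w, h) rectangle; returns the feedback color."""
    x, y, w, h = button
    inside = x <= finger[0] < x + w and y <= finger[1] < y + h
    return "pressed-red" if inside else "idle-gray"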


Interdependence of Images and Music Combined by Sharing the Identical Properties - Based on the Movie - (동일 속성 결합에 의한 영상과 음악의 상호의존성 -영화 <인터스텔라(Interstellar, 2014)>를 중심으로-)

  • Lee, Do-Kyoung;Kim, Jun
    • The Journal of the Korea Contents Association / v.19 no.10 / pp.237-247 / 2019
  • This study examines the features common to the two media in order to explore the deepening relationship between music and images in film, how they are combined, and the resulting effects. Moving beyond the conventional view of film music as a medium dependent on the image, the film Interstellar (2014) is analyzed with a focus on the deepening relationship between the two media. In the film, music and image are combined by sharing the same attribute of a repeating structure, which strengthens the delivery of the film's subject and story, maximizes visual and auditory stimuli, and creates a sense of immersion. That is, video and music combined on equal footing exert great influence on each other, supporting a positive view of the potential for establishing interdependent relationships between them.

Automatic Generation of Video Metadata for the Super-personalized Recommendation of Media

  • Yong, Sung Jung;Park, Hyo Gyeong;You, Yeon Hwi;Moon, Il-Young
    • Journal of information and communication convergence engineering / v.20 no.4 / pp.288-294 / 2022
  • The media content market has been growing as various types of content are mass-produced, owing to the recent proliferation of the Internet and digital media. In addition, platforms that provide personalized services for content consumption are emerging and competing to recommend personalized content. Existing platforms rely on users directly entering video metadata; consequently, significant time and cost are consumed in processing large amounts of data. In this study, keyframes based on the YCbCr color model and audio spectra were extracted from movie trailers for the automatic generation of metadata. The extracted audio spectra and image keyframes were used as training data for genre recognition in deep learning. Deep learning was implemented to determine the genre among the video metadata, and suggestions for utilization were proposed. A system that automatically generates metadata based on the results of this study will be helpful for research on super-personalized media recommendation systems.
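
One step the abstract describes, keyframe extraction, can be sketched as a luma-histogram shot-boundary test (an assumption about the method; the paper's exact features and thresholds are not given in the abstract): a frame becomes a keyframe when its Y-channel histogram differs sharply from the previous frame's.

```python
def luma_hist(frame, bins=8):
    """frame: flat list of Y (luma) samples in 0..255."""
    h = [0] * bins
    for y in frame:
        h[min(y * bins // 256, bins - 1)] += 1
    return h

def keyframe_indices(frames, threshold):
    """Return indices whose luma histogram jumps past the threshold."""
    keys = [0]                      # always keep the first frame
    prev = luma_hist(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        cur = luma_hist(f)
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            keys.append(i)
        prev = cur
    return keys
```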

Character Recognition and Search for Media Editing (미디어 편집을 위한 인물 식별 및 검색 기법)

  • Park, Yong-Suk;Kim, Hyun-Sik
    • Journal of Broadcast Engineering / v.27 no.4 / pp.519-526 / 2022
  • Identifying and searching for characters appearing in scenes during multimedia video editing is an arduous and time-consuming process. Applying artificial intelligence to such labor-intensive media editing tasks can greatly reduce media production time and improve the efficiency of the creative process. In this paper, a method is proposed that combines existing artificial intelligence techniques to automate character recognition and search tasks for video editing. Object detection, face detection, and pose estimation are used for character localization, while face recognition and color space analysis are used to extract unique representation information.
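
The matching stage implied by this pipeline can be sketched as follows (the detectors and the embedding model are assumed to exist; the 0.1 color weight is an illustrative choice, not the paper's): a detected face is assigned to the registered character with the nearest face embedding, with the dominant clothing color as a secondary cue.

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(face_emb, cloth_color, gallery):
    """gallery: {name: (face_embedding, clothing_color)};
    returns the gallery name with the lowest combined distance."""
    def score(item):
        emb, color = item[1]
        return euclidean(face_emb, emb) + 0.1 * euclidean(cloth_color, color)
    return min(gallery.items(), key=score)[0]
```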

Real-time multi-GPU-based 8KVR stitching and streaming on 5G MEC/Cloud environments

  • Lee, HeeKyung;Um, Gi-Mun;Lim, Seong Yong;Seo, Jeongil;Gwak, Moonsung
    • ETRI Journal / v.44 no.1 / pp.62-72 / 2022
  • In this study, we propose a multi-GPU-based 8KVR stitching system that operates in real time in both local and cloud machine environments. The proposed system first obtains multiple 4K video inputs, decodes them, and generates a stitched 8KVR video stream in real time. The generated 8KVR video stream can be downloaded and rendered omnidirectionally in player apps on smartphones, tablets, and head-mounted displays. To speed up processing, we adopt group-of-pictures-based distributed decoding/encoding and buffering with the NV12 format, along with multi-GPU-based parallel processing. Furthermore, we develop several algorithms, such as equirectangular-projection-based color correction, real-time CG overlay, and object-motion-based seam estimation and correction, to improve the stitching quality. From experiments in both local and cloud machine environments, we confirm the feasibility of the proposed 8KVR stitching system, with stitching speeds of up to 83.7 fps for six-channel and 62.7 fps for eight-channel inputs. In addition, in an 8KVR live streaming test on the 5G MEC/cloud, the proposed system achieves stable performance at 8K@30 fps in both indoor and outdoor environments, even during motion.
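
The group-of-pictures-based distribution mentioned in the abstract can be sketched as a simple round-robin assignment of GOPs to decoder instances (the GOP size and GPU count below are illustrative assumptions, not values from the paper): because each GOP decodes independently, consecutive GOPs can go to different GPUs in parallel.

```python
def assign_gops(num_frames, gop_size, num_gpus):
    """Return {gpu_id: [(start, end), ...]} half-open frame ranges,
    one GOP per entry, distributed round-robin across GPUs."""
    plan = {g: [] for g in range(num_gpus)}
    for i, start in enumerate(range(0, num_frames, gop_size)):
        plan[i % num_gpus].append((start, min(start + gop_size, num_frames)))
    return plan
```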

A Study on the Design of Synchronization Protocol for Multimedia Communication (멀티미디어 통신을 위한 동기 프로토콜의 설계에 관한 연구)

  • Woo, Hee-Gon;Kim, Dae-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.8 / pp.1612-1627 / 1994
  • The OSI Session Layer provides a synchronization function that deals only with the single medium of text, so a new synchronization scheme and synchronization protocol are required for multimedia communications that include audio, video, and graphics as well as text. In this paper, a conceptual Multimedia Synchronization Layer (MS layer) environment is constructed, and its service primitives and protocols, based on a multi-channel, base-media scheme, are designed and proposed for multimedia synchronization services. The MS Layer Manager (MSM) establishes the MS layer connection to the peer MS layer and manages each media channel created in the MS layer, medium by medium. The MSM also finds the synchronization position through the media frame number, using it like a time stamp, to provide inter-media as well as intra-media synchronization services.
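
The frame-number-as-time-stamp idea can be sketched with a small helper (an illustration of the concept only, not the protocol itself): in a base-media scheme, the synchronization position of each medium is derived from the base medium's current frame number and the two frame rates.

```python
def sync_position(base_frame, base_fps, media_fps):
    """Frame index in another medium that matches base_frame in time:
    both sides describe the same instant t = base_frame / base_fps."""
    return round(base_frame * media_fps / base_fps)
```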
