• Title/Summary/Keyword: video content

Search Results: 1,248 (processing time: 0.022 seconds)

Signature Extraction Method from H.264 Compressed Video (H.264/AVC로 압축된 비디오로부터 시그너쳐 추출방법)

  • Kim, Sung-Min;Kwon, Yong-Kwang;Won, Chee-Sun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.3
    • /
    • pp.10-17
    • /
    • 2009
  • This paper proposes a compressed-domain signature extraction method for CBCD (Content-Based Copy Detection). Since existing signature extraction methods for CBCD operate in the spatial domain, they require additional computation to decode the compressed video before signature extraction. To avoid this overhead, we generate a thumbnail image directly from the compressed video without full decoding, and then extract the video signature from the thumbnail. Experimental results using brightness-ordering information as the CBCD signature show that the proposed method is 2.8 times faster than the spatial-domain method while maintaining 80.98% accuracy.
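The brightness-ordering idea mentioned in the abstract can be illustrated with a minimal sketch: rank the mean luma of thumbnail blocks and use the rank vector as the signature. The grid size, the rank encoding, and the L1 distance used for matching are illustrative assumptions, not the paper's exact design.

```python
def brightness_ordering_signature(thumbnail, grid=3):
    """Rank-order the mean brightness of grid x grid blocks.

    `thumbnail` is a 2D list of luma values. The block layout and
    grid size here are assumptions for illustration.
    """
    h, w = len(thumbnail), len(thumbnail[0])
    bh, bw = h // grid, w // grid
    means = []
    for by in range(grid):
        for bx in range(grid):
            block = [thumbnail[y][x]
                     for y in range(by * bh, (by + 1) * bh)
                     for x in range(bx * bw, (bx + 1) * bw)]
            means.append(sum(block) / len(block))
    # The signature is each block's rank when sorted by brightness.
    order = sorted(range(len(means)), key=lambda i: means[i])
    ranks = [0] * len(means)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

def signature_distance(sig_a, sig_b):
    # L1 distance between rank vectors; a small distance suggests a copy.
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))
```

Because ranks are invariant to global brightness and contrast changes, this style of signature survives re-encoding, which is why ordinal measures are popular in copy detection.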

Broken Integrity Detection of Video Files in Video Event Data Recorders

  • Lee, Choongin;Lee, Jehyun;Pyo, Youngbin;Lee, Heejo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.8
    • /
    • pp.3943-3957
    • /
    • 2016
  • As digital evidence plays a highly influential role in proving the innocence of suspects, methods for verifying the integrity of such evidence have become essential in the digital forensic field. Most surveillance camera systems are not equipped with proper built-in integrity protection functions, and because digital forgery techniques are becoming increasingly sophisticated, manually determining whether digital content has been falsified is extremely difficult for investigators. Hence, systematic approaches to forensic integrity verification are essential for ascertaining truth or falsehood. We propose an integrity determination method that utilizes the structure of the video content in a Video Event Data Recorder (VEDR). The proposed method identifies differences in the frame index fields between a forged file and an original file. Experiments conducted using real VEDRs on the market and video files forged with a video editing tool demonstrate that the proposed scheme can detect broken integrity in video content.
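The frame-index comparison described above can be sketched as a simple discontinuity check. This is a hypothetical simplification of the paper's method: it assumes the recorder writes monotonically increasing frame index fields, so that re-encoded or spliced files show gaps or resets.

```python
def find_index_discontinuities(frame_indices):
    """Flag positions where consecutive frame index fields do not
    increase by one. Assumes (hypothetically) that an unmodified
    VEDR file carries a strictly sequential index; editing tools
    that rewrite frames leave gaps or repeats in that sequence.
    """
    breaks = []
    for pos in range(1, len(frame_indices)):
        if frame_indices[pos] != frame_indices[pos - 1] + 1:
            breaks.append(pos)
    return breaks

def integrity_intact(frame_indices):
    # No discontinuities -> the index structure is consistent.
    return not find_index_discontinuities(frame_indices)
```

A real implementation would first parse the container format to recover the index fields; here they are passed in as a plain list.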

Performance Evaluation of New Signatures for Video Copy Detection (비디오 복사방지를 위한 새로운 특징들의 성능평가)

  • Hyun, Ki-Ho (현기호)
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.1
    • /
    • pp.96-102
    • /
    • 2003
  • Video copy detection is a complementary approach to watermarking. As opposed to watermarking, which relies on inserting a distinct pattern into the video stream, video copy detection techniques match content-based signatures to detect copies of a video. Existing content-based copy detection schemes have typically relied on image matching. This paper proposes two new sequence-matching techniques for copy detection and compares their performance with existing color-based techniques. Motion-, intensity-, and color-based signatures are compared in the context of copy detection, and comparative experimental results are reported on detecting copies of movie clips.

Analysis of characteristics of YouTube video contents for the development of pattern drafting video (패턴제작 교육용 영상콘텐츠 개발을 위한 유튜브 영상 현황 분석)

  • Kang, Yeo Sun
    • The Research Journal of the Costume Culture
    • /
    • v.27 no.6
    • /
    • pp.599-614
    • /
    • 2019
  • The aim of this study is to provide basic reference data for the development of video content used in pattern drafting education and to explore the possibility of utilizing YouTube videos in such education. Subject videos were selected by number of views. A total of 596 videos and 28 channels were analyzed for the period July to September 2019, with the following results. With regard to content, there were 27 pattern drafting items, the majority being dress, pants, skirt, blouse, and sleeve drafting, although high-level content such as cowl, bustier, and corset patterns was also available. Therefore, there is a high likelihood that YouTube videos could be used as educational material for students majoring in this field, especially as supplementary references providing specific examples and accessible explanations of difficult concepts or methods. However, as most videos currently focus on a few items, expanding video content to feature a wider variety of clothing items at different levels is necessary. With regard to video length, most videos ranged from 10 to 15 minutes; it is therefore not advisable to create lengthy lecture-style videos expounding on different principles or variations in pattern drafting when developing educational video material.

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents
    • /
    • v.10 no.3
    • /
    • pp.9-16
    • /
    • 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
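The first step of the pipeline above, dividing the input sequence into cuts with a color histogram, can be sketched as follows. The histogram bin count, the L1 distance, and the cut threshold are illustrative assumptions; the paper's actual detector may differ.

```python
def detect_cuts(frames, threshold=0.5):
    """Split a sequence into cuts using the color-histogram distance
    between consecutive frames. Each frame is a flat list of pixel
    intensities in [0, 255]; binning and threshold are assumptions.
    """
    def histogram(frame, bins=8):
        hist = [0] * bins
        for v in frame:
            hist[min(v * bins // 256, bins - 1)] += 1
        total = float(len(frame))
        return [c / total for c in hist]

    cuts = [0]  # a cut always starts at the first frame
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        # L1 distance between normalized histograms lies in [0, 2].
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)  # a new cut starts at frame i
        prev = cur
    return cuts
```

Per the abstract, each detected cut would then receive an initial depth map from a hypothesized depth-gradient model, refined with color and motion information.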

Case Study on Realistic Content Development Process of Public Enterprise - Focus on case of Korea Industrial Complex Corporation Gallery - (공기업의 실감콘텐츠 개발 프로세스 사례연구 - 한국산업단지공단 홍보관 사례를 중심으로-)

  • Chung, Hae Won;Cho, Woo Ri
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.91-97
    • /
    • 2024
  • Recently, with the rapid development of digital media technology, Realistic content that stimulates users' five senses is being used in various fields. This study focused on the case of the development of the Korea Industrial Complex Corporation's public relations center as the subject of the study to study the realistic content development process of public enterprises. First, the realistic content development process was divided into 10 stages and practical guidelines were presented to help develop realistic content in the future by presenting important development points and methods at each stage. Second, among the realistic content development processes, the importance of storytelling was analyzed at the scenario stage. Third, various methods of displaying content were analyzed. In the case of the Korea Industrial Complex Corporation's public relations center, it was proposed in three ways: story video, experience video, and media wall. It is suggested that the role of branding, promotion, and PR can be performed in one public relations center through an effective development process.

A new approach for content-based video retrieval

  • Kim, Nac-Woo;Lee, Byung-Tak;Koh, Jai-Sang;Song, Ho-Young
    • International Journal of Contents
    • /
    • v.4 no.2
    • /
    • pp.24-28
    • /
    • 2008
  • In this paper, we propose a new approach for content-based video retrieval using non-parametric motion classification in a shot-based video indexing structure. The proposed system supports real-time video retrieval through spatio-temporal feature comparison, measuring the similarity between visual features and between motion features after extracting a representative frame and non-parametric motion information from shot-based video clips segmented by a scene-change detection method. Non-parametric motion features are extracted from the normalized motion vectors of an MPEG-compressed stream by discretizing each normalized motion vector into angle bins and considering the mean, variance, and direction of the motion vectors in each bin. For the visual feature of the representative frame, we use an edge-based spatial descriptor. Experimental results show that our approach outperforms conventional methods in video indexing and retrieval performance.
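The angle-bin discretization described above can be sketched directly: assign each motion vector to an angle bin, then summarize each bin. The bin count and the choice to keep per-bin mean and variance of magnitudes are assumptions loosely following the abstract, not the paper's exact descriptor.

```python
import math

def motion_feature(motion_vectors, angle_bins=8):
    """Discretize motion vectors into angle bins and summarize each
    bin with the mean and variance of its vector magnitudes.
    `motion_vectors` is a list of (dx, dy) pairs.
    """
    binned = [[] for _ in range(angle_bins)]
    for dx, dy in motion_vectors:
        # Map the angle into [0, 2*pi) and pick its bin.
        angle = math.atan2(dy, dx) % (2 * math.pi)
        b = min(int(angle / (2 * math.pi) * angle_bins), angle_bins - 1)
        binned[b].append(math.hypot(dx, dy))
    feature = []
    for mags in binned:
        if mags:
            mean = sum(mags) / len(mags)
            var = sum((m - mean) ** 2 for m in mags) / len(mags)
        else:
            mean, var = 0.0, 0.0
        feature.extend([mean, var])
    return feature
```

The resulting fixed-length vector (here 2 values per bin) can be compared between shots with any standard distance measure.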

A case study on the content types and characteristics of global fashion YouTubers (글로벌 패션 유튜버의 콘텐츠 유형과 특성에 관한 사례연구)

  • Kim, Koh Woon;Kim, Yoon
    • The Research Journal of the Costume Culture
    • /
    • v.28 no.3
    • /
    • pp.389-407
    • /
    • 2020
  • With YouTube's overwhelming share of the market, research analyzing the types of content on YouTube is essential. An analysis of major global fashion YouTubers showed that their video content could be largely classified into three main categories: fashion, beauty, and daily life. The fashion category was subdivided into styling and fashion product review content types; the beauty category into tutorials, beauty product reviews, and beauty tips; and the daily life category into daily sharing, consultation, and Q&A content types. Video content on fashion YouTuber channels is accompanied by expertise in fashion and beauty. At the same time, videos on daily life are uploaded, and through interactive communication with viewers, YouTubers form an intimate bond with subscribers. Content emphasizing entertainment, rather than mere information delivery introducing fashion products, is attracting growing interest among subscribers. This study analyzed the content of increasingly popular fashion YouTuber channels and identified its important characteristics, laying a foundation for future studies of YouTube content in the fashion field. Since differences in YouTubers' country of birth and race may influence content production, follow-up research will examine the types and characteristics of domestic fashion YouTubers.

A Research on the Teaser Video Production Method by Keyframe Extraction Based on YCbCr Color Model (YCbCr 컬러모델 기반의 키프레임 추출을 통한 티저 영상 제작 방법에 대한 연구)

  • Lee, Seo-young;Park, Hyo-Gyeong;Young, Sung-Jung;You, Yeon-Hwi;Moon, Il-Young
    • Journal of Practical Engineering Education
    • /
    • v.14 no.2
    • /
    • pp.439-445
    • /
    • 2022
  • With the development of online media platforms and the COVID-19 pandemic, the mass production and consumption of digital video content are rapidly increasing. To choose digital video content, users skim thumbnails and teaser videos to quickly judge which content suits them, and checking all digital video content produced worldwide one by one to manually edit teaser videos is highly impractical. In this paper, keyframes are extracted based on the YCbCr color model to automatically generate teaser videos, and the extracted keyframes are optimized through clustering. Finally, we present a method for producing a teaser video that helps users assess digital video content by concatenating the finally selected keyframes.
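A minimal sketch of the YCbCr-based extraction step: convert pixels with the standard BT.601 transform and flag frames whose mean YCbCr value jumps from the previous keyframe. The threshold and the per-frame mean comparison are simplifying assumptions standing in for the paper's extraction-plus-clustering pipeline.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def keyframe_candidates(frames, threshold=30.0):
    """Pick frames whose mean YCbCr distance from the last keyframe
    exceeds a threshold. Each frame is a list of (r, g, b) pixels;
    the threshold and mean-color comparison are assumptions.
    """
    def mean_ycbcr(frame):
        comps = [rgb_to_ycbcr(*px) for px in frame]
        n = float(len(comps))
        return tuple(sum(c[i] for c in comps) / n for i in range(3))

    keys = [0]  # the first frame is always a candidate
    prev = mean_ycbcr(frames[0])
    for i in range(1, len(frames)):
        cur = mean_ycbcr(frames[i])
        if sum(abs(a - b) for a, b in zip(cur, prev)) > threshold:
            keys.append(i)
            prev = cur
    return keys
```

In the paper's pipeline, candidates like these would then be clustered to remove near-duplicates before being concatenated into the teaser.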

Hybrid Video Information System Supporting Content-based Retrieval and Similarity Retrieval (비디오의 의미검색과 유사성검색을 위한 통합비디오정보시스템)

  • Yun, Mi-Hui;Yun, Yong-Ik;Kim, Gyo-Jeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.8
    • /
    • pp.2031-2041
    • /
    • 1999
  • In this paper, we present HVIS (Hybrid Video Information System), which supports semantic retrieval for diverse users by integrating feature-based and annotation-based retrieval of unstructured, massive video data. HVIS divides a video into video document, sequence, scene, and object units to model the metadata, and proposes the Two-layered Hybrid Object-oriented Metadata Model (THOMM), composed of a raw-data layer for the physical video stream and a metadata layer supporting annotation-based, content-based, and similarity retrieval. Based on this model, we present a video query language that enables annotation-based, content-based, and similarity queries, together with a Video Query Processor and its query-processing algorithm. In particular, we present a similarity expression that quantifies the degree of similarity while taking user interest into account. The proposed system is implemented with Visual C++, ActiveX, and ORACLE.