• Title/Summary/Keyword: Video Browsing

Search Result 119, Processing Time 0.03 seconds

Temporal Video Modeling of Cultural Video (교양비디오의 시간지원 비디오 모델링)

  • 강오형;이지현;고성현;김정은;오재철
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.439-442
    • /
    • 2004
  • Traditional database systems have used models that support operations and relationships based on simple intervals. Video data models are required in order to support a temporal paradigm, various object and temporal operations, and efficient retrieval and browsing. Since the video model is based on the object-oriented paradigm, I present an entire model structure for video data through the design of metadata, which is used as the logical schema of video, the attributes and operations of objects, and inheritance and annotation. By using the temporal paradigm through the definition of time points and time intervals in the object-oriented model, we can use video information more efficiently by time variation.

  • PDF

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho;Oh, Kyeong-Jin;Sean, Vi-Sal;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.203-219
    • /
    • 2011
  • Previously, common users simply wanted to watch video contents without any specific requirements or purposes. Today, however, while watching a video, users attempt to learn and discover more about the things that appear in it. Accordingly, the demand for finding multimedia or browsing information about objects of interest is spreading with the increasing use of video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of objects in video content is inevitable, and many researchers have actively studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added immediately, and annotation data including all related information about the object must be managed individually; users have to input all of this related information directly. Consequently, when a user browses for information related to an object, the user can find only the limited resources that exist in the annotated data, and placing annotations on objects demands a huge workload from users. To reduce this workload and minimize the work involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects that appear in video content must all be detected and recognized, and automated annotation still faces difficulties. To overcome these difficulties, we propose a system which consists of two modules.
The first module is the annotation module, which enables many annotators to collaboratively annotate the objects in the video content so that their semantic data can be accessed using Linked Data. Annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not include all the relevant information about an object, objects that appear in the video content are simply linked to existing resources in Linked Data to obtain all the related information. In other words, annotation data containing only a URI and metadata such as position, time, and size is stored on the annotation server; when a user needs other related information about the object, that information is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object, while watching the video, using the annotation data collaboratively generated by many users. With this system, a query is automatically generated through simple user interaction, the related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, our proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
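The URI-plus-metadata annotation record described above can be sketched as follows. This is a minimal illustration, not the paper's actual schema: class names, field names, and the DBpedia URI are assumptions, and the DESCRIBE query stands in for whatever the viewer module actually sends to a Linked Data endpoint.

```python
from dataclasses import dataclass

# Hypothetical annotation record: only a Linked Data URI plus
# position/time/size metadata is stored on the annotation server;
# all other information about the object is fetched via the URI.
@dataclass
class VideoAnnotation:
    resource_uri: str   # Linked Data URI of the annotated object
    start_sec: float    # time the object appears
    end_sec: float      # time the object disappears
    x: int              # bounding-box position in the frame
    y: int
    width: int          # bounding-box size
    height: int

def build_describe_query(annotation: VideoAnnotation) -> str:
    """Generate a SPARQL DESCRIBE query for the annotated resource.

    The viewer module could send this to a Linked Data endpoint
    (e.g. DBpedia) to retrieve related information on demand.
    """
    return f"DESCRIBE <{annotation.resource_uri}>"

ann = VideoAnnotation("http://dbpedia.org/resource/Eiffel_Tower",
                      12.0, 18.5, x=40, y=60, width=120, height=200)
print(build_describe_query(ann))
```

Because the record carries only a URI and placement metadata, many annotators can add lightweight annotations while the heavyweight descriptions stay in Linked Data, which matches the sharing-and-extension goal the abstract describes.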

Multimodal Approach for Summarizing and Indexing News Video

  • Kim, Jae-Gon;Chang, Hyun-Sung;Kim, Young-Tae;Kang, Kyeong-Ok;Kim, Mun-Churl;Kim, Jin-Woong;Kim, Hyung-Myung
    • ETRI Journal
    • /
    • v.24 no.1
    • /
    • pp.1-11
    • /
    • 2002
  • A video summary abstracts the gist of an entire video and enables efficient access to the desired content. In this paper, we propose a novel method for summarizing news video based on multimodal analysis of the content. The proposed method exploits closed caption data to locate semantically meaningful highlights in a news video, and uses the speech signal in the audio stream to align the closed caption data with the video on a timeline. The detected highlights are then described using the MPEG-7 Summarization Description Scheme, which allows efficient browsing of the content through functionalities such as multi-level abstracts and navigation guidance. Multimodal search and retrieval are also supported within the proposed framework: by indexing the synchronized closed caption data, video clips become searchable with a text query. Intensive experiments with prototype systems demonstrate the validity and reliability of the proposed method in real applications.

  • PDF
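The text-query retrieval step described above can be sketched with an inverted index over time-aligned caption segments. The caption data and word-level matching below are illustrative assumptions, not the paper's indexing scheme:

```python
from collections import defaultdict

# Caption segments already aligned to the video timeline:
# (start_sec, end_sec, text). Values are illustrative.
captions = [
    (0.0, 4.2, "The president met foreign leaders today"),
    (4.2, 9.8, "Markets rallied after the announcement"),
    (9.8, 15.0, "The president announced a new trade policy"),
]

# Build an inverted index from each word to the clips containing it.
index = defaultdict(list)
for start, end, text in captions:
    for word in set(text.lower().split()):
        index[word].append((start, end))

def search(query: str):
    """Return time ranges of caption segments matching any query word."""
    hits = []
    for word in query.lower().split():
        for clip in index.get(word, []):
            if clip not in hits:
                hits.append(clip)
    return sorted(hits)

print(search("president"))   # clips in which "president" is spoken
```

Because the captions are synchronized to the audio track beforehand, each text hit maps directly to a playable video clip, which is what makes the clips "searchable by inputting a text query".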

Temporal-based Video Retrieval System (시간기반 비디오 검색 시스템)

  • Lee, Ji-Hyun;Kang, Oh-Hyung;Na, Do-Won;Lee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.631-634
    • /
    • 2005
  • Traditional database systems have used models that support operations and relationships based on simple intervals. Video data models are required in order to support a temporal paradigm, various object and temporal operations, and efficient retrieval and browsing. Since the video model is based on the object-oriented paradigm, I present an entire model structure for video data through the design of metadata, which is used as the logical schema of video, the attributes and operations of objects, and inheritance and annotation. By using the temporal paradigm through the definition of time points and time intervals in the object-oriented model, we can use video information more efficiently by time variation.

  • PDF
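The time point / time interval notions that this temporal video model defines can be sketched as follows. Class and attribute names are assumptions for illustration, not the paper's actual schema:

```python
from dataclasses import dataclass

# Minimal sketch of an object-oriented video model with temporal
# support: objects carry a time interval made of two time points.
@dataclass(frozen=True)
class TimeInterval:
    start: float  # time point (seconds) where the object appears
    end: float    # time point where it disappears

    def overlaps(self, other: "TimeInterval") -> bool:
        return self.start < other.end and other.start < self.end

    def before(self, other: "TimeInterval") -> bool:
        return self.end <= other.start

@dataclass
class VideoObject:
    name: str
    interval: TimeInterval
    annotation: str = ""   # free-text annotation attached to the object

# Temporal retrieval: find the objects visible during a query interval.
objects = [
    VideoObject("narrator", TimeInterval(0, 30)),
    VideoObject("map graphic", TimeInterval(25, 40)),
    VideoObject("credits", TimeInterval(40, 45)),
]
query = TimeInterval(28, 35)
visible = [o.name for o in objects if o.interval.overlaps(query)]
print(visible)
```

Interval operations like `overlaps` and `before` are what let the model answer "what is on screen at this time" queries that a plain attribute-based schema cannot express.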

Automatic Video Genre Classification Method in MPEG compressed domain (MPEG 부호화 영역에서 Video Genre 자동 분류 방법)

  • Kim, Tae-Hee;Lee, Woong-Hee;Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.8A
    • /
    • pp.836-845
    • /
    • 2002
  • A video summary is one of the tools that can provide fast and effective browsing of a lengthy video. A video summary consists of many key frames, which may be defined differently depending on the genre the video belongs to; consequently, a summary constructed in a uniform manner may yield inadequate results. Identifying the video genre is therefore the important first step in generating a meaningful video summary. We propose a new method that can classify the genre of video data in the MPEG compressed bit-stream domain. Since the proposed method operates directly on the compressed bit-stream without decoding frames, it has merits such as simple calculation and short processing time. The proposed method uses only visual information, through spatio-temporal analysis, to classify the video genre. Experiments were done for six genres of video: Cartoon, Commercial, Music Video, News, Sports, and Talk Show. Experimental results show more than 90% accuracy in genre classification for well-structured video data such as Talk Show and Sports.
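The genre decision step can be sketched as a nearest-centroid classifier over compressed-domain features. The two features (average motion magnitude, shot-change rate), the centroid values, and the classifier itself are illustrative assumptions; the paper's actual features and decision rule may differ:

```python
import math

# Assume each video is reduced to a small spatio-temporal feature
# vector extracted from the MPEG bit-stream without full decoding,
# e.g. (average motion magnitude, shot-change rate), both in [0, 1].
# Centroids per genre are illustrative, not measured values.
centroids = {
    "Sports":      (0.9, 0.7),
    "Talk Show":   (0.1, 0.1),
    "Music Video": (0.6, 0.9),
}

def classify(features):
    """Nearest-centroid genre decision on the feature vector."""
    return min(centroids,
               key=lambda g: math.dist(features, centroids[g]))

print(classify((0.85, 0.65)))   # nearest to the Sports centroid
```

Working on features available in the compressed bit-stream is what gives the method its low computational cost: no frame ever has to be decoded before classification.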

A Web-based Synchronous Distance Learning System Supporting the Collaborative Browsing (공동 브라우징을 지원하는 웹 기반의 동기적 원격 학습 시스템)

  • 이성제;신근재;김엄준;김문석;성미영
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.5
    • /
    • pp.430-438
    • /
    • 2001
  • In this paper, we present the design and implementation of a web-based distance learning system supporting collaborative browsing. Our system consists of an education affairs management system, a video conferencing server/client, a whiteboard server/client, a session manager, and a web browser sharing system. Among other things, our collaborative web browser is unique and not found in any other system: it synchronously shows the same web pages as the lecturer moves through them, allowing students to experience real-time surfing just as the lecturer does. The session manager supports multiple users and multiple groups, and integrates the various synchronous collaborative components into one distance learning system by providing the same session data and user information to everyone in a session group. Our collaborative browsing system can increase the efficiency of distance learning and, by supporting various synchronous functionalities such as collaborative browsing, provides the effect of learning in the same classroom.

  • PDF
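The session manager's page-synchronization role can be sketched as a tiny in-memory broadcast: when the lecturer navigates, every member of the same session group is pushed the new page. Class and method names are assumptions for illustration, not the system's actual API:

```python
# Sketch of collaborative browsing: per-group session state, with
# the lecturer's navigation broadcast to every group member so all
# browsers show the same page.
class SessionManager:
    def __init__(self):
        self.groups = {}   # group id -> {user: current_url}

    def join(self, group, user):
        self.groups.setdefault(group, {})[user] = None

    def navigate(self, group, lecturer_url):
        """Lecturer moved to a new page: synchronize every member."""
        for user in self.groups.get(group, {}):
            self.groups[group][user] = lecturer_url

sm = SessionManager()
sm.join("class-1", "alice")
sm.join("class-1", "bob")
sm.navigate("class-1", "https://example.org/lecture/2")
print(sm.groups["class-1"])
```

In the real system this state would be pushed over the network to each client's shared browser; keeping it in the session manager is what allows multiple groups to browse independently.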

Dynamic Video Abstraction for Interactive Broadcasting Applications (대화형 방송 환경을 위한 동적 비디오 요약)

  • 김재곤;장현성;김진웅
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06b
    • /
    • pp.103-108
    • /
    • 1999
  • With the digitization of the broadcasting environment, interactive broadcasting services are emerging that go beyond the conventional one-way viewing of broadcast information and can accommodate the diverse needs of users. The interactive broadcasting environment particularly requires effective access to the vast amount of digital multimedia data delivered to the user side. In this paper, we examine a dynamic video abstraction technique that enables effective browsing and retrieval of broadcast video and allows the entire content to be overviewed in a short time. A skim video produced by dynamic video abstraction consists only of the key segments of the footage, so that the whole video is represented effectively based on its content; in interactive broadcasting it can serve as a new form of program guide and as a browsing tool for material stored by the user. This paper describes an approach to automatic video abstraction, the overall functional architecture, and the implementation of each function.

  • PDF
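The idea of composing a skim video from only the key segments can be sketched as a greedy selection under a duration budget. The segment scores, the budget, and the greedy rule are illustrative assumptions, not the paper's method:

```python
# Sketch of dynamic video abstraction: pick the highest-scoring
# segments of the program until a target skim duration is reached,
# then play them back in their original order.
segments = [  # (start_sec, end_sec, importance_score)
    (0, 20, 0.3), (20, 50, 0.9), (50, 70, 0.5), (70, 100, 0.8),
]

def make_skim(segments, budget_sec):
    chosen, used = [], 0.0
    # Consider segments from most to least important.
    for seg in sorted(segments, key=lambda s: s[2], reverse=True):
        length = seg[1] - seg[0]
        if used + length <= budget_sec:
            chosen.append(seg)
            used += length
    return sorted(chosen)  # restore chronological order

print(make_skim(segments, budget_sec=60))
```

Restoring chronological order after selection matters: a skim that jumps around in time would defeat the goal of overviewing the whole program in a short time.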

Toward a Structural and Semantic Metadata Framework for Efficient Browsing and Searching of Web Videos

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.51 no.1
    • /
    • pp.227-243
    • /
    • 2017
  • This study proposed a structural and semantic framework for the characterization of events and segments in Web videos that permits content-based searches and dynamic video summarization. Although MPEG-7 supports multimedia structural and semantic descriptions, it is not currently suitable for describing multimedia content on the Web. Thus, the proposed metadata framework that was designed considering Web environments provides a thorough yet simple way to describe Web video contents. Precisely, the metadata framework was constructed on the basis of Chatman's narrative theory, three multimedia metadata formats (PBCore, MPEG-7, and TV-Anytime), and social metadata. It consists of event information, eventGroup information, segment information, and video (program) information. This study also discusses how to automatically extract metadata elements including structural and semantic metadata elements from Web videos.
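The four information layers the framework names (video/program, eventGroup, event, segment) can be sketched as nested records. Field names here are assumptions for illustration, not the study's actual element set:

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of the framework's structural layers; each layer nests
# the one below it, and events also carry social metadata (tags).
@dataclass
class Segment:
    start_sec: float
    end_sec: float
    description: str

@dataclass
class Event:
    title: str
    segments: List[Segment] = field(default_factory=list)
    social_tags: List[str] = field(default_factory=list)  # social metadata

@dataclass
class EventGroup:
    label: str
    events: List[Event] = field(default_factory=list)

@dataclass
class Video:
    title: str
    event_groups: List[EventGroup] = field(default_factory=list)

clip = Segment(0.0, 12.5, "opening shot")
ev = Event("goal scored", [clip], ["football", "highlight"])
video = Video("match recap", [EventGroup("first half", [ev])])
print(video.event_groups[0].events[0].title)
```

Nesting events inside groups and segments inside events is what permits both content-based searches (query the event layer) and dynamic summarization (concatenate the matching segments).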

An Efficient Video Clip Matching Algorithm Using the Cauchy Function (커쉬함수를 이용한 효율적인 비디오 클립 정합 알고리즘)

  • Kim Sang-Hyul
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.4
    • /
    • pp.294-300
    • /
    • 2004
  • With the development of digital media technologies, various algorithms have been proposed to match video sequences efficiently. A large number of video search methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video clip or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and to retrieve them for a video clip query. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between the histograms of consecutive frames, which yields high performance compared with conventional measures. The key frames extracted from segmented video shots, where a key frame is defined as a frame that differs significantly from the previous frames, can be used not only for video shot clustering but also for video sequence matching and browsing. Experimental results with color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.

  • PDF
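A Cauchy-function similarity between frame histograms can be sketched as below. The exact form used in the paper is not given in the abstract, so this applies the classic Cauchy kernel 1 / (1 + (d/sigma)^2) bin-wise; the averaging, the `sigma` spread parameter, and the key-frame threshold are assumptions:

```python
# Cauchy-kernel similarity between the colour histograms of
# consecutive frames: each bin-wise difference d is mapped to
# 1 / (1 + (d/sigma)^2) in (0, 1], then averaged over the bins.
def cauchy_similarity(hist_a, hist_b, sigma=0.1):
    assert len(hist_a) == len(hist_b)
    total = sum(1.0 / (1.0 + ((a - b) / sigma) ** 2)
                for a, b in zip(hist_a, hist_b))
    return total / len(hist_a)   # 1.0 means identical histograms

# Identical frames score 1.0; a key frame would be declared when
# the similarity to the previous frame drops below a threshold.
h1 = [0.2, 0.5, 0.3]
h2 = [0.2, 0.5, 0.3]
h3 = [0.6, 0.1, 0.3]
print(cauchy_similarity(h1, h2))         # 1.0
print(cauchy_similarity(h1, h3) < 0.9)   # True: likely shot change
```

The heavy tail of the Cauchy kernel is the usual motivation for this choice over, say, a Gaussian: a few large bin differences do not collapse the similarity to zero, which makes the measure more robust to lighting changes within a shot.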

A Study on Hypermap Database (하이퍼맵 데이타베이스에 관한 연구)

  • Kim, Yong-Il;Pyeon, Mu-Wook
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.4 no.1 s.6
    • /
    • pp.43-55
    • /
    • 1996
  • The objective of this research is to design a digital map database structure supporting video images, which are one of the fundamental elements of a hypermap. To reach this objective, the work includes identifying the relationships between a two-dimensional digital map database and video elements. The proposed database model provides functions for interactive browsing between video image frames and specific points on the two-dimensional digital map, and for connecting map elements to features in the video images. The images and the database are then transferred to a pilot system for testing the map database structure. The pilot project results indicate that the map database structure can functionally integrate the two-dimensional digital map and video images.

  • PDF
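The map-to-video linkage can be sketched as a lookup from map coordinates to the video frame ranges that show that location. The coordinates, file names, and tolerance-based matching are illustrative assumptions, not the paper's database structure:

```python
# Sketch of hypermap linkage: map coordinates keyed to the video
# frame ranges that show that location, enabling interactive
# browsing from a clicked map point to the matching footage.
links = {
    # (lon, lat) -> (video file, first frame, last frame)
    (127.03, 37.50): ("survey_road.mpg", 120, 260),
    (127.05, 37.52): ("survey_road.mpg", 400, 530),
}

def frames_for_point(lon, lat, tolerance=0.02):
    """Return video frame ranges linked near a clicked map point."""
    return [clip for (x, y), clip in links.items()
            if abs(x - lon) <= tolerance and abs(y - lat) <= tolerance]

print(frames_for_point(127.04, 37.51))
```

The same table read in reverse (frame range back to coordinates) gives the other browsing direction the abstract describes, from a video frame to its position on the map.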