• Title/Summary/Keyword: Video data retrieval


Hardware Implementation of Moving Picture Retrieval System Using Scene Change Technique (장면 전환 기법을 이용한 동영상 검색 시스템의 하드웨어 구현)

  • Kim, Jang-Hui;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.3 / pp.30-36 / 2008
  • Multimedia data, characterized by multiple media, multiple features, multiple representations, huge volume, and great variety, are spreading rapidly as application domains increase. Thus, it is urgently needed to develop a multimedia information system that can retrieve the required information rapidly and accurately from huge amounts of multimedia data. For content-based retrieval of moving pictures, picture information is generally used, typically after the video has been segmented; this also enables structural video browsing. The task of dividing a video into shots is called video segmentation, and detecting the cuts for video segmentation is called cut detection. The goal of this paper is to segment moving pictures using the HMMD (Hue-Max-Min-Diff) color model and the edge histogram descriptor among the MPEG-7 visual descriptors. The HMMD color model is closer to human perception than other color spaces. Finally, the proposed retrieval system is implemented in hardware.
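
The HMMD conversion and histogram-difference cut detection described above can be sketched as follows. This is an illustrative reconstruction, not the authors' hardware design; the threshold value, the L1 histogram distance, and the pure-Python layout are assumptions.

```python
def rgb_to_hmmd(r, g, b):
    """Convert one RGB pixel (0-255) to HMMD components.
    Hue is computed as in HSV; Max/Min/Diff follow the MPEG-7 definition."""
    mx, mn = max(r, g, b), min(r, g, b)
    diff = mx - mn
    if diff == 0:
        hue = 0.0
    elif mx == r:
        hue = (60 * (g - b) / diff) % 360
    elif mx == g:
        hue = 60 * (b - r) / diff + 120
    else:
        hue = 60 * (r - g) / diff + 240
    return hue, mx, mn, diff

def detect_cuts(histograms, threshold=0.3):
    """Flag a cut between consecutive frames when the normalized
    L1 distance of their color histograms exceeds the threshold."""
    cuts = []
    for i in range(1, len(histograms)):
        prev, cur = histograms[i - 1], histograms[i]
        dist = sum(abs(a - b) for a, b in zip(prev, cur)) / max(sum(prev), 1)
        if dist > threshold:
            cuts.append(i)
    return cuts
```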

Extraction of Superimposed-Caption Frame Scopes and Its Regions for Analyzing Digital Video (비디오 분석을 위한 자막프레임구간과 자막영역 추출)

  • Lim, Moon-Cheol;Kim, Woo-Saeng
    • The Transactions of the Korea Information Processing Society / v.7 no.11 / pp.3333-3340 / 2000
  • Recently, requirements for video data have increased rapidly due to the rapid progress of both hardware and compression techniques. Because digital video data are unstructured and of massive capacity, they require various retrieval techniques such as content-based retrieval. Superimposed captions in a digital video can help us analyze the video story more easily and can be used as indexing information for many retrieval techniques. In this research we propose a new method that segments captions by analyzing the texture features of caption regions in each video frame, and that extracts the accurate scope of superimposed-caption frames, along with their key regions and colors, by measuring the continuity of caption regions between frames.
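
A rough software sketch of the texture-based caption segmentation and frame-scope extraction idea. The block size, edge-density threshold, and overlap measure are assumptions for illustration, not the authors' parameters.

```python
def caption_blocks(gray, block=8, edge_thresh=40, density_thresh=0.2):
    """Mark blocks whose horizontal-gradient density is high,
    a crude texture cue for superimposed captions."""
    h, w = len(gray), len(gray[0])
    marks = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            edges = sum(
                1
                for y in range(by, by + block)
                for x in range(bx, bx + block - 1)
                if abs(gray[y][x + 1] - gray[y][x]) > edge_thresh
            )
            if edges / (block * (block - 1)) > density_thresh:
                marks.append((by, bx))
    return marks

def caption_frame_scope(per_frame_marks, min_overlap=0.5):
    """Group consecutive frames whose caption blocks largely overlap
    into one caption frame scope (start, end)."""
    scopes, start = [], None
    for i in range(len(per_frame_marks)):
        cur = set(per_frame_marks[i])
        if not cur:
            if start is not None:
                scopes.append((start, i - 1))
                start = None
            continue
        if start is None:
            start = i
        else:
            prev = set(per_frame_marks[i - 1])
            overlap = len(cur & prev) / max(len(cur | prev), 1)
            if overlap < min_overlap:  # caption changed: close the scope
                scopes.append((start, i - 1))
                start = i
    if start is not None:
        scopes.append((start, len(per_frame_marks) - 1))
    return scopes
```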


A Dynamic Segmentation Method for Representative Key-frame Extraction from Video data (동적 분할 기법을 이용한 비디오 데이터의 대표키 프레임 추출)

  • Lee, Soon-Hee;Kim, Young-Hee;Ryu, Keun-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.1 / pp.46-57 / 2001
  • To access multimedia data, such as video data with temporal properties, content-based image retrieval techniques are required, and one of the basic techniques for content-based image retrieval is the extraction of representative key frames. Not only did we implement the proposed method, but by analyzing video data we have also shown it to be both effective and accurate. In addition, this method is expected to help solve the real-world problem of building video databases, as it is very useful for building an index.
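
One plausible reading of dynamic segmentation for key-frame extraction is sketched below: accumulate inter-frame differences and emit a representative frame whenever the accumulated change exhausts a budget. The budget threshold and the middle-frame choice are assumptions; the abstract does not specify the paper's exact criterion.

```python
def key_frames(diffs, budget=1.0):
    """Dynamically split a shot: accumulate inter-frame differences and
    emit the segment's middle frame each time the budget is exhausted.
    diffs[i] is the difference between frame i and frame i+1."""
    keys, acc, start = [], 0.0, 0
    for i, d in enumerate(diffs):
        acc += d
        if acc >= budget:
            keys.append((start + i) // 2)  # representative = middle frame
            acc, start = 0.0, i + 1
    keys.append((start + len(diffs)) // 2)  # last (possibly partial) segment
    return keys
```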


Indexing and Retrieval of Human Individuals on Video Data Using Face and Speaker Recognition

  • Y.Sugiyama;N.Ishikawa;M.Nishida;Y.Ariki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.122-127 / 1998
  • In this paper, we focus on the information retrieval of human individuals recorded in a video database. Our purpose is to index persons by their face or voice and to retrieve the time sections in which they appear in the video data. The system can track as well as extract the face or voice of a certain person and construct a model of that individual in self-organization mode. If the person appears again at a different time, the system can mark the associated frames as belonging to the same person. In this way, the same person can be retrieved even if the system does not know his or her exact name. For face and speaker modeling, a subspace method is employed to improve the indexing accuracy.
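
A subspace method for person indexing might look like the following PCA-style sketch, where each person's model is a low-dimensional basis and frames are tagged by reconstruction error. The feature layout, basis dimension, and acceptance threshold are all assumptions, not the authors' formulation.

```python
import numpy as np

class PersonSubspace:
    """Minimal subspace model: fit a low-dimensional PCA basis per person
    and score new feature vectors by reconstruction error."""
    def __init__(self, dim=2):
        self.dim = dim

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        self.mean = X.mean(axis=0)
        # Right singular vectors of the centered data are the principal axes
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[: self.dim]
        return self

    def residual(self, x):
        d = np.asarray(x, dtype=float) - self.mean
        proj = self.basis.T @ (self.basis @ d)
        return float(np.linalg.norm(d - proj))

def index_frames(models, frame_features, accept=1.0):
    """Tag each frame with the person whose subspace reconstructs its
    feature best, or None when every residual exceeds `accept`."""
    tags = []
    for f in frame_features:
        name, err = min(((n, m.residual(f)) for n, m in models.items()),
                        key=lambda t: t[1])
        tags.append(name if err <= accept else None)
    return tags
```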


Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Content-based video analysis using visual features has been the main approach for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through such analysis. However, a movie, a typical type of video data, is organized by story as well as by audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis when content-based analysis using only low-level audio-visual data is applied to movie information retrieval. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone. A formal model is needed that can determine relationships among movie contents, or track changes of meaning, in order to retrieve story information accurately. Recently, story-based video analysis techniques using the social network concept have emerged for story information retrieval. These approaches represent a story by the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in the relationships between characters as the story develops. Second, they miss profound information, such as the emotions indicating the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution.
Third, they do not take account of the events and background that contribute to the story. As a result, this paper reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks. We also suggest the necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical structure of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract the significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places where the main characters first meet or where they stay for long periods can be extracted from this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of information related to the story and evaluate the performance of the proposed method. We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movements, and conflicts and resolutions in the story.
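
The in-degree/out-degree tracking over story progression can be illustrated with a toy sketch. The (speaker, listener) scene format is a hypothetical encoding of script dialogue, and the character names below are only examples.

```python
from collections import defaultdict

def degree_timeline(scenes):
    """Track out-degree (lines spoken to others) and in-degree (lines
    received) per character, scene by scene. Each scene is a list of
    (speaker, listener) dialogue pairs."""
    timeline = []
    for scene in scenes:
        out_deg, in_deg = defaultdict(int), defaultdict(int)
        for speaker, listener in scene:
            out_deg[speaker] += 1
            in_deg[listener] += 1
        timeline.append({"out": dict(out_deg), "in": dict(in_deg)})
    return timeline
```

Plotting these per-scene degrees over time gives the kind of inner-nature trace the paper proposes: a character whose in-degree rises while the out-degree stays flat is increasingly spoken about or addressed rather than speaking.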

Video retrieval system based on closed caption (폐쇄자막을 기반한 자막기반 동영상 검색 시스템)

  • 김효진;황인정;이은주;이응혁;민홍기
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2000.12a / pp.57-60 / 2000
  • Even though video data are used in many fields, they are very difficult to reuse and search because of their unstructured and complicated form. In this study, we present a video retrieval system based on closed captions synchronized with the video, using the SMIL and SAMI languages, which describe multimedia data in a structured and systematic form. The system works as follows: first, the user inputs a keyword; then the time stamps are sampled from the caption-file strings that contain the keyword. Based on the result, the screen shows the corresponding video frame.
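
The keyword-to-timestamp lookup over a SAMI caption file can be sketched as below. This is a minimal parser for illustration; real SAMI files may need fuller HTML handling.

```python
import re

def find_keyword_times(sami_text, keyword):
    """Return the Start timestamps (ms) of SAMI <SYNC> blocks whose
    caption text contains the keyword."""
    times = []
    # Each <SYNC Start=N> begins a caption block running to the next <SYNC>
    blocks = re.split(r"(?i)<sync\s+start=(\d+)[^>]*>", sami_text)
    # re.split yields [prefix, start1, text1, start2, text2, ...]
    for start, text in zip(blocks[1::2], blocks[2::2]):
        plain = re.sub(r"<[^>]+>", " ", text)  # strip remaining tags
        if keyword.lower() in plain.lower():
            times.append(int(start))
    return times
```

Seeking the player to each returned timestamp reproduces the described behavior: keyword in, matching video frame out.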


Surveillance Video Retrieval based on Object Motion Trajectory (물체의 움직임 궤적에 기반한 감시 비디오의 검색)

  • 정영기;이규원;호요성
    • Journal of Broadcast Engineering / v.5 no.1 / pp.41-49 / 2000
  • In this paper, we propose a new method of indexing and searching based on object-specific features at different semantic levels for video retrieval. A motion trajectory model is used as an indexing key for accessing individual objects at the semantic level. By tracking individual objects in segmented data, we generate motion trajectories and set model parameters using polynomial curve fitting. The proposed search scheme supports various types of queries, including query by example, query by sketch, and query on weighting parameters for event-based video retrieval. When retrieving a video clip of interest, the system returns the best matching events in similarity order.
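
Polynomial curve fitting of a motion trajectory, and a coefficient-space similarity usable for query-by-example, can be sketched as follows. The polynomial degree and the L2 coefficient distance are assumptions; the paper's actual similarity measure is not given in the abstract.

```python
import numpy as np

def trajectory_model(points, degree=2):
    """Fit x(t), y(t) polynomials to a tracked object's trajectory.
    `points` is a list of (x, y) positions at uniform time steps."""
    t = np.arange(len(points))
    xs, ys = zip(*points)
    return np.polyfit(t, xs, degree), np.polyfit(t, ys, degree)

def trajectory_distance(model_a, model_b):
    """Similarity for query-by-example: L2 distance between the fitted
    coefficient vectors of two trajectories (smaller = more similar)."""
    (ax, ay), (bx, by) = model_a, model_b
    return float(np.linalg.norm(ax - bx) + np.linalg.norm(ay - by))
```

Ranking stored trajectory models by this distance against a query trajectory yields the similarity-ordered result list the abstract describes.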


A Video Shot Verification System (비디오 샷 검증 시스템)

  • Chung, Ji-Moon
    • Journal of Digital Convergence / v.7 no.2 / pp.93-102 / 2009
  • Since video is composed of unstructured data with massive storage requirements and a linear form, various research studies are needed to provide the required contents for users who are accustomed to dealing with standardized data such as documents and images. Previous studies have shown the occurrence of undetected and falsely detected shots. This thesis suggests a shot verification and video retrieval system using visual rhythm to reduce these kinds of errors. The research results of this study are summarized as follows. First, the suggested system detects the parts assumed to be shot boundaries easily and quickly, just from changes in the visual rhythm, without playing the video; this enables deleting falsely detected shots and generating undetected shots and key frames. Second, during retrieval, queries by thumbnail and by keyword are possible, and the user can assign priorities between color and shape; as a result, the corresponding shot or scene is displayed. If the preferred shot is not found, the key frames of similar shots are supplied and can be used in further queries for the next scene.
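
The visual-rhythm idea, sampling each frame along its diagonal into one column so that cuts appear as vertical discontinuities, can be sketched as follows. The diagonal sampling path and the boundary threshold are assumptions for illustration.

```python
def visual_rhythm(frames):
    """Build a visual rhythm image: one column per frame, sampled
    along the frame's main diagonal. `frames` are 2D grayscale grids."""
    columns = []
    for f in frames:
        h, w = len(f), len(f[0])
        n = min(h, w)
        columns.append([f[i * h // n][i * w // n] for i in range(n)])
    return columns

def shot_boundaries(rhythm, threshold=50):
    """Candidate cuts where the mean absolute difference between
    adjacent visual-rhythm columns jumps above the threshold."""
    cuts = []
    for i in range(1, len(rhythm)):
        mad = sum(abs(a - b) for a, b in zip(rhythm[i - 1], rhythm[i])) / len(rhythm[i])
        if mad > threshold:
            cuts.append(i)
    return cuts
```

Because only one pixel column per frame is examined, candidate boundaries can be screened without decoding or playing the full video, which matches the speed argument above.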


A Multimedia Database System using Method of Automatic Annotation Update and Multi-Partition Color Histogram (자동 주석 갱신 및 다중 분할 칼라 히스토그램 기법을 이용한 멀티미디에 데이터베이스 시스템)

  • Ahn Jae-Myung;Oh Hae-Seok
    • The KIPS Transactions: Part B / v.11B no.6 / pp.701-708 / 2004
  • Existing content-based video retrieval systems search by using a single method such as annotation-based or feature-based retrieval. Hence, they not only show low search efficiency but also require much effort from the system administrator or annotator to approach fully automatic processing. In this paper, we propose an agent-based, automatic, and unified semantics-based video retrieval system, which supports various semantic retrievals of massive video data by integrating feature-based retrieval and annotation-based retrieval. The indexing agent builds the semantic annotations of extracted key frames by analyzing a user's fundamental query and by selecting the key-frame image requested by the query. Also, a key frame selected by the user serves as the query image for feature-based retrieval, and the indexing agent searches and displays the most similar key-frame images after comparing the query image with the key frames in the database using the multi-partition color histogram technique. Furthermore, it is shown that the performance of the proposed system is significantly improved.
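
The multi-partition color histogram comparison can be sketched as below: split the image into a grid, build one histogram per partition, and average a per-partition similarity. The 3x3 partitioning, bin count, and histogram-intersection measure are assumptions, not the paper's exact parameters.

```python
def partition_histograms(img, rows=3, cols=3, bins=8, max_val=256):
    """Compute a color histogram for each of rows x cols partitions of a
    2D image of scalar color values (e.g. quantized hue)."""
    h, w = len(img), len(img[0])
    hists = []
    for r in range(rows):
        for c in range(cols):
            hist = [0] * bins
            for y in range(r * h // rows, (r + 1) * h // rows):
                for x in range(c * w // cols, (c + 1) * w // cols):
                    hist[img[y][x] * bins // max_val] += 1
            hists.append(hist)
    return hists

def histogram_similarity(hists_a, hists_b):
    """Average histogram intersection over partitions: 1.0 = identical."""
    sims = []
    for ha, hb in zip(hists_a, hists_b):
        total = max(sum(ha), 1)
        sims.append(sum(min(a, b) for a, b in zip(ha, hb)) / total)
    return sum(sims) / len(sims)
```

Partitioning preserves coarse spatial layout, so two key frames with the same global colors but different arrangements score lower than a whole-image histogram would suggest.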

Real-time Playback of a Windows based Multichannel Visual Monitoring System (윈도우즈 기반 다채널 영상 감시 시스템의 실시간 재생)

  • 양정훈;정선태
    • Proceedings of the IEEK Conference / 2003.07e / pp.2116-2119 / 2003
  • In this paper, we present a DirectShow-based retrieval and playback subsystem of a DVR (Digital Video Recorder), which supports real-time playback of stored video data and synchronized playback among several video channels. The effectiveness of our proposed design is verified through experiments with a DVR system implementing it.
