• Title/Summary/Keyword: Video Retrieval


Similar Movie Retrieval using Low Peak Feature and Image Color (Low Peak Feature와 영상 Color를 이용한 유사 동영상 검색)

  • Chung, Myoung-Beom;Ko, Il-Ju
    • Journal of the Korea Society of Computer and Information / v.14 no.8 / pp.51-58 / 2009
  • In this paper, we propose a search algorithm using the Low Peak Feature of audio and image color values, by which similar movies can be identified. Combing through entire video files to recognize and retrieve matching movies requires much time and memory. Moreover, such methods share a critical problem: videos that have been altered only in resolution, or merely converted with a different codec, are erroneously recognized as different. We therefore present a similar-video-retrieval method that relies on audio peak patterns, which are not greatly affected by changes in resolution or codec, together with image color values used for similarity comparison. The method showed a 97.7% search success rate on a set of 2,000 video files whose audio bit rate had been altered or which had been purposely encoded with a different codec.
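A minimal sketch of the general recipe the abstract describes, pairing an audio peak feature with an average color histogram. The function names, frame length, and bin count are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.signal import find_peaks

def low_peak_feature(audio, frame_len=1024):
    """Frame-wise energy envelope, then the spacing of its prominent peaks."""
    n = len(audio) // frame_len
    energy = np.array([np.sum(audio[i * frame_len:(i + 1) * frame_len] ** 2)
                       for i in range(n)])
    peaks, _ = find_peaks(energy, prominence=energy.std())
    return np.diff(peaks)  # inter-peak spacing is fairly robust to codec/bit-rate changes

def avg_color_hist(frames, bins=16):
    """Average per-channel color histogram over sampled frames (H x W x 3, uint8)."""
    hists = [np.concatenate([np.histogram(f[..., c], bins=bins, range=(0, 256))[0]
                             for c in range(3)])
             for f in frames]
    h = np.asarray(hists, dtype=float).mean(axis=0)
    return h / (h.sum() + 1e-9)  # normalize so clips of different lengths compare
```

Two clips could then be scored by, e.g., correlating the peak-spacing sequences and the histograms, and accepted as similar when both scores pass a threshold.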

Detecting near-duplication Video Using Motion and Image Pattern Descriptor (움직임과 영상 패턴 서술자를 이용한 중복 동영상 검출)

  • Jin, Ju-Kyong;Na, Sang-Il;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.107-115 / 2011
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicates, based on content-based retrieval in a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene change detection. For video services and copyright-related business models, a technology is needed that detects near-duplicates by matching longer stretches of video, rather than only searching for a short part or a single frame of the original. To detect near-duplicate videos, we propose a motion distribution descriptor and a frame descriptor for each video segment. The motion distribution descriptor is constructed from the motion vectors of macroblocks obtained during video decoding. When matching descriptors, the motion distribution descriptor is used as a filter to improve matching speed. However, motion distribution alone has low discriminability, so identification is refined using frame descriptors extracted from representative frames selected within each scene segment. The proposed algorithm shows a high success rate and a low false alarm rate. In addition, matching with these descriptors is very fast, confirming that the algorithm is useful in practical applications.
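The two-stage matching idea (a cheap motion-histogram filter first, frame descriptors only on survivors) could look roughly like the sketch below; the descriptor layouts and thresholds are assumptions for illustration:

```python
import numpy as np

def motion_hist(motion_vectors, bins=8):
    """Quantize macroblock motion vectors (N x 2 array) into an angle histogram."""
    ang = np.arctan2(motion_vectors[:, 1], motion_vectors[:, 0])
    h, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    return h / (h.sum() + 1e-9)

def two_stage_match(query, segments, coarse_thr=0.1, fine_thr=0.2):
    """Stage 1 filters on the cheap motion histogram; stage 2 confirms on frame descriptors."""
    hits = []
    for seg in segments:
        if np.abs(query["motion"] - seg["motion"]).sum() < coarse_thr:    # stage 1
            if np.linalg.norm(query["frame"] - seg["frame"]) < fine_thr:  # stage 2
                hits.append(seg["id"])
    return hits
```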

Content based Video Copy Detection Using Spatio-Temporal Ordinal Measure (시공간 순차 정보를 이용한 내용기반 복사 동영상 검출)

  • Jeong, Jae-Hyup;Kim, Tae-Wang;Yang, Hun-Jun;Jin, Ju-Kyong;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.113-121 / 2012

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications / v.34 no.7 / pp.646-654 / 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction suffers from the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
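A rough sketch of the corner-density step, assuming OpenCV's `cornerHarris`; the block size and thresholds are illustrative, and the labeling and post-processing steps are left out:

```python
import cv2
import numpy as np

def text_candidate_mask(gray, block=16, density_thr=0.05):
    """Harris corner map -> per-block corner density -> binary text-candidate blocks."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = (response > 0.01 * response.max()).astype(np.uint8)
    h, w = corners.shape
    mask = np.zeros((h // block, w // block), np.uint8)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            blk = corners[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mask[i, j] = blk.mean() > density_thr  # dense corners suggest text
    return mask  # connected components of this mask approximate text regions
```

Connected-component labeling of the returned mask (the paper's third step) would then yield the final text regions.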

Video Browsing Using An Efficient Scene Change Detection in Telematics (텔레매틱스에서 효율적인 장면전환 검출기법을 이용한 비디오 브라우징)

  • Shin Seong-Yoon;Pyo Seong-Bae
    • Journal of the Korea Society of Computer and Information / v.11 no.4 s.42 / pp.147-154 / 2006
  • Effective and efficient representation of the color features of multiple video frames is an important yet challenging task for visual information management systems. This paper proposes a Video Browsing Service (VBS) that provides both video content retrieval and video browsing through a real-time user interface on the Web. For scene segmentation and key frame extraction from video sequences, we propose an efficient scene change detection method that combines the RGB color histogram with the χ² (chi-square) histogram. The resulting key frames are linked by both physical and logical indexing. The system includes the video editing and retrieval functions of a VCR. Three elements, the date, the need, and the subject, are used for video browsing. The Video Browsing Service is implemented with MySQL, PHP, and JMF under the Apache Web Server.
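The chi-square histogram comparison at the heart of such a cut detector is simple to state; a minimal sketch with an assumed global threshold:

```python
import numpy as np

def chi_square_diff(h1, h2, eps=1e-9):
    """Chi-square distance between two normalized color histograms."""
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def detect_cuts(frame_hists, thr=0.3):
    """Flag a scene change wherever consecutive frame histograms differ sharply."""
    return [i + 1 for i in range(len(frame_hists) - 1)
            if chi_square_diff(frame_hists[i], frame_hists[i + 1]) > thr]
```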


Temporal Texture modeling for Video Retrieval (동영상 검색을 위한 템포럴 텍스처 모델링)

  • Kim, Do-Nyun;Cho, Dong-Sub
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.3 / pp.149-157 / 2001
  • In video retrieval systems, visual cues from still images and motion information from video are employed as feature vectors. We generate temporal textures to express motion information; their advantages are simple representation and easy computation. We build these temporal textures from wavelet coefficients to express the motion information (M components). Temporal texture feature vectors are then extracted using spatial texture features, i.e., spatial gray-level dependence. Motion amount and motion centroid are also computed from the temporal textures. Motion trajectories provide the most important information for expressing motion properties; in our modeling system, the main motion trajectory can be extracted from the temporal textures.
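Treating a temporal texture as a 2D array of motion magnitudes, the motion amount and motion centroid mentioned above reduce to weighted moments; a sketch under that assumption:

```python
import numpy as np

def motion_stats(texture):
    """Motion amount (total magnitude) and motion centroid of a temporal texture."""
    total = texture.sum()
    ys, xs = np.indices(texture.shape)
    cy = (ys * texture).sum() / (total + 1e-9)
    cx = (xs * texture).sum() / (total + 1e-9)
    return total, (cy, cx)
```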


Indexing and Retrieval of Human Individuals on Video Data Using Face and Speaker Recognition

  • Y. Sugiyama;N. Ishikawa;M. Nishida;Y. Ariki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.122-127 / 1998
  • In this paper, we focus on the retrieval of human individuals recorded in a video database. Our purpose is to index persons by their face or voice and to retrieve the time sections in which they appear in the video data. The system can track and extract the face or voice of a given person and construct a model of that individual in a self-organizing mode. If the person appears again at a different time, the system marks the associated frames as belonging to the same person. In this way, the same person can be retrieved even if the system does not know his or her exact name. For face and speaker modeling, a subspace method is employed to improve indexing accuracy.
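As an illustration of the subspace idea (not the authors' system), a person model can be a PCA basis of that person's face or voice feature vectors, with similarity scored by reconstruction error:

```python
import numpy as np

def build_subspace(samples, k=10):
    """PCA basis from an (n_samples x dim) matrix of one person's feature vectors."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]  # top-k principal directions

def score(x, mean, basis):
    """Higher when x is well reconstructed by the person's subspace."""
    centered = x - mean
    recon = basis.T @ (basis @ centered)
    return -np.linalg.norm(centered - recon)
```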


Video retrieval system based on closed caption (폐쇄자막을 기반한 자막기반 동영상 검색 시스템)

  • 김효진;황인정;이은주;이응혁;민홍기
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2000.12a / pp.57-60 / 2000
  • Although video data is used in many fields, it is very difficult to reuse and search because of its atypical (unfixed) and complicated structure. In this study, we present a video retrieval system based on closed captions synchronized with the video, using the SMIL and SAMI languages, which describe multimedia data in a structured and systematic form. The system operates as follows: first, the user inputs a keyword; then the time stamps are sampled from the caption-file strings that contain the keyword. As a result, the screen shows the corresponding video frames.
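Once captions are parsed into (start time, text) pairs, the keyword-to-frame lookup the abstract describes is nearly a one-liner; the data layout below is a hypothetical stand-in for a parsed SAMI/SMIL file:

```python
def find_times(captions, keyword):
    """captions: list of (start_ms, text) pairs parsed from a SAMI/SMIL caption file."""
    return [start for start, text in captions if keyword in text]

# Seeking the player to each returned start_ms would show the matching frames.
times = find_times([(0, "opening titles"), (5300, "video retrieval demo")], "retrieval")
```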


Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Meanwhile, content-based video analysis using visual features has been the main source for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape. Moreover, similarity between videos can be measured through content-based analysis. However, a movie, one of the typical types of video data, is organized by story as well as audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis, when content-based video analysis using only low-level audio-visual data is applied to movie information retrieval. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone. A formal model is needed, which can determine relationships among movie contents, or track meaning changes, in order to accurately retrieve the story information. Recently, story-based video analysis techniques have emerged that use a social network concept for story information retrieval. These approaches represent a story by the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in relationships between characters according to story development. Second, they miss profound information, such as emotions indicating the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take account of the events and background that contribute to the story. As a result, this paper reviews the importance and weaknesses of previous video analysis methods ranging from content-based approaches to story analysis based on social networks. We also suggest the necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus. WordNet offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms). We present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of the character's dialogue in the script and track the character's inner nature using social network concepts, such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network in a movie through its progression. Finally, the spatial background where characters meet and where events take place is an important element in the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places where main characters first meet or where they stay for long periods of time can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of information related to the story and evaluate the performance of the proposed method. We can track the story information extracted over time and detect changes in a character's emotion or inner nature, spatial movement, and conflicts and resolutions in the story.
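A toy sketch of the in-degree/out-degree tracking: with each scene's dialogue reduced to (speaker, listener) pairs, a character's incoming and outgoing links can be traced across the movie. The data layout is an assumption for illustration, not the paper's representation:

```python
def degree_series(scenes, character):
    """scenes: list of scenes, each a list of (speaker, listener) dialogue pairs."""
    series = []
    for pairs in scenes:
        out_deg = sum(1 for s, _ in pairs if s == character)  # lines spoken
        in_deg = sum(1 for _, l in pairs if l == character)   # lines received
        series.append((in_deg, out_deg))
    return series
```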

Emotion-based Video Scene Retrieval using Interactive Genetic Algorithm (대화형 유전자 알고리즘을 이용한 감성기반 비디오 장면 검색)

  • Yoo Hun-Woo;Cho Sung-Bae
    • Journal of KIISE:Computing Practices and Letters / v.10 no.6 / pp.514-528 / 2004
  • An emotion-based video scene retrieval algorithm is proposed in this paper. First, abrupt and gradual shot boundaries are detected in the video clip representing a specific story. Then, five video features, 'average color histogram', 'average brightness', 'average edge histogram', 'average shot duration', and 'gradual change rate', are extracted from each video, and a mapping between these features and the emotional space the user has in mind is achieved by an interactive genetic algorithm. Once the user has selected videos that convey the target emotion from an initial population, the feature vectors of the selected videos are regarded as chromosomes and genetic crossover is applied to them. Next, the new chromosomes are compared with the feature vectors of the database videos using a similarity function, and the most similar videos are retrieved as the next generation. By iterating this procedure, a population of videos matching the emotion the user has in mind is retrieved. To show the validity of the proposed method, six categories, 'action', 'excitement', 'suspense', 'quietness', 'relaxation', and 'happiness', are used as emotions in the experiments. Over 300 commercial videos, retrieval results show 70% effectiveness on average.
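One generation of the interactive loop could be sketched as below: user-selected feature vectors are crossed over, and each child retrieves its nearest database video for the next round. The crossover point, similarity metric, and data shapes are assumptions for illustration:

```python
import numpy as np

def next_generation(selected, db_feats, pop_size, rng=None):
    """selected: list of user-picked feature vectors; db_feats: (N x dim) array."""
    if rng is None:
        rng = np.random.default_rng()
    children = []
    while len(children) < pop_size:
        a = selected[rng.integers(len(selected))]
        b = selected[rng.integers(len(selected))]
        cut = int(rng.integers(1, len(a)))                 # one-point crossover
        children.append(np.concatenate([a[:cut], b[cut:]]))
    # each child retrieves its most similar database video (indices into db_feats)
    return [int(np.argmin(np.linalg.norm(db_feats - c, axis=1))) for c in children]
```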