• Title/Summary/Keyword: Text-Video Retrieval

Search Result 47

A Study on the Alternative Method of Video Characteristics Using Captioning in Text-Video Retrieval Model (텍스트-비디오 검색 모델에서의 캡션을 활용한 비디오 특성 대체 방안 연구)

  • Lee, Dong-hun;Hur, Chan;Park, Hyeyoung;Park, Sang-hyo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.6
    • /
    • pp.347-353
    • /
    • 2022
  • In this paper, we propose a method that performs text-video retrieval by replacing video features with captions. In general, existing embedding-based models require both joint embedding space construction and a CNN-based video encoding process, which demands a large amount of computation in training as well as inference. To overcome this problem, we introduce a video-captioning module and replace the visual features of a video with the captions it generates. Specifically, we adopt a caption generator that converts candidate videos into captions at inference time, enabling direct comparison between the query text and candidate videos without a joint embedding space. Through experiments, the proposed model successfully reduces the amount of computation and the inference time by skipping visual processing and joint embedding space construction on two benchmark datasets, MSR-VTT and VATEX. (A rough illustrative sketch of this caption-matching idea follows below.)
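
The abstract does not state how the query text is compared with the generated captions. The following is only a minimal sketch of the general idea, assuming captions have already been produced offline by some captioning module and scoring candidates with plain TF-IDF cosine similarity; the example captions, the `retrieve` helper, and the similarity choice are illustrative assumptions, not the paper's actual components.

```python
# Hypothetical sketch: retrieval over pre-generated captions instead of
# a joint text-video embedding space. The captions are assumed to have
# been produced offline by some video-captioning module (not shown).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assumed example data: one caption per candidate video.
video_captions = {
    "video_001": "a man is cooking pasta in a kitchen",
    "video_002": "a dog catches a frisbee in a park",
    "video_003": "two people play table tennis indoors",
}

def retrieve(query: str, captions: dict, top_k: int = 2):
    """Rank candidate videos by text-to-caption similarity only."""
    ids = list(captions.keys())
    vectorizer = TfidfVectorizer()
    # Fit on captions plus the query so both share one vocabulary.
    matrix = vectorizer.fit_transform([captions[i] for i in ids] + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(ids, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

print(retrieve("someone cooking food", video_captions))
```

Any stronger text-matching model (e.g., a sentence encoder) could be swapped in without reintroducing a joint text-video embedding space.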

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.87-96
    • /
    • 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods. This has led to massive volumes of web-based lecture videos, so indexing and retrieval of a lecture video or a lecture video topic has proved to be an exceptionally challenging problem. Many techniques in the literature are either visual or audio based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work focuses on both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text from the slides is extracted using the proposed Merged Bounding Box (MBB) text detector, and the audio text is extracted using Google Speech Recognition (GSR). This hybrid approach generates the indexing keywords from the merged transcripts of both the video and audio extractors. The search within the indexed documents is optimized based on Naïve Bayes (NB) classification and K-Means clustering, so that results are retrieved by searching only the relevant document cluster among the predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset generated by assigning categories to lecture video transcripts gathered from e-learning portals. Search performance is assessed based on accuracy and time taken, and the improved accuracy of the proposed indexing technique is compared with the established chain indexing technique. (A rough sketch of cluster-restricted search follows below.)
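
As a hedged illustration of the cluster-restricted search described above, the sketch below clusters transcript documents with K-Means over TF-IDF vectors and answers a query by scoring only the documents in the query's cluster; the sample transcripts, cluster count, and feature choice are assumptions, and the Naïve Bayes category classification step is omitted for brevity.

```python
# Hypothetical sketch: restrict search to one K-Means cluster of lecture
# transcripts instead of scanning the whole corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

transcripts = [                                          # assumed sample docs
    "gradient descent minimizes a loss function",
    "relational databases use SQL joins and indexes",
    "backpropagation trains neural network weights",
    "normalization reduces redundancy in table design",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(transcripts)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(doc_vectors)

def search(query: str, top_k: int = 2):
    """Score only the documents in the cluster the query falls into."""
    q = vectorizer.transform([query])
    cluster = kmeans.predict(q)[0]
    members = [i for i, c in enumerate(kmeans.labels_) if c == cluster]
    scores = cosine_similarity(q, doc_vectors[members]).ravel()
    ranked = sorted(zip(members, scores), key=lambda p: p[1], reverse=True)
    return [(transcripts[i], s) for i, s in ranked[:top_k]]

print(search("how are neural networks trained"))
```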

A Semantic Content Retrieval and Browsing System Based on Associative Relation in Video Databases

  • Bok Kyoung-Soo;Yoo Jae-Soo
    • International Journal of Contents
    • /
    • v.2 no.1
    • /
    • pp.22-28
    • /
    • 2006
  • In this paper, we propose a new semantic content model that uses individual features, associative relations, and visual features to efficiently support browsing and retrieval of video semantic contents, and we design and implement a browsing and retrieval system based on this model. The browsing system supports annotation-based information, keyframe-based visual information, associative relations, and text-based semantic information using a tree-based browsing technique. The retrieval system supports text-based retrieval, visual-feature retrieval, and associative-relation retrieval according to the retrieval type of the semantic contents. (A minimal sketch of an associative-relation structure follows after this entry.)

  • PDF
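
A minimal sketch of how associative relations between semantic contents might be stored and browsed, assuming a simple in-memory graph; the node names, relation labels, and key frame field are invented for illustration and do not reproduce the paper's model.

```python
# Hypothetical sketch: an associative-relation structure for browsing
# video semantic contents. All names below are invented examples.
from collections import defaultdict

class SemanticGraph:
    def __init__(self):
        self.relations = defaultdict(list)   # node -> [(relation, node), ...]
        self.keyframes = {}                  # node -> representative key frame

    def add_relation(self, src, relation, dst):
        self.relations[src].append((relation, dst))

    def browse(self, node):
        """Return contents associated with a node for tree-style browsing."""
        return self.relations.get(node, [])

g = SemanticGraph()
g.keyframes["goal_scene"] = "frame_0412.jpg"
g.add_relation("goal_scene", "precedes", "celebration_scene")
g.add_relation("goal_scene", "features", "player_7")
print(g.browse("goal_scene"))
```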

Improving Transformer with Dynamic Convolution and Shortcut for Video-Text Retrieval

  • Liu, Zhi;Cai, Jincen;Zhang, Mengmeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2407-2424
    • /
    • 2022
  • Recently, the Transformer has made great progress in video retrieval tasks due to its high representation capability. In a Transformer, the cascaded self-attention modules capture long-distance feature dependencies, but local feature details are likely to deteriorate. In addition, increasing the depth of the structure is likely to produce learning bias in the learned features. In this paper, an improved Transformer structure named TransDCS (Transformer with Dynamic Convolution and Shortcut) is proposed. A Multi-head Conv-Self-Attention module is introduced to model local dependencies and improve the efficiency of local feature extraction. Meanwhile, an augmented-shortcut module based on a dual identity matrix is applied to enhance the propagation of input features and mitigate the learning bias. The proposed model is tested on the MSRVTT, LSMDC and ActivityNet benchmarks, and it surpasses all previous solutions for the video-text retrieval task; for example, on the LSMDC benchmark, a gain of about 2.3% MdR and 6.1% MnR is obtained over recently proposed multimodal-based methods. (A rough PyTorch-style sketch of the conv-attention idea follows below.)
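
The full TransDCS architecture cannot be reconstructed from the abstract alone. The sketch below is only a rough PyTorch illustration of the two stated ideas, self-attention augmented with a depthwise convolution branch for local details and an extra shortcut of the input, with the dimensions, normalization placement, and the way the augmented shortcut is realized all assumed.

```python
import torch
import torch.nn as nn

class ConvSelfAttentionBlock(nn.Module):
    """Rough illustration only: global self-attention plus a depthwise
    convolution branch (local details) and an extra input shortcut."""
    def __init__(self, dim: int, heads: int = 8, kernel_size: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local = nn.Conv1d(dim, dim, kernel_size,
                               padding=kernel_size // 2, groups=dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence, dim)
        attn_out, _ = self.attn(x, x, x)                          # long-range
        conv_out = self.local(x.transpose(1, 2)).transpose(1, 2)  # local
        # ordinary residual plus an additional "augmented" shortcut of x
        return self.norm(attn_out + conv_out + 2.0 * x)

block = ConvSelfAttentionBlock(dim=64)
tokens = torch.randn(2, 10, 64)          # assumed toy token sequence
print(block(tokens).shape)               # torch.Size([2, 10, 64])
```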

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.646-654
    • /
    • 2007
  • In recent years, the use of text inserted into TV content has grown to give viewers better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video frame. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction faces serious problems due to the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates considering the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce the processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method. (A rough OpenCV sketch of the pipeline follows below.)
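
A rough OpenCV sketch of the four-step pipeline described above (corner map, density-based candidates, labeling, and a simple area filter standing in for post-processing); the thresholds, kernel sizes, and minimum-area filter are illustrative guesses rather than the paper's parameters.

```python
# Hypothetical sketch: Harris corner map -> dense-corner blobs -> labeled
# candidate caption regions. All thresholds below are illustrative guesses.
import cv2
import numpy as np

def extract_text_regions(frame_bgr, corner_thresh=0.01, min_area=200):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    # Keep strong corners, then dilate so dense corner areas merge into blobs.
    mask = ((corners > corner_thresh * corners.max()) * 255).astype(np.uint8)
    mask = cv2.dilate(mask, np.ones((9, 9), np.uint8))
    # Labeling: connected components become candidate text regions.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [stats[i, :4] for i in range(1, num)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]   # (x, y, w, h) boxes

frame = cv2.imread("frame.jpg")          # assumed input frame
if frame is not None:
    print(extract_text_regions(frame))
```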

Video Data Modeling for Supporting Structural and Semantic Retrieval (구조 및 의미 검색을 지원하는 비디오 데이타의 모델링)

  • Bok, Kyoung-Soo;Yoo, Jae-Soo;Cho, Ki-Hyung
    • Journal of KIISE:Databases
    • /
    • v.30 no.3
    • /
    • pp.237-251
    • /
    • 2003
  • In this paper, we propose a video retrieval system that efficiently searches the logical structure and semantic contents of video data. The proposed system employs a layered modelling method that organizes video data into a raw data layer, a content layer, and a key frame layer. This layered model represents the logical structure and semantic contents of video data in the content layer. The system also supports various types of search, such as text search, visual-feature-based similarity search, spatio-temporal-relationship-based similarity search, and semantic content search. (A minimal sketch of such a layered model follows below.)
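
As a hedged illustration of the three-layer model, the sketch below defines one possible raw data / content / key frame layering with Python dataclasses; all class and field names are invented, and the paper's actual schema is richer.

```python
# Hypothetical sketch of a three-layer video data model; names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RawDataLayer:                 # the physical video stream
    file_path: str
    duration_sec: float

@dataclass
class KeyFrameLayer:                # representative frames for browsing
    frame_times: List[float] = field(default_factory=list)

@dataclass
class ContentLayer:                 # logical structure + semantic contents
    shots: List[str] = field(default_factory=list)        # e.g. scene labels
    annotations: List[str] = field(default_factory=list)  # semantic keywords

@dataclass
class VideoDocument:
    raw: RawDataLayer
    content: ContentLayer
    keyframes: KeyFrameLayer

doc = VideoDocument(
    raw=RawDataLayer("news_0901.mp4", 1800.0),
    content=ContentLayer(shots=["anchor", "report"], annotations=["election"]),
    keyframes=KeyFrameLayer(frame_times=[12.0, 340.5]),
)
print(doc.content.annotations)
```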

Semantic Video Retrieval Based On User Preference (사용자 선호도를 고려한 의미기반 비디오 검색)

  • Jung, Min-Young;Park, Sung-Han
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.4
    • /
    • pp.127-133
    • /
    • 2009
  • To ensure access to rapidly growing video collections, video indexing is becoming more and more essential. A video database should be built for fast searching and for extracting accurate features from video information with increasingly complex characteristics. Moreover, the video indexing structure should support efficient retrieval of interesting contents that reflects user preferences. In this paper, we propose a semantic video retrieval method based on user preference. Previous methods do not consider user preferences; furthermore, conventional methods return results by simple text matching of the user's query, which does not support semantic search. To overcome these limitations, we develop a method for user preference analysis and present a method of video ontology construction for semantic retrieval. The simulation results show that the proposed algorithm performs better than previous methods in terms of semantic video retrieval based on user preferences.

Natural Language based Video Retrieval System with Event Analysis of Multi-camera Image Sequence in Office Environment (사무실 환경 내 다중카메라 영상의 이벤트분석을 통한 자연어 기반 동영상 검색시스템)

  • Lim, Soo-Jung;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.384-389
    • /
    • 2008
  • Recently, the need for systems that effectively store and retrieve video data has increased. Conventional video retrieval systems retrieve data using menus or text-based keywords; due to the lack of information, many video clips are returned at once, and the user must have a certain level of knowledge to utilize the system. In this paper, we suggest a natural-language-based conversational video retrieval system that reflects users' intentions and carries more information than keyword-based queries. The system can also retrieve videos by events, people, and their movements. First, an event database is constructed based on metadata generated by domain analysis of video collected in an office environment. A script database is then constructed based on query pre-processing and analysis. From these, a method to retrieve a video through matching between natural language queries and answers is suggested and validated through performance and process evaluation with 10 users. The natural-language-based retrieval system showed better performance and user satisfaction than the menu-based retrieval system. (A minimal sketch of event-based query matching follows after this entry.)

  • PDF
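
A minimal sketch of matching a natural-language query against an event database built from camera metadata, assuming invented event records and a naive keyword-overlap score in place of the paper's query pre-processing and script matching.

```python
# Hypothetical sketch: match a natural-language query to event records.
# The records and the overlap scoring are illustrative assumptions only.
EVENT_DB = [
    {"id": 1, "person": "alice", "action": "enters", "place": "meeting room",
     "clip": "cam2_0930.mp4"},
    {"id": 2, "person": "bob", "action": "leaves", "place": "office",
     "clip": "cam1_1105.mp4"},
]

def answer(query: str):
    """Return the clip whose event fields overlap most with the query words."""
    words = set(query.lower().split())
    def score(event):
        fields = " ".join(str(v) for k, v in event.items() if k != "clip")
        return len(words & set(fields.lower().split()))
    best = max(EVENT_DB, key=score)
    return best["clip"] if score(best) > 0 else None

print(answer("when did alice enter the meeting room"))
```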

A Study in the Preference of e-Learning Contents Delivery Types on Web Information Search Literacy in the case of Agricultural High School (농업계 고등학교 학생들의 정보검색 능력에 따른 이러닝 콘텐츠 유형 선호도 연구)

  • Yu, Byeong-Min;Kim, Su-Wook;Park, Sung-Youl;Choi, Jun-Sik
    • Journal of Agricultural Extension & Community Development
    • /
    • v.16 no.2
    • /
    • pp.463-486
    • /
    • 2009
  • The purpose of this study was to find out how preferences for e-Learning content delivery types differ according to the information search ability of agricultural high school students. The content delivery types were limited to three kinds: HTML, video, and text. The results can be summarized as follows. Preferences for e-Learning content delivery types differed by information search ability. The high-ability group mostly preferred the text delivery type, whereas the low-ability group preferred the video delivery type. These results support our belief that preferences for e-Learning delivery types can differ with students' information search abilities. We suggest that the delivery types of e-Learning should be chosen based on the students, not on the designers and developers.

  • PDF

A Study on Implementation of XML-Based Information Retrieval System for Video Contents (XML 기반의 동영상콘텐츠 검색 시스템 설계 및 구현)

  • Kim, Yong;So, Min-Ho
    • Journal of the Korean Society for information Management
    • /
    • v.26 no.4
    • /
    • pp.113-128
    • /
    • 2009
  • Generally, a user uses briefly summarized video data and text information to search video contents. To provide fast and accurate search tool for video contents in the process of searching video contents, this study proposes a method to search video clips which was partitioned from video contents. To manage and control video contents and metadata, the proposed method creates structural information based on XML on a video and metadata, and saves the information into XML database. With the saved information, when a user try to search video contents, the results of user's query to retrieve video contents would be provided through creating Xpath which has class structure information. Based on the proposed method, an information retrieval system for video clips was designed and implemented.