• Title/Abstract/Keywords: Video Features


Video Captioning with Visual and Semantic Features

  • Lee, Sujin;Kim, Incheol
    • Journal of Information Processing Systems
    • /
    • Vol. 14, No. 6
    • /
    • pp.1318-1330
    • /
    • 2018
  • Video captioning refers to the process of extracting features from a video and generating a caption from those features. This paper introduces a deep neural network model and a learning method for effective video captioning. The model uses not only visual features but also semantic features that effectively express the video. The visual features are extracted with convolutional neural networks such as C3D and ResNet, while the semantic features are extracted with a semantic feature extraction network proposed in this paper. An attention-based caption generation network is then proposed to generate captions from the extracted features. The performance and effectiveness of the proposed model are verified through experiments on two large-scale video benchmarks, the Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT) datasets.
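The attention step described above can be sketched in a few lines: the decoder query scores each frame feature, the scores are softmax-normalized, and the context vector is the weighted sum. This is a minimal illustration, not the paper's implementation; all names and dimensions are made up.

```python
import math

def attention_pool(frame_feats, query):
    """Attention over per-frame features: dot-product scores against the
    decoder query, softmax weights, weighted-sum context vector."""
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in frame_feats]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(frame_feats[0])
    context = [sum(w * feat[d] for w, feat in zip(weights, frame_feats))
               for d in range(dim)]
    return weights, context

# Toy example: two 2-d "frame features", query aligned with the first frame.
weights, context = attention_pool([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.0])
```

The weights sum to one, and the frame that matches the query best dominates the context vector.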

A Multiple Features Video Copy Detection Algorithm Based on a SURF Descriptor

  • Hou, Yanyan;Wang, Xiuzhen;Liu, Sanrong
    • Journal of Information Processing Systems
    • /
    • Vol. 12, No. 3
    • /
    • pp.502-510
    • /
    • 2016
  • Considering the diversity of video copy transforms, this paper proposes a multi-feature video copy detection algorithm based on the Speeded-Up Robust Features (SURF) local descriptor. After the video is preprocessed, coarse copy detection is performed with an ordinal measure (OM) algorithm. If the matching result exceeds a specified threshold, fine copy detection is performed with the SURF descriptor, using a box filter over the integral video. To improve detection speed, the trace of the SURF Hessian matrix is used for pre-matching, and the dimensionality of the traditional SURF feature vector is reduced before matching. Experimental results indicate that detection precision and recall are greatly improved compared with traditional algorithms, that the proposed multi-feature algorithm has good robustness and discrimination accuracy, and that detection speed is also improved.
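The ordinal measure used for coarse detection can be illustrated with a short sketch: block intensities of a frame are replaced by their ranks, and two frames are compared by the distance between rank signatures, which is invariant to global brightness shifts. This is an illustrative sketch, not the paper's code.

```python
def ordinal_measure(block_means):
    """Replace each block's mean intensity by its rank, giving the ordinal
    signature used for coarse video copy detection."""
    order = sorted(range(len(block_means)), key=lambda i: block_means[i])
    ranks = [0] * len(block_means)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

def om_distance(blocks_a, blocks_b):
    """L1 distance between two ordinal signatures; a small distance flags a
    likely copy even under brightness or contrast changes."""
    ra, rb = ordinal_measure(blocks_a), ordinal_measure(blocks_b)
    return sum(abs(x - y) for x, y in zip(ra, rb))
```

A uniformly brightened copy yields distance zero, while a frame with a different intensity ordering does not.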

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam;Ravichandran, Suban
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 6
    • /
    • pp.230-240
    • /
    • 2022
  • Sharing videos online is an important and growing use case in applications such as surveillance and mobile video search. A personalized web video retrieval system is therefore needed to help users efficiently find relevant videos in large collections. To this end, features are computed from videos, with dimensionality reduction, to capture discriminative aspects of a scene: shape, histogram, texture, object annotation, coordinates, color, and contour data. Dimensionality reduction depends mainly on feature extraction and feature selection in multi-label retrieval from multimedia data. Many researchers have proposed techniques for reducing the dimensionality of visual video features, but each has advantages and disadvantages for video retrieval. This paper presents a Novel Intent based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that combines dimensionality reduction with exact, fast video retrieval over different visual features. For dimensionality reduction, NIDRSLA learns a projection matrix by increasing the dependence between the enlarged data and the projected-space features. The approach also addresses video segmentation with frame selection using low-level and high-level features, together with efficient object annotation for video representation. Experiments on a synthetic dataset demonstrate the efficiency of the proposed approach compared with state-of-the-art video retrieval methods.

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 17, No. 8
    • /
    • pp.2053-2067
    • /
    • 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method builds an image quality assessment model with IQF-CNN (a convolutional neural network based on image quality features). Tested on the LIVE image database, the method proves effective and is then extended to video: each frame is scored first, and the relationships between frames are analyzed with a hysteresis function and different window functions to improve the accuracy of the video-level score. (ii) The second method combines a convolutional neural network (CNN) with a gated recurrent unit (GRU) network. The CNN extracts spatial features from the video frames, the GRU extracts temporal features, and a fully connected layer analyzes the combined spatiotemporal features to produce the video quality score. Both methods are verified on video quality databases and compared with other methods.
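The hysteresis idea in the first method can be sketched as a simple temporal pooling rule: the running quality estimate follows drops immediately but recovers slowly, reflecting how viewers remember quality dips. The smoothing factor and the rule itself are illustrative assumptions, not taken from the paper.

```python
def hysteresis_pool(frame_scores, alpha=0.8):
    """Pool per-frame quality scores into one video-level score with a
    hysteresis effect: drops are tracked at once, recovery is gradual."""
    memory = frame_scores[0]
    pooled = []
    for s in frame_scores:
        if s < memory:
            memory = s  # quality drop: follow it immediately
        else:
            memory = alpha * memory + (1 - alpha) * s  # slow recovery
        pooled.append(memory)
    return sum(pooled) / len(pooled)
```

A short quality dip therefore lowers the video score more than a plain average of the frame scores would.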

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is challenging because there is little direct correlation between facial features and subjective emotion in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video frame. Two deep convolutional neural networks then extract the temporal and spatial facial features: a spatial network extracts spatial features from each static expression frame, while a temporal network extracts dynamic features from the optical flow computed over multiple frames. The spatiotemporal features learned by the two networks are combined by multiplicative fusion, and the fused features are fed to a support vector machine for expression classification. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show recognition rates of 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
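The multiplicative fusion step can be sketched directly: the two feature vectors are combined element-wise before being passed to the SVM. The L2 normalization here is an illustrative choice, not stated in the abstract.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length (no-op on the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def multiplicative_fusion(spatial, temporal):
    """Element-wise product of normalized spatial and temporal feature
    vectors, as fed to the SVM classifier in the method described above."""
    return [s * t for s, t in zip(l2_normalize(spatial), l2_normalize(temporal))]
```

Dimensions where either stream responds weakly are suppressed, so the fused vector emphasizes features both streams agree on.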

Design and Implementation of the Video Query Processing Engine for Content-Based Query Processing

  • 조은희;김용걸;이훈순;정영은;진성일
    • The Transactions of the Korea Information Processing Society
    • /
    • Vol. 6, No. 3
    • /
    • pp.603-614
    • /
    • 1999
  • As multimedia application services on high-speed information networks have developed rapidly, the need has grown for video information management systems that let users retrieve video data efficiently. In this paper, we propose a video data model that integrates free annotations, image features, and spatio-temporal features for the purpose of improving content-based retrieval of video data. The proposed model can act as a generic video data model for multimedia applications, supporting free annotations, image features, spatio-temporal features, and the structural information of video data within a single framework. We also propose a video query language for efficiently specifying queries that access video clips in the video data; it can express various kinds of queries based on video content. Finally, we design and implement a query processing engine for efficient video data retrieval based on the proposed metadata model and query language.


Action Recognition Method in Sports Video Shear Based on Fish Swarm Algorithm

  • Jie Sun;Lin Lu
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 4
    • /
    • pp.554-562
    • /
    • 2023
  • In light of the low accuracy of existing sports video action recognition methods, this research offers a sports video action recognition approach based on the fish swarm algorithm. A modified fish swarm algorithm is proposed to construct invariant features and reduce feature dimensionality, on the basis of which local and global features can be classified. Experimental findings on a typical sports action dataset demonstrate that the dimensionality-reduced fused invariant features successfully retain the key details of sports actions. The average recognition time of the proposed method for walking, running, squatting, sitting, and bending is less than 326 seconds, and the average recognition rate is higher than 94%, showing that the method can significantly improve the performance and efficiency of online sports video motion recognition.

An Efficient Video Retrieval Algorithm Using Color and Edge Features

  • Kim Sang-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • Vol. 7, No. 1
    • /
    • pp.11-16
    • /
    • 2006
  • To manage large video databases, effective video indexing and retrieval are required. Many video indexing and retrieval algorithms have been proposed for frame-wise user queries or video content queries, whereas relatively few sequence matching algorithms have been proposed for video sequence queries. In this paper, we propose an efficient algorithm that extracts key frames using color histograms and matches video sequences using edge features. To match sequences effectively with low computational load, we use the key frames extracted by a cumulative measure and the distance between key frames, and compare the two key-frame sets with the modified Hausdorff distance. Experimental results on several real sequences show that the proposed video retrieval algorithm using color and edge features yields higher accuracy and performance than conventional methods such as histogram difference, the Euclidean metric, the Bhattacharyya distance, and directed divergence.
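The sequence-matching step can be sketched with the modified Hausdorff distance: instead of the worst-case (maximum) nearest-neighbour distance, it averages nearest-neighbour distances in each direction, which makes the comparison less sensitive to a single outlier key frame. The helper below is illustrative; `dist` stands in for whatever edge-feature distance is used.

```python
def modified_hausdorff(frames_a, frames_b, dist):
    """Modified Hausdorff distance between two key-frame feature sets:
    the larger of the two directed average nearest-neighbour distances."""
    def directed(xs, ys):
        return sum(min(dist(x, y) for y in ys) for x in xs) / len(xs)
    return max(directed(frames_a, frames_b), directed(frames_b, frames_a))
```

Identical key-frame sets score zero; the score grows smoothly as the sets drift apart.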


Video Indexing Using Motion Vector and Brightness Features

  • 이재현;조진선
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 3, No. 4
    • /
    • pp.27-34
    • /
    • 1998
  • This paper proposes video indexing and retrieval techniques using motion vectors and light intensity. One representative frame is extracted per shot by computing motion-vector features and light intensity, and the optical flow is computed for each representative frame: the motion-vector features are obtained from the optical flow, with a block matching algorithm (BMA) used to find the motion vectors. The light intensity values are converted into a histogram and used for cut detection. Video data are organized and indexed based on the motion-vector and intensity features of the video frames. The video database provides content-based access to the video; the index features are searched with a B+-tree, organized internally and stored in leaf nodes so that the computer storage device can be accessed directly. This paper defines the video indexing problem based on the proposed video data model.
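The block matching algorithm (BMA) mentioned above can be sketched as an exhaustive search: for a block in the current frame, try every displacement within a search radius and keep the one minimizing the sum of absolute differences (SAD). The frame layout and parameters here are illustrative, not taken from the paper.

```python
def block_matching(prev, curr, bx, by, size, radius):
    """Exhaustive BMA: return the (dx, dy) displacement into the previous
    frame that minimizes the SAD for the block at (bx, by) in the current
    frame. Frames are 2-D lists of intensities."""
    def sad(dx, dy):
        return sum(abs(curr[by + y][bx + x] - prev[by + y + dy][bx + x + dx])
                   for y in range(size) for x in range(size))
    candidates = [(sad(dx, dy), (dx, dy))
                  for dy in range(-radius, radius + 1)
                  for dx in range(-radius, radius + 1)
                  if 0 <= bx + dx and bx + dx + size <= len(prev[0])
                  and 0 <= by + dy and by + dy + size <= len(prev)]
    return min(candidates)[1]

# Toy frames: a bright 2x2 block moves one pixel to the right between frames.
prev = [[0] * 6 for _ in range(6)]
curr = [[0] * 6 for _ in range(6)]
for y in (1, 2):
    for x in (1, 2):
        prev[y][x] = 9
        curr[y][x + 1] = 9
```

The block in the current frame matches the previous frame one pixel to the left, so the estimated motion vector is (-1, 0).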


Creation of Soccer Video Highlight Using The Structural Features of Caption

  • 허문행;신성윤;이양원;류근호
    • The KIPS Transactions: Part D
    • /
    • Vol. 10D, No. 4
    • /
    • pp.671-678
    • /
    • 2003
  • Digital video is temporally long data that requires a large amount of storage space. Users therefore want to watch a pre-produced summary before watching a long, large-capacity video; in sports video in particular, they want to watch highlight video. A highlight video is thus used to decide whether a video is worth watching at all. This paper presents a method of creating soccer video highlights using the structural features of captions. These structural features are the temporal and spatial features of captions, and they are used to extract caption frame intervals and caption key frames. The highlight video is then created using scene re-establishment for the caption key frames, logical indexing, and highlight creation rules. Finally, users can search and browse highlight videos and video segments by selecting items through a browser.
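The caption-frame-interval extraction step can be sketched as grouping consecutive frames in which a caption was detected and discarding runs shorter than a threshold; the per-frame flags and the `min_len` threshold are illustrative assumptions, not details from the paper.

```python
def caption_intervals(caption_flags, min_len=2):
    """Group consecutive True flags (frames containing a caption) into
    (start, end) frame intervals, dropping runs shorter than min_len."""
    intervals, start = [], None
    for i, flagged in enumerate(list(caption_flags) + [False]):
        if flagged and start is None:
            start = i  # a caption run begins
        elif not flagged and start is not None:
            if i - start >= min_len:
                intervals.append((start, i - 1))
            start = None  # the run ended
    return intervals
```

Spurious single-frame detections are filtered out, leaving only the stable caption intervals used as highlight cues.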