• Title/Summary/Keyword: Video Classification


Social Pedestrian Group Detection Based on Spatiotemporal-oriented Energy for Crowd Video Understanding

  • Huang, Shaonian;Huang, Dongjun;Khuhroa, Mansoor Ahmed
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.8
    • /
    • pp.3769-3789
    • /
    • 2018
  • Social pedestrian groups are the basic elements that constitute a crowd; therefore, detecting such groups is scientifically important for modeling social behavior, as well as practically useful for crowd video understanding. A social group refers to a cluster of members who tend to keep a similar motion state for a sustained period of time. One of the main challenges of social group detection arises from the complex dynamic variations of crowd patterns, so most existing works model only dynamic groups to analyze crowd behavior, ignoring the existence of stationary groups in crowd scenes. In this paper, we propose a novel unified framework for detecting social pedestrian groups in crowd videos, including both dynamic and stationary pedestrian groups, based on spatiotemporal-oriented energy measurements. Dynamic pedestrian groups are hierarchically clustered based on energy-flow similarities and trajectory motion correlations between the atomic groups extracted from principal spatiotemporal-oriented energies. Furthermore, the probability distribution of static spatiotemporal-oriented energies is modeled to detect stationary pedestrian groups. Extensive experiments on challenging datasets demonstrate that our method achieves superior results for social pedestrian group detection and crowd video classification.
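
A minimal sketch of the hierarchical clustering step described above, assuming each atomic group is already summarized by an energy-flow descriptor and a trajectory-motion descriptor; the cosine distances, the blending weight `alpha`, and the merge threshold are illustrative assumptions, not the authors' actual formulation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy atomic groups: each row is [flow_dx, flow_dy, traj_dx, traj_dy].
# In the paper these come from principal spatiotemporal-oriented
# energies; here they are hypothetical placeholders.
atomic_groups = np.array([
    [1.0, 0.1, 1.1, 0.0],    # moving right
    [0.9, 0.0, 1.0, 0.1],    # moving right (same social group)
    [-1.0, 0.2, -0.9, 0.1],  # moving left
    [0.0, 1.0, 0.1, 0.9],    # moving up
])

def combined_distance(groups, alpha=0.5):
    """Blend energy-flow distance and trajectory-motion distance.

    alpha weights the two cues; the paper's exact combination rule
    is not given in the abstract, so this convex blend is an assumption.
    """
    d_energy = pdist(groups[:, :2], metric="cosine")
    d_traj = pdist(groups[:, 2:], metric="cosine")
    return alpha * d_energy + (1 - alpha) * d_traj

# Agglomerative (hierarchical) clustering over the blended distance.
Z = linkage(combined_distance(atomic_groups), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)  # e.g. [1 1 2 3]: the first two atomic groups merge
```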

Video retrieval method using non-parametric based motion classification (비-파라미터 기반의 움직임 분류를 통한 비디오 검색 기법)

  • Kim Nac-Woo;Choi Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.1-11
    • /
    • 2006
  • In this paper, we propose a novel video retrieval algorithm using non-parametric motion classification in a shot-based video indexing structure. The proposed system first obtains the key frame and motion information from each shot segmented by a scene-change detection method, and then extracts visual features and non-parametric motion information from them. Finally, we construct a real-time retrieval system supporting similarity comparison of these spatio-temporal features. After the normalized motion vector fields are created from the MPEG compressed stream, the non-parametric motion feature is extracted by discretizing each normalized motion vector into angle bins and considering the mean, variance, and direction of these bins. We use an edge-based spatial descriptor to extract the visual feature in key frames, and R*-tree structures to index the feature vectors. Experimental evidence shows that our algorithm outperforms other video retrieval methods for image indexing and retrieval.
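
The angle-bin motion descriptor lends itself to a compact sketch. The following assumes the motion vectors are already decoded and normalized; the bin count and the exact per-bin statistics are modeled on the abstract's description (mean, variance, direction) but are assumptions:

```python
import numpy as np

def motion_feature(motion_vectors, n_bins=8):
    """Non-parametric motion descriptor from normalized motion vectors.

    motion_vectors: (N, 2) array of (dx, dy) from an MPEG motion field,
    assumed already normalized. n_bins=8 is an assumed choice.
    """
    dx, dy = motion_vectors[:, 0], motion_vectors[:, 1]
    angles = np.arctan2(dy, dx)                      # in [-pi, pi]
    magnitudes = np.hypot(dx, dy)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    feature = []
    for b in range(n_bins):
        mags = magnitudes[bins == b]
        if mags.size == 0:
            feature += [0.0, 0.0, 0.0]
        else:
            # mean, variance, and relative frequency (direction weight)
            feature += [mags.mean(), mags.var(), mags.size / len(magnitudes)]
    return np.asarray(feature)

mvs = np.random.randn(100, 2)          # stand-in for a decoded motion field
print(motion_feature(mvs).shape)       # (24,) = 8 bins x 3 statistics
```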

Video augmentation technique for human action recognition using genetic algorithm

  • Nida, Nudrat;Yousaf, Muhammad Haroon;Irtaza, Aun;Velastin, Sergio A.
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.327-338
    • /
    • 2022
  • Classification models for human action recognition require robust features and large training sets for good generalization, so data augmentation methods are employed on imbalanced training sets to achieve higher accuracy. However, samples generated using conventional data augmentation only reflect existing samples within the training set; their feature representations are less diverse and hence contribute to less precise classification. This paper presents new data augmentation and action representation approaches to grow training sets. The proposed approach is based on two fundamental concepts: virtual video generation for augmentation and representation of the action videos through robust features. Virtual videos are generated from the motion history templates of action videos, which are convolved using a convolutional neural network to generate deep features. Guided by the objective function of a genetic algorithm, the spatiotemporal features of different samples are combined to generate the representations of the virtual videos, which are then classified with an extreme learning machine classifier on the MuHAVi-Uncut, iXMAS, and IAVID-1 datasets.
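
A hedged sketch of the genetic-algorithm augmentation idea: spatiotemporal features of different real samples are combined by crossover and mutation under an objective function. The fitness function, operators, and hyperparameters below are placeholders, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deep features (e.g., CNN activations of motion history
# templates) for real action videos of one class.
real_feats = rng.normal(size=(20, 128))

def fitness(candidate):
    """Assumed objective: mean cosine similarity to the real samples.
    The paper's actual objective function is not reproduced here."""
    sims = real_feats @ candidate / (
        np.linalg.norm(real_feats, axis=1) * np.linalg.norm(candidate) + 1e-9)
    return sims.mean()

def crossover(a, b):
    mask = rng.random(a.shape) < 0.5       # uniform crossover per dimension
    return np.where(mask, a, b)

def mutate(x, rate=0.05, scale=0.1):
    mask = rng.random(x.shape) < rate      # perturb a few dimensions
    return x + mask * rng.normal(scale=scale, size=x.shape)

# Evolve "virtual video" features from combinations of real features.
population = real_feats[rng.choice(20, size=10, replace=False)].copy()
for _ in range(30):
    scores = np.array([fitness(p) for p in population])
    parents = population[np.argsort(scores)[-4:]]      # keep the fittest
    children = [mutate(crossover(parents[i % 4], parents[(i + 1) % 4]))
                for i in range(len(population) - len(parents))]
    population = np.vstack([parents] + children)

virtual_feats = population   # augmented training samples for the classifier
print(virtual_feats.shape)   # (10, 128)
```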

A Personal Video Event Classification Method based on Multi-Modalities by DNN-Learning (DNN 학습을 이용한 퍼스널 비디오 시퀀스의 멀티 모달 기반 이벤트 분류 방법)

  • Lee, Yu Jin;Nang, Jongho
    • Journal of KIISE
    • /
    • v.43 no.11
    • /
    • pp.1281-1297
    • /
    • 2016
  • In recent years, personal videos have grown tremendously due to the substantial increase in the use of smart devices and networking services, with which users create and share video content easily and with few restrictions. Because videos generally have multiple modalities and the frame data in a video varies across time points, taking both into account can significantly improve event detection performance. This paper proposes an event detection method in which high-level features are first extracted from multiple modalities in the videos and rearranged in time sequence, and the association among the modalities is then learned by a DNN to produce a personal video event detector. In our proposed method, audio and image data are first synchronized and then extracted. The results are fed into GoogLeNet and a Multi-Layer Perceptron (MLP) to extract high-level features, which are rearranged in time sequence; each video is then reduced to a single feature vector for training by means of the DNN.
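
The fusion step might look like the following sketch, assuming per-segment image features (e.g., GoogLeNet activations) and audio features (e.g., MLP outputs) are already extracted, synchronized, and time-ordered; all dimensions and the fusion network itself are placeholders:

```python
import torch
import torch.nn as nn

# Assumed sizes: T time segments per video, with image and audio
# features already extracted and time-aligned for each segment.
T, D_IMG, D_AUD, N_EVENTS = 10, 1024, 128, 5

class EventDetector(nn.Module):
    """Fuses time-ordered multimodal features with a plain DNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(T * (D_IMG + D_AUD), 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, N_EVENTS),
        )

    def forward(self, img_seq, aud_seq):
        # (B, T, D): concatenate modalities per segment, then flatten
        # the time axis so the DNN sees one vector per video.
        x = torch.cat([img_seq, aud_seq], dim=-1).flatten(1)
        return self.net(x)  # event logits

model = EventDetector()
img = torch.randn(2, T, D_IMG)   # batch of 2 videos
aud = torch.randn(2, T, D_AUD)
print(model(img, aud).shape)     # torch.Size([2, 5])
```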

Land Cover Classification and Accuracy Assessment Using Aerial Videography and Landsat-TM Satellite Image -A Case Study of Taean Seashore National Park- (항공비디오와 Landsat-TM 자료를 이용한 지피의 분류와 평가 - 태안 해안국립공원을 사례로 -)

  • 서동조;박종화;조용현
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.27 no.4
    • /
    • pp.131-136
    • /
    • 1999
  • Aerial videography techniques have been used to inventory conditions associated with grassland, forests, and agricultural crop production. Most recently, aerial videography has been used to verify satellite image classifications as part of natural ecosystem surveys. The objectives of this study were: (1) to use aerial video images of the study area, one part of Taean Seashore National Park, for accuracy assessment, and (2) to determine the suitability of aerial videography for assessing the accuracy of land cover classification with Landsat-TM data. Video images were collected twice, in the summer and winter seasons, and divided into two kinds of images: wide-angle and narrow-angle. The accuracy assessment methods include calculation of the error matrix, the overall accuracy, and the kappa coefficient of agreement. This study indicates that aerial videography is an effective tool for accuracy assessment of satellite image classifications whose features are relatively large and continuous, and that it could overcome the limits of the present natural ecosystem survey method.
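
The three reported measures are standard and can be computed directly from reference and classified labels. A short sketch follows; the land cover classes and label vectors are toy values, not the study's data:

```python
import numpy as np

def accuracy_report(reference, classified, n_classes):
    """Error matrix, overall accuracy, and kappa coefficient of
    agreement, the three measures used in the assessment above."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference, classified):
        m[r, c] += 1                       # rows: reference, cols: map
    n = m.sum()
    overall = np.trace(m) / n
    # Chance agreement expected from the row/column marginals.
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2
    kappa = (overall - expected) / (1 - expected)
    return m, overall, kappa

# Toy example: 0 = forest, 1 = grassland, 2 = water
ref = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]   # from aerial video interpretation
cls = [0, 0, 1, 2, 2, 2, 0, 1, 1, 0]   # from the Landsat-TM classification
matrix, oa, kappa = accuracy_report(ref, cls, 3)
print(matrix, f"overall={oa:.2f}", f"kappa={kappa:.2f}")
```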


Automatic Video Genre Identification Method in MPEG compressed domain

  • Kim, Tae-Hee;Lee, Woong-Hee;Jeong, Dong-Seok
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1527-1530
    • /
    • 2002
  • A video summary is one of the tools that can provide fast and effective browsing of a lengthy video. A video summary consists of many key-frames, which may be defined differently depending on the genre the video belongs to. Consequently, a video summary constructed in a uniform manner may lead to inadequate results, so identifying the video genre is an important first step in generating a meaningful video summary. We propose a new method that can classify the genre of video data in the MPEG compressed bit-stream domain. Since the proposed method operates directly on the compressed bit-stream without decoding frames, it has merits such as simple calculation and short processing time. Only the visual information is utilized, through spatial-temporal analysis, to classify the video genre. Experiments are done for six genres of video: Cartoon, Commercial, Music Video, News, Sports, and Talk Show. Experimental results show more than 90% accuracy in genre classification for well-structured video data such as Talk Show and Sports.
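
Since the paper's exact compressed-domain features are not spelled out in the abstract, the following is only an illustrative sketch: a rule-based genre decision over coarse spatial-temporal statistics (shot rate, motion activity) of the kind obtainable from an MPEG bit-stream without full decoding. All thresholds are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class ClipStats:
    """Coarse statistics computable from the compressed stream; the
    paper's actual features are not reproduced, these are stand-ins."""
    shot_rate: float        # detected shot changes per minute
    motion_activity: float  # mean motion-vector magnitude
    motion_variance: float  # temporal variance of motion activity

def classify_genre(stats: ClipStats) -> str:
    """Illustrative rule-based decision over spatial-temporal cues.
    Thresholds are invented for the sketch, not taken from the paper."""
    if stats.shot_rate < 2 and stats.motion_activity < 0.5:
        return "Talk Show"          # long static shots, little motion
    if stats.motion_activity > 3 and stats.motion_variance > 1:
        return "Sports"             # sustained, bursty motion
    if stats.shot_rate > 20:
        return "Music Video"        # rapid cutting
    return "News"

print(classify_genre(ClipStats(shot_rate=1.2, motion_activity=0.3,
                               motion_variance=0.1)))  # Talk Show
```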


Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.6-29
    • /
    • 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities are missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically locate images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) resulted in 99.17% and 97.14% Rank-1 recognition rates, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when some modalities are missing.
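
A minimal sketch of one modality-specific denoising auto-encoder, the feature-learning component described above. Layer sizes, the noise model, and the loss are assumptions, and the paper's networks are additionally supervised and sparse, which this sketch omits:

```python
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    """One modality-specific denoising auto-encoder; sizes are assumed."""
    def __init__(self, d_in=4096, d_hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x, noise_std=0.1):
        x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        code = self.encoder(x_noisy)                   # robust feature
        return self.decoder(code), code

dae = DenoisingAutoEncoder()
x = torch.randn(8, 4096)                 # e.g., flattened ear-image features
recon, feature = dae(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruct the clean input
loss.backward()
print(feature.shape)                     # torch.Size([8, 512])
```

One such auto-encoder per modality, followed by per-modality sparse classifiers and score-level fusion of whichever modalities are detected, matches the three-component structure the abstract describes.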

Frontal Face Video Analysis for Detecting Fatigue States

  • Cha, Simyeong;Ha, Jongwoo;Yoon, Soungwoong;Ahn, Chang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.6
    • /
    • pp.43-52
    • /
    • 2022
  • Since we can sense when somebody feels fatigued, fatigue should be detectable by sensing human biometric signals. However, most research on assessing fatigue focuses on diagnosing disease-level fatigue. In this study, we adapt quantitative analysis approaches to estimating this qualitative state and propose video analysis models for measuring fatigue. The three proposed deep-learning-based classification models selectively include stages of video analysis, namely object detection, feature extraction, and time-series frame analysis, to evaluate each stage's effect on discriminating the state of fatigue. Using frontal face videos collected from various fatigue situations, our CNN model shows 0.67 accuracy, empirically demonstrating that the video analysis models can meaningfully detect fatigue states. We also suggest how to adapt the models when training and validating video data for fatigue classification.
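
A compact sketch of the pipeline's shape, combining a per-frame CNN feature extractor with time-series frame analysis (here an LSTM); the architecture and sizes are placeholders, not any of the paper's three models:

```python
import torch
import torch.nn as nn

class FatigueClassifier(nn.Module):
    """Per-frame CNN features followed by time-series analysis;
    layer sizes and depth are placeholders, not the paper's models."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(            # tiny frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, frames):               # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)           # last hidden state
        return self.head(h[-1])               # fatigued vs. not fatigued

clip = torch.randn(2, 16, 3, 64, 64)          # 2 clips of 16 face frames
print(FatigueClassifier()(clip).shape)        # torch.Size([2, 2])
```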

Classification of Pornographic Video Using Multiple Audio Features (다중 오디오 특징을 이용한 유해 동영상의 판별)

  • Kim, Jung-Soo;Chung, Myung-Bum;Sung, Bo-Kyung;Kwon, Jin-Man;Koo, Kwang-Hyo;Ko, Il-Ju
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.522-525
    • /
    • 2009
  • This paper proposes a content-based method for classifying pornographic video, which has become a serious social problem as a negative side effect of the internet. Audio data are used to extract features from the video; the audio features used in this paper are the frequency spectrum, autocorrelation, and MFCC. Sounds that could indicate harmful content are extracted, and a video is classified as pornographic by measuring what percentage of the video's entire audio track corresponds to such sounds. In experiments on the proposed method, classification performance was measured for each feature individually and compared with using multiple features together; using multiple audio features yielded better results than extracting and using only a single audio feature.
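
The decision rule described, flagging a video when harmful-sounding audio makes up a sufficient share of the whole track, could be sketched as follows. `classify_frame` stands in for a trained per-frame classifier, `librosa` is assumed for MFCC extraction, and the feature dimensions, frame length, and threshold are arbitrary choices:

```python
import numpy as np
import librosa  # assumed available for audio loading and MFCCs

def harmful_ratio(path, classify_frame, frame_sec=1.0):
    """Split a video's audio track into frames, extract the three
    features (spectrum, autocorrelation, MFCC), and return the
    fraction of frames the supplied classifier flags as harmful."""
    y, sr = librosa.load(path, sr=16000)
    hop = int(frame_sec * sr)
    flagged = total = 0
    for start in range(0, len(y) - hop, hop):
        seg = y[start:start + hop]
        spectrum = np.abs(np.fft.rfft(seg))
        autocorr = np.correlate(seg, seg, mode="full")[len(seg) - 1:]
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13).mean(axis=1)
        features = np.concatenate([spectrum[:64], autocorr[:64], mfcc])
        flagged += int(classify_frame(features))
        total += 1
    return flagged / max(total, 1)

# A video is then labeled pornographic if the ratio exceeds a threshold:
# is_harmful = harmful_ratio("clip.wav", model.predict) > 0.3
```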


A Study on Classification of Pickpocket in Video (비디오에서 소매치기의 분류에 관한 연구)

  • Rhee, Yang-Won;Shin, Kwang-Seong
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.7
    • /
    • pp.95-100
    • /
    • 2012
  • Currently, crimes are becoming increasingly frequent, and criminal techniques increasingly sophisticated. Pickpocketing, a crime of theft, has traditionally occurred in the most crowded or congested places, but nowadays it occurs more commonly in sparsely populated and secluded places. In this paper, we investigate the techniques and types of pickpocketing and classify pickpocketing scenes in video into standing, sitting, and lying postures, so that the classified video can be submitted as forensic evidence against pickpockets. This work is needed in order to prevent and cope with pickpocketing crimes.