• Title/Summary/Keyword: Video Classification

Implementation of Sports Video Clip Extraction Based on MobileNetV3 Transfer Learning (MobileNetV3 전이학습 기반 스포츠 비디오 클립 추출 구현)

  • YU, LI
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.5 / pp.897-904 / 2022
  • Sports video is a critical information resource. High-precision extraction of effective segments from sports video helps coaches analyze players' actions and enables users to appreciate a player's hitting action more intuitively. To address the shortcomings of current sports video clip extraction, namely strong subjectivity, heavy workload, and low efficiency, a classification method for sports video clips based on MobileNetV3 transfer learning is proposed to save user time. Experiments evaluate the effectiveness of the segment extraction: 97.0% of the extracted segments are effective, indicating good extraction results and laying the foundation for the subsequent construction of a badminton action metadata video dataset.
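
A minimal transfer-learning sketch in the spirit of this approach, written in PyTorch/torchvision (the paper does not state its framework; the frozen backbone, the 224x224 input, and the two-class effective/ineffective head are assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed: effective vs. ineffective clip frames

# Load an ImageNet-pretrained MobileNetV3 and freeze its feature extractor.
weights = models.MobileNet_V3_Small_Weights.DEFAULT
model = models.mobilenet_v3_small(weights=weights)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the classifier head for the new task.
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(frames, labels):
    """One fine-tuning step on a batch of (B, 3, 224, 224) frame tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the pretrained backbone keeps fine-tuning cheap, which matches MobileNetV3's role as a lightweight model.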

Rainfall Recognition from Road Surveillance Videos Using TSN (TSN을 이용한 도로 감시 카메라 영상의 강우량 인식 방법)

  • Li, Zhun;Hyeon, Jonghwan;Choi, Ho-Jin
    • Journal of Korean Society for Atmospheric Environment / v.34 no.5 / pp.735-747 / 2018
  • Rainfall depth is important meteorological information. Generally, rainfall data with high spatial resolution, such as road-level rainfall data, are more beneficial; however, it is expensive to set up enough Automatic Weather Systems to obtain road-level measurements. In this paper, we propose using deep learning to recognize rainfall depth from road surveillance videos. To achieve this goal, we collect a new video dataset and propose a procedure for calculating refined rainfall depth from the original meteorological data. We also propose utilizing the differential frame, as well as the optical flow image, for better recognition of rainfall depth. Under the Temporal Segment Networks framework, the experimental results show that the combination of the video frame and the differential frame is a superior solution for rainfall depth recognition. The final model achieves high performance on the single-location low-sensitivity classification task and reasonable accuracy on the higher-sensitivity classification task in both the single-location and multi-location cases.
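
A small sketch of the differential-frame modality described above, assuming OpenCV for decoding; taking the difference of strictly consecutive frames is a simplification:

```python
import cv2

def differential_frames(video_path):
    """Yield the absolute difference of consecutive frames; in the paper
    this modality is combined with plain RGB frames under TSN."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    while ok:
        ok, curr = cap.read()
        if not ok:
            break
        yield cv2.absdiff(curr, prev)  # pixel-wise |curr - prev|
        prev = curr
    cap.release()
```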

Design and Implementation of a Real-Time Face Detection System (실시간 얼굴 검출 시스템 설계 및 구현)

  • Jung Sung-Tae;Lee Ho-Geun
    • Journal of Korea Multimedia Society / v.8 no.8 / pp.1057-1068 / 2005
  • This paper proposes a real-time face detection system that detects multiple faces in low-resolution video, such as web-camera video. First, it finds face region candidates by using an AdaBoost-based object detection method, which selects a small number of critical features from a larger set. Next, it generates a reduced feature vector for each face region candidate by using principal component analysis (PCA). Finally, it classifies each candidate as face or non-face by using SVM (Support Vector Machine) based binary classification. According to the experimental results, the proposed method achieves real-time face detection on low-resolution video. It also achieves a lower false detection rate than existing methods thanks to the PCA and SVM based face classification step.
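
A pipeline sketch of the three stages, with OpenCV's Haar cascade standing in for the AdaBoost detector; the patch size, PCA dimensionality, SVM kernel, and placeholder training data are assumptions:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Stage 1: AdaBoost-based candidate detection (OpenCV Haar cascade).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Stages 2-3: PCA reduction + SVM face/non-face verification. Fitted here
# on placeholder data so the sketch runs; in practice they are trained on
# labeled 24x24 face/non-face patches.
rng = np.random.default_rng(0)
X_dummy = rng.random((100, 24 * 24), dtype=np.float32)
y_dummy = rng.integers(0, 2, size=100)
pca = PCA(n_components=32).fit(X_dummy)
svm = SVC(kernel="rbf").fit(pca.transform(X_dummy), y_dummy)

def detect_faces(gray_frame):
    """Return candidate boxes that the PCA+SVM stage accepts as faces."""
    accepted = []
    for (x, y, w, h) in cascade.detectMultiScale(gray_frame, 1.1, 3):
        patch = cv2.resize(gray_frame[y:y + h, x:x + w], (24, 24))
        feature = pca.transform(patch.reshape(1, -1).astype(np.float32))
        if svm.predict(feature)[0] == 1:  # 1 = face
            accepted.append((x, y, w, h))
    return accepted
```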

Fast Algorithm for Intra Prediction of HEVC Using Adaptive Decision Trees

  • Zheng, Xing;Zhao, Yao;Bai, Huihui;Lin, Chunyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.7 / pp.3286-3300 / 2016
  • The High Efficiency Video Coding (HEVC) standard, the latest coding standard, introduces compression structures that clearly improve on its predecessor, Advanced Video Coding (H.264/AVC), and offers better encoding performance. However, it also brings enormous computational complexity, which makes real-time implementation considerably difficult. In this paper, a fast, machine-learning-based partitioning method is proposed that searches for the best splitting structures for intra prediction. In view of video texture characteristics, we choose the entropy of Gray-Scale Difference Statistics (GDS) and the minimum of the Sum of Absolute Transformed Differences (SATD) as the two features, which balance computational complexity against classification performance. Based on the selected features, adaptive decision trees are built offline for Coding Units (CUs) of different sizes. In this way, CU partitioning is resolved as a binary classification problem. Experimental results show that the proposed algorithm saves over 34% of encoding time on average, with a negligible Bjontegaard Delta (BD)-rate increase.
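
A sketch of the CU split decision as binary classification over the two features; the exact GDS entropy definition (here, horizontal neighbor differences), the toy training data, and the tree depth are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def gds_entropy(block):
    """Entropy of the gray-scale difference statistics (GDS) of a block:
    histogram of horizontal neighbor differences, then Shannon entropy."""
    diffs = np.abs(np.diff(block.astype(np.int16), axis=1))
    hist = np.bincount(diffs.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Features per CU: [GDS entropy, minimum SATD over candidate modes];
# labels: 1 = split the CU, 0 = do not split (from offline encodes).
X_train = np.array([[5.1, 1200.0], [2.3, 300.0], [6.0, 2100.0], [1.8, 150.0]])
y_train = np.array([1, 0, 1, 0])  # toy placeholder data

tree = DecisionTreeClassifier(max_depth=3)  # one tree per CU size in the paper
tree.fit(X_train, y_train)

cu_features = np.array([[4.7, 900.0]])
print("split CU" if tree.predict(cu_features)[0] else "keep CU")
```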

TsCNNs-Based Inappropriate Image and Video Detection System for a Social Network

  • Kim, Youngsoo;Kim, Taehong;Yoo, Seong-eun
    • Journal of Information Processing Systems / v.18 no.5 / pp.677-687 / 2022
  • We propose a detection algorithm based on tree-structured convolutional neural networks (TsCNNs) that finds pornography, propaganda, and other inappropriate content on a social media network. The algorithm sequentially applies the typical convolutional neural network (CNN) algorithm in a tree-like structure to minimize classification errors among similar classes and thus improve accuracy. We implemented the detection system and conducted experiments on a dataset composed of 6 ordinary classes and 11 inappropriate classes collected from the Korean military social network. Each model of the proposed algorithm was trained, and the performance was then evaluated on the identified images and videos. Experimental results with 20,005 new images showed that the overall image identification accuracy reached 99.51%, and the algorithm reduced the identification errors of the typical CNN algorithm by 64.87%. By reducing false alarms in video identification for this domain, the TsCNNs achieved their best performance, 98.11%, with a 10-minute frame-sampling interval. This indicates that classification with proper sampling reduces both the computational burden and false alarms.
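
A structural sketch of tree-level routing in a TsCNN-style classifier; the stand-in networks, the coarse ordinary/inappropriate split at the root, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

def tiny_cnn(num_classes):
    """Stand-in CNN; each tree node would be a full CNN in practice."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(16, num_classes))

root = tiny_cnn(2)            # level 1: ordinary vs. inappropriate
ordinary = tiny_cnn(6)        # level 2: 6 ordinary classes
inappropriate = tiny_cnn(11)  # level 2: 11 inappropriate classes

@torch.no_grad()
def classify(image):
    """Route the image down the tree: coarse decision first, then the
    specialized sub-network that separates the similar fine classes."""
    branch = root(image).argmax(dim=1).item()
    leaf = inappropriate if branch == 1 else ordinary
    return branch, leaf(image).argmax(dim=1).item()

x = torch.randn(1, 3, 64, 64)  # dummy input image tensor
print(classify(x))
```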

Ontology and Sequential Rule Based Streaming Media Event Recognition (온톨로지 및 순서 규칙 기반 대용량 스트리밍 미디어 이벤트 인지)

  • Soh, Chi-Seung;Park, Hyun-Kyu;Park, Young-Tack
    • Journal of KIISE / v.43 no.4 / pp.470-479 / 2016
  • As the amount of media data of various types, such as UCC (User Created Content), increases, research is actively being carried out in many different fields to provide meaningful media services. Among these studies, a semantic-web-based media classification approach has been proposed; however, it has limitations for video classification because its underlying ontology is derived from meta-information such as video tags and titles. In this paper, we define the objects recognized in a video and the activities composed of those objects within a shot, and introduce a reasoning approach based on description logic. We define sequential rules over the sequence of shots in a video and describe how to classify the video with them. To process the large and growing amount of media data, we utilize Spark Streaming, a distributed in-memory big data processing framework, and describe how to classify media data in parallel. To evaluate the efficiency of the proposed approach, we conducted an experiment using a large media ontology extracted from YouTube videos.
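
A sketch of sequential-rule matching over per-shot activities; the rule contents are purely illustrative, and the description-logic reasoning and distributed Spark Streaming machinery are omitted:

```python
from typing import List, Tuple

# A sequential rule: if these activities occur in order across the shots
# of a video, recognize the named event (rule content is illustrative).
SEQUENTIAL_RULES: List[Tuple[str, List[str]]] = [
    ("goal_scene", ["player_shoots", "ball_enters_goal", "crowd_cheers"]),
]

def matches_in_order(pattern, shot_activities):
    """True if the pattern occurs as a subsequence of the shot activities."""
    it = iter(shot_activities)
    return all(step in it for step in pattern)

def recognize_events(shot_activities):
    return [event for event, pattern in SEQUENTIAL_RULES
            if matches_in_order(pattern, shot_activities)]

shots = ["player_shoots", "replay", "ball_enters_goal", "crowd_cheers"]
print(recognize_events(shots))  # ['goal_scene']
```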

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision because of its many prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, the limitations of hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Compared with supervised learning methods, our method reduces data-labeling time for the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
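
A sketch of one round of an active semi-supervised split over unlabeled clips, the core of the hybrid scheme described above; the confidence threshold and query budget are assumptions, and the detection backbone is omitted:

```python
import numpy as np

CONFIDENCE_HIGH = 0.95  # assumed pseudo-label threshold
QUERY_BUDGET = 10       # assumed number of human annotations per round

def split_unlabeled(probs):
    """probs: (N, C) class probabilities from the current model on N
    unlabeled clips. Confident clips get pseudo-labels (semi-supervised);
    the least confident clips are queried for human labels (active)."""
    confidence = probs.max(axis=1)
    pseudo_idx = np.where(confidence >= CONFIDENCE_HIGH)[0]
    query_idx = np.argsort(confidence)[:QUERY_BUDGET]
    pseudo_labels = probs[pseudo_idx].argmax(axis=1)
    return pseudo_idx, pseudo_labels, query_idx

probs = np.random.dirichlet(np.ones(5), size=100)  # 100 clips, 5 actions
pseudo_idx, pseudo_labels, query_idx = split_unlabeled(probs)
print(len(pseudo_idx), "pseudo-labeled;", len(query_idx), "queried")
```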

Dimension-Reduced Audio Spectrum Projection Features for Classifying Video Sound Clips

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.25 no.3E / pp.89-94 / 2006
  • For audio indexing and targeted search of specific audio or the corresponding visual content, the MPEG-7 standard has adopted a sound classification framework in which dimension-reduced Audio Spectrum Projection (ASP) features are used to train continuous hidden Markov models (HMMs) for classifying various sounds. MPEG-7 employs Principal Component Analysis (PCA) or Independent Component Analysis (ICA) for the dimension reduction. Other well-established techniques include Non-negative Matrix Factorization (NMF), Linear Discriminant Analysis (LDA), and the Discrete Cosine Transform (DCT). In this paper we compare the performance of the different dimension reduction methods, with Gaussian mixture models (GMMs) and HMMs, in classifying video sound clips.
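
A sketch of dimension-reduced spectral features with per-class GMMs, using scikit-learn in place of the MPEG-7 ASP toolchain; the feature dimension, mixture count, and toy data are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

N_COMPONENTS = 12  # assumed reduced feature dimension
N_MIXTURES = 4     # assumed Gaussians per sound class

def train_class_models(features_per_class):
    """features_per_class: {label: (num_frames, num_bins) spectra}.
    Fit one shared PCA projection and one GMM per sound class."""
    all_frames = np.vstack(list(features_per_class.values()))
    pca = PCA(n_components=N_COMPONENTS).fit(all_frames)
    gmms = {label: GaussianMixture(N_MIXTURES).fit(pca.transform(frames))
            for label, frames in features_per_class.items()}
    return pca, gmms

def classify_clip(pca, gmms, frames):
    """Pick the class whose GMM gives the highest average log-likelihood."""
    reduced = pca.transform(frames)
    return max(gmms, key=lambda label: gmms[label].score(reduced))

rng = np.random.default_rng(0)
data = {"speech": rng.random((200, 64)), "music": rng.random((200, 64))}
pca, gmms = train_class_models(data)
print(classify_clip(pca, gmms, rng.random((50, 64))))
```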

Classification of Operating State of Screw Decanter using Video-Based Optical Flow and LSTM Classifier

  • Lee, Sang-Hyeop;Wesonga, Sheilla;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence / v.25 no.2_1 / pp.169-176 / 2022
  • Prognostics and health management (PHM) is spreading throughout industry; one trending issue is detecting abnormal conditions of the decanter centrifuge in water treatment facilities. Wastewater treatment produces corrosive gas, which causes the attached sensors to fail. This leads to frequent sensor replacement and requires visual inspection by a highly qualified manager when important parts such as bearings and screws are replaced. In this paper, we propose anomaly detection that measures the vibration of the decanter centrifuge from video camera images. The vibration of the screw decanter is measured by applying the optical flow technique: the amount of movement change of each corresponding pixel is measured and fed into an LSTM model. As a result, the normal/warning/dangerous state can be detected by LSTM classification. In future work, we aim to gather more abnormal data to further increase accuracy, so that the method can be utilized in the field.
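
A sketch of the optical-flow-to-LSTM pipeline, assuming OpenCV's Farneback dense flow as the flow estimator and an assumed network size; the paper's exact per-pixel feature construction may differ:

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def flow_magnitudes(video_path):
    """Mean dense optical-flow magnitude per frame pair (vibration proxy)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    mags = []
    while True:
        ok, curr = cap.read()
        if not ok:
            break
        curr = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev = curr
    cap.release()
    return torch.tensor(mags).view(1, -1, 1)  # (batch, time, feature)

class StateClassifier(nn.Module):
    """LSTM over the magnitude sequence -> normal/warning/dangerous."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(1, 32, batch_first=True)
        self.head = nn.Linear(32, 3)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

# Usage (hypothetical file name):
# logits = StateClassifier()(flow_magnitudes("decanter.mp4"))
```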

Extensible Hierarchical Method of Detecting Interactive Actions for Video Understanding

  • Moon, Jinyoung;Jin, Junho;Kwon, Yongjin;Kang, Kyuchang;Park, Jongyoul;Park, Kyoung
    • ETRI Journal / v.39 no.4 / pp.502-513 / 2017
  • For video understanding, namely analyzing who did what in a video, actions along with objects are the primary elements. Most studies on actions have handled recognition problems for well-trimmed videos and focused on enhancing classification performance. However, action detection, which includes localization as well as recognition, is required because, in general, actions intersect in time and space. In addition, most studies have not considered extensibility to actions added after a model has been trained. Therefore, this paper proposes an extensible hierarchical method for detecting generic actions, which combine object movements and the spatial relations between two objects, and inherited actions, which are determined from the related objects through an ontology- and rule-based methodology. The hierarchical design of the method enables it to detect any interactive action based on the spatial relations between two objects. Using object information, the method achieves an F-measure of 90.27%. Moreover, the paper describes the extensibility of the method to a new action contained in a video from a domain different from that of the dataset used.
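
A sketch of deriving a generic action from the spatial relation between two object tracks, the lowest layer of the hierarchy described above; the distance threshold, the rule set, and the action names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)

def distance(a: Box, b: Box) -> float:
    (ax, ay), (bx, by) = a.center(), b.center()
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def generic_action(track_a, track_b, near=50.0):
    """Combine object movement with the spatial relation between two
    object tracks (lists of per-frame boxes) into a generic action."""
    d_start = distance(track_a[0], track_b[0])
    d_end = distance(track_a[-1], track_b[-1])
    if d_end < near <= d_start:
        return "approach"
    if d_start < near <= d_end:
        return "depart"
    return "together" if d_end < near else "apart"

a = [Box(0, 0, 10, 10), Box(30, 0, 10, 10)]
b = [Box(100, 0, 10, 10), Box(40, 0, 10, 10)]
print(generic_action(a, b))  # "approach": the two objects end up near
```

Inherited actions (e.g., a person approaching a car versus another person) would then be resolved from the generic action plus the object classes via the ontology and rules.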