• Title/Summary/Keyword: Video recognition

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • Journal of the Korea Society of Computer and Information, v.24 no.6, pp.21-28, 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep networks have been deployed across all fields of computer vision. Action recognition, an important branch of research on human perception and computer vision systems, has attracted increasing attention. It is a challenging task because of the complexity of human movement: the same action can look quite different across individuals. A human action exists as a sequence of image frames in a video, so action recognition requires more computational power than processing static images, and a plain CNN alone cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. For video action classification in particular, adding an attention model helps the network focus on motion features and improves performance; it also intuitively shows which parts the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose an attention-based 3D dense convolutional network (ADD-Net) for recognizing human actions in video.
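The abstract names the two ingredients of ADD-Net, 3D dense connectivity and an attention mechanism, without giving the architecture. As a rough illustration only, the following PyTorch sketch composes a tiny 3D dense block with a squeeze-and-excitation-style channel attention gate; the layer sizes, growth rate, and attention placement are assumptions, not the paper's design.

```python
# Minimal sketch of "3D dense block + channel attention"; NOT the paper's ADD-Net.
import torch
import torch.nn as nn

class DenseLayer3D(nn.Module):
    def __init__(self, in_ch, growth):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm3d(in_ch), nn.ReLU(inplace=True),
            nn.Conv3d(in_ch, growth, kernel_size=3, padding=1, bias=False),
        )
    def forward(self, x):
        return torch.cat([x, self.block(x)], dim=1)  # dense connectivity

class ChannelAttention3D(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )
    def forward(self, x):
        w = x.mean(dim=(2, 3, 4))                 # global average pool over T,H,W
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)
        return x * w                              # reweight channels

# Toy forward pass on a clip of 8 RGB frames at 32x32.
block = nn.Sequential(DenseLayer3D(3, 16), DenseLayer3D(19, 16), ChannelAttention3D(35))
clip = torch.randn(2, 3, 8, 32, 32)               # (batch, channels, T, H, W)
print(block(clip).shape)                          # torch.Size([2, 35, 8, 32, 32])
```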

A Logo Transition Detection Method for Opaque and Semi-Transparent TV Logo Recognition in Video (비디오에서 불투명 및 반투명 TV 로고 인식을 위한 로고 전이 검출 방법)

  • Roh, Myung-Cheol;Kang, Seung-Yeon;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications, v.35 no.12, pp.753-763, 2008
  • The amount of UCC (user-created content) has been increasing rapidly and is associated with serious copyright problems. Automatic logo detection in video is an efficient means of addressing them. However, logos have varying characteristics, which makes logo detection and recognition very difficult. In particular, a video compiled from several source contents contains frequent logo transitions, which disrupt accurate logo-based video segmentation. This paper therefore proposes an accurate logo transition detection method for recognizing logos in digital video contents. The proposed method accurately segments a video according to its logos and efficiently recognizes various types of logo, including opaque and semi-transparent ones. The experimental results demonstrate the effectiveness of the method for logo detection and logo-based video segmentation.
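The abstract does not spell out the detection mechanics, so the sketch below illustrates only the core notion of a logo transition: the logo region stops matching the current logo template. The fixed top-left region, the normalized-cross-correlation similarity, and the threshold are all illustrative assumptions.

```python
# Toy logo-transition detector; the paper's method is more sophisticated.
import numpy as np

def logo_region(frame, size=48):
    return frame[:size, :size].astype(np.float64)  # assumed top-left logo corner

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
    return float((a * b).sum() / denom)            # normalized cross-correlation

def detect_transitions(frames, threshold=0.7):
    """Return frame indices where the logo appears to change."""
    transitions, template = [], logo_region(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        region = logo_region(frame)
        if ncc(template, region) < threshold:      # current logo no longer matches
            transitions.append(i)
            template = region                      # adopt the new logo as template
    return transitions

# Toy video: the "logo" corner switches pattern at frame 5.
logo_a = ((np.indices((48, 48)).sum(0) % 2) * 255).astype(np.uint8)  # checkerboard
logo_b = 255 - logo_a
video = [np.zeros((120, 160), np.uint8) for _ in range(10)]
for i, f in enumerate(video):
    f[:48, :48] = logo_a if i < 5 else logo_b
print(detect_transitions(video))                   # [5]
```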

Dual-Stream Fusion and Graph Convolutional Network for Skeleton-Based Action Recognition

  • Hu, Zeyuan;Feng, Yiran;Lee, Eung-Joo
    • Journal of Korea Multimedia Society, v.24 no.3, pp.423-430, 2021
  • Graph convolutional networks (GCNs) have achieved outstanding performance on skeleton-based action recognition. However, several problems remain in existing GCN-based methods; in particular, the low recognition rate caused by relying on a single input modality has not been effectively solved. In this article, we propose a dual-stream fusion method that combines video data and skeleton data. Two networks process the skeleton data and the video data respectively, and the output probabilities of the two streams are fused to combine their information. Experiments on two large datasets, Kinetics and the NTU RGB+D Human Action Dataset, show that the proposed method achieves state-of-the-art performance, improving recognition accuracy over traditional single-stream methods.
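The abstract describes fusing the output probabilities of a skeleton stream and a video stream. A minimal sketch of such late fusion follows; the equal weighting is an assumption, since the paper's exact fusion rule is not given here.

```python
# Late fusion of two streams' class probabilities; weighting is an assumption.
import numpy as np

def fuse_predictions(p_skeleton, p_video, alpha=0.5):
    """Weighted sum of per-class probabilities from the two streams."""
    fused = alpha * p_skeleton + (1.0 - alpha) * p_video
    return fused / fused.sum(axis=-1, keepdims=True)

# Toy example with 3 action classes: the streams disagree, fusion arbitrates.
p_skel = np.array([0.70, 0.20, 0.10])   # skeleton stream favors class 0
p_vid = np.array([0.30, 0.60, 0.10])    # video stream favors class 1
print(fuse_predictions(p_skel, p_vid).argmax())  # 0 with alpha=0.5
```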

A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition (얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구)

  • Jo, In Gu;Kong, Younwoo;Jeon, Soyi;Cho, Seoyeong;Lee, DoHoon
    • Journal of Korea Multimedia Society, v.25 no.2, pp.215-220, 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted, and model performance keeps improving; however, more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted their level of interest using the system. Experimental results showed that certain emotions are strongly related to the popularity of videos and that the interest prediction model predicts the level of interest with high accuracy.
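Neither the sentiment model nor the interest predictor is specified in the abstract, so the sketch below shows only the shape of such a pipeline: per-frame emotion probabilities are pooled into a fixed-length viewer pattern that a classifier maps to an interest label. The emotion set, pooling, classifier, and synthetic data are all assumptions.

```python
# Pipeline-shape sketch: frame-level emotions -> viewer pattern -> interest.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["happy", "surprise", "neutral", "bored"]  # hypothetical label set

def viewer_pattern(frame_probs):
    """Pool a (frames x emotions) probability matrix into one feature vector."""
    return np.concatenate([frame_probs.mean(0), frame_probs.std(0)])

rng = np.random.default_rng(0)
# Synthetic viewers: "interested" viewers show more happy/surprise mass.
interested = rng.dirichlet([6, 5, 2, 1], size=(30, 50))  # 30 viewers x 50 frames
bored = rng.dirichlet([1, 1, 4, 6], size=(30, 50))
X = np.array([viewer_pattern(v) for v in np.concatenate([interested, bored])])
y = np.array([1] * 30 + [0] * 30)

clf = LogisticRegression(max_iter=200).fit(X, y)
print(clf.score(X, y))   # near 1.0 on this easy synthetic data
```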

A Study on Efficient Learning Units for Behavior-Recognition of People in Video (비디오에서 동체의 행위인지를 위한 효율적 학습 단위에 관한 연구)

  • Kwon, Ick-Hwan;Hadjer, Boubenna;Lee, Dohoon
    • Journal of Korea Multimedia Society, v.20 no.2, pp.196-204, 2017
  • An intelligent video surveillance system recognizes behavior by analyzing the movement pattern of an object of interest in the frames captured by a camera. Detecting specific behaviors in a crowd has become a critical problem, for example in the event of a terrorist attack, yet recognizing such behaviors remains an important and difficult problem in computer vision. With the spread of big data, machine learning, and data mining techniques, the amount of video from CCTV, smartphones, and drones has increased dramatically. In this paper, we propose a multiple-sliding-window method that treats a cumulative change across frames as a single learning unit in order to improve recognition accuracy. The experimental results demonstrate that the method provides robust and efficient learning units for classifying the target behaviors.
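As a rough reading of the abstract, a multiple-sliding-window scheme runs windows of several lengths over the per-frame features so a cumulative change can be captured as one learning unit. The window sizes, stride, and mean-pooled feature in this sketch are illustrative assumptions.

```python
# Multi-scale sliding windows over per-frame features; sizes/stride assumed.
import numpy as np

def multi_sliding_windows(frame_features, sizes=(8, 16, 32), stride=4):
    """Yield (start, size, pooled_feature) for several window lengths at once."""
    n = len(frame_features)
    for size in sizes:
        for start in range(0, n - size + 1, stride):
            window = frame_features[start:start + size]
            yield start, size, window.mean(axis=0)   # one learning unit

# Toy per-frame features: 100 frames x 5 dims.
feats = np.random.default_rng(0).random((100, 5))
units = list(multi_sliding_windows(feats))
print(len(units), units[0][2].shape)  # number of learning units, (5,)
```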

Extraction of User Preference for Video Stimuli Using EEG-Based User Responses

  • Moon, Jinyoung;Kim, Youngrae;Lee, Hyungjik;Bae, Changseok;Yoon, Wan Chul
    • ETRI Journal, v.35 no.6, pp.1105-1114, 2013
  • Owing to the large number of video programs available, a method for accessing preferred videos efficiently through personalized video summaries and clips is needed. Automatic recognition of user states while viewing a video is essential for extracting meaningful video segments. Although there have been many studies on emotion recognition using various user responses, electroencephalogram (EEG)-based research on video preference recognition is at a very early stage. This paper proposes classification models based on linear and nonlinear classifiers using EEG features of band power (BP) values and asymmetry scores for four preference classes. The quadratic-discriminant-analysis-based model using BP features achieves a classification accuracy of 97.39% (±0.73%), and the models based on the other nonlinear classifiers using BP features achieve accuracies above 96%, surpassing previous work that addressed only binary preference classification. These results show that the proposed approach is accurate enough to be employed in personalized video segmentation.
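The abstract's best model is quadratic discriminant analysis over band-power (BP) features. The sketch below shows one standard way to obtain such features, Welch power spectral density averaged within bands, and feeds them to scikit-learn's QDA; the bands, channel count, and synthetic trials are assumptions, not the paper's protocol.

```python
# Band-power EEG features via Welch PSD, classified with QDA; all data synthetic.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed

def band_powers(eeg, fs=128):
    """eeg: (channels, samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))   # mean power in band
    return np.concatenate(feats)

# Synthetic 2-class demo: class 1 has a stronger 10 Hz (alpha) oscillation.
rng = np.random.default_rng(1)
def trial(alpha_amp):
    t = np.arange(4 * 128) / 128                   # 4 s, 4 channels
    return rng.normal(0, 1, (4, t.size)) + alpha_amp * np.sin(2 * np.pi * 10 * t)

X = np.array([band_powers(trial(a)) for a in [0.2] * 20 + [2.0] * 20])
y = np.array([0] * 20 + [1] * 20)
print(QuadraticDiscriminantAnalysis().fit(X, y).score(X, y))  # ~1.0 here
```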

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung;Kang, Hyun;Jung, Kee-Chul
    • Journal of Korea Multimedia Society, v.8 no.9, pp.1177-1186, 2005
  • In vision-based interfaces for video games, gestures serve as game commands in place of keyboard or mouse input. Such interfaces must tolerate unintentional movements and continuous gestures to feel natural to the user. To this end, this paper proposes a novel gesture-spotting method that combines spotting with recognition: it recognizes meaningful movements while concurrently separating unintentional movements from a given image sequence. We applied the method to recognizing upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show an average spotting rate of 93.36% on continuous gestures, confirming the method's potential for gesture-based computer game interfaces.
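The abstract couples spotting with recognition but gives no algorithmic detail. The simplest version of that coupling, sketched below, is a per-frame classifier with an explicit non-gesture class, so gesture segments fall out as runs of identical labels; the label stream and minimum run length are illustrative assumptions, not the paper's method.

```python
# Spotting-by-recognition in miniature: runs of gesture labels become segments.
NON_GESTURE = 0

def spot_gestures(frame_labels, min_len=3):
    """Group consecutive identical gesture labels into (label, start, end) runs."""
    segments, start = [], None
    labels = list(frame_labels) + [NON_GESTURE]     # sentinel flushes the last run
    for i, lab in enumerate(labels):
        if start is not None and lab != labels[start]:
            if labels[start] != NON_GESTURE and i - start >= min_len:
                segments.append((labels[start], start, i - 1))
            start = i
        if start is None:
            start = i
    return segments

# Per-frame classifier output: 0 = unintentional movement, 1/2 = gestures.
stream = [0, 0, 1, 1, 1, 1, 0, 2, 2, 2, 0, 1, 0]
print(spot_gestures(stream))  # [(1, 2, 5), (2, 7, 9)]
```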

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.3, pp.1390-1403, 2016
  • Facial expression recognition (FER) plays a significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based FER, depth cameras can be better candidates than RGB cameras: a person's identity cannot easily be recovered from distance-based depth video, so depth cameras also resolve some privacy issues that arise with RGB faces. A good FER system relies heavily on robust feature extraction as well as on its recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, Local Binary Pattern (LBP) features are extracted from the time-sequential depth faces and made more robust by Generalized Discriminant Analysis (GDA); the LBP-GDA features are then fed into Hidden Markov Models (HMMs) to train and recognize the different facial expressions. The proposed depth-based approach is compared with conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and it outperforms them with better recognition rates.
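The abstract's pipeline is LBP features, a GDA projection, then per-expression HMMs. In the sketch below, scikit-learn's LDA stands in for GDA (a stated simplification, since scikit-learn ships no kernel GDA) and hmmlearn provides one GaussianHMM per class; the toy depth frames and all hyperparameters are assumptions.

```python
# LBP -> (LDA as a stand-in for GDA) -> per-class HMM, on toy "depth" frames.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from hmmlearn.hmm import GaussianHMM

def lbp_features(seq, P=8, R=1):
    """Per-frame uniform-LBP histograms for one depth-face sequence."""
    hists = []
    for frame in seq:
        codes = local_binary_pattern(frame, P, R, method="uniform")
        h, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.array(hists)

def toy_seq(lab, rng, n=10):
    # Expression 0: smooth horizontal ramp; expression 1: high-frequency texture.
    base = ((np.indices((16, 16)).sum(0) % 2) * 180 if lab
            else np.tile(np.arange(0, 160, 10), (16, 1)))
    return [np.clip(base + rng.normal(0, 5, (16, 16)), 0, 255).astype(np.uint8)
            for _ in range(n)]

rng = np.random.default_rng(2)
labels = [0, 0, 0, 1, 1, 1]
seqs = [toy_seq(lab, rng) for lab in labels]

# LDA projection of the frame-level LBP histograms.
lda = LinearDiscriminantAnalysis().fit(np.vstack([lbp_features(s) for s in seqs]),
                                       np.repeat(labels, 10))

# One HMM per expression; classification picks the highest log-likelihood.
models = {}
for lab in (0, 1):
    Z = np.vstack([lda.transform(lbp_features(s))
                   for s, l in zip(seqs, labels) if l == lab])
    models[lab] = GaussianHMM(n_components=2, random_state=0).fit(Z, [10, 10, 10])

test = lda.transform(lbp_features(seqs[0]))
print(max(models, key=lambda lab: models[lab].score(test)))  # 0 expected
```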

Misclassified Samples based Hierarchical Cascaded Classifier for Video Face Recognition

  • Fan, Zheyi;Weng, Shuqin;Zeng, Yajun;Jiang, Jiao;Pang, Fengqian;Liu, Zhiwen
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.2, pp.785-804, 2017
  • Due to various factors such as pose, facial expression, and illumination, video-based face recognition often suffers from poor recognition accuracy and generalization ability, since the within-class scatter can even exceed the between-class scatter. We address this problem by proposing a hierarchical cascaded classifier for video face recognition, a multi-layer algorithm that accounts for misclassified samples and the samples similar to them. It decomposes into a single-classifier construction stage and a multi-layer design stage. In the construction stage, each classifier is created by clustering, with the number of classes computed by analyzing a distance tree. In the multi-layer design stage, a next layer is created for the misclassified and similar samples and cascaded onto the hierarchical classifier. Experiments on a database we collected ourselves show that the proposed classifier outperforms compared recognition algorithms such as neural networks and sparse representation.
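The abstract describes layers built for misclassified samples plus similar ones. The miniature cascade below keeps only that skeleton: each layer trains on what the previous layer got wrong, and at test time a sample near a decision boundary is handed to the next layer. The nearest-centroid layers and the margin rule are stand-ins, not the paper's clustering-based construction.

```python
# Miniature cascade: later layers specialize on earlier layers' mistakes.
import numpy as np
from sklearn.neighbors import NearestCentroid

def train_cascade(X, y, max_layers=3):
    """Each new layer is trained only on what the previous layer got wrong."""
    layers, Xl, yl = [], X, y
    for _ in range(max_layers):
        clf = NearestCentroid().fit(Xl, yl)
        layers.append(clf)
        wrong = clf.predict(Xl) != yl
        if wrong.sum() < 2 or len(np.unique(yl[wrong])) < 2:
            break                      # not enough hard samples for a new layer
        Xl, yl = Xl[wrong], yl[wrong]
    return layers

def predict_cascade(layers, x, margin=1.0):
    """Hand a sample to the next layer when it lies near a decision boundary,
    a crude stand-in for the paper's 'misclassified plus similar samples'."""
    for clf in layers[:-1]:
        d = np.sort(np.linalg.norm(clf.centroids_ - x, axis=1))
        if d[1] - d[0] > margin:       # confidently far from the runner-up class
            return clf.predict([x])[0]
    return layers[-1].predict([x])[0]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1.5, (40, 2)), rng.normal(3, 1.5, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
layers = train_cascade(X, y)
print(len(layers), predict_cascade(layers, np.array([3.0, 3.0])))
```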

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.2, pp.751-770, 2019
  • Action recognition is an essential task in computer vision because of its many prospective applications, such as security surveillance and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, the limitations of hand-crafted video features and the scarcity of benchmark datasets make multi-person action recognition in video challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interaction exist in the real world, which makes this a challenging task; in our experiments we consider a limited set of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments, one simple and one complex. We show that the trained model with the active semi-supervised learning architecture improves gradually: in the simple environment, on an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy reached 95.6%, and in the complex environment it reached 81%. Compared with supervised learning methods, our method reduces data-labeling time on the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
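Detached from the VGG16/Faster R-CNN backbone, the active semi-supervised loop the abstract describes can be sketched as: absorb confident pseudo-labels automatically and send the least confident samples to a human oracle. The classifier, thresholds, and toy data below are assumptions, not the paper's EHL implementation.

```python
# Active + semi-supervised loop in miniature; the detection backbone is omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

def hybrid_learn(X_l, y_l, X_u, oracle, conf=0.9, n_query=5, rounds=3):
    """Grow the labeled set with confident pseudo-labels (semi-supervised)
    and oracle answers for the least confident samples (active learning)."""
    pool = np.arange(len(X_u))
    clf = LogisticRegression(max_iter=500).fit(X_l, y_l)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(X_u[pool])
        top = proba.max(axis=1)
        sure = pool[top >= conf]                   # pseudo-label these
        ask = pool[np.argsort(top)[:n_query]]      # query a human for these
        taken = np.union1d(sure, ask)
        pseudo = {i: clf.predict(X_u[[i]])[0] for i in sure}
        X_l = np.vstack([X_l, X_u[taken]])
        y_l = np.concatenate([y_l, [pseudo[i] if i in pseudo else oracle(i)
                                    for i in taken]])
        pool = np.setdiff1d(pool, taken)
        clf = LogisticRegression(max_iter=500).fit(X_l, y_l)
    return clf

# Toy demo: 2-class blobs, the "oracle" simply reveals the true label.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
X_l, y_l = np.vstack([X[:5], X[100:105]]), np.array([0] * 5 + [1] * 5)
X_u = np.vstack([X[5:100], X[105:]])
y_u = np.concatenate([y[5:100], y[105:]])
clf = hybrid_learn(X_l, y_l, X_u, oracle=lambda i: y_u[i])
print(clf.score(X, y))
```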