• Title/Abstract/Keyword: Video Classification


An Effective Classification Method of Video Contents Using a Neural-Network

  • 이후형;전승철;박성한
    • Proceedings of the IEEK Conference / IEEK Summer Conference 2001 (4) / pp.109-112 / 2001
  • This paper proposes a method to classify different video contents using features of digital video. The classified video types are news, drama, show, sports, and talk programs. Features such as the number of intra-coded macroblocks and the motion vectors of P-pictures in the MPEG domain are used. The YCbCr frame difference is also employed as a classification measure, and the occurrences of cuts in a video are detected as a further measure. Finally, a three-layer back-propagation neural network is used to classify the video contents.

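A minimal sketch of the classifier stage described above, assuming scikit-learn's MLPClassifier stands in for the paper's network; the feature names, hidden-layer size, and data are placeholders, not the paper's values:

```python
# Sketch: three-layer back-propagation network over MPEG-domain features.
# Feature extraction (macroblock/motion-vector parsing) is assumed to be
# done elsewhere; X is a hypothetical (n_videos, 4) feature matrix.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

GENRES = ["news", "drama", "show", "sports", "talk"]

# Hypothetical per-video features: [mean intra-coded macroblock count,
# mean P-picture motion-vector magnitude, mean YCbCr frame difference,
# cut rate]. Random placeholders keep the sketch self-contained.
X = np.random.rand(100, 4)
y = np.random.randint(0, len(GENRES), 100)

# One hidden layer gives input -> hidden -> output, i.e. three layers in
# the paper's counting; the hidden size here is a guess.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,),
                                  solver="sgd", max_iter=2000))
clf.fit(X, y)
print(GENRES[int(clf.predict(X[:1])[0])])
```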

Chaotic Features for Traffic Video Classification

  • Wang, Yong;Hu, Shiqiang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 8 / pp.2833-2850 / 2014
  • This paper proposes a novel framework for traffic video classification based on chaotic features. First, each pixel intensity series in the video is modeled as a time series. Second, chaos theory is employed to generate chaotic features, and each video is then represented by a matrix of feature vectors. Third, the mean shift clustering algorithm is used to cluster the feature vectors. Finally, the earth mover's distance (EMD) is employed to obtain a distance matrix by comparing the similarity of the segmentation results. The distance matrix is transformed into a matching matrix, which is evaluated in the classification task. Experimental results show good traffic video classification performance, with robustness to environmental conditions such as occlusions and variable lighting.
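
The clustering and comparison stages lend themselves to a short sketch. The one below is a simplification assuming 1-D cluster signatures (the paper's EMD operates on full feature clusters), using scikit-learn's MeanShift and SciPy's wasserstein_distance; the chaotic feature extraction itself is assumed to happen elsewhere:

```python
# Sketch: mean shift groups per-pixel feature vectors; a 1-D earth mover's
# distance compares two videos' cluster signatures. This projects cluster
# centers to one dimension for simplicity, unlike the paper's full EMD.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.cluster import MeanShift

def cluster_signature(features):
    """features: (n_pixels, d) chaotic feature vectors for one video.
    Returns 1-D projected cluster centers and their relative weights."""
    ms = MeanShift().fit(features)
    _, counts = np.unique(ms.labels_, return_counts=True)
    return ms.cluster_centers_[:, 0], counts / counts.sum()

def video_distance(feat_a, feat_b):
    ca, wa = cluster_signature(feat_a)
    cb, wb = cluster_signature(feat_b)
    return wasserstein_distance(ca, cb, u_weights=wa, v_weights=wb)

# Placeholder chaotic features for two videos.
d = video_distance(np.random.rand(200, 3), np.random.rand(200, 3))
print(f"EMD between video signatures: {d:.4f}")
```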

Extraction of User Preference for Video Stimuli Using EEG-Based User Responses

  • Moon, Jinyoung;Kim, Youngrae;Lee, Hyungjik;Bae, Changseok;Yoon, Wan Chul
    • ETRI Journal / Vol. 35, No. 6 / pp.1105-1114 / 2013
  • Owing to the large number of video programs available, a method for accessing preferred videos efficiently through personalized video summaries and clips is needed. The automatic recognition of user states when viewing a video is essential for extracting meaningful video segments. Although there have been many studies on emotion recognition using various user responses, electroencephalogram (EEG)-based research on preference recognition for videos is at a very early stage. This paper proposes classification models based on linear and nonlinear classifiers using EEG features of band power (BP) values and asymmetry scores for four preference classes. The quadratic-discriminant-analysis-based model using BP features achieves a classification accuracy of 97.39% (±0.73%), and the models based on the other nonlinear classifiers using BP features achieve accuracies of over 96%, which is superior to previous work that addressed only binary preference classification. These results show that the proposed approach is suitable for personalized video segmentation, with high accuracy and classification power.
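
A minimal sketch of the winning pipeline, assuming Welch-PSD band power features and scikit-learn's QuadraticDiscriminantAnalysis; the channel count, band edges, epoch length, and data are assumptions, not the paper's protocol:

```python
# Sketch: band-power (BP) features from EEG epochs, classified with QDA.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

FS = 128                                    # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """epoch: (n_channels, n_samples) EEG segment -> flat BP vector."""
    freqs, psd = welch(epoch, fs=FS, nperseg=2 * FS, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]  # mean power per channel/band
    return np.concatenate(feats)

# Placeholder 8-channel epochs with four preference-class labels.
X = np.stack([band_powers(np.random.randn(8, 4 * FS)) for _ in range(120)])
y = np.random.randint(0, 4, 120)

qda = QuadraticDiscriminantAnalysis().fit(X, y)
print(qda.predict(X[:5]))
```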

Video Quality Representation Classification of Encrypted HTTP Adaptive Video Streaming

  • Dubin, Ran;Hadar, Ofer;Dvir, Amit;Pele, Ofir
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 8 / pp.3804-3819 / 2018
  • The increasing popularity of HTTP adaptive video streaming services has dramatically increased bandwidth requirements on operator networks, which attempt to shape their traffic through Deep Packet Inspection (DPI). However, Google and certain content providers have started to encrypt their video services. As a result, operators often encounter difficulties in shaping their encrypted video traffic via DPI. This highlights the need for new traffic classification methods for encrypted HTTP adaptive video streaming to enable smart traffic shaping. These new methods must effectively estimate the quality representation layer and playout buffer. We present a new machine learning method and show for the first time that video quality representation classification for (YouTube) encrypted HTTP adaptive streaming is possible. The crawler code and the datasets are provided in [43,44,51]. An extensive empirical evaluation shows that our method can independently classify every video segment into one of the quality representation layers with 97% accuracy if the browser is Safari with a Flash player, and 77% accuracy if the browser is Chrome, Explorer, Firefox, or Safari with an HTML5 player.
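
The paper's exact feature set is not reproduced here, but the core idea, classifying each segment's quality layer from statistics observable on encrypted traffic, can be sketched as below, with an illustrative random-forest learner and hypothetical flow features:

```python
# Sketch: quality-representation classification from encrypted-flow
# statistics only (no payload inspection). Features and learner are
# illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(pkt_sizes, pkt_times):
    """Per-segment features from observable (encrypted) traffic."""
    iat = np.diff(pkt_times) if len(pkt_times) > 1 else np.array([0.0])
    return [pkt_sizes.sum(),        # downloaded bytes
            len(pkt_sizes),         # packet count
            pkt_sizes.mean(),       # mean packet size
            iat.mean(), iat.std()]  # inter-arrival statistics

# Placeholder segments labeled with quality layers (0=lowest .. 3=highest).
rng = np.random.default_rng(0)
X = np.array([flow_features(rng.integers(100, 1500, 50).astype(float),
                            np.sort(rng.random(50))) for _ in range(200)])
y = rng.integers(0, 4, 200)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:3]))
```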

Automatic Genre Classification of Sports News Video Using Features of Playfield and Motion Vector

  • 송미영;장상현;조형제
    • The KIPS Transactions: Part B / Vol. 14B, No. 2 / pp.89-98 / 2007
  • Indexing that describes video content is required for video browsing, retrieval, and manipulation. Until now, index construction has mostly been carried out by experts manually assigning a limited set of keywords to video content, a costly and time-consuming undertaking, so automatic classification of video content is needed. This study proposes an automatic and efficient method for analyzing and summarizing sports news video covering five sports: soccer, golf, baseball, basketball, and volleyball. First, sports news video is divided into anchor scenes and sports report scenes; this scene classification is based on image preprocessing and the color features of anchor scenes. The sports scenes are then classified into the five genres using the dominant color of the playfield and the motion direction as features. Experiments on 241 sports news scenes achieved 75% accuracy. The proposed technique can therefore be used to retrieve individual sports news items and news video for sports highlights.
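
The playfield dominant-color cue can be sketched in a few lines of OpenCV; the region of interest (the lower half of the frame) and the histogram setup are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch: approximate the playfield color as the dominant hue of the
# lower half of a frame (e.g. green for soccer, blue-green for volleyball).
import cv2
import numpy as np

def dominant_field_hue(frame_bgr):
    """Return the dominant OpenCV hue (0-179) of the frame's lower half,
    where the playfield usually lies in sports footage."""
    h = frame_bgr.shape[0]
    roi = frame_bgr[h // 2:, :]                   # assumed field region
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    return int(np.argmax(hist))

# Synthetic frame whose lower half is a greenish "field".
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[120:, :] = (60, 160, 60)                    # BGR green
print(dominant_field_hue(frame))
```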

Using the fusion of spatial and temporal features for malicious video classification

  • 전재현;김세민;한승완;노용만
    • The KIPS Transactions: Part B / Vol. 18B, No. 6 / pp.365-374 / 2011
  • With the recent diversification of content distribution channels such as the Internet, IPTV/smart TV, and social networks, the demand for research on classifying and blocking harmful video is growing, yet studies that judge the harmfulness of video remain scarce. Existing harmful image classification studies use spatial features such as the ratio of skin regions in an image or Bag of Visual Words (BoVW). In video, however, temporal features such as motion periodicity and temporal correlation can be used in addition to spatial features to judge harmfulness. Previous harmful video classification studies use only one of the spatial and temporal feature types, or simply combine the two through data fusion at the decision level. In general, decision-level data fusion does not achieve the performance of feature-level data fusion. This paper proposes a method that classifies harmful video by fusing, at the feature level, the spatial and temporal features used in previous harmful video classification studies. The experiments show how classification performance changes as more features are used and as the data fusion method changes. Whereas spatial features alone yield a harmful video classification accuracy of 92.25%, adding the motion periodicity feature with feature-level data fusion improves the accuracy to 96%.
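
The contrast between the two fusion strategies is easy to make concrete. A minimal sketch follows, assuming SVM classifiers and random placeholder features; the real feature extractors (BoVW, motion periodicity) are beyond its scope:

```python
# Sketch: feature-level fusion (concatenate, then classify) versus
# decision-level fusion (classify separately, then average scores).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_spatial = rng.random((200, 50))    # e.g. BoVW / skin-ratio features
X_temporal = rng.random((200, 10))   # e.g. motion-periodicity features
y = rng.integers(0, 2, 200)          # 1 = harmful, 0 = benign

# Feature-level fusion: one classifier on the concatenated vector.
X_fused = np.hstack([X_spatial, X_temporal])
feat_clf = SVC(probability=True).fit(X_fused, y)

# Decision-level fusion: average the class scores of two classifiers.
clf_s = SVC(probability=True).fit(X_spatial, y)
clf_t = SVC(probability=True).fit(X_temporal, y)
dec_pred = ((clf_s.predict_proba(X_spatial) +
             clf_t.predict_proba(X_temporal)) / 2).argmax(axis=1)

print(feat_clf.predict(X_fused)[:5], dec_pred[:5])
```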

A Personal Videocasting System with Intelligent TV Browsing for a Practical Video Application Environment

  • Kim, Sang-Kyun;Jeong, Jin-Guk;Kim, Hyoung-Gook;Chung, Min-Gyo
    • ETRI Journal / Vol. 31, No. 1 / pp.10-20 / 2009
  • In this paper, a video broadcasting system between a home-server-type device and a mobile device is proposed. The home-server-type device can automatically extract semantic information from video contents such as news, a soccer match, or a baseball game. The indexing results are used to convert the original video contents into a digested or arranged format. From the mobile device, a user can make recording requests to the home-server-type device and can then watch and navigate the recorded video contents in digested form. The novelty of this study lies in the actual implementation of the proposed system, combining the available IT environment with indexing algorithms. The implementation of the system is demonstrated along with experimental results for the automatic video indexing algorithms, and the overall performance of the developed system is compared with existing state-of-the-art personal video recording products.


A Survey on Recent Video Action Classification Techniques

  • 차진혁;정승원
    • Proceedings of the KIPS Conference / KIPS Fall Conference 2019 / pp.1049-1052 / 2019
  • Recently, deep learning has been applied not only to still images but also to video. This paper surveys recent techniques for video action classification, the most prominent task in deep learning for video.

Classification of TV Program Scenes Based on Audio Information

  • Lee, Kang-Kyu;Yoon, Won-Jung;Park, Kyu-Sik
    • The Journal of the Acoustical Society of Korea / Vol. 23, No. 3E / pp.91-97 / 2004
  • In this paper, we propose a classification system for TV program scenes based on audio information. The system classifies a video scene into six categories: commercials, basketball games, football games, news reports, weather forecasts, and music videos. Two types of audio feature sets are extracted from each audio frame, timbral features and coefficient-domain features, which together form a 58-dimensional feature vector. In order to reduce the computational complexity of the system, the 58-dimensional feature set is further optimized to yield 10-dimensional features through the Sequential Forward Selection (SFS) method. This down-sized feature set is finally used to train and classify the given TV program scenes using k-NN and Gaussian pattern-matching algorithms. The classification accuracy of 91.6% reported here shows the promising performance of video scene classification based on audio information. Finally, the system's stability with respect to different query lengths is investigated.
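
The SFS stage maps naturally onto scikit-learn. Below is a minimal sketch, assuming SequentialFeatureSelector as a stand-in for the paper's SFS and placeholder 58-dimensional audio features; k and the data are illustrative:

```python
# Sketch: Sequential Forward Selection reduces 58 audio features to 10,
# which then feed a k-NN scene classifier.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = rng.random((300, 58))            # placeholder 58-dim audio features
y = rng.integers(0, 6, 300)          # six scene classes

knn = KNeighborsClassifier(n_neighbors=5)
sfs = SequentialFeatureSelector(knn, n_features_to_select=10,
                                direction="forward").fit(X, y)
X10 = sfs.transform(X)               # the down-sized 10-dim feature set
knn.fit(X10, y)
print(X10.shape, knn.score(X10, y))
```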

An Explainable Deep Learning Algorithm based on Video Classification

  • 김택위;조인휘
    • Proceedings of the KIPS Conference / KIPS Fall Conference 2023 / pp.449-452 / 2023
  • The rapid development of the Internet has led to a significant increase in multimedia content on social networks, making better analysis and improvement of video classification models an important task. Deep learning models typically behave as "black boxes" and therefore require explainable analysis. This article uses two classification models, ConvLSTM and VGG16+LSTM, combined with the layer-wise relevance propagation (LRP) method to generate visualized explanation results. The experimental classification accuracies are 75.94% for ConvLSTM and 92.50% for VGG16+LSTM. Explainable analysis of the VGG16+LSTM model with the LRP method shows that the classification model tends to rely on frames from the latter half of the video, and on the last frame in particular, as the basis for classification.
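
A minimal sketch of the VGG16+LSTM architecture named above, in Keras; the frame count, input size, head sizes, and optimizer are assumptions, not the paper's settings (the LRP analysis is out of scope here):

```python
# Sketch: a frozen VGG16 backbone extracts per-frame features, which an
# LSTM aggregates over time for video classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, H, W, NUM_CLASSES = 16, 224, 224, 10   # assumed dimensions

# weights=None keeps the sketch offline; use "imagenet" for pretraining.
backbone = tf.keras.applications.VGG16(include_top=False, weights=None,
                                       pooling="avg", input_shape=(H, W, 3))
backbone.trainable = False                         # feature extractor only

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, H, W, 3)),
    layers.TimeDistributed(backbone),              # per-frame 512-d features
    layers.LSTM(128),                              # temporal aggregation
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```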