• Title/Summary/Keyword: Video Classification


Modeling and Classification of MPEG VBR Video Data using Gradient-based Fuzzy c-means with Divergence Measure (분산 기반의 Gradient Based Fuzzy c-means 에 의한 MPEG VBR 비디오 데이터의 모델링과 분류)

  • 박동철;김봉주
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.7C
    • /
    • pp.931-936
    • /
    • 2004
  • GBFCM(DM), Gradient-based Fuzzy c-means with Divergence Measure, is proposed in this paper for efficient clustering of GPDFs (Gaussian probability density functions) in MPEG VBR video data modeling. The proposed GBFCM(DM) is based on GBFCM (Gradient-based Fuzzy c-means) with the divergence as its distance measure. Sets of real-time MPEG VBR video traffic data are considered. Each group of 12 frames of MPEG VBR video data is first transformed into 12-dimensional data for modeling, and the transformed 12-dimensional data are passed through the proposed GBFCM(DM) for classification. The GBFCM(DM) is compared with the conventional FCM and GBFCM algorithms. The results show that the GBFCM(DM) gives a 5∼15% improvement in false alarm rate over conventional algorithms such as FCM and GBFCM.
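As a rough illustration of the clustering scheme this abstract describes, the sketch below runs fuzzy c-means over Gaussian PDFs using a symmetric divergence between diagonal Gaussians as the distance. It is a simplified stand-in, not the paper's algorithm: it uses the standard batch FCM membership/prototype updates rather than the per-sample gradient update, and the farthest-point initialization and weighted variance update are assumptions.

```python
import numpy as np

def divergence(g1, g2):
    """Symmetric divergence between two diagonal Gaussians,
    each given as (mean, var) with 1-D arrays of equal length."""
    m1, v1 = g1
    m2, v2 = g2
    d = (v1 + (m1 - m2) ** 2) / v2 + (v2 + (m2 - m1) ** 2) / v1 - 2.0
    return 0.5 * float(d.sum())

def fcm_divergence(gaussians, n_clusters, m=2.0, n_iter=50):
    """Fuzzy c-means over Gaussian PDFs with the divergence as distance.
    Batch updates are used here instead of the paper's gradient-based
    per-sample update (a simplification)."""
    # farthest-point initialization so prototypes start well separated
    centers = [(gaussians[0][0].copy(), gaussians[0][1].copy())]
    while len(centers) < n_clusters:
        dmin = [min(divergence(g, c) for c in centers) for g in gaussians]
        j = int(np.argmax(dmin))
        centers.append((gaussians[j][0].copy(), gaussians[j][1].copy()))
    for _ in range(n_iter):
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        d = np.array([[max(divergence(g, c), 1e-12) for c in centers]
                      for g in gaussians])
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # prototype update: membership-weighted means and variances
        w = u ** m
        for k in range(n_clusters):
            wk = w[:, k] / w[:, k].sum()
            centers[k] = (sum(wi * g[0] for wi, g in zip(wk, gaussians)),
                          sum(wi * g[1] for wi, g in zip(wk, gaussians)))
    return u, centers
```

With two well-separated groups of Gaussians, the memberships converge so that each group is assigned to its own prototype.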

Exploring Image Processing and Image Restoration Techniques

  • Omarov, Batyrkhan Sultanovich;Altayeva, Aigerim Bakatkaliyevna;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.172-179
    • /
    • 2015
  • Because of the development of computers and high-technology applications, all the devices that we use have become more intelligent. In recent years, security and surveillance systems have become more complicated as well. Before new technologies brought video surveillance systems, security cameras were used only for recording events as they occurred, and a human had to analyze the recorded data. Nowadays, computers are used for video analytics, and video surveillance systems have become more autonomous and automated. The types of security cameras have also changed, and the market offers different kinds of cameras with integrated software. Even though there is a variety of hardware, their capabilities leave a lot to be desired; software solutions are therefore used to compensate for this drawback. Image processing is a very important part of video surveillance and security systems. Capturing an image exactly as it appears in the real world is difficult, if not impossible. There is always noise to deal with. This is caused by the graininess of the emulsion, low resolution of the camera sensors, motion blur caused by movements and drag, focus problems, depth-of-field issues, or the imperfect nature of the camera lens. This paper reviews image processing, pattern recognition, and image digitization techniques, which will be useful in security services, for analyzing bio-images, for image restoration, and for object classification.
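Among the restoration techniques such a review covers, a median filter is the classic remedy for the impulse ("salt-and-pepper") noise mentioned above. A minimal sketch, assuming a 2-D grayscale array and a 3x3 window:

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter for a 2-D grayscale image: replaces each
    pixel with the median of its neighborhood, which removes isolated
    impulse-noise specks while preserving edges better than a mean blur."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

A single bright speck in a flat region is completely removed, since the window median ignores the one outlier.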

Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems
    • /
    • v.32 no.2
    • /
    • pp.87-108
    • /
    • 2023
  • Purpose The main purpose of this study is to improve fake news detection performance by using video information to overcome the limitations of extant text- and image-oriented studies, which do not reflect the latest news consumption trends. Design/methodology/approach This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of the related information (i.e., scripts; video metadata; facial expression; scripts and video metadata; scripts and facial expression; video metadata and facial expression; and scripts, video metadata, and facial expression) were used as input for training and evaluation. The input data were analyzed using six models, such as support vector machine and deep neural network. The area under the curve (AUC) was used to evaluate the performance of the classification models. Findings The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expression) were the highest in the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expression). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample size.
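The evaluation metric the study relies on, AUC, can be computed directly as a rank statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count half). A minimal sketch, independent of any particular model:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank statistic.
    labels: 0/1 ground truth; scores: classifier scores (higher = more
    likely positive)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # count positive-over-negative "wins", half credit for ties
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0, a perfectly inverted one gives 0.0, and chance-level scores hover around 0.5.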

User Perception of Olfactory Information for Video Reality and Video Classification (영상실감을 위한 후각정보에 대한 사용자 지각과 영상분류)

  • Lee, Guk-Hee;Li, Hyung-Chul O.;Ahn, Chung Hyun;Choi, Ji Hoon;Kim, Shin Woo
    • Journal of the HCI Society of Korea
    • /
    • v.8 no.2
    • /
    • pp.9-19
    • /
    • 2013
  • There has been much advancement in reality enhancement using audio-visual information. On the other hand, there is little research on the provision of olfactory information because smell is difficult to implement and control. In order to obtain the basic data needed to provide smell for video reality, in this research we investigated user perception of smell in diverse videos and then classified the videos based on the collected user perception data. To do so, we chose five main questions: 'whether smell is present in the video' (smell presence), 'whether one desires to experience the smell with the video' (preference for smell presence with the video), 'whether one likes the smell itself' (preference for the smell itself), 'desired smell intensity if it is presented with the video' (smell intensity), and 'the degree of smell concreteness' (smell concreteness). After sampling video clips of various genres likely to receive either high or low ratings on the questions, we had participants watch each video, after which they provided ratings on a 7-point scale for the above five questions. Using the rating data for each video clip, we constructed scatter plots by pairing the five questions and representing the rating scales of each paired question as X-Y axes in a 2-dimensional space. The video clusters and distributional shapes in the scatter plots provide important insight into the characteristics of each video cluster and into how to present olfactory information for video reality.


Customizing Ground Color to Deliver Better Viewing Experience of Soccer Video

  • Ahn, Il-Koo;Kim, Young-Woo;Kim, Chang-Ick
    • ETRI Journal
    • /
    • v.30 no.1
    • /
    • pp.101-112
    • /
    • 2008
  • In this paper, we present a method to customize the ground color in outdoor sports video to provide TV viewers with a better viewing experience or subjective satisfaction. This issue, related to content personalization, is becoming critical with the advent of mobile TV and interactive TV. In outdoor sports video, such as soccer video, it is sometimes observed that the ground color is not satisfactory to viewers. In this work, the proposed algorithm is focused on customizing the ground color to deliver a better viewing experience for viewers. The algorithm comprises three modules: ground detection, shot classification, and ground color customization. We customize the ground color by considering the difference between ground colors from both input video and the target ground patch. Experimental results show that the proposed scheme offers useful tools to provide a more comfortable viewing experience and that it is amenable to real-time performance, even in a software-based implementation.
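A minimal sketch of the customization idea: detect grass-like pixels with a crude green-dominance rule (a stand-in for the paper's ground-detection and shot-classification modules) and shift them by the difference between their mean color and a target ground color. The margin threshold and the plain RGB-space shift are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def customize_ground(img, target, g_margin=20):
    """Shift grass-like pixels toward a target ground color.
    img: H x W x 3 float array in [0, 255]; target: desired (R, G, B).
    Ground detection here is just "green dominant by a margin"."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (g > r + g_margin) & (g > b + g_margin)   # crude grass detector
    out = img.copy()
    if mask.any():
        # move detected ground so its mean color matches the target patch
        shift = np.asarray(target, float) - img[mask].mean(axis=0)
        out[mask] = np.clip(img[mask] + shift, 0, 255)
    return out, mask
```

Non-ground regions (players, crowd, graphics) fall outside the mask and keep their original colors, which is the point of masking before shifting.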


News Video Editing System (뉴스비디오 편집시스템)

  • 고경철;이양원
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2000.10a
    • /
    • pp.421-425
    • /
    • 2000
  • Efficient retrieval of news video requires the development of video processing and editing technology to extract meaningful information from video data. Technologically advanced countries are researching video editing systems, and recent work has focused on fully practical systems. This paper presents a system that can extract and edit meaningful information from video data on user demand, through scene change detection and an editing subsystem with automatic/manual classification; the system uses the more efficient of its scene change detection algorithms, as selected by the user.
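The scene change detection such a system relies on is classically done by comparing gray-level histograms of consecutive frames. A minimal sketch, with the bin count and cut threshold as illustrative, user-tunable assumptions:

```python
import numpy as np

def scene_changes(frames, bins=16, threshold=0.5):
    """Histogram-difference shot-cut detection. A cut is declared at
    frame i when the L1 distance between the normalized gray-level
    histograms of frames i-1 and i exceeds the threshold."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        h, _ = np.histogram(frame, bins=bins, range=(0, 256))
        h = h / h.sum()                     # normalize to a distribution
        if prev is not None and np.abs(h - prev).sum() > threshold:
            cuts.append(i)
        prev = h
    return cuts
```

Gradual transitions (dissolves, wipes) need more elaborate measures; this hard-cut detector is only the baseline building block.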


Design and Implementation of ONVIF Video Analytics Service for a Smart IP Network camera (Smart IP 네트워크 카메라의 비디오 내용 분석 서비스 설계 및 구현)

  • Nguyen, Vo Thanh Phu;Nguyen, Thanh Binh;Chung, Sun-Tae;Kang, Ho-Seok
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2012.05a
    • /
    • pp.102-105
    • /
    • 2012
  • ONVIF is becoming a de facto standard specification for supporting interoperability among network video products, and it also includes a specification for video analytics services. A smart IP network camera is an IP network camera supporting video analytics. In this paper, we present our efforts in integrating the ONVIF Video Analytics Service into our currently developing smart IP network camera (SSIPNC; Soongsil Smart IP Network Camera). SSIPNC supports object detection, tracking, classification, and event detection with a proprietary configuration protocol and metadata formats. SSIPNC is based on TI's IPNC ONVIF implementation, which supports the ONVIF Core specification and several ONVIF services such as the device service, imaging service, and media service, but not the video analytics service.


Classification of Degradation Types Based on Distribution of Blocky Blocks for IP-Based Video Services

  • Min, Kyung-Yeon;Lee, Seon-Oh;Sim, Dong-Gyu;Lee, Hyun-Woo;Ryu, Won;Lee, Kyoung-Hee
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.454-457
    • /
    • 2011
  • In this letter, we propose a new quality measurement method to identify the causes of video quality degradation for IP-based video services. This degradation mainly results from network performance issues and video compression. The proposed algorithm identifies the causes based on statistical feature values from blocky block distribution in degraded IP-based videos. We found that the sensitivity and specificity of the proposed algorithm are 93.63% and 91.99%, respectively, in comparison with real error types and subjective test data.
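A minimal sketch of a blockiness feature in the spirit of this letter: compare luminance jumps at 8x8 block boundaries against jumps elsewhere (horizontal direction only, for brevity). The exact feature values the authors derive from the blocky block distribution are not specified in the abstract, so this measure is illustrative rather than their algorithm.

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of mean absolute luminance jump across vertical 8x8 block
    boundaries to the mean jump at non-boundary columns. Values well
    above 1 suggest visible blocking artifacts from compression or
    network impairments. img: 2-D grayscale array."""
    dv = np.abs(np.diff(img, axis=1))          # horizontal gradients
    cols = np.arange(dv.shape[1])
    at_edge = (cols % block) == block - 1      # gradients crossing a boundary
    edge = dv[:, at_edge].mean()
    interior = dv[:, ~at_edge].mean()
    return edge / max(interior, 1e-12)
```

On a smooth ramp the ratio stays near 1, while an image made of flat 8x8 blocks with differing levels scores orders of magnitude higher.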

A Vehicle Classification Method in Thermal Video Sequences using both Shape and Local Features (형태특징과 지역특징 융합기법을 활용한 열영상 기반의 차량 분류 방법)

  • Yang, Dong Won
    • Journal of IKEEE
    • /
    • v.24 no.1
    • /
    • pp.97-105
    • /
    • 2020
  • A thermal imaging sensor receives the radiated energy from the target and the background, so it has been widely used for detection, tracking, and classification of targets at night for military purposes. In recognizing targets automatically using thermal images, if the correct object edges are used, classification results with high accuracy can be generated. However, since thermal images have lower spatial resolution and more blurred edges than color images, classification accuracy with thermal images can decrease. In this paper, to overcome this problem, a new hierarchical classifier using both shape and local features based on segmentation reliabilities, together with a class/pose updating method, is proposed for vehicle classification. The proposed classification method was validated using thermal video sequences of more than 20,000 images covering four types of military vehicles: main battle tank, armored personnel carrier, military truck, and estate car. The experimental results showed that the proposed method outperformed state-of-the-art methods in classification accuracy.
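The fusion idea can be sketched as reliability-weighted score fusion: when segmentation is reliable, the shape classifier dominates; otherwise local features take over. The linear weighting below is an illustrative simplification of the paper's hierarchical scheme, and the class names and score values are hypothetical.

```python
def fuse_scores(shape_scores, local_scores, seg_reliability):
    """Blend per-class scores from a shape-based and a local-feature
    classifier. seg_reliability in [0, 1]: 1 means trust the segmented
    silhouette (shape features), 0 means fall back to local features."""
    w = seg_reliability
    return {cls: w * shape_scores[cls] + (1 - w) * local_scores[cls]
            for cls in shape_scores}
```

The fused decision flips between the two classifiers' verdicts as the segmentation reliability moves across 0.5, which is the behavior a reliability-gated hierarchy is meant to approximate.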

Adopting C4.5 Classification and Its Application to Deinterlacing (디인터레이싱을 위한 C4.5 분류화 기법의 적용 및 구현)

  • Kim, Donghyung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.1
    • /
    • pp.8-14
    • /
    • 2017
  • Deinterlacing is a method to convert interlaced video, comprising two fields (even and odd), to progressive video. It can be divided into spatial and temporal methods. Deinterlacing in the spatial domain can easily be hardware-implemented, but yields image degradation if information about the deinterlaced pixel does not exist in the same field. On the other hand, deinterlacing in the temporal domain yields a deinterlaced image with higher quality but uses more memory, and hardware implementation is more difficult. Furthermore, deinterlacing in the temporal domain degrades image quality when motion is not estimated properly. The proposed method works in the spatial domain. It uses several deinterlacing methods according to the statistical characteristics of neighboring pixel locations. In this procedure, the proposed method uses the C4.5 algorithm, a typical entropy-based classification algorithm, to choose the optimal method from among the candidates. The simulation results show that the proposed algorithm outperforms previous deinterlacing methods in terms of objective and subjective image quality.
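The entropy-based split criterion behind C4.5 can be sketched directly. Below, `gain_ratio` computes information gain normalized by split information for a categorical attribute, which is how C4.5 avoids unduly favoring many-valued attributes. The deinterlacing-specific attributes are not given in the abstract, so the samples here are plain illustrative dicts.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, C4.5's impurity measure."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(samples, attr, label):
    """C4.5 gain ratio of a categorical attribute for predicting `label`.
    samples: list of dicts mapping attribute/label names to values."""
    base = entropy([s[label] for s in samples])
    n = len(samples)
    groups = {}
    for s in samples:                          # partition by attribute value
        groups.setdefault(s[attr], []).append(s[label])
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    split = entropy([s[attr] for s in samples])  # split information
    return (base - cond) / split if split else 0.0
```

An attribute that perfectly separates the classes reaches gain ratio 1, while an attribute independent of the label scores 0; C4.5 picks the candidate with the highest ratio at each node.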