• Title/Summary/Keyword: video analysis


Multi-channel Video Analysis Based on Deep Learning for Video Surveillance (보안 감시를 위한 심층학습 기반 다채널 영상 분석)

  • Park, Jang-Sik;Wiranegara, Marshall;Son, Geum-Young
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.6 / pp.1263-1268 / 2018
  • In this paper, a video analysis technique is proposed to implement a video surveillance system that combines deep learning object detection with a probabilistic data association filter for tracking multiple objects, and its implementation on a GPU is suggested. The proposed technique performs object detection and object tracking sequentially. The deep learning network architecture uses ResNet for object detection, and a probabilistic data association filter is applied for multiple-object tracking. The proposed video analysis can be used to detect intruders trespassing in a restricted area or to count the number of people entering a specified area. As a result of simulations and experiments, 48 channels of video can be analyzed at a speed of about 27 fps, and real-time video analysis is possible through the RTSP protocol.
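The detect-then-track sequence described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gating and distance weighting below are a simplified stand-in for a full probabilistic data association filter, and all coordinates and the gate size are hypothetical.

```python
import numpy as np

def gate_and_associate(track_pred, detections, gate_dist=50.0):
    """Associate a track's predicted position with detections inside a
    validation gate, weighting each gated candidate by proximity
    (a simplified stand-in for probabilistic data association)."""
    dists = np.linalg.norm(detections - track_pred, axis=1)
    in_gate = dists < gate_dist
    if not in_gate.any():
        return track_pred  # no measurement in gate: keep the prediction
    # Gaussian-style weights over the gated detections
    w = np.exp(-dists[in_gate] ** 2 / (2 * gate_dist ** 2))
    w /= w.sum()
    return (w[:, None] * detections[in_gate]).sum(axis=0)

track = np.array([100.0, 100.0])                      # predicted position
dets = np.array([[102.0, 98.0], [300.0, 300.0]])      # detector outputs
print(gate_and_associate(track, dets))                # far detection is gated out
```

In a full system the detections would come from the object detector (ResNet-based in the paper) run on each frame, and the weighted update would feed a per-track state filter.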

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.47-60 / 2012
  • Video data is unstructured and complex in form. As the importance of efficient management and retrieval of video data increases, studies on video parsing based on the visual features contained in video content have been conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined by physical boundaries does not consider the semantic associations in video data. Recently, studies that structure semantically associated video shots into video scenes, defined by semantic boundaries, using clustering methods have been actively pursued. Previous studies on video scene detection try to detect scenes using clustering algorithms based on similarity measures between video shots that depend mainly on color features. However, correctly identifying a video shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which detects video scenes by clustering similar shots organizing the same event based on visual features including the color histogram, the corner edge, and the object color histogram. The SDCEO is noteworthy in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual transitions as well as abrupt transitions. The SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises the Color Histogram Analysis step and the Corner Edge Analysis step.
In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, recording the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shot boundaries by comparing the corner edge feature between the last frame of the previous shot boundary and the first frame of the next. In the Key-frame Extraction step, SDCEO compares each frame with all other frames in the same shot boundary, measures similarity using the Euclidean distance between histograms, and selects the frame most similar to all the others as the key-frame. The Video Scene Detector clusters associated shots organizing the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram. SDCEO organizes the final video scenes by repeated clustering until the similarity distance between shot boundaries is less than a threshold h. In this paper, we construct a prototype of SDCEO and carry out experiments with manually constructed baseline data; the experimental results, with 93.3% precision for shot boundary detection and 83.3% precision for video scene detection, are satisfactory.
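The color-histogram comparison step can be sketched as follows. This is a minimal illustration under assumed parameters (8 bins per channel and an L1-distance threshold of 0.5), not SDCEO's actual configuration, which also uses corner edges and object color histograms:

```python
import numpy as np

def color_hist(frame, bins=8):
    """Quantized color histogram: fraction of pixels per joint RGB bin."""
    h, _ = np.histogramdd(frame.reshape(-1, 3),
                          bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return h.ravel() / h.sum()

def shot_boundaries(frames, threshold=0.5):
    """Mark a cut wherever successive frames' histograms diverge."""
    hists = [color_hist(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        # L1 distance between normalized histograms lies in [0, 2]
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            cuts.append(i)
    return cuts

rng = np.random.default_rng(0)
dark = rng.integers(0, 64, (4, 16, 16, 3))       # four dark frames
bright = rng.integers(192, 256, (4, 16, 16, 3))  # four bright frames
frames = np.concatenate([dark, bright])
print(shot_boundaries(frames))                   # cut at the dark-to-bright change
```

A gradual transition (dissolve or fade) spreads the histogram change over many frames, which is why SDCEO supplements this test with the corner edge feature.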

AnoVid: A Deep Neural Network-based Tool for Video Annotation (AnoVid: 비디오 주석을 위한 심층 신경망 기반의 도구)

  • Hwang, Jisu;Kim, Incheol
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.986-1005 / 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that automatically generates various metadata for each scene or shot in a long drama video containing rich elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, the AnoVid video annotation tool employs a total of six deep neural network models, for object detection, place recognition, time zone recognition, person recognition, activity detection, and description generation. Using these models, AnoVid can generate rich video annotation data. In addition, AnoVid not only automatically generates a JSON-format video annotation data file, but also provides various visualization facilities for checking the video content analysis results. Through experiments using a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool, AnoVid.
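A JSON annotation file of the kind described might look as follows. The field names and values here are purely illustrative assumptions; the paper defines its own drama-video metadata schema, which is not reproduced here:

```python
import json

# Hypothetical per-shot annotation covering the six model outputs mentioned
# in the abstract (object, place, time zone, person, activity, description).
annotation = {
    "video": "drama_ep01.mp4",
    "shots": [
        {
            "shot_id": 1,
            "time": {"start": 0.0, "end": 4.2},
            "place": "office",
            "time_zone": "day",
            "persons": ["person_01"],
            "objects": ["desk", "monitor"],
            "activity": "typing",
            "description": "A man works at his desk.",
        }
    ],
}

print(json.dumps(annotation, indent=2, ensure_ascii=False))
```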

A Flow Analysis Framework for Traffic Video

  • Bai, Lu-Shuang;Xia, Ying;Lee, Sang-Chul
    • Journal of Korea Spatial Information System Society / v.11 no.2 / pp.45-53 / 2009
  • Rapid progress in multimedia data acquisition technologies has enabled collecting vast amounts of video in real time. Although the information gathered from these videos can be high in both quantity and quality, use of the collected data is typically very limited in human-centric monitoring systems. In this paper, we propose a framework for analyzing long traffic videos using a series of content-based analysis tools. Our framework suggests a method to integrate these tools to extract highly informative features specific to traffic video analysis. The analytical framework provides (1) re-sampling tools for efficient and precise analysis, (2) foreground extraction methods for unbiased traffic flow analysis, (3) frame property analysis tools using a variety of frame characteristics including brightness, entropy, Harris corners, and variance of traffic flow, and (4) a visualization tool that summarizes the entire video sequence and automatically highlights a collection of frames based on metrics defined by semi-automated or fully automated techniques. Based on the proposed framework, we developed an automated traffic flow analysis system, and in our experiments we show results from two example traffic videos taken from different monitoring angles.
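The frame property analyses listed in (3) can be illustrated with a small sketch computing brightness, entropy, and variance for a grayscale frame. The array sizes are illustrative, and the Harris corner measure is omitted to keep the sketch self-contained:

```python
import numpy as np

def frame_properties(frame):
    """Per-frame statistics for a grayscale frame (uint8 array)."""
    # Brightness: mean pixel intensity
    brightness = frame.mean()
    # Entropy: Shannon entropy of the 256-bin intensity histogram
    hist = np.bincount(frame.ravel(), minlength=256) / frame.size
    nz = hist[hist > 0]
    entropy = -(nz * np.log2(nz)).sum()
    # Variance: intensity spread, a rough contrast/texture cue
    variance = frame.var()
    return brightness, entropy, variance

flat = np.full((32, 32), 128, dtype=np.uint8)  # uniform frame: zero entropy
noisy = np.random.default_rng(1).integers(0, 256, (32, 32)).astype(np.uint8)
print(frame_properties(flat))
print(frame_properties(noisy))
```

Plotting these values over time gives the kind of per-frame summary the framework's visualization tool would highlight.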


A Study on the Comparative Analysis of Overseas Medical Care Video and Domestic Medical Care Video (해외 의료케어 전문 영상과 국내 의료케어 영상 비교분석에 관한 연구)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.415-420 / 2021
  • At a time when the medical care field is developing from various angles, analysis of medical promotional video is significant: it bears on competitiveness, and in an era of accelerating AI systems, medical care is a leading field. Accordingly, videos for publicity, advertising, and explanation are very important, and they are also an important means of changing a company's image. In this study, the design characteristics and differences of such videos were compared, focusing on a comparative analysis of professional videos of AI medical brands from two major foreign companies (Stryker and Hill-rom) and one leading domestic company (Nine Bell), with detailed part analysis and section analysis performed accordingly. As a technical analysis of the video editing, the transition methods and infographic graphics were considered. In an in-depth comparison, differences in image tone and color harmony across the AI medical videos were analyzed and compared. For a detailed analysis of the video imagery, we compared the distinctive elements appearing in the promotional design and specific scenes of each video's intro and product-description parts.

Object Motion Analysis and Interpretation in Video

  • Song, Dan;Cho, Mi-Young;Kim, Pan-Koo
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.694-696 / 2004
  • With the development of more sophisticated video capabilities, object motion analysis and interpretation has become a fundamental task for computer vision understanding. For that understanding, we first apply a sum of absolute differences (SAD) algorithm, computed over the scene, to motion detection. We then focus on representing the moving objects in the scene using spatio-temporal relations. The video can thus be explained comprehensively from both aspects: relations among moving objects and video event intervals.
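The SAD step can be sketched as a frame-difference test. This is a minimal illustration with an assumed per-pixel threshold, not the authors' exact algorithm:

```python
import numpy as np

def sad_motion(prev, curr, threshold=5.0):
    """Sum-of-absolute-differences motion test between two grayscale frames.
    Returns (mean absolute difference, motion flag)."""
    sad = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).sum()
    mad = sad / prev.size  # normalize so the threshold is per-pixel
    return mad, mad > threshold

static = np.full((16, 16), 100, dtype=np.uint8)
moved = static.copy()
moved[4:12, 4:12] = 200  # an "object" brightens a region of the frame
print(sad_motion(static, static))  # identical frames: no motion
print(sad_motion(static, moved))   # large difference: motion detected
```

Once motion is flagged, the regions that changed can be tracked across frames and described with spatio-temporal relations, as the paper outlines.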


HEVC Coding Unit Mode Based Motion Frame Analysis

  • Jia, Qiong;Dong, Tianyu;Jang, Euee S.
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.52-54 / 2021
  • In this paper, we propose a method to predict whether a video frame contains motion according to which coding unit modes are invoked in HEVC. Motion prediction for video frames is useful for video compression and video data extraction. In existing technology, motion prediction is usually performed with high-complexity computer vision techniques. In contrast, we propose analyzing motion frames based on the HEVC coding unit modes, which does not require a static background frame. The prediction accuracy of motion frame analysis with our method exceeds 80%.
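One way such a coding-unit-mode heuristic could look is sketched below. The mode names, the SKIP-ratio rule, and the threshold are illustrative assumptions, not the paper's actual decision rule:

```python
def motion_frame(cu_modes, skip_ratio_threshold=0.7):
    """Classify a frame as containing motion from its HEVC coding-unit modes.
    Heuristic: a frame dominated by SKIP CUs (cheaply copied from the
    reference frame) is treated as static; otherwise it likely has motion."""
    skip = sum(1 for m in cu_modes if m == "SKIP")
    return skip / len(cu_modes) < skip_ratio_threshold

static_frame = ["SKIP"] * 90 + ["INTER"] * 10
moving_frame = ["SKIP"] * 30 + ["INTER"] * 50 + ["INTRA"] * 20
print(motion_frame(static_frame), motion_frame(moving_frame))
```

Because the CU modes are already produced by the encoder, such a check adds almost no cost compared with running a separate vision pipeline.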


Similar Video Detection Method with Summarized Video Image and PCA (요약 비디오 영상과 PCA를 이용한 유사비디오 검출 기법)

  • Yoo, Jae-Man;Kim, Woo-Saeng
    • Journal of Korea Multimedia Society / v.8 no.8 / pp.1134-1141 / 2005
  • With the ever-growing popularity of video web publishing, popular content is being compressed, reformatted, and modified, resulting in excessive content duplication. Such duplicated data can degrade search speed and retrieval accuracy, although duplicates on other sites can also serve as alternatives when a specific site fails. This paper proposes an efficient method for retrieving similar video data from a large database. In this research, we compare summarized video images instead of raw video data, and detect similar videos by clustering low-dimensional feature vectors obtained through PCA (principal component analysis). We show through experiments that our proposed method is efficient and accurate.
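A minimal sketch of the PCA step, assuming each summarized video has already been reduced to a fixed-length feature vector (the vectors here are synthetic, and the dimensionality is an arbitrary choice):

```python
import numpy as np

def pca_features(summaries, k=2):
    """Project summarized-video feature vectors onto their top-k principal
    components (via SVD); similar videos land near each other there."""
    X = summaries - summaries.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T

rng = np.random.default_rng(2)
base = rng.random((1, 64))
video_a = base + 0.01 * rng.random((3, 64))  # near-duplicates of one video
video_b = rng.random((3, 64))                # unrelated videos
feats = pca_features(np.vstack([video_a, video_b]))
d_dup = np.linalg.norm(feats[0] - feats[1])   # within the duplicate group
d_diff = np.linalg.norm(feats[0] - feats[3])  # across unrelated videos
print(d_dup < d_diff)
```

Clustering in the reduced space then groups the near-duplicates, which is far cheaper than comparing raw video data directly.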


Performance Analysis of 6DoF Video Streaming Based on MPEG Immersive Video (MPEG 몰입형 비디오 기반 6DoF 영상 스트리밍 성능 분석)

  • Jeong, Jong-Beom;Lee, Soonbin;Kim, Inae;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.27 no.5 / pp.773-793 / 2022
  • The moving picture experts group (MPEG) immersive video (MIV) coding standard has been established to support six degrees of freedom (6DoF) in virtual reality (VR) by transmitting multiple high-quality immersive videos. MIV exploits two approaches, considering the tradeoff between bandwidth and computational complexity: 1) eliminating the correlation between multi-view videos, or 2) selecting representative videos. This paper presents a performance analysis of intermediate views synthesized at source view positions and along synthesized pose traces, using high-efficiency video coding (HEVC) and versatile video coding (VVC), for the two above-mentioned approaches.

Relationship between the Biomechanical Analysis and the Qualitative Analysis of Video Software for the Walking Movement (보행동작에 대한 바이오메카닉스적 분석과 비디오의 정성적 분석의 상호관련성)

  • Bae, Young-Sang;Woo, Oh-Goo;Lee, Jeong-Min
    • Korean Journal of Applied Biomechanics / v.20 no.4 / pp.421-427 / 2010
  • The purpose of this study was to investigate the relationship between quantitative biomechanical analysis and qualitative video software analysis for evaluating walking movement. Fourteen collegiate students who agreed with the purpose and method of this study participated as subjects. The subjects' slow and fast walking was filmed at the experiment site, and several mechanical factors were calculated. The empirical evidence from the experiment indicated a significant difference (p<.001) between the two analysis methods for the distance factors of the walking movement, but no statistically significant difference for the spatial factors observed in the experiment. In more detail, no significant difference was found between the walking ratios, which express the coordination between stride length and stride frequency. The findings also indicated a high correlation coefficient (over r=.9), supporting strong explanatory power between the biomechanical method and the Dartfish video software method. Therefore, if the data are gathered with a proper experimental method, the video software method can be used like the quantitative data of the biomechanical method.