• Title/Summary/Keyword: Video Object


Multiple Object Tracking and Identification System Using CCTV and RFID (감시 카메라와 RFID를 활용한 다수 객체 추적 및 식별 시스템)

  • Kim, Jin-Ah;Moon, Nammee
    • KIPS Transactions on Computer and Communication Systems / v.6 no.2 / pp.51-58 / 2017
  • Driven by safety and security concerns, the surveillance camera market is growing, and research on video-based recognition and tracking is correspondingly active. However, there is a limit to identifying an object using only the information obtained from video recognition and tracking, and identifying multiple objects is even harder in open spaces such as shopping malls and airports where surveillance cameras are deployed. This paper therefore proposes adding an object identification function, using RFID, to an existing video-based object recognition and tracking system, so that the two technologies complement each other's weaknesses. Through the interaction of the system modules, we address both the failures of video-based object recognition and tracking and the problems caused by RFID recognition errors. The system classifies object identification into four steps so that the reliability of the identified object's data can be maintained. The efficiency of the system is demonstrated with a simulation program.
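The abstract's four-step identification is not specified here; purely as an illustration, a fusion of video tracks and RFID reads might promote a track through identification states like this (all names, the zone/time matching rule, and the 2-second tolerance are assumptions, not the paper's design):

```python
# Hypothetical fusion of video tracks and RFID reads: a track is
# promoted one identification step per corroborating RFID read
# observed in the same zone at roughly the same time.

STATES = ["unknown", "candidate", "probable", "identified"]  # 4-step ladder

def update_state(track, rfid_reads):
    """Promote a track one identification step per corroborating read."""
    for read in rfid_reads:
        # A read corroborates the track if it was seen in the same zone
        # within a short time window (assumed 2-second tolerance).
        if read["zone"] == track["zone"] and abs(read["t"] - track["t"]) <= 2.0:
            idx = STATES.index(track["state"])
            if idx < len(STATES) - 1:
                track["state"] = STATES[idx + 1]
                track["tag_id"] = read["tag_id"]
    return track

track = {"zone": "gate_A", "t": 10.0, "state": "unknown", "tag_id": None}
reads = [{"zone": "gate_A", "t": 10.5, "tag_id": "TAG42"},
         {"zone": "gate_B", "t": 11.0, "tag_id": "TAG99"}]
update_state(track, reads)
# track["state"] is now "candidate" and track["tag_id"] is "TAG42"
```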

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho;Oh, Kyeong-Jin;Sean, Vi-Sal;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • Previously, most users simply watched video content without any specific requirement or purpose. Today, however, users watching a video often want to learn more about the things that appear in it. As multimedia spreads across internet-capable devices such as computers, smart TVs, and smartphones, the demand for finding multimedia or browsing information about objects of interest is growing with it. Meeting this demand requires labor-intensive annotation of objects in video content, so many researchers have studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object is added directly, and annotation data including all related information about the object must be managed individually: users have to input all related information themselves, and when they later browse for information related to an object, they can find only the limited resources that exist in the annotated data. Placing annotations on objects thus imposes a huge workload on users. To reduce this workload and minimize the work involved in annotation, existing object-based annotation research attempts automatic annotation using computer vision techniques such as object detection, recognition, and tracking. However, such techniques must detect and recognize the wide variety of objects that appear in video content, and fully automated annotation still faces difficulties. To overcome these difficulties, we propose a system consisting of two modules.
The first module is an annotation module that lets many annotators collaboratively annotate objects in video content and access semantic data through Linked Data. Annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not itself include all relevant information about an object, objects appearing in the video are simply linked to existing resources in Linked Data: the annotation server stores only a URI and metadata such as position, time, and size, and when the user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about objects, using the collaboratively generated annotation data, while watching the video: a simple user interaction automatically generates a query, the related information is retrieved from Linked Data, and the additional information about the object is presented to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
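As a sketch of the lightweight annotation record the abstract describes (a Linked Data URI plus position, time, and size only), the field names below are illustrative assumptions, not the paper's schema:

```python
from dataclasses import dataclass

# Hypothetical shape of an annotation record: the server stores only a
# Linked Data URI plus spatio-temporal metadata; all descriptive
# information is fetched from Linked Data via the URI when needed.

@dataclass
class Annotation:
    uri: str          # Linked Data resource, e.g. a DBpedia URI
    start: float      # appearance interval (seconds)
    end: float
    x: int            # bounding-box position and size (pixels)
    y: int
    w: int
    h: int

ann = Annotation("http://dbpedia.org/resource/Eiffel_Tower",
                 start=12.0, end=18.5, x=40, y=60, w=120, h=200)

def covers(ann, t):
    """True if the annotated object is on screen at time t."""
    return ann.start <= t <= ann.end

# covers(ann, 15.0) is True; the viewer module would then resolve
# ann.uri against Linked Data to show related information.
```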

Anomalous Event Detection in Traffic Video Based on Sequential Temporal Patterns of Spatial Interval Events

  • Ashok Kumar, P.M.;Vaidehi, V.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.169-189 / 2015
  • Detection of anomalous events from video streams is a challenging problem in many video surveillance applications. One such application that has received significant attention from the computer vision community is traffic video surveillance. In this paper, a Lossy Count based Sequential Temporal Pattern mining approach (LC-STP) is proposed for detecting spatio-temporal abnormal events (such as a traffic violation at a junction) from sequences of video streams. The proposed approach relies mainly on spatial abstractions of each object, mining frequent temporal patterns in a sequence of video frames to form a regular temporal pattern. To detect each object in every frame, the input video is first pre-processed with Gaussian Mixture Models. After the detection of foreground objects, tracking is carried out using block motion estimation with the three-step search method. The primitive events of each object are represented by assigning spatial and temporal symbols corresponding to its location and time information. These primitive events are analyzed to form temporal patterns across a sequence of video frames, representing the temporal relations between various objects' primitive events. This is repeated for each window of sequences, and the support of each temporal sequence is obtained with LC-STP to discover the regular patterns of normal events. Events deviating from these patterns are identified as anomalies. Unlike traditional frequent itemset mining methods, the proposed method generates maximal frequent patterns without candidate generation. Experimental results show that the proposed method performs well and can detect anomalies in real traffic video data.
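The lossy counting at the heart of LC-STP can be sketched as follows; the paper's window handling and temporal-symbol encoding are omitted, and the pattern stream and error bound below are illustrative assumptions:

```python
from math import ceil

# Minimal sketch of lossy counting, the approximate frequent-pattern
# core behind LC-STP: items are counted bucket by bucket, and items
# whose count cannot exceed the error bound are pruned at each
# bucket boundary, so no candidate generation is needed.

def lossy_count(stream, epsilon):
    counts, deltas = {}, {}
    bucket_width = ceil(1 / epsilon)
    for n, item in enumerate(stream, start=1):
        bucket = ceil(n / bucket_width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item], deltas[item] = 1, bucket - 1
        if n % bucket_width == 0:          # end of bucket: prune rare items
            for k in [k for k in counts if counts[k] + deltas[k] <= bucket]:
                del counts[k], deltas[k]
    return counts

# Toy stream of temporal-pattern symbols:
stream = ["AB", "AB", "BC", "AB", "CD", "AB", "AB", "BC"]
freq = lossy_count(stream, epsilon=0.25)
# "AB" survives as a frequent pattern; one-off patterns are pruned.
```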

Hybrid Blending for Video Composition (동영상 합성을 위한 혼합 블랜딩)

  • Kim, Jihong;Heo, Gyeongyong
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.231-237 / 2020
  • In this paper, we present an efficient hybrid video blending scheme that improves the naturalness of composite video produced by Poisson equation-based composition methods. In image blending, various methods are used depending on the purpose of the composition. The hybrid method proposed in this paper leaves no seam in the composite video and reduces the color distortion of the object by properly combining the advantages of Poisson blending and alpha blending. First, the source object is blended with the Poisson method, and the color difference between the blended object and the original object is computed. If the color difference is equal to or greater than a threshold, the object of the source video is alpha blended and combined with the Poisson-blended object. Simulation results show that the proposed method is not only more natural than Poisson blending or alpha blending alone, but also requires a relatively small amount of computation.
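The threshold decision described above might be sketched as follows; the real Poisson step solves a Poisson equation and is stubbed out here, and the threshold value and equal-weight mix are assumptions, not the paper's parameters:

```python
# Sketch of the hybrid decision: Poisson-blend first, then mix in the
# alpha-blended object where the colors drifted too far from the
# source. Pixels are toy 1-D grayscale values.

def alpha_blend(src, dst, alpha=0.5):
    return [alpha * s + (1 - alpha) * d for s, d in zip(src, dst)]

def color_difference(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def hybrid_blend(src, dst, poisson_blend, threshold=30.0):
    blended = poisson_blend(src, dst)
    if color_difference(blended, src) >= threshold:
        # Colors drifted: mix in the alpha-blended source to pull them back.
        alpha = alpha_blend(src, dst)
        blended = [(p + a) / 2 for p, a in zip(blended, alpha)]
    return blended

# Fake Poisson step that shifts colors strongly, to trigger the fallback:
fake_poisson = lambda s, d: [x + 60 for x in s]
out = hybrid_blend([100, 120, 140], [80, 90, 100], fake_poisson)
# -> [125.0, 142.5, 160.0], closer to the source than the Poisson output
```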

Design and Implementation of the Video Data Model Based on Temporal Relationship (시간 관계성을 기반으로 한 비디오 데이터 모델의 설계 및 구현)

  • 최지희;용환승
    • Journal of Korea Multimedia Society / v.2 no.3 / pp.252-264 / 1999
  • The key characteristic of video data is its spatial/temporal relationships. In this paper, we propose a content-based video retrieval system built on a hierarchical data structure for specifying the temporal semantics of video data. The system can represent the temporal relationships within a video's hierarchical structure, the temporal relationships between video objects, and the temporal relationships of moving video objects. We implemented these temporal relationships in an object-relational database management system using inheritance, encapsulation, function overloading, etc., so that richer and more extensible temporal functions can support a broad range of temporal queries.
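The temporal relationships described are likely in the spirit of Allen's interval relations; a minimal classifier for two video intervals might look like this (the relation names and the subset chosen are assumptions, not taken from the paper):

```python
# Classify the temporal relation between two video intervals
# (a subset of Allen's interval relations).

def temporal_relation(a, b):
    """a and b are (start, end) pairs with start < end."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:  return "before"
    if a2 == b1: return "meets"
    if a == b:   return "equals"
    if b1 <= a1 and a2 <= b2: return "during"
    if a1 <= b1 and b2 <= a2: return "contains"
    if a1 < b1 < a2 < b2:     return "overlaps"
    return "other"

# Shot A runs 0-10 s, object X appears 2-5 s: X occurs "during" A.
temporal_relation((2, 5), (0, 10))
```

A temporal query such as "find objects appearing during shot A" then reduces to filtering object intervals by this relation.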


Object Motion Analysis and Interpretation in Video

  • Song, Dan;Cho, Mi-Young;Kim, Pan-Koo
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.694-696 / 2004
  • As video technology becomes more sophisticated, object motion analysis and interpretation has become a fundamental task for computer vision understanding. For this understanding, we first apply a sum-of-absolute-differences (SAD) algorithm, computed per scene, to motion detection. We then focus on representing the moving objects in the scene using spatio-temporal relations. The video can thus be explained comprehensively from both aspects: the relations between moving objects and the intervals of video events.
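The sum-of-absolute-differences step can be sketched directly; the frames here are toy nested lists of grayscale values, and the motion threshold is an illustrative assumption:

```python
# Sum-of-absolute-differences (SAD) motion detection between two
# frames: large total pixel difference means something moved.

def sad(frame_a, frame_b):
    return sum(abs(p - q)
               for row_a, row_b in zip(frame_a, frame_b)
               for p, q in zip(row_a, row_b))

def motion_detected(frame_a, frame_b, threshold=50):
    return sad(frame_a, frame_b) > threshold

prev = [[10, 10], [10, 10]]
curr = [[10, 90], [10, 10]]   # one pixel changed a lot
motion_detected(prev, curr)   # SAD = 80 > 50, so motion is detected
```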


Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Due to the lack of skilled manpower to analyze videos in many industries, machine learning and artificial intelligence are actively used to assist. In this situation, the demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also increased rapidly. However, object detection and tracking suffers from difficulties that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Accordingly, action and emotion detection models built on object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer performance degradation from bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition as the emotion recognition service. The proposed model uses single-linkage hierarchical-clustering-based Re-ID and processing methods that maximize hardware throughput. It is more accurate than a re-identification model using simple metrics, offers near-real-time processing, and prevents tracking failures due to object departure and re-emergence, occlusion, etc. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently.
The re-identification model extracts a feature vector from the bounding box of the object detected by the tracking model in each frame, and applies single-linkage hierarchical clustering against past frames using the extracted feature vectors to identify an object whose track was lost. Through this process, the same object can be re-tracked after re-appearance or occlusion, and the action and facial emotion detection results of the newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue method that reduce RAM requirements while maximizing GPU memory throughput. We also introduce the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition with object tracking information. The academic significance of this study is that, through these processing techniques, the two-stage re-identification model can achieve real-time performance even in a high-cost environment that performs action and facial emotion detection, without the accuracy loss of using simple metrics. The practical implication is that the various industrial fields which require action and facial emotion detection but struggle with object tracking failures can analyze videos effectively with the proposed model. With its high re-tracking accuracy and processing performance, the model can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where the integration of tracking information and extracted metadata creates great industrial and business value.
In the future, to measure object tracking performance more precisely, we need to conduct an experiment using the MOT Challenge dataset, which is used by many international conferences. We will investigate the problems the IoF algorithm cannot solve and develop a complementary algorithm. We also plan to apply this model to datasets from various fields related to intelligent video analysis.
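One plausible reading of the IoF (Intersection over Face) metric mentioned above is the overlap of a face box with a person's tracking box, normalized by the face area rather than by the union as IoU would be; the paper's exact definition may differ:

```python
# IoF as intersection area divided by the FACE box area: a face fully
# inside a person's tracking box scores 1.0, so the recognized emotion
# can be attached to that track with confidence.

def iof(face, person):
    """Boxes are (x1, y1, x2, y2); returns intersection / face area."""
    ix1, iy1 = max(face[0], person[0]), max(face[1], person[1])
    ix2, iy2 = min(face[2], person[2]), min(face[3], person[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    face_area = (face[2] - face[0]) * (face[3] - face[1])
    return inter / face_area if face_area else 0.0

iof((10, 10, 20, 20), (0, 0, 50, 80))   # face fully inside -> 1.0
iof((10, 10, 20, 20), (30, 30, 40, 40)) # no overlap -> 0.0
```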

Multi-Viewpoint Switching System Structure and Video Player Implementation Based on MPEG-4 (MPEG-4 시스템 기반의 다시점 전환 시스템 구조 및 재생기 구현)

  • Lee, Jun-Cheol;Lee, Jung-Won;Chang, Yong-Seok;Kim, Sung-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.1 / pp.80-93 / 2007
  • This paper proposes structures for the Object Descriptor and Elementary Stream Descriptor that provide multi-view video services within the current MPEG-4 3-Dimensional Audio-Video standard. First, it defines the structures of the Object Descriptor and Elementary Stream Descriptor in the existing MPEG-4 system and analyzes them individually. However, extending the existing system is inappropriate for providing multi-view audio-video services over transmission and reception. This paper therefore proposes a new Object Descriptor structure capable of viewpoint switching, which considers the correlation between viewpoints when multi-view video is transmitted. With it, viewpoints can be switched according to user requests in a multi-view video service, and the overhead of transmitting information about the required viewpoint is reduced.
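As a hedged illustration of the proposed descriptor's intent (one descriptor referencing every viewpoint's elementary stream, so the player can switch views without fetching a new descriptor), the classes and field names below are illustrative and not actual MPEG-4 syntax:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified multi-view Object Descriptor: each
# elementary stream carries a viewpoint index so the player can
# select a stream on a viewpoint-switch request.

@dataclass
class ESDescriptor:
    es_id: int
    viewpoint: int          # which camera this stream belongs to

@dataclass
class MultiViewOD:
    od_id: int
    streams: List[ESDescriptor] = field(default_factory=list)

    def stream_for(self, viewpoint):
        return next(s for s in self.streams if s.viewpoint == viewpoint)

od = MultiViewOD(1, [ESDescriptor(101, 0), ESDescriptor(102, 1)])
od.stream_for(1).es_id   # switching to viewpoint 1 selects stream 102
```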

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication / v.11 no.4 / pp.76-85 / 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, the accuracy of object detection has improved thanks to deep learning approaches such as the two-stage Region-based Convolutional Neural Network (R-CNN). One-stage detection algorithms such as single-shot detection (SSD) and You Only Look Once (YOLO), developed at the expense of some accuracy, can be used for real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent object detection performance and speed. To address this hardware requirement, which is burdensome for low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide the whole image frame into smaller ones and run inference on a Convolutional Neural Network (CNN) based detection network, which is much faster than a conventional network designed for full-frame images. The proposed method significantly reduces the computational requirement without losing throughput or object detection accuracy.
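The sub-frame idea can be sketched as tiling the frame, detecting per tile, and shifting the detections back to frame coordinates; the 2x2 split and the stub detector below are assumptions, not the paper's configuration:

```python
# Split a frame into tiles, run a (smaller, faster) detector on each
# tile, then translate tile-local boxes into full-frame coordinates.

def tiles(width, height, nx=2, ny=2):
    """Yield (x0, y0, x1, y1) tile boxes covering the frame."""
    tw, th = width // nx, height // ny
    for j in range(ny):
        for i in range(nx):
            yield (i * tw, j * th, (i + 1) * tw, (j + 1) * th)

def detect_frame(frame_w, frame_h, detect_tile):
    found = []
    for x0, y0, x1, y1 in tiles(frame_w, frame_h):
        for bx0, by0, bx1, by1 in detect_tile(x0, y0, x1, y1):
            # Shift tile-local boxes into full-frame coordinates.
            found.append((bx0 + x0, by0 + y0, bx1 + x0, by1 + y0))
    return found

# Stub detector: reports one box at each tile's top-left corner.
stub = lambda x0, y0, x1, y1: [(0, 0, 10, 10)]
boxes = detect_frame(640, 480, stub)
# -> boxes anchored at (0,0), (320,0), (0,240), (320,240)
```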

Research on Intelligent Anomaly Detection System Based on Real-Time Unstructured Object Recognition Technique (실시간 비정형객체 인식 기법 기반 지능형 이상 탐지 시스템에 관한 연구)

  • Lee, Seok Chang;Kim, Young Hyun;Kang, Soo Kyung;Park, Myung Hye
    • Journal of Korea Multimedia Society / v.25 no.3 / pp.546-557 / 2022
  • Recently, the demand for interpreting image data with artificial intelligence is rapidly increasing in various fields. Object recognition and detection techniques using deep learning are mainly used, and integrated video analysis for recognizing unstructured objects is a particularly important problem. In the case of natural or social disasters, object recognition alone is limited because such objects have unstructured shapes. In this paper, we propose an intelligent integrated video analysis system that can recognize unstructured objects based on video turning points and object detection. We also introduce a method to apply and evaluate object recognition using virtual augmented images from 2D to 3D through a GAN.