• Title/Summary/Keyword: video object

Moving Object Segmentation Using Object Area Tracking Algorithm (움직임 영역 추출 알고리즘을 이용한 자동 움직임 물체 분할)

  • Lee Kwang-Ho;Lee Seung-Ik
    • Journal of Korea Multimedia Society / v.7 no.9 / pp.1240-1245 / 2004
  • This paper presents algorithms for segmenting moving objects from image sequences with stationary backgrounds, such as those produced by surveillance cameras and video phones. The moving-object area is first extracted with the proposed object-searching algorithm, and the moving object is then segmented within that area. The proposed algorithms are robust to noise, and the results show that they efficiently segment and track the moving-object area.
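
The pipeline described in this abstract (a stationary background, extraction of a moving-object area, then segmentation inside it) can be approximated with plain background differencing. A minimal sketch in Python/OpenCV, with the threshold and kernel size chosen arbitrarily for illustration rather than taken from the paper:

```python
import cv2
import numpy as np

def segment_moving_object(background, frame, thresh=30):
    """Segment a moving object against a stationary background.

    background, frame: BGR images of identical size.
    Returns a binary mask and the bounding box of the largest moving region.
    """
    bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    fr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Absolute difference against the stationary background.
    diff = cv2.absdiff(fr_gray, bg_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)

    # Morphological opening/closing to suppress isolated noise pixels.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # The largest connected component is taken as the moving-object area
    # (OpenCV 4.x return signature for findContours).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    largest = max(contours, key=cv2.contourArea)
    return mask, cv2.boundingRect(largest)  # (x, y, w, h)
```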

A Segmentation Method for a Moving Object on A Static Complex Background Scene. (복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구)

  • Park, Sang-Min;Kwon, Hui-Ung;Kim, Dong-Sung;Jeong, Kyu-Sik
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.3 / pp.321-329 / 1999
  • Moving-object segmentation extracts an object of interest from consecutive image frames and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in MPEG-4. This paper proposes a new segmentation method in which difference images are calculated from three consecutive input frames and used to obtain both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background with background area projection (BAP). Missing parts of the AI are recovered with the help of the OI: the boundary information of the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, together with the AI, form the boundaries of the moving object. Experimental results for a fast-moving object on a complex background scene are included.
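
The three-frame differencing step can be sketched as below; the background area projection (BAP) and the active-contour refinement described in the abstract are omitted, and the threshold is an illustrative assumption:

```python
import cv2

def three_frame_difference(f_prev, f_curr, f_next, thresh=25):
    """Coarse moving-object masks from three consecutive grayscale frames.

    Two difference images are combined: their intersection keeps only pixels
    that changed in both frame pairs (a rough movement area), while their
    union gives a rough object area.
    """
    d1 = cv2.absdiff(f_curr, f_prev)
    d2 = cv2.absdiff(f_next, f_curr)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    movement_area = cv2.bitwise_and(b1, b2)      # OI-like movement area
    coarse_object_area = cv2.bitwise_or(b1, b2)  # AI-like coarse object area
    return coarse_object_area, movement_area
```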

RAVIP: Real-Time AI Vision Platform for Heterogeneous Multi-Channel Video Stream

  • Lee, Jeonghun;Hwang, Kwang-il
    • Journal of Information Processing Systems / v.17 no.2 / pp.227-241 / 2021
  • Deep learning-based object detection techniques such as YOLO achieve high detection performance and precision on a single-channel video stream; extending to real-time multi-channel object detection, however, requires high-performance hardware. In this paper, we propose a novel back-end server framework, the real-time AI vision platform (RAVIP), which extends object detection from a single channel to multiple simultaneous channels and works well even on low-end server hardware. RAVIP assembles appropriate component modules from the RODEM (real-time object detection module) Base to create per-channel instances, enabling efficient parallelization of object detection instances on limited hardware resources through continuous monitoring of resource utilization. Practical experiments show that RAVIP can optimize CPU, GPU, and memory utilization while performing object detection in a multi-channel setting, and that it can provide object detection services at 25 FPS for all 16 channels simultaneously.
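
A rough sketch of the per-channel parallelization idea, not the RODEM/RAVIP implementation itself: one detection worker per video channel plus a monitor thread that polls resource utilization. The `detect_fn` callback and the use of `psutil` are assumptions made for illustration.

```python
import queue
import threading

import psutil  # third-party; assumed available for CPU/memory polling

def detection_worker(channel_id, frame_queue, detect_fn):
    """Consume frames for one channel and run object detection on each."""
    while True:
        frame = frame_queue.get()
        if frame is None:          # sentinel: shut this channel down
            break
        detections = detect_fn(frame)
        print(f"channel {channel_id}: {len(detections)} objects")

def resource_monitor(interval=1.0):
    """Report CPU and memory utilization, standing in for the platform's
    resource-aware scheduling decisions."""
    while True:
        cpu = psutil.cpu_percent(interval=interval)
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:.0f}% mem={mem:.0f}%")

def start_channels(num_channels, detect_fn):
    """Spawn one worker per channel and return the per-channel frame queues."""
    queues = [queue.Queue(maxsize=8) for _ in range(num_channels)]
    for ch, q in enumerate(queues):
        threading.Thread(target=detection_worker, args=(ch, q, detect_fn),
                         daemon=True).start()
    threading.Thread(target=resource_monitor, daemon=True).start()
    return queues  # producers push decoded frames into these queues
```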

A Research of CNN-based Object Detection for Multiple Object Tracking in Image (영상에서 다중 객체 추적을 위한 CNN 기반의 다중 객체 검출에 관한 연구)

  • Ahn, Hyochang;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.18 no.3 / pp.110-114 / 2019
  • Video monitoring technology has recently developed rapidly to monitor and respond quickly to various situations, and computer vision research on tracking objects in video is being actively carried out. This paper proposes an efficient multiple-object detection method based on a convolutional neural network (CNN) for multiple-object tracking. Experimental results show that the proposed method can detect and track multiple objects in video and also performs well in complex environments.
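
The abstract does not name a specific network, so the sketch below stands in with a pretrained Faster R-CNN from torchvision (assumed version 0.13 or later) to show how per-frame CNN detections would feed a multi-object tracker:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained detector used purely as an example; the paper's own CNN is not specified.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_objects(frame_rgb, score_thresh=0.5):
    """Return (boxes, labels, scores) for one RGB frame (H x W x 3, uint8).

    A tracker would associate these per-frame detections across frames,
    e.g. by IoU matching.
    """
    tensor = to_tensor(frame_rgb)        # HWC uint8 -> CHW float in [0, 1]
    output = model([tensor])[0]
    keep = output["scores"] >= score_thresh
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]
```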

A Design of Video Conversation System Using the UML (UML을 이용한 화상 대화 시스템의 설계)

  • Jang Jae-Myoung;Kim Yun-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.3 / pp.561-569 / 2005
  • Object-oriented design has recently become the major paradigm for software development. Most systems follow this paradigm, but past studies in the video conversation domain were not based on full-scale object-oriented design. This paper therefore presents a systematic architecture design using UML for a video conversation system, a well-known and widely used application. It analyzes the functional and non-functional requirements of a video conversation system with high service demand, and its object-oriented design applying the '4+1 View Model' guarantees component reusability and makes it possible to extend the system by adding components as needed. The components designed in this paper are therefore expected to be useful for other video conversation systems and to be extended to the web environment.
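
The component-based, extensible structure the abstract argues for can be illustrated with a few abstract interfaces; the class names below are hypothetical and are not taken from the paper's UML design:

```python
from abc import ABC, abstractmethod

class MediaCapture(ABC):
    """Illustrative component interface; names do not come from the paper."""
    @abstractmethod
    def capture_frame(self) -> bytes: ...

class MediaTransport(ABC):
    @abstractmethod
    def send(self, payload: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class MediaRenderer(ABC):
    @abstractmethod
    def render(self, payload: bytes) -> None: ...

class VideoConversationSession:
    """Session wired from replaceable components, so any one component can be
    swapped or extended without touching the others."""
    def __init__(self, capture: MediaCapture, transport: MediaTransport,
                 renderer: MediaRenderer):
        self.capture, self.transport, self.renderer = capture, transport, renderer

    def exchange_once(self) -> None:
        # Send the local frame and render whatever the remote side sent back.
        self.transport.send(self.capture.capture_frame())
        self.renderer.render(self.transport.receive())
```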

Hybrid Video Information System Supporting Content-based Retrieval and Similarity Retrieval (비디오의 의미검색과 유사성검색을 위한 통합비디오정보시스템)

  • Yun, Mi-Hui;Yun, Yong-Ik;Kim, Gyo-Jeong
    • The Transactions of the Korea Information Processing Society / v.6 no.8 / pp.2031-2041 / 1999
  • This paper presents HVIS (Hybrid Video Information System), which supports semantic retrieval for a variety of users by integrating feature-based and annotation-based retrieval of unstructured, massive video data. HVIS divides video into video document, sequence, scene, and object units to model the metadata, and proposes the Two-layered Hybrid Object-oriented Metadata Model (THOMM), which consists of a raw-data layer for the physical video stream and a metadata layer supporting annotation-based, content-based, and similarity retrieval. Based on this model, we present a video query language that enables annotation-based, content-based, and similarity queries, together with a Video Query Processor and its query processing algorithm. In particular, we present a similarity expression that computes a degree of similarity reflecting the user's interest. The proposed system is implemented with Visual C++, ActiveX, and ORACLE.
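
A minimal sketch of the layered-metadata and user-weighted similarity ideas; the field names and weights below are illustrative assumptions, not the actual THOMM schema or the paper's similarity expression:

```python
from dataclasses import dataclass, field

@dataclass
class SceneMetadata:
    """Illustrative metadata record; field names are not taken from THOMM."""
    scene_id: str
    annotations: set = field(default_factory=set)        # annotation-based layer
    color_histogram: list = field(default_factory=list)  # content-based layer

def similarity(query: SceneMetadata, scene: SceneMetadata,
               w_annotation=0.5, w_color=0.5):
    """User-weighted similarity combining annotation overlap and histogram
    distance, mimicking a degree of similarity that reflects user interest."""
    if query.annotations or scene.annotations:
        overlap = (len(query.annotations & scene.annotations)
                   / len(query.annotations | scene.annotations))
    else:
        overlap = 0.0
    if query.color_histogram and scene.color_histogram:
        dist = sum(abs(a - b) for a, b in
                   zip(query.color_histogram, scene.color_histogram))
        color_sim = 1.0 - min(dist / len(query.color_histogram), 1.0)
    else:
        color_sim = 0.0
    return w_annotation * overlap + w_color * color_sim
```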

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam;Ravichandran, Suban
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.230-240 / 2022
  • Sharing videos online is an emerging and important concept in applications such as surveillance and mobile video search. There is therefore a need for personalized web video retrieval systems that explore relevant videos and help people searching for video related to specific big-data content. For this purpose, dimensionality-reduced attributes/features are computed from videos to capture discriminative aspects of scenes based on shape, histogram, texture, object annotation, coordination, color, and contour data. Dimensionality reduction depends mainly on feature extraction and feature selection in multi-labeled retrieval of multimedia data. Researchers have implemented various techniques to reduce dimensionality based on the visual features of video data, but each has advantages and disadvantages for video retrieval with advanced features. In this research, we present a Novel Intent-based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that examines dimensionality reduction for exact and fast video retrieval based on different visual features. For dimensionality reduction, NIDRSLA learns the projection matrix by increasing the dependence between the enlarged data and the projected-space features. The proposed approach also addresses video segmentation with frame selection using low-level and high-level features, together with efficient object annotation for video representation. Experiments on a synthetic dataset demonstrate the efficiency of the proposed approach compared with traditional state-of-the-art video retrieval methodologies.
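
As a generic stand-in for the dimensionality-reduction step (not the NIDRSLA projection learning itself, whose dependence-maximizing objective is not reproduced here), the sketch below projects visual feature vectors with PCA and retrieves nearest neighbours in the reduced space:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for visual feature vectors (e.g. concatenated
# color/texture/shape descriptors), one row per video frame or shot.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 256))

pca = PCA(n_components=32)
reduced = pca.fit_transform(features)   # projected space used for retrieval

def retrieve(query_vec, database, top_k=5):
    """Nearest-neighbour retrieval in the reduced feature space."""
    q = pca.transform(query_vec.reshape(1, -1))
    dists = np.linalg.norm(database - q, axis=1)
    return np.argsort(dists)[:top_k]

print(retrieve(features[0], reduced))   # indices of the most similar items
```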

Design and Implementation of Distributed Object Framework Supporting Audio/Video Streaming (오디오/비디오 스트리밍을 지원하는 분산 객체 프레임 워크 설계 및 구현)

  • Ban, Deok-Hun;Kim, Dong-Seong;Park, Yeon-Sang;Lee, Heon-Ju
    • Journal of KIISE:Computing Practices and Letters / v.5 no.4 / pp.440-448 / 1999
  • This paper describes the design and implementation of a software framework that supports the processing of real-time stream data such as audio and video in a distributed object-oriented computing environment. DAViS (Distributed Object Framework supporting Audio/Video Streaming), proposed in this paper, abstracts the software components concerned with audio/video processing as distributed objects and separates the audio/video data transmission path between them from the control-information path. Based on DAViS, distributed applications can handle audio/video data at the same level of abstraction as is provided by existing distributed programming environments. DAViS has an internal structure flexible enough to easily incorporate new types of audio/video data and to accommodate advances in underlying network and computer system technology with very little modification.
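
The key idea of separating the control path from the audio/video data path can be sketched with two plain TCP sockets; the ports, message handling, and media file below are illustrative assumptions, not the DAViS API:

```python
import socket
import threading

def serve_stream(control_port=5000, data_port=5001, host="127.0.0.1"):
    """Accept control commands on one TCP socket and ship media payloads on a
    separate socket, so control traffic never competes with the A/V stream."""
    control_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    control_srv.bind((host, control_port))
    control_srv.listen(1)

    data_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    data_srv.bind((host, data_port))
    data_srv.listen(1)

    def handle_control(conn):
        # Control channel: small text commands (play, pause, ...).
        while (cmd := conn.recv(64)):
            print("control command:", cmd.decode(errors="replace").strip())

    ctrl_conn, _ = control_srv.accept()
    threading.Thread(target=handle_control, args=(ctrl_conn,), daemon=True).start()

    # Data channel: raw media chunks from a hypothetical media file.
    data_conn, _ = data_srv.accept()
    with open("sample_video.bin", "rb") as media:
        while (chunk := media.read(4096)):
            data_conn.sendall(chunk)
```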

Context-aware Video Surveillance System

  • An, Tae-Ki;Kim, Moon-Hyun
    • Journal of Electrical Engineering and Technology / v.7 no.1 / pp.115-123 / 2012
  • A video analysis system used to detect events in video streams generally involves several processes, including object detection, analysis of object trajectories, and recognition of the trajectories by comparison with an a priori trained model. However, these processes do not work well in a complex environment with many occlusions, mirror effects, and/or shadow effects. We propose a new approach to a context-aware video surveillance system that detects predefined contexts in video streams. The proposed system consists of two modules: a feature extractor and a context recognizer. The feature extractor calculates the moving energy, which represents the amount of moving objects in a video stream, and the stationary energy, which represents the amount of still objects; situations and events are represented as motion changes and stationary energy in the video streams. The context recognizer determines whether predefined contexts are present in a video stream using the extracted moving and stationary energies. To train each context model and recognize predefined contexts, we propose and use DAdaBoost, a new ensemble classifier based on AdaBoost, one of the best-known ensemble classifier algorithms. The proposed approach is expected to be robust in more complex environments that have mirror and/or shadow effects.
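
The feature-extractor side (moving energy and stationary energy) can be sketched as below; the thresholds are illustrative assumptions, and the paper's DAdaBoost classifier is not reproduced:

```python
import cv2

def moving_and_stationary_energy(prev_gray, curr_gray, background_gray,
                                 motion_thresh=25, still_thresh=25):
    """Moving energy: fraction of pixels that changed between consecutive
    frames. Stationary energy: fraction of pixels that differ from the
    background model but did not move between frames."""
    motion = cv2.absdiff(curr_gray, prev_gray) > motion_thresh
    foreground = cv2.absdiff(curr_gray, background_gray) > still_thresh
    moving_energy = motion.mean()
    stationary_energy = (foreground & ~motion).mean()
    return moving_energy, stationary_energy

# A standard AdaBoost classifier (not the paper's DAdaBoost) could then be
# trained on sequences of (moving_energy, stationary_energy) features, e.g.:
#   from sklearn.ensemble import AdaBoostClassifier
#   clf = AdaBoostClassifier().fit(X_train, y_train)
```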

Development of the Stereo Camera System for Active Remote Monitoring (능동적 원격감시를 위한 스테레오 카메라 시스템의 개발)

  • Park, K.;Cho, D. H.
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.10a / pp.437-441 / 1997
  • In a conventional remote monitoring system, a user in front of a computer monitor can acquire only two-dimensional visual information, and only passively. Thus, even if the user finds an interesting object in the video image, he or she can hardly acquire additional information about the object, such as its name, 3D shape, etc. In this paper, an active monitoring system that shows additional information about a selected object is proposed. The active remote monitoring system calculates the 3D position of the object selected in the video images; using this 3D position, other information about the object can then be retrieved from the database and shown on the screen. To calculate the 3D position, two CCD cameras that can be tilted and panned by three stepping motors are used. The 3D position calculation algorithm and experimental results are explained.
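
The 3D position calculation can be illustrated with the standard stereo triangulation formula for a rectified camera pair; the paper's pan/tilt geometry would add rotation terms on top of this, and the numbers in the example are arbitrary:

```python
def triangulate_point(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """3D position (X, Y, Z) in metres from a matched pixel pair in a
    rectified stereo rig with parallel optical axes."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: no valid depth")
    Z = focal_px * baseline_m / disparity       # depth from disparity
    X = (x_left - cx) * Z / focal_px            # lateral offset
    Y = (y - cy) * Z / focal_px                 # vertical offset
    return X, Y, Z

# Example: 800 px focal length, 12 cm baseline, principal point at (320, 240).
print(triangulate_point(x_left=350, x_right=330, y=250,
                        focal_px=800.0, baseline_m=0.12, cx=320, cy=240))
```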
