• Title/Summary/Keyword: Video Object


Temporal Video Modeling of Cultural Video (교양비디오의 시간지원 비디오 모델링)

  • 강오형;이지현;고성현;김정은;오재철
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.439-442
    • /
    • 2004
  • Traditional database systems have used models that support operations and relationships based on simple intervals. Video data models are required to support the temporal paradigm, various object and temporal operations, and efficient retrieval and browsing. Since the video model is based on the object-oriented paradigm, I present an entire model structure for video data through the design of metadata used as the logical schema of video, the attributes and operations of objects, and inheritance and annotation. By applying the temporal paradigm through the definition of time points and time intervals in the object-oriented model, video information can be used more efficiently over time variation.

  • PDF
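The time-point and time-interval notions the abstract builds on can be illustrated with a minimal sketch (hypothetical names; the paper's actual model is richer and object-oriented throughout):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A temporal interval over video frames: [start, end), in frame numbers."""
    start: int
    end: int

    def contains(self, t: int) -> bool:
        """Time-point membership test."""
        return self.start <= t < self.end

    def overlaps(self, other: "Interval") -> bool:
        """True when the two intervals share at least one frame."""
        return self.start < other.end and other.start < self.end

    def before(self, other: "Interval") -> bool:
        """This interval ends no later than the other starts."""
        return self.end <= other.start

# An annotated video object would carry its valid interval as metadata.
scene = Interval(0, 120)
caption = Interval(90, 150)
```

Temporal operations like these let retrieval queries relate annotations to shots by time rather than by raw frame lists.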

A Robust Object Extraction Method for Immersive Video Conferencing (몰입형 화상 회의를 위한 강건한 객체 추출 방법)

  • Ahn, Il-Koo;Oh, Dae-Young;Kim, Jae-Kwang;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.2
    • /
    • pp.11-23
    • /
    • 2011
  • In this paper, an accurate and fully automatic video object segmentation method is proposed for video conferencing systems that require real-time performance. The proposed method consists of two steps: 1) accurate object extraction on the initial frame, and 2) real-time object extraction from subsequent frames using the result of the first step. Object extraction on the initial frame starts by generating a cumulative edge map from frame differences at the beginning of the sequence, since the initial shape of the foreground object can be estimated from the cumulative motion. This estimated shape is used to assign the object and background seeds needed for Graph-Cut segmentation. Once the foreground object is extracted by Graph-Cut segmentation, real-time object extraction is conducted using the extracted object and the double edge map obtained from the difference between two successive frames. Experimental results show that, unlike previous methods, the proposed method is suitable for real-time processing even on VGA-resolution videos, making it a useful tool for immersive video conferencing systems.
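The initialization idea above — accumulating frame differences to estimate where the foreground is, then deriving Graph-Cut seeds — can be sketched roughly as follows. This is a simplified illustration, not the paper's method: it counts per-pixel changes instead of building an edge map, and all thresholds and names are assumptions.

```python
def cumulative_motion_map(frames, diff_thresh=20):
    """Accumulate per-pixel frame differences over the first few frames.

    frames: list of equally sized 2-D grayscale images (lists of lists).
    Returns a map counting how often each pixel changed; high counts
    suggest foreground, zero counts suggest background.
    """
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(cur[y][x] - prev[y][x]) > diff_thresh:
                    acc[y][x] += 1
    return acc

def seeds_from_map(acc, fg_min):
    """Pixels that moved in at least fg_min frame pairs become object seeds;
    pixels that never moved become background seeds for Graph-Cut."""
    fg = {(y, x) for y, row in enumerate(acc) for x, v in enumerate(row) if v >= fg_min}
    bg = {(y, x) for y, row in enumerate(acc) for x, v in enumerate(row) if v == 0}
    return fg, bg
```

The seed sets would then be passed to a graph-cut solver as hard constraints.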

Identifying the Moving Object to Recognize the Location of Zone in Multi-Video (구역단위 위치인식을 위한 다중카메라에서의 이동객체 식별 방법)

  • Lee, Seung-Cheol;Lee, Guee-Sang;Choi, Deok-Jai;Kim, Soo-Hyung
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.1165-1168
    • /
    • 2005
  • Video devices are used to gather a great deal of information in indoor environments; one kind is information identifying a moving object. Methods to identify a moving object include face recognition, gait recognition, and analysis of the hue histogram of the clothes. Hue data are effective in a multi-camera environment. In this paper, we review existing research on identifying moving objects in multi-camera environments and identify its problems; finally, we present enhanced methods to solve those problems. In the future, the method can be used to recognize the zone-level location of an object in a ubiquitous home.

  • PDF
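The clothing-hue matching the abstract mentions can be sketched as a histogram-intersection test (a minimal illustration with assumed bin count and threshold, not the paper's implementation):

```python
def hue_histogram(hues, bins=18):
    """Quantize hue values (0..359 degrees) into a normalized histogram."""
    hist = [0.0] * bins
    for h in hues:
        hist[int(h) * bins // 360] += 1
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def same_person(hues_cam1, hues_cam2, thresh=0.7):
    """Match a moving object across two cameras by clothing hue distribution."""
    return histogram_intersection(hue_histogram(hues_cam1),
                                  hue_histogram(hues_cam2)) >= thresh
```

Hue is used rather than raw RGB because it is comparatively stable under the lighting differences between cameras.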

Resource Efficient AI Service Framework Associated with a Real-Time Object Detector

  • Jun-Hyuk Choi;Jeonghun Lee;Kwang-il Hwang
    • Journal of Information Processing Systems
    • /
    • v.19 no.4
    • /
    • pp.439-449
    • /
    • 2023
  • This paper deals with a resource-efficient artificial intelligence (AI) service architecture for multi-channel video streams. As the AI service, we consider an object detection model, the most representative model for video applications. Since most object detection models are designed for a single-channel video stream, additional resources are inevitably required to process multi-channel streams. We therefore propose a resource-efficient AI service framework that can be associated with various AI service models. The framework has a modular architecture consisting of adaptive frame control (AFC) Manager, multiplexer (MUX), adaptive channel selector (ACS), and YOLO interface units. To run only a single YOLO process regardless of the number of channels, we propose a novel approach that efficiently handles multi-channel input streams. Experiments show that the framework performs the object detection service with minimal resource utilization even with multi-channel streams, and that each service can be guaranteed within its deadline.
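One simple way to feed several channel queues into a single detector process, in the spirit of the MUX unit described above, is round-robin interleaving. This is a hypothetical sketch, not the framework's actual implementation:

```python
from collections import deque

class FrameMux:
    """Round-robin multiplexer: interleaves frames from N channel queues so a
    single detector consumes one stream. Items come out as (channel_id, frame)."""

    def __init__(self, n_channels):
        self.queues = [deque() for _ in range(n_channels)]
        self._next = 0

    def push(self, channel, frame):
        self.queues[channel].append(frame)

    def pop(self):
        """Return the next (channel, frame) pair in round-robin order,
        skipping empty channels, or None when all queues are empty."""
        for _ in range(len(self.queues)):
            ch = self._next
            q = self.queues[ch]
            self._next = (self._next + 1) % len(self.queues)
            if q:
                return ch, q.popleft()
        return None
```

Tagging each frame with its channel id lets the detector's results be demultiplexed back to the right stream afterwards.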

Multi-channel Video Analysis Based on Deep Learning for Video Surveillance (보안 감시를 위한 심층학습 기반 다채널 영상 분석)

  • Park, Jang-Sik;Wiranegara, Marshall;Son, Geum-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.6
    • /
    • pp.1263-1268
    • /
    • 2018
  • In this paper, a video analysis method is proposed to implement a video surveillance system with deep-learning object detection and a probabilistic data association filter for tracking multiple objects, and its implementation on a GPU is suggested. The proposed technique performs object detection and object tracking sequentially: the deep learning network uses ResNet for object detection, and a probabilistic data association filter is applied for multiple-object tracking. The technique can be used to detect intruders trespassing in a restricted area or to count the number of people entering a specified area. Simulations and experiments show that 48 channels of video can be analyzed at about 27 fps, and that real-time video analysis is possible through the RTSP protocol.
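The detection-then-tracking step can be illustrated with a greedy nearest-neighbour association sketch. Note the hedge: the paper uses a probabilistic data association filter, which weights all detections inside the gate probabilistically; the greedy version below is only a simplified stand-in, and all names and the gate value are assumptions.

```python
def associate(tracks, detections, gate=50.0):
    """Greedily assign detections to track positions.

    tracks, detections: lists of (x, y) centroids. Each track takes the
    closest unused detection within the gating distance; unmatched tracks
    get no pair. Returns a list of (track_index, detection_index) pairs.
    """
    pairs, used = [], set()
    for ti, (tx, ty) in enumerate(tracks):
        best, best_d = None, gate
        for di, (dx, dy) in enumerate(detections):
            if di in used:
                continue
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = di, d
        if best is not None:
            pairs.append((ti, best))
            used.add(best)
    return pairs
```

In the surveillance setting, the per-frame detector output (e.g. person boxes) would supply the detection centroids, and the associated pairs update the track states.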

A Semantic Video Object Tracking Algorithm Using Contour Refinement (윤곽선 재조정을 통한 의미 있는 객체 추적 알고리즘)

  • Lim, Jung-Eun;Yi, Jae-Youn;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.6
    • /
    • pp.1-8
    • /
    • 2000
  • This paper describes a semi-automatic algorithm for semantic video object tracking. In the semi-automatic method, a user specifies an object of interest in the first frame, and the specified object is then tracked in the remaining frames. The proposed algorithm consists of three steps: object boundary projection, uncertain area extraction, and boundary refinement. The object boundary is projected from the previous frame to the current frame using motion estimation, and uncertain areas are extracted via two modules: an ME (motion estimation) error test and a color similarity test. From the extracted uncertain areas, the exact object boundary is then obtained by boundary refinement. Simulation results show that the proposed method provides efficient tracking results for various video sequences compared to previous methods.

  • PDF
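The first two steps above — projecting the previous boundary by the estimated motion, then flagging boundary pixels that fail a color similarity test as "uncertain" — can be sketched minimally (a toy illustration with a single global motion vector and an assumed tolerance; the paper's ME error test is separate and not shown):

```python
def project_boundary(boundary, motion):
    """Shift the previous frame's object boundary by the estimated
    motion vector (dy, dx)."""
    dy, dx = motion
    return [(y + dy, x + dx) for (y, x) in boundary]

def uncertain_area(prev, cur, boundary, tol=30):
    """Color similarity test: projected-boundary pixels whose intensity
    changed by more than tol are 'uncertain' and go to refinement.

    prev, cur: dicts mapping (y, x) -> intensity; boundary: (y, x) pixels."""
    return {p for p in boundary if abs(cur.get(p, 0) - prev.get(p, 0)) > tol}
```

Only the uncertain pixels need the costly refinement step; the rest of the projected boundary is accepted as-is.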

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.83-98
    • /
    • 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while they watch TV. However, retrieving the desired information takes a long time, because the current internet presents too much information that is not required; consequently, this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots that interact with users: when users click an object on the video, they can instantly see additional information related to it. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; (3) set an interactive action linked to pages or hyperlinks. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time in step (2). With wireWAX they can save much of the time needed to set an object's location and display time, because wireWAX uses a vision-based annotation method, but they must wait while objects are detected and tracked. It is therefore desirable to reduce the time spent in step (2) by combining the benefits of manual and vision-based annotation. This paper proposes a novel annotation method that lets the annotator easily annotate based on the face area, in two steps: a pre-processing step and an annotation step. Pre-processing is needed so that the system can detect shots for users who want to find video content easily.
The pre-processing step is as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster shots by similarity and align them as shot sequences; and 3) detect and track faces in every shot of a shot sequence and save the results into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence, and then a keyframe of a shot in that sequence; 2) the annotator annotates objects at positions relative to the actor's face on the selected keyframe, and the same objects are then annotated automatically, through to the end of the shot sequence, on every shot with a detected face area; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after annotation, such as wrongly aligned shots, wrongly detected faces, and inaccurate object locations. Users can also interpolate the positions of objects deleted by the feedback step, and finally save the annotated object data into the interactive object metadata. The paper presents an interactive video authoring system implemented to verify the proposed annotation method, evaluated in terms of object annotation time and a user study. On average, object annotation with the proposed tool was twice as fast as with existing authoring tools; occasionally it took longer, because wrong shots were detected during pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems: 19 recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), designed by IBM for evaluating systems. The evaluation showed that the proposed tool scored about 10% higher for authoring interactive video than the other interactive video authoring systems.
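The color-histogram-based shot boundary detection used in pre-processing step 1) can be sketched as follows (a minimal grayscale version with assumed bin count and threshold; the paper operates on color histograms):

```python
def frame_histogram(frame, bins=8, max_val=256):
    """Quantize a flat list of pixel intensities into a coarse histogram."""
    hist = [0] * bins
    for v in frame:
        hist[v * bins // max_val] += 1
    return hist

def detect_shot_boundaries(frames, thresh):
    """Declare a shot boundary at frame i where the absolute histogram
    difference between frames i-1 and i exceeds thresh."""
    hists = [frame_histogram(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        d = sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i]))
        if d > thresh:
            cuts.append(i)
    return cuts
```

Histogram differences tolerate object motion within a shot while still reacting strongly to hard cuts, which is why they are a standard shot-boundary cue.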

A study on automatic extraction of a moving object using optical flow (Optical flow 이론을 이용한 움직이는 객체의 자동 추출에 관한 연구)

  • 정철곤;김경수;김중규
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.50-53
    • /
    • 2000
  • In this work, a new algorithm that automatically extracts a moving object from video images is presented. To extract the moving object, velocity vectors are estimated for each frame of the video. Using the estimated velocity vectors, the position of the object is determined; the coordinates of the object are initialized as a seed, and the moving object is automatically segmented in the image plane by the region growing method. Application to sequential images shows that a moving object can be extracted.

  • PDF
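The final segmentation step above — growing a region outward from the seed placed at the detected object position — can be sketched as follows (a minimal intensity-based version; the tolerance and the 4-connectivity are assumptions):

```python
def region_grow(image, seed, tol=10):
    """Grow a region from the seed pixel, adding 4-connected neighbours
    whose intensity is within tol of the seed's intensity.

    image: 2-D list of intensities; seed: (y, x). Returns a set of pixels."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if abs(image[y][x] - base) <= tol:
            region.add((y, x))
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region
```

In the paper's pipeline the seed comes from the optical-flow velocity field, so the grown region tracks the mover rather than a user-chosen point.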

Uncertain Region Based User-Assisted Segmentation Technique for Object-Based Video Editing System (객체기반 비디오 편집 시스템을 위한 불확실 영역기반 사용자 지원 비디오 객체 분할 기법)

  • Yu Hong-Yeon;Hong Sung-Hoon
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.5
    • /
    • pp.529-541
    • /
    • 2006
  • In this paper, we propose a semi-automatic segmentation method that can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique: a user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object, based on the object boundary information of the previous frame. The proposed method shows stable and efficient results suitable for many digital video applications such as multimedia content authoring, content-based coding, and indexing. Based on this result, we have developed an object-based video editing system with several convenient editing functions.

  • PDF

Temporal-based Video Retrieval System (시간기반 비디오 검색 시스템)

  • Lee, Ji-Hyun;Kang, Oh-Hyung;Na, Do-Won;Lee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.631-634
    • /
    • 2005
  • Traditional database systems have used models that support operations and relationships based on simple intervals. Video data models are required to support the temporal paradigm, various object and temporal operations, and efficient retrieval and browsing. Since the video model is based on the object-oriented paradigm, I present an entire model structure for video data through the design of metadata used as the logical schema of video, the attributes and operations of objects, and inheritance and annotation. By applying the temporal paradigm through the definition of time points and time intervals in the object-oriented model, video information can be used more efficiently over time variation.

  • PDF