• Title/Summary/Keyword: moving object


Moving Objects Modeling for Supporting Content and Similarity Searches (내용 및 유사도 검색을 위한 움직임 객체 모델링)

  • 복경수;김미희;신재룡;유재수;조기형
    • Journal of Korea Multimedia Society / v.7 no.5 / pp.617-632 / 2004
  • Video data includes moving objects whose spatial positions change over time. In this paper, we propose a new modeling method for moving objects contained in video data. To support effective retrieval, the proposed model represents the spatial position and size of a moving object, as well as its visual features and its trajectory in terms of direction, distance, and speed over time. It therefore supports various types of retrieval, such as visual-feature-based, distance-based, and trajectory-based similarity retrieval, as well as weighted combinations of these.
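
The weighted mixed-type retrieval described above can be sketched as a weighted combination of per-aspect similarity scores. The aspect names, weights, and scores below are illustrative assumptions, not the paper's actual scoring functions.

```python
# Weighted combination of per-aspect similarity scores for moving-object
# retrieval. Aspect names and weights are illustrative assumptions.

def combined_similarity(scores, weights):
    """Combine per-aspect similarities (each in [0, 1]) into one score.

    scores  : dict mapping aspect name -> similarity in [0, 1]
    weights : dict mapping aspect name -> non-negative weight
    """
    total_w = sum(weights.values())
    if total_w == 0:
        raise ValueError("at least one weight must be positive")
    return sum(weights[k] * scores[k] for k in weights) / total_w

# Example: emphasize trajectory similarity over the other aspects.
query_scores = {"visual": 0.9, "distance": 0.6, "trajectory": 0.4}
w = {"visual": 1.0, "distance": 1.0, "trajectory": 2.0}
print(combined_similarity(query_scores, w))  # (0.9 + 0.6 + 0.8) / 4 = 0.575
```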


Method for Extracting Features of Conscious Eye Moving for Exploring Space Information (공간정보 탐색을 위한 의식적 시선 이동특성 추출 방법)

  • Kim, Jong-Ha;Jung, Jae-Young
    • Korean Institute of Interior Design Journal / v.25 no.2 / pp.21-29 / 2016
  • This study estimated the traits of conscious eye movement using the halls of subway stations as its objects. For the estimation, observation data from eye-tracking were matched with the experiment images, and an independently produced program was used to analyze eye movement in the selected sections, providing the ground for clarifying the traits of space-users' eye movement. The outcomes can be summarized as follows. First, the independently produced program provided a method for coding the large amount of observation data, which greatly cut down the analysis time needed to identify the traits of conscious eye movement. Accordingly, including the eye's intentionality in the method for extracting the characteristics of eye movement made it possible to organize how the gaze entered and exited particular objects over the course of observation. Second, examination of eye movement in each area surrounding the object factors showed that [out]→[in] movement, in which the line of sight goes from the surrounding area to the object, characteristically moved from the left-top (Area I) of the selected object to the object, while [in]→[out] movement, from the inside of the object to the outside, also moved to the left-top (Area I). Overall, there was much eye movement from the left and right tops (Areas I, II) to the object, but eye movement to the outside was found to go to the left-top (Area I), the right-middle (Area IV), and the right-top (Area II). Third, to determine whether there was any intense eye movement toward a particular factor, dominant standards were presented for analysis, which showed much eye movement from the tops (Areas I, II) to sections 1 and 2. The [in] eye movements were [I→A] (23.0%), [I→B] (16.1%), and [II→B] (13.8%), while the [out] movements were [A→I] (14.8%), [B→I] (13.6%), [A→II] (11.4%), [B→IV] (11.4%), and [B→II] (10.2%). Though eye movement toward objects took place from specific directions (areas), movement from the objects to the outside was dispersed widely across different areas.

Real-time Hausdorff Matching Algorithm for Tracking of Moving Object (이동물체 추적을 위한 실시간 Hausdorff 정합 알고리즘)

  • Jeon, Chun;Lee, Ju-Sin
    • The KIPS Transactions:PartB / v.9B no.6 / pp.707-714 / 2002
  • This paper presents a real-time Hausdorff matching algorithm for tracking a moving object acquired from an active camera. The proposed method uses the edge image of the object as its model and uses the Hausdorff distance as the cost function for matching hypotheses against the model. To enable real-time processing, a high-speed approach to calculating the Hausdorff distance and a half cross matching method that improves on existing search methods are also presented. The experimental results demonstrate that the proposed method can accurately track a moving object in real time.
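
The Hausdorff distance used as the cost function above can be sketched as follows. This is a plain O(n·m) computation over two edge-point sets; the paper's high-speed variant and half cross matching are not reproduced.

```python
# Directed and symmetric Hausdorff distance between two edge-point sets.
import math

def directed_hausdorff(A, B):
    """max over a in A of (min over b in B of Euclidean distance)."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance: the larger of the two directed values."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Toy example: model and scene edge points (illustrative, not real edges).
model = [(0, 0), (1, 0), (0, 1)]
scene = [(0, 0), (1, 0), (0, 3)]
print(hausdorff(model, scene))  # 2.0: scene point (0,3) is 2 away from (0,1)
```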

Moving Object Trajectory based on Kohonen Network for Efficient Navigation of Mobile Robot

  • Jin, Tae-Seok
    • Journal of information and communication convergence engineering / v.7 no.2 / pp.119-124 / 2009
  • In this paper, we propose a novel approach to estimating the real-time moving trajectory of an object. The object's position is obtained from the image data of a CCD camera, while a state estimator predicts its linear and angular velocities. To overcome the uncertainties and noise in the input data, an Extended Kalman Filter (EKF) and neural networks are used cooperatively. Since the EKF must approximate a nonlinear system with a linear model in order to estimate the states, errors and uncertainties remain. To resolve this problem, Kohonen networks, which adapt readily to memorizing input-output relationships, are utilized for the nonlinear region. In addition, the Kohonen network, as a kind of neural network, can effectively adapt to dynamic variations and is robust against noise. This approach is motivated by the observation that the Kohonen network is a type of self-organizing map and is spatially oriented, which makes it suitable for determining the trajectories of moving objects. The superiority of the proposed algorithm over the EKF is demonstrated through real experiments.
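
A single Kohonen (self-organizing map) update step, the building block of the network the abstract relies on, can be sketched as follows. The one-dimensional map size, learning rate, and neighborhood radius are assumptions for illustration.

```python
# Minimal 1-D Kohonen (self-organizing map) update step: find the
# best-matching unit (BMU), then pull it and its neighbors toward the input.
import math

def som_step(weights, x, lr=0.5, radius=1.0):
    """One Kohonen update on a 1-D map of scalar weights."""
    # BMU: the unit whose weight is closest to the input x.
    bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    for i in range(len(weights)):
        # Gaussian neighborhood centered on the BMU index.
        h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
        weights[i] += lr * h * (x - weights[i])
    return weights

w = [0.0, 0.5, 1.0]
som_step(w, 0.9)           # BMU is index 2; it moves closest to 0.9
print(w[2] > w[1] > w[0])  # the map's spatial ordering is preserved
```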

Background memory-assisted zero-shot video object segmentation for unmanned aerial and ground vehicles

  • Kimin Yun;Hyung-Il Kim;Kangmin Bae;Jinyoung Moon
    • ETRI Journal / v.45 no.5 / pp.795-810 / 2023
  • Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) require advanced video analytics for tasks such as moving object detection and segmentation, which has led to increasing demand for these methods. We propose a zero-shot video object segmentation method specifically designed for UAV and UGV applications that focuses on the discovery of moving objects in challenging scenarios. This method employs a background memory model that enables training from sparse annotations along the time axis, utilizing temporal modeling of the background to detect moving objects effectively. The proposed method addresses the limitations of existing state-of-the-art methods, which detect salient objects within images regardless of their movement. In particular, our method achieved mean J and F values of 82.7 and 81.2, respectively, on the DAVIS'16 dataset. We also conducted extensive ablation studies highlighting the contributions of various input compositions and combinations of training datasets. In future developments, we will integrate the proposed method with additional systems, such as tracking and obstacle avoidance functionalities.

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa;Sharma, Siddharth;Lin, Sang-Lin;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.42-45 / 2010
  • This paper proposes an intelligent video surveillance system for human object tracking. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, the object in the video signal is extracted using background subtraction. Then, the object region is examined to determine whether it is human. For this recognition, a region-based shape descriptor, the angular radial transform (ART) in MPEG-7, is used to learn and train the shapes of human bodies. When the object is judged to be a human or something else to be investigated, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) controllable camera follows the moving object using its motion information. This paper performs the simulation with real CCTV cameras and their communication protocol. According to the experiments, the proposed system is able to track a moving object (human) automatically not only in the image domain but also in real 3-D space. The proposed system reduces the need for human supervisors and improves surveillance efficiency through computer vision techniques.
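
The background-subtraction step at the front of this pipeline can be sketched as per-pixel thresholding against a reference background. The threshold and toy frames are illustrative assumptions, and the ART descriptor and PTZ control are omitted.

```python
# Minimal background subtraction: pixels differing from a reference
# background by more than a threshold are marked as the moving object.
# Frames are lists of grayscale rows (values 0-255).

def subtract_background(frame, background, threshold=25):
    """Return a binary mask: 1 where the frame departs from the background."""
    return [
        [1 if abs(p - b) > threshold else 0 for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],
              [10, 210, 12]]
print(subtract_background(frame, background))
# [[0, 1, 0], [0, 1, 0]] — the bright column is flagged as foreground
```

In a real system the background model would also be updated over time (e.g. a running average) to absorb lighting changes.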


Implementation of Stereo Object Tracking Simulator using Optical JTC (광 JTC를 이용한 스테레오 물체추적 시뮬레이터의 구현)

  • Lee, Jae-Soo;Kim, Kyu-Tae;Kim, Eun-Soo
    • Journal of the Korean Institute of Telematics and Electronics D / v.36D no.8 / pp.68-78 / 1999
  • In a typical stereo vision system, when the focus points of the left and right images are mismatched or the moving object is not in the center of the image, the observer not only becomes fatigued and loses the three-dimensional effect but also finds it hard to track the moving object. A stereo object tracking system can therefore track the moving object by controlling the convergence angle to minimize stereo disparity and controlling pan/tilt to keep the moving object in the center of the image. In this paper, as a new approach to stereo object tracking, we introduce a stereo object tracking simulator based on an optical JTC system capable of adaptive tracking. With this simulator, experimental results can be predicted and analyzed, and the possibility of real-time implementation of a stereo object tracking system is suggested through optical experiments, even in the presence of background noise.
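
An optical JTC locates a target through correlation peaks; a minimal digital analogue estimates stereo disparity as the horizontal shift that maximizes the correlation between left- and right-image rows. The synthetic signals and search range below are assumptions, not the paper's optical setup.

```python
# Digital analogue of correlation-based disparity estimation: slide one
# 1-D intensity profile over the other and keep the shift with the
# highest correlation score.

def best_shift(left, right, max_shift=3):
    """Shift (in pixels) of `right` that best aligns it with `left`."""
    def score(s):
        pairs = [(left[i], right[i - s]) for i in range(len(left))
                 if 0 <= i - s < len(right)]
        return sum(l * r for l, r in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

left  = [0, 0, 5, 9, 5, 0, 0, 0]
right = [0, 5, 9, 5, 0, 0, 0, 0]  # same profile, shifted left by one pixel
print(best_shift(left, right))  # 1: the object sits one pixel apart
```

Minimizing this disparity by adjusting the convergence angle is exactly the control goal the abstract describes.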


A Data Model for Past and Future Location Process of Moving Objects (이동 객체의 과거 및 미래 위치 연산을 위한 데이터 모델)

  • Jang, Seung-Youn;Ahn, Yoon-Ae;Ryu, Keun-Ho
    • The KIPS Transactions:PartD / v.10D no.1 / pp.45-56 / 2003
  • In the wireless environment, with the development of technology able to obtain the location of spatiotemporal moving objects, various application systems have been developed, such as vehicle tracking systems, forest fire management systems, and digital battlefield systems. These application systems need a data model that can represent and process the continuous change of moving objects. However, if moving objects are expressed with a relational model, it is not possible to store the location for every instant. Also, existing moving-object data models have a weak point: they constrain query times to those managed in the database, such as the past, the present, and the near future. Therefore, in this paper, we propose a data model that can not only express the continuous movement of moving points and moving regions but also process operations at any query time by using shape-change processing and location-determination functions for the past and future. In addition, we apply the proposed model to a forest fire management system and evaluate its validity through the implementation results.
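
The idea behind answering queries at any time can be sketched for a moving point stored as sampled (time, position) pairs: interpolate inside the recorded past, extrapolate from the last velocity into the future. The sample data and the linear-motion assumption are illustrative, not the paper's actual operators.

```python
# Position of a moving point at an arbitrary query time, from sparse samples.

def position_at(samples, t):
    """samples: list of (time, x, y) sorted by time; returns (x, y) at t."""
    if t <= samples[0][0]:
        return samples[0][1], samples[0][2]
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        if t <= t1:                      # interpolate inside a past segment
            r = (t - t0) / (t1 - t0)
            return x0 + r * (x1 - x0), y0 + r * (y1 - y0)
    # extrapolate into the future using the last segment's velocity
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    r = (t - t0) / (t1 - t0)
    return x0 + r * (x1 - x0), y0 + r * (y1 - y0)

track = [(0, 0.0, 0.0), (10, 10.0, 0.0), (20, 10.0, 10.0)]
print(position_at(track, 5))   # (5.0, 0.0): past query, interpolated
print(position_at(track, 25))  # (10.0, 15.0): future query, extrapolated
```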

A study of object trace using sensor information (센서 정보를 이용한 객체 추적에 대한 연구)

  • Kim, Kwan-Joong
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.4 / pp.1921-1925 / 2013
  • In this paper, we propose a method for tracing a real image object that enters a monitored area. Tracking a recognized object makes it possible to detect its moving pattern once it enters the area. This mechanism can be applied to dangerous or restricted areas, where the intrusion of an unauthorized object is identified and the object's moving pattern is traced and detected.

Robust 3D visual tracking for moving object using pan/tilt stereo cameras (Pan/Tilt스테레오 카메라를 이용한 이동 물체의 강건한 시각추적)

  • Cho, Che-Seung;Chung, Byeong-Mook;Choi, In-Su;Nho, Sang-Hyun;Lim, Yoon-Kyu
    • Journal of the Korean Society for Precision Engineering / v.22 no.9 s.174 / pp.77-84 / 2005
  • In most vision applications, we are frequently confronted with determining the position of an object continuously. Generally, target tracking requires intertwined processes composed of a tracking process and a control process. Each of these processes can be studied independently, but in an actual implementation we must consider the interaction between them to achieve robust performance. In this paper, robust real-time visual tracking against a complex background is considered. A common approach to increasing the robustness of a tracking system is to use known geometric models (CAD models, etc.) or to attach a marker. When an object has an arbitrary shape or it is difficult to attach a marker to it, we present a method to track the target easily by specifying the color and shape of a part of the object in advance. Robust detection is achieved by integrating voting-based visual cues. A Kalman filter is used to estimate the motion of the moving object in 3D space, and the algorithm is tested on a pan/tilt robot system. Experimental results show that the fusion of cues and motion estimation gives the tracking system robust performance.
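
The Kalman-filter motion estimation named in the abstract can be sketched, per coordinate axis, as a constant-velocity filter. The process and measurement noise levels and the measurement sequence are illustrative assumptions.

```python
# Constant-velocity Kalman filter for one coordinate axis (applied per axis
# for 3-D motion). State is (position, velocity); only position is measured.

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    x, v = measurements[0], 0.0            # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
    out = []
    for z in measurements[1:]:
        # Predict with the constant-velocity model x' = x + dt*v.
        x, v = x + dt * v, v
        P = [[P[0][0] + dt*(P[0][1] + P[1][0]) + dt*dt*P[1][1] + q,
              P[0][1] + dt*P[1][1]],
             [P[1][0] + dt*P[1][1],
              P[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x
        x, v = x + K[0]*y, v + K[1]*y
        P = [[(1 - K[0])*P[0][0], (1 - K[0])*P[0][1]],
             [P[1][0] - K[1]*P[0][0], P[1][1] - K[1]*P[0][1]]]
        out.append((x, v))
    return out

# Object moving at ~1 unit/step with noisy position readings.
zs = [0.0, 1.1, 1.9, 3.2, 3.9, 5.1]
est = kalman_track(zs)
print(est[-1])  # final (position, velocity); velocity settles near 1
```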