• Title/Summary/Keyword: Motion Object Location

무향 변환 기반 필터링을 이용한 전술표적 추적 성능 연구 (Study on Tactical Target Tracking Performance Using Unscented Transform-based Filtering)

  • 변재욱;정효영;이새움;김기성;김기선
    • 한국군사과학기술학회지 / Vol.17 No.1 / pp.96-107 / 2014
  • Tracking tactical objects is a fundamental task in networked modern warfare. A geodetic coordinate system based on longitude, latitude, and height is suitable for representing the location of tactical objects when multi-platform data fusion is considered. The motion of a tactical object, described by a dynamic model, requires appropriate filtering to overcome the system and measurement noise in information acquired from multiple sensors. This paper introduces a filter suitable for multi-sensor data fusion and tactical object tracking, in particular the unscented transform (UT) and its details. The UT in the Unscented Kalman Filter (UKF) uses a few samples to estimate the statistics of nonlinearly propagated parameters, and it offers better performance and lower complexity than the conventional linearization method. We show the effects of UT-based filtering via simulation of a practical tactical object tracking scenario.
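The sigma-point idea behind the unscented transform can be illustrated in a few lines. The following is a minimal one-dimensional sketch (not the authors' implementation, and with an assumed scaling parameter `kappa`): sigma points are drawn around the mean, pushed through a nonlinear function, and reweighted to estimate the transformed mean and variance.

```python
import math

def unscented_transform_1d(mean, var, f, kappa=2.0):
    """Estimate mean/variance of f(x) for x ~ N(mean, var) via sigma points."""
    n = 1  # state dimension in this scalar sketch
    spread = math.sqrt((n + kappa) * var)
    # Sigma points: the mean plus symmetric offsets.
    sigmas = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    # Propagate each sigma point through the nonlinearity, then reweight.
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For a linear function the transform is exact: y = 2x + 1 with x ~ N(3, 4)
m, v = unscented_transform_1d(3.0, 4.0, lambda x: 2.0 * x + 1.0)
```

For the linear test function this recovers the exact transformed mean 7 and variance 16; the same machinery applied to a genuinely nonlinear `f` is what gives the UT its advantage over first-order linearization.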

Recognizing Static Target in Video Frames Taken from Moving Platform

  • Wang, Xin;Sugisaka, Masanori;Xu, Wenli
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2003 / pp.673-676 / 2003
  • This paper deals with the problem of moving object detection and location in computer vision. We describe a new object-dependent motion analysis method for tracking a target in an image sequence taken from a moving platform. We tackle the task in three steps. First, we build an active contour model of the target to obtain a set of low-energy points, called kernels. Then we detect interest points in two windows, called tracking windows, around each kernel. In the third step, we decide the correspondence of the detected interest points between tracking windows by a probabilistic relaxation method. In this algorithm, the detection process is iterative and begins with the detection of all potential correspondence pairs in consecutive images. Each pair of corresponding points is then iteratively recomputed to obtain a globally optimal set of pairwise correspondences.

객체 지향적 슬레이브 로봇들로 구성된 홈서비스 로봇 시스템의 구현 (Implementation of Home Service Robot System consisting of Object Oriented Slave Robots)

  • 고창건;고대건;권혜진;박정일;이석규
    • 대한전기학회:학술대회논문집 / 2007 Symposium Proceedings, Information and Control Division / pp.337-339 / 2007
  • This paper proposes a new paradigm for the cooperation of a multi-robot system for home service. For the localization of each robot, the master robot collects the location of each robot through communication between RFID tags on the floor and an RFID reader attached to the bottom of each robot. The master robot communicates with the slave robots via wireless LAN to monitor their motion and to issue commands based on the information reported by the slave robots. The operator may also send commands to the slave robots through the HRI (Human-Robot Interaction) screen on the master robot. Cooperation among multiple robots enhances performance compared with a single robot.
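The floor-tag localization scheme reduces to a table lookup, which can be sketched as follows (the tag IDs and coordinates are hypothetical, not from the paper): the master robot keeps a map from floor RFID tag IDs to grid coordinates and resolves each slave's position from the tag its reader last detected.

```python
# Hypothetical map from floor RFID tag IDs to (x, y) grid coordinates in metres.
TAG_POSITIONS = {
    "tag-00": (0.0, 0.0),
    "tag-01": (0.5, 0.0),
    "tag-10": (0.0, 0.5),
    "tag-11": (0.5, 0.5),
}

def locate(tag_id):
    """Resolve a robot's position from the last floor tag its reader detected."""
    if tag_id not in TAG_POSITIONS:
        raise KeyError(f"unknown floor tag: {tag_id}")
    return TAG_POSITIONS[tag_id]

# The master robot resolving the tags reported by two slave robots.
positions = {robot: locate(tag) for robot, tag in
             [("slave-1", "tag-01"), ("slave-2", "tag-10")]}
```

The positional resolution of such a scheme is bounded by the tag spacing on the floor, which is why the master also cross-checks motion reports over the wireless LAN.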

Apple II P.C.를 이용한 Video Image Processing과 인체계측 및 동작분석에의 응용 (Video Image Processing on Apple II P.C. and Its Applications to Anthropometry and Motion Analysis)

  • 이상도;정중선;이근부
    • 대한인간공학회지 / Vol.4 No.1 / pp.11-16 / 1985
  • The object of this research is to develop an interactive computerized graphics program for graphic output of the velocity, acceleration, and motion range of a body-task reference point (e.g., center of gravity, joint location). Human motions can be reproduced by scanning the vidicon image at a rate of 60 Hz, and the results are stored in the memory of an Apple II P.C. The results of this study can be extended to the simulation and reproduction of human motions for optimal task design.

Moving Object Tracking Using WT and FI

  • Song, Hag-hyun;Kwak, Yoon-shik;Kim, Yoon-ho;Ryu, Kwang-Ryol
    • 한국정보통신학회논문지 / Vol.6 No.7 / pp.1126-1132 / 2002
  • In this paper, we propose a new scheme for motion tracking based on fuzzy inference (FI) and the wavelet transform (WT) applied to image sequences. First, we use the WT to extract features of the dynamic image. The coefficients of the 2-level DWT tend to cluster around the locations of important features in the images, such as edge discontinuities, peaks, and corners, but these features vary over time with environmental conditions. Second, to reduce the spatio-temporal error, we develop a fuzzy inference algorithm. Experiments are performed to verify the validity and applicability of the proposed system. The proposed method is relatively simple compared with traditional spatial-domain methods, and it is also well suited for motion tracking under varying illumination.
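The claim that DWT detail coefficients cluster at feature locations can be seen with a one-level Haar transform on a 1-D signal; this is a deliberately simplified stand-in for the paper's 2-level 2-D DWT.

```python
def haar_dwt_1level(signal):
    """One level of the Haar DWT: averages (approximation) and half-differences
    (detail) over non-overlapping pairs of samples."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2.0)
        detail.append((a - b) / 2.0)
    return approx, detail

# A step edge inside the second sample pair: detail coefficients are zero on
# the flat regions and large only at the discontinuity.
step = [0, 0, 0, 8, 8, 8, 8, 8]
approx, detail = haar_dwt_1level(step)
```

In 2-D the same effect concentrates large coefficients along edges and corners, which is what makes them usable as tracking features; the fuzzy inference stage then compensates for the coefficients drifting as illumination changes.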

Object Tracking Algorithm for Multimedia System

  • Kim, Yoon-ho;Kwak, Yoon-shik;Song, Hag-hyun;Ryu, Kwang-Ryol
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2002 Fall Conference / pp.217-221 / 2002
  • In this paper, we propose a new scheme for motion tracking based on fuzzy inference (FI) and the wavelet transform (WT) applied to image sequences. First, we use the WT to extract features of the dynamic image. The coefficients of the 2-level DWT tend to cluster around the locations of important features in the images, such as edge discontinuities, peaks, and corners, but these features vary over time with environmental conditions. Second, to reduce the spatio-temporal error, we develop a fuzzy inference algorithm. Experiments are performed to verify the validity and applicability of the proposed system. The proposed method is relatively simple compared with traditional spatial-domain methods, and it is also well suited for motion tracking under varying illumination.

GIS를 이용한 시공간 이동 객체 관리 시스템 (A Spatiotemporal Moving Objects Management System using GIS)

  • 신기수;안윤애;배종철;정영진;류근호
    • 정보처리학회논문지D / Vol.8D No.2 / pp.105-116 / 2001
  • A moving object is spatiotemporal data whose location and extent change continuously over time. Managing spatiotemporal moving objects with a conventional database system raises two problems. First, the position information that changes over time must be updated frequently. Second, because only the current state of each object is stored, no information about the past or future of the moving object can be provided. This paper therefore proposes a spatiotemporal moving objects management system that manages the history of moving objects without frequent updates and can provide position information for the past, the present, and the near future. In the proposed system, moving object information is divided into location information, which represents position, and behavior information, which represents movement characteristics. In particular, we propose a method that uses a behavior-information change-processing algorithm to retrieve the position of every object from a minimal amount of history information. The proposed method was implemented in a battlefield analysis system, which showed that the past, present, and near-future positions of real-world spatiotemporal moving objects can be managed using a relational database and a GIS.
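The idea of recovering any past or near-future position from minimal history can be sketched as follows (hypothetical data, not the paper's schema): store a record only at the timestamps where the object's behavior changed, interpolate linearly between records, and extrapolate the last heading for the near future.

```python
def position_at(history, t):
    """Interpolate an object's (x, y) at time t from sparse (time, x, y)
    records sorted by time. Straight-line motion is assumed between records;
    the last leg is extrapolated for near-future queries."""
    if t <= history[0][0]:
        return history[0][1], history[0][2]
    for (t0, x0, y0), (t1, x1, y1) in zip(history, history[1:]):
        if t <= t1:
            r = (t - t0) / (t1 - t0)
            return (x0 + r * (x1 - x0), y0 + r * (y1 - y0))
    # Near-future extrapolation from the last two records.
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    r = (t - t0) / (t1 - t0)
    return (x0 + r * (x1 - x0), y0 + r * (y1 - y0))

# Object recorded only when its behavior (here, heading) changed.
hist = [(0.0, 0.0, 0.0), (10.0, 100.0, 0.0), (20.0, 100.0, 50.0)]
p = position_at(hist, 5.0)    # halfway through the first leg
q = position_at(hist, 25.0)   # extrapolated into the near future
```

Only three records are stored, yet every position along the trajectory is recoverable, which is the trade that avoids the frequent-update problem described above.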

대비 지도와 움직임 정보를 이용한 동영상으로부터 중요 객체 추출 (Salient Object Extraction from Video Sequences using Contrast Map and Motion Information)

  • 곽수영;고병철;변혜란
    • 한국정보과학회논문지:소프트웨어및응용 / Vol.32 No.11 / pp.1121-1135 / 2005
  • This paper proposes a method for automatically extracting moving objects from video using spatiotemporal information. The key feature of the proposed method is its use of a contrast map, which brings into a computer system the visual-attention property of involuntarily focusing on salient regions that stand out from their surroundings, together with salient points. Among the three feature channels of luminance, color, and direction, the contrast map is generated by combining a luminance map and a directional map, which capture the luminance and directional features. In addition, salient points that humans regard as visually meaningful are found using the wavelet transform. The contrast map and salient points are then used to determine the approximate position and size of an attention window (AW). Next, motion information, the most distinctive feature of video, is estimated to shrink the attention window as closely as possible around the object, and the object is extracted using edge information. Canny edges are used to extract contours, and to remove background edges, horizontal and vertical candidate regions are extracted using the difference of edges (DE). The object is then extracted automatically by combining the two candidate regions with an AND operation and morphological operations. Experiments were conducted on video captured with a fixed camera, and the objects were confirmed to separate effectively from the background.
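The contrast-map construction can be sketched on a toy grid (a simplified stand-in for the paper's attention model): the luminance and directional maps are normalized to a common range, combined element-wise, and the attention window is seeded at the peak of the result.

```python
def combine_maps(lum_map, dir_map):
    """Combine normalized luminance and directional maps into a contrast map."""
    def normalize(m):
        lo = min(min(row) for row in m)
        hi = max(max(row) for row in m)
        span = (hi - lo) or 1.0
        return [[(v - lo) / span for v in row] for row in m]
    a, b = normalize(lum_map), normalize(dir_map)
    return [[(x + y) / 2.0 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def most_salient(contrast):
    """Return (row, col) of the contrast map's peak, a seed for the window."""
    best = max((v, r, c) for r, row in enumerate(contrast)
               for c, v in enumerate(row))
    return best[1], best[2]

lum = [[1, 2, 1], [2, 9, 2], [1, 2, 1]]    # bright blob at the center
drc = [[0, 1, 0], [1, 5, 1], [0, 1, 0]]    # strong orientation response there
peak = most_salient(combine_maps(lum, drc))
```

In the paper the window grown from this seed is then tightened using motion and edge information; the sketch stops at the saliency-seeding step.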

참조표를 이용한 재파지 계획기 (Regrasp Planner Using Look-up Table)

  • 조경래;이종원;김문상;송재복
    • 대한기계학회논문집A / Vol.24 No.4 / pp.848-857 / 2000
  • A pick-and-place operation in a 3-dimensional environment is a basic operation for humans and multi-purpose manipulators, but it can pose a difficult problem for such manipulators. In particular, if the object cannot be moved with a single grasp, regrasping, which can be a time-consuming process, must be carried out. Regrasping, given the initial and final poses of the target object, is the construction of a sequence of object poses compatible with the two poses in terms of grasp configuration. This paper presents a novel approach to the regrasp problem, consisting of a preprocessing stage and a planning stage. Preprocessing, which is done only once for a given robot, generates a look-up table covering the kinematically feasible task space of the end-effector over the entire workspace. Using the table, planning then automatically determines the possible intermediate locations, poses, and regrasp sequence leading from the pick-up grasp to the put-down grasp. Experiments show that the presented planner is complete over the total workspace. The regrasp planner was combined with an existing path planner.
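The two-stage structure described above (a precomputed feasibility table, then a search over intermediate poses) can be sketched as a graph search; the pose labels and table entries below are hypothetical, not the paper's data.

```python
from collections import deque

# Precomputed look-up table (hypothetical): for each object pose, the set of
# poses reachable with a single pick-and-place, i.e. without regrasping.
FEASIBLE = {
    "face_up":   {"on_side"},
    "on_side":   {"face_up", "face_down"},
    "face_down": {"on_side"},
}

def regrasp_sequence(start, goal):
    """BFS over the table for the shortest pose sequence from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in FEASIBLE.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable with this table

plan = regrasp_sequence("face_up", "face_down")
```

Because the expensive kinematic feasibility checks are paid once at table-build time, the online search reduces to this cheap traversal, which is what makes the planner complete over the tabulated workspace.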

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.13 No.8 / pp.3981-4004 / 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from an image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves accuracy in most cases. The deep learning framework Darknet is used for detection, with the necessary modifications to integrate perspective transformation, camera calibration, and un-distortion. Experiments are performed with two types of cameras, one with barrel and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion; using more points significantly enhances the accuracy of real-world mapping even without camera calibration. The perspective transformation also increases object detection efficiency by normalizing the sizes of all objects.
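The core measurement step can be sketched as follows, assuming the 3×3 homography mapping image pixels to real-space metres has already been recovered (in the paper, from the detected corner points); the matrix `H` below is a made-up example with a uniform 1 cm-per-pixel scale, not a value from the paper.

```python
import math

# Hypothetical pixel-to-metres homography, e.g. recovered from four detected
# corner correspondences between the image and the known real-space rectangle.
H = [[0.01, 0.0,  0.0],
     [0.0,  0.01, 0.0],
     [0.0,  0.0,  1.0]]

def pixel_to_world(px, py):
    """Apply the homography and de-homogenize to get real-space (x, y)."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w

def real_distance(p1, p2):
    """Distance in metres between two detected pixel locations."""
    x1, y1 = pixel_to_world(*p1)
    x2, y2 = pixel_to_world(*p2)
    return math.hypot(x2 - x1, y2 - y1)

# Two objects detected 300 px apart horizontally -> 3 m at this scale.
d = real_distance((100, 200), (400, 200))
```

With a non-trivial bottom row in `H` the division by `w` is what corrects the perspective foreshortening; lens un-distortion, as the abstract notes, has to be applied to the pixel coordinates before this mapping.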