• Title/Abstract/Keyword: 3-D localization


The 3 Dimensional Triangulation Scheme based on the Space Segmentation in WPAN

  • 이동명;이호철
    • 공학교육연구
    • /
    • Vol. 15, No. 5
    • /
    • pp.93-97
    • /
    • 2012
  • Most ubiquitous computing devices, such as stereo cameras, the ultrasonic-sensor-based MIT Cricket system, and other wireless sensor network devices, are today applied mainly to two-dimensional (2D) localization systems. A stereo camera cannot estimate the optimal location between a moving node and a beacon node in a Wireless Personal Area Network (WPAN) under Non-Line-Of-Sight (NLOS) conditions, which is a serious weakness when designing a 2D localization system for indoor environments. Moreover, the conventional 2D triangulation scheme adopted in the MIT Cricket system generally cannot estimate the three-dimensional (3D) coordinates needed to locate the moving node. Therefore, this paper proposes a 3D triangulation scheme based on space segmentation in WPAN. Measurements obtained with the proposed scheme in computer simulation are compared with geometric measurement data from the AutoCAD software system. The average error of the (x, y, z) coordinates of the moving node is 0.008 m with the proposed scheme. These results show that the localization accuracy of the proposed scheme is well suited to localization systems in WPAN.
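
The entry above does not include code; as a rough, hypothetical illustration of what estimating a 3D position from beacon ranges involves (a generic least-squares trilateration, not the authors' space-segmentation scheme), consider the sketch below. All beacon positions and ranges are made up.

```python
import numpy as np

def trilaterate_3d(beacons, ranges):
    """Estimate a 3D position from >= 4 beacon positions and measured ranges.

    Subtracting the first sphere equation from the others gives a linear
    system A x = b that can be solved by least squares.
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical beacons on the ceiling and one lower corner of a room.
beacons = [(0, 0, 3), (4, 0, 3), (0, 4, 3), (4, 4, 0)]
true_pos = np.array([1.2, 2.5, 0.8])
ranges = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
print(trilaterate_3d(beacons, ranges))  # ~ [1.2, 2.5, 0.8]
```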

무선 센서 네트워크 환경에서 3차원 근사 위치추적 기법 (Approximate 3D Localization Mechanism in Wireless Sensor Network)

  • 심재석;임유진;박재성
    • 한국통신학회논문지
    • /
    • Vol. 39B, No. 9
    • /
    • pp.614-619
    • /
    • 2014
  • In a WSN (Wireless Sensor Network)-based security surveillance system, sensors that report detected events are also required to provide the location of the area where each event occurred. Existing 2D localization schemes, which have been studied extensively, achieve fairly high accuracy in environments of constant height, but they can introduce large errors in real environments where height matters. Existing 3D localization schemes, on the other hand, require many reference nodes or complex numerical computation. The indoor security surveillance system considered in this paper, however, only needs to estimate the height of a detected object in order to decide whether it is an intruder. We therefore propose a height estimation scheme for the target object that requires neither complex computation nor a large number of reference nodes. We also measure the estimation accuracy of the proposed scheme under various scenarios to analyze its performance.

키넥트 거리센서를 이용한 실내 이동로봇의 위치인식 및 3 차원 다각평면 지도 작성 (Localization and 3D Polygon Map Building Method with Kinect Depth Sensor for Indoor Mobile Robots)

  • 권대현;김병국
    • 제어로봇시스템학회논문지
    • /
    • Vol. 22, No. 9
    • /
    • pp.745-752
    • /
    • 2016
  • We suggest an efficient Simultaneous Localization and 3D Polygon Map Building (SLAM) method with a Kinect depth sensor for mobile robots in indoor environments. In this method, Kinect depth data are separated into row planes so that scan line segments lie on each row plane. After grouping all scan line segments from all row planes into line groups, a set of 3D scan polygons is fitted from each line group. A map matching algorithm then finds pairs of scan polygons and existing map polygons in 3D, and localization is performed to obtain the correct pose of the mobile robot. For 3D map building, each 3D map polygon is created or updated by merging each matched 3D scan polygon, handling scan and map edges efficiently. The validity of the proposed 3D SLAM algorithm is demonstrated via experiments.
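
As a small, hedged sketch of one ingredient mentioned above, fitting a supporting plane to the 3D points of a line group (from which a scan polygon could be bounded), the following uses a standard SVD least-squares plane fit; it is not taken from the paper, and the sample data are invented.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to a set of 3D points (rows of `points`).

    Returns (centroid, unit normal); the normal is the right singular vector
    associated with the smallest singular value of the centered points.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal

# Hypothetical line group: points sampled from a wall x = 2 m with small noise.
rng = np.random.default_rng(0)
y, z = rng.uniform(0, 3, 50), rng.uniform(0, 2, 50)
pts = np.column_stack([2.0 + 0.005 * rng.standard_normal(50), y, z])
c, n = fit_plane(pts)
print(c, n)  # normal close to [1, 0, 0] (up to sign)
```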

스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정 (Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features)

  • 길세기;이종실;유제군;이응혁;홍승홍;신동범
    • 대한전기학회논문지:시스템및제어부문D
    • /
    • Vol. 55, No. 4
    • /
    • pp.173-180
    • /
    • 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment, and this requires accurate measurement of the relative location between the robot and those features. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. Two images are captured by parallel cameras mounted on the front of the robot, and scale-invariant features are detected in each image using SIFT (Scale-Invariant Feature Transform). The feature points of the two images are then matched, and the relative location is obtained by 3D reconstruction of the matched points. A conventional stereo camera requires precise extrinsic calibration of the two cameras and precise pixel matching between the two images; because we use two ordinary cameras with scale-invariant feature points instead, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction needs no additional sensor, and its results can simultaneously be used for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and capture 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges of less than 2 m and a maximum error of ±15 cm at ranges between 2 m and 4 m.
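
A minimal sketch of the kind of pipeline described above (SIFT detection, ratio-test matching, and linear triangulation for two parallel cameras with a 20 cm baseline) is given below using OpenCV. The intrinsics, file names, and thresholds are placeholders, and the paper's exact reconstruction procedure may differ.

```python
import cv2
import numpy as np

# Placeholder intrinsics and a 20 cm horizontal baseline between parallel cameras.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])  # right camera, 0.2 m away

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching of SIFT descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T  # 2 x N
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T

# Linear triangulation; convert homogeneous output to 3D points.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T   # N x 3 points relative to the left camera
print(X[:5])
```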

확장칼만필터를 이용한 무인잠수정의 3차원 위치평가 (3-D Localization of an Autonomous Underwater Vehicle Using Extended Kalman Filter)

  • 임종환;강철웅
    • 한국정밀공학회지
    • /
    • Vol. 21, No. 7
    • /
    • pp.130-135
    • /
    • 2004
  • This paper presents 3-D localization of an autonomous underwater vehicle (AUV). Conventional localization methods, such as LBL or SBL, require additional beacon systems, which reduces the flexibility and availability of the AUV. We use a digital compass, a pressure sensor, a clinometer, and ultrasonic sensors for localization. From the orientation and velocity information, the a priori position of the AUV is estimated by dead reckoning. With the aid of an extended Kalman filter, the a posteriori position of the AUV is then estimated using the distance between the AUV and a mother ship on the surface of the water, together with the water depth information from the pressure sensor. Simulation results show that the method can be applied in practice to autonomous navigation of the AUV.
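
The following is a minimal sketch, under assumed motion and noise models, of the kind of EKF prediction/update cycle the abstract outlines: dead-reckoning prediction of the 3D position, then an update from the slant range to the mother ship and the depth from the pressure sensor. It is illustrative only, not the authors' filter; all matrices and values are placeholders.

```python
import numpy as np

def ekf_step(x, P, v, dt, z_range, z_depth, ship_pos, Q, R):
    """One EKF cycle for a 3D AUV position x = (px, py, pz).

    Prediction: dead reckoning with velocity `v` already resolved into world
    axes (from compass/clinometer). Update: slant range to the mother ship
    on the surface and depth from the pressure sensor.
    """
    # --- prediction (dead reckoning) ---
    x_pred = x + v * dt
    P_pred = P + Q

    # --- measurement model h(x) = [range to ship, depth] ---
    d = x_pred - ship_pos
    rng = np.linalg.norm(d)
    h = np.array([rng, x_pred[2]])
    H = np.vstack([d / rng, [0.0, 0.0, 1.0]])  # Jacobian of h

    # --- update ---
    y = np.array([z_range, z_depth]) - h
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative values only.
x, P = np.zeros(3), np.eye(3)
Q, R = np.eye(3) * 0.01, np.diag([0.25, 0.01])
x, P = ekf_step(x, P, v=np.array([0.5, 0.0, -0.1]), dt=1.0,
                z_range=12.3, z_depth=4.9,
                ship_pos=np.array([10.0, 5.0, 0.0]), Q=Q, R=R)
print(x)
```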

무인모선기반 무인잠수정의 3차원 위치계측 기법에 관한 연구 (A Study on a 3-D Localization of a AUV Based on a Mother Ship)

  • 임종환;강철웅;김성근
    • 한국해양공학회지
    • /
    • Vol. 19, No. 2
    • /
    • pp.74-81
    • /
    • 2005
  • A 3-D localization method for an autonomous underwater vehicle (AUV) has been developed that overcomes the limitations of conventional localization schemes such as LBL or SBL, which reduce the flexibility and availability of the AUV. The system is composed of a mother ship (a small unmanned marine probe) on the surface of the water and an unmanned underwater vehicle in the water. The mother ship is equipped with a digital compass and a GPS for position information, and an extended Kalman filter is used for position estimation. For the localization of the AUV, we use only non-inertial sensors: a digital compass, a pressure sensor, a clinometer, and ultrasonic sensors. From the orientation and velocity information, the a priori position of the AUV is estimated by dead reckoning. Based on the extended Kalman filter algorithm, the a posteriori position of the AUV is then updated using the distance between the AUV and the mother ship on the surface, together with the depth information from the pressure sensor.

특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법 (Localization of a Monocular Camera using a Feature-based Probabilistic Map)

  • 김형진;이동화;오택준;명현
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 4
    • /
    • pp.367-371
    • /
    • 2015
  • In this paper, a novel localization method for a monocular camera using a feature-based probabilistic map is proposed. The pose of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, the camera pose is estimated by probabilistic approaches even when features are scarce, but an extra system is required because the camera alone cannot estimate the full state of the robot pose. We therefore propose an accurate localization method for a monocular camera that uses a probabilistic approach with an insufficient image dataset and without any extra system. In our system, features from the probabilistic map are projected into the image plane using a linear approximation. By minimizing the Mahalanobis distance between the projected features from the probabilistic map and the features extracted from a query image, the accurate pose of the monocular camera is estimated, starting from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in a 3D space.
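
A hedged sketch of the idea described above, an initial pose from PnP refined by minimizing the Mahalanobis distance between projected map features and measured image features, is shown below using OpenCV and SciPy. The camera matrix, map points, measurements, and per-feature covariances are all hypothetical, and the paper's linear-approximation projection of the probabilistic map is replaced here by a generic nonlinear least-squares refinement.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def project(rvec, tvec, pts3d, K):
    """Project 3D map points into the image with the pinhole model."""
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    return proj.reshape(-1, 2)

def refine_pose(pts3d, pts2d, covs2d, K, rvec0, tvec0):
    """Refine an initial PnP pose by minimizing the Mahalanobis distance
    between projected map features and measured image features.

    covs2d[i] is the 2x2 covariance of the i-th projected feature
    (assumed here to be propagated from the probabilistic map).
    """
    # Whitening so that plain least squares equals the Mahalanobis distance.
    Ls = [np.linalg.cholesky(np.linalg.inv(S)) for S in covs2d]

    def residuals(p):
        pred = project(p[:3], p[3:], pts3d, K)
        return np.concatenate([L.T @ (z - q) for L, z, q in zip(Ls, pts2d, pred)])

    p0 = np.concatenate([rvec0.ravel(), tvec0.ravel()])
    sol = least_squares(residuals, p0)
    return sol.x[:3], sol.x[3:]

# Hypothetical data: intrinsics, map points, measurements, covariances.
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
pts3d = np.random.rand(12, 3) * 2 + np.array([0, 0, 4])
pts2d = project(np.zeros(3), np.zeros(3), pts3d, K) + np.random.randn(12, 2)
covs2d = [np.eye(2) * 2.0 for _ in range(12)]
ok, rvec0, tvec0 = cv2.solvePnP(pts3d, pts2d, K, None)  # initial pose via PnP
rvec, tvec = refine_pose(pts3d, pts2d, covs2d, K, rvec0, tvec0)
print(rvec, tvec)
```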

LOCALIZATION OF THE VORTICITY DIRECTION CONDITIONS FOR THE 3D SHEAR THICKENING FLUIDS

  • Yang, Jiaqi
    • 대한수학회보
    • /
    • Vol. 57, No. 6
    • /
    • pp.1481-1490
    • /
    • 2020
  • We obtain a localization of the vorticity-direction coherence conditions for the regularity of the 3D shear-thickening fluids to an arbitrarily small space-time cylinder. This implies the regularity of any geometrically constrained weak solution of the system considered, independently of the type of spatial domain or the boundary conditions.
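
For context (the abstract does not state the equations), regularity results of this kind are usually formulated for the generalized Navier-Stokes system of power-law fluids, which in the shear-thickening case reads roughly as follows; the precise form and assumptions used in the paper may differ.

```latex
\partial_t u + (u\cdot\nabla)u
  - \operatorname{div}\!\big((\nu_0 + \nu_1 |D u|^{p-2})\, D u\big) + \nabla \pi = 0,
\qquad \operatorname{div} u = 0,
\qquad D u = \tfrac{1}{2}\big(\nabla u + (\nabla u)^{T}\big),
```

with p > 2 corresponding to shear thickening; vorticity-direction conditions of Constantin-Fefferman type require coherence of the direction of the vorticity ω = ∇ × u in regions where the vorticity is large.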

2차원 레이저 거리계를 이용한 수직/수평 다각평면 기반의 위치인식 및 3차원 지도제작 (3D Simultaneous Localization and Map Building (SLAM) using a 2D Laser Range Finder based on Vertical/Horizontal Planar Polygons)

  • 이승은;김병국
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20, No. 11
    • /
    • pp.1153-1163
    • /
    • 2014
  • An efficient 3D SLAM (Simultaneous Localization and Map Building) method is developed for urban building environments using a tilted 2D LRF (Laser Range Finder), in which the 3D map is composed of perpendicular/horizontal planar polygons. While the mobile robot is moving, line segments on the scan plane are successively extracted from the LRF distance data of each scan period. We propose an "expected line segment" concept for matching, whereby each of these scan line segments is added to the most suitable line segment group of a perpendicular/horizontal planar polygon in the 3D map. After performing 2D localization to determine the pose of the mobile robot, we construct updated perpendicular/horizontal infinite planes and then determine their boundaries to obtain the perpendicular/horizontal planar polygons that constitute our 3D map. Finally, the proposed SLAM algorithm is validated via extensive simulations and experiments.
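
As a small, hypothetical illustration of extracting line segments from a single 2D LRF scan (a generic split step in the split-and-merge style, not necessarily the extraction used in the paper), consider the following; the threshold and scan data are invented.

```python
import numpy as np

def split_segments(points, threshold=0.05):
    """Recursively split an ordered 2D scan into line segments.

    A segment is split at the point with the largest perpendicular distance
    from the chord between its endpoints, until every point lies within
    `threshold` meters of its chord.
    """
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return [pts]
    p0, p1 = pts[0], pts[-1]
    chord = p1 - p0
    norm = np.linalg.norm(chord)
    # Perpendicular distance of every point from the chord p0-p1.
    dists = np.abs(chord[0] * (pts[:, 1] - p0[1])
                   - chord[1] * (pts[:, 0] - p0[0])) / norm
    i = int(np.argmax(dists))
    if dists[i] < threshold:
        return [pts]
    return split_segments(pts[:i + 1], threshold) + split_segments(pts[i:], threshold)

# Hypothetical scan points along two walls meeting at a corner.
wall1 = np.column_stack([np.linspace(0, 2, 40), np.zeros(40)])
wall2 = np.column_stack([np.full(40, 2.0), np.linspace(0, 1.5, 40)])
segments = split_segments(np.vstack([wall1, wall2]))
print(len(segments))  # expect 2 segments, one per wall
```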

3D영상에 정합되는 스테레오 오디오 (Stereo Audio Matched with 3D Video)

  • 박성욱;정태윤
    • 한국지능시스템학회논문지
    • /
    • Vol. 21, No. 2
    • /
    • pp.153-158
    • /
    • 2011
  • In this study, we conducted subjective experiments to determine how the accompanying audio should differ when the same video content is viewed in 2D versus 3D, and examined the results. Audio information can be separated into what the sound source itself provides, namely its distance and azimuth (i.e., its position), and the sense of space provided by the source's environment or scene. Accordingly, we performed one experiment evaluating how 2D versus 3D presentation of the same content affects the perceived position of the sound source, and another evaluating how 2D versus 3D presentation of the same scene affects the perceived sense of space. The first experiment showed that when viewing 3D video, listeners perceive the distance and azimuth of the sound source, relative to the screen, as larger than when viewing 2D video. This means that audio for 3D video should be produced with greater distance and azimuth than audio for 2D video. We also found that audio produced for 3D video matches not only 3D video but also 2D video well. The second experiment showed that when viewing 3D video, listeners prefer audio with more reverberation than when viewing 2D video, which we interpret as the sense of space being enhanced when viewing 3D video. The results of this study can be used by audio engineers who have mainly produced audio for 2D video when they produce audio for 3D video, and can serve as a basis for research on automatically converting 2D audio to 3D audio. Furthermore, if these results are applied to designing a broadcasting system that supports both 2D and 3D within a limited bandwidth, it would be appropriate for the broadcast data format to consist of stereo video, 3D audio with emphasized sound-source positions, and reverberation information that conveys the sense of space.