• Title/Summary/Keyword: estimation of 3-D position

Search results: 171

Mixed reality system using adaptive dense disparity estimation (적응적 미세 변이추정기법을 이용한 스테레오 혼합 현실 시스템 구현)

  • 민동보;김한성;양기선;손광훈
    • Proceedings of the IEEK Conference / 2003.11a / pp.171-174 / 2003
  • In this paper, we propose a method for compositing stereo images using adaptive dense disparity estimation. Correct composition of a stereo image with a 3D virtual object requires accurate marker position and depth information. Existing algorithms use the positions of markers in the stereo images to calculate the depth of the calibration object, but this depth information can be wrong when marker tracking is inaccurate. Moreover, in occluded regions the depth of the 3D object is unknown, so the stereo images and the 3D virtual object cannot be composited. For these reasons, the proposed algorithm uses adaptive dense disparity estimation to calculate depth: a pixel-based disparity estimation whose search range is limited to the neighborhood of the calibration object.
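The core step, a pixel-based disparity search restricted to a window around the calibration object, can be sketched as follows. This is a generic illustration rather than the authors' implementation; the SAD cost, window size, and search range are assumptions.

    import numpy as np

    def restricted_disparity(left, right, roi, max_disp=32, win=5):
        """Pixel-based disparity by SAD matching, evaluated only inside a
        region of interest (roi) around the calibration object.
        left, right : rectified grayscale images as 2-D float arrays
        roi         : (row0, row1, col0, col1) box around the object"""
        r0, r1, c0, c1 = roi
        disp = np.zeros(left.shape, dtype=np.int32)
        for r in range(max(r0, win), min(r1, left.shape[0] - win)):
            for c in range(max(c0, win + max_disp), min(c1, left.shape[1] - win)):
                ref = left[r - win:r + win + 1, c - win:c + win + 1]
                best_cost, best_d = np.inf, 0
                for d in range(max_disp):            # search limited to the object region
                    cand = right[r - win:r + win + 1, c - d - win:c - d + win + 1]
                    cost = np.abs(ref - cand).sum()  # sum of absolute differences
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[r, c] = best_d
        return disp

Depth then follows from the usual relation depth = f * B / disparity for focal length f and stereo baseline B, which supplies the missing depth for compositing the virtual object.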


3D Pose Estimation of a Circular Feature With a Coplanar Point (공면 점을 포함한 원형 특징의 3차원 자세 및 위치 추정)

  • Kim, Heon-Hui;Park, Kwang-Hyun;Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.13-24 / 2011
  • This paper deals with the 3D-pose (orientation and position) estimation problem of a circular object in 3D space. Circular features are found on many real-world objects and provide crucial cues for vision-based object recognition and localization. Because a circular feature in 3D space is perspectively projected when imaged by a camera, it is difficult to recover the full three-dimensional orientation and position parameters from the projected curve alone. This paper therefore proposes a 3D pose estimation method for a circular feature using a coplanar point. We first interpret a circular feature with a coplanar point in both the projective space and 3D space. A procedure for estimating the 3D orientation/position parameters is then described. The proposed method is verified by a numerical example and evaluated by a series of experiments analyzing accuracy and sensitivity.
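A standard ingredient of such methods is recovering the circle plane's normal from the conic (ellipse) into which the circle projects; the coplanar point then resolves the remaining ambiguity. The sketch below shows only that normal-recovery step and assumes the ellipse has already been fitted and written as a 3x3 conic matrix Q in normalized camera coordinates; it is a textbook construction, not necessarily the authors' exact procedure.

    import numpy as np

    def circle_plane_normals(Q):
        """Two candidate unit normals of a circle's supporting plane, given the
        symmetric 3x3 conic matrix Q of its image in normalized coordinates."""
        w, V = np.linalg.eigh(Q)
        # Q is defined only up to sign; pick the sign giving two positive and one
        # negative eigenvalue, as expected for the ellipse image of a circle.
        if np.sum(w > 0) == 1:
            w = -w                          # eigenvectors of -Q are unchanged
        order = np.argsort(w)[::-1]         # lam1 >= lam2 > 0 > lam3
        lam1, lam2, lam3 = w[order]
        v1, v3 = V[:, order[0]], V[:, order[2]]
        a = np.sqrt((lam1 - lam2) / (lam1 - lam3))
        b = np.sqrt((lam2 - lam3) / (lam1 - lam3))
        n1, n2 = a * v1 + b * v3, a * v1 - b * v3
        return n1 / np.linalg.norm(n1), n2 / np.linalg.norm(n2)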

Estimation of Human Height and Position using a Single Camera (단일 카메라를 이용한 보행자의 높이 및 위치 추정 기법)

  • Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.3 / pp.20-31 / 2008
  • In this paper, we propose a single-view technique for estimating human height and position. Conventional techniques for estimating 3D geometric information rely on geometric cues such as the vanishing point and vanishing line. The proposed technique instead back-projects the image of the moving object directly and estimates the position and height of the object in a 3D space whose coordinate system is designated by a marker. Geometric errors are then corrected using the geometric constraints provided by the marker. Unlike most conventional techniques, the proposed method offers a framework for simultaneously acquiring the height and position of each individual present in the image. The accuracy and robustness of the technique are verified on several real video sequences from outdoor environments.
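The back-projection the abstract describes can be illustrated in two small steps: intersect the camera ray through the foot pixel with the marker-defined ground plane Z = 0 to obtain the position, then solve for the height of the head point on the vertical line through that position. The projection matrix P and the pixel coordinates are placeholders; this is a minimal sketch of single-view back-projection, not the paper's full error-correction scheme.

    import numpy as np

    def backproject_to_ground(P, foot_px):
        """Intersect the viewing ray of a foot pixel with the world plane Z = 0.
        P is the 3x4 projection matrix in the marker coordinate system."""
        u, v = foot_px
        H = P[:, [0, 1, 3]]                 # for Z = 0 the projection is a homography
        X = np.linalg.solve(H, np.array([u, v, 1.0]))
        return np.array([X[0] / X[2], X[1] / X[2], 0.0])

    def estimate_height(P, ground_pt, head_px):
        """Least-squares height Z of the head point, constrained to the vertical
        line through ground_pt = (X, Y, 0)."""
        u, v = head_px
        base = P @ np.array([ground_pt[0], ground_pt[1], 0.0, 1.0])
        dz = P[:, 2]                        # column of P multiplying Z
        A = np.array([u * dz[2] - dz[0], v * dz[2] - dz[1]])
        b = np.array([base[0] - u * base[2], base[1] - v * base[2]])
        return float(A @ b / (A @ A))       # one-unknown least squares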

3D City Modeling Using Laser Scan Data

  • Kim, Dong-Suk;Lee, Kwae-Hi
    • Proceedings of the KSRS Conference / 2003.11a / pp.505-507 / 2003
  • This paper describes techniques for the automated creation of geometric 3D models of urban areas using two 2D laser scanners and aerial images. One laser scanner scans the environment horizontally and the other vertically. The horizontal scanner is used for position estimation, and the vertical scanner is used to build the 3D model. The aerial images are used for registration with the scan data. The resulting models can be used for virtual reality, tele-presence, digital cinematography, and urban planning applications. Results are shown as a 3D point cloud of an urban area.
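How the two scanners cooperate can be sketched as follows: the horizontal scanner supplies the vehicle pose (x, y, yaw), and each vertical scan is lifted into world coordinates using that pose. The mounting geometry assumed here (vertical scan plane through the forward axis, sensor at the pose origin) is an illustration, not a detail from the paper.

    import numpy as np

    def vertical_scan_to_world(ranges, angles, pose):
        """Convert one vertical scan into 3-D world points.
        ranges, angles : range/elevation samples from the vertical scanner
        pose           : (x, y, yaw) estimated from the horizontal scanner"""
        x, y, yaw = pose
        forward = ranges * np.cos(angles)          # along the vehicle's heading
        up = ranges * np.sin(angles)               # height above the scanner
        wx = x + forward * np.cos(yaw)
        wy = y + forward * np.sin(yaw)
        return np.stack([wx, wy, up], axis=1)      # N x 3 points for the city model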


Particle Filter Based Robust Multi-Human 3D Pose Estimation for Vehicle Safety Control (차량 안전 제어를 위한 파티클 필터 기반의 강건한 다중 인체 3차원 자세 추정)

  • Park, Joonsang;Park, Hyungwook
    • Journal of Auto-vehicle Safety Association / v.14 no.3 / pp.71-76 / 2022
  • In autonomous driving cars, 3D pose estimation can be an effective way to enhance safety control for OOP (Out of Position) passengers. There have been many studies on camera-based human pose estimation, but previous methods have limitations in automotive applications: CNN methods are unreliable due to unexplainable failures, and other methods perform poorly. This paper proposes a robust real-time multi-human 3D pose estimation architecture for in-vehicle use with a monocular RGB camera. Using a particle filter, our approach integrates CNN 2D/3D pose measurements with other information available in the vehicle. Computer simulations were performed to confirm the accuracy and robustness of the proposed algorithm.
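The particle-filter integration can be illustrated with a generic sequential importance resampling step for a single 3D joint: particles are propagated with a simple motion model and reweighted by their agreement with the CNN measurement. The random-walk motion model and noise levels below are placeholders, not values from the paper.

    import numpy as np

    def pf_step(particles, weights, meas, motion_std=0.02, meas_std=0.05):
        """One predict/update/resample cycle for a 3-D joint position.
        particles : (N, 3) position hypotheses, weights : (N,) normalized,
        meas      : (3,) CNN 3-D joint measurement for the current frame."""
        n = len(particles)
        particles = particles + np.random.normal(0.0, motion_std, particles.shape)
        d2 = np.sum((particles - meas) ** 2, axis=1)      # measurement likelihood
        weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
        weights = weights / np.sum(weights)
        if 1.0 / np.sum(weights ** 2) < n / 2:            # resample when ESS is low
            pos = (np.arange(n) + np.random.rand()) / n
            idx = np.minimum(np.searchsorted(np.cumsum(weights), pos), n - 1)
            particles, weights = particles[idx], np.full(n, 1.0 / n)
        estimate = np.average(particles, axis=0, weights=weights)
        return particles, weights, estimate

Running one such filter per joint and per occupant lets implausible CNN measurements be down-weighted rather than used directly, which is the robustness argument the abstract makes.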

Stabilized 3D Pose Estimation of 3D Volumetric Sequence Using 360° Multi-view Projection (360° 다시점 투영을 이용한 3D 볼류메트릭 시퀀스의 안정적인 3차원 자세 추정)

  • Lee, Sol;Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.76-77 / 2022
  • In this paper, we propose a method to stabilize the 3D pose estimation results of a 3D volumetric data sequence by matching pose estimation results from multiple views. A circle is drawn around the volumetric model, and the model is projected from viewpoints placed on it at regular angular intervals. After performing OpenPose 2D pose estimation on each projected image, the 2D joints are matched across views to localize the 3D joint positions. The jitter of the resulting 3D joint sequence as a function of angular spacing is quantified and plotted, and minimum conditions for stable results are suggested.
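Matching the per-view 2D joints into a 3D joint can be illustrated with standard DLT triangulation: each rendered viewpoint contributes two linear constraints from its projection matrix and its OpenPose detection, and the 3D joint is the least-squares solution. The projection matrices are assumed known because the views are synthetic renderings; the exact matching scheme in the paper may differ.

    import numpy as np

    def triangulate_joint(proj_mats, joints_2d):
        """DLT triangulation of one joint from many rendered views.
        proj_mats : list of 3x4 projection matrices of the virtual cameras
        joints_2d : list of (u, v) OpenPose detections of the same joint"""
        rows = []
        for P, (u, v) in zip(proj_mats, joints_2d):
            rows.append(u * P[2] - P[0])       # each view adds two equations
            rows.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        X = Vt[-1]
        return X[:3] / X[3]                    # homogeneous to Euclidean

Repeating this per joint and per frame yields the stabilized 3D sequence, and frame-to-frame joint displacement is one natural way to quantify the tremor the abstract mentions.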


Rotor Position and Speed Estimation of Interior Permanent Magnet Synchronous Motor using Unscented Kalman Filter

  • An, Lu;Hameyer, Kay
    • Journal of International Conference on Electrical Machines and Systems / v.3 no.4 / pp.458-464 / 2014
  • This paper proposes rotor position and rotor speed estimation for an interior permanent magnet synchronous machine (IPMSM) using an Unscented Kalman Filter (UKF) in the alpha-beta coordinate system. Conventional UKF algorithms are based on a simple observer model of the IPMSM in the d-q coordinate system, in which rotor acceleration is neglected within the sampling step. Expanding the observer model in the alpha-beta coordinate system to account for rotor speed variation improves the rotor position and speed estimation. The results show good stability for the expanded observer model of the IPMSM.
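A minimal alpha-beta-frame UKF of this kind is sketched below using filterpy. To keep it short, the machine model is simplified to a surface-mounted PM motor (a single inductance L instead of the IPMSM's Ld/Lq), and all parameters are placeholder values; the paper's observer model and its treatment of speed variation are more elaborate.

    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    R_S, L_S, PSI, DT = 0.5, 8e-3, 0.1, 1e-4   # ohm, henry, weber, s (placeholders)
    v_ab = np.zeros(2)                          # applied stator voltage, alpha-beta

    def fx(x, dt):
        """Process model for the state [i_alpha, i_beta, omega, theta]."""
        i_a, i_b, w, th = x
        e_a = -PSI * w * np.sin(th)             # back-EMF components
        e_b = PSI * w * np.cos(th)
        i_a += dt * (v_ab[0] - R_S * i_a - e_a) / L_S
        i_b += dt * (v_ab[1] - R_S * i_b - e_b) / L_S
        return np.array([i_a, i_b, w, th + dt * w])   # speed propagates the angle

    def hx(x):
        return x[:2]                            # only stator currents are measured

    points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=DT, fx=fx, hx=hx, points=points)
    ukf.Q = np.diag([1e-4, 1e-4, 1e-2, 1e-6])   # process noise; omega as random walk
    ukf.R = np.diag([1e-3, 1e-3])               # current-sensor noise

    # Each sampling step: set v_ab, then ukf.predict(); ukf.update(i_measured).
    # Estimated rotor speed and position are ukf.x[2] and ukf.x[3].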

Development of a 3D Localization Algorithm Using Hull Geometry Information (선체 형상 정보를 활용한 3차원 위치인식 알고리즘 개발)

  • Mingyu Jang;Jinhyun Kim
    • Journal of Sensor Science and Technology / v.32 no.5 / pp.300-306 / 2023
  • A hull-cleaning robot sticks to the surface of a vessel and moves across it for efficient cleaning, so precise path planning and tracking based on the current position are crucial. Many such robots rely on an INS algorithm, but its errors accumulate. GPS, sonar, and USBL can compensate for this, though each has limitations. Selecting sensors suitable for on-hull operation and an accurate positioning algorithm is therefore vital. In this study, we developed a robot position estimation algorithm that uses the structure of the ship. We examined the problems that arise when the 2D position estimation algorithm used on existing wall structures is extended to 3D and proposed methods for solving them. In addition, we improved performance by deriving the singularities that exist along the robot's path and proposing an error correction algorithm based on those singularities.
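One generic way to exploit hull geometry, shown purely as an illustration and not as the paper's algorithm, is to correct a dead-reckoned position by projecting it onto the known hull surface, represented here as a dense point cloud sampled from the hull CAD model.

    import numpy as np

    def snap_to_hull(position, hull_points):
        """Project an estimated robot position onto the hull surface.
        position    : (3,) dead-reckoned position estimate
        hull_points : (N, 3) points densely sampled from the hull geometry"""
        d2 = np.sum((hull_points - position) ** 2, axis=1)
        return hull_points[np.argmin(d2)]       # nearest surface point as correction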

POSITION AND POSTURE ESTIMATION OF 3D-OBJECT USING COLOR AND DISTANCE INFORMATION

  • Ji, Hyun-Jong;Takahashi, Rina;Nagao, Tomoharu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.535-540 / 2009
  • With recent advances in robotics, autonomous robots that can carry out complex tasks are increasingly required, and advanced robot vision for recognition is necessary to realize such robots. In this paper, we propose a method for recognizing an object in a real environment. The 3D object model used in the proposed method is voxel data: its interior is filled, and its surface carries color information. We define "recognition" as estimating the target object's state, that is, its posture and position in the real environment. The proposed method consists of three steps. In Step 1, we extract features from the 3D object model. In Step 2, we estimate the position of the target object. Finally, in Step 3, we estimate its posture. We carry out experiments in a real environment and confirm the performance of the proposed method from the results.
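The voxel model the abstract assumes (a filled interior with color stored only on the surface) can be made concrete with a small data-structure sketch; the grid resolution and the surface-extraction rule are assumptions for illustration.

    import numpy as np

    N = 64                                                 # placeholder resolution
    occupied = np.zeros((N, N, N), dtype=bool)             # True inside the object
    surface_rgb = np.zeros((N, N, N, 3), dtype=np.uint8)   # color on surface voxels

    def surface_mask(occ):
        """Surface voxels: occupied cells with at least one empty 6-neighbor."""
        inner = np.zeros_like(occ)
        inner[1:-1, 1:-1, 1:-1] = (occ[1:-1, 1:-1, 1:-1]
                                   & occ[:-2, 1:-1, 1:-1] & occ[2:, 1:-1, 1:-1]
                                   & occ[1:-1, :-2, 1:-1] & occ[1:-1, 2:, 1:-1]
                                   & occ[1:-1, 1:-1, :-2] & occ[1:-1, 1:-1, 2:])
        return occ & ~inner

Features for Step 1 would then be computed from surface_mask(occupied) together with surface_rgb, although the abstract does not specify which features are used.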


The Estimation of the Transform Parameters Using the Pattern Matching with 2D Images (2차원 영상에서 패턴매칭을 이용한 3차원 물체의 변환정보 추정)

  • 조택동;이호영;양상민
    • Journal of the Korean Society for Precision Engineering / v.21 no.7 / pp.83-91 / 2004
  • Determining camera position and orientation from known correspondences between 3D reference points and their images is known as pose estimation in computer vision, or space resection in photogrammetry. This paper discusses the estimation of transform parameters using a pattern matching method with 2D images only. In general, 3D reference points or lines are needed to determine the 3D transform parameters, but this method works without them: it uses only two images to find the transform parameters between them. The algorithm is simulated using Visual C++ on Windows 98.
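The abstract's goal, recovering transform parameters from two images alone without 3D references, corresponds to relative pose estimation between two views. The sketch below uses a feature-matching and essential-matrix pipeline in OpenCV as a generic modern illustration; the paper itself uses its own pattern-matching scheme, and the intrinsic matrix K is a placeholder the caller must supply.

    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Rotation R and unit-scale translation t between two views."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
        return R, t                     # translation is recoverable only up to scale

    # K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])  # camera intrinsics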