• Title/Summary/Keyword: Moving monocular camera


Lattice-Based Background Motion Compensation for Detection of Moving Objects with a Single Moving Camera (이동하는 단안 카메라 환경에서 이동물체 검출을 위한 격자 기반 배경 움직임 보상방법)

  • Myung, Yunseok; Kim, Gyeonghwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.1 / pp.52-54 / 2015
  • In this paper we propose a new background motion compensation method applicable to moving object detection with a moving monocular camera. To estimate the background motion, a series of image warpings is carried out for each pair of corresponding patches, defined by a fixed-size lattice, based on the motion information extracted from the feature points enclosed by the patches and the estimated camera motion. Experimental results show that the proposed method is approximately 50% faster in execution time and achieves about 8 dB higher PSNR than a conventional method.
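To make the compensation-and-PSNR idea concrete, here is a minimal sketch of lattice-based background compensation: integer per-patch shifts stand in for the paper's image warpings, and the patch motions are hypothetical inputs (in the paper they come from feature points and the estimated camera motion). This is an illustration, not the authors' implementation.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two frames."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compensate_lattice(prev, motions, patch=8):
    """Build a background-compensated frame by shifting each fixed-size
    lattice patch of `prev` by its estimated integer motion (dy, dx).
    A real implementation would use sub-pixel or affine patch warps."""
    out = np.zeros_like(prev)
    h, w = prev.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            dy, dx = motions.get((i // patch, j // patch), (0, 0))
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            out[i:i + patch, j:j + patch] = shifted[i:i + patch, j:j + patch]
    return out
```

The compensated frame can then be differenced against the current frame; patches whose residual stays large after compensation are candidate moving objects.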

A Study on Estimating Smartphone Camera Position (스마트폰 카메라의 이동 위치 추정 기술 연구)

  • Oh, Jongtaek; Yoon, Sojung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.6 / pp.99-104 / 2021
  • The technology of estimating a movement trajectory with a monocular camera, such as a smartphone camera, and composing a surrounding 3D image is key not only to indoor positioning but also to metaverse services. The most important step in this technique is estimating the coordinates of the moving camera's center. In this paper, a new algorithm for geometrically estimating the moving distance is proposed. The coordinates of a 3D object point are obtained from the first and second photos, and the movement distance vector is obtained using the matching feature points of the first and third photos. Then, while moving the assumed origin of the third camera, the position where the 3D object point and the corresponding feature point of the third photo coincide is found. The method's feasibility and accuracy were verified by applying it to real continuous image data.
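The first step above, recovering a 3D object point from two photos, is classically done by linear (DLT) triangulation. A minimal sketch under assumptions of my own (normalized image coordinates, known projection matrices, identity orientation), not the authors' exact formulation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection
    matrices and its normalized image coordinates (x, y) in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Hypothetical setup: camera 1 at the origin, camera 2 translated by c.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
c = np.array([1.0, 0.0, 0.0])
P2 = np.hstack([np.eye(3), -c[:, None]])
X_true = np.array([1.0, 2.0, 10.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true - c)[:2] / (X_true - c)[2]
X_hat = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the SVD null vector recovers the point exactly; real feature matches would make this a least-squares solution.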

A Study on Estimating Skill of Smartphone Camera Position using Essential Matrix (필수 행렬을 이용한 카메라 이동 위치 추정 기술 연구)

  • Oh, Jongtaek; Kim, Hogyeom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.143-148 / 2022
  • Estimating a camera's location by analyzing the images continuously taken with the monocular camera of a smartphone or mobile robot is very important for metaverse, mobile robot, and user location services. So far, PnP-related techniques have been applied to calculate the position. In this paper, the camera's moving direction is obtained using the essential matrix from the epipolar geometry of successive images, and the camera's continuous moving position is calculated through geometric equations. The accuracy of this new estimation method was verified through simulation. The method is completely different from existing approaches and can be applied even when as few as one matching feature point exists across two or more images.
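The core relation used here is the epipolar constraint x2ᵀ E x1 = 0 with E = [t]ₓR, in which the translation t (the camera's moving direction, recoverable only up to scale) appears. A small synthetic check of that constraint, my own sketch rather than the paper's code:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Relative pose between two views: pure translation along direction t.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.2])      # moving direction, defined up to scale
E = skew(t) @ R                    # essential matrix

rng = np.random.default_rng(0)
X1 = rng.uniform([-1, -1, 4], [1, 1, 8], (6, 3))   # 3D points, view-1 frame
X2 = X1 @ R.T + t                                  # same points, view-2 frame
x1 = np.hstack([X1[:, :2] / X1[:, 2:3], np.ones((6, 1))])  # normalized homog.
x2 = np.hstack([X2[:, :2] / X2[:, 2:3], np.ones((6, 1))])
residual = np.einsum("ij,jk,ik->i", x2, E, x1)     # x2^T E x1 per match
```

In practice E is estimated from the matches themselves (e.g. the five-point algorithm) and then decomposed to read off the direction t.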

Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner (카메라-레이저스캐너 상호보완 추적기를 이용한 이동 로봇의 사람 추종)

  • Kim, Hyoung-Rae; Cui, Xue-Nan; Lee, Jae-Hong; Lee, Seung-Jun; Kim, Hakil
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.78-86 / 2014
  • This paper proposes a method of tracking a person for a person-following mobile robot by combining a monocular camera and a laser scanner, each sensor compensating for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between itself and a moving person. Maintaining this distance consists of two parts: object tracking and person-following. Object tracking combines particle filtering with online learning of shape features extracted from the image. Because a monocular camera easily fails to track a person due to its narrow field of view and sensitivity to illumination changes, it is used together with a laser scanner. After establishing the geometric relation between the differently oriented sensors, the proposed method demonstrates robust tracking and following of a person, with a success rate of 94.7% in indoor environments under varying lighting conditions, even when another moving object passes between the robot and the person.

Detection of Objects Temporally Stop Moving with Spatio-Temporal Segmentation (시공간 영상분할을 이용한 이동 및 이동 중 정지물체 검출)

  • Kim, Do-Hyung; Kim, Gyeong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.1 / pp.142-151 / 2015
  • This paper proposes a method for detecting objects that temporarily stop moving in video sequences taken by a moving camera. Even though missing such objects can be catastrophic for application-level requirements, conventional approaches have paid them little attention. The proposed method introduces three cues for consistent detection and tracking of objects: motion potential, position potential, and color distribution similarity. Integrating the three cues in a graph-cut algorithm makes it possible to detect objects that temporarily stop moving as well as newly appearing objects. Experimental results show that the proposed method can not only detect moving objects but also keep tracking objects that have stopped moving.

Multi-focus 3D Display (다초점 3차원 영상 표시 장치)

  • Kim, Seong-Gyu; Kim, Dong-Uk; Gwon, Yong-Mu; Son, Jeong-Yeong
    • Proceedings of the Optical Society of Korea Conference / 2008.07a / pp.119-120 / 2008
  • An HMD-type multi-focus 3D display system is developed, and its ability to satisfy eye accommodation is tested. Four LEDs (light-emitting diodes) and a DMD are used to generate four parallax images for a single eye, and the system contains no mechanical moving parts. Multi-focus here means providing the monocular depth cue of accommodation at various depth levels. By achieving the multi-focus function, we developed a single-eye 3D display system that can satisfy accommodation to displayed virtual objects within a defined depth range. Focus adjustment was achieved at five sequential depth steps within a 2 m depth range for a single eye. Additionally, the degree of blurring depending on the focusing depth was examined through photos and videos captured by a camera and through several human subjects. The HMD-type multi-focus display can be applied to monocular 3D and monocular AR 3D displays.


Localization of A Moving Vehicle using Backward-looking Camera and 3D Road Map (후방 카메라 영상과 3차원 도로지도를 이용한 이동차량의 위치인식)

  • Choi, Sung-In; Park, Soon-Yong
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.3 / pp.160-173 / 2013
  • In this paper, we propose a new visual odometry technique that combines a forward-looking stereo camera and a backward-looking monocular camera. The main goal is to identify the location of a vehicle that travels a long distance in an urban road environment and returns to its initial position. While the vehicle is moving to the destination, a global 3D map is continuously updated by a graph-based stereo visual odometry technique. Once the vehicle reaches the destination and begins to return to the initial position, a map-based monocular visual odometry technique is used. To estimate the position of the returning vehicle accurately, 2D features in the backward-looking camera image are matched to the global map. In addition, previously matched nodes are used to limit the search range for the next vehicle position in the global map. The accuracy of the proposed method is analyzed over two navigation paths.
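The limited-search-range idea, scoring only camera positions near the previous estimate against the 3D map, can be illustrated with a toy reprojection search. Everything below is hypothetical (a four-point map, identity camera orientation, a 1D search window) and only sketches the principle:

```python
import numpy as np

def reprojection_error(map_pts, obs, cam_pos):
    """Mean reprojection error of known 3D map points against observed
    normalized image coordinates, for a candidate camera position."""
    rel = map_pts - cam_pos                  # map points in the camera frame
    proj = rel[:, :2] / rel[:, 2:3]          # pinhole projection (f = 1)
    return float(np.mean(np.linalg.norm(proj - obs, axis=1)))

# Hypothetical global map and the vehicle's true position.
map_pts = np.array([[0.0, 0.0, 10.0], [2.0, 1.0, 12.0],
                    [-1.0, 2.0, 9.0], [1.0, -1.0, 11.0]])
true_pos = np.array([0.5, 0.2, 0.0])
rel = map_pts - true_pos
obs = rel[:, :2] / rel[:, 2:3]               # simulated 2D feature matches

# Search only a small window around the previous estimate, mirroring how
# the paper limits the search range of the next vehicle position.
prev_estimate = np.array([0.4, 0.2, 0.0])
candidates = [prev_estimate + np.array([dx, 0.0, 0.0])
              for dx in np.linspace(-0.5, 0.5, 101)]
best = min(candidates, key=lambda c: reprojection_error(map_pts, obs, c))
```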

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo; Park, Seho; Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3011-3024 / 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system appropriate for dynamically moving sensors. The orientation estimated from an inertial measurement unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outlier feature-point matches are eliminated from the image sequences; the IMU helps eliminate erroneous matches in images of dynamic scenes at an early stage. After the outliers are removed, the remaining feature-point matches are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented and tested in comparison with existing methods, and experimental results show its effectiveness.
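The outlier-rejection step can be sketched as thresholding the algebraic epipolar residual |x2ᵀ E x1|. Here the relative pose, threshold, and corruption are all illustrative choices of mine (the paper derives E from the IMU orientation and camera intrinsics):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_inliers(E, x1, x2, thresh=1e-6):
    """Mark matches whose algebraic epipolar residual |x2^T E x1| is small;
    normalized homogeneous coordinates are assumed."""
    residual = np.abs(np.einsum("ij,jk,ik->i", x2, E, x1))
    return residual < thresh

# Relative pose: a small rotation (as an IMU might report) plus translation.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.0, 0.1])
E = skew(t) @ R

rng = np.random.default_rng(3)
X1 = rng.uniform([0.5, 0.5, 4.0], [1.5, 1.5, 8.0], (6, 3))  # 3D points
X2 = X1 @ R.T + t
x1 = np.hstack([X1[:, :2] / X1[:, 2:3], np.ones((6, 1))])
x2 = np.hstack([X2[:, :2] / X2[:, 2:3], np.ones((6, 1))])
x2[3, 0] += 0.05        # corrupt one match to simulate a bad feature pair
inliers = epipolar_inliers(E, x1, x2)
```

Real pipelines use a noise-tolerant distance (e.g. the Sampson error) and a RANSAC-style threshold rather than the near-zero cutoff used here.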

3D Range Finding Algorithm Using Small Translational Movement of Stereo Camera (스테레오 카메라의 미소 병진운동을 이용한 3차원 거리추출 알고리즘)

  • Park, Kwang-Il; Yi, Jae-Woong; Oh, Jun-Ho
    • Journal of the Korean Society for Precision Engineering / v.12 no.8 / pp.156-167 / 1995
  • In this paper, we propose a 3D range-finding method for situations in which a stereo camera undergoes small translational motion. Binocular stereo generally tends to produce correspondence errors and requires a huge amount of computation. The former drawback arises because the additional constraints used to regularize the correspondence problem do not hold for every scene; the latter, because either correlation or optimization must be used to find the correct disparity. We present a method that overcomes both drawbacks by actively moving the stereo camera. The method utilizes motion parallax acquired by monocular motion stereo to restrict the search range of the binocular disparity, so that the uniqueness of disparity alone suffices to find a reliable match. Experimental results on real scenes demonstrate the effectiveness of the method.
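The restricted disparity search can be shown with 1D SAD block matching over a narrow window centred on a parallax-derived estimate. The scanlines, window size, and estimate below are all made up for illustration; this is not the paper's algorithm:

```python
import numpy as np

def match_disparity(left_row, right_row, x, window, d_center, d_range):
    """SAD block matching for pixel x of the left scanline, searching only
    disparities within d_range of the motion-parallax estimate d_center."""
    half = window // 2
    ref = left_row[x - half:x + half + 1]
    best_d, best_cost = d_center, np.inf
    for d in range(d_center - d_range, d_center + d_range + 1):
        cand = right_row[x - d - half:x - d + half + 1]
        cost = float(np.abs(ref - cand).sum())
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(2)
left = rng.uniform(0.0, 1.0, 100)
true_d = 5
right = np.roll(left, -true_d)     # so right_row[x - d] == left_row[x]
# Motion parallax suggests the disparity is near 4, so only the 7
# disparities 1..7 are examined instead of the full range.
d = match_disparity(left, right, x=50, window=7, d_center=4, d_range=3)
```

Shrinking the search range both cuts the computation and removes most of the ambiguous candidates that cause correspondence errors, which is the point the abstract makes.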


Mixing Collaborative and Hybrid Vision Devices for Robotic Applications (로봇 응용을 위한 협력 및 결합 비전 시스템)

  • Bazin, Jean-Charles; Kim, Sung-Heum; Choi, Dong-Geol; Lee, Joon-Young; Kweon, In-So
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.210-219 / 2011
  • This paper studies how to combine devices such as monocular/stereo cameras, pan/tilt motors, fisheye lenses, and convex mirrors to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two new mixed systems. The first is a robot photographer with a conventional pan/tilt perspective camera and a fisheye lens. The second is an omnidirectional detector for a complete 360-degree field-of-view surveillance system; for it, we build an original device that combines a stereo-catadioptric camera with a pan/tilt stereo-perspective camera and apply it in a real environment. Compared to previous systems, the two proposed systems maintain both high speed and high resolution with collaborative moving cameras, and cover an enormous search space with the hybrid configuration. Experimental results show the effectiveness of the collaborative and hybrid systems.