• Title/Summary/Keyword: Monocular

Search Results: 236

One Idea on a Three Dimensional Measuring System Using Light Intensity Modulation

  • Fujimoto Ikumatsu;Cho In-Ho;Pak Jeong-Hyeon;Pyoun Young-Sik
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.1
    • /
    • pp.130-136
    • /
    • 2005
  • A new optical digitizing system for determining the position of a cursor in three dimensions (3D) and an experimental device for its measurement are presented. A semi-passive system using light intensity modulation, a technique well known in radar ranging, is employed to overcome the precision limitations imposed by background light. The system consists of a charge-coupled device camera placed before a rotating mirror and a light-emitting diode whose intensity is modulated. Using a Fresnel pattern for light modulation, it is verified that a substantial improvement in the signal-to-noise ratio against background noise is realized and that a resolution finer than a single pixel can be achieved. This opens the way to high-precision 3D digitized measurement. We further show, by numerical experiment, that 3D position measurement with a monocular optical system can be realized if a linear-period modulated waveform is adopted as the light-modulating waveform.
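
The radar-ranging idea behind this abstract can be illustrated with a minimal sketch: correlating the received intensity with the known modulation waveform suppresses background light that is uncorrelated with it. A linear-FM "chirp" stands in for the paper's Fresnel/linear-period pattern, and all signal parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
t = np.arange(n)

# Chirp-like modulation waveform (a stand-in for the Fresnel pattern).
chirp = np.cos(2 * np.pi * (0.01 + 0.00005 * t) * t)

delay = 300                      # true position of the modulated source
# Received signal: shifted modulation buried in strong background noise.
received = np.roll(chirp, delay) + 5.0 * rng.standard_normal(n)

# Matched filtering: circular cross-correlation via FFT.
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(chirp))).real
est = int(np.argmax(corr))
print(est)   # peak at (or very near) the true delay despite the noise
```

The correlation peak stands far above the noise floor even though the raw signal is dominated by "background light", which is the SNR improvement the abstract reports.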

Image-based Visual Servoing Through Range and Feature Point Uncertainty Estimation of a Target for a Manipulator (목표물의 거리 및 특징점 불확실성 추정을 통한 매니퓰레이터의 영상기반 비주얼 서보잉)

  • Lee, Sanghyob;Jeong, Seongchan;Hong, Young-Dae;Chwa, Dongkyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.6
    • /
    • pp.403-410
    • /
    • 2016
  • This paper proposes a robust image-based visual servoing scheme using a nonlinear observer for a monocular eye-in-hand manipulator. The proposed control method is divided into a range estimation phase and a target-tracking phase. In the range estimation phase, the range from the camera to the target is estimated under a non-moving-target condition to resolve the uncertainty in the interaction matrix. Then, in the target-tracking phase, the feature point uncertainty caused by the unknown motion of the target is estimated, and the feature point errors converge sufficiently close to zero through compensation for this uncertainty.
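
For context, the classical IBVS law that this scheme builds on can be sketched as v = -λ L⁺ e, where the interaction matrix L for a point feature depends on the unknown range Z — the quantity whose uncertainty the paper's range-estimation phase addresses. The feature values below are illustrative, not from the paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, Z, lam=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving features to desired."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in features])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Four point features, one slightly offset from its desired position.
feats = [(0.11, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
goal  = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_velocity(feats, goal, Z=1.0)
```

Because every translational entry of L scales with 1/Z, an error in the estimated range directly distorts the commanded velocity, which is why the paper estimates Z before tracking.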

Efficient Lane Detection for Preceding Vehicle Extraction by Limiting Search Area of Sequential Images (전방의 차량포착을 위한 연속영상의 대상영역을 제한한 효율적인 차선 검출)

  • Han, Sang-Hoon;Cho, Hyung-Je
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.705-717
    • /
    • 2001
  • In this paper, we propose a rapid lane detection method for extracting a preceding vehicle from sequential images captured by a single monocular CCD camera. We detect the positions of lanes in each image within a limited area that is unlikely to be occluded, and from these compute the slopes of the detected lanes. We then determine a search area where vehicles are likely to exist and extract the position of the preceding vehicle within that area from edge components by applying a structured method. To verify the effectiveness of the proposed method, we captured road images with a notebook PC and a PC CCD camera, and present results such as lane-detection processing time, accuracy, and vehicle detection performance on these images.
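
The core idea of restricting the lane search to a limited region can be shown with a toy sketch: edges are sought only inside a bottom-of-image region of interest, and a line is fitted to the detected edge positions to obtain the lane slope. The synthetic image and ROI bounds are illustrative assumptions, not the paper's method in detail.

```python
import numpy as np

h, w = 120, 160
img = np.zeros((h, w))
for r in range(60, 120):            # paint a slanted bright "lane marking"
    c = int(20 + 0.5 * (r - 60))
    img[r, c:c + 3] = 1.0

roi_rows = range(80, 120)           # search only the near-field ROI
cols, rows = [], []
for r in roi_rows:
    grad = np.abs(np.diff(img[r]))  # horizontal gradient along the row
    if grad.max() > 0.5:
        cols.append(int(np.argmax(grad)))
        rows.append(r)

# Fit column as a linear function of row; slope of the painted lane is 0.5.
slope, intercept = np.polyfit(rows, cols, 1)
```

Scanning only 40 rows instead of the full image is what makes per-frame lane detection cheap enough for sequential processing.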


A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo;Kim, Sang-Ho
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.533-540
    • /
    • 2012
  • Objective: This paper presents a framework for hand gesture based interface design. Background: While the modeling of contact-based interfaces has focused on ergonomic interface design and real-time technologies, the implementation of a contactless interface requires error-free classification as an essential precondition. These trends have led many studies to concentrate on the design of feature vectors and learning models and their evaluation. Despite remarkable advances in this field, ignoring ergonomics and users' cognition results in several problems, including uneasy user behaviors. Method: To incorporate compatibility, considering users' comfortable behaviors and the device's classification abilities simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. From the extracted gestures, compatibility-oriented gestures are acquired through ergonomic and cognitive experiments. The obtained hand gestures are then converted into a series of hand behaviors - Handycon - which is mapped onto several functions in a mobile device. Results: The Handycon model guarantees easy user behavior and supports fast understanding as well as a high classification rate. Conclusion and Application: The suggested framework contributes to the development of hand gesture-based contactless interface models that consider the compatibility between human and device. The suggested procedures can be applied effectively to other contactless interface designs.

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal
    • /
    • v.37 no.4
    • /
    • pp.766-771
    • /
    • 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion because of the motion ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
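
The exo-camera step of such a pipeline — recovering the 3-D position of the point of interest from two calibrated views — is standard linear (DLT) triangulation, sketched below. The camera matrices and the test point are illustrative assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two normalized projections."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two simple cameras: identity pose, and a 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With the position fixed by the exo-cameras, only orientation remains for the ego-camera to estimate, which removes the rotation/translation ambiguity the abstract describes.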

Autonomous Control System of Compact Model-helicopter

  • Kang, Chul-Ung;Jun Satake;Takakazu Ishimatsu;Yoichi Shimomoto;Jun Hashimoto
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.95-99
    • /
    • 1998
  • We introduce an autonomous flying system using a model helicopter. A feature of the system is that autonomous flight is realized on a low-cost, compact model helicopter. Our helicopter system is divided into two parts: one on the helicopter and the other on the ground. The helicopter carries a vision sensor and an electronic compass that includes a tilt sensor. The control system on the ground monitors and controls the helicopter's movement. We first introduce the configuration of our helicopter system with the vision sensor and electronic compass. To determine the 3-D position and posture of the helicopter, a technique of image recognition using a monocular image is described, based on the idea of sensor fusion of vision and the electronic compass. Finally, we show an experimental result obtained during hovering, which demonstrates the effectiveness of our system on the compact model helicopter.


3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.519-524
    • /
    • 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but too expensive. Because those sensors use rotating light beams, their range measurements are constrained to a plane; 3D measurements are much more useful for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment. However, it requires that correspondences be clearly identified, and it depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of range. Two successive captures of the image, with left and right infrared light projection respectively, provide several benefits, including a wider area of depth measurement, higher spatial resolution, and visibility perception.
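
The geometric principle behind a camera-plus-projector range sensor is the same as stereo triangulation: the depth of a projected spot follows Z = f·b/d from the camera-projector baseline b, the focal length f (in pixels), and the observed disparity d of the spot. The numbers below are illustrative assumptions, not the paper's calibration.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth of a projected light spot, Z = f * b / d."""
    return f_px * baseline_m / disparity_px

# Example: 500 px focal length, 10 cm baseline, 25 px observed disparity.
Z = depth_from_disparity(f_px=500.0, baseline_m=0.1, disparity_px=25.0)
print(Z)   # 2.0 (meters)
```

Because the projector, not a second camera, supplies one ray of the triangle, the correspondence problem of passive stereo largely disappears — the motivation stated in the abstract.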

Detection of Objects Temporally Stop Moving with Spatio-Temporal Segmentation (시공간 영상분할을 이용한 이동 및 이동 중 정지물체 검출)

  • Kim, Do-Hyung;Kim, Gyeong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.1
    • /
    • pp.142-151
    • /
    • 2015
  • This paper proposes a method for detecting objects that temporarily stop moving in video sequences taken by a moving camera. Even though missed detection of such objects could be catastrophic in terms of application-level requirements, they have received little attention in conventional approaches. In the proposed method, we introduce three cues for consistent detection and tracking of objects: motion potential, position potential, and color distribution similarity. Integrating the three cues in a graph-cut algorithm makes it possible to detect objects that temporarily stop moving as well as newly appearing ones. Experimental results show that the proposed method can not only detect moving objects but also track objects that have stopped moving.

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.2
    • /
    • pp.96-105
    • /
    • 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth of every region from a single-view image under camera translation. The work is based on the fact that the motion of every point in an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing motion vectors and then calculates each region's depth relative to the average frame depth. Simulation results show that the estimated depths of regions belonging to near or far objects are consistent with the relative depth that a human perceives.
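
The core relation the abstract relies on can be sketched directly: under pure camera translation, image motion magnitude is inversely proportional to depth, so a region's depth relative to the frame average can be taken as the ratio of the average motion to the region's motion. The motion magnitudes below are made-up illustrative values.

```python
import numpy as np

def relative_depths(region_motions):
    """Depth of each region relative to the frame average.

    Under pure camera translation, motion magnitude ~ 1/depth, so
    values > 1 mean farther than average and < 1 mean nearer.
    """
    m = np.asarray(region_motions, dtype=float)
    return m.mean() / m

# A fast-moving region (8 px/frame) comes out nearest; slow ones farthest.
depths = relative_depths([8.0, 4.0, 2.0, 2.0])
```

Rotation and zoom break the 1/depth relation, which is why the paper compensates the motion vectors for both before applying it.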


Implementation of a sensor fusion system for autonomous guided robot navigation in outdoor environments (실외 자율 로봇 주행을 위한 센서 퓨전 시스템 구현)

  • Lee, Seung-H.;Lee, Heon-C.;Lee, Beom-H.
    • Journal of Sensor Science and Technology
    • /
    • v.19 no.3
    • /
    • pp.246-257
    • /
    • 2010
  • Autonomous guided robot navigation, which consists of following unknown paths and avoiding unknown obstacles, is a fundamental technique for unmanned robots in outdoor environments. Following an unknown path requires techniques such as path recognition, path planning, and robot pose estimation. In this paper, we propose a novel sensor fusion system for autonomous guided robot navigation in outdoor environments. The proposed system consists of three monocular cameras and an array of nine infrared range sensors. The two cameras mounted on the robot's right and left sides are used to recognize unknown paths and estimate the robot's relative pose on these paths through a Bayesian sensor fusion method, while the camera mounted at the front of the robot is used to recognize abrupt curves and unknown obstacles. The infrared range sensor array is used to improve the robustness of obstacle avoidance; the forward camera and the infrared range sensor array are fused through a rule-based method for obstacle avoidance. Experiments in outdoor environments show that a mobile robot with the proposed sensor fusion system successfully performed real-time autonomous guided navigation.
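
A rule-based fusion step of the kind described — combining the forward camera's obstacle flag with a nine-element infrared range array — can be sketched as a small decision function. The thresholds, ranges, and command names are illustrative assumptions, not the paper's actual rules.

```python
def fuse(camera_sees_obstacle, ir_ranges_m, stop_dist=0.5):
    """Pick an avoidance command from camera and IR-array readings."""
    nearest = min(ir_ranges_m)
    if nearest < stop_dist:
        return "stop"                       # IR overrides everything up close
    if camera_sees_obstacle:
        # Steer toward the side with more free space in the IR array.
        mid = len(ir_ranges_m) // 2
        left, right = ir_ranges_m[:mid], ir_ranges_m[mid + 1:]
        return "turn_left" if sum(left) > sum(right) else "turn_right"
    return "go_straight"

# Camera flags an obstacle; the left half of the IR array reads more free space.
cmd = fuse(True, [2.0, 2.5, 2.2, 1.8, 1.5, 1.2, 1.0, 0.9, 0.8])
```

The appeal of such rules is their robustness: the short-range IR veto still works when the camera is blinded by outdoor lighting, complementing the Bayesian fusion used for path following.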