• Title/Abstract/Keywords: mobile vision system

Search Results: 290

A Hybrid Positioning System for Indoor Navigation on Mobile Phones using Panoramic Images

  • Nguyen, Van Vinh;Lee, Jong-Weon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.3
    • /
    • pp.835-854
    • /
    • 2012
  • In this paper, we propose a novel positioning system for indoor navigation which helps a user navigate easily to desired destinations in an unfamiliar indoor environment using a mobile phone. The system requires only the user's mobile phone with its basic on-board sensors, such as a camera and a compass. The system tracks the user's position and orientation using a vision-based approach that utilizes $360^{\circ}$ panoramic images captured in the environment. To improve the robustness of the vision-based method, we exploit the digital compass that is widely installed on modern mobile phones. This hybrid solution outperforms existing mobile phone positioning methods by reducing the position estimation error to around 0.7 meters. In addition, to enable the proposed system to work independently on the mobile phone without additional hardware or external infrastructure, we employ a modified version of a fast and robust feature matching scheme using Histogrammed Intensity Patches. The experiments show that the proposed positioning system achieves good performance while running on a mobile phone, with a response time of around 1 second.
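
The hybrid scheme above fuses a vision-based orientation estimate with the phone's digital compass, but the abstract does not give the fusion rule. The following Python sketch shows one simple way such a blend could look, a weighted average of two headings with wrap-around handled; the function name and the 0.7 weight are illustrative assumptions, not values from the paper.

```python
import math

def fuse_heading(vision_heading_deg, compass_heading_deg, vision_weight=0.7):
    """Blend a vision-based heading estimate with a digital-compass reading.

    Angles are in degrees; the difference is wrapped to (-180, 180] so the
    blend behaves correctly across the 0/360 boundary.
    """
    diff = (compass_heading_deg - vision_heading_deg + 180.0) % 360.0 - 180.0
    fused = vision_heading_deg + (1.0 - vision_weight) * diff
    return fused % 360.0

# Example: vision says 358 deg, compass says 4 deg -> fused heading near 0 deg.
print(fuse_heading(358.0, 4.0))  # ~359.8 with vision_weight=0.7
```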

The Study of Mobile Robot Self-displacement Recognition Using Stereo Vision (스테레오 비젼을 이용한 이동로봇의 자기-이동변위인식 시스템에 관한 연구)

  • 심성준;고덕현;김규로;이순걸
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.934-937
    • /
    • 2003
  • In this paper, the authors use a stereo vision system based on the human visual model and establish an inexpensive method that recognizes moving distance using characteristic points around the robot. With the stereo vision, the changes in the coordinate values of characteristic points that are fixed around the robot are measured. A self-displacement and self-localization recognition system is proposed based on the coordinate reconstruction of those changes. To evaluate the proposed system, several characteristic points made with LEDs around the robot and two inexpensive USB PC cameras are used. The mobile robot measures the coordinate value of each characteristic point at its initial position. After moving, the robot again measures the coordinate values of the characteristic points that were set at the initial position. The mobile robot compares the changes in these coordinate values and computes a transformation matrix from them. As a matrix of the amount and direction of the mobile robot's movement, the obtained transformation matrix represents the self-displacement and self-localization with respect to the environment.
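
The abstract does not state how the transformation matrix is obtained from the two sets of landmark coordinates; a standard choice for this kind of problem is the SVD-based (Kabsch) rigid-body fit sketched below. The function name and the NumPy dependency are assumptions; the robot's own displacement is the inverse of the returned transform.

```python
import numpy as np

def rigid_transform(points_before, points_after):
    """Estimate the rotation R and translation t that map points_before onto
    points_after (both Nx3 arrays of the same fixed landmarks measured by the
    robot before and after it moves), using the SVD-based Kabsch method."""
    p = np.asarray(points_before, dtype=float)
    q = np.asarray(points_after, dtype=float)
    p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)   # center both point sets
    u, _, vt = np.linalg.svd(p_c.T @ q_c)               # SVD of the cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))              # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(axis=0) - r @ p.mean(axis=0)
    return r, t
```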

Three-Dimensional Pose Estimation of Neighbor Mobile Robots in Formation System Based on the Vision System (비전시스템 기반 군집주행 이동로봇들의 삼차원 위치 및 자세 추정)

  • Kwon, Ji-Wook;Park, Mun-Soo;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.12
    • /
    • pp.1223-1231
    • /
    • 2009
  • We derive a systematic and iterative calibration algorithm, and a position and pose estimation algorithm, for mobile robots in a formation system based on a vision system. In addition, we develop a coordinate matching algorithm which calculates the matched ordering between the extracted image coordinates and the object coordinates for non-interactive calibration and pose estimation. Based on the calibration results, we also develop a camera simulator to confirm the calibration and compare the results of simulations with those of experiments in position and pose estimation.
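
The paper's own iterative calibration and estimation algorithms are not reproduced in the abstract. As a rough illustration of pose estimation from matched image and object coordinates, the sketch below uses OpenCV's standard PnP solver; the function name and calibration inputs are assumptions, not the authors' method.

```python
import cv2
import numpy as np

def estimate_pose(object_points, image_points, camera_matrix, dist_coeffs):
    """Estimate the rotation and translation of an object (e.g. a marker set on
    a neighbor robot) from matched 3D object coordinates and 2D image coordinates.

    object_points: Nx3 float array in the object frame.
    image_points:  Nx2 float array of the corresponding pixel coordinates.
    Returns a 3x3 rotation matrix and a translation vector, or None on failure.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rmat, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return rmat, tvec
```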

Obstacle Detection using Laser Scanner and Vision System for Path Planning on Autonomous Mobile Agents (무인 이동 개체의 경로 생성을 위한 레이저 스캐너와 비전 시스템의 데이터 융합을 통한 장애물 감지)

  • Jeong, Jin-Gu;Hong, Suk-Kyo;Chwa, Dong-Kyoung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.7
    • /
    • pp.1260-1272
    • /
    • 2008
  • This paper proposes an obstacle detection algorithm using a laser scanner and a vision system for the path planning of autonomous mobile agents. As the scanner-based method can observe obstacles in only two dimensions, it is hard to detect the shape and the number of obstacles. On the other hand, the vision-based method is sensitive to the environment and has difficulty with accurate distance measurement. Thus, we combine these two methods based on the K-means algorithm so that obstacle avoidance and optimal path planning of autonomous mobile agents can be achieved.
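
As a loose illustration of the K-means step, the sketch below clusters 2D laser-scan points into a given number of obstacles; in the paper's setting the obstacle count and shapes would be informed by the vision system. The function name, the scikit-learn dependency, and the example data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_scan_points(scan_xy, n_obstacles):
    """Group 2D laser-scan points (Nx2 array, robot frame) into n_obstacles
    clusters with K-means; each cluster centroid is taken as one obstacle
    position."""
    km = KMeans(n_clusters=n_obstacles, n_init=10, random_state=0).fit(scan_xy)
    return km.cluster_centers_, km.labels_

# Example: points around two obstacles at roughly (1, 0) and (3, 2).
pts = np.array([[1.0, 0.1], [0.9, -0.1], [1.1, 0.0],
                [3.0, 2.1], [2.9, 1.9], [3.1, 2.0]])
centers, labels = cluster_scan_points(pts, n_obstacles=2)
print(centers)
```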

Effective Navigation of a Mobile Robot Using Guide-Marks Sensed through Vision (사각장치에 의해 감지된 가이드 마크를 이용한 이동 로보트의 효과적인 항법)

  • ZeungNam Bien et al.
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.38 no.12
    • /
    • pp.963-970
    • /
    • 1989
  • The navigation problem for a mobile robot is investigated. Specifically, it is proposed that simple guide-marks be introduced and that the navigation scheme be generated in conjunction with the guide-marks sensed through camera vision. For autonomous navigation, it is shown that a triple guide-mark system is more effective than a single guide-mark in estimating the position and orientation of the mobile robot itself. The navigation system is tested with the mobile robot HERO-I, equipped with a single camera, in a laboratory environment.
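
The abstract does not describe the estimation method itself, but the advantage of three guide-marks over one can be illustrated: with pose unknowns (x, y, theta), each observed mark bearing adds one constraint, so a single mark under-determines the pose while three marks give a well-posed fit. The sketch below is a hypothetical least-squares formulation for illustration only, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_pose_from_bearings(marks_xy, bearings, initial_guess=(0.0, 0.0, 0.0)):
    """Estimate robot pose (x, y, theta) from bearing angles to guide-marks at
    known 2D positions. Each mark contributes one constraint, so three marks
    over-determine the three pose unknowns."""
    marks = np.asarray(marks_xy, dtype=float)
    meas = np.asarray(bearings, dtype=float)

    def residuals(pose):
        x, y, theta = pose
        predicted = np.arctan2(marks[:, 1] - y, marks[:, 0] - x) - theta
        err = predicted - meas
        return np.arctan2(np.sin(err), np.cos(err))   # wrap residuals to (-pi, pi]

    return least_squares(residuals, initial_guess).x
```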

Corridor Navigation of the Mobile Robot Using Image Based Control

  • Han, Kyu-Bum;Kim, Hae-Young;Baek, Yoon-Su
    • Journal of Mechanical Science and Technology
    • /
    • v.15 no.8
    • /
    • pp.1097-1107
    • /
    • 2001
  • In this paper, a wall-following navigation algorithm for a mobile robot using a mono vision system is described. The key points of the mobile robot navigation system are effective acquisition of environmental information and fast recognition of the robot position; from this information, the mobile robot should be appropriately controlled to follow a desired path. For the recognition of the relative position and orientation of the robot with respect to the wall, the features of the corridor structure are extracted using the mono vision system, and then the relative position, namely the offset distance and steering angle of the robot from the wall, is derived for a simple corridor geometry. To alleviate the computational burden of the image processing, a Kalman filter is used to reduce the search region in the image space for line detection. Next, the robot is controlled by this information to follow the desired path. The wall-following control, a PD control scheme, is composed of two parts, the approaching control and the orientation control, and each control is performed by the steering and forward-driving motion of the robot. To verify the effectiveness of the proposed algorithm, real-time navigation experiments are performed. Through the results of the experiments, the effectiveness and flexibility of the suggested algorithm are verified in comparison with a pure encoder-guided mobile robot navigation system.
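
The abstract names a PD scheme split into approaching control and orientation control. A minimal sketch of how such a steering law could be combined is given below; the gains and the simple summation of the two terms are assumptions, not values or structure taken from the paper.

```python
def wall_following_steering(offset_error, offset_rate, angle_error, angle_rate,
                            kp_offset=1.0, kd_offset=0.2, kp_angle=2.0, kd_angle=0.3):
    """Combine an approaching term (driving the lateral offset from the desired
    wall distance to zero) and an orientation term (driving the heading angle
    relative to the wall to zero) into a single steering command, each with a
    PD law. The gains here are placeholders, not values from the paper."""
    approach = kp_offset * offset_error + kd_offset * offset_rate
    orient = kp_angle * angle_error + kd_angle * angle_rate
    return approach + orient
```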

Target Tracking Control of Mobile Robots with Vision System in the Absence of Velocity Sensors (속도센서가 없는 비전시스템을 이용한 이동로봇의 목표물 추종)

  • Cho, Namsub;Kwon, Ji-Wook;Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.62 no.6
    • /
    • pp.852-862
    • /
    • 2013
  • This paper proposes a target tracking control method for wheeled mobile robots with nonholonomic constraints by using a backstepping-like feedback linearization. For the target tracking, we apply a vision system to the mobile robot to obtain the relative posture information between the mobile robot and the target. The robot does not use sensors to obtain velocity information, so the velocities of both the mobile robot and the target are assumed to be unknown. Instead, the proposed method uses only the maximum velocity information of the mobile robot and the target. First, pseudo commands for the forward linear velocity and the heading direction angle are designed based on the kinematics using the obtained image information. Then, the actual control inputs are designed to make the actual forward linear velocity and heading direction angle follow the pseudo commands. Through simulations and experiments with the mobile robot, we have confirmed that the proposed control method is able to track the target even when velocity sensors are not used at all.
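
The backstepping-like feedback linearization itself is in the paper, not the abstract, so the sketch below only illustrates the first step described: forming pseudo commands for forward velocity and heading from the vision-derived relative position of the target. The gain, standoff distance, and saturation limit are assumptions.

```python
import math

def pseudo_commands(dx, dy, standoff=0.5, k_v=0.8, v_max=1.0):
    """From the target position (dx, dy) relative to the robot (e.g. recovered
    by the vision system), form a pseudo forward-velocity command and a
    heading-angle command for a unicycle-type robot: head toward the target
    and slow down near the standoff distance, saturating at v_max."""
    distance = math.hypot(dx, dy)
    heading_cmd = math.atan2(dy, dx)                       # desired heading in the robot frame
    v_cmd = max(0.0, min(v_max, k_v * (distance - standoff)))
    return v_cmd, heading_cmd
```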

Development of an Embedded Vision Platform for Internet-based Robot Control

  • Kim, Tae-Hee;Jeon, Jae-Wook
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2002.10a
    • /
    • pp.116.4-116
    • /
    • 2002
  • In the case of an overhead camera system, the mobile robot moves only within a static working area.
  • The mobile robot must use an onboard camera system to work over a wide working area.
  • The mobile robot must have a wireless LAN to remove the restriction on its movement.
  • The onboard camera system must therefore have a wireless LAN environment.
  • We develop an embedded vision platform using an onboard camera.

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok;Lee, Min-Jung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.5
    • /
    • pp.434-443
    • /
    • 2007
  • Presently, the exploration of an unknown environment is an important task for the new generation of mobile service robots, and mobile robots are navigated by a number of methods, using sensing systems such as sonar or vision. To fully utilize the strengths of both the sonar and the visual sensing systems, this paper presents a technique for localization of a mobile robot using the fused data of multiple ultrasonic sensors and a vision system. The mobile robot is designed for operating in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. In the case of ultrasonic sensors, these features yield range information in the form of an arc of a circle, generally known as an RCD (Region of Constant Depth). Localization is the continual provision of knowledge of position deduced from the a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition method and a physically-based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with the results produced by sets of experiments using a mobile robot.
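
The paper's sonar RCD model and vision-based recognition are not detailed in the abstract; the sketch below is only a generic extended-Kalman-filter localization skeleton (unicycle prediction plus a range update against a known feature) of the kind such a fusion would build on. The state layout, noise matrices, and function names are assumptions.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Predict step for a unicycle-model robot state x = [px, py, theta],
    given linear velocity v and angular velocity w over dt; Q is 3x3 process noise."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_range(x, P, z, landmark, R):
    """Update step with a single range measurement z to a feature at a known
    2D position (e.g. recognized by the vision system); R is 1x1 measurement noise."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r = np.hypot(dx, dy)
    H = np.array([[-dx / r, -dy / r, 0.0]])        # Jacobian of the range model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + (K @ np.array([z - r])).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```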

Development of a Sensor System for Real-Time Posture Measurement of Mobile Robots (이동 로봇의 실시간 자세 추정을 위한 센서 시스템의 개발)

  • 이상룡;권승만
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.17 no.9
    • /
    • pp.2191-2204
    • /
    • 1993
  • A sensor system has been developed to measure the posture (position and orientation) of mobile robots working in industrial environments. The proposed sensor system consists of a CCD camera, retro-reflective landmarks, a strobe unit, and an image processing board. The proposed hardware can be built at an economical price compared to commercial vision systems. The system is capable of measuring the posture of mobile robots within 60 msec when a 386 personal computer is used as the host computer. The experimental results demonstrated a remarkable performance of the proposed sensor system in the posture measurement of mobile robots: the average position error is less than 3 mm and the average orientation error is less than 1.5 degrees.
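
The abstract does not specify how posture is computed from the landmark images. Assuming, for illustration only, that two retro-reflective markers fixed on the robot are located in world coordinates by the vision system, the sketch below recovers position and heading from them; the marker layout and function name are hypothetical.

```python
import math

def posture_from_markers(front_xy, rear_xy):
    """Recover the 2D posture (x, y, heading) of a robot from the measured world
    coordinates of two markers fixed on it, one toward the front and one toward
    the rear: the position is the midpoint and the heading is the direction
    from the rear marker to the front marker."""
    x = 0.5 * (front_xy[0] + rear_xy[0])
    y = 0.5 * (front_xy[1] + rear_xy[1])
    heading = math.atan2(front_xy[1] - rear_xy[1], front_xy[0] - rear_xy[0])
    return x, y, heading

# Example: front marker at (1.2, 0.1), rear marker at (0.8, -0.1)
print(posture_from_markers((1.2, 0.1), (0.8, -0.1)))  # ~(1.0, 0.0, 0.46 rad)
```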