• Title/Summary/Keyword: Robot navigation

A Study on Infra-Technology of RCP Mobility System

  • Kim, Seung-Woo;Choe, Jae-Il;Im, Chan-Young
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1435-1439
    • /
    • 2004
  • Most recently, the CP (Cellular Phone) has become one of the most important technologies in the IT (Information Technology) field, and it occupies a position of great industrial and economic importance. To produce the best CP in the world, a new technological concept and an advanced implementation technique are required, due to the extreme level of competition in the world market. RT (Robot Technology) has been developed as a next-generation future technology. Unlike the industrial robots of the past, current robots require advanced technologies such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition. Therefore, this paper presents conceptual research on the development of the RCP (Robotic Cellular Phone), a new technological concept in which a synergy effect is generated by merging IT and RT. The RCP infrastructure consists of $RCP^{Mobility}$, $RCP^{Interaction}$, and $RCP^{Integration}$ technologies. For $RCP^{Mobility}$, human-friendly motion automation and personal services with walking and arming abilities are developed. $RCP^{Interaction}$ ability is achieved by modeling an emotion-generating engine, and $RCP^{Integration}$, which recognizes environmental and self conditions, is developed. By joining intelligent algorithms and the CP communication network with these three base modules, an RCP system is constructed. This paper focuses in particular on the RCP mobility system. $RCP^{Mobility}$ applies mobility, a popular robot technology, to the CP, combining human-friendly motion and a navigation function; it develops new application systems such as auto-charging and real-world entertainment functions, and this technology can turn a CP into a companion pet robot. It automates human-friendly motions such as the opening and closing of the CP, rotation of the antenna, manipulation, and wheel-walking. Its target is the implementation of wheel and manipulator functions that can serve humans with human-friendly motion. This paper therefore presents the definition, basic theory, and experimental results of the RCP mobility system, and the experimental results confirm its good performance.

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee;Lee, Jong-Shill;Ryu, Je-Goon;Lee, Eung-Hyuk;Hong, Seung-Hong;Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.4
    • /
    • pp.173-180
    • /
    • 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment. To achieve this, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features in two images captured by two parallel cameras. We capture the images with parallel cameras attached to the front of the robot and detect scale-invariant features in each image using SIFT (scale-invariant feature transform). We then match the feature points of the two images and obtain the relative location by 3D reconstruction of the matched points. A stereo camera requires high-precision calibration of the two cameras' extrinsic parameters and pixel-level matching between the two camera images; because we use two ordinary cameras together with scale-invariant feature points, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction needs no other sensor, and its results can simultaneously be used for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a ${\pm}6\;cm$ maximum error in the range of less than 2 m and a ${\pm}15\;cm$ maximum error in the range between 2 m and 4 m.
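
The two-camera pipeline described in this abstract maps naturally onto standard computer vision tooling. The sketch below is illustrative only, not the authors' implementation: SIFT detection with ratio-test matching between the two parallel views, followed by linear triangulation. The intrinsic matrix `K` and the 0.20 m baseline placement are assumed placeholder parameters.

```python
# Illustrative sketch (not the paper's code): SIFT matching between two
# parallel cameras, then linear triangulation of the matched points.
import cv2
import numpy as np

def relative_positions(img_left, img_right, K, baseline_m=0.20):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Lowe's ratio test rejects ambiguous correspondences.
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T

    # Parallel cameras: same K, right camera shifted along +x by the baseline.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), [[-baseline_m], [0.0], [0.0]]])

    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 points in the left-camera frame
```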

Development of Force Feedback Joystick for Remote Control of a Mobile Robot (이동로봇의 원격제어를 위한 힘 반향 조이스틱의 개발)

  • Suh, Se-Wook;Yoo, Bong-Soo;Joh, Joong-Seon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.1
    • /
    • pp.51-56
    • /
    • 2003
  • The main goal of existing mobile robot systems was complete autonomous navigation, and vision information was used only in an assistant role such as monitoring. For this reason, research has gradually moved toward more sophisticated autonomy, and production costs have also risen. However, it is also important to remotely control an inexpensive mobile robot system that has no intelligence at all; such systems may be much more effective than fully autonomous systems in practice. Visual information from a simple camera and distance information from ultrasonic sensors are used for this system, and collision avoidance becomes its most important problem. In this paper, we develop a force feedback joystick for remotely controlling the robot system with collision-avoiding capability. Fuzzy logic is used for the algorithm in order to implement the expert's knowledge intelligently. Experimental results show that the force feedback joystick works very well.
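
The abstract does not spell out the rule base, so the following is a speculative minimal sketch of one way a fuzzy collision-avoidance force could be computed: triangular memberships over the nearest ultrasonic range, defuzzified into a normalized reflective force on the stick. All membership breakpoints and force levels are invented placeholder values, not the authors' design.

```python
# Speculative sketch of a Mamdani-style fuzzy mapping from obstacle
# distance to a reflective joystick force; all constants are placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership; use a == b or b == c for a shoulder."""
    left = 1.0 if b == a else (x - a) / (b - a)
    right = 1.0 if c == b else (c - x) / (c - b)
    return max(min(left, right, 1.0), 0.0)

def feedback_force(distance_m, v_cmd):
    # Antecedents: obstacle distance is NEAR / MEDIUM / FAR.
    near = tri(distance_m, 0.0, 0.0, 0.5)
    med  = tri(distance_m, 0.3, 0.8, 1.5)
    far  = tri(distance_m, 1.0, 2.5, 2.5)

    # Rules: the closer the obstacle, the stronger the force pushing
    # the stick back toward neutral; scaled by the commanded speed.
    w = np.array([near, med, far])        # rule activations
    f = np.array([1.0, 0.4, 0.0])         # normalized force per rule

    # Weighted-average defuzzification.
    return float((w @ f) / (w.sum() + 1e-9)) * abs(v_cmd)
```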

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability has advantages of complementarity and cooperation in obtaining better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from the camera images, which serve as natural landmark points in the self-localization process. With the laser structured light sensor, it utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the results are discussed in detail.
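
The Bayesian fusion step has a compact closed form when each sensor's reliability is modeled as a Gaussian covariance. The sketch below shows that generic information-form product of two Gaussian pose estimates; the covariances standing in for the paper's experimentally predefined reliability functions are assumptions, not values from the paper.

```python
# Generic Bayesian fusion of two independent Gaussian pose estimates,
# e.g. one from vision landmarks and one from structured-light features.
import numpy as np

def fuse_gaussian(x_vision, P_vision, x_laser, P_laser):
    """Product of two Gaussians: information-filter form of Bayes' rule."""
    I_v = np.linalg.inv(P_vision)          # information from vision
    I_l = np.linalg.inv(P_laser)           # information from laser
    P_fused = np.linalg.inv(I_v + I_l)     # fused covariance
    x_fused = P_fused @ (I_v @ x_vision + I_l @ x_laser)
    return x_fused, P_fused
```

The more reliable sensor (smaller covariance) automatically dominates the fused estimate, which matches the abstract's motivation for weighting each sensor by its predefined reliability.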

Evaluation of Position Error and Sensitivity for Ultrasonic Wave and Radio Frequency Based Localization System (초음파와 무선 통신파 기반 위치 인식 시스템의 위치 오차와 민감도 평가)

  • Shin, Dong-Hun;Lee, Yang-Jae
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.34 no.2
    • /
    • pp.183-189
    • /
    • 2010
  • A localization system for indoor robots is an important technology for robot navigation in a building. Our localization system adopts the principle of GPS and consists of three or more satellite beacons and a receiver. Each beacon emits both an ultrasonic wave and a radio frequency signal. The receiver on the robot computes its distance to each beacon by measuring the time-of-flight difference between the ultrasonic wave and the radio frequency signal, and then computes its position from the distance information of three or more beacons whose positions are known. However, the distance information includes errors caused by the ultrasonic sensors, which we found to be limited to within one period of the wave (${\pm}2\;cm$ tolerance). This paper presents a method for predicting the maximum position error due to these distance errors by using a Taylor expansion and singular value decomposition (SVD). The paper also proposes a measure, sensitivity, to represent the accuracy with which the indoor localization system determines the robot's position for a given distance error.
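
The error-bound idea (first-order Taylor expansion plus SVD) can be stated compactly: linearizing the beacon-distance equations around the solved position gives a Jacobian whose rows are unit vectors from the beacons toward the receiver, and the smallest singular value of that Jacobian bounds how strongly distance error is amplified into position error. A minimal sketch follows; the beacon geometry passed in is a placeholder assumption.

```python
# Sketch of a first-order (Taylor + SVD) bound on trilateration error.
import numpy as np

def max_position_error(beacons, p, max_dist_err=0.02):
    """beacons: Nx3 beacon positions; p: solved 3D receiver position."""
    # Row i of J is the unit vector from beacon i to the receiver:
    # d_i = ||p - b_i||  =>  grad d_i = (p - b_i) / d_i.
    diffs = p - beacons
    J = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)

    # delta_d = J @ delta_p  =>  ||delta_p|| <= ||delta_d|| / sigma_min(J).
    sigma_min = np.linalg.svd(J, compute_uv=False)[-1]
    return np.sqrt(len(beacons)) * max_dist_err / sigma_min
```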

A Survey Study on the development of Omni-Wheel Drive Rider Robot with autonomous driving systems for Disabled People and Senior Citizens (자율주행 탑승용 옴니 드라이브 라이더 로봇 개발에 대한 장애인과 고령자의 욕구조사)

  • Rhee, G.M.;Kim, D.O.;Lee, S.C.
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.6 no.1
    • /
    • pp.17-27
    • /
    • 2012
  • This study provides development information on the Omni-Wheel Drive Rider Robot, a futuristic electric scooter with an autonomous driving system for users including disabled people and senior citizens, and it is meaningful in suggesting future alternatives to motorized wheelchairs and electric scooters. Prior to development of the robot, 49 people were surveyed: 18 who own electric scooters and 31 seniors who do not. The survey is summarized as follows. First, inconvenience when boarding and exiting, short mileage due to battery-recharging problems, and safe driving are the most urgent tasks; for these, the study shows that battery charging time, mileage, armrests, footrests, and seat angle are the primary considerations. Second, drivers prefer a joystick over a steering wheel because of the convenience of one-handed driving against hazards from footrests, sloping carriageways, and paving blocks, and one-handed driving with an automatic stop system can reduce driving fatigue. Moreover, the study suggests many design factors related to navigation systems, obstacle avoidance systems, omni-wheels, and automatic cover-opening systems for rainy weather.

Development of Adaptive Moving Obstacle Avoidance Algorithm Based on Global Map using LRF sensor (LRF 센서를 이용한 글로벌 맵 기반의 적응형 이동 장애물 회피 알고리즘 개발)

  • Oh, Se-Kwon;Lee, You-Sang;Lee, Dae-Hyun;Kim, Young-Sung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.5
    • /
    • pp.377-388
    • /
    • 2020
  • In this paper, for an autonomous mobile robot equipped with only LRF sensors, we propose an algorithm for avoiding moving obstacles in an environment where a global map containing the fixed obstacles is available. First, moving obstacles are extracted using the LRF distance sensor data and the global map. An ellipse-shaped safety radius is then created using the sum of the relative vector components between each extracted moving obstacle and the autonomous mobile robot. Considering this safety radius, the robot can avoid the moving obstacles and reach its destination. To verify the proposed algorithm, quantitative analysis is used for comparison with an existing algorithm: the path length and running time of the proposed algorithm are compared against the existing algorithm's path in the absence of moving obstacles. Because the proposed algorithm accounts for the relative speed and direction of the moving obstacles, it outperforms the existing algorithm in both path length and driving time.
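
The abstract describes the safety region only as an ellipse built from the sum of relative vector components, so the sketch below is one speculative reading of that idea: an ellipse stretched along the robot-obstacle relative velocity, so that faster approaches enlarge the keep-out region. The scale constants are invented placeholders.

```python
# Speculative sketch of a velocity-scaled elliptical safety region.
import numpy as np

def safety_ellipse(obs_pos, obs_vel, robot_vel, r_base=0.4, k_vel=0.5):
    v_rel = obs_vel - robot_vel                 # relative velocity vector
    speed = np.linalg.norm(v_rel)

    # Major axis grows with relative speed; minor axis keeps the base
    # clearance radius. Ellipse is oriented along the relative velocity.
    a = r_base + k_vel * speed
    b = r_base
    theta = np.arctan2(v_rel[1], v_rel[0])
    return obs_pos, a, b, theta

def violates(p, center, a, b, theta):
    """True if point p lies inside the keep-out ellipse."""
    c, s = np.cos(-theta), np.sin(-theta)
    dx, dy = p - center
    x, y = c * dx - s * dy, s * dx + c * dy     # rotate into ellipse frame
    return (x / a) ** 2 + (y / b) ** 2 <= 1.0
```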

Indoor Localization for Mobile Robot using Extended Kalman Filter (확장 칼만 필터를 이용한 로봇의 실내위치측정)

  • Kim, Jung-Min;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.5
    • /
    • pp.706-711
    • /
    • 2008
  • This paper presents an accurate localization scheme for mobile robots based on the fusion of an ultrasonic satellite (U-SAT) system with an inertial navigation system (INS), i.e., sensor fusion. Our aim is to achieve an accuracy of better than 100 mm. The INS consists of a yaw gyro and two wheel encoders, and the U-SAT consists of four transmitters and a receiver. The localization method presented in this paper fuses these sensors in an extended Kalman filter. The performance of the localization is verified by simulation and by two sets of actual driving data (straight and curved paths) gathered at about 0.5 m/s. The localization methods compared are general sensor fusion and sensor fusion through a Kalman filter using INS data. The simulation and experimental studies show the effectiveness of the proposed method for autonomous mobile robots.
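
The prediction/correction structure described here is the standard EKF loop: encoder and gyro odometry propagates the pose, and each U-SAT position fix corrects it. Below is a minimal sketch under a unicycle motion model; the noise covariances Q and R are placeholder assumptions, not the paper's tuned values.

```python
# Minimal EKF sketch: odometry prediction, U-SAT position correction.
import numpy as np

def ekf_step(x, P, v, w, z_usat, dt, Q, R):
    """x = [px, py, yaw]; v, w from encoders/gyro; z_usat = [px, py]."""
    px, py, th = x
    # --- Prediction: unicycle motion model driven by odometry ---
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q

    # --- Correction: U-SAT measures (px, py) directly ---
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z_usat - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred
```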

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.14 no.4
    • /
    • pp.553-561
    • /
    • 2010
  • This paper proposes a face tracking method that can be effectively applied to a robot's vision system. The proposed algorithm tracks facial areas after detecting areas of motion in the video. Motion is detected by taking the difference image of two consecutive frames and then removing noise with a median filter and erosion and dilation operations. To extract the skin color from the moving area, the color information of sample images is used. The skin color region and the background are separated by evaluating their similarity with membership functions generated from MIN-MAX values as fuzzy data. Within the face candidate region, the eyes are detected in the C channel of the CMY color space and the mouth in the Q channel of the YIQ color space. The face region is then tracked using the features of the eyes and mouth detected from a knowledge base. The experiments cover 1,500 frames of video from 10 subjects, 150 frames per subject. The results show a 95.7% detection rate (motion areas detected in 1,435 frames) and 97.6% successful face tracking (1,401 faces tracked).
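
The motion-detection front end (difference of consecutive frames, median filtering, then erosion and dilation) is straightforward to reproduce. The sketch below is an illustrative version with assumed kernel and threshold sizes, not the authors' code.

```python
# Illustrative motion-mask front end: frame differencing plus
# median / morphological noise removal.
import cv2
import numpy as np

def motion_mask(frame_prev, frame_curr, thresh=25):
    g1 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)

    diff = cv2.absdiff(g1, g2)                    # inter-frame difference
    diff = cv2.medianBlur(diff, 5)                # impulse-noise removal
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)

    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)  # remove speckles
    mask = cv2.dilate(mask, kernel, iterations=2) # restore region size
    return mask
```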

A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot (LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행)

  • Kim, Hyun Woo;Hawng, Yo-Seup;Kim, Yun-Ki;Lee, Dong-Hyuk;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.11
    • /
    • pp.1029-1035
    • /
    • 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for the autonomous navigation of a mobile robot. Many technologies exist for vehicle safety, such as airbags, ABS, and EPS, and real-time lane detection is a fundamental requirement for an automobile system that utilizes information from outside the vehicle. Representative lane recognition methods are vision-based and LRF-based. A vision-based system recognizes the three-dimensional environment well only under good image-capturing conditions; unexpected obstacles such as bad illumination, occlusions, and vibrations prevent vision alone from satisfying this fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination. For the three-dimensional lane detection, the difference in laser reflection between asphalt and lane paint, which depends on color and distance, is utilized to extract feature points. A stable tracking algorithm is also introduced empirically in this research. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
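
The key extraction step relies on lane paint returning a stronger laser reflection than asphalt, with intensity falling off with distance. The sketch below illustrates one way to realize that idea; the range-compensation factor and threshold are invented placeholders, not values from the paper.

```python
# Illustrative extraction of lane-candidate points from one LRF scan
# using range-compensated reflection intensity.
import numpy as np

def lane_points(ranges, intensities, angles, k=1.0, thresh=0.6):
    # Compensate for intensity falloff with distance before thresholding.
    norm_int = intensities * (1.0 + k * ranges)
    norm_int = norm_int / (norm_int.max() + 1e-9)
    sel = norm_int > thresh                      # bright returns = paint

    # Convert the selected polar returns to Cartesian feature points.
    xs = ranges[sel] * np.cos(angles[sel])
    ys = ranges[sel] * np.sin(angles[sel])
    return np.stack([xs, ys], axis=1)
```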