• Title/Summary/Keyword: Probability Robot


Improved Map construction for Mobile Robot using Genetic Algorithm and Fuzzy (진화 알고리즘과 퍼지 논리를 이용한 이동로봇의 개선된 맵 작성)

  • Son, Jung-Su;Jung, Suk-Yoon;Jin, Kwang-Sik;Yoon, Tae-Sung
    • Proceedings of the KIEE Conference / 2002.07d / pp.2451-2453 / 2002
  • In this paper, we present an infrared-sensor-aided map building method for a mobile robot using a genetic algorithm and fuzzy logic. The existing Bayesian update model, which relies on ultrasonic sensors alone, degrades map quality at irregular walls because of the wide beam width of sonar and the assumed Gaussian probability distribution. To solve this problem, we propose an improved map building method that uses supplementary infrared sensors. In this method, the wide sonar beam is subdivided by the infrared sensors, and occupancy probability is distributed over the subdivisions according to the infrared readings using fuzzy logic and a genetic algorithm (a minimal sketch follows this entry).

  • PDF
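
A minimal Python sketch of the kind of beam-subdivision update described in the abstract above, assuming a log-odds occupancy grid. The fuzzy membership function, the probability values, and the sector layout are illustrative assumptions; the genetic-algorithm tuning of the fuzzy rules is omitted.

```python
import numpy as np

def fuzzy_ir_weight(ir_range, sonar_range, tol=0.15):
    """Triangular fuzzy membership (illustrative): how strongly the IR reading
    supports an occupied cell near the sonar return range."""
    return max(0.0, 1.0 - abs(ir_range - sonar_range) / tol)

def update_sector(log_odds, sector_cells, sonar_range, ir_range,
                  p_occ=0.70, p_free=0.35):
    """Log-odds Bayesian update of one angular sector of the sonar cone.
    `sector_cells` is a list of (cell_index, range_to_cell) pairs along the sector;
    cells short of the return are treated as free, cells near the return as occupied,
    with the occupied evidence scaled by the fuzzy IR support for this sector."""
    w = fuzzy_ir_weight(ir_range, sonar_range)
    for idx, r in sector_cells:
        if r < sonar_range - 0.1:
            p = p_free                       # free space before the sonar return
        else:
            p = 0.5 + (p_occ - 0.5) * w      # occupied evidence scaled by IR support
        log_odds[idx] += np.log(p / (1.0 - p))
    return log_odds

# usage: update three cells of one sector, with the IR agreeing with a 1.0 m sonar return
grid = np.zeros(100)
grid = update_sector(grid, [(10, 0.5), (11, 0.9), (12, 1.0)],
                     sonar_range=1.0, ir_range=1.02)
```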

Probability Distribution-Based Object Avoidance with a Laser Scanner (확률 분포 기반의 레이저 스캐너를 이용한 장애물 회피)

  • Lee, Jin-Seob;Kwon, Ji-Wook;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • Proceedings of the KIEE Conference / 2007.10a / pp.339-340 / 2007
  • This paper proposes an object avoidance algorithm for a mobile robot equipped with a laser scanner. The object detection system detects objects in front of the mobile robot using the laser scanner. The proposed method is based on a probability distribution and finds local paths that avoid collisions (see the sketch after this entry). Simulation results show the feasibility of the proposed method.

  • PDF
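
A small Python sketch of one way such a probability-based local-path choice can look. Treating each laser return as contributing a Gaussian angular weight and scoring candidate headings is an illustrative assumption, not the authors' exact formulation.

```python
import numpy as np

def collision_score(heading, scan_angles, scan_ranges, sigma=0.3, horizon=1.5):
    """Probability-like collision score for a candidate heading: each laser return
    gets a Gaussian angular weight around the heading, scaled by how deeply it
    intrudes into the planning horizon (both choices are illustrative)."""
    angular = np.exp(-0.5 * ((scan_angles - heading) / sigma) ** 2)
    proximity = np.clip((horizon - scan_ranges) / horizon, 0.0, 1.0)
    return float(np.max(angular * proximity))

def pick_local_heading(scan_angles, scan_ranges, goal_heading,
                       n_candidates=31, goal_weight=0.3):
    """Choose the candidate heading with the best trade-off between collision
    score and deviation from the goal heading."""
    candidates = np.linspace(-np.pi / 2, np.pi / 2, n_candidates)
    costs = [collision_score(c, scan_angles, scan_ranges)
             + goal_weight * abs(c - goal_heading) for c in candidates]
    return candidates[int(np.argmin(costs))]

# usage: a single obstacle straight ahead pushes the chosen heading off to the side
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.where(np.abs(angles) < 0.1, 0.6, 5.0)
best = pick_local_heading(angles, ranges, goal_heading=0.0)
```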

Experimental result of Real-time Sonar-based SLAM for underwater robot (소나 기반 수중 로봇의 실시간 위치 추정 및 지도 작성에 대한 실험적 검증)

  • Lee, Yeongjun;Choi, Jinwoo;Ko, Nak Yong;Kim, Taejin;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.3 / pp.108-118 / 2017
  • This paper presents experimental results of real-time sonar-based SLAM (simultaneous localization and mapping) using probability-based landmark recognition. The sonar-based SLAM is used for navigation of an underwater robot. Inertial sensors, an IMU (Inertial Measurement Unit) and a DVL (Doppler Velocity Log), and external information from sonar image processing are fused with an Extended Kalman Filter (EKF) to obtain the navigation information. The vehicle location is estimated from the inertial sensor data and corrected by sonar data, which provides the relative position between the vehicle and a landmark on the bottom of the basin. For verification of the proposed method, experiments were performed in a basin environment using an underwater robot, yShark.
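
A compact Python sketch of the EKF structure described above: dead-reckon the pose with DVL velocity and IMU yaw rate, then correct it with a relative landmark position. For brevity the landmark measurement is taken as a world-frame offset; the paper's formulation works with sonar-frame measurements, so this is only an illustrative skeleton.

```python
import numpy as np

def ekf_predict(x, P, v_body, yaw_rate, dt, Q):
    """Prediction: dead-reckon the pose [x, y, yaw] with DVL forward speed and IMU yaw rate."""
    px, py, yaw = x
    x_pred = np.array([px + v_body * np.cos(yaw) * dt,
                       py + v_body * np.sin(yaw) * dt,
                       yaw + yaw_rate * dt])
    F = np.array([[1.0, 0.0, -v_body * np.sin(yaw) * dt],
                  [0.0, 1.0,  v_body * np.cos(yaw) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_correct(x, P, z_rel, landmark_xy, R):
    """Correction with a sonar-derived landmark measurement, simplified here to the
    world-frame offset landmark - vehicle (the paper measures in the sonar frame)."""
    h = np.asarray(landmark_xy) - x[:2]          # predicted relative position
    H = np.array([[-1.0, 0.0, 0.0],
                  [0.0, -1.0, 0.0]])
    y = np.asarray(z_rel) - h                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(3) - K @ H) @ P

# usage: one predict/correct cycle with small process and measurement noise
x, P = np.zeros(3), np.eye(3) * 0.1
x, P = ekf_predict(x, P, v_body=0.5, yaw_rate=0.02, dt=1.0, Q=np.eye(3) * 1e-3)
x, P = ekf_correct(x, P, z_rel=[1.4, 0.1], landmark_xy=[2.0, 0.0], R=np.eye(2) * 1e-2)
```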

Artificial Neural Network for Stable Robotic Grasping (안정적 로봇 파지를 위한 인공신경망)

  • Kim, Kiseo;Kim, Dongeon;Park, Jinhyun;Lee, Jangmyung
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.94-103 / 2019
  • The optimal grasping point of an object varies with the object's shape, weight, and material, the grasping contact with the robot hand, and the grasping force. To derive the optimal grasping points for each object with a three-fingered robot hand, the optimal point and posture are derived from the geometry of the object and the hand using an artificial neural network. The grasping cost function is constructed from the probability density function of the normal distribution. Considering the characteristics of the object and the robot hand, the optimum height and width for grasping the object are set. The resultant force between the contact area of the robot finger and the object is estimated from the grasping force of the finger and the gravitational force of the object. In addition, the geometric and gravitational center points of the object are considered when obtaining the optimum grasping position with the artificial neural network. To show the effectiveness of the proposed algorithm, the friction cone for stable grasping has been modeled through grasping experiments.
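
A minimal Python sketch of a grasping cost built from normal-distribution PDFs, as the abstract describes. Combining distances to the geometric and gravitational centers with fixed weights is an illustrative assumption; the paper derives the cost and the grasp posture through its neural network.

```python
import numpy as np

def gaussian_pdf(d, sigma):
    """Normal-distribution PDF of a distance d around zero."""
    return np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def grasp_cost(grasp_point, geom_center, grav_center, sigma=0.02, w_geom=0.5):
    """Grasping cost from normal-distribution PDFs of the distances between a
    candidate grasp point and the object's geometric and gravitational centers;
    lower (more negative) cost means a better candidate. The weight and sigma
    are illustrative, not the paper's tuned values."""
    d_geom = np.linalg.norm(np.asarray(grasp_point) - np.asarray(geom_center))
    d_grav = np.linalg.norm(np.asarray(grasp_point) - np.asarray(grav_center))
    return -(w_geom * gaussian_pdf(d_geom, sigma)
             + (1.0 - w_geom) * gaussian_pdf(d_grav, sigma))

# usage: a grasp point between the two centers scores better than a distant one
near = grasp_cost([0.01, 0.0, 0.05], [0.0, 0.0, 0.05], [0.02, 0.0, 0.04])
far = grasp_cost([0.10, 0.0, 0.05], [0.0, 0.0, 0.05], [0.02, 0.0, 0.04])
assert near < far
```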

Multiple Target Tracking and Forward Velocity Control for Collision Avoidance of Autonomous Mobile Robot (실외 자율주행 로봇을 위한 다수의 동적 장애물 탐지 및 선속도 기반 장애물 회피기법 개발)

  • Kim, Sun-Do;Roh, Chi-Won;Kang, Yeon-Sik;Kang, Sung-Chul;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.14 no.7 / pp.635-641 / 2008
  • In this paper, a laser range finder (LRF) is used to detect both static and dynamic obstacles for the safe navigation of a mobile robot. LRF measurements containing the obstacle geometry are first processed to extract the characteristic points of the obstacles in the sensor field of view. The dynamic states of the characteristic points are then approximated with a kinematic model and tracked by associating the measurements with a Probabilistic Data Association Filter. Finally, the collision avoidance algorithm is built on a fuzzy decision-making algorithm that depends on the obstacle states provided by the proposed tracking algorithm. The performance of the proposed algorithm is evaluated through experiments with the experimental mobile robot.
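
A simplified single-target PDAF update in Python, to illustrate the association step the abstract refers to: each gated measurement is weighted by its Gaussian likelihood, a missed-detection hypothesis is kept, and the weighted innovations are combined. The parameter values are illustrative, and the PDAF covariance-inflation terms are omitted for brevity.

```python
import numpy as np

def pdaf_update(x, P, measurements, H, R, p_detect=0.9, clutter=0.1):
    """Simplified single-target PDAF update: weight each gated measurement by its
    Gaussian likelihood, add a missed-detection hypothesis, combine the weighted
    innovations, and apply one Kalman gain (covariance inflation terms omitted)."""
    z_pred = H @ x
    S = H @ P @ H.T + R
    S_inv = np.linalg.inv(S)
    norm = np.sqrt(np.linalg.det(2.0 * np.pi * S))
    likes = np.array([p_detect * np.exp(-0.5 * (z - z_pred) @ S_inv @ (z - z_pred)) / norm
                      for z in measurements])
    beta = np.append(likes, (1.0 - p_detect) * clutter)   # last entry: no detection
    beta /= beta.sum()
    combined = sum(b * (z - z_pred) for b, z in zip(beta[:-1], measurements))
    K = P @ H.T @ S_inv
    return x + K @ combined, (np.eye(len(x)) - K @ H) @ P

# usage: constant-position state observed directly, one likely return plus clutter
x, P = np.array([1.0, 2.0]), np.eye(2) * 0.5
H, R = np.eye(2), np.eye(2) * 0.1
x, P = pdaf_update(x, P, [np.array([1.1, 2.05]), np.array([3.0, 0.5])], H, R)
```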

Non-parametric Density Estimation with Application to Face Tracking on Mobile Robot

  • Feng, Xiongfeng;Kubik, K.Bogunia
    • Proceedings of the ICROS (Institute of Control, Robotics and Systems) Conference / 2001.10a / pp.49.1-49 / 2001
  • The skin color model is a very important concept in face detection, face recognition, and face tracking. Usually, this model is obtained by estimating a probability density function of the skin color distribution, and in many cases the underlying density function is assumed to follow a Gaussian distribution. In this paper, a new method for non-parametric estimation of the probability density function, using a feed-forward neural network, is applied to estimate the underlying skin color model (a minimal sketch follows this entry). With this method, the resulting skin color model is better than the Gaussian estimate and closely approaches the real distribution. Applications to face detection and face ...

  • PDF
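
A small Python sketch of the same idea under stated assumptions: build a normalized chromaticity histogram from labeled skin pixels and fit a feed-forward regressor to it, so the density is not forced to be Gaussian. The use of scikit-learn's MLPRegressor, the (r, g) chromaticity space, and the bin count are illustrative choices, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumed available; any regressor would do

def fit_skin_density(skin_rg, bins=32):
    """Non-parametric skin-color density: build a normalized 2D histogram over the
    (r, g) chromaticity of labeled skin pixels, then fit a small feed-forward network
    to it so the model is not constrained to a Gaussian shape."""
    hist, r_edges, g_edges = np.histogram2d(skin_rg[:, 0], skin_rg[:, 1],
                                            bins=bins, range=[[0, 1], [0, 1]],
                                            density=True)
    r_centers = 0.5 * (r_edges[:-1] + r_edges[1:])
    g_centers = 0.5 * (g_edges[:-1] + g_edges[1:])
    rr, gg = np.meshgrid(r_centers, g_centers, indexing="ij")
    X = np.column_stack([rr.ravel(), gg.ravel()])   # bin centers as inputs
    y = hist.ravel()                                # histogram density as target
    return MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# usage: synthetic skin samples clustered in chromaticity space, then query one point
samples = np.clip(np.random.normal([0.45, 0.35], 0.05, size=(2000, 2)), 0.0, 1.0)
density_net = fit_skin_density(samples)
p = density_net.predict([[0.45, 0.35]])
```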

A Probabilistic Approach for Mobile Robot Localization under RFID Tag Infrastructures

  • Seo, Dae-Sung;Won, Dae-Heui;Yang, Gwang-Woong;Choi, Moo-Sung;Kwon, Sang-Ju;Park, Joon-Woo
    • Proceedings of the ICROS (Institute of Control, Robotics and Systems) Conference / 2005.06a / pp.1797-1801 / 2005
  • SLAM (simultaneous localization and mapping) and AI (artificial intelligence) have been active research areas in robotics for two decades. In particular, localization is one of the most important issues in mobile robot research. Until now, expensive sensors such as laser scanners have been used for mobile robot localization. As RFID reader devices, antennas, and RFID tags become smaller and cheaper, the proliferation of RFID technology is advancing rapidly. In this paper, a smart floor using passive RFID tags is proposed, and the passive tags are used mainly to identify the mobile robot's location on the smart floor. We discuss a number of challenges related to this approach, such as RFID tag distribution (density and structure), typing, and clustering. On a smart floor using RFID tags, because the reader can only sense whether a tag is within its sensing area, the localization error is as large as that sensing area, and until now there has been no study that estimates the pose of a mobile robot using RFID tags. Therefore, two algorithms are suggested: the Markov localization algorithm to reduce the location (x, y) error and the Kalman filter algorithm to estimate the heading (θ) of the mobile robot (see the sketch after this entry). We applied these algorithms in experiments with our personal robot CMR-P3 and show the possibility of this probabilistic approach, using cheap sensors such as odometry and RFID tags, for mobile robot localization on the smart floor.

  • PDF
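
A minimal grid-based Markov localization sketch in Python for the tag-reading setup described above: the belief over floor cells is propagated with a simple motion model and concentrated on a tag's cell when that tag is read. The noise parameters and the uniform-spread motion model are illustrative assumptions; the Kalman filter heading estimate is not shown.

```python
import numpy as np

def motion_update(belief, move, p_correct=0.8):
    """Shift the cell belief by the commanded move; the remaining probability mass is
    spread uniformly to model odometry error (wrap-around at the grid edge is ignored
    for brevity)."""
    shifted = np.roll(np.roll(belief, move[0], axis=0), move[1], axis=1)
    return p_correct * shifted + (1.0 - p_correct) / belief.size

def tag_update(belief, tag_cell, p_hit=0.9):
    """When the reader detects the tag installed at `tag_cell`, concentrate the belief
    on that cell while keeping a small false-alarm likelihood elsewhere."""
    likelihood = np.full(belief.shape, (1.0 - p_hit) / (belief.size - 1))
    likelihood[tag_cell] = p_hit
    posterior = belief * likelihood
    return posterior / posterior.sum()

# usage: uniform belief over a 10 x 10 tag grid, move one cell in x, then read tag (3, 4)
belief = np.full((10, 10), 1.0 / 100)
belief = tag_update(motion_update(belief, (1, 0)), (3, 4))
```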

Development of Sensor Device and Probability-based Algorithm for Braille-block Tracking (확률론에 기반한 점자블록 추종 알고리즘 및 센서장치의 개발)

  • Roh, Chi-Won;Lee, Sung-Ha;Kang, Sung-Chul;Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.3 / pp.249-255 / 2007
  • In a fire situation, it is difficult for a rescue robot to use sensors such as a vision sensor, ultrasonic sensor, or laser distance sensor, because dense smoke diffuses, refracts, or blocks light and sound. However, the braille blocks installed for the visually impaired at public places such as subway stations can serve as a map for an autonomous mobile robot's localization and navigation. In this paper, we developed a laser sensor scan device that can detect braille blocks in spite of dense smoke and integrated the device into the robot developed at KIST to carry out rescue missions in various hazardous disaster areas. We implemented an MCL algorithm to estimate the robot's attitude from the scanned data (a minimal sketch follows this entry), transformed the braille block map into a topological map, and designed a nonlinear path tracking controller for autonomous navigation. Various simulations and experiments verify that the developed laser sensor device and the proposed localization method are effective for autonomous tracking of braille blocks, and that the autonomous navigation robot system can be used for rescue under fire.
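
A compressed Monte Carlo localization (particle filter) step in Python, matching the role of the attitude estimation described above in spirit: particles are propagated with noisy odometry and re-weighted by how well the laser-detected braille-block offset matches the offset predicted from the block map. The 1-D offset measurement, the noise levels, and the `block_map_offset` lookup are hypothetical simplifications, not the paper's implementation.

```python
import numpy as np

def mcl_step(particles, weights, odom, measured_offset, block_map_offset, sigma=0.05):
    """One MCL step: propagate 1-D particle states with noisy odometry, re-weight by
    how well the laser-detected braille-block offset matches the offset predicted by
    the (caller-supplied) block map, then resample."""
    particles = particles + odom + np.random.normal(0.0, 0.01, particles.shape)
    predicted = block_map_offset(particles)
    weights = weights * np.exp(-0.5 * ((measured_offset - predicted) / sigma) ** 2)
    weights = weights / weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# usage: the 'map' says the block line sits at lateral offset 0.2 m from the origin
particles = np.random.normal(0.0, 0.1, 200)       # lateral position hypotheses
weights = np.full(200, 1.0 / 200)
particles, weights = mcl_step(particles, weights, odom=0.0, measured_offset=0.15,
                              block_map_offset=lambda p: 0.2 - p)
```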

Experimental Result on Map Expansion of Underwater Robot Using Acoustic Range Sonar (수중 초음파 거리 센서를 이용한 수중 로봇의 2차원 지도 확장 실험)

  • Lee, Yeongjun;Choi, Jinwoo;Lee, Yoongeon;Choi, Hyun-Taek
    • The Journal of Korea Robotics Society / v.13 no.2 / pp.79-85 / 2018
  • This study focuses on autonomous exploration based on map expansion for an underwater robot equipped with acoustic sonars. Map expansion is applicable to large-area mapping, but it may affect localization accuracy. Thus, as the key contribution of this paper, we propose a method for underwater autonomous exploration wherein the robot weighs the trade-off between map expansion ratio and position accuracy, selects which of the two has higher priority, and then moves to the corresponding mission step (a minimal sketch follows this entry). An occupancy grid map is synthesized from the measurements of an acoustic range sonar, which determine the probability of occupancy. This information is then used to determine a path to the frontier, which becomes the new search point. During area searching and map building, the robot revisits artificial landmarks to improve its position accuracy, based on imaging-sonar recognition and EKF-SLAM, whenever the position uncertainty exceeds the predetermined threshold. Real-time experiments were conducted using an underwater robot, yShark, to validate the proposed method, and the analysis of the results is discussed herein.
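
A small Python sketch of the frontier-versus-relocalization decision the abstract describes: find frontier cells in the probability occupancy grid, and head for a landmark instead of a frontier when the position uncertainty is too large. The grid thresholds and the uncertainty threshold are illustrative assumptions.

```python
import numpy as np

def find_frontiers(grid, free_thr=0.3, unknown_band=(0.45, 0.55)):
    """Frontier cells of a probability occupancy grid: free cells that touch a cell
    whose occupancy probability is still near the 0.5 'unknown' value."""
    lo, hi = unknown_band
    frontiers = []
    for i in range(1, grid.shape[0] - 1):
        for j in range(1, grid.shape[1] - 1):
            if grid[i, j] < free_thr:
                nbrs = grid[i - 1:i + 2, j - 1:j + 2]
                if np.any((nbrs > lo) & (nbrs < hi)):
                    frontiers.append((i, j))
    return frontiers

def next_goal(position_std, frontiers, landmarks, robot_cell, std_threshold=0.5):
    """Trade-off from the abstract: if position uncertainty is too high, revisit the
    nearest artificial landmark to relocalize; otherwise expand the map at the nearest frontier."""
    targets = landmarks if position_std > std_threshold else frontiers
    return min(targets, key=lambda t: np.hypot(t[0] - robot_cell[0], t[1] - robot_cell[1]))

# usage: a tiny grid that is free on the left and still unknown on the right
grid = np.full((6, 6), 0.5)
grid[:, :3] = 0.1
goal = next_goal(0.2, find_frontiers(grid), landmarks=[(0, 0)], robot_cell=(3, 1))
```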

Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion (영상처리와 센서융합을 활용한 지능형 6족 이동 로봇)

  • Lee, Sang-Mu;Kim, Sang-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.365-371 / 2009
  • An intelligent hexapod mobile robot with various types of sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors and an image processing algorithm. First, to detect objects, active sensors such as infrared sensors and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensor outputs; the difference between the measured and calculated values is less than 5%. This paper also suggests an effective visual detection system for moving objects based on specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide the existence of a moving object. Weighting values are assigned to the results from each sensor and from the camera, and these results are combined into a single value that represents the probability of an object within the limited distance (see the sketch below). The sensor fusion technique improves the detection rate by at least 7% compared with any individual sensor.
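
A minimal Python sketch of the weighted fusion step described above, combining per-sensor confidences into one probability-like value. The weight values and the [0, 1] confidence inputs are illustrative assumptions; the paper tunes its own weights for the hexapod's sensors.

```python
def fuse_detections(ir_conf, sonar_conf, vision_conf,
                    w_ir=0.3, w_sonar=0.3, w_vision=0.4):
    """Weighted fusion of per-sensor confidences (each in [0, 1]) into a single value
    read as the probability that an object is present within the limited distance.
    The weights are illustrative and must sum to 1."""
    assert abs(w_ir + w_sonar + w_vision - 1.0) < 1e-9
    return w_ir * ir_conf + w_sonar * sonar_conf + w_vision * vision_conf

# usage: strong vision detection, moderate range confirmation from IR and sonar
p_object = fuse_detections(ir_conf=0.6, sonar_conf=0.7, vision_conf=0.9)
```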