• Title/Summary/Keyword: Range finder

Lane Marking Detection of Mobile Robot with Single Laser Rangefinder (레이저 거리 센서만을 이용한 자율 주행 모바일 로봇의 도로 위 정보 획득)

  • Jung, Byung-Jin;Park, Jun-Hyung;Kim, Taek-Young;Kim, Deuk-Young;Moon, Hyung-Pil
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.6
    • /
    • pp.521-525
    • /
    • 2011
  • Lane marking detection is one of the important issues in the field of autonomous mobile robots. It matters especially in urban environments, such as downtown pavements or the tour tracks of a science park, whose road surfaces carry continuous patterns. Although there has been much research on lane detection and lane tracing, most of it relies mainly on vision sensors. In this paper, we obtain two-dimensional data of intensity and distance using a single laser rangefinder, and we design a simple classifier and filtering algorithm for lane detection that uses only one LRF (Laser Range Finder). By extending the usage of the LRF from range finding to lane detection, this research gives mobile robots more functionality from a single sensor and should help robot developers design simpler and more efficient autonomous driving systems.
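The abstract does not give the classifier itself, but the idea can be illustrated with a minimal sketch: road paint is more retroreflective than asphalt, so high-intensity LRF returns are candidate lane points, and short isolated runs are filtered out as noise. All names, thresholds, and data below are invented for illustration.

```python
# Hypothetical sketch: flag LRF returns as lane-paint candidates by
# thresholding intensity, then keep only runs of consecutive detections.
# Threshold and minimum run length are illustrative assumptions.

def classify_lane_points(scan, intensity_threshold=0.6, min_run=3):
    """scan: list of (distance, intensity) pairs from one LRF sweep.
    Returns a boolean mask marking lane-paint candidates, keeping only
    runs of at least `min_run` consecutive high-intensity returns."""
    raw = [i >= intensity_threshold for _, i in scan]
    mask = [False] * len(raw)
    run_start = None
    for k, flag in enumerate(raw + [False]):  # sentinel closes the last run
        if flag and run_start is None:
            run_start = k
        elif not flag and run_start is not None:
            if k - run_start >= min_run:
                for j in range(run_start, k):
                    mask[j] = True
            run_start = None
    return mask

scan = [(2.0, 0.2), (2.0, 0.9), (2.1, 0.8), (2.1, 0.85),
        (2.2, 0.1), (2.3, 0.7), (2.3, 0.1)]
print(classify_lane_points(scan))
```

The run-length filter is what makes this usable on a real scan: a single bright speck (the last-but-one sample above) is rejected, while a contiguous painted stripe survives.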

Distance Data Analysis of Indoor Environment for Ultrasonic Sensor Error Decrease (초음파 센서 오차 감소를 위한 실내 환경의 거리 자료 분석)

  • Lim, Byung-Hyun;Ko, Nak-Yong;Hwang, Jong-Sun;Kim, Yeong-Min;Park, Hyun-Chul
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2003.05b
    • /
    • pp.62-65
    • /
    • 2003
  • When a mobile robot moves around autonomously without man-made landmarks, it is essential to recognize the placement of surrounding objects, especially for self-localization, obstacle avoidance, and target classification and localization. To recognize the environment we use many kinds of sensors, such as ultrasonic sensors, laser range finders, and CCD cameras. Among these, ultrasonic sensors (sonar) are inexpensive and easy to use. In this paper, we analyze sonar data and propose a method to recognize features of an indoor environment. The environment is assumed to consist of planes, edges, and corners. For the analysis, sonar data of planes, edges, and corners are accumulated over several given ranges. The data are filtered to eliminate noise using the Kalman filter algorithm. Then, the data for each feature are compared with one another to extract the characteristics of each feature. We demonstrate the applicability of the proposed method using sonar data obtained from a sonar transducer rotating and scanning the range information around an indoor environment.
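For the noise-filtering step the abstract mentions, a one-dimensional Kalman filter over repeated sonar readings of a static feature is the textbook form. The sketch below assumes a constant-range model and made-up noise variances; it is not the paper's exact filter.

```python
# Illustrative 1D Kalman filter for smoothing repeated sonar range
# readings of a static feature (constant-range model). The process and
# measurement variances are assumed values, not taken from the paper.

def kalman_smooth(measurements, process_var=1e-4, meas_var=0.04):
    x, p = measurements[0], 1.0   # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var                 # predict (range assumed constant)
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # correct with measurement z
        p *= (1 - k)                     # shrink the variance
        estimates.append(x)
    return estimates

# noisy readings of a wall that is really 1.0 m away
noisy = [1.02, 0.97, 1.05, 0.99, 1.01, 0.96, 1.03]
smoothed = kalman_smooth(noisy)
print(round(smoothed[-1], 3))
```

As more readings accumulate the gain shrinks, so the estimate settles near the true range instead of chasing each noisy sample.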

Autonomous Calibration of a 2D Laser Displacement Sensor by Matching a Single Point on a Flat Structure (평면 구조물의 단일점 일치를 이용한 2차원 레이저 거리감지센서의 자동 캘리브레이션)

  • Joung, Ji Hoon;Kang, Tae-Sun;Shin, Hyeon-Ho;Kim, SooJong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.2
    • /
    • pp.218-222
    • /
    • 2014
  • In this paper, we introduce an autonomous calibration method for a 2D laser displacement sensor (e.g. a laser vision sensor or laser range finder) by matching a single point on a flat structure. Many arc welding robots carry a 2D laser displacement sensor to extend their applications by recognizing the environment (e.g. base metal and seam). In such systems, the sensing data must be transformed into the robot's coordinates, and the geometric relation (i.e. rotation and translation) between the robot's coordinates and the sensor's coordinates must be known for the transformation. Calibration is the process of inferring this geometric relation between the sensor and the robot. In general, matching at least 3 points is required to infer the relation. However, we introduce a novel method that calibrates using only a single point match, together with a specific flat structure (i.e. a circular hole) that makes single-point calibration possible. By moving the robot to a specific pose, we fix the rotation component of the calibration result to a constant, so only a single point is needed. The flat structure can be installed easily at a manufacturing site because it has virtually no volume (i.e. it is almost a 2D structure). The calibration process is fully autonomous and needs no manual operation. A robot equipped with the sensor moves to the specific pose by sensing features of the circular hole, such as the chord length and the center position of the chord. We show the precision of the proposed method through repeated experiments in various situations. Furthermore, we applied the result of the proposed method to sensor-based seam tracking with a robot, and we report the difference in the robot's TCP (Tool Center Point) trajectory. This experiment shows that the proposed method ensures precision.
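The chord features the abstract mentions have a simple geometric core: when a 2D laser line cuts a circular hole of known radius, the chord endpoints constrain the circle center to one of two points. A minimal sketch of that geometry (names and values are illustrative, and this is only one ingredient of the paper's method):

```python
import math

# Hypothetical geometry helper: given the two endpoints of the chord that
# a 2D laser line cuts across a circular hole of known radius, recover the
# two candidate circle centers (one on each side of the chord).

def circle_centers_from_chord(p1, p2, radius):
    (x1, y1), (x2, y2) = p1, p2
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2            # chord midpoint
    chord = math.hypot(x2 - x1, y2 - y1)
    # distance from chord midpoint to the center (Pythagoras)
    h = math.sqrt(radius ** 2 - (chord / 2) ** 2)
    ux, uy = (y2 - y1) / chord, (x1 - x2) / chord    # unit normal to chord
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

# a 6-unit chord across a circle of radius 5
c1, c2 = circle_centers_from_chord((0.0, 0.0), (6.0, 0.0), 5.0)
print(c1, c2)
```

Which of the two candidates is correct is resolved by context (the hole lies on a known side of the scan line), which is why a single scan of the hole can pin down the sensor pose.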

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.355-361
    • /
    • 2006
  • In large-scale environments like airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will survey such large-scale environments and share the resulting information with a human operator, for example whether an object is present or a window is open. Both for visualization of information and as a human-machine interface for remote control, a 3D model can give much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand, makes the user feel present at the robot's location so that remote interaction becomes more natural, and shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple, easy-to-use method for obtaining a 3D textured model. For realism, we need to integrate the 3D model with real scenes. Most other 3D modeling methods use two data acquisition devices: one for obtaining the 3D model, typically a 2D laser range finder, and another for obtaining realistic textures, typically a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range finder, texture acquisition and stitching, and texture mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering texture. Our geometric 3D model consists of planes that model the floor and walls, with the plane geometry extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two 1394 cameras with wide field-of-view angles. Image stitching and image cutting are used to generate textured images corresponding to the 3D model.
The algorithm is applied to two cases: a corridor, and a four-walled space like a room of a building. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
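The floor-and-walls geometry described above amounts to extruding 2D wall segments from the metric map into vertical planes. A minimal sketch of that step (function name, data layout, and the wall height are assumptions, not the paper's code):

```python
# Illustrative sketch: extrude 2D wall segments from a metric map into
# vertical 3D quads, the kind of plane-based geometry a floor-and-walls
# model uses. The 2.5 m wall height is an assumed default.

def extrude_walls(segments, height=2.5):
    """segments: list of ((x1, y1), (x2, y2)) wall lines on the floor plane.
    Returns one quad per wall as four (x, y, z) corners in order."""
    quads = []
    for (x1, y1), (x2, y2) in segments:
        quads.append([(x1, y1, 0.0), (x2, y2, 0.0),       # bottom edge
                      (x2, y2, height), (x1, y1, height)])  # top edge
    return quads

# two walls meeting at a corner of a 4 m x 3 m room
walls = [((0, 0), (4, 0)), ((4, 0), (4, 3))]
quads = extrude_walls(walls)
print(len(quads), quads[0][2])
```

Each quad can then be emitted as a VRML `IndexedFaceSet` face and assigned the stitched texture image for that wall.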

An Algorithm for Detecting Linear Velocity and Angular Velocity to Improve the Convenience of an Assistive Walking System (보행보조시스템의 조작 편리성 향상을 위한 사용자의 선속도 및 회전각속도 검출 알고리즘)

  • Kim, Byeong-Cheol;Lee, Won-Young;Eom, Su-Hong;Jang, Mun-Seok;Kim, Pyeong-Su;Lee, Eung-Hyuk
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.10 no.4
    • /
    • pp.321-328
    • /
    • 2016
  • In this paper, we propose a walk-status method that can be fused with a conventional walk-intention method to improve the convenience of an electric assistive walking system for elderly people with restricted walking capability. The system uses a handlebar as a trigger and regards grabbing the handlebar as expressing the will to walk. It then adopts the user's linear velocity and angular velocity, measured by a laser range finder, as the linear and angular velocity of the system. To achieve this, we propose a method that finds a virtual central point of the human body by estimating the central point between the two legs. The experiments compare the user's linear and angular velocity with the system's. The results show that the errors in linear velocity and angular velocity between user and system are 1% and 2.77%, which means the user's linear and angular velocity can be applied to the system. We also confirm that the proposed fusion method can prevent the user from being dragged by the assistive walking system and avoid malfunctions caused by lack of experience.
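The virtual-central-point idea can be sketched in a few lines: take the midpoint of the two LRF-detected leg positions as the body center, then differentiate consecutive centers for linear velocity and the leg-to-leg direction for angular velocity. Everything below (frame, names, sample values) is an illustrative assumption, not the paper's algorithm.

```python
import math

# Hypothetical sketch: derive the user's linear and angular velocity from
# the midpoint between two legs detected by the LRF at two time steps.

def body_velocity(left_t0, right_t0, left_t1, right_t1, dt):
    """Each leg position is (x, y) in the walker frame; dt in seconds.
    Returns (linear_velocity, angular_velocity)."""
    c0 = ((left_t0[0] + right_t0[0]) / 2, (left_t0[1] + right_t0[1]) / 2)
    c1 = ((left_t1[0] + right_t1[0]) / 2, (left_t1[1] + right_t1[1]) / 2)
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    v = math.hypot(dx, dy) / dt                    # speed of the body center
    # body orientation from the left-to-right leg direction at each step
    h0 = math.atan2(left_t0[1] - right_t0[1], left_t0[0] - right_t0[0])
    h1 = math.atan2(left_t1[1] - right_t1[1], left_t1[0] - right_t1[0])
    w = (h1 - h0) / dt                             # angular velocity
    return v, w

# legs 0.2 m apart, the user stepping 5 cm forward in 0.1 s, no turning
v, w = body_velocity((-0.1, 0.0), (0.1, 0.0), (-0.1, 0.05), (0.1, 0.05), 0.1)
print(round(v, 2), round(w, 2))
```

In practice the raw midpoints would also be low-pass filtered, since each leg detection jitters with the gait cycle.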

3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.6
    • /
    • pp.26-34
    • /
    • 2008
  • Traversability and path-planning information is essential for UGV (Unmanned Ground Vehicle) navigation, and such information can be obtained by analyzing 3D terrain. In this paper, we present a method of 3D terrain modeling that combines color information from a camera, precise distance information from a 2D Laser Range Finder (LRF), and wheel-encoder information from a mobile robot, while using less data. We also present a method of 3D terrain modeling from GPS/IMU and 2D LRF data, again with less data. To fuse the color information from the camera with the distance information from the 2D LRF, we obtain the extrinsic parameters between the camera and the LRF using a planar pattern. We set up the fused system on a mobile robot and run an experiment in an indoor environment, and we run an outdoor experiment to reconstruct 3D terrain with the 2D LRF and GPS/IMU (Inertial Measurement Unit). The resulting 3D terrain model is point-based and requires a large amount of data; to reduce the amount of data, we use a cubic-grid-based model instead of a point-based model.
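The cubic-grid reduction can be illustrated with a standard voxel-grid downsampling pass: bucket the 3D points into cells of a fixed edge length and keep one centroid per occupied cell. Cell size and names below are assumptions; the paper's grid model may differ in detail.

```python
from collections import defaultdict

# Illustrative cubic-grid reduction: replace a dense point cloud with one
# centroid per occupied voxel. The 0.5 m cell edge is an assumed value.

def cubic_grid_downsample(points, cell=0.5):
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)   # integer voxel index
        cells[key].append(p)
    # one representative point (the centroid) per occupied voxel
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in cells.values()]

pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.1), (0.9, 0.9, 0.9)]
reduced = cubic_grid_downsample(pts)
print(len(reduced))
```

The first two points fall into the same voxel and collapse to one centroid, so three input points become two stored cells; on a dense terrain scan the reduction is far larger.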

Development of Acquisition and Analysis System of Radar Information for Small Inshore and Coastal Fishing Vessels - Position Tracking and Real-Time Monitoring- (연근해 소형 어선의 레이더 정보 수록 및 해석 시스템 개발 -위치 추적 및 실시간 모니터링 -)

  • 이대재;김광식;신형일;변덕수
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.39 no.4
    • /
    • pp.337-346
    • /
    • 2003
  • This paper describes a system and method for automatically tracking and monitoring, in real time, the position of target ships relative to the own ship, using a PC-based radar system that displays radar images and electronic charts together on a single PC screen. The system includes a simulator generating the GGA and VTG information of target ships and a simulator generating the TTM and OSD outputs of an ARPA radar; the host computer accepts NMEA 0183 sentences on the maneuvering information of target ships from these simulators. The results obtained are summarized as follows. 1. The system developed in this study can be used as a range finder for measuring the distance between two ships and as a device providing maneuvering information, such as the distance and bearing from the own ship to target ships, on the ECS screen. 2. From position tracking of a selected target ship with an update rate of 5 seconds using the $\alpha$-$\beta$ tracker, we conclude that the smoothing effect of the $\alpha$-$\beta$ tracker is very effective and stable, except during the first minute or so after the target is detected. 3. Since the real-time maneuvering information of tracked ship targets was successfully transferred over a local area network (LAN) from a host computer equipped with a radar target extractor to various monitoring computers aboard the ship, we conclude that this system can serve as a sub-monitoring system for ARPA radar.
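The alpha-beta tracker in result 2 is a standard fixed-gain filter; a one-coordinate sketch with the paper's 5 s update interval but assumed gains looks like this (not the authors' implementation):

```python
# Illustrative alpha-beta tracker for one coordinate of a ship target,
# updated at a fixed interval. The abstract's 5 s update rate is used;
# the alpha and beta gains are assumed values for the sketch.

def alpha_beta_track(measurements, dt=5.0, alpha=0.5, beta=0.1):
    x, v = measurements[0], 0.0          # initial position and velocity
    track = [x]
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict position ahead by dt
        r = z - x_pred                   # innovation (residual)
        x = x_pred + alpha * r           # smoothed position
        v = v + (beta / dt) * r          # smoothed velocity
        track.append(x)
    return track

# a target closing at roughly one unit per update, with measurement jitter
zs = [0.0, 1.0, 2.1, 2.9, 4.0, 5.0]
track = alpha_beta_track(zs)
print([round(p, 2) for p in track])
```

The initial velocity estimate of zero is also why, as the abstract observes, the smoothed track lags during the first minute or so after detection: the velocity gain needs several updates to catch up to the true motion.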

Localization of a Mobile Robot Using Ceiling Image with Identical Features (동일한 형태의 특징점을 갖는 천장 영상 이용 이동 로봇 위치추정)

  • Noh, Sung Woo;Ko, Nak Yong;Kuc, Tae Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.2
    • /
    • pp.160-167
    • /
    • 2016
  • This paper reports a localization method for a mobile robot using ceiling images. The ceiling has landmarks which are not distinguishable from one another. The location of every landmark in the map is given a priori, while the correspondence between a detected landmark and a landmark in the map is not. Only the initial pose of the robot relative to the landmarks is given. The method uses a particle filter for localization. Along with estimating the robot pose, it also associates each landmark detected in the ceiling image with a landmark in the map. The method is tested in an indoor environment with circular landmarks on the ceiling. The test verifies the feasibility of the method in environments where range data to walls or beacons are unavailable or severely corrupted by noise. This makes the method useful for localization in a warehouse, where laser range finder measurements and range data to RF or ultrasonic beacons have large uncertainty.
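The core trick (weighting particles while the landmarks are indistinguishable) can be sketched in one dimension: each particle explains the measurement with whichever map landmark fits best, which performs data association and pose weighting at once. Everything below (world, seed, noise values) is an invented toy, far simpler than the paper's 2D ceiling-image filter.

```python
import math
import random

# Toy particle-filter step with identical landmarks: each particle picks
# the nearest-fitting map landmark to explain the measured range, then is
# weighted by the residual. All values are illustrative assumptions.

LANDMARKS = [1.0, 3.0, 5.0]   # known positions, mutually indistinguishable

def pf_step(particles, move, detected_range, noise=0.3):
    random.seed(0)            # deterministic process noise for the demo
    moved = [p + move + random.gauss(0, 0.05) for p in particles]
    weights = []
    for p in moved:
        # data association: the landmark that best explains the range
        best = min(abs(abs(lm - p) - detected_range) for lm in LANDMARKS)
        weights.append(math.exp(-best ** 2 / (2 * noise ** 2)))
    s = sum(weights)
    weights = [w / s for w in weights]
    # weighted mean of the particles as the pose estimate
    return sum(p * w for p, w in zip(moved, weights))

particles = [0.0, 0.5, 1.0, 1.5, 2.0]
est = pf_step(particles, move=0.5, detected_range=1.0)  # robot near x = 2
print(round(est, 2))
```

A full implementation would also resample the particles by weight and track orientation, but the association-inside-the-weight-update is the part that lets identical landmarks work at all.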

A Study for Vision-based Estimation Algorithm of Moving Target Using Aiming Unit of Unguided Rocket (무유도 로켓의 조준 장치를 이용한 영상 기반 이동 표적 정보 추정 기법 연구)

  • Song, Jin-Mo;Lee, Sang-Hoon;Do, Joo-Cheol;Park, Tai-Sun;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.20 no.3
    • /
    • pp.315-327
    • /
    • 2017
  • In this paper, we present a method for estimating the position and velocity of a moving target using the range and bearing measurements from the multiple sensors of an aiming unit. In many cases, a conventional low-cost gyro sensor and a portable laser range finder (LRF) degrade the accuracy of the estimate. To mitigate these problems, we propose two methods: background image tracking, which assists the low-cost gyro sensor, and principal component analysis (PCA), which copes with the shortcomings of a portable LRF. We show that our method is robust to low-frequency, biased, and noisy inputs, and we present a comparison between our method and the extended Kalman filter (EKF).
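The PCA ingredient the abstract names has a compact closed form in 2D: the dominant direction of a set of position samples is the leading eigenvector of their covariance matrix. The sketch below shows only that ingredient with invented data, not the paper's full LRF-compensation scheme.

```python
import math

# Illustrative 2D PCA: recover the dominant motion direction of a set of
# position measurements from the closed-form eigenvector angle of the
# 2x2 covariance matrix.

def principal_direction(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    # angle of the leading eigenvector of [[cxx, cxy], [cxy, cyy]]
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)

# noisy samples of a target moving along the 45-degree diagonal
pts = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 3.0), (4.0, 4.1)]
angle = principal_direction(pts)
print(round(math.degrees(angle), 1))
```

Projecting noisy LRF fixes onto this dominant axis is one way to suppress the cross-track noise of a hand-held range finder, which is the kind of benefit the abstract attributes to PCA.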

Experimental Studies of a Cascaded Controller with a Neural Network for Position Tracking Control of a Mobile Robot Based on a Laser Sensor (레이저 센서 기반의 Cascaded 제어기 및 신경회로망을 이용한 이동로봇의 위치 추종 실험적 연구)

  • Jang, Pyung-Soo;Jang, Eun-Soo;Jeon, Sang-Woon;Jung, Seul
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.7
    • /
    • pp.625-633
    • /
    • 2004
  • In this paper, position control of a car-like mobile robot using a neural network is presented. Positional information of the mobile robot is provided by a laser range finder located remotely, through wireless communication, and the heading angle is measured by a gyro sensor. Taking these two sensor readings as a reference, the robot posture is corrected by a cascaded controller. To improve the tracking performance, a neural network is combined with the cascaded controller to compensate for any uncertainty in the robot; the network functions as a compensator that minimizes the positional errors in an on-line fashion. A car-like mobile robot is built as a test bed, and experimental studies of several controllers are conducted and compared. The experimental results show that the best position-control performance is achieved by the cascaded controller with a neural network.
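A cascaded structure of the general kind described here has an outer loop that turns position error into a desired heading and an inner loop that turns heading error into a steering command. The sketch below is a generic proportional version with invented gains, not the paper's controller, and it omits the neural-network compensator entirely.

```python
import math

# Hypothetical cascaded-control step for a car-like robot: outer position
# loop produces a desired heading; inner loop steers toward it. Gains,
# names, and the speed limit are illustrative assumptions.

def cascaded_step(pose, target, k_pos=1.0, k_head=2.0, v_max=0.5):
    x, y, theta = pose
    # outer loop: desired heading points at the target position
    theta_des = math.atan2(target[1] - y, target[0] - x)
    # inner loop: steering proportional to the wrapped heading error
    err = math.atan2(math.sin(theta_des - theta), math.cos(theta_des - theta))
    steering = k_head * err
    # slow down as the target is approached, capped at v_max
    speed = min(v_max, k_pos * math.hypot(target[0] - x, target[1] - y))
    return speed, steering

speed, steering = cascaded_step((0.0, 0.0, 0.0), (1.0, 1.0))
print(round(speed, 2), round(steering, 2))
```

In the paper's setup, a neural network would add a learned correction on top of these commands to absorb wheel slip and other unmodeled dynamics; the cascade above is only the baseline it compensates.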