• Title/Abstract/Keyword: robot laser scanning system

3차원 로봇 레이저 스캐닝 시스템의 모델링과 캘리브레이션 (Modeling and Calibration of a 3D Robot Laser Scanning System)

  • 이종광;윤지섭;강이석
    • 제어로봇시스템학회논문지 / Vol. 11, No. 1 / pp.34-40 / 2005
  • In this paper, we describe the modeling of a 3D robot laser scanning system consisting of a laser stripe projector, a camera, and a 5-DOF robot, and propose a calibration method for it. Nonlinear radial distortion is included in the camera model to improve the calibration accuracy. The 3D range data are calculated using the optical triangulation principle, which exploits the geometrical relationship between the camera and the laser stripe plane. For optimal estimation of the system model parameters, a real-coded genetic algorithm is applied in the calibration process. Experimental results show that the constructed system can measure 3D positions to within about 1 mm of error. The proposed scheme can be applied to kinematically dissimilar robot systems without loss of generality and has potential for recognition of unknown environments.
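As a rough illustration of the optical triangulation step described in this abstract, the sketch below intersects the viewing ray of an (undistorted) pixel with the laser stripe plane. The intrinsics, the radial distortion coefficient, and the plane parameters are placeholder values, not the calibrated quantities reported in the paper.

```python
import numpy as np

# Illustrative (not the paper's calibrated) parameters.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0   # pinhole intrinsics [px]
k1 = -0.12                                     # first radial distortion coefficient
plane_n = np.array([0.0, -0.5, 1.0])           # laser-plane normal in camera frame
plane_n /= np.linalg.norm(plane_n)
plane_d = 0.3                                  # plane offset: n . X = d  [m]

def undistort_normalized(u, v, iters=5):
    """Convert a distorted pixel to normalized camera coordinates,
    compensating first-order radial distortion by fixed-point iteration."""
    xd = (u - cx) / fx
    yd = (v - cy) / fy
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        x = xd / (1.0 + k1 * r2)
        y = yd / (1.0 + k1 * r2)
    return x, y

def pixel_to_3d(u, v):
    """Triangulate: intersect the viewing ray through pixel (u, v)
    with the laser stripe plane n . X = d."""
    x, y = undistort_normalized(u, v)
    ray = np.array([x, y, 1.0])        # ray direction in camera frame
    t = plane_d / plane_n.dot(ray)     # scale so the point lies on the plane
    return t * ray                     # 3D point in camera coordinates

print(pixel_to_3d(400.0, 300.0))
```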

자율주행 로봇을 위한 Laser Range Finder (Laser Range Finder for an Autonomous Mobile Robot)

  • 차영엽;권대갑
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 1992년도 추계학술대회 논문집 / pp.266-270 / 1992
  • In this study, an active vision system using a laser range finder is proposed for the navigation of a mobile robot in an unknown environment. The laser range finder consists of a slit laser beam generator, a scanning mechanism, a CCD camera, and a signal processing unit. The beam from the laser source is spread into a slit by a set of cylindrical lenses, and the scanning mechanism sweeps the slit beam up and down and rotates it around the robot. The image of the laser beam reflected from the surface of an object is captured on the CCD array. A high-speed image processing algorithm is proposed for real-time navigation of the mobile robot. Experiments show that accurate, real-time recognition of the environment can be achieved using the proposed laser range finder.
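A minimal sketch of how points triangulated from one slit image could be accumulated into a robot-centred point cloud as the scanning mechanism rotates; the frame convention, rotation axis, and numeric values are assumptions for illustration, not the authors' definitions.

```python
import numpy as np

def stripe_points_to_robot_frame(points_sensor, pan_angle_rad):
    """Rotate 3D points measured in the stripe-sensor frame into the robot
    frame, given the azimuth of the rotating scanning mechanism."""
    c, s = np.cos(pan_angle_rad), np.sin(pan_angle_rad)
    R = np.array([[  c,  -s, 0.0],
                  [  s,   c, 0.0],
                  [0.0, 0.0, 1.0]])   # rotation about the robot's vertical axis
    return points_sensor @ R.T

# One stripe of triangulated points (placeholder values, metres) taken at 30 deg.
stripe = np.array([[0.1, 0.0, 1.2],
                   [0.2, 0.0, 1.3],
                   [0.3, 0.0, 1.1]])
cloud = stripe_points_to_robot_frame(stripe, np.deg2rad(30.0))
print(cloud)
```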

용접부 자동 탐상을 위한 이동 로봇의 개발 (Development of a magnetic caterpillar based robot for autonomous scanning in the weldment)

  • 장준우;정경민;김호철;이정기
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2000년도 추계학술대회 논문집 / pp.713-716 / 2000
  • In this study, we present a mobile robot for ultrasonic scanning of weldments. A magnetic caterpillar mechanism is selected so that the robot can travel on inclined surfaces and vertical walls. A motion control board and motor drivers are developed to control four DC servo motors. A virtual device driver is also developed for communication between the control board and a host PC through dual-port RAM. To give the mobile robot stable and accurate movement, a PID control algorithm is applied to the robot's motion control. A vision system for detecting the weld line is developed using a laser slit beam as the light source. In the experiments, movement of the mobile robot is tested on an inclined surface and a vertical wall.
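The PID loop mentioned above is a standard technique; a minimal discrete form is sketched below with placeholder gains, not the controller actually implemented on the paper's motion control board.

```python
class PID:
    """Minimal discrete PID controller, as commonly used for DC-servo
    speed/position loops; gains and sample time are placeholders."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive one wheel's measured speed toward 100 rpm with a 10 ms loop.
pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.01)
speed = 0.0
for _ in range(5):
    command = pid.update(setpoint=100.0, measurement=speed)
    speed += 0.2 * command        # crude first-order plant stand-in
    print(round(speed, 2))
```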

3 차원 곡면 데이터 획득을 위한 멀티 레이져 비젼 시스템 개발 (Development of Multi-Laser Vision System For 3D Surface Scanning)

  • 이정환;권기연;이현철;도영칠;최두진;박진형;김대경;박영준
    • 대한기계학회:학술대회논문집 / 대한기계학회 2008년도 추계학술대회A / pp.768-772 / 2008
  • Various scanning systems have been studied in many industrial areas to acquire range data or to reconstruct an explicit 3D model. Optical technology is now widely used by virtue of its non-contact nature and high accuracy. In this paper, we describe a 3D laser scanning system developed to reconstruct the 3D surface of a large-scale object such as a curved plate of a ship hull. Our scanning system comprises four parallel laser vision modules (channels) using a triangulation technique. For the multi-laser vision system, a calibration method based on a least-squares technique is applied. For global scanning, an effective method is presented that avoids the difficulty of matching the scanning results of the individual cameras. A minimal image processing algorithm and a robot-based calibration technique are also applied. A prototype has been implemented for testing.
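The least-squares calibration mentioned for the multi-laser vision modules can be illustrated with a generic least-squares plane fit; the sample points below are placeholders, and the SVD-based fit is only one common way such a calibration could be realized, not necessarily the authors' formulation.

```python
import numpy as np

def fit_laser_plane(points):
    """Least-squares plane fit: returns (normal, d) with n . X = d.
    Uses the SVD of mean-centred points."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]               # direction of smallest variance
    d = normal.dot(centroid)
    return normal, d

# Placeholder calibration points believed to lie on one laser plane (metres).
samples = [[0.00, 0.0, 1.00],
           [0.10, 0.0, 1.05],
           [0.00, 0.2, 1.00],
           [0.10, 0.2, 1.05]]
n, d = fit_laser_plane(samples)
print(n, d)
```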

착유로봇 매니퓰레이터와 구동제어장치 설계 (Design of Driving Control Unit and Milking Robot Manipulator)

  • 신규재
    • 전자공학회논문지 / Vol. 51, No. 9 / pp.238-247 / 2014
  • A milking robot system must accurately detect the teat positions of a moving cow, and the robot manipulator must be controlled to track the detected teat positions so that the milking cups can be attached to the teats. The proposed milking robot manipulator scans the teats with a position-detecting laser sensor and is realized by independent three-axis brushless servo drive control through an embedded drive control unit. The manipulator consists of a laser sensor for teat position detection, four milking cups, a three-axis (x, y, z) manipulator, a teat recognition unit and milking-cup drive with a g-axis feed function, an embedded drive control unit, and an automatic milk control line. Because the entire drive system is electrically driven, the proposed robot system has a simple structure, can be manufactured at low cost, and generates little noise during operation, which helps keep the cows calm. The designed robot was tested on dairy cows at a National Institute of Animal Science farm, and the experimental results confirmed that the performance requirements of the design specification were satisfied.
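A minimal sketch of the kind of Cartesian tracking loop such a three-axis manipulator might run, with the laser-detected teat position as the setpoint; the step limit and positions are illustrative assumptions, and this is not the embedded drive-control firmware described in the paper.

```python
import numpy as np

def track_teat(axis_pos, teat_pos, max_step=0.005):
    """One control tick of a Cartesian x-y-z tracking loop: move each axis
    toward the detected teat position, limited to max_step per tick."""
    axis_pos = np.asarray(axis_pos, dtype=float)
    delta = np.asarray(teat_pos, dtype=float) - axis_pos
    step = np.clip(delta, -max_step, max_step)
    return axis_pos + step

pos = np.array([0.0, 0.0, 0.0])        # current cup-carrier position [m]
teat = np.array([0.02, -0.01, 0.03])   # detected teat position [m] (placeholder)
for _ in range(8):
    pos = track_teat(pos, teat)
print(pos)
```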

3D Map Building of The Mobile Robot Using Structured Light

  • Lee, Oon-Kyu;Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.123.1-123 / 2001
  • For autonomous navigation, mobile robots need the capability to recognize their 3D environment. In this paper, an on-line 3D map building method for autonomous mobile robots is proposed. To obtain range data on the environment, we use a sensor system composed of a structured light source and a CCD camera, based on optical triangulation. The structured laser is projected as a horizontal stripe onto the scene, and the sensor system can rotate $\pm 30^{\circ}$ with a goniometer. By scanning with this system, we obtain laser stripe images of the environment and update the planes composing the environment through several image processing steps. From the laser stripe in the captured image, we find a center point in each column and form line segments by blobbing these center points. Then the planes of the environment are updated. These steps are performed on-line during the scanning phase. With the proposed method, we can efficiently build a 3D map of a structured environment.
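The per-column stripe-center extraction and the "blobbing" of centers into line segments described above could be sketched as follows; the threshold, jump tolerance, and synthetic test image are assumptions for illustration.

```python
import numpy as np

def stripe_centers(image, threshold=50):
    """For each image column, estimate the laser stripe centre row as the
    intensity-weighted centroid of pixels above a threshold.
    Returns (column, row) pairs; columns without stripe pixels are skipped."""
    centers = []
    for col in range(image.shape[1]):
        column = image[:, col].astype(float)
        mask = column > threshold
        if not mask.any():
            continue
        rows = np.nonzero(mask)[0]
        weights = column[rows]
        centers.append((col, float((rows * weights).sum() / weights.sum())))
    return centers

def group_into_segments(centers, max_row_jump=3.0):
    """Group consecutive column centres into line segments ('blobbing'):
    start a new segment whenever the stripe row jumps by more than max_row_jump."""
    segments, current = [], []
    for col, row in centers:
        if current and abs(row - current[-1][1]) > max_row_jump:
            segments.append(current)
            current = []
        current.append((col, row))
    if current:
        segments.append(current)
    return segments

# Synthetic 40x60 image with a bright horizontal stripe near row 20.
img = np.zeros((40, 60), dtype=np.uint8)
img[19:22, :] = 200
print(len(group_into_segments(stripe_centers(img))))   # -> 1 segment
```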

2차원 레이저 스캔을 이용한 로봇의 산악 주행 장애물 판단 (Obstacle Classification for Mobile Robot Traversability using 2-dimensional Laser Scanning)

  • 김민희;곽경운;김수현
    • 한국군사과학기술학회지 / Vol. 15, No. 1 / pp.1-8 / 2012
  • Obstacle detection using sensors such as laser, vision, radar, and ultrasonic sensors has been studied extensively for path planning of UGVs (Unmanned Ground Vehicles), but comparatively little has been reported on obstacle characterization. In this paper, an obstacle classification method using a 2-dimensional LMS (Laser Measurement System) is proposed, together with a method for deciding whether to avoid or traverse an obstacle. The basic idea is to classify obstacle characteristics from the 2D laser range data and intensity data. Roughness features are obtained from the range data using a simple linear regression model. The standard deviations of the roughness and intensity data are used as decision measures by comparing them with those of reference data. Such obstacle classification and decision making can give the UGV a shorter path to the target position and improve the survivability of the robot.
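The roughness feature from a simple linear regression and the standard-deviation comparison can be sketched as below; the margin factor, reference values, and scan window are illustrative, not the paper's tuned parameters.

```python
import numpy as np

def roughness_std(ranges):
    """Fit a line to one scan window of range data (simple linear regression)
    and return the standard deviation of the residuals as a roughness measure."""
    x = np.arange(len(ranges), dtype=float)
    slope, intercept = np.polyfit(x, ranges, 1)
    residuals = ranges - (slope * x + intercept)
    return residuals.std()

def traversable(ranges, intensities, ref_rough_std, ref_int_std, margin=1.5):
    """Decide traverse vs. avoid by comparing the window's roughness and
    intensity standard deviations with reference values."""
    return (roughness_std(ranges) < margin * ref_rough_std and
            np.std(intensities) < margin * ref_int_std)

# Placeholder scan window: gentle slope with small ripples.
ranges = np.array([2.00, 2.02, 2.01, 2.05, 2.04, 2.07, 2.06, 2.09])
intensities = np.array([110, 112, 109, 111, 113, 110, 112, 111])
print(traversable(ranges, intensities, ref_rough_std=0.05, ref_int_std=5.0))
```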

자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링 (Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots)

  • 김민영;조형석;김재훈
    • 제어로봇시스템학회논문지 / Vol. 8, No. 9 / pp.776-787 / 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a welding mobile robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for recognizing the working environment and tracking the weld seam, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system is developed based on optical triangulation in order to provide the robot with a 3D map of the work environment. Using this sensor system, a spatial filter based on neural network technology is designed for extracting the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy is developed so that the welding mobile robot can recognize its work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the recognition strategy and tactics are described and discussed in detail with a series of experiments.
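The laser scanning module for 3D voxel modeling can be illustrated with a minimal sparse voxel-map update; the voxel size and points are placeholders, and the paper's actual module is more elaborate.

```python
import numpy as np

def update_voxel_map(voxels, points, voxel_size=0.05):
    """Insert laser-scanned 3D points into a sparse occupancy voxel map,
    keyed by integer voxel indices (hit count per voxel)."""
    for p in np.asarray(points, dtype=float):
        key = tuple((p // voxel_size).astype(int))
        voxels[key] = voxels.get(key, 0) + 1
    return voxels

voxel_map = {}
scan = [[0.40, 0.02, 0.10], [0.41, 0.03, 0.10], [1.20, 0.50, 0.30]]
update_voxel_map(voxel_map, scan)
print(len(voxel_map), "occupied voxels")
```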

Mobile Robot Localization in Geometrically Similar Environment Combining Wi-Fi with Laser SLAM

  • Gengyu Ge;Junke Li;Zhong Qin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 5 / pp.1339-1355 / 2023
  • Localization is a hot research topic in many areas, especially in the mobile robot field. Because the global positioning system (GPS) signal is weak indoors, alternative indoor schemes include wireless signal transmitting and receiving solutions, laser rangefinders that build a map followed by a re-localization stage, visual positioning methods, etc. Among wireless signal positioning techniques, Wi-Fi is the most common one: Wi-Fi access points are installed in most indoor areas of human activity, and smart devices equipped with Wi-Fi modules can be seen everywhere. However, localizing a mobile robot with a Wi-Fi scheme usually lacks orientation information, and the distance error is large because of indoor signal interference. Another research direction, based mainly on laser sensors, is to actively sense the environment and localize within it. An occupancy grid map is built using the simultaneous localization and mapping (SLAM) method when the mobile robot enters the indoor environment for the first time; when the robot enters the environment again, it can localize itself against the known map. Nevertheless, this scheme works effectively only if the areas have salient geometric features. If the areas have similar scanning structures, such as a long corridor or similar rooms, the traditional methods always fail. To address the weaknesses of these two methods, this work proposes a coarse-to-fine paradigm and an improved localization algorithm that utilizes Wi-Fi to assist robot localization in a geometrically similar environment. Firstly, a grid map is built using laser SLAM. Secondly, a fingerprint database is built in the offline phase. Then, RSSI values are obtained in the localization stage to get a coarse localization. Finally, an improved particle filter method based on the Wi-Fi signal values is proposed to realize a fine localization. Experimental results show that the approach is effective and robust for both global localization and the kidnapped-robot problem: the localization success rate reaches 97.33%, while the traditional method always fails.
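The coarse-to-fine idea (Wi-Fi fingerprint matching for a coarse estimate, then particle reweighting for the fine stage) can be sketched as below; the fingerprint database, RSSI values, and Gaussian weighting are illustrative assumptions, not the paper's improved particle filter.

```python
import numpy as np

# Offline fingerprint database: grid position -> mean RSSI per access point.
# Positions and RSSI values are placeholders, not measured data.
fingerprints = {
    (1.0, 1.0): np.array([-45.0, -60.0, -70.0]),
    (5.0, 1.0): np.array([-60.0, -48.0, -72.0]),
    (9.0, 1.0): np.array([-72.0, -63.0, -50.0]),
}

def coarse_wifi_position(rssi):
    """Coarse localization: nearest fingerprint in RSSI space (1-NN)."""
    rssi = np.asarray(rssi, dtype=float)
    return min(fingerprints, key=lambda pos: np.linalg.norm(fingerprints[pos] - rssi))

def reweight_particles(particles, rssi, sigma=3.0):
    """Weight laser-SLAM particles by how well the Wi-Fi coarse estimate
    supports them, then normalize; one illustrative way to fuse the two cues."""
    anchor = np.asarray(coarse_wifi_position(rssi))
    dists = np.linalg.norm(particles - anchor, axis=1)
    weights = np.exp(-0.5 * (dists / sigma) ** 2)
    return weights / weights.sum()

particles = np.array([[1.2, 0.8], [5.1, 1.1], [8.8, 1.0]])   # candidate poses (x, y)
observed_rssi = [-70.0, -64.0, -51.0]                        # near the third fingerprint
print(reweight_particles(particles, observed_rssi))
```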