• Title/Summary/Keyword: Vision Based Sensor


Development of a Monitoring Module for a Steel Bridge-repainting Robot Using a Vision Sensor (비전센서를 이용한 강교량 재도장 로봇의 주행 모니터링 모듈 개발)

  • Seo, Myoung Kook;Lee, Ho Yeon;Jang, Dong Wook;Chang, Byoung Ha
    • Journal of Drive and Control / v.19 no.1 / pp.1-7 / 2022
  • Recently, a re-painting robot was developed to semi-automatically conduct blasting work in bridge spaces, improving work productivity and worker safety. In this study, a vision sensor-based monitoring module was developed to move the re-painting robot automatically along its path. The monitoring module provides direction information to the robot by analyzing the boundary between the painted surface and the bare metal surface. To capture stable images in unstable environments, various techniques for improving image visibility were applied. The driving performance was then verified in a comparable environment.
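The boundary-following idea in this abstract can be sketched in a few lines. This is not the paper's algorithm, only an illustration under assumed values: find the strongest intensity step along one grayscale scanline and treat its offset from the image centre as a steering cue.

```python
# Minimal sketch (not the paper's actual method): locate the
# painted/metal boundary in one grayscale scanline and turn its
# offset from the image centre into a steering direction.

def find_boundary(row):
    """Return the column with the largest intensity step."""
    best_col, best_step = 0, 0
    for col in range(1, len(row)):
        step = abs(row[col] - row[col - 1])
        if step > best_step:
            best_col, best_step = col, step
    return best_col

def steering_offset(row):
    """Positive -> steer right, negative -> steer left."""
    centre = len(row) / 2.0
    return find_boundary(row) - centre

# Painted surface (dark) on the left, bare metal (bright) on the right.
scanline = [40] * 70 + [200] * 30
print(steering_offset(scanline))   # boundary at col 70, centre 50 -> 20.0
```

A real module would add the visibility-enhancement preprocessing the abstract mentions before this step.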

Development of a Double-blades Road Cutter with Automatic Cutting and Load Sensing Control Technology (자동 절단과 부하 감응 제어 기술을 적용한 양날 도로절단기 개발)

  • Myoung Kook Seo;Myeong Cheol Kang;Jong Ho Park;Young Jin Kim
    • Journal of Drive and Control / v.21 no.1 / pp.53-58 / 2024
  • With the recent development of intelligence and automation technologies for construction machinery, demand for safety and efficiency in road-cutting operations has continued to increase. In response, a double-blade road cutter that can cut roads automatically has been developed. However, the two blades of such a cutter experience different loads depending on ground conditions and blade wear, and this load difference distorts the cutter's direction of travel. In this study, a vision sensor-based driving guide technology was developed to correct the driving path of road cutters. In addition, we developed a load-sensing technology that detects blade loads in real time and controls driving speed in the event of overload.
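The load-sensing behaviour described above amounts to a speed back-off rule. A hedged sketch, with the threshold, gain, and clamp all assumed rather than taken from the paper:

```python
# Hedged sketch of load-sensing speed control: if either blade's
# load exceeds a threshold, scale travel speed down in proportion
# to the overload. All constants are illustrative assumptions.

OVERLOAD_LIMIT = 80.0   # percent of rated blade load (assumed)
BASE_SPEED = 1.0        # nominal travel speed in m/min (assumed)

def travel_speed(left_load, right_load):
    """Return a reduced travel speed when either blade is overloaded."""
    peak = max(left_load, right_load)
    if peak <= OVERLOAD_LIMIT:
        return BASE_SPEED
    # Linear back-off: 100% load -> half speed, clamped at 10%.
    reduction = (peak - OVERLOAD_LIMIT) / (100.0 - OVERLOAD_LIMIT)
    return max(0.1, BASE_SPEED * (1.0 - 0.5 * reduction))

print(travel_speed(60, 70))   # within limits -> 1.0
print(travel_speed(60, 90))   # overloaded   -> 0.75
```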

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.107-111 / 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor is limited by the high cost of such sensors. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we use a commercial visual-inertial odometry sensor to estimate the current position and attitude. Based on these state estimates, 2D LiDAR measurements describe the surrounding environment to build the point cloud map. To analyze the performance of the proposed algorithm, we compared it against a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. The results confirm that a precise 3D point cloud map can be generated with the proposed low-cost sensor fusion system.
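The core mapping step, lifting each 2D LiDAR return into the world frame with the odometry pose, can be sketched as follows. The full system uses a 6-DoF attitude; this illustration assumes a yaw-only pose for brevity:

```python
import math

# Simplified sketch of the mapping step: each 2D LiDAR return
# (range, bearing) is lifted into the world frame using the pose
# reported by the visual-inertial odometry sensor. The full 6-DoF
# rotation is reduced here to yaw-only for brevity.

def scan_to_points(pose, scan):
    """pose = (x, y, z, yaw); scan = list of (range_m, bearing_rad)."""
    x, y, z, yaw = pose
    points = []
    for r, b in scan:
        # Point in the sensor frame, rotated and translated to world.
        px = x + r * math.cos(yaw + b)
        py = y + r * math.sin(yaw + b)
        points.append((px, py, z))
    return points

# Robot at (1, 2, 0.5), facing +x; one return 2 m straight ahead.
print(scan_to_points((1.0, 2.0, 0.5, 0.0), [(2.0, 0.0)]))
```

Accumulating these points over every pose along the trajectory yields the 3D point cloud map.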

Odor Cognition and Source Tracking of an Intelligent Robot based upon Wireless Sensor Network (센서 네트워크 기반 지능 로봇의 냄새 인식 및 추적)

  • Lee, Jae-Yeon;Kang, Geun-Taek;Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.1 / pp.49-54 / 2011
  • In this paper, we present a mobile robot that can recognize chemical odors, measure their concentration, and track their source indoors. The robot's sense of smell can classify several gases used in the experiments, such as ammonia, ethanol, and their mixture, with a neural network algorithm, and can measure each gas concentration with fuzzy rules. In addition, it can navigate to a desired position with its vision system while avoiding obstacles, and it can transmit odor information and warning messages from its own operations to other nodes by multi-hop communication in a wireless sensor network. We suggest a method of odor classification, concentration measurement, and source tracking for a mobile robot in a wireless sensor network using a hybrid algorithm that combines the vision system and gas sensors. Experimental studies prove the efficiency of the proposed algorithm for odor recognition, concentration measurement, and source tracking.
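The fuzzy-rule concentration measurement can be illustrated with a toy rule base. The membership shapes and representative concentrations below are assumptions for illustration, not the paper's tuned rules:

```python
# Illustrative fuzzy-rule sketch: map a normalized gas-sensor
# reading to a concentration estimate with triangular membership
# functions. All shapes and ppm values are assumed.

def tri(x, a, b, c):
    """Triangular membership with peak at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def concentration(reading):
    """Defuzzify by the weighted average of rule outputs."""
    rules = [  # (membership degree, representative concentration, ppm)
        (tri(reading, 0.0, 0.2, 0.5), 10.0),    # "low"
        (tri(reading, 0.2, 0.5, 0.8), 50.0),    # "medium"
        (tri(reading, 0.5, 0.8, 1.0), 100.0),   # "high"
    ]
    total = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / total if total else 0.0

print(concentration(0.5))   # fully "medium" -> 50.0
```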

A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.07d / pp.1977-1978 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but for use in real life. In this research, systems for environment recognition and tele-operation were developed for task assignment and execution by a biped robot, as well as for a human-robot interaction (HRI) system. To carry out tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user were also developed.

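The template matching this abstract builds on can be illustrated in its plain, unenhanced form (the paper's enhancements and the SVM stage are not reproduced; the image and template values are made up): slide the template over the image and keep the offset with the smallest sum of squared differences.

```python
# Minimal sum-of-squared-differences template matching sketch.
# Returns the (row, col) offset where the template fits best.

def match_template(image, template):
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best = (None, float("inf"))
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if ssd < best[1]:
                best = ((r, c), ssd)
    return best[0]

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
]
print(match_template(image, [[9, 8], [7, 9]]))   # -> (1, 1)
```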

Development of Autonomous Loading and Unloading for Network-based Unmanned Forklift (네트워크 기반 무인지게차를 위한 팔레트 자율적재기술의 개발)

  • Park, Jee-Hun;Kim, Min-Hwan;Lee, Suk;Lee, Kyung-Chang
    • Journal of Institute of Control, Robotics and Systems / v.17 no.10 / pp.1051-1058 / 2011
  • Unmanned autonomous forklifts have great potential to enhance the productivity of material handling in various applications because they can pick up and deliver loads without an operator or any fixed guide. In particular, automating pallet loading and unloading is useful for enhancing logistics performance and reducing the cost of an automation system. There are, however, many technical difficulties in developing such forklifts, including localization, map building, sensor fusion, and control. The system requires numerous sensors, actuators, and controllers that must be connected with each other, and the number of connections grows very rapidly with the number of devices. This paper presents vision sensor-based autonomous loading and unloading for a network-based unmanned forklift in which system components are connected to a shared CAN network. Functions such as image processing and the control algorithm are divided into small tasks that are distributed over a number of microcontrollers with limited computing capacity. Experimental results show that the proposed architecture is an appropriate choice for autonomous loading in an unmanned forklift.

UGV Localization using Multi-sensor Fusion based on Federated Filter in Outdoor Environments (야지환경에서 연합형 필터 기반의 다중센서 융합을 이용한 무인지상로봇 위치추정)

  • Choi, Ji-Hoon;Park, Yong Woon;Joo, Sang Hyeon;Shim, Seong Dae;Min, Ji Hong
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.5 / pp.557-564 / 2012
  • This paper presents UGV localization using multi-sensor fusion based on a federated filter in outdoor environments. A conventional GPS/INS integrated system does not guarantee robust localization because GPS is vulnerable to external disturbances. In many environments, however, a vision system is very effective because there are many more features than in open space, and these features provide rich information for UGV localization. This paper therefore uses scene matching and pose estimation-based vision navigation, a magnetic compass, and an odometer to cope with GPS-denied environments. An NR-mode federated filter is used for system safety. Experimental results on a predefined path demonstrate the enhanced robustness and accuracy of localization in outdoor environments.
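At the heart of a federated filter is an information-weighted fusion of the local filters' outputs. A scalar-state sketch with made-up numbers (the paper's actual filter is multi-dimensional and NR-mode):

```python
# Information-weighted fusion of local filter estimates, the
# master-filter step of a federated architecture, reduced to a
# scalar state. All numbers below are illustrative.

def fuse(estimates):
    """estimates = list of (state, variance) from local filters."""
    info = sum(1.0 / var for _, var in estimates)          # total information
    state = sum(x / var for x, var in estimates) / info    # weighted mean
    return state, 1.0 / info                               # fused (x, var)

# Vision-navigation, odometer, and compass-aided position estimates.
x, var = fuse([(10.2, 0.5), (9.8, 0.5), (10.0, 1.0)])
print(round(x, 2), round(var, 2))   # -> 10.0 0.2
```

Note how the fused variance is smaller than any single local filter's, which is the motivation for fusing the sensors at all.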

A Study on the Development of Multi-User Virtual Reality Moving Platform Based on Hybrid Sensing (하이브리드 센싱 기반 다중참여형 가상현실 이동 플랫폼 개발에 관한 연구)

  • Jang, Yong Hun;Chang, Min Hyuk;Jung, Ha Hyoung
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.355-372 / 2021
  • Recently, high-performance HMDs (Head-Mounted Displays) are becoming wireless thanks to the growth of virtual reality technology. Accordingly, environmental constraints on hardware usage are reduced, enabling multiple users to experience virtual reality in a single space simultaneously. Existing multi-user virtual reality platforms track the user's location and motion with vision sensors and active markers. However, immersion decreases because markers overlap or matching errors occur frequently due to reflected light. The goal of this study is to develop a multi-user virtual reality moving platform in a single space that resolves these sensing errors and the resulting loss of immersion. To achieve this goal, a hybrid sensing technology was developed that converges vision sensor technology for position tracking, IMU (Inertial Measurement Unit) motion capture technology, and gesture recognition technology based on smart gloves. In addition, an integrated safety operation system was developed that ensures user safety and supports multimodal feedback without reducing immersion. A 6 m × 6 m × 2.4 m test bed was configured to verify the effectiveness of the platform for four users.

Indoor Positioning System using Incident Angle Detection of Infrared sensor (적외선 센서의 입사각을 이용한 실내 위치인식 시스템)

  • Kim, Su-Yong;Choi, Ju-Yong;Lee, Man-Hyung
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.991-996 / 2010
  • In this paper, a new indoor positioning system based on incident angle measurement of an infrared sensor is suggested. Although there has been much research on indoor positioning systems using vision or ultrasonic sensors, each approach has both advantages and disadvantages. In the new positioning system, three infrared emitters are placed at fixed, known positions. An incident angle sensor measures the angle differences between each pair of emitters. The mathematical problem of determining the position from these angle differences and the emitters' positions has been solved. Simulations and experiments were implemented to show the performance of the new positioning system. The simulation results were good. Because of noise and signal-conditioning problems, the experiments were conducted in a limited area, but the results were acceptable. This positioning method can be applied to any indoor system that needs absolute position information.
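The geometry behind this system is the classical resection problem. The paper solves it mathematically; the toy version below instead recovers the position by a coarse grid search over an assumed emitter layout, which makes the idea easy to check:

```python
import math

# Toy reconstruction of angle-difference positioning (not the
# paper's closed-form solution): the receiver measures the angle
# differences between pairs of emitters at known positions, and a
# grid search finds the position whose predicted differences fit.

EMITTERS = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]   # assumed layout, metres

def angle_diffs(p):
    bearings = [math.atan2(ey - p[1], ex - p[0]) for ex, ey in EMITTERS]
    # Wrap each pairwise difference into (-pi, pi].
    return [(bearings[i] - bearings[j] + math.pi) % (2 * math.pi) - math.pi
            for i, j in ((0, 1), (1, 2))]

def locate(measured, step=0.05):
    best, best_err = None, float("inf")
    for i in range(1, 80):
        for j in range(1, 60):
            p = (i * step, j * step)
            err = sum((a - b) ** 2 for a, b in zip(angle_diffs(p), measured))
            if err < best_err:
                best, best_err = p, err
    return best

truth = (1.5, 1.0)
x, y = locate(angle_diffs(truth))
print(round(x, 2), round(y, 2))   # close to (1.5, 1.0)
```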

Simultaneous Localization and Mobile Robot Navigation using a Sensor Network

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.2 / pp.161-166 / 2006
  • Localization of a mobile agent within a sensor network is a fundamental requirement for many applications using networked navigation systems such as sonar-sensing or visual-sensing systems. To fully utilize the strengths of both sonar and visual sensing, this paper describes a networked sensor-based navigation method for an autonomous mobile robot that can navigate and avoid obstacles in an indoor environment. In this method, the self-localization of the robot is done with a model-based vision system using networked sensors, and nonstop navigation is realized by a Kalman filter-based STSF (Space and Time Sensor Fusion) method. Stationary and moving obstacles are avoided with networked sensor data such as the CCD camera and sonar ring. We report on experiments in a hallway using the Pioneer-DX robot. Localization also has inevitable uncertainties in the features and in the robot position estimate, so a Kalman filter scheme is used to estimate the mobile robot's localization. Extensive experiments with a robot and a sensor network confirm the validity of the approach.
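The Kalman filter scheme used for the position estimate can be illustrated with a one-dimensional predict/update cycle. The process and measurement noise values below are assumptions, not the paper's tuning:

```python
# One-dimensional Kalman filter sketch of the localization step:
# predict with an odometry motion, then correct with a range fix
# from the networked vision/sonar sensors. Noise values assumed.

def kalman_step(x, p, u, z, q=0.1, r=0.5):
    """x, p: state and variance; u: odometry motion; z: measurement."""
    # Predict: move by u, grow uncertainty by process noise q.
    x_pred, p_pred = x + u, p + q
    # Update: blend in measurement z with Kalman gain k.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                          # initial estimate and variance
x, p = kalman_step(x, p, u=1.0, z=1.2)   # move 1 m, sensor reads 1.2 m
print(round(x, 3), round(p, 3))
```

Repeating this cycle every step keeps the estimate between the drifting odometry and the noisy external fixes, which is what enables the nonstop navigation described above.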