• Title/Summary/Keyword: Visual sensor


Simulation of Mobile Robot Navigation based on Multi-Sensor Data Fusion by Probabilistic Model

  • Jin, Tae-seok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.21 no.4
    • /
    • pp.167-174
    • /
    • 2018
  • Exploration of unknown environments is an important task in the development of mobile robots, which are navigated by a number of methods using systems such as sonar sensing or visual sensing. To fully exploit the strengths of both the sonar and visual sensing systems, multi-sensor data fusion (MSDF) has become a useful method for navigation and collision avoidance in mobile robotics, and its applicability to map building and navigation has been exploited in recent years. This work is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot that transports trolleys or heavy goods and serves as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining ultrasonic and IR sensors, for mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously within indoor environments. Simulation results with a mobile robot demonstrate the effectiveness of the discussed methods.
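
A minimal sketch of the probabilistic fusion step behind MSDF, assuming independent Gaussian range errors and inverse-variance (maximum-likelihood) weighting; the function name and noise figures are illustrative, not taken from the paper:

```python
def fuse_ranges(z_sonar, var_sonar, z_ir, var_ir):
    """Fuse two independent range measurements of the same obstacle by
    inverse-variance weighting; the fused estimate is more certain
    (smaller variance) than either sensor alone."""
    w_sonar = 1.0 / var_sonar
    w_ir = 1.0 / var_ir
    fused = (w_sonar * z_sonar + w_ir * z_ir) / (w_sonar + w_ir)
    fused_var = 1.0 / (w_sonar + w_ir)
    return fused, fused_var
```

With equal variances the fused range is simply the mean of the two readings, and the fused variance is halved.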

A Study on Taekwondo Training System using Hybrid Sensing Technique

  • Kwon, Doo Young
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.12
    • /
    • pp.1439-1445
    • /
    • 2013
  • We present a Taekwondo training system using a hybrid sensing technique that combines a body sensor and a visual sensor. Using the body sensor (an accelerometer), rotational and inertial motion data are captured, which are important for Taekwondo motion detection and evaluation. The visual sensor (a camera) captures and records the sequential images of the performance. Motion chunks are proposed to structure Taekwondo motions, and an HMM (Hidden Markov Model) is designed for motion recognition. Trainees can evaluate their trial motions numerically by computing the distance to the standard motion performed by a trainer. For the motion training video, the real-time video images captured by the camera are overlaid with the visualized body sensor data so that users can see how the rotational and inertial motion data flow.
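
The numeric evaluation the abstract describes (distance of a trial motion to a trainer's standard motion) can be sketched with dynamic time warping, a common choice for comparing motion signals of unequal length; the paper itself uses HMMs for recognition, so this is only an illustrative stand-in:

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D motion signals
    (e.g. accelerometer magnitudes); tolerant of tempo differences
    between a trainee's trial and the trainer's standard motion."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # extend the cheapest of the three admissible alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A lower score means the trial tracks the standard motion more closely; identical sequences score zero.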

Development of Underground Displacement and Convergence Auto-Measuring Program for the Tunnel Using the Fiber Optic Sensor (광섬유 센서를 이용한 터널 지중 및 내공변위 자동계측 프로그램 개발)

  • Choi, Myong-Ho;Yoon, Ji-Son;Kwon, Oh-Duk;Kwon, Oh-Jun
    • Proceedings of the Korean Geotechnical Society Conference
    • /
    • 2005.03a
    • /
    • pp.1361-1368
    • /
    • 2005
  • In this paper, a theoretical method of measuring tunnel convergence and underground displacement, objective indices for assessing safety during tunnel construction, using fiber optic sensors is studied by developing a program to measure them automatically. A model test of a concrete beam is conducted to evaluate the reliability of the fiber optic sensor. Furthermore, using the RS232 communication protocol along with the programming tools Visual C# and Visual C++, a program was developed to automatically read the measured values of the fiber optic sensor, calculate the tunnel convergence and underground displacement, predict the deformed shape of the tunnel, and evaluate the loosening zone caused by the tunnel excavation.


Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.3
    • /
    • pp.107-111
    • /
    • 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor is limited by the price of such expensive sensors. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we utilize a commercial visual-inertial odometry sensor to estimate the current position and attitude. Based on these state values, the 2D LiDAR measurements describe the surrounding environment to create a point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. The results confirm that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper.
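
The core mapping step, projecting each 2D LiDAR return into the world frame using the VIO pose, can be sketched as follows; this simplified version assumes a planar (x, y, z, yaw) pose and ignores roll, pitch, and the LiDAR-to-VIO extrinsic calibration, all of which a real system must handle:

```python
import math

def scan_to_world(pose, scan):
    """Project a 2-D LiDAR scan into the world frame using a pose from
    a visual-inertial odometry sensor.
    pose: (x, y, z, yaw); scan: list of (range, bearing) pairs.
    The z coordinate comes from the VIO height estimate, which is what
    lets a 2-D scanner accumulate a 3-D point cloud as the robot moves."""
    x, y, z, yaw = pose
    points = []
    for r, bearing in scan:
        a = yaw + bearing  # beam direction in the world frame
        points.append((x + r * math.cos(a), y + r * math.sin(a), z))
    return points
```

Appending the output of every scan over the trajectory yields the 3D point cloud map.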

Contrast Enhancement Method for Images from Visual Sensors (비주얼 센서 영상에 대한 대비 개선 방법)

  • Park, Sang-Hyun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.3
    • /
    • pp.525-532
    • /
    • 2018
  • Recently, owing to advances in sensor network and camera technologies, there is an increasing need to effectively monitor hard-to-access regions using visual sensor networks that combine the two technologies. Since the image captured by a visual sensor reflects natural phenomena as they are, image quality may deteriorate depending on the weather or time of day. In this paper, we propose an algorithm to improve the contrast of images using the characteristics of images obtained from visual sensors. In the proposed method, we first set a region of interest and then analyze the change in its color values according to the brightness of the image. The contrast of an image is improved by using a high-contrast image of the same object together with this analysis information. Experimental results show that the proposed method improves the contrast of an image by restoring the color components of the low-contrast image simply and accurately.

Path Planning for a Moving Robot using Ultrasonic Sensors (초음파 센서를 이용한 이동로봇의 경로 계획)

  • Cha, Kyung-Hwan;Shin, Hyun-Shil;Hwang, Gi-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.8 no.1
    • /
    • pp.78-83
    • /
    • 2007
  • A robot collects surrounding information to recognize an unknown environment using various sensors, such as visual, infrared, and ultrasonic sensors. Although the visual sensor is the most popular, it has difficulty collecting data in dark or overly bright environments because of its sensitivity to light, and it requires a significant amount of computation to extract features such as markers, straight lines, and curves from images. As an alternative, an ultrasonic sensor can simply overcome these flaws of the visual sensing system and is easy to use; in particular, it collects data on objects and distances more easily in dark environments. The ultrasonic sensor can replace the expensive visual sensing system not only for avoiding obstacles but also for reaching the target area smoothly. The purpose of this paper is to develop an algorithm that optimizes environmental recognition, path planning, and free-ranging by minimizing errors caused by inaccurate information and by considering characteristics of ultrasonic waves such as refraction and diffusion. This paper also realizes a system that recognizes the environment and produces appropriate path plans by applying the algorithm to a mobile robot.
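
A minimal sketch of grid-based path planning over an occupancy map built from ultrasonic readings; the paper's algorithm additionally compensates for refraction and diffusion of the ultrasonic waves, which this plain breadth-first search ignores:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first shortest-path search on an occupancy grid
    (1 = obstacle sensed by sonar, 0 = free).  Returns the cell
    sequence from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:        # reconstruct the path by backtracking
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

The grid itself would be filled in from sonar range readings before planning; BFS then guarantees a shortest obstacle-free route on the grid.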


A Design of an LED Sensor Luminaire for Visual Function Improvement (시각적 기능개선을 위한 LED 센서 등기구 설계)

  • Seo, Jung-Nam;Yu, Yong-Su;Yeo, In-Seon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.1
    • /
    • pp.134-137
    • /
    • 2010
  • An LED sensor luminaire for visual function improvement requires a control algorithm for light-level adjustment and an appropriate lens design. The control algorithm adapts to surrounding lighting conditions and thus has the advantages of energy saving and glare reduction. The multi-cell lens design improves the color temperature uniformity and spatial light distribution of the luminaire. Experimental and simulated results show that this approach contributes noticeably to the energy saving and color temperature uniformity of the LED sensor luminaire.
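
A hypothetical sketch of the kind of ambient-adaptive dimming the abstract describes; the target illuminance, the linear dimming law, and the function name are assumptions for illustration, not the paper's algorithm:

```python
def led_level(ambient_lux, target_lux, max_level=100):
    """Ambient-adaptive dimming: drive the LED only hard enough to top
    up the ambient light to the target illuminance, which is where the
    energy saving and glare reduction come from.  Assumes a linear
    relation between drive level and delivered illuminance."""
    deficit = max(0.0, target_lux - ambient_lux)
    return min(max_level, round(max_level * deficit / target_lux))
```

In bright surroundings the deficit is zero and the LED stays off; in darkness it runs at full level.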

Hand/Eye Calibration of Robot Arms with a 3D Visual Sensing System (3차원 시각 센서를 탑재한 로봇의 Hand/Eye 캘리브레이션)

  • 김민영;노영준;조형석;김재훈
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2000.10a
    • /
    • pp.76-76
    • /
    • 2000
  • The calibration of a robot system with a visual sensor consists of robot, hand-to-eye, and sensor calibration. This paper describes a new technique for computing the 3D position and orientation of a 3D sensor system relative to the end effector of a robot manipulator in an eye-on-hand configuration. When the 3D coordinates of the feature points at each robot movement and the relative robot motion between two movements are known, a homogeneous equation of the form AX = XB is derived. To solve for X uniquely, it is necessary to make two robot arm movements and form a system of two equations of the form A$_1$X = XB$_1$ and A$_2$X = XB$_2$. A closed-form solution to this system of equations is developed, and the constraints for the existence of a solution are described in detail. Test results from a series of simulations show that this technique is simple, efficient, and accurate for hand/eye calibration.
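
The rotation part of the two-motion closed form can be sketched as follows. Since each pair satisfies A$_i$X = XB$_i$, the rotation part of A$_i$ is that of B$_i$ conjugated by X, so the rotation axis of A$_i$ is X applied to the axis of B$_i$; two independent axis pairs (plus their cross product) then determine X. This is a sketch assuming NumPy and rotation angles away from 0 and π, not the paper's exact derivation:

```python
import numpy as np

def rotation_axis(r):
    """Unit rotation axis of a rotation matrix (valid when the
    rotation angle is neither 0 nor pi)."""
    v = np.array([r[2, 1] - r[1, 2], r[0, 2] - r[2, 0], r[1, 0] - r[0, 1]])
    return v / np.linalg.norm(v)

def hand_eye_rotation(ra1, rb1, ra2, rb2):
    """Closed-form rotation X from two motion pairs with RA_i X = X RB_i.
    Stacking the two axis pairs and their cross products into frames
    Ma and Mb gives X Mb = Ma, hence X = Ma Mb^{-1}."""
    ka1, ka2 = rotation_axis(ra1), rotation_axis(ra2)
    kb1, kb2 = rotation_axis(rb1), rotation_axis(rb2)
    ma = np.column_stack([ka1, ka2, np.cross(ka1, ka2)])
    mb = np.column_stack([kb1, kb2, np.cross(kb1, kb2)])
    return ma @ np.linalg.inv(mb)
```

The solvability constraint is visible here: Mb must be invertible, i.e. the two motions must have non-parallel rotation axes.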


Sensor Based Bridge Monitoring System (센서기반 교량 유지관리 시스템)

  • 장정환;김완종;안호현;이세호;정태영
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2003.11a
    • /
    • pp.602-607
    • /
    • 2003
  • The sensor-based bridge monitoring system (SBBMS) is designed to perform real-time monitoring and to store the performance history of in-service bridges. In general, visual inspections play a major role in the maintenance of in-service bridges; however, they are not adequate for documenting the behavior of a bridge, so visual inspections and sensor-based monitoring systems complement each other. A sensor-based bridge monitoring system consists of hardware and software. The hardware comprises the sensors and data loggers that measure the behavior of a structure, the communication equipment that transmits the measured data from the site to the monitoring center, and the computers that arrange and analyze the data. The software controls the data loggers, arranges and analyzes the measured data, provides a real-time display, and stores the performance history.


Sensor Fusion System for Improving the Recognition Performance of 3D Object (3차원 물체의 인식 성능 향상을 위한 감각 융합 시스템)

  • Kim, Ji-Kyoung;Oh, Yeong-Jae;Chong, Kab-Sung;Wee, Jae-Woo;Lee, Chong-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.107-109
    • /
    • 2004
  • In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information, with a focus on improving 3D object recognition performance. Unlike conventional object recognition systems that use an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor, and a neural network is used to fuse this information. Tactile signals are obtained from the reaction forces measured by pressure sensors at the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiment evaluates the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its high learning performance but also its reliability, since the tactile information allows various objects to be recognized even when the visual information is defective. The experimental results show that the proposed system improves the recognition rate and reduces learning time, verifying the effectiveness of the proposed sensor fusion system as a 3D object recognition scheme.
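
A toy sketch of feature-level fusion, with a nearest-prototype classifier standing in for the paper's neural network; all names, weights, and feature vectors here are illustrative assumptions:

```python
def fuse_features(visual, tactile, w_visual=1.0, w_tactile=1.0):
    """Feature-level fusion: concatenate weighted visual and tactile
    feature vectors into one input for a recognition stage, so a weak
    visual channel can be compensated by the tactile channel."""
    return [w_visual * v for v in visual] + [w_tactile * t for t in tactile]

def classify(fused, prototypes):
    """Nearest-prototype recognition over fused feature vectors
    (a stand-in for the paper's trained neural network)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(fused, prototypes[label]))
```

Even with a degraded visual feature, the tactile component can keep the fused vector closest to the correct object prototype.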
