• Title/Summary/Keyword: Robot Vision Control Algorithm


Development of Precise Localization System for Autonomous Mobile Robots using Multiple Ultrasonic Transmitters and Receivers in Indoor Environments (다수의 초음파 송수신기를 이용한 이동 로봇의 정밀 실내 위치인식 시스템의 개발)

  • Kim, Yong-Hwi;Song, Ui-Kyu;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems / v.17 no.4 / pp.353-361 / 2011
  • A precise embedded ultrasonic localization system is developed for autonomous mobile robots in indoor environments, which is essential for autonomous navigation of mobile robots performing various tasks. Although ultrasonic sensors are more cost-effective than other sensors such as an LRF (Laser Range Finder) or vision, they suffer from inaccuracy and directional ambiguity. First, we apply a matched filter to measure distance precisely. To resolve the computational complexity of the matched filter on embedded systems, we propose a new matched filter algorithm whose computation is accelerated in three respects. Second, we propose an accurate ultrasonic localization system consisting of three ultrasonic receivers on the mobile robot and two or more transmitters on the ceiling. Last, we add an extended Kalman filter to estimate position and orientation. Various simulations and experimental results show the effectiveness of the proposed system.
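
The core measurement step described above can be illustrated with a minimal sketch: a matched filter locates the transmitted burst in the received signal by cross-correlation, and the peak lag gives the time of flight. This is only the baseline correlation (the paper's contribution is a fast variant of it), and the 40 kHz burst, sampling rate, and speed of sound are illustrative assumptions.

```python
import numpy as np

def matched_filter_distance(received, template, fs, speed_of_sound=343.0):
    """Estimate one-way distance from the time of flight found at the
    cross-correlation peak between the received signal and the template."""
    corr = np.correlate(received, template, mode="valid")
    tof_samples = int(np.argmax(np.abs(corr)))  # lag of the best match
    tof = tof_samples / fs                      # seconds
    return speed_of_sound * tof                 # metres

# Synthetic example: a 40 kHz burst delayed by the flight time for 1.0 m.
fs = 1_000_000                                  # 1 MHz sampling
t = np.arange(0, 0.5e-3, 1 / fs)
template = np.sin(2 * np.pi * 40_000 * t)
delay = int(round((1.0 / 343.0) * fs))
received = np.zeros(delay + len(template) + 1000)
received[delay:delay + len(template)] = template
print(matched_filter_distance(received, template, fs))  # ≈ 1.0 m
```

With ceiling-mounted transmitters and robot-mounted receivers, the one-way distance is what the localization system triangulates from.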

Terrain Feature Extraction and Classification using Contact Sensor Data (접촉식 센서 데이터를 이용한 지질 특성 추출 및 지질 분류)

  • Park, Byoung-Gon;Kim, Ja-Young;Lee, Ji-Hong
    • The Journal of Korea Robotics Society / v.7 no.3 / pp.171-181 / 2012
  • Outdoor mobile robots face various terrain types with different characteristics. To run safely and carry out their missions, a mobile robot should recognize the terrain type and its physical and geometric characteristics, and control its motion appropriately for each terrain. One way to determine the terrain type is to use non-contact sensor data such as vision and laser sensors; another is to use contact sensor data, such as the slope of the body, vibration, and motor current, which reflect the reaction from the ground to the tires. In this paper, we present experimental results on terrain classification using contact sensor data. We built a mobile robot for collecting contact sensor data and gathered data from four experimental terrains. Through analysis of the collected data, we suggest a new terrain feature extraction method that considers physical characteristics, and confirm that the proposed method can classify the four terrains. We also confirm, using a back-propagation learning algorithm, that the proposed method and the terrain feature extraction method based on the Fast Fourier Transform (FFT), typically used in previous studies, have similar classification performance. However, the two methods differ in the amount of data that carries terrain feature information, so we define an index, determined by the amount of terrain feature information and the classification error rate, that evaluates classification efficiency. Comparing the two methods with this index shows that our method is more efficient than the existing one.
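
The FFT-based baseline mentioned above can be sketched as band-energy features taken from a window of vibration (contact-sensor) samples. The band count, window length, and the two toy "terrain" signals below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fft_band_features(signal, n_bands=8):
    """Split the one-sided magnitude spectrum into equal bands and
    return the mean energy of each band as a feature vector."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    bands = np.array_split(spectrum, n_bands)
    return np.array([np.mean(b ** 2) for b in bands])

# Two toy "terrains": low-frequency rumble vs. high-frequency vibration.
fs = 200.0
t = np.arange(0, 2.0, 1 / fs)
smooth = np.sin(2 * np.pi * 3 * t)    # e.g. a smooth surface
rough = np.sin(2 * np.pi * 60 * t)    # e.g. a rough surface
f_smooth = fft_band_features(smooth)
f_rough = fft_band_features(rough)
print(np.argmax(f_smooth), np.argmax(f_rough))  # energy lands in different bands
```

Feature vectors like these would then be fed to the back-propagation classifier for terrain labeling.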

Distance Measurement of the Multi Moving Objects using Parallel Stereo Camera in the Video Monitoring System (영상감시 시스템에서 평행식 스테레오 카메라를 이용한 다중 이동물체의 거리측정)

  • 김수인;이재수;손영우
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.18 no.1 / pp.137-145 / 2004
  • In this paper, a new algorithm for segmenting multiple moving objects in 3D space and a method for measuring the distance from the camera to each moving object using a stereo video monitoring system are proposed. Left and right input images are obtained from the stereo video monitoring system, and the regions of the moving objects are segmented using adaptive thresholding and a pixel recursive algorithm (PRA). Each segmented object is bounded by a window mask, from which its coordinates and stereo disparity are obtained. The distance to each object is then calculated from this disparity, the geometry of the stereo vision system, and trigonometry. Experimental results show a distance measurement error within 7.28%; the proposed algorithm is therefore applicable to stereo security systems, autonomous mobile robot systems, and stereo remote control systems.
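
For a parallel stereo rig like the one described, the distance computation reduces to the standard relation Z = f·B/d, with focal length f in pixels, baseline B, and disparity d. The sketch below shows only this final step (the segmentation is not reproduced), and the numeric values are illustrative.

```python
def stereo_distance(disparity_px, focal_px, baseline_m):
    """Distance to an object from its stereo disparity (parallel cameras):
    Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 12 cm baseline, 32 px disparity.
print(stereo_distance(32, 800.0, 0.12))  # 3.0 metres
```

Larger disparities correspond to nearer objects, which is why disparity must be matched per object window before this step.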

Hand Gesture Recognition using Multivariate Fuzzy Decision Tree and User Adaptation (다변량 퍼지 의사결정트리와 사용자 적응을 이용한 손동작 인식)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.81-90 / 2008
  • With the increasing demand for services for the disabled and the elderly, assistive technologies have developed rapidly. Natural human signals such as voice or gesture have been applied to systems assisting the disabled and the elderly. As an example of such a human-robot interface, the Soft Remote Control System has been developed by HWRS-ERC at KAIST [1]. This system is a vision-based hand gesture recognition system for controlling home appliances such as a television, lamp, and curtain. One of its most important technologies is the hand gesture recognition algorithm. Two frequently occurring problems that lower the recognition rate of hand gestures are inter-person variation and intra-person variation. Intra-person variation can be handled by introducing fuzzy concepts. In this paper, we propose a multivariate fuzzy decision tree (MFDT) learning and classification algorithm for hand motion recognition. To recognize the hand gestures of a new user, the most suitable recognition model among several well-trained models is selected by a model selection algorithm and incrementally adapted to the user's hand gestures. To show the general performance of MFDT as a classifier, we report classification rates on benchmark data from the UCI repository. For hand gesture recognition performance, we tested on hand gesture data collected from 10 people over 15 days. The experimental results show that the classification and user adaptation performance of the proposed algorithm is better than that of a general fuzzy decision tree.
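
The fuzzy idea behind a fuzzy decision tree can be sketched as follows: instead of a crisp split, each branch of a node assigns a membership degree in [0, 1], so a sample near a decision boundary follows several branches with partial weight. The triangular memberships and branch parameters below are a toy illustration, not the paper's MFDT learning algorithm.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: rises from a to the peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_node(x, triangles):
    """Normalized membership of value x in each branch of a fuzzy split."""
    mu = np.array([triangular(x, *t) for t in triangles])
    s = mu.sum()
    return mu / s if s > 0 else mu

# Overlapping "low" and "high" branches for one attribute; a sample in
# the overlap region follows both branches with partial membership.
branches = [(0.0, 0.2, 0.6), (0.4, 0.8, 1.0)]
print(fuzzy_node(0.5, branches))  # [0.5 0.5] -- shared between branches
```

This soft branching is what makes the tree tolerant to intra-person variation: small shifts in a measured attribute change memberships gradually rather than flipping the classification.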

Robust 3D visual tracking for moving object using pan/tilt stereo cameras (Pan/Tilt스테레오 카메라를 이용한 이동 물체의 강건한 시각추적)

  • Cho, Che-Seung;Chung, Byeong-Mook;Choi, In-Su;Nho, Sang-Hyun;Lim, Yoon-Kyu
    • Journal of the Korean Society for Precision Engineering / v.22 no.9 s.174 / pp.77-84 / 2005
  • In most vision applications, we frequently need to determine the position of an object continuously. Generally, two intertwined processes are needed for target tracking: a tracking process and a control process. Each can be studied independently, but for an actual implementation we must consider the interaction between them to achieve robust performance. In this paper, robust real-time visual tracking against a complex background is considered. A common approach to increasing the robustness of a tracking system is to use known geometric models (CAD models, etc.) or to attach a marker. For the case where an object has an arbitrary shape or it is difficult to attach a marker to it, we present a method to track the target easily by specifying the color and shape of a part of the object in advance. Robust detection is achieved by integrating voting-based visual cues. A Kalman filter is used to estimate the motion of the moving object in 3D space, and the algorithm is tested on a pan/tilt robot system. Experimental results show that fusing cues with motion estimation gives the tracking system robust performance.
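
The motion-estimation step can be sketched as a constant-velocity Kalman filter over 3D position, predicting where the target will be between frames. The state layout, time step, and noise covariances below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def kalman_cv_step(x, P, z, dt, q=1e-2, r=1e-1):
    """One predict/update cycle. State x = [px, py, pz, vx, vy, vz]."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                     # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we observe position only
    Q = q * np.eye(6)
    R = r * np.eye(3)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured 3D position z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Track a target moving at 1 m/s along x with noiseless measurements.
x, P = np.zeros(6), np.eye(6)
for k in range(1, 50):
    x, P = kalman_cv_step(x, P, np.array([0.1 * k, 0.0, 0.0]), dt=0.1)
print(np.round(x[3], 2))  # estimated vx approaches the true 1.0 m/s
```

In the tracking system, the predicted position would both gate the cue-voting search region and drive the pan/tilt controller.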

A Relative Depth Estimation Algorithm Using Focus Measure (초점정보를 이용한 패턴간의 상대적 깊이 추정알고리즘 개발)

  • Jeong, Ji-Seok;Lee, Dae-Jong;Shin, Yong-Nyuo;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.527-532 / 2013
  • Depth estimation is an essential factor in robot vision, 3D scene modeling, and motion control. Depth-from-focus methods are based on focus values calculated over a series of images taken by a single camera at different distances between the lens and the object. In this paper, we propose a relative depth estimation method using a focus measure. The proposed method computes a focus value for each image obtained at a different lens position, and depth is then estimated by considering the relative distance of two patterns. We performed various experiments on effective focus measures for depth estimation using various patterns and demonstrated their usefulness.
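
The depth-from-focus principle above can be sketched with one common focus measure, the variance of the image Laplacian, which peaks at the lens position where the pattern is sharpest. The checkerboard test pattern and box-blur stack below are illustrative assumptions, not the paper's specific measure or data.

```python
import numpy as np

def focus_measure(img):
    """Variance of a discrete Laplacian -- high when the image is sharp."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def depth_index(image_stack):
    """Index of the sharpest image in a focus stack of lens positions."""
    return int(np.argmax([focus_measure(im) for im in image_stack]))

# Toy stack: a checkerboard blurred by different amounts; index 0 is sharp.
sharp = np.indices((32, 32)).sum(0) % 2 * 1.0

def blur(im, k):
    for _ in range(k):
        im = (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
              + np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 5
    return im

stack = [blur(sharp.copy(), k) for k in (0, 2, 5)]
print(depth_index(stack))  # 0 -- the unblurred image is in best focus
```

Comparing the best-focus lens positions of two patterns then gives their relative depth.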

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3136-3150 / 2015
  • Vision-based 3D tracking of the articulated human hand is one of the major issues in human-computer interaction and in understanding the control of a robot hand. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between the actual hand observed by the Kinect and a hypothesized 3D hand model. Since a 3D hand pose has 23 degrees of freedom, hand articulation tracking incurs an excessive computational burden in minimizing the 3D shape discrepancy between an observed hand and a 3D hand model. To address this, we first created a 3D hand model that represents the hand with 17 different parts. Second, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model; it was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Through experiments, we show that the proposed method improves hand part recognition rates and runs at 20-30 fps. The results confirm its practical use in classifying the hand area, and the method successfully tracked and recovered the 3D hand pose in real time.
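
A Haar-like feature of the kind mentioned above is the difference of two rectangle sums, computed in O(1) from an integral image. The sketch below shows one such two-rectangle feature on a toy depth image; the rectangle layout and the test image are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows then columns: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] in O(1) using the integral image."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(img, top, left, h, w):
    """Left-minus-right two-rectangle Haar-like feature of size h x 2w."""
    ii = integral_image(img)
    return rect_sum(ii, top, left, h, w) - rect_sum(ii, top, left + w, h, w)

depth = np.zeros((8, 8))
depth[:, :4] = 1.0                       # near surface on the left half
print(haar_two_rect(depth, 2, 0, 4, 4))  # 16.0: strong left-right depth contrast
```

Responses like this, pooled over many rectangle layouts, are what the trained Random Forest uses instead of per-pixel classification.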

Multi-sensor Intelligent Robot (멀티센서 스마트 로보트)

  • Jang, Jong-Hwan;Kim, Yong-Ho
    • The Journal of Natural Sciences / v.5 no.1 / pp.87-93 / 1992
  • A robotically assisted field material handling system designed for loading and unloading a planar pallet with a forklift in an unstructured field environment is presented. The system uses combined acoustic/visual sensing data to determine the position and orientation of the pallet and to locate its two slots, so that the forklift can move close to a slot and engage it for transport. To reduce the complexity of the material handling operation, we have developed a method based on integrating 2-D range data from a Polaroid ultrasonic sensor with 2-D visual data from an optical camera. Data obtained from the two separate sources complement each other and are used in an efficient algorithm to control this robotically assisted field material handling system. Range data obtained from two linear scans are used to determine the pan and tilt angles of the pallet using the least mean squares method. Then, 2-D visual data are used to determine the swing angle and engagement location of the pallet using edge detection and Hough transform techniques. The limitations on the pan and tilt orientations that can be determined are discussed. The developed system is evaluated through hardware and software implementation, and the experimental results are presented.
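
The range-data step can be sketched as a least-squares fit of a plane z = ax + by + c to the ultrasonic range points, after which the pallet's pan and tilt angles follow from the plane's slopes. The synthetic points below stand in for the two linear scans; the 10° tilt is an illustrative assumption.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through Nx3 points.
    Returns the coefficients (a, b, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef

# Pallet face tilted 10 degrees about the y-axis: z = tan(10 deg) * x.
a_true = np.tan(np.radians(10.0))
xy = np.random.default_rng(1).uniform(-0.5, 0.5, (50, 2))
pts = np.c_[xy, a_true * xy[:, 0]]
a, b, c = fit_plane(pts)
print(round(np.degrees(np.arctan(a)), 1))  # ≈ 10.0 degrees of tilt
```

The swing angle, which this plane fit cannot recover, is what the visual edge detection and Hough transform supply.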

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk;Hong, Kwang-Jin;Jung, Kee-Chul
    • Journal of KIISE: Software and Applications / v.36 no.10 / pp.836-850 / 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, the performing arts, and robot control, so many researchers in pattern recognition and computer vision have recently worked on efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the feature extraction part, the first step of a general posture recognition system, is influenced by these variations. To alleviate these variations and improve posture recognition results, this paper presents 3D Star Skeleton and Principal Component Analysis (PCA) based feature extraction methods for a multi-view environment. The proposed system uses eight projection maps, a kind of depth map, as input data; the projection maps are extracted during the visual hull generation process. From these data, the system constructs the 3D Star Skeleton and extracts a rotation-invariant feature using PCA. In the experiments, we extract the feature from the 3D Star Skeleton, recognize the human posture using it, and show that the proposed method is robust to human variations.
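
The PCA step above can be sketched as projecting high-dimensional skeleton feature vectors onto their leading principal components. The 3D Star Skeleton construction itself is not reproduced; random vectors and the dimensions below are illustrative stand-ins for the extracted skeleton features.

```python
import numpy as np

def pca_features(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the sample covariance matrix
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # k largest eigenvectors
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 24))       # e.g. 100 postures, 24-D skeleton vectors
F = pca_features(X, k=5)
print(F.shape)  # (100, 5): compact features for the posture classifier
```

Projecting onto directions of maximal variance is what discards the rotation- and translation-dependent components of the raw skeleton description.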