• Title/Summary/Keyword: Vision-based method

Search Result 1,482

Development of Vision based Autonomous Obstacle Avoidance System for a Humanoid Robot (휴머노이드 로봇을 위한 비전기반 장애물 회피 시스템 개발)

  • Kang, Tae-Koo;Kim, Dong-Won;Park, Gwi-Tae
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.1 / pp.161-166 / 2011
  • This paper addresses a vision-based autonomous walking control system. To handle obstacles that lie beyond the field of view (FOV), we used a 3D panoramic depth image. Moreover, so that a humanoid robot can decide its avoidance direction and walking motion by itself, we proposed vision-based path planning using the 3D panoramic depth image, in which the path and walking motion are chosen according to environmental conditions such as obstacle size and the available avoidance space. The vision-based path planning was applied to a humanoid robot, URIA. The evaluation results show that the proposed method can be effectively applied to decide the avoidance direction and walking motion of a practical humanoid robot.
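
The abstract gives no pseudocode; as a rough illustration of the kind of decision rule it describes (direction and motion chosen from obstacle size and available avoidance space over a panoramic depth image), here is a minimal sketch in which all thresholds are hypothetical:

```python
import numpy as np

def plan_avoidance(depth_panorama, obstacle_thresh_m=1.0, min_gap_bins=40):
    """Toy avoidance-direction rule over a 3D panoramic depth image.

    depth_panorama: 2D array (elevation x azimuth) of depths in meters,
    with the robot's heading at the center azimuth column.
    Threshold values are hypothetical placeholders, not the paper's.
    """
    nearest = depth_panorama.min(axis=0)        # closest depth per azimuth bin
    blocked = nearest < obstacle_thresh_m       # True where an obstacle is near
    if not blocked.any():
        return "walk_forward"

    first = np.argmax(blocked)                  # first blocked azimuth bin
    last = len(blocked) - 1 - np.argmax(blocked[::-1])
    left_gap, right_gap = first, len(blocked) - 1 - last  # free bins per side

    # Side-step toward the wider free space if it is wide enough; else turn.
    if max(left_gap, right_gap) < min_gap_bins:
        return "turn_around"
    return "sidestep_left" if left_gap >= right_gap else "sidestep_right"
```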

Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation (도로의 높낮이 변화와 초목이 존재하는 환경에서의 비전 센서 기반)

  • Lee, Sangjae;Hyun, Jongkil;Kwon, Yeon Soo;Shim, Jae Hoon;Moon, Byungin
    • Journal of Sensor Science and Technology / v.28 no.2 / pp.94-100 / 2019
  • Drivable area detection is a major task in advanced driver assistance systems, and several studies have proposed vision-sensor-based approaches to it. However, conventional vision-sensor-based methods are not suitable for environments with changes in road elevation. In addition, when the boundary between road and vegetation is unclear, vegetation areas may be misjudged as drivable. Therefore, this study proposes an accurate method of detecting drivable areas in environments where the road elevation changes and vegetation exists. Experimental results show that, compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. When the proposed vegetation area removal method is also applied, the average accuracy and recall improve by a further 6.43%p and 9.68%p, respectively.
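
The reported gains are in percentage points (%p) of pixel-wise accuracy and recall. For reference, a minimal sketch of these standard metrics for binary drivable-area masks (the exact evaluation protocol of the KITTI road benchmark may differ in detail):

```python
import numpy as np

def accuracy_recall(pred_mask, gt_mask):
    """Pixel-wise accuracy and recall for binary drivable-area masks.

    pred_mask, gt_mask: boolean arrays of equal shape, True = drivable.
    """
    tp = np.logical_and(pred_mask, gt_mask).sum()    # true positives
    tn = np.logical_and(~pred_mask, ~gt_mask).sum()  # true negatives
    fn = np.logical_and(~pred_mask, gt_mask).sum()   # missed drivable pixels
    accuracy = (tp + tn) / pred_mask.size
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, recall
```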

Object Recognition using Smart Tag and Stereo Vision System on Pan-Tilt Mechanism

  • Kim, Jin-Young;Im, Chang-Jun;Lee, Sang-Won;Lee, Ho-Gil
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.2379-2384 / 2005
  • We propose a novel method for object recognition using a smart tag system together with stereo vision on a pan-tilt mechanism. We developed a smart tag that includes an IRED device; the tag is attached to the object. We also developed a stereo vision system that pans and tilts so that the object image stays centered in each camera view. The stereo vision system on the pan-tilt mechanism can map the IRED position into the robot coordinate system using the pan-tilt angles. Then, to map the size and pose of the object into the robot coordinate system, we used a simple model-based vision algorithm. To increase the feasibility of tag-based object recognition, we implemented our approach with techniques that are as easy and simple as possible.
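
As a rough illustration of the angle-based mapping described above, here is a minimal sketch that converts a pan-tilt sighting of the tag plus a stereo range into robot-frame coordinates. The frame conventions and the assumption that servoing keeps the tag centered in the image are mine, not the paper's:

```python
import numpy as np

def tag_position_in_base(pan_rad, tilt_rad, depth_m):
    """Map a tag sighting to base-frame coordinates from pan-tilt angles.

    Assumes the tag is centered in the image (kept there by the pan-tilt
    servoing), so the viewing ray runs along the optical axis; depth_m is
    the range recovered by the stereo pair. Frames: x forward, z up.
    """
    cp, sp = np.cos(pan_rad), np.sin(pan_rad)
    ct, st = np.cos(tilt_rad), np.sin(tilt_rad)
    # Optical axis after tilting about y, then panning about z.
    direction = np.array([cp * ct, sp * ct, -st])  # unit ray in base frame
    return depth_m * direction                     # tag position (x, y, z)
```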


Development of Robot Vision Control Schemes based on Batch Method for Tracking of Moving Rigid Body Target (강체 이동타겟 추적을 위한 일괄처리방법을 이용한 로봇비젼 제어기법 개발)

  • Kim, Jae-Myung;Choi, Cheol-Woong;Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Process Engineers / v.17 no.5 / pp.161-172 / 2018
  • This paper proposes a robot vision control method for tracking a moving rigid-body target, built on a vision system model that can actively estimate camera parameters even when the relative position between camera and robot, the focal length, or the camera posture changes. The proposed scheme uses a batch method that processes all the vision data acquired at each moving point of the robot, in one of two ways: one gives equal weight to all acquired data; the other gives greater weight to recent data acquired near the target. Finally, using the two proposed schemes, experiments were performed to estimate the position of a moving rigid-body target whose spatial position is unknown and for which only vision data are available. The efficiency of each scheme is evaluated by comparing estimation accuracy across the experiments.
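
A minimal sketch of the two weighting cases over a stacked (batch) linear measurement model; the linear model and the geometric decay factor are illustrative assumptions, not the paper's vision system model:

```python
import numpy as np

def batch_estimate(H, z, recency_weighted=False, decay=0.9):
    """Weighted batch least-squares over all stacked vision measurements.

    H: (n x m) stacked measurement matrix, z: (n,) stacked measurements,
    ordered oldest to newest. Solves x = argmin (z - Hx)^T W (z - Hx).
    Equal weighting uses W = I; recency weighting decays older samples.
    The decay factor is a hypothetical placeholder.
    """
    n = len(z)
    w = decay ** np.arange(n - 1, -1, -1) if recency_weighted else np.ones(n)
    W = np.diag(w)
    # Normal equations: (H^T W H) x = H^T W z
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
```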

Enhancing Occlusion Robustness for Vision-based Construction Worker Detection Using Data Augmentation

  • Kim, Yoojun;Kim, Hyunjun;Sim, Sunghan;Ham, Youngjib
    • International Conference on Construction Engineering and Project Management / 2022.06a / pp.904-911 / 2022
  • Occlusion is one of the most challenging problems for computer-vision-based construction monitoring: because construction scenes are intrinsically dynamic, vision-based technologies inevitably suffer from occlusions. Previous researchers have proposed occlusion-handling methods that leverage prior information from sequential images; however, such methods cannot be employed for construction object detection in non-sequential images. As an alternative, this study proposes a data-augmentation-based framework that enhances detection performance under occlusion. The approach is designed specifically for rebar occlusions, a distinctive type of occlusion that frequently occurs during construction worker detection. In the proposed method, artificial rebars are synthetically generated to emulate possible rebar occlusions on construction sites, which lets the model train on a variety of occluded images and thereby improves detection performance without requiring sequential information. The effectiveness of the method is validated by showing that it outperforms a baseline model trained without augmentation. The outcomes demonstrate the great potential of data augmentation for occlusion handling, which can be readily applied to typical object detectors without changing their architecture.
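
A minimal sketch of the general idea, overlaying rebar-like bars on a training image; the paper's synthetic-rebar generation is surely more elaborate, and every parameter here (spacing, thickness, color) is a hypothetical placeholder:

```python
import numpy as np

def add_synthetic_rebar(image, spacing_px=40, bar_px=4, rng=None):
    """Overlay a rebar-like grid on an image to emulate occlusion.

    image: HxWx3 uint8 array. Returns an augmented copy; the original
    bounding-box labels stay valid because only pixels change.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    offset = int(rng.integers(0, spacing_px))       # randomize grid placement
    color = np.array([60, 60, 70], dtype=np.uint8)  # dark steel-gray bars
    for x in range(offset, out.shape[1], spacing_px):
        out[:, x:x + bar_px] = color                # vertical bars
    for y in range(offset, out.shape[0], spacing_px):
        out[y:y + bar_px, :] = color                # horizontal bars
    return out
```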


Unconscious Personal Recognition Method using Personal Footprint (발자국 정보를 이용한 무의식적 개인 식별 방법)

  • 정진우;김대진;박광현;변증남
    • Proceedings of the IEEK Conference / 2002.06e / pp.137-140 / 2002
  • We introduce a personal identification method that can find a user's ID without any help from the user. Two approaches have been taken to this problem: vision-based and pressure-based. The pressure-based approach has advantages over the vision-based one with respect to illumination, occlusion, and the amount of data. Previous studies on pressure-based personal identification impose restrictions on the user's body posture in order to extract normalized footprints. Since such an approach cannot be extended to unconscious and continuous identification, we propose a more natural method and verify it by experiments.
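
The abstract does not list the footprint features used. Purely for illustration, here is a sketch of common pressure-mat features (contact area, center of pressure, mean pressure) that such a system might extract per footprint frame; the feature set is an assumption:

```python
import numpy as np

def footprint_features(pressure_map, contact_thresh=0.05):
    """Illustrative features from one footprint pressure-mat frame.

    pressure_map: 2D array of pressure values. The threshold and the
    feature set are hypothetical, not taken from the paper.
    """
    contact = pressure_map > contact_thresh
    total = pressure_map[contact].sum()
    if total == 0:                      # no foot on the mat
        return np.zeros(4)
    area = contact.sum()                # contact area in cells
    ys, xs = np.nonzero(contact)
    cop_x = (xs * pressure_map[contact]).sum() / total  # center of pressure
    cop_y = (ys * pressure_map[contact]).sum() / total
    return np.array([area, cop_x, cop_y, total / area])  # + mean pressure
```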


A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors (레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구)

  • Jang, Sung-woo;Kang, Yeon-sik
    • Journal of Institute of Control, Robotics and Systems / v.22 no.8 / pp.633-642 / 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of nearby vehicles in order to increase safety and performance. This paper proposes a radar-vision sensor fusion method to accurately estimate the state of preceding vehicles. In particular, we study how to compensate for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models, here a lateral-compensation mode and a radar-only mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized for data association to improve the reliability of the estimates in a cluttered radar environment. A two-step correction is used in the Kalman filter, which sequentially fuses the radar and vision measurements into a single state estimate. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
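
A minimal sketch of the two-step (sequential) Kalman correction mentioned above, applying the radar update first and the vision update second; the measurement models and noise covariances are placeholders, and the IMM mixing around this step is omitted:

```python
import numpy as np

def two_step_correction(x, P, z_radar, H_r, R_r, z_vision, H_v, R_v):
    """Sequential Kalman correction: radar measurement, then vision.

    x, P: predicted state mean and covariance. H_*, R_*: linear
    measurement matrices and noise covariances (assumed known).
    """
    for z, H, R in ((z_radar, H_r, R_r), (z_vision, H_v, R_v)):
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)             # state update
        P = (np.eye(len(x)) - K @ H) @ P    # covariance update
    return x, P
```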

The improvement of MIRAGE I robot system (MIRAGE I 로봇 시스템의 개선)

  • 한국현;서보익;오세종
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1997.10a / pp.605-607 / 1997
  • According to the way the robots are controlled, the robot systems of the teams participating in MIROSOT can be divided into three categories: the remote-brainless system, the vision-based system, and the robot-based system. The MIRAGE I control system uses the last of these, the robot-based system, in which the host computer with the vision system transmits only the locations of the ball and the robots. With this robot control method, we took part in MIROSOT '96 and MIROSOT '97.


Vision Based Position Control of a Robot Manipulator Using an Elitist Genetic Algorithm (엘리트 유전 알고리즘을 이용한 비젼 기반 로봇의 위치 제어)

  • Park, Kwang-Ho;Kim, Dong-Joon;Kee, Seok-Ho;Kee, Chang-Doo
    • Journal of the Korean Society for Precision Engineering / v.19 no.1 / pp.119-126 / 2002
  • In this paper, we present a new approach based on an elitist genetic algorithm for the task of aligning the position of a robot gripper using CCD cameras. The vision-based control scheme for aligning the gripper with the desired position is driven by image information. The relationship between camera-space locations and robot joint coordinates is estimated using a camera-space parameter model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation. To find the joint angles that bring the manipulator to the target position in image space, we apply an elitist genetic algorithm instead of a nonlinear least-squares method. Since a GA employs parallel search, it performs well on such optimization problems. To improve convergence speed, real-valued coding and geometric constraint conditions are used. Experiments were carried out to demonstrate the effectiveness of vision-based control using an elitist, real-coded genetic algorithm.
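
A minimal sketch of an elitist, real-coded GA of the kind described, minimizing an image-space error over joint angles. The cost function is assumed to come from the camera-space model; the population size, blend crossover, and mutation scale are illustrative choices, not the paper's settings:

```python
import numpy as np

def elitist_ga(cost, bounds, pop=60, gens=200, n_elite=2, sigma=0.05, rng=None):
    """Minimize cost(joint_angles) with an elitist, real-coded GA.

    cost: maps a joint-angle vector to an image-space error.
    bounds: (lo, hi) arrays of joint limits (the geometric constraints).
    """
    rng = rng or np.random.default_rng()
    lo, hi = map(np.asarray, bounds)
    X = rng.uniform(lo, hi, size=(pop, len(lo)))     # initial real-coded population
    for _ in range(gens):
        order = np.argsort([cost(x) for x in X])
        elites = X[order[:n_elite]]                  # elitism: carry best over
        parents = X[order[: pop // 2]]               # fitter half breeds
        idx = rng.integers(0, len(parents), size=(pop - n_elite, 2))
        alpha = rng.random((pop - n_elite, 1))       # blend (arithmetic) crossover
        children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        children += rng.normal(0.0, sigma * (hi - lo), children.shape)  # mutation
        X = np.vstack([elites, np.clip(children, lo, hi)])
    return X[np.argmin([cost(x) for x in X])]        # best joint angles found
```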

A study on Vision based Steering Control for Dual Motor Drive AGV (영상시스템을 이용한 이륜속도차방식 AGV 조향제어)

  • Lee, Hyeon-Ho;Lee, Chang-Goo;Kim, Sung-Jong
    • Proceedings of the KIEE Conference / 2001.07d / pp.2277-2279 / 2001
  • This paper describes a vision-based steering control method for an AGV with a dual-motor (differential) drive. We suggest an algorithm that detects the guideline quickly and exactly enough for real-time vision processing, and that controls the steering by assigning a control point (CP) in the input image. The method was tested on a dual-motor-drive AGV with a single camera in a laboratory environment.
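
A minimal sketch of differential-drive steering from the guideline offset at the control point: the offset is turned into a wheel-speed difference. The proportional gain and base speed are hypothetical placeholders:

```python
def steer_from_line(cp_offset_px, base_speed=0.4, k_p=0.004):
    """Wheel speeds for a dual-motor AGV from the guideline offset.

    cp_offset_px: horizontal distance in pixels between the detected
    guideline and the image center at the control point (CP) row;
    positive means the line lies to the right of center.
    Returns (left_speed, right_speed) for the two drive motors.
    """
    correction = k_p * cp_offset_px      # proportional steering term
    left = base_speed + correction       # speed up left wheel to veer right
    right = base_speed - correction
    return left, right
```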
