• Title/Summary/Keyword: Obstacle Extraction


A study on the proceeding direction and obstacle detection by line edge extraction (직선 Edge 추출에 의한 주행방향 및 장애물 검출에 관한 연구)

  • 정준익;최성구;노도환
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 1996.10b / pp.97-100 / 1996
  • In this paper, we describe an algorithm that estimates the road-following direction using vanishing-point properties and detects obstacles. A method of detecting lane markers in a set of continuous highway images using linear approximation is presented. The algorithm is designed for accurate and robust extraction of this data as well as high processing speed; it also estimates the distance to an obstacle and tracks it. It comprises four parts: lane prediction, lane extraction, road-following parameter estimation, and obstacle detection. High accuracy was shown by quantitative evaluation on simulated images, and both robustness and the practicality of real-time video-rate processing were confirmed through experiments on VTR recordings of real roads.
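The paper itself does not include code; as a rough illustration of the vanishing-point idea described above (fit the dominant straight lane edges and intersect them), a minimal Python sketch using OpenCV might look as follows. The function name, thresholds, and the choice of a probabilistic Hough transform are assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

def estimate_vanishing_point(gray):
    """Rough vanishing-point estimate: detect straight edges with a
    probabilistic Hough transform and intersect the two dominant lines."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    if lines is None or len(lines) < 2:
        return None
    # Keep the two longest segments as left/right lane-edge candidates.
    segs = sorted(lines[:, 0, :],
                  key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]),
                  reverse=True)[:2]
    (x1, y1, x2, y2), (x3, y3, x4, y4) = segs
    # Intersect the two infinite lines in homogeneous coordinates.
    l1 = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
    l2 = np.cross([x3, y3, 1.0], [x4, y4, 1.0])
    p = np.cross(l1, l2)
    if abs(p[2]) < 1e-9:          # parallel lines: no finite intersection
        return None
    return (p[0] / p[2], p[1] / p[2])   # (x, y) of the vanishing point

# vp = estimate_vanishing_point(cv2.imread("road.png", cv2.IMREAD_GRAYSCALE))
```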


A Basic Study of Obstacles Extraction on the Road for the Stability of Self-driving Vehicles (자율주행 차량의 안전성을 위한 도로의 장애물 추출에 대한 기초 연구)

  • Park, Chang min
    • Journal of Platform Technology / v.9 no.2 / pp.46-54 / 2021
  • Recently, interest in the safety of self-driving vehicles has been increasing. Self-driving vehicles have been studied and developed by many universities, research centers, car companies, and companies in other industries around the world since the mid-1980s. In this study, we propose an automatic method for extracting threatening obstacles on the road for self-driving vehicles. A threatening obstacle is defined here as a comparatively large object near the center of the image. First, an input image and its reduced-resolution versions are segmented. Segmented areas are classified as outer or inner areas: an outer area is adjacent to the image boundary, while an inner area is not. Each area is merged with its neighbors when the adjacent areas belong to the same area in the reduced-resolution image. Obstacle areas and non-obstacle areas are then selected from the inner and outer areas, respectively; obstacle areas are representative areas for the obstacle and are chosen using information about area size and location. Together, these areas delineate the threatening obstacle on the road. Based on our experiments, we expect the proposed method to help reduce accidents and casualties in self-driving.
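As an illustration of the inner/outer classification and the size-and-location selection rule sketched in the abstract, the following hypothetical Python snippet operates on an integer label image produced by any segmentation routine. The function names and the scoring rule are assumptions, not the paper's method.

```python
import numpy as np

def split_inner_outer(labels):
    """Classify segmented regions: a region touching the image border is
    'outer'; all others are 'inner' (obstacle candidates in the abstract)."""
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    outer = set(np.unique(border).tolist())
    inner = set(np.unique(labels).tolist()) - outer
    return inner, outer

def pick_obstacle_region(labels, inner):
    """Pick the largest inner region near the image centre as the
    representative obstacle area (a size + location rule)."""
    h, w = labels.shape
    cy, cx = h / 2.0, w / 2.0
    best, best_score = None, -1.0
    for r in inner:
        ys, xs = np.nonzero(labels == r)
        size = ys.size
        dist = np.hypot(ys.mean() - cy, xs.mean() - cx) + 1.0
        score = size / dist              # big and central scores high
        if score > best_score:
            best, best_score = r, score
    return best
```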

A Study on Stable Motion Control of Biped Robot with 18 Joints (18관절 2족보행 로봇의 안정한 모션제어에 관한연구)

  • Park, Youl-Moon;Thu, Le Xuan;Won, Jong-Beom;Park, Sung-Jun;Kim, Yong-Gil
    • Journal of the Korean Society of Industry Convergence / v.17 no.2 / pp.35-41 / 2014
  • This paper describes an obstacle avoidance architecture that lets the robot walk safely in factory and home environments, and presents methods for path planning and obstacle avoidance for a humanoid robot. Obstacle avoidance for a humanoid robot in an unstructured environment is a major challenge, because the robot can easily lose its stability or fall if it hits or steps on an obstacle. We briefly review the overall software architecture, composed of perception, short- and long-term memory, behavior control, and motion control, and emphasize our methods for obstacle detection by plane extraction, occupancy grid mapping, and path planning. A main technological target is to autonomously explore and move around in home environments as well as to communicate with humans.
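The abstract names occupancy grid mapping among the methods; a generic log-odds grid update, not the authors' implementation, could be sketched in Python roughly as below. All names and parameter values are illustrative.

```python
import numpy as np

def update_occupancy_grid(grid, robot_xy=(0, 0), hits=(),
                          log_odds_hit=0.85, log_odds_miss=-0.4):
    """Toy log-odds occupancy-grid update: cells at measured obstacle
    positions become more occupied, cells on the ray to them more free."""
    rx, ry = robot_xy
    for (hx, hy) in hits:                     # hit cells in grid coordinates
        # Free cells along the straight line from robot to hit.
        n = max(abs(hx - rx), abs(hy - ry), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            fx = int(round(rx + t * (hx - rx)))
            fy = int(round(ry + t * (hy - ry)))
            grid[fy, fx] += log_odds_miss
        grid[hy, hx] += log_odds_hit          # occupied evidence at the hit
    return grid

# grid = np.zeros((100, 100))
# update_occupancy_grid(grid, robot_xy=(10, 50), hits=[(60, 50)])
```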

The course estimation of vehicle using vanishing point and obstacle detection (무한원점을 이용한 주행방향 추정과 장애물 검출)

  • 정준익;최성구;노도환
    • Journal of the Korean Institute of Telematics and Electronics S / v.34S no.11 / pp.126-137 / 1997
  • This paper describes an algorithm that estimates the road-following direction and detects obstacles using a monocular vision system. The algorithm estimates the course of the vehicle using vanishing-point properties and detects obstacles by a statistical method. It is composed of four steps: lane prediction, lane extraction, road-following parameter estimation, and obstacle detection, and it is designed for high processing speed and high accuracy. The former is achieved by restricting processing to a small area, called a sub-window, within the lane existence area; the latter is realized by using the connected edge points of the lane. We show that the new method can detect obstacles using a simple statistical method. The practicality of the processing speed, the accuracy of the algorithm, and the proposed obstacle detection method have been demonstrated through experiments applying VTR images of real roads to the algorithm.
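The statistical obstacle test inside a lane sub-window is not specified in the abstract; one plausible reading, shown here only as a hedged sketch, is a mean-intensity deviation test against a learned road model. The function name, window convention, and threshold are hypothetical.

```python
import numpy as np

def obstacle_in_subwindow(gray, win, road_mean, road_std, k=3.0):
    """Simple statistical test in the spirit of the abstract: declare an
    obstacle when the sub-window's mean intensity departs from the learned
    road model by more than k standard deviations."""
    y0, y1, x0, x1 = win                      # sub-window bounds in pixels
    patch = gray[y0:y1, x0:x1].astype(np.float64)
    return abs(patch.mean() - road_mean) > k * road_std

# Example: road model (mean, std) estimated from a known road patch of an
# earlier frame, hypothetical numbers only.
# is_obstacle = obstacle_in_subwindow(frame, (200, 240, 100, 180), 92.0, 6.5)
```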


Obstacle Classification Method using Multi Feature Comparison Based on Single 2D LiDAR (단일 2차원 라이다 기반의 다중 특징 비교를 이용한 장애물 분류 기법)

  • Lee, Moohyun;Hur, Soojung;Park, Yongwan
    • Journal of Institute of Control, Robotics and Systems / v.22 no.4 / pp.253-265 / 2016
  • We propose an obstacle classification method using multiple decision factors and decision sections based on a single 2D LiDAR. The existing obstacle classification method based on a single 2D LiDAR has two specific advantages: accuracy and decreased calculation time. However, it could not classify the obstacle type, so accurate path planning was not possible. To overcome this problem, a method of classifying obstacle type based on width data was proposed, but width data alone was not sufficient for accurate obstacle classification. The proposed algorithm compares decision factors against decision sections to classify the obstacle type. The decision factors and decision sections were determined using the width, the standard deviation of distance, the average normalized intensity, and the standard deviation of normalized intensity. Experiments using a real autonomous vehicle in a real environment showed that calculation time decreased in comparison with the existing 2D LiDAR-based method, demonstrating the possibility of obstacle-type classification using a single 2D LiDAR.
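A hypothetical sketch of the decision-factor / decision-section comparison: compute the four features named in the abstract for one scan cluster and count how many fall inside each class's sections. The class names and numeric ranges below are invented placeholders, not the paper's tuned values.

```python
import numpy as np

# Hypothetical decision sections (min, max) per feature and class.
DECISION_SECTIONS = {
    "pedestrian": {"width": (0.2, 0.9), "dist_std": (0.00, 0.08),
                   "int_mean": (0.30, 0.70), "int_std": (0.00, 0.15)},
    "vehicle":    {"width": (1.4, 2.6), "dist_std": (0.05, 0.40),
                   "int_mean": (0.40, 0.95), "int_std": (0.05, 0.30)},
}

def classify_cluster(points, intensities):
    """Compute the four decision factors for one LiDAR cluster and count how
    many fall inside each class's decision sections; pick the best class."""
    xy = np.asarray(points, dtype=float)          # (N, 2) scan points [m]
    inten = np.asarray(intensities, dtype=float)
    inten = inten / (inten.max() + 1e-9)          # normalized intensity
    feats = {
        "width": np.linalg.norm(xy.max(axis=0) - xy.min(axis=0)),
        "dist_std": np.linalg.norm(xy - xy.mean(axis=0), axis=1).std(),
        "int_mean": inten.mean(),
        "int_std": inten.std(),
    }
    scores = {cls: sum(lo <= feats[f] <= hi for f, (lo, hi) in secs.items())
              for cls, secs in DECISION_SECTIONS.items()}
    return max(scores, key=scores.get), feats
```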

Obstacle Avoidance Algorithm of a Mobile Robot using Image Information (화상 정보를 이용한 이동 로봇의 장애물 회피 알고리즘)

  • Kwon, O-Sang;Lee, Eung-Hyuk;Han, Yong-Hwan;Hong, Seung-Hong
    • Journal of IKEEE / v.2 no.1 s.2 / pp.139-149 / 1998
  • Robot navigation with a single kind of sensor poses several problems. To address this, we propose a system that takes advantage of both a CCD camera and ultrasonic sensors, together with a coordinate extraction algorithm for avoiding obstacles during navigation. We implemented a CCD-based vision system at the front of the vehicle and carried out experiments to verify the validity of the proposed algorithm. The experimental results show that the error rate was lower when the CCD camera was used than when only ultrasonic sensors were used, and that a path avoiding the detected obstacles can be generated from the measured values.
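One simple way to combine the two sensors in the spirit of the abstract (bearing from the CCD image, range from the ultrasonic sensor) is sketched below; the geometry and parameter names are assumptions rather than the authors' coordinate extraction algorithm.

```python
import math

def obstacle_coordinate(px_x, image_width, fov_deg, ultrasonic_range_m):
    """Fuse the two sensors: the image gives the obstacle's bearing (from its
    pixel column), the ultrasonic sensor gives its range; together they give
    a 2D coordinate in the robot frame (x forward, y to the left)."""
    bearing = math.radians((px_x / image_width - 0.5) * fov_deg)
    x = ultrasonic_range_m * math.cos(bearing)    # forward distance
    y = ultrasonic_range_m * math.sin(bearing)    # lateral offset
    return x, y

# e.g. obstacle at column 480 of a 640-px image, 60 deg FOV, 1.2 m away
# print(obstacle_coordinate(480, 640, 60.0, 1.2))
```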


Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill;Lee Eung-Hyuk;Kim In-Young;Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.3 s.309 / pp.8-15 / 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision and optical flow are computationally expensive, so this paper presents a vision-based obstacle detection method that uses only two view images. The method uses a single passive camera and odometry and runs in real time, performing obstacle detection through 3D reconstruction from two views. Processing begins with feature extraction for each input image using Lowe's SIFT (Scale Invariant Feature Transform) and establishes feature correspondences across the input images. Using the extrinsic camera rotation and translation provided by odometry, the 3D positions of these corresponding points are calculated by triangulation; the result is a partial 3D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and detects obstacles within 75 ms.
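The pipeline described (SIFT matching across two views, then triangulation with odometry-supplied rotation and translation) maps closely onto standard OpenCV calls; a compact sketch with illustrative names, not the authors' code, might be:

```python
import numpy as np
import cv2

def reconstruct_obstacle_points(img1, img2, K, R, t):
    """Two-view sparse reconstruction: match SIFT features across the views
    and triangulate with the known camera motion (R, t from odometry) and
    intrinsics K. Returns Nx3 points in the first camera's frame."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2xN
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # first view
    P2 = K @ np.hstack([R, t.reshape(3, 1)])                     # second view
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)                # 4xN homogeneous
    return (X[:3] / X[3]).T

# Points sufficiently above the ground plane can then be flagged as obstacles.
```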

Obstacles modeling method in cluttered environments using satellite images and its application to path planning for USV

  • Shi, Binghua;Su, Yixin;Zhang, Huajun;Liu, Jiawen;Wan, Lili
    • International Journal of Naval Architecture and Ocean Engineering / v.11 no.1 / pp.202-210 / 2019
  • Obstacle modeling is a fundamental and significant issue for path planning and automatic navigation of an Unmanned Surface Vehicle (USV). In this study, we propose a novel obstacle modeling method based on high-resolution satellite images. It involves two main steps: extraction of obstacle features and construction of convex hulls. To extract the obstacle features, a series of operations such as sea-land segmentation, enhancement of obstacle details, and morphological transformations is applied. Furthermore, an efficient algorithm is proposed to model the obstacles as convex hulls, which mainly includes cluster analysis of the obstacle areas and rules for determining edge points. Experimental results demonstrate that the models obtained by the proposed method and by manual modeling have high similarity. As an application, the model is used to find the optimal path for a USV. The study shows that the obstacle modeling method is feasible and can be applied to USV path planning.
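A rough Python/OpenCV sketch of the chain described above (sea-land segmentation, morphological cleanup, one convex hull per obstacle region); the thresholds and structuring element are assumptions, and the paper's cluster analysis and edge-point rules are not reproduced here.

```python
import cv2
import numpy as np

def obstacles_as_convex_hulls(satellite_bgr, min_area_px=200):
    """Sea-land segmentation by Otsu threshold, morphological cleanup, then
    a convex hull for each remaining obstacle region."""
    gray = cv2.cvtColor(satellite_bgr, cv2.COLOR_BGR2GRAY)
    _, land = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    land = cv2.morphologyEx(land, cv2.MORPH_OPEN, kernel)        # remove speckle
    land = cv2.morphologyEx(land, cv2.MORPH_CLOSE, kernel)       # fill small gaps
    contours, _ = cv2.findContours(land, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.convexHull(c) for c in contours
            if cv2.contourArea(c) >= min_area_px]

# hulls = obstacles_as_convex_hulls(cv2.imread("harbor.png"))
```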

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but real-time image processing for mobile robots is possible because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points in fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacle, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and a map is created. The reliability of the mapping algorithm using motion estimation based on fisheye images is confirmed by comparing maps obtained with the proposed algorithm against real maps.
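The motion-estimation step (tracking obstacle feature points with pyramidal Lucas-Kanade optical flow between frames) can be sketched with OpenCV as below; the window size, pyramid depth, and the feature-detection call in the comment are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def track_obstacle_features(prev_gray, curr_gray, prev_pts):
    """Track obstacle candidate points between consecutive frames with
    pyramidal Lucas-Kanade and return per-point motion vectors for a later
    position estimate (e.g. the EKF update described in the abstract)."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    motion = curr_pts[good] - prev_pts[good]          # optical-flow vectors
    return curr_pts[good], motion

# prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
#                                    qualityLevel=0.01, minDistance=7)
```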

3D image processing using laser slit beam and CCD camera (레이저 슬릿빔과 CCD 카메라를 이용한 3차원 영상인식)

  • 김동기;윤광의;강이석
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 1997.10a / pp.40-43 / 1997
  • This paper presents a 3D object recognition method for generating 3D environmental maps or recognizing obstacles for mobile robots. An active light source projects a stripe pattern of light onto the object surface, while the camera observes the projected pattern from an offset viewpoint. The system consists of a laser unit and a camera on a pan/tilt device. A line segment in the 2D camera image implies an object surface plane. Scaling, filtering, edge extraction, object extraction, and line thinning are used to enhance the light-stripe image, and faithful depth information about the object surface is obtained by interpreting the line segments. The performance of the proposed method is demonstrated in detail through experiments on various types of objects. Experimental results show that the method has good positional accuracy, effectively eliminates optical noise in the image, greatly reduces the memory requirement, and greatly cuts down the image processing time for 3D object recognition compared to conventional object recognition.
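Depth from a laser slit and an offset camera reduces to plane-ray triangulation; the toy sketch below assumes a simplified geometry (camera at the origin, laser offset along x), which may differ from the paper's actual setup, and all parameter names are illustrative.

```python
import numpy as np

def slit_beam_depth(u_px, focal_px, baseline_m, laser_tilt_rad):
    """Minimal triangulation sketch: the camera sits at the origin looking
    along +z, the laser is offset by `baseline_m` along +x and projects a
    slit plane tilted by `laser_tilt_rad` from the optical axis. The stripe's
    horizontal pixel offset u maps to depth z."""
    # Camera ray:  x = z * u / f      Laser plane:  x = baseline + z * tan(tilt)
    denom = u_px / focal_px - np.tan(laser_tilt_rad)
    if abs(denom) < 1e-9:
        return np.inf                      # ray parallel to the laser plane
    return baseline_m / denom              # depth z in metres

# e.g. z = slit_beam_depth(120.0, 800.0, 0.10, np.radians(-20.0))
```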
