• Title/Summary/Keyword: camera-based driving (카메라 기반 주행)

Search Results: 165

Real-Time Traffic Information and Road Sign Recognitions of Circumstance on Expressway for Vehicles in C-ITS Environments (C-ITS 환경에서 차량의 고속도로 주행 시 주변 환경 인지를 위한 실시간 교통정보 및 안내 표지판 인식)

  • Im, Changjae;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.55-69 / 2017
  • Recently, the IoT (Internet of Things) environment, in which intelligent objects are linked through networks, has been developing rapidly. Through the IoT, humans can communicate with objects and objects with one another, and the IoT provides artificial-intelligence services combined with situational awareness. The automotive industry is one of the industries building on the IoT. Self-driving vehicles, which are not only fuel-efficient and good for traffic flow but also put top priority on human safety, have become a central topic. For several years, research on recognizing the surrounding environment of self-driving vehicles using sensors such as lidar, cameras, and radar has been progressing actively, and it is currently being accelerated by vehicle-to-vehicle and vehicle-to-infrastructure networking based on WAVE (Wireless Access in Vehicular Environment). In this paper, recognition of traffic signs on highways is studied as part of the perception of the surrounding environment for self-driving vehicles. Exploiting the fact that traffic signs have standardized designs and installation locations, we present a learning approach and corresponding experimental results on how a vehicle recognizes traffic signs and the additional information on them.
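
The abstract does not detail the sign-recognition pipeline; as a minimal sketch of one common pre-processing step (my assumption, not the authors' method), the snippet below proposes candidate regions for highway guide signs by HSV color thresholding with OpenCV. The thresholds and the image path are placeholders.

```python
# Hypothetical sketch: color-based candidate extraction for highway guide signs.
# Assumptions (not from the paper): signs are the usual blue/green panels and a
# simple HSV threshold plus contour filtering is enough to propose regions.
import cv2
import numpy as np

def sign_candidates(bgr_image, min_area=1500):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Rough HSV range covering blue/green guide-sign panels (tunable assumption).
    mask = cv2.inRange(hsv, (35, 80, 60), (130, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:            # discard small blobs
            boxes.append((x, y, w, h))   # candidate region handed to the recognizer
    return boxes

# Example usage (file name is a placeholder):
# img = cv2.imread("highway_frame.jpg")
# print(sign_candidates(img))
```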

Autonomous Mobile Robot System Using Adaptive Spatial Coordinates Detection Scheme based on Stereo Camera (스테레오 카메라 기반의 적응적인 공간좌표 검출 기법을 이용한 자율 이동로봇 시스템)

  • Ko Jung-Hwan;Kim Sung-Il;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.26-35 / 2006
  • In this paper, an autonomous mobile robot system for intelligent path planning, using an adaptive spatial-coordinate detection scheme based on a stereo camera, is proposed. In the proposed system, the face area of a moving person is detected from the left image of the stereo image pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, using the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system, together with the perspective transformation between the 3-D scene and the image plane, depth information can be computed. Finally, based on an analysis of these calculated coordinates, intelligent path planning and estimation for the mobile robot are derived. In experiments on robot driving with 240 frames of stereo images, the error ratio between the calculated and measured values of the distance between the mobile robot and the objects, and of the relative distance between the other objects, is found to be as low as 2.19% and 1.52% on average, respectively.
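
As a minimal sketch of the depth step described above (my assumption of the standard disparity-to-depth pipeline, not the authors' exact code), the snippet below computes a block-matching disparity map from a rectified stereo pair and converts it to metric depth with Z = f·B/d. The focal length and baseline values in the usage note are placeholders.

```python
# Sketch: depth from a rectified stereo pair via block matching, then Z = f * B / d.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point -> pixels
    depth = np.full_like(disp, np.inf)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]   # metres
    return depth

# Example usage with hypothetical calibration values:
# L = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# R = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# Z = depth_from_stereo(L, R, focal_px=700.0, baseline_m=0.12)
```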

2D Spatial-Map Construction for Workers Identification and Avoidance of AGV (AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.347-352 / 2012
  • In this paper, 2D spatial-map construction for worker identification and avoidance by an AGV, using a spatial-coordinate detection scheme based on a stereo camera, is proposed. In the proposed system, the face area of a moving person is detected from the left image of the stereo image pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, using the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system, together with the perspective transformation between the 3-D scene and the image plane, a depth map can be computed. In experiments on AGV driving with 240 frames of stereo images, the error ratio between the calculated and measured values of the worker's width is found to be as low as 2.19% and 1.52% on average.
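
Complementing the depth sketch above, the snippet below illustrates the face-detection and centroid step described in this abstract, as a hedged sketch: skin-colored regions are segmented in the YCbCr color space and their centroid is returned as the tracking target. The threshold values are a common choice and an assumption, not taken from the paper.

```python
# Hedged sketch (assumed pipeline, not the paper's code): detect a skin-coloured
# face region in the YCbCr colour space and return its centroid for camera steering.
import cv2
import numpy as np

def face_centroid_ycbcr(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin range in (Y, Cr, Cb); thresholds are an assumption.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                      # no skin-coloured region found
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return int(cx), int(cy)              # centre coordinates used to steer the camera
```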

Camera and LiDAR Sensor Fusion for Improving Object Detection (카메라와 라이다의 객체 검출 성능 향상을 위한 Sensor Fusion)

  • Lee, Jongseo;Kim, Mangyu;Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.580-591 / 2019
  • This paper focuses on improving object detection performance on autonomous vehicle platforms using a camera and LiDAR, by fusing the objects detected by each sensor through a late-fusion approach. For object detection with the camera, the YOLOv3 model is employed as a one-stage detector, and the distance of the detected objects is estimated from a perspective-matrix formulation. Object detection with the LiDAR is based on the K-means clustering method. Camera-LiDAR calibration is carried out with PnP-RANSAC to calculate the rotation and translation matrices between the two sensors. For sensor fusion, the intersection over union (IoU) on the image plane, together with the distance and angle in world coordinates, is estimated, and these three attributes (IoU, distance, and angle) are fused using logistic regression. The performance evaluation in the sensor-fusion scenario shows a 5% improvement in object detection performance compared with using a single sensor.
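
A minimal sketch of the late-fusion step named in the abstract is shown below, under stated assumptions: each camera/LiDAR detection pair is described by the three attributes (IoU, distance gap, angle gap) and a logistic-regression model scores whether the pair corresponds to the same object. The training rows are toy data for illustration only, not the paper's dataset.

```python
# Illustrative late fusion: score camera/LiDAR detection pairs with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: rows = candidate pairs, columns = [IoU, |d_cam - d_lidar| (m), |angle gap| (deg)]
X = np.array([[0.82, 0.4,  1.0],
              [0.75, 0.8,  2.5],
              [0.10, 6.0, 14.0],
              [0.05, 9.5, 20.0],
              [0.60, 1.2,  3.0],
              [0.15, 4.0, 10.0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = same physical object, 0 = different objects

fusion = LogisticRegression().fit(X, y)
print(fusion.predict_proba([[0.70, 0.9, 2.0]])[0, 1])  # association probability for a new pair
```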

Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill;Lee Eung-Hyuk;Kim In-Young;Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.3 s.309 / pp.8-15 / 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision or optical flow are computationally expensive, so this paper presents a vision-based obstacle detection method that uses only two view images. The method uses a single passive camera and odometry, and runs in real time. It detects obstacles through 3-D reconstruction from the two views. Processing begins with feature extraction from each input image using Lowe's SIFT (Scale-Invariant Feature Transform) and establishing feature correspondences across the input images. Using the extrinsic camera rotation and translation matrices provided by odometry, the 3-D positions of the corresponding points are calculated by triangulation. The triangulation results form a partial 3-D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and is able to detect obstacles in 75 ms.
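
The snippet below sketches the described two-view reconstruction with OpenCV, under assumed implementation details: SIFT matches between the two frames are triangulated using the relative rotation R and translation t supplied by odometry and the intrinsic matrix K. The function name and the cross-check matching strategy are my choices, not the paper's.

```python
# Sketch: SIFT matching across two views, then triangulation with odometry pose.
import cv2
import numpy as np

def reconstruct_points(img1_gray, img2_gray, K, R, t):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1_gray, None)
    k2, d2 = sift.detectAndCompute(img2_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2xN image points
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])           # first camera at the origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])                    # second pose from odometry
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)             # homogeneous 4xN
    return (X_h[:3] / X_h[3]).T                                 # Nx3 points; obstacle candidates
```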

Research on the estimation of ship size information based on a ground-based radar using AI techniques (인공지능 기법을 이용한 육상 레이더 기반 선박 크기 정보 추정에 관한 연구)

  • JeongSu Lee;Jungwook Han;Kyurin Park;Hye-Jin Kim
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2023.05a / pp.76-76 / 2023
  • Recently, market interest in autonomous driving has been shifting naturally from autonomous cars to autonomous ship navigation. The development of Maritime Autonomous Surface Ships (MASS), which apply recent technologies such as artificial intelligence and big data to autonomous ship operation, is actively under way, and research is concentrating on applying sensor information such as radar and cameras to autonomous navigation in order to obtain various ship motions and related information. Following this trend, international bodies such as the IMO (International Maritime Organization) have begun full-scale discussions on MASS standardization and joined the competition to lead technical standards. Among these efforts, the development of coastal autonomous ships is regarded as a core unmanned-operation technology overseen by the IMO, so the importance of developing coastal navigation technology alongside existing ocean-going navigation technology is growing. In particular, because many ships enter and leave the waters near ports, maritime safety and logistics efficiency are required there, which calls for advanced autonomous navigation technology. However, situation-awareness technology on autonomous ships suffers from reduced recognition rates due to the limited field of view of the onboard sensors and weather conditions. To overcome these technical limitations, technology that can detect ships using radar installed on land is needed. This study introduces a method for automatically generating labels, for use with AI techniques, from target information on the radar screen obtained from a high-resolution ground-based radar, and a technique that applies AI methods to the obtained target information to estimate ship length.
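
The abstract does not describe the labelling procedure; purely as an illustrative sketch under my own assumptions (thresholding, connectivity, and the range-resolution value are placeholders), the snippet below segments bright targets on a radar intensity image and emits bounding-box labels with a rough length estimate from the box diagonal.

```python
# Highly hedged sketch of automatic label generation from a radar intensity image.
import cv2
import numpy as np

def auto_labels(radar_image_gray, metres_per_pixel=2.0, min_pixels=20):
    _, mask = cv2.threshold(radar_image_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    labels = []
    for i in range(1, n):                        # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_pixels:
            length_m = float(np.hypot(w, h)) * metres_per_pixel
            labels.append({"bbox": (int(x), int(y), int(w), int(h)),
                           "approx_length_m": round(length_m, 1)})
    return labels
```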

Vehicle Tracking for Forward Vehicle Detection (전방차량 인식을 위한 차량 추적 방법)

  • Jeong, Sung-Hwan;Kwon, Dong-Jin;Song, Hyok;Park, Sang-Hyun;Lee, Chul-Dong
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.486-487 / 2012
  • This paper proposes an algorithm for tracking the preceding vehicle while driving, for an FCW (Forward Collision Warning) system that recognizes forward vehicles using a camera installed in the vehicle. Haar-AdaBoost is used to detect candidate regions of the forward vehicle, and the candidate regions are filtered using edge information inside them. The filtered vehicle regions are tracked using a combination of region-based matching and Kalman prediction; when the vehicle detector fails to find the vehicle region, the candidate region is predicted with the Kalman filter and the predicted region is then verified, enabling efficient forward-vehicle recognition. Experiments confirmed that, when a candidate region tracked in the previous frame is missed by Haar-AdaBoost in the current frame, the forward vehicle is still tracked continuously through region-based matching and the Kalman prediction. The proposed method is expected to be applicable to vision-based FCW systems.
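
As a minimal sketch of the prediction fallback described above (the state model and noise settings are my assumptions, not the authors' implementation), the snippet below runs a constant-velocity Kalman filter on the bounding-box centre; when the detector misses a frame, the predicted centre stands in for the measurement.

```python
# Sketch: Kalman-predicted bounding-box centre as a fallback for missed detections.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                       # state: [x, y, vx, vy], measurement: [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(detection_xy):
    """detection_xy: (x, y) centre from the detector, or None if it missed this frame."""
    predicted = kf.predict()                      # prior estimate for the current frame
    if detection_xy is not None:
        kf.correct(np.float32(detection_xy).reshape(2, 1))
        return detection_xy                       # trust the detector when it fires
    return float(predicted[0]), float(predicted[1])   # fall back to the Kalman prediction
```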

The Character Recognition System of Mobile Camera Based Image (모바일 이미지 기반의 문자인식 시스템)

  • Park, Young-Hyun;Lee, Hyung-Jin;Baek, Joong-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.5 / pp.1677-1684 / 2010
  • Recently, with the development of mobile phones and the spread of smartphones, many kinds of content have been developed. In particular, since small cameras are built into mobile devices, there is strong interest in image-based content, and it has become an important part of practical applications. Among these, character recognition systems can be widely used in applications such as guidance systems for blind people, automatic robot navigation, automatic video retrieval and indexing, and automatic text translation. This paper therefore proposes a system that extracts text areas from natural images captured by a smartphone camera, recognizes the individual characters, and outputs the result as voice. Text areas are extracted using the AdaBoost algorithm, and individual characters are recognized using an error back-propagation neural network.
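
The snippet below is an illustrative stand-in for the character classifier named in the abstract, not the paper's dataset or network: an error back-propagation neural network (MLP) trained on scikit-learn's built-in digit images, showing the kind of classifier used for individual-character recognition.

```python
# Illustrative back-propagation character classifier on a stand-in dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                    # 8x8 character-like glyphs
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.2, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                                 # back-propagation training
print("test accuracy:", net.score(X_test, y_test))
```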

A Study of Tram-Pedestrian Collision Prediction Method Using YOLOv5 and Motion Vector (YOLOv5와 모션벡터를 활용한 트램-보행자 충돌 예측 방법 연구)

  • Kim, Young-Min;An, Hyeon-Uk;Jeon, Hee-gyun;Kim, Jin-Pyeong;Jang, Gyu-Jin;Hwang, Hyeon-Chyeol
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.561-568 / 2021
  • In recent years, autonomous driving has become a high-value-added technology that attracts attention in science and industry. For smooth self-driving, it is necessary to accurately detect objects and estimate their movement speed in real time. CNN-based deep learning algorithms and conventional dense optical flow are computationally expensive, making real-time detection and speed estimation difficult. In this paper, using a single camera image, fast object detection is performed with the YOLOv5 deep-learning algorithm, and the speed of each object is estimated quickly using a local dense optical flow, modified from conventional dense optical flow and computed on the detected object. Based on this algorithm, we present a system that can predict the collision time and collision probability, and through this system we intend to contribute to preventing tram accidents.
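
A hedged sketch of the "local" dense optical flow idea is shown below (the details are assumed, not taken from the paper): Farneback flow is computed only inside the detected bounding box, giving the object's mean image-plane motion at a fraction of the full-frame cost. The bounding-box argument stands in for a YOLOv5 detection.

```python
# Sketch: dense optical flow restricted to a detected bounding box.
import cv2
import numpy as np

def local_flow(prev_gray, curr_gray, bbox):
    """bbox = (x, y, w, h) from the detector (e.g. YOLOv5) in the previous frame."""
    x, y, w, h = bbox
    roi_prev = prev_gray[y:y + h, x:x + w]
    roi_curr = curr_gray[y:y + h, x:x + w]
    flow = cv2.calcOpticalFlowFarneback(roi_prev, roi_curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow[..., 0].mean(), flow[..., 1].mean()   # mean (dx, dy) in pixels/frame

# The pixel-per-frame motion can be converted to a speed with the camera geometry
# and frame rate, and combined with distance to yield a time-to-collision estimate.
```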

Application of Deep Learning-based Object Detection and Distance Estimation Algorithms for Driving to Urban Area (도심로 주행을 위한 딥러닝 기반 객체 검출 및 거리 추정 알고리즘 적용)

  • Seo, Juyeong;Park, Manbok
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.3 / pp.83-95 / 2022
  • This paper proposes a system that performs object detection and distance estimation for application to autonomous vehicles. Object detection is performed by a network that adjusts its grid partition to the aspect ratio of the input image, building on the widely used deep learning model YOLOv4, and it is trained on a custom dataset. The distance to each detected object is estimated using its bounding box and a homography. Experiments show that the proposed method improves overall detection performance at a processing speed close to real time. Compared with the original YOLOv4, the total mAP of the proposed method increased by 4.03%. The recognition accuracy for objects that frequently appear in urban driving, such as pedestrians, vehicles, construction sites, and PE drums, was improved, and the processing speed is approximately 55 FPS. The average distance-estimation error was 5.25 m in the X coordinate and 0.97 m in the Y coordinate.
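
The sketch below illustrates the bounding-box-plus-homography distance step named in the abstract, under stated assumptions: the homography values are placeholders (a real H would come from ground reference points, e.g. via cv2.findHomography), and the bottom-centre of the box is taken as the road contact point.

```python
# Sketch: map a bounding box's bottom-centre through an image-to-road homography.
import cv2
import numpy as np

# Hypothetical homography from image pixels to road-plane metres (placeholder values).
H = np.array([[0.012,   0.0005,  -7.9],
              [0.0007,  0.031,  -21.5],
              [0.00001, 0.0019,   1.0]], dtype=np.float64)

def ground_position(bbox):
    """bbox = (x, y, w, h) from the detector; uses the bottom-centre contact point."""
    x, y, w, h = bbox
    pt = np.array([[[x + w / 2.0, y + h]]], dtype=np.float64)   # shape 1x1x2 for OpenCV
    X, Y = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(X), float(Y)                                   # longitudinal / lateral distance (m)

# Example usage with a hypothetical detection:
# print(ground_position((600, 400, 80, 60)))
```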