• Title/Summary/Keyword: 카메라 기반 주행 (camera-based driving)

Search Results: 165

Research on Development of Construction Spatial Information Technology, using Rover's Camera System (로버 카메라 시스템을 이용한 건설공간정보화 기술의 개발 방안 연구)

  • Hong, Sungchul;Chung, Taeil;Park, Jaemin;Shin, Hyu-Sung
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.7, pp.630-637, 2019
  • The scientific, economic, and industrial value of the Moon has increased as massive deposits of water ice and rare resources were found during lunar exploration missions. Korea and the other major space agencies of the world are competitively developing ISRU (In Situ Resource Utilization) technology to secure future lunar resources and to construct a lunar base. To prepare for lunar construction, it is essential to develop rover-based construction spatial information technology that provides decision-making support during the lunar construction process. Thus, this research presents a construction spatial information technology based on a rover's camera system. Specifically, a conceptual design of the rover-mounted camera system was developed to acquire the rover's navigation images as well as images of the lunar terrain and construction sites around the rover. A reference architecture of the rover operation system was designed for computing the lunar construction spatial information. Rover localization and terrain reconstruction methods were also introduced, considering the characteristics of the lunar surface environment. The conceptual design of the construction spatial information technology still needs to be tested and validated; in a future study, the developed rover and rover operation system will therefore be applied to a lunar terrestrial analogue site for further improvement.

Deep Learning Model Selection Platform for Object Detection (사물인식을 위한 딥러닝 모델 선정 플랫폼)

  • Lee, Hansol;Kim, Younggwan;Hong, Jiman
    • Smart Media Journal, v.8 no.2, pp.66-73, 2019
  • Recently, object recognition using computer vision has attracted attention as a technology to replace sensor-based object recognition, which is often difficult to commercialize because it requires expensive sensors. Object recognition using computer vision, on the other hand, can replace those sensors with inexpensive cameras. Moreover, real-time recognition has become viable thanks to the growth of CNNs, which are being actively introduced into fields such as IoT and autonomous vehicles. However, applying an object recognition model demands expert knowledge of deep learning to select and train the model, which makes such methods challenging for non-experts. Therefore, in this paper, we analyze the structure of deep-learning-based object recognition models and propose a platform that automatically selects a deep-learning object recognition model based on the user's desired conditions. Through experiments on different models, we also present why a statistics-based selection of the object recognition model is needed.

Design and Implementation of the Stop line and Crosswalk Recognition Algorithm for Autonomous UGV (자율 주행 UGV를 위한 정지선과 횡단보도 인식 알고리즘 설계 및 구현)

  • Lee, Jae Hwan;Yoon, Heebyung
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.3, pp.271-278, 2014
  • Although the stop line and crosswalk are among the most basic objects a transportation system must recognize, the features that can be extracted from them are very limited, and they are difficult to recognize not only with image-based recognition technology but also with laser, RF, and GPS/INS technologies. For this reason, little research has been done in this area. In this paper, an algorithm to recognize the stop line and crosswalk is designed and implemented using image-based recognition on images captured by a vision sensor. The algorithm consists of three functions: 'Region of Interest', which pre-selects the area needed for feature extraction in order to speed up data processing; 'Color Pattern Inspection', which processes only those images in which white is detected above a certain proportion, removing unnecessary computation; and 'Feature Extraction and Recognition', which extracts edge features and compares them to previously built models to identify the stop line and crosswalk. In particular, a case-based feature comparison allows the algorithm to determine whether both the stop line and crosswalk are present or only one of them. The proposed algorithm also extends existing research by comparing and analyzing the effect of the in-vehicle camera installation, changes in recognition rate with estimated distance, and constraints such as backlight and shadow.
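The 'Color Pattern Inspection' gate described above can be sketched in a few lines: process a region only when its proportion of white pixels exceeds a threshold. The image format (nested lists of RGB tuples), the intensity level of 200, and the 10% threshold are illustrative assumptions, not values taken from the paper.

```python
def white_ratio(image, roi, level=200):
    """Fraction of pixels in roi=(top, bottom, left, right) that look white."""
    top, bottom, left, right = roi
    total = 0
    white = 0
    for row in image[top:bottom]:
        for r, g, b in row[left:right]:
            total += 1
            # A pixel counts as "white" when all three channels are bright.
            if r >= level and g >= level and b >= level:
                white += 1
    return white / total if total else 0.0

def should_extract_features(image, roi, min_ratio=0.1):
    # Skip the expensive edge-feature step when too few white pixels exist.
    return white_ratio(image, roi) >= min_ratio
```

In a real pipeline the region of interest would be fixed in advance (e.g. the lower part of the frame where road markings appear), so the gate runs on a small slice of each image.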

Pallet Measurement Method for Automatic Pallet Engaging in Real-Time (자동 화물처리를 위한 실시간 팔레트 측정 방법)

  • Byun, Sung-Min;Kim, Min-Hwan
    • Journal of Korea Multimedia Society, v.14 no.2, pp.171-181, 2011
  • A vision-based method for determining the position and orientation of pallets is presented in this paper, which guides autonomous forklifts to engage pallets automatically. The method uses a single camera mounted on the fork carriage instead of the two-camera stereo vision conventionally used for positioning objects in 3D space. An image back-projection technique for determining the orientation of a pallet without any fiducial marks is suggested: two feature lines on the front plane of the pallet are projected backward onto a virtual plane that can be rotated around a given axis in 3D space. We show that the rotation angle of the virtual plane at which the back-projected feature lines become parallel describes the orientation of the pallet's front plane. The position of the pallet is then determined from the ratio of the distance between the back-projected feature lines to their real distance on the pallet's front plane. Tests on real pallet images show that the proposed method is practically applicable in real environments in real time.
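The distance-from-ratio step above reduces, for a fronto-parallel pallet plane, to similar triangles in a pinhole camera model. The sketch below shows only that idea; the focal length and line spacings are illustrative values, not the paper's calibration.

```python
def pallet_distance(focal_px, real_spacing_m, image_spacing_px):
    """Distance to a fronto-parallel pallet plane via similar triangles:
    image_spacing / focal = real_spacing / distance."""
    return focal_px * real_spacing_m / image_spacing_px

# e.g. an 800 px focal length, feature lines 0.30 m apart on the pallet,
# observed 120 px apart in the image:
d = pallet_distance(800.0, 0.30, 120.0)  # 2.0 m
```

In the paper's method this ratio is taken after the virtual-plane rotation has made the two feature lines parallel, so the fronto-parallel assumption holds by construction.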

Remote Navigation and Monitoring System for Mobile Robot Using Smart Phone (스마트 폰을 이용한 모바일로봇의 리모트 주행제어 시스템)

  • Park, Jong-Jin;Choi, Gyoo-Seok;Chun, Chang-Hee;Park, In-Ku;Kang, Jeong-Jin
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.11 no.6, pp.207-214, 2011
  • In this paper, a remote monitoring and navigation system for a mobile robot has been developed using Zigbee-based wireless sensor networks and a Lego MindStorms NXT robot. A mobile robot can estimate its position from the encoder values of its motors, but friction and limited motor power introduce errors. To fix this problem and obtain a more accurate position, an ultrasound module on the wireless sensor network is used. To overcome the disadvantages of ultrasound, namely its directionality and narrow detection coverage, the moving node attached to the mobile robot is rotated 360° to measure its distance from each of four fixed nodes. The location of the mobile robot is then estimated by triangulation using the measured distances. In addition, images from a USB web camera are sent over the network to a smart phone, on which we can see the robot's location and the surroundings of the places it navigates. Remote monitoring and navigation are then possible simply by clicking points on the map on the smart phone.
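A common way to carry out the triangulation step above is least-squares trilateration: the four circle equations are linearized by subtracting the first from the others, and the resulting 2-unknown system is solved via the normal equations. The sketch below assumes known anchor coordinates, which are illustrative and not taken from the paper.

```python
import math

def trilaterate(anchors, dists):
    """Estimate (x, y) from distances to fixed anchor nodes."""
    (x0, y0), d0 = anchors[0], dists[0]
    # Subtracting the first circle equation from the others gives a
    # linear system A [x, y]^T = b.
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations (A^T A) z = A^T b for the least-squares solution.
    s_aa = sum(ax * ax for ax, _ in A)
    s_ab = sum(ax * ay for ax, ay in A)
    s_bb = sum(ay * ay for _, ay in A)
    s_ax = sum(ax * bi for (ax, _), bi in zip(A, b))
    s_ay = sum(ay * bi for (_, ay), bi in zip(A, b))
    det = s_aa * s_bb - s_ab * s_ab
    x = (s_bb * s_ax - s_ab * s_ay) / det
    y = (s_aa * s_ay - s_ab * s_ax) / det
    return x, y

# Four fixed nodes at the corners of a 10 m square, robot at (3, 4):
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = (3.0, 4.0)
dists = [math.dist(truth, a) for a in anchors]
x, y = trilaterate(anchors, dists)  # recovers (3.0, 4.0)
```

With noisy ultrasound ranges the same code returns the least-squares estimate rather than an exact intersection, which is why having four anchors rather than the minimum three helps.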

Development of a Vehicle Positioning Algorithm Using In-vehicle Sensors and Single Photo Resection and its Performance Evaluation (차량 내장 센서와 단영상 후방 교차법을 이용한 차량 위치 결정 알고리즘 개발 및 성능 평가)

  • Kim, Ho Jun;Lee, Im Pyeong
    • Journal of Korean Society for Geospatial Information Science, v.25 no.2, pp.21-29, 2017
  • For the efficient and stable operation of the autonomous vehicles and advanced driver assistance systems being actively studied nowadays, it is important to determine the position of the vehicle accurately and economically. A satellite-based navigation system is mainly used for positioning, but it has a limitation in signal blockage areas. To overcome this limitation, sensor fusion methods that add sensors such as an inertial navigation system have been proposed, but the high sensor cost is a problem. In this work, we develop a vehicle position estimation algorithm using in-vehicle sensors and a low-cost imaging sensor, without any expensive additional sensor. We determine the vehicle positions using the velocity and yaw-rate from the in-vehicle sensors together with the position and attitude of the camera obtained by single photo resection. For the evaluation, we built a prototype system, acquired test data with it, and estimated the trajectory. The proposed algorithm is about 40% more accurate than a method using the in-vehicle sensors only.
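Between camera fixes, velocity and yaw-rate propagate the pose by dead reckoning. The sketch below shows only that propagation step with a simple Euler integration (the single photo resection correction is omitted); the time step and measurement values are illustrative assumptions.

```python
import math

def propagate(x, y, heading, v, yaw_rate, dt):
    """Advance the vehicle pose by one sensor sample (Euler step).
    heading is in radians, v in m/s, yaw_rate in rad/s."""
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += yaw_rate * dt
    return x, y, heading

# Drive straight along the x-axis at 10 m/s for 1 s in ten 0.1 s steps:
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = propagate(*pose, v=10.0, yaw_rate=0.0, dt=0.1)
# pose is now approximately (10.0, 0.0, 0.0)
```

Because heading errors accumulate quadratically in position, periodic absolute fixes (here, the camera pose from single photo resection) are what keep such a dead-reckoned trajectory bounded.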

A Lane Detection and Departure Warning System Robust to Illumination Change and Road Surface Symbols (도로조명변화 및 노면표시에 강인한 차선 검출 및 이탈 경고 시스템)

  • Kim, Kwang Soo;Choi, Seung Wan;Kwak, Soo Yeong
    • Journal of Korea Society of Industrial Information Systems, v.22 no.6, pp.9-16, 2017
  • An algorithm for lane detection and lane departure warning for a vehicle driving on roads is proposed in this paper. Using images obtained from on-board cameras for lane detection raises several difficulties, e.g. an increased false detection ratio due to symbols painted on the road, missed yellow lanes in tunnels due to similarly colored lighting, and missed lanes on rainy days due to low illumination. The proposed algorithm was developed with a focus on solving these problems. It also has an additional function to determine how far the vehicle is leaning to either side between the lanes and, if necessary, to warn the driver. For validation, experiments were conducted on an image database collected with a vehicle on-board blackbox in six different situations. The experimental results show a high performance, with an overall detection success ratio of 97%.
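The "how far the vehicle is leaning" check above can be sketched as a normalized lateral offset: given the detected left/right lane x-positions at the bottom image row, measure how far the image centerline deviates from the lane center and warn past a threshold. The pixel values and the 0.3 threshold are hypothetical, not taken from the paper.

```python
def departure_ratio(left_x, right_x, image_width):
    """Signed offset of the image center from the lane center,
    normalized by half the lane width (0 = centered, ±1 = on a lane)."""
    lane_center = (left_x + right_x) / 2
    half_width = (right_x - left_x) / 2
    return (image_width / 2 - lane_center) / half_width

def should_warn(left_x, right_x, image_width, threshold=0.3):
    # Warn symmetrically for drift toward either lane boundary.
    return abs(departure_ratio(left_x, right_x, image_width)) >= threshold

# Camera centered in a 640 px wide image, lanes detected at 170 and 470 px:
r = departure_ratio(170, 470, 640)  # 0.0, i.e. centered in the lane
```

This assumes the camera is mounted on the vehicle centerline; a fixed mounting offset would simply be subtracted from the ratio.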

Intelligent Driver Assistance Systems based on All-Around Sensing (전방향 환경인식에 기반한 지능형 운전자 보조 시스템)

  • Kim Sam-Yong;Kang Geong-Kwan;Ryu Young-Woo;Oh Se-Young;Kim Kwang-Soo;Park Sang-Cheol;Kim Jin-Won
    • Journal of the Institute of Electronics Engineers of Korea TC, v.43 no.9 s.351, pp.49-59, 2006
  • DAS (Driver Assistance Systems) support the driver's decision making to increase safety and comfort, issuing warning signals or even exerting active control in dangerous conditions. Most previous research and products offer only a single warning service, such as lane departure warning, collision warning, or lane change assistance. Although these functions elevate driving safety and convenience to a certain degree, a new type of DAS should integrate all the important functions within an efficient HMI (Human-Machine Interface) framework for various driving conditions. We propose an integrated DAS based on all-around sensing that removes blind spots using 2 cameras and 8 sonars, recognizes the driving environment through lane and vehicle detection, and constructs a novel bird's-eye HMI for easy comprehension. It can give proper warnings in case of imminent danger.

Panorama Image Stitching Using Synthetic Fisheye Images (Synthetic fisheye 이미지를 이용한 360° 파노라마 이미지 스티칭)

  • Kweon, Hyeok-Joon;Cho, Donghyeon
    • Journal of Broadcast Engineering, v.27 no.1, pp.20-30, 2022
  • Recently, as VR (Virtual Reality) technology has come into the spotlight, 360° panorama images that deliver lively VR content are attracting a lot of attention. Image stitching is the key technology for producing 360° panorama images, and many studies are actively being conducted. Typical stitching algorithms are feature-point based, but their results are strongly affected by the quality of the feature points. To solve this problem, deep-learning-based image stitching has recently been studied, but problems remain when the images share few overlapping areas or exhibit large parallax. In addition, fully supervised learning is limited because labeled ground-truth panorama images cannot be obtained in a real environment. Therefore, we produced three fisheye images with different camera centers, together with the corresponding ground-truth image, using the CARLA simulator widely used in the autonomous driving field. We propose an image stitching model that creates a 360° panorama image from the produced fisheye images. The final experiments, on virtual datasets configured to resemble the actual environment, verify stitching results that are robust to various environments and large parallax.

Development of a Cause Analysis Program to Risky Driving with Vision System (Vision 시스템을 이용한 위험운전 원인 분석 프로그램 개발에 관한 연구)

  • Oh, Ju-Taek;Lee, Sang-Yong
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.8 no.6, pp.149-161, 2009
  • The electronic control systems of vehicles are developing rapidly to balance driver safety with legal and social needs, and driver assistance systems are being put into practical use as hardware costs drop and sensors become highly efficient. This study developed a lane and vehicle detection program using a CCD camera. A risky driving analysis program based on vision systems is then built by combining a risky-driving detection algorithm from our previous study with the lane and vehicle detection program suggested here. Fed with vehicle motion data and lane data, the developed program is useful for efficiently analyzing the cause and effect of risky driving behavior.
