• Title/Summary/Keyword: 자율주행로봇 (autonomous mobile robot)

Search Results: 468

Monocular Vision Based Localization System using Hybrid Features from Ceiling Images for Robot Navigation in an Indoor Environment (실내 환경에서의 로봇 자율주행을 위한 천장영상으로부터의 이종 특징점을 이용한 단일비전 기반 자기 위치 추정 시스템)

  • Kang, Jung-Won;Bang, Seok-Won;Atkeson, Christopher G.;Hong, Young-Jin;Suh, Jin-Ho;Lee, Jung-Woo;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.197-209 / 2011
  • This paper presents a localization system using ceiling images in a large indoor environment. For a low-cost, low-complexity system, we propose a single-camera system that uses ceiling images acquired from a camera installed to point upwards. For reliable operation, we propose a method using hybrid features, which include natural landmarks in the natural scene and artificial landmarks observable in the infrared domain. Compared with previous works utilizing only infrared-based features, our method reduces the required number of artificial features, as we exploit both natural and artificial features. In addition, compared with previous works using only the natural scene, our method has an advantage in convergence speed and robustness, as an observation of an artificial feature provides a crucial clue for robot pose estimation. In an experiment with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, ours is the first ceiling-vision-based localization method using features from both the visible and infrared domains. Our system can easily be used with a variety of service robot applications in a large indoor environment.
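The hybrid-feature idea above can be illustrated with a toy pose correction: an artificial (infrared) landmark observation anchors the estimate strongly, while natural ceiling features give a weaker correction. The blending rule and the weight values below are my illustrative assumptions, not the paper's actual estimator.

```python
# Hypothetical sketch: blend a predicted (x, y) pose with landmark-based
# observations, trusting the artificial (infrared) landmark more than the
# natural ceiling features. Weights are invented for illustration.

def fuse_pose(predicted, natural_obs=None, artificial_obs=None,
              w_natural=0.3, w_artificial=0.9):
    """Move the predicted pose toward each available observation,
    weighted by how much that landmark type is trusted."""
    x, y = predicted
    for obs, w in ((natural_obs, w_natural), (artificial_obs, w_artificial)):
        if obs is not None:
            ox, oy = obs
            x, y = x + w * (ox - x), y + w * (oy - y)
    return x, y
```

When only natural features are matched the pose converges slowly; a single artificial-landmark observation snaps the estimate most of the way to the observed position, mirroring the convergence advantage the abstract describes.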

Development of distance sensor module with object tracking function using radial arrangement of phototransistor for educational robot (교육용 로봇을 위한 포토트랜지스터의 방사형 배열을 이용한 물체추적기능을 갖는 거리 센서 모듈 개발)

  • Cho, Se-Hyoung
    • Journal of IKEEE / v.22 no.4 / pp.922-932 / 2018
  • Radial distance sensors are widely used for surveying and autonomous navigation, and it is necessary to teach their operating principles and applications. Although commercial radial distance sensors continue to become more affordable at the cost of lower performance, they are still too expensive for educational purposes. In this paper, we propose a distance sensor module with an object tracking function, built from a radial array of low-cost phototransistors, that can be used in an educational robot. The proposed method can immediately detect the position of a fast-moving object by arranging the phototransistors over a 180-degree range, and it extends the sensing angle range and tracks the object by rotating the sensor with a servo motor. The scan speed of the proposed sensor is 50~200 times faster than that of commercial distance sensors, so it can be applied to a high-performance educational mobile robot with a 1 ms control loop.
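The radial-array principle above can be sketched as follows: the brightest phototransistor gives a coarse bearing, and a weighted average over its neighbours refines it. The sensor count, spacing, and interpolation scheme are my assumptions, not the paper's design.

```python
# Sketch: estimate an object's bearing from a radial phototransistor array.
# The peak reading gives the coarse direction; a weighted average over the
# peak and its immediate neighbours gives sub-sensor angular resolution.

def object_bearing(readings, fov_deg=180.0):
    """Estimate object bearing in degrees from radially arranged readings.

    readings: light intensities, one per phototransistor, spread
    uniformly over `fov_deg` degrees (0 = leftmost sensor).
    """
    n = len(readings)
    step = fov_deg / (n - 1)                      # angular spacing
    peak = max(range(n), key=lambda i: readings[i])
    lo, hi = max(0, peak - 1), min(n - 1, peak + 1)
    total = sum(readings[lo:hi + 1])
    if total == 0:
        return peak * step                        # no signal: coarse bearing
    # Intensity-weighted centroid of the peak neighbourhood.
    return sum(i * step * readings[i] for i in range(lo, hi + 1)) / total
```

Because the whole array is read at once, a fast-moving object is located in a single scan; the servo rotation described in the abstract would then re-center the array on this bearing.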

Implementation of an Autonomous Driving System for the Segye AI Robot Car Race Competition (세계 AI 로봇 카레이스 대회를 위한 자율 주행 시스템 구현)

  • Choi, Jung Hyun;Lim, Ye Eun;Park, Jong Hoon;Jeong, Hyeon Soo;Byun, Seung Jae;Sagong, Ui Hun;Park, Jeong Hyun;Kim, Chang Hyun;Lee, Jae Chan;Kim, Do Hyeong;Hwang, Myun Joong
    • The Journal of Korea Robotics Society / v.17 no.2 / pp.198-208 / 2022
  • In this paper, an autonomous driving system is implemented for the Segye AI Robot Race Competition, in which multiple vehicles drive simultaneously. Using the ERP42-racing platform, RTK-GPS, and LiDAR sensors provided in the competition, we propose an autonomous driving system that can drive safely and quickly in a road environment with multiple vehicles. The system consists of recognition, judgement, and control parts. In the recognition stage, vehicle localization and obstacle detection are performed through a waypoint-based LiDAR ROI. In the judgement stage, the target velocity and obstacle avoidance decisions are determined in consideration of straight/curved sections and the distance between the vehicle and neighboring vehicles. In the control stage, adaptive-cruise longitudinal velocity control based on a safe distance and lateral control based on pure pursuit are performed. To overcome the limited experimental environment, simulations and partial real-world experiments were conducted together to develop and verify the proposed algorithms. We then participated in the Segye AI Robot Race Competition and performed autonomous racing with the verified algorithms.
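The pure-pursuit lateral control named in the abstract can be sketched with the classic bicycle-model geometry; the wheelbase and look-ahead handling below are generic assumptions, not the team's tuning.

```python
import math

# Classic pure-pursuit steering law: transform the look-ahead point into
# the vehicle frame, then steer with curvature 2*sin(alpha)/ld.

def pure_pursuit_steering(x, y, yaw, target, wheelbase=1.0, lookahead=None):
    """Steering angle (rad) driving the vehicle toward `target` = (tx, ty).

    (x, y, yaw): current pose in the world frame; positive angle = left turn.
    """
    tx, ty = target
    dx, dy = tx - x, ty - y
    # World-to-vehicle frame rotation.
    lx = math.cos(yaw) * dx + math.sin(yaw) * dy
    ly = -math.sin(yaw) * dx + math.cos(yaw) * dy
    ld = lookahead if lookahead is not None else math.hypot(lx, ly)
    alpha = math.atan2(ly, lx)        # heading error to the look-ahead point
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)
```

A target dead ahead yields zero steering; targets to the left or right yield proportionally larger corrections, which is what makes the law smooth on both straight and curved waypoint sections.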

Land Preview System Using Laser Range Finder based on Heave Estimation (Heave 추정 기반의 레이저 거리측정기를 이용한 선행지형예측시스템)

  • Kim, Tae-Won;Kim, Jin-Hyoung;Kim, Sung-Soo;Ko, Yun-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.1 / pp.64-73 / 2012
  • In this paper, a new land preview system using a laser range finder based on a heave estimation algorithm is proposed. The proposed land preview system is equipment that measures the shape of the forward topography for an autonomous vehicle. To implement such a system, a laser range finder is generally used because of its wide measuring range and robustness under various environmental conditions. The current location of the vehicle must then be known to generate the shape of the forward topography, and acceleration-based sensors such as an IMU or accelerometer are generally used to measure heave motion in conventional land preview systems. However, the drawback of these sensors is that they are too expensive for low-cost vehicles such as mobile robots, and their measurement error increases for mobile robots with abrupt acceleration. To overcome this drawback, this paper proposes an algorithm that estimates heave motion using odometer information and previously measured topography. The proposed land preview system based on the heave estimation algorithm is verified through simulations and experiments on various terrains using a simulator and a real system.
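The heave-estimation idea above can be sketched simply: the terrain now under the vehicle was scanned earlier by the forward-looking range finder, so the odometer distance can index the stored profile and recover vertical motion without an IMU. The profile representation and linear interpolation are my assumptions.

```python
# Sketch: estimate heave as the stored terrain height at the current
# odometer position. `profile` is a list of (distance_m, height_m) samples
# previously measured by the forward-looking laser range finder.

def heave_from_profile(profile, odometer_m):
    """Look up (with linear interpolation) the terrain height under the
    vehicle, given how far the odometer says it has travelled."""
    # Clamp queries outside the recorded range.
    if odometer_m <= profile[0][0]:
        return profile[0][1]
    if odometer_m >= profile[-1][0]:
        return profile[-1][1]
    # Linear interpolation between the bracketing samples.
    for (d0, h0), (d1, h1) in zip(profile, profile[1:]):
        if d0 <= odometer_m <= d1:
            t = (odometer_m - d0) / (d1 - d0)
            return h0 + t * (h1 - h0)
```

Unlike an accelerometer, this estimate does not drift with abrupt acceleration; its error is bounded by the odometer and the resolution of the stored profile.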

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.256-262 / 2004
  • A key capability of an autonomous mobile robot is localizing itself and building a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot. These features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling and wall regions of the calibrated image. In the initial map building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched against the map; if some features are not matched, they are added to the map. This map matching and updating process continues until map building is finished. Localization is used both during map building and when searching for the robot's location on the map: the features calculated at the robot's position are matched against the existing map to estimate the robot's real position, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50㎡ region, the positioning accuracy is ±13 cm, and the error in the robot's positioning angle is ±3 degrees.
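The radial-distortion removal step in the preprocessing can be illustrated with a one-parameter division model; the actual fisheye calibration model used in the paper may well differ, so treat this as a generic sketch of the idea.

```python
# Sketch of radial undistortion with the one-parameter division model:
#   r_undistorted = r_distorted / (1 + k * r_distorted^2)
# Points far from the distortion centre are pulled inward more strongly,
# which is how fisheye barrel distortion is (approximately) removed.

def undistort_point(xd, yd, cx, cy, k):
    """Map a distorted pixel (xd, yd) to its undistorted position.

    (cx, cy): distortion centre; k: radial coefficient (assumed known
    from a prior calibration step).
    """
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 / (1.0 + k * r2)
    return cx + dx * scale, cy + dy * scale
```

With k = 0 the mapping is the identity, and the distortion centre is always a fixed point, which makes the model easy to sanity-check against a calibration image.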


Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented/virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver those services, reduced latency and high reliability are critical for real-time services, on top of high-speed data. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using vehicle-based V2X (Vehicle-to-Everything), reduced delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting indoor use. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major factor in delay.
Since SDNs with conventional centralized structures find it difficult to meet the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs should therefore be separated at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough, with less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the 5G data rate is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50~250 m, and vehicle speeds of 30~200 km/h were considered in order to examine the network architecture that minimizes delay.
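The abstract's simulation ranges (cell radius 50~250 m, vehicle speed 30~200 km/h) invite a simple back-of-the-envelope check: how long does a vehicle stay inside one small cell, and how many information-update cycles complete in that time? The diameter-crossing assumption below is my simplification, not the paper's model.

```python
# Back-of-the-envelope sketch: dwell time of a vehicle in a 5G small cell
# and the number of SDN information-update cycles that fit inside it.
# The crossing is assumed to follow the cell diameter (worst case is shorter).

def cell_dwell_time_s(cell_radius_m, speed_kmh):
    """Time to cross a cell along its diameter, in seconds."""
    speed_ms = speed_kmh / 3.6            # km/h -> m/s
    return (2.0 * cell_radius_m) / speed_ms

def updates_per_cell(cell_radius_m, speed_kmh, update_cycle_ms):
    """Whole information-update cycles completed while inside one cell."""
    dwell_ms = cell_dwell_time_s(cell_radius_m, speed_kmh) * 1000.0
    # Small epsilon guards against float rounding just below a multiple.
    return int((dwell_ms + 1e-6) // update_cycle_ms)
```

At the fastest assumed speed (200 km/h) and the smallest cell (50 m radius), the dwell time is only 1.8 s, so the information change cycle must be far shorter than that for the vehicle to receive usable updates in every cell it crosses.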

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin;Seo, Hoseong;Kim, Pyojin;Lee, Chung-Keun
    • Journal of Advanced Navigation Technology / v.19 no.2 / pp.133-139 / 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides the mobile system to a desired pose; this input velocity is calculated from the feature differences between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment while simultaneously determining the mobile system's location, which is essential for operating unmanned systems in unknown environments. The trends in visual navigation are surveyed by examining international research related to these technologies.
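The visual-odometry idea of chaining frame-to-frame relative poses can be sketched in a planar model; the (x, y, heading) parameterization is an illustrative simplification of the full 6-DoF case.

```python
import math

# Sketch of dead reckoning by visual odometry: each frame pair yields a
# relative motion expressed in the robot frame, and chaining these motions
# gives the global pose. Planar (x, y, theta) model for brevity.

def compose(pose, delta):
    """Chain a relative motion `delta` = (dx, dy, dtheta), expressed in the
    robot frame, onto the global `pose` = (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def integrate(deltas, start=(0.0, 0.0, 0.0)):
    """Integrate a sequence of frame-to-frame motions into a final pose."""
    pose = start
    for d in deltas:
        pose = compose(pose, d)
    return pose
```

Because each delta carries a small error, the composed pose drifts over time; this drift is exactly what loop closures in visual SLAM correct, which is why the survey treats odometry and SLAM as distinct branches.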

Analysis of the Valuation Model for the state-of-the-art ICT Technology (첨단 ICT 기술에 대한 가치평가 모델 분석)

  • Oh, Sun-Jin
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.705-710 / 2021
  • Cutting-edge information and communication technology is a genuine core technology of the Fourth Industrial Revolution and is still progressing rapidly across various fields. The biggest issues in the ICT field are machine-learning-based artificial intelligence applications using big data in wireless-network-based cloud computing environments, and autonomous control applications such as autonomous cars and mobile robots. Since the value of high-tech ICT depends on surrounding environmental factors and is highly variable, a precise technology valuation method is urgently needed for successful technology transfer, transaction, and commercialization. In this research, we analyze the characteristics of high-tech ICT and the main factors in the technology transfer and commercialization process, and we propose a precise technology valuation method that reflects the characteristics of ICT through a phased analysis of existing technology valuation models.

A Study of Real Time Object Tracking using Reinforcement Learning (강화학습을 사용한 실시간 이동 물체 추적에 관한 연구)

  • 김상헌;이동명;정재영;운학수;박민욱;김관형
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09b / pp.87-90 / 2003
  • Earlier mobile robot systems aimed mainly at fully autonomous navigation, and image information was used only as an auxiliary means of monitoring. Today, however, research is actively using image information in diverse applications such as moving-object tracking, object recognition and discrimination, and feature extraction. On the control side, many nonlinear control problems that were hard to solve with traditional techniques have been handled with intelligent control methods, in which neural networks are often used; recently, reinforcement learning has become a popular way to train such networks. Reinforcement learning is a method of learning, through trial and error on a dynamic control plane, which action to take in each situation to achieve a goal, so the mapping from situations to actions is learned over many trials. The control parameters under study include the possible states and actions, state transitions, and reward algorithms for finding the optimal solution. The system studied in this paper uses a vision system and a StrongARM board to identify the color and shape of a target object and then track it in real time. Through experiments, we propose a method that uses reinforcement learning to cope more flexibly with the nonlinear tendencies of object motion, tracking objects more stably, quickly, and accurately.
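The reinforcement-learning idea in the abstract can be illustrated with a minimal tabular Q-learning update; the states (object offset in the image), actions (camera turns), and reward here are illustrative stand-ins, not the authors' setup.

```python
# Minimal tabular Q-learning sketch: one trial-and-error update of the
# state-action value table. The tracker is rewarded when its action
# re-centers the object in the image.

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy example: states are the object's offset in the image, actions are
# camera turns; turning toward an off-center object earns reward 1.
q = {s: {'turn_left': 0.0, 'turn_right': 0.0, 'stay': 0.0}
     for s in ('left', 'center', 'right')}
q_update(q, 'left', 'turn_left', 1.0, 'center')
```

Repeating such updates over many trials is exactly the trial-and-error learning the abstract describes: the table gradually encodes which action keeps the target centered in each situation.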


Design of Navigation Algorithm for Mobile Robot using Sensor fusion (센서 합성을 이용한 자율이동로봇의 주행 알고리즘 설계)

  • Kim Jung-Hoon;Kim young-Joong;Lim Myo-Teag
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.10 / pp.703-713 / 2004
  • This paper presents a new obstacle avoidance method based on vision and sonar sensors, and proposes a navigation algorithm. Sonar sensors alone provide poor information because the angular resolution of each sensor is not exact, so they are not suitable for detecting the relative direction of obstacles. In addition, it is not easy to detect obstacles with vision sensors alone because of image disturbance. This paper therefore presents a new obstacle direction measurement method that combines sonar sensors, for exact distance information, with vision sensors, for rich scene information. A modified splitting/merging algorithm is proposed; it is more robust to image disturbance than the edge detecting algorithm and is efficient for grouping obstacles. To verify the proposed algorithm, we compare it with the edge detecting algorithm in experiments. The obstacle direction and the relative distance are used as inputs to a fuzzy controller, and we design angular velocity controllers for obstacle avoidance and for navigating along the corridor center, respectively. To verify the stability and effectiveness of the proposed method, it is applied to a vision- and sonar-based mobile robot navigation system.
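The idea of feeding obstacle direction and distance into a fuzzy angular-velocity controller can be sketched with two triangular membership functions; the rule base and output scaling below are invented for illustration and are not the authors' actual controller.

```python
# Toy fuzzy-control sketch: turn away from an obstacle, turning harder
# when it is both near and roughly in front of the robot.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def avoid_angular_velocity(obstacle_dir_deg, distance_m):
    """Angular velocity (rad/s, positive = turn left) to avoid an obstacle.

    obstacle_dir_deg: obstacle bearing (-90..90, 0 = dead ahead,
    positive = obstacle on the left); distance_m: range to the obstacle.
    """
    near = tri(distance_m, 0.0, 0.0, 2.0)              # "near" membership
    central = tri(obstacle_dir_deg, -90.0, 0.0, 90.0)  # "in front" membership
    strength = min(near, central)                      # fuzzy AND of the two
    # Steer away from the obstacle side: obstacle on the right -> turn left.
    direction = 1.0 if obstacle_dir_deg <= 0 else -1.0
    return direction * strength * 1.0                  # 1 rad/s max, assumed
```

A far-away or off-axis obstacle produces no correction, while a close, central one produces the strongest turn, which captures the smooth degradation that makes fuzzy control attractive for this task.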