• Title/Summary/Keyword: Vision navigation


Issue-Tree and QFD Analysis of Transportation Safety Policy with Autonomous Vehicle (Issue-Tree기법과 QFD를 이용한 자율주행자동차 교통안전정책과제 분석)

  • Nam, Doohee;Lee, Sangsoo;Kim, Namsun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.15 no.4 / pp.26-32 / 2016
  • An autonomous car (driverless car, self-driving car, robotic car) is a vehicle that is capable of sensing its environment and navigating without human input. Autonomous cars can detect their surroundings using a variety of techniques such as radar, lidar, GPS, odometry, and computer vision. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage. Autonomous cars have control systems that are capable of analyzing sensory data to distinguish between different cars on the road, which is very useful in planning a path to the desired destination. An issue tree, also called a logic tree, is a graphical breakdown of a question that dissects it into its components vertically and progresses into detail as it reads to the right. Issue trees are useful in problem solving to identify the root causes of a problem as well as its potential solutions. They also provide a reference point for seeing how each piece fits into the whole picture of a problem. Using issue-tree methods, transportation safety policies were developed with autonomous vehicles in mind.
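The left-to-right decomposition an issue tree performs can be sketched as a nested structure; the policy question and sub-issues below are illustrative placeholders, not the ones analyzed in the paper.

```python
def walk(node, depth=0):
    """Render the tree with indentation growing to the right,
    the way an issue tree reads from question to detail."""
    lines = ["  " * depth + node["issue"]]
    for child in node.get("children", []):
        lines.extend(walk(child, depth + 1))
    return lines

# Hypothetical decomposition of a transportation-safety question.
tree = {
    "issue": "How to ensure traffic safety with autonomous vehicles?",
    "children": [
        {"issue": "Vehicle-side factors",
         "children": [{"issue": "Sensor reliability"}, {"issue": "Fail-safe control"}]},
        {"issue": "Infrastructure factors",
         "children": [{"issue": "Road signage standards"}]},
    ],
}

for line in walk(tree):
    print(line)
```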

Analysis of the Feasibility of GNSS/Geoid Technology in Determining Orthometric Height in Mountain (산악지 표고결정에 있어서 GNSS/Geoid 기술의 활용가능성 분석)

  • Lee, Suk Bae;Lee, Keun Sang;Lee, Min Kun
    • Journal of Korean Society for Geospatial Information Science / v.25 no.2 / pp.57-65 / 2017
  • The purpose of this study is to analyze the feasibility of using Global Navigation Satellite System (GNSS)/Geoid technology for determining orthometric heights in mountainous areas. For the study, a test bed was set up in and around Mount Jiri and GNSS surveying was conducted. The orthometric height of 39 benchmarks was determined by applying the EGM2008, KNGeoid13, and KNGeoid14 geoid models, and the accuracy was estimated by comparison with the official benchmark orthometric heights issued by the National Geographic Information Institute (NGII); finally, the results were analyzed against the Aerial Photogrammetry Work Regulations. The study found that the accuracy of orthometric height determination by GNSS/Geoid technology was ±7.1 cm when the KNGeoid14 geoid model was applied. It was also confirmed that the technique is usable for plotting scales of less than 1/1,000 as a vertical reference point for aerial triangulation in aerial photogrammetry.
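The GNSS/Geoid workflow reduces to one relation: the orthometric height H is the GNSS ellipsoidal height h minus the geoid undulation N from the model, H = h − N. A minimal sketch with made-up benchmark values (the real study used 39 NGII benchmarks):

```python
import math

def orthometric_height(h_ellipsoidal, geoid_undulation):
    """H = h - N: GNSS ellipsoidal height minus the geoid model undulation."""
    return h_ellipsoidal - geoid_undulation

# Hypothetical benchmarks: (GNSS height h, model undulation N, official H), metres.
benchmarks = [
    (452.310, 26.120, 426.13),
    (388.905, 26.040, 362.92),
    (512.744, 26.210, 486.48),
]

errors = [orthometric_height(h, n) - official for h, n, official in benchmarks]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"RMSE = {rmse * 100:.1f} cm")  # accuracy of the GNSS/Geoid-derived heights
```

The paper's ±7.1 cm figure is exactly this kind of statistic, computed against the official benchmark heights with the KNGeoid14 undulations.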

Development of A Haptic Interactive Virtual Exhibition Space (햅틱 상호작용을 제공하는 가상 전시공간 개발)

  • You, Yong-Hee;Cho, Yun-Hye;Choi, Geon-Suk;Sung, Mee-Young
    • Journal of KIISE: Computing Practices and Letters / v.13 no.6 / pp.412-416 / 2007
  • In this paper, we present a haptic virtual exhibition space that allows users to interact with 3D graphic objects not only through the sense of sight but also through the sense of touch. The haptic virtual exhibition space offers users in different places efficient ways to experience the exhibitions of a virtual musical museum using the basic human senses of perception: vision, audition, and touch. We apply different haptic properties to each 3D graphic object so that it feels realistic. We also provide haptic-device-based navigation, which spares users from switching between separate interfaces such as the keyboard and mouse. The haptic virtual museum is based on a client-server architecture, and clients are represented in the 3D space as avatars. In this paper, we mainly discuss the design of the haptic virtual exhibition space in detail and, in the end, provide a performance analysis in comparison with similar applications such as QTVR and VRML.

Conceptual Study on Coaxial Rotorcraft UAV for teaming operation with UGV (무인지상차량과의 합동운용을 위한 동축반전 회전익형 무인항공기 개념연구)

  • Byun, Young-Seop;Song, Jun-Beom;Song, Woo-Jin;Kim, Jeong;Kang, Beom-Soo
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.39 no.5 / pp.458-465 / 2011
  • A UAV-UGV teaming concept has been proposed that can compensate for the weak points of each platform by providing carrying, launching, recovery, and recharging capability for the VTOL UAV through the host UGV. The teaming concept can expand the observation envelope of the UGV and extend the operational capability of the UAV through the mechanical combination of the two systems. A spherical coaxial rotorcraft UAV is suggested to provide a flexible and precise interface between the two systems. A hybrid navigation solution that includes a vision-based target tracking method for precision landing is investigated, and an experimental study of it is performed. A feasibility study on a length-variable rotor to provide a compact configuration for the loaded rotorcraft platform is also described.

Place Modeling and Recognition using Distribution of Scale Invariant Features (스케일 불변 특징들의 분포를 이용한 장소의 모델링 및 인식)

  • Hu, Yi;Shin, Bum-Joo;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.13 no.4 / pp.51-58 / 2008
  • In this paper, we propose place modeling based on the distribution of scale-invariant features, and a place recognition method that recognizes places by comparing the place models in a database with the features extracted from input data. The proposed method is based on the assumption that every place can be represented by a unique feature distribution that is distinguishable from others. The proposed method uses the global information of each place, where one place is represented by one distribution model. Therefore, the main contribution of the proposed method is that the time cost grows linearly, rather than exponentially, with the number of places. For the performance evaluation of the proposed method, different numbers of frames and different numbers of features are used. Empirical results illustrate that our approach achieves better space and time cost than other approaches. We expect that the proposed method is applicable to many ubiquitous systems such as robot navigation, vision systems for blind people, wearable computing, and so on.

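The one-model-per-place idea can be sketched as follows: each place is summarized by a normalized histogram of quantized feature values, and a query is assigned to the nearest model, so lookup cost scales linearly with the number of places. The toy features and the squared-difference metric below are illustrative; the paper's exact descriptor and distance are not reproduced here.

```python
def histogram(features, bins=8):
    """Model a place as a normalized histogram of quantized feature values."""
    counts = [0] * bins
    for f in features:
        counts[int(f) % bins] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def distance(p, q):
    # Squared-difference distance between two distributions (illustrative metric).
    return sum((a - b) ** 2 for a, b in zip(p, q))

# One distribution model per place -> linear cost in the number of places.
place_models = {
    "corridor": histogram([0, 1, 1, 2, 1, 0, 1]),
    "lab":      histogram([5, 6, 5, 7, 6, 5, 6]),
    "lobby":    histogram([3, 4, 3, 3, 4, 2, 3]),
}

def recognize(features):
    query = histogram(features)
    return min(place_models, key=lambda name: distance(place_models[name], query))

print(recognize([5, 6, 6, 5, 7]))  # the closest stored model wins
```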

Research Trends and Case Study on Keypoint Recognition and Tracking for Augmented Reality in Mobile Devices (모바일 증강현실을 위한 특징점 인식, 추적 기술 및 사례 연구)

  • Choi, Heeseung;Ahn, Sang Chul;Kim, Ig-Jae
    • Journal of the HCI Society of Korea / v.10 no.2 / pp.45-55 / 2015
  • In recent years, keypoint recognition and tracking technologies have been considered a crucial task in many practical systems for markerless augmented reality. These technologies are widely studied in many research areas, including computer vision, robot navigation, and human-computer interaction. Moreover, due to the rapid growth of the mobile market for augmented reality applications, several effective keypoint-based matching and tracking methods have been introduced with mobile embedded systems in mind. In this paper, we therefore extensively analyze recent research trends in keypoint-based recognition and tracking through their core components: keypoint detection, description, matching, and tracking. We then present one of our own projects related to mobile augmented reality, a mobile tour guide system, which recognizes and tracks tour maps on mobile devices in real time.
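Of the components listed above, matching is the easiest to make concrete: nearest-neighbour descriptor matching with a ratio test, which accepts a match only when the best candidate is clearly better than the second best. The 8-bit descriptors below are toy values (real binary descriptors such as BRIEF/ORB are typically 256 bits), and the 0.8 ratio is an assumed threshold.

```python
def hamming(a, b):
    """Bit distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match(query_descs, train_descs, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: keep a match only
    if the best distance is well below the second-best distance."""
    matches = []
    for qi, q in enumerate(query_descs):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train_descs))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))
    return matches

# Toy descriptors: query[0] is near train[0], query[1] is near train[1].
train = [0b10110010, 0b01001101, 0b11110000]
query = [0b10110011, 0b00001111]
print(match(query, train))
```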

The Camera Calibration Parameters Estimation using The Projection Variations of Line Widths (선폭들의 투영변화율을 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Moon, Sung-Young;Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2003.07d / pp.2372-2374 / 2003
  • In 3D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly in two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as focal length, scale factor, pose, orientation, and distance; radial lens distortion, however, is not modeled. The advantage of this algorithm is that it can estimate the distance of the object, which makes the proposed calibration method usable for distance estimation in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data have been used to test the proposed method, and very good results have been obtained. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image. This advances camera calibration one more step from static environments toward real-world uses such as autonomous land vehicles.

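The reason projected line widths encode distance follows from the pinhole model: a line of physical width W at distance Z projects to an image width w = fW/Z, so a measured width can be inverted for Z. This sketch only illustrates that relation under an ideal pinhole camera (no lens distortion); the focal length and line width are assumed values, not the paper's.

```python
def projected_width(focal_px, real_width_m, distance_m):
    """Pinhole projection: image width (pixels) of a line of known physical width."""
    return focal_px * real_width_m / distance_m

def estimate_distance(focal_px, real_width_m, measured_px):
    """Invert the projection to recover distance from a measured line width."""
    return focal_px * real_width_m / measured_px

focal = 800.0       # focal length in pixels (assumed known)
line_width = 0.05   # a 5 cm line on the calibration frame (assumed)

for true_dist in (1.0, 2.0, 3.0, 4.0):  # the paper's 1-4 m test distances
    w = projected_width(focal, line_width, true_dist)
    back = estimate_distance(focal, line_width, w)
    print(f"{true_dist:.0f} m -> {w:.1f} px -> recovered {back:.2f} m")
```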

A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images : Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection (소나 영상 기반의 수중 물체 인식과 추종을 위한 구조 : Part 2. 확률적 후보 선택을 통한 실시간 프레임워크의 설계 및 구현)

  • Lee, Yeongjun;Kim, Tae Gyun;Lee, Jihong;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.3 / pp.164-173 / 2014
  • In underwater robotics, vision would be a key element for recognition in underwater environments. However, due to turbidity, an underwater optical camera is rarely usable; an underwater imaging sonar, as an alternative, delivers low-quality sonar images that are not stable and accurate enough for natural objects to be found by image processing. For this reason, artificial landmarks based on the characteristics of ultrasonic waves, and a method for recognizing them by a shape matrix transformation, were proposed and proven in Part 1. However, that approach does not work properly on an undulating and dynamically noisy sea bottom. To solve this, we propose a framework consisting of a likelihood-candidate selection phase, a final-candidate selection phase, a recognition phase, and a tracking phase over sequences of images, together with a particle-filter-based selection mechanism to eliminate false candidates and a mean-shift-based tracking algorithm. All four steps run in parallel and in real time. The proposed framework is flexible, allowing internal algorithms to be added or modified. A pool test and a sea trial were carried out to prove the performance, and a detailed analysis of the experimental results is given. Information obtained from the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
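The mean-shift tracker named above can be sketched in one dimension: the window centre repeatedly moves to the score-weighted centroid of its neighbourhood until it stops, which pulls it onto the nearest peak in the match-score profile. The scores and window radius below are toy values, not the paper's sonar data.

```python
def mean_shift(score, start, radius=2, iters=20):
    """Shift a window centre to the score-weighted centroid of its
    neighbourhood until it converges (1-D sketch of mean-shift tracking)."""
    x = start
    for _ in range(iters):
        lo, hi = max(0, x - radius), min(len(score) - 1, x + radius)
        weights = score[lo:hi + 1]
        total = sum(weights)
        if total == 0:
            break  # no evidence inside the window; stay put
        centroid = sum(i * w for i, w in zip(range(lo, hi + 1), weights)) / total
        new_x = round(centroid)
        if new_x == x:
            break  # converged
        x = new_x
    return x

# Toy per-pixel match scores along one sonar scan line; the target is the peak.
scores = [0, 0, 1, 2, 5, 9, 5, 2, 1, 0, 0]
print(mean_shift(scores, start=2))  # climbs to the peak at index 5
```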

Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates (다중 카메라와 절대 공간 좌표를 활용한 이동 로봇의 강인한 실내 위치 인식 시스템 연구)

  • Mo, Se Hyun;Jeon, Young Pil;Park, Jong Ho;Chong, Kil To
    • Transactions of the Korean Society of Mechanical Engineers A / v.41 no.7 / pp.655-663 / 2017
  • With the development of ICT technology, the indoor use of robots is increasing, and research on transportation, cleaning, and guidance robots that can be used now, or that will widen the scope of future use, is advancing. To facilitate the use of mobile robots in indoor spaces, self-location recognition is an important problem to be addressed. If an unexpected collision occurs during the motion of a mobile robot, its position deviates from the initially planned navigation path; in this case the mobile robot needs a robust controller that enables it to navigate accurately toward the goal. This research addresses these self-location issues. A robust position recognition system was implemented that estimates the position of the mobile robot by combining the robot's encoder information with absolute space coordinate transformation information obtained from external video sources, such as the many CCTVs installed in the room. Furthermore, the vector field histogram method was applied as the robot's path-planning algorithm, and the results of the research were confirmed through experiments.
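The encoder-plus-CCTV combination can be sketched as a simple complementary blend: dead-reckon from the encoders, then pull the estimate toward the absolute fix from the ceiling cameras to cancel drift. The blend gain and the numbers below are illustrative; the paper's actual estimator is not specified in this abstract.

```python
def fuse(odometry, cctv_fix, gain=0.3):
    """Blend a dead-reckoned position with an absolute CCTV fix.
    gain=0 trusts the encoders only; gain=1 snaps to the CCTV coordinate."""
    return tuple(o + gain * (c - o) for o, c in zip(odometry, cctv_fix))

estimate = (0.0, 0.0)
# (encoder step dx, dy) paired with the absolute position seen by the cameras;
# the odometry here drifts behind the CCTV fixes to show the correction.
steps = [((1.0, 0.0), (1.1, 0.05)),
         ((1.0, 0.0), (2.2, 0.10)),
         ((1.0, 0.0), (3.3, 0.15))]

for (dx, dy), fix in steps:
    predicted = (estimate[0] + dx, estimate[1] + dy)  # encoder dead reckoning
    estimate = fuse(predicted, fix)                   # absolute correction
print(estimate)  # pulled toward the CCTV trajectory despite the drift
```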

Design and Implementation of a Hardware Accelerator for Marine Object Detection based on a Binary Segmentation Algorithm for Ship Safety Navigation (선박안전 운항을 위한 이진 분할 알고리즘 기반 해상 객체 검출 하드웨어 가속기 설계 및 구현)

  • Lee, Hyo-Chan;Song, Hyun-hak;Lee, Sung-ju;Jeon, Ho-seok;Kim, Hyo-Sung;Im, Tae-ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.10 / pp.1331-1340 / 2020
  • Object detection at sea means automatically detecting, with a computer and as accurately as human eyes, floating objects that risk colliding with the ship. In conventional ships, the presence and distance of objects are determined through radar waves, which cannot identify an object's shape and type. In contrast, with the development of AI, cameras help accurately identify obstacles on the sea route, with excellent performance in detecting and recognizing objects. The computer must process a high volume of pixels to analyze digital images, but the CPU is specialized for sequential processing: the processing speed is very slow, and smooth service support or security cannot be guaranteed. Accordingly, this study developed maritime object detection software and implemented it on an FPGA to accelerate the large-scale computations. Additionally, the system was improved through an embedded-board and FPGA interface, achieving 30 times the performance of the existing algorithm and a three-times speedup for the entire system.
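Binary segmentation itself is a per-pixel threshold followed by grouping foreground pixels into objects, which is what makes it a good fit for the per-pixel parallelism of an FPGA. A software sketch of that pipeline, with an assumed threshold and a toy image in place of real camera frames:

```python
def segment(image, threshold):
    """Binarize: 1 where a pixel is brighter than the sea background."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def count_objects(mask):
    """Count 4-connected foreground blobs with an iterative flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    objects = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                objects += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return objects

# Toy 6x6 grey image: two bright floating objects on a dark sea.
sea = [[10, 10, 10, 10, 10, 10],
       [10, 200, 210, 10, 10, 10],
       [10, 205, 200, 10, 10, 10],
       [10, 10, 10, 10, 180, 10],
       [10, 10, 10, 10, 190, 10],
       [10, 10, 10, 10, 10, 10]]
print(count_objects(segment(sea, threshold=128)))  # two detected objects
```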