• Title/Summary/Keyword: Vision-based positioning

71 search results

Bimodal Approach of Multi-Sensor Integration for Telematics Application (텔레매틱스 응용을 위한 다중센서통합의 이중 접근구조)

  • 김성백;이승용;최지훈;장병태;이종훈
    • Proceedings of the IEEK Conference / 2003.11a / pp.525-528 / 2003
  • In this paper, we present a novel approach to integrating a low-cost Inertial Measurement Unit (IMU) and a Differential Global Positioning System (DGPS) for telematics applications. As is well known, a low-cost IMU accumulates large positioning and attitude errors in a very short time because of the poor quality of its inertial sensor assembly. To overcome this limitation, we present a bimodal architecture for integrating the IMU and DGPS that takes advantage of position and orientation data computed from CCD images using photogrammetry and stereo-vision techniques. The position and orientation data from the photogrammetric approach are fed back into the Kalman filter to compensate for IMU errors and improve performance. Experimental results show the robustness of the proposed method, which can provide accurate position and attitude information over extended periods without GPS aiding.

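The feedback loop described in this abstract (vision-derived position fixes correcting IMU dead reckoning through a Kalman filter) can be sketched in one dimension. The scalar state, noise variances, and bias value below are illustrative assumptions, not the paper's actual filter design:

```python
# Minimal 1-D sketch: a biased IMU prediction corrected by unbiased
# vision-based position fixes through a scalar Kalman filter.
import random

def kalman_step(x, P, u, z, q=0.5, r=0.25):
    """One predict/update cycle.
    x, P : position estimate and its variance
    u    : IMU-predicted displacement this step (drifts over time)
    z    : vision-based position fix (photogrammetric measurement)
    q, r : assumed process / measurement noise variances
    """
    # Predict with the (biased) inertial measurement.
    x_pred = x + u
    P_pred = P + q
    # Correct with the vision fix.
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # innovation feedback
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

random.seed(0)
truth, x, P = 0.0, 0.0, 1.0
for _ in range(50):
    truth += 1.0                                  # vehicle moves 1 m/step
    imu_step = 1.0 + 0.2 + random.gauss(0, 0.1)   # 0.2 m/step IMU bias
    vision_fix = truth + random.gauss(0, 0.5)     # noisy but unbiased
    x, P = kalman_step(x, P, imu_step, vision_fix)
print(abs(x - truth))  # residual error; pure dead reckoning would drift ~10 m
```

Without the vision feedback, the 0.2 m/step bias alone would accumulate to roughly 10 m over the 50 steps; the correction keeps the error bounded.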

Overview of sensor fusion techniques for vehicle positioning (차량정밀측위를 위한 복합측위 기술 동향)

  • Park, Jin-Won;Choi, Kae-Won
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.11 no.2 / pp.139-144 / 2016
  • This paper provides an overview of recent trends in sensor fusion technologies for vehicle positioning. GNSS by itself cannot satisfy the precision and reliability required by autonomous driving. We survey sensor fusion techniques that combine GNSS output with inertial navigation sensors such as an odometer and a gyroscope. We also overview landmark-based positioning, which matches landmarks detected by a lidar or stereo vision against high-precision digital maps.

Tele-operating System of Field Robot for Cultivation Management - Vision based Tele-operating System of Robotic Smart Farming for Fruit Harvesting and Cultivation Management

  • Ryuh, Youngsun;Noh, Kwang Mo;Park, Joon Gul
    • Journal of Biosystems Engineering / v.39 no.2 / pp.134-141 / 2014
  • Purposes: This study aimed to validate the Robotic Smart Work System, which can provide better working conditions and high productivity in unstructured environments such as the bio-industry, based on a tele-operation system for fruit harvesting with a low-cost 3-D positioning system at the laboratory level. Methods: For a Robotic Smart Work System for fruit harvesting and cultivation management in agriculture, a vision-based tele-operating system and 3-D position information are key elements. This study proposed Robotic Smart Farming, an agricultural version of the Robotic Smart Work System, and validated a 3-D position information system with a low-cost omnidirectional camera and a laser marker system in the lab environment. Results: Tasks such as harvesting a fixed target and cultivation management were accomplished despite a short time delay (30 ms ~ 100 ms). Although automatic conveyor work requiring accurate timing and positioning yields high productivity, tele-operation guided by the user's intuition is more efficient in unstructured environments that require target selection and judgment. Conclusions: The system increased work efficiency and stability by incorporating ancillary intelligence as well as the user's experience and know-how. In addition, senior and female workers can operate the system easily because it reduces labor and minimizes user fatigue.

Development of a Test Environment for Performance Evaluation of the Vision-aided Navigation System for VTOL UAVs (수직 이착륙 무인 항공기용 영상보정항법 시스템 성능평가를 위한 검증환경 개발)

  • Sebeen Park;Hyuncheol Shin;Chul Joo Chung
    • Journal of Advanced Navigation Technology / v.27 no.6 / pp.788-797 / 2023
  • In this paper, we introduce a test environment for evaluating a vision-aided navigation system, an alternative navigation method for vertical take-off and landing (VTOL) unmanned aerial vehicles when the global positioning system (GPS) is unavailable. It is efficient to use a virtual environment to test and evaluate a vision-aided navigation system under development, but no suitable equipment has yet been developed in Korea. The proposed test environment therefore evaluates the performance of the navigation system by generating input signals that model and simulate the system's operating environment and by monitoring output signals. This paper describes the research procedure comprehensively, from the derivation of requirements specifications through hardware/software design according to those requirements to the production of the test environment. The test environment was used to evaluate the vision-aided navigation algorithm we are developing and to conduct simulation-based pre-flight tests.

Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems / v.20 no.11 / pp.1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained by a GPU (Graphics Processing Unit)-based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as a map. The experimental results show that the position of a small unmanned aircraft is estimated properly and that a map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
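
The map-building step can be illustrated with a minimal Gaussian process regression over estimated 2-D positions; the squared-exponential kernel, its hyperparameters, and the sample data below are assumptions, since the abstract does not specify them:

```python
# Minimal GP regression sketch: predicting a scalar "color intensity"
# at an unvisited 2-D location from a few observed positions.
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between two sets of row-vector points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean of a zero-mean GP at the test points."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# Feature-point positions (as estimated from SIFT + AHRS) with observations.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 2.0])
mu = gp_predict(X, y, np.array([[0.5, 0.5]]))
print(mu)  # posterior mean at the map's center, between the corner values
```

The posterior mean gives a smooth map value at any query point, which is what lets a GP model stand in for an occupancy or appearance map.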

A Study on Visual Feedback Control of a Dual Arm Robot with Eight Joints

  • Lee, Woo-Song;Kim, Hong-Rae;Kim, Young-Tae;Jung, Dong-Yean;Han, Sung-Hyun
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2005.06a / pp.610-615 / 2005
  • Visual servoing is the fusion of results from many elemental areas, including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different from the often-described use of vision in hierarchical task-level robot control systems. In this paper, we present a new approach to visual feedback control using image-based visual servoing with stereo vision. To control the position and orientation of a robot with respect to an object, a new technique using binocular stereo vision is proposed. Stereo vision enables us to calculate an exact image Jacobian not only around the desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without prior knowledge, such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. This paper describes a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method on a dual-arm robot made by Samsung Electronics Co., Ltd.

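The role of stereo in computing an exact image Jacobian can be sketched with the standard point-feature interaction matrix, where stereo supplies the depth Z of each feature. The feature coordinates, depths, and gain below are made up for illustration and are not the paper's setup:

```python
# Image-based visual servoing sketch: stack per-feature interaction
# matrices and command a camera velocity that shrinks the image error.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 image Jacobian of a point feature at normalized coordinates
    (x, y) with depth Z (here assumed recovered from stereo disparity)."""
    return np.array([
        [-1/Z, 0, x/Z, x*y, -(1 + x*x), y],
        [0, -1/Z, y/Z, 1 + y*y, -x*y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw v = -gain * pinv(J) @ e for stacked features."""
    J = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(J) @ e

# Four features, all shifted +0.1 in x from their desired image locations:
feats   = [(0.1, 0.0), (1.1, 0.0), (0.1, 1.0), (1.1, 1.0)]
desired = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
v = ibvs_velocity(feats, desired, depths=[2.0] * 4)
print(v)  # 6-vector: (vx, vy, vz, wx, wy, wz)
```

Because the depths come from stereo rather than from an assumed constant, the Jacobian stays valid even far from the goal configuration, which is the point the abstract makes.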

Development of a LonRF Intelligent Device-based Ubiquitous Home Network Testbed (LonRF 지능형 디바이스 기반의 유비쿼터스 홈네트워크 테스트베드 개발)

  • 이병복;박애순;김대식;노광현
    • Journal of Institute of Control, Robotics and Systems / v.10 no.6 / pp.566-573 / 2004
  • This paper describes a ubiquitous home network (uHome-net) testbed and LonRF intelligent devices based on LonWorks technology. The devices consist of a Neuron Chip, an RF transceiver, sensors, and other peripheral components. Using LonRF devices, a home control network can be simplified, and most devices can be operated on a LonWorks control network. An Indoor Positioning System (IPS) that can serve various location-based services was also implemented in uHome-net. The Smart Badge of the IPS, a special LonRF device, can measure the 3D location of objects in an indoor environment. In the uHome-net testbed, remote control, cooking help, wireless remote metering, baby monitoring, and security and fire prevention services were realized. This research shows a vision of the ubiquitous home network that will emerge in the near future.

Extended Information Overlap Measure Algorithm for Neighbor Vehicle Localization

  • Punithan, Xavier;Seo, Seung-Woo
    • IEIE Transactions on Smart Processing and Computing / v.2 no.4 / pp.208-215 / 2013
  • Early iterations of existing Global Positioning System (GPS)-based or radio-lateration-based vehicle localization algorithms suffer from flip ambiguities, forged relative location information, and location-information exchange overhead, which affect the subsequent iterations and, in turn, produce an erroneous neighbor-vehicle map. This paper proposes an extended information overlap measure (EIOM) algorithm that reduces flip error rates by exchanging neighbor-vehicle presence features as binary information. The algorithm shifts and associates three pieces of information in the Moore-neighborhood format: 1) feature information about neighboring vehicles from a vision-based environment sensor system; 2) cardinal locations of the neighboring vehicles in the Moore neighborhood; and 3) identification information (MAC/IP addresses). Simulations of multi-lane highway scenarios compared the proposed algorithm with the existing algorithm; the results showed that flip error rates were reduced by up to 50%.

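A toy version of the shift-and-associate step might look like the following; the 3x3 binary encoding and the intersection score are assumptions for illustration rather than the paper's exact formulation:

```python
# Sketch: each vehicle encodes which of its Moore-neighborhood cells are
# occupied as a 3x3 binary grid (center = the vehicle itself). The ego
# vehicle shifts a neighbor's reported grid into its own frame and scores
# the overlap of occupied cells; the best offset is that neighbor's cell.
import numpy as np

def overlap_score(ego, neigh, dr, dc):
    """Count occupied cells that agree after shifting `neigh` by (dr, dc)."""
    score = 0
    for r in range(3):
        for c in range(3):
            rr, cc = r + dr, c + dc
            if 0 <= rr < 3 and 0 <= cc < 3:
                score += int(ego[rr, cc] and neigh[r, c])
    return score

def associate(ego, neigh):
    """Return the Moore-neighborhood offset that best aligns the two views."""
    offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return max(offsets, key=lambda o: overlap_score(ego, neigh, *o))

# Ego (center cell) sees vehicle B directly ahead and vehicle D to its right.
ego = np.array([[0, 1, 0],
                [0, 1, 1],
                [0, 0, 0]])
# B's own view: another car ahead of it, ego behind, D behind-right.
neigh = np.array([[0, 1, 0],
                  [0, 1, 0],
                  [0, 1, 1]])
print(associate(ego, neigh))  # B sits one row ahead of ego: (-1, 0)
```

Because the association uses overlapping presence patterns rather than reported coordinates, a flipped or forged relative position scores poorly and is rejected.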

Accurate Range-free Localization Based on Quantum Particle Swarm Optimization in Heterogeneous Wireless Sensor Networks

  • Wu, Wenlan;Wen, Xianbin;Xu, Haixia;Yuan, Liming;Meng, Qingxia
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1083-1097 / 2018
  • This paper presents a novel range-free localization algorithm based on quantum particle swarm optimization (QPSO). The proposed algorithm can estimate the distance between two non-neighboring sensors in multi-hop heterogeneous wireless sensor networks where nodes' communication ranges differ. First, we construct a new cumulative distribution function of expected hop progress for sensor nodes with different transmission capabilities. The distance between any two nodes can then be computed accurately and effectively by deriving the mathematical expectation of the cumulative distribution function. Finally, the QPSO algorithm is used to improve positioning accuracy. Simulation results show that the proposed algorithm is superior in localization accuracy and efficiency for both random and uniform node placement in heterogeneous wireless sensor networks.
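
The final refinement step can be sketched as a minimal quantum-behaved PSO that searches for the node position best explaining hop-estimated anchor distances; the anchor layout, distance estimates, and hyperparameters below are illustrative assumptions:

```python
# Quantum-behaved PSO sketch: particles collapse toward a local attractor
# drawn between personal and global bests, with a step scaled by the
# distance to the swarm's mean-best position.
import math, random

def qpso(cost, dim, n_particles=20, iters=200, beta=0.75, lo=-10, hi=10):
    rng = random.Random(1)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        # Mean of all personal-best positions ("mainstream thought point").
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, x in enumerate(xs):
            for d in range(dim):
                phi = rng.random()
                p = phi * pbest[i][d] + (1 - phi) * gbest[d]  # local attractor
                u = rng.random()
                step = beta * abs(mbest[d] - x[d]) * math.log(1 / u)
                x[d] = p + step if rng.random() < 0.5 else p - step
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
                if cost(x) < cost(gbest):
                    gbest = x[:]
    return gbest

# Anchors and hop-progress distance estimates to an unknown node near (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
est_d = [5.0, 8.06, 6.71]
def cost(x):
    return sum((math.hypot(x[0] - ax, x[1] - ay) - d) ** 2
               for (ax, ay), d in zip(anchors, est_d))
pos = qpso(cost, dim=2)
print(pos)
```

The swarm minimizes the squared mismatch between the hop-derived distance estimates and the geometric distances, which is the standard way such range-free estimates are turned into coordinates.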

Vision-based Food Shape Recognition and Its Positioning for Automated Production of Custom Cakes (주문형 케이크 제작 자동화를 위한 영상 기반 식품 모양 인식 및 측위)

  • Oh, Jang-Sub;Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.10 / pp.1280-1287 / 2020
  • This paper proposes a vision-based food-shape recognition method for the automated production of custom cakes. A small camera module mounted on a food-art printer recognizes object shapes and estimates their center points through image processing. A perspective transformation converts the original image, taken from an oblique position, into a top-view image. Line and circular Hough transforms are applied to recognize square and circular shapes, respectively. In addition, the center of gravity of each figure is detected accurately in units of pixels. Test results show that the shape recognition rate is more than 98.75% under 180 ~ 250 lux of light and the positioning error rate is less than 0.87% under 50 ~ 120 lux. These values sufficiently meet the needs of the corresponding market. The processing delay is also less than 0.5 seconds per frame, so the proposed algorithm is suitable for commercial use.
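
The two geometric steps, perspective rectification and moment-based centroid detection, can be sketched without a camera. The point correspondences and binary mask below are made up, and the Hough-based shape test is omitted:

```python
# Sketch: estimate the perspective (homography) transform from four point
# correspondences, then locate a shape's center of gravity from the
# zeroth and first image moments of its binary mask.
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 H (with h33 = 1) mapping src points to dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply H to a point in homogeneous coordinates."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w

def centroid(mask):
    """Center of gravity from image moments (pixel units)."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Oblique view of the printer bed -> rectified top view.
src = [(100, 200), (540, 210), (600, 470), (60, 460)]   # corners in the photo
dst = [(0, 0), (400, 0), (400, 300), (0, 300)]          # top-view corners
H = homography(src, dst)
print(warp_point(H, src[0]))   # ~ (0, 0)

# A 5x5 square blob centered at pixel (12, 7):
mask = np.zeros((20, 25), dtype=bool)
mask[5:10, 10:15] = True
print(centroid(mask))          # (12.0, 7.0)
```

Working in the rectified top view is what makes the pixel-space centroid a usable print-head target, since the oblique perspective no longer skews distances.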