• Title/Summary/Keyword: Vision-based positioning


Recognition Direction Improvement of Target Object for Machine Vision based Automatic Inspection (머신비전 자동검사를 위한 대상객체의 인식방향성 개선)

  • Hong, Seung-Beom;Hong, Seung-Woo;Lee, Kyou-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.11 / pp.1384-1390 / 2019
  • This paper proposes a technological solution for improving the recognition direction of target objects in automatic machine vision inspection. The solution enables the inspection system to detect the image of the inspection object regardless of the object's position and orientation, eliminating the need for a separate inspection jig and raising the automation level of the inspection process. The study develops a technology and method applicable to the wire harness manufacturing process as the inspection object and presents results from a real system. The implemented system was evaluated by an accredited institution: it passed the measurements of accuracy, detection recognition, reproducibility, and positioning success rate, and achieved the goals of discriminating ten kinds of colors, completing inspection within one second, and providing four automatic mode settings.

Unmanned Forklift Docking Using Two Cameras (상하 카메라를 이용한 무인 지게차의 도킹)

  • Yi, Sang-Jin;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.21 no.10 / pp.930-935 / 2015
  • An unmanned forklift requires precise positioning and pallet detection, so conventional unmanned forklifts use high-cost sensors to find the exact position of the pallet. In this study, a docking algorithm using two cameras is proposed. The proposed method uses vision data to extract the angle difference between the pallet and the forklift, and a control law for successful docking is derived from the extracted angle. The extracted angle is compared with the actual angle in a real environment, and the control law is verified with a Lyapunov stability test and the Routh-Hurwitz stability criterion. Through various experiments, the proposed docking algorithm showed a success rate high enough for real-life applications.
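
The abstract does not state the control law, but steering from a camera-extracted angle error can be sketched with a simple proportional law whose Lyapunov argument mirrors the stability tests mentioned above; the gain and angle values here are illustrative assumptions, not from the paper:

```python
import math

def steering_command(theta_err: float, k: float = 1.5) -> float:
    """Proportional steering from the pallet-forklift angle difference.

    With Lyapunov candidate V = 0.5 * theta_err**2, the law w = -k * theta_err
    gives dV/dt = -k * theta_err**2 <= 0, so the angle error decays to zero.
    """
    return -k * theta_err

# Simulated docking loop: the heading error shrinks toward zero.
theta = math.radians(20.0)   # initial pallet-forklift misalignment
dt = 0.05                    # control period (s)
for _ in range(200):
    theta += steering_command(theta) * dt
print(round(math.degrees(theta), 3))
```

The discrete update contracts the error by a factor (1 - k*dt) per step, which is why the simulated misalignment vanishes.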

Road Recognition based Extended Kalman Filter with Multi-Camera and LRF (다중카메라와 레이저스캐너를 이용한 확장칼만필터 기반의 노면인식방법)

  • Byun, Jae-Min;Cho, Yong-Suk;Kim, Sung-Hoon
    • The Journal of Korea Robotics Society / v.6 no.2 / pp.182-188 / 2011
  • This paper describes a method of road tracking using vision and laser sensors that extracts road boundaries (road lane and curb) for navigation of an intelligent transport robot in structured road environments. Road boundary information plays a major role in developing such intelligent robots. Global navigation uses a global positioning system driven by a global planner; local navigation is accomplished by recognizing the lane and curb that bound the road and estimating their location relative to the current robot pose with an EKF (Extended Kalman Filter) algorithm, assuming prior information about the road. The complete system has been tested on electric vehicles equipped with cameras, lasers, and GPS. Experimental results demonstrate the effectiveness of the combined laser and vision approach for detecting the curb and lane boundaries of the road.
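
The boundary-estimation step can be illustrated with a minimal scalar Kalman filter; for a linear one-dimensional state the EKF reduces to this form. The state (lateral curb offset), input, and noise values below are illustrative assumptions, not the paper's model:

```python
def ekf_step(x, P, z, u=0.0, Q=0.01, R=0.25):
    """One predict/update cycle for a scalar state (e.g., lateral curb offset).

    Predict: the state persists apart from odometry input u (F = 1).
    Update: the camera/laser measures the state directly (H = 1).
    """
    # Predict
    x_pred = x + u
    P_pred = P + Q
    # Update with measurement z
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                         # poor initial guess, high uncertainty
for z in [1.1, 0.9, 1.05, 0.95, 1.0]:   # noisy curb-offset measurements (m)
    x, P = ekf_step(x, P, z)
print(round(x, 2), round(P, 3))
```

The estimate converges toward the measured offset while the covariance P shrinks, which is the behavior the paper relies on for stable curb tracking.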

Machine Learning Based BLE Indoor Positioning Performance Improvement (머신러닝 기반 BLE 실내측위 성능 개선)

  • Moon, Joon;Pak, Sang-Hyon;Hwang, Jae-Jeong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.467-468 / 2021
  • In order to improve the performance of an indoor positioning system using BLE beacons, a receiver that measures the angle of arrival, one of the direction-finding technologies supported by BLE 5.1, was built, and its data were analyzed by machine learning to estimate the optimal position. For model creation and testing, k-nearest-neighbor classification and regression, logistic regression, support vector machines, decision trees, artificial neural networks, and deep neural networks were trained and tested. As a result, with test set 4 produced in the study, the accuracy reached up to 99%.
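
As a rough sketch of the k-nearest-neighbor regression named above, a fingerprint of angle-of-arrival features can be mapped to a position by averaging the positions of the closest training samples; the beacon layout and numbers here are hypothetical:

```python
import math

def knn_position(train, query, k=3):
    """Estimate a 2-D position from AoA features via k-NN regression.

    train: list of (feature_vector, (x, y)) fingerprint samples.
    query: feature vector measured at the unknown position.
    Returns the mean position of the k closest fingerprints.
    """
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    xs = [p[0] for _, p in nearest]
    ys = [p[1] for _, p in nearest]
    return sum(xs) / k, sum(ys) / k

# Hypothetical fingerprints: (AoA from beacon A, AoA from beacon B) -> (x, y) in m
fingerprints = [
    ((10.0, 80.0), (0.0, 0.0)),
    ((30.0, 60.0), (1.0, 0.0)),
    ((50.0, 40.0), (2.0, 0.0)),
    ((70.0, 20.0), (3.0, 0.0)),
]
print(knn_position(fingerprints, (32.0, 58.0), k=2))
```

The classification variants in the paper work the same way, except that they return the majority fingerprint label instead of an averaged coordinate.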


The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.555-563 / 2009
  • When a mobile mapping system or a robot is equipped with only a GPS (Global Positioning System) and a multiple stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data produced by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, or to remove the accumulated error of those camera poses. To satisfy these requirements, this paper proposes a novel method that calculates the camera rotation in the GPS coordinate system using three pairs of camera positions obtained by GPS and V-SLAM, respectively. The proposed method is composed of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by its three camera positions, parallel; 2) transfer the three V-SLAM camera positions with the calculated quaternion; 3) calculate an additional quaternion that maps the second or third transferred position to the corresponding GPS camera position; and 4) determine the final quaternion by multiplying the two quaternions. The final quaternion directly transfers from the local camera coordinate system to the GPS coordinate system. Additionally, an update of the 3D data of captured objects based on the view angles from the objects to the cameras is proposed. The proposed method is demonstrated through a simulation and an experiment.
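
Steps 1 and 4 rest on two standard quaternion operations: building a unit quaternion that rotates one vector onto another, and composing rotations by quaternion multiplication. A minimal sketch, with illustrative vectors rather than the paper's data:

```python
import math

def quat_align(a, b):
    """Unit quaternion rotating vector a onto vector b (the half-angle
    construction; the antiparallel case is not handled)."""
    cross = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    dot = sum(x*y for x, y in zip(a, b))
    na = math.sqrt(sum(x*x for x in a))
    nb = math.sqrt(sum(x*x for x in b))
    w = na*nb + dot
    n = math.sqrt(w*w + sum(c*c for c in cross))
    return (w/n, cross[0]/n, cross[1]/n, cross[2]/n)

def quat_mul(q, r):
    """Hamilton product: composes two rotations (step 4 of the method)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate vector v by unit quaternion q (q * v * q^-1)."""
    p = quat_mul(quat_mul(q, (0.0,) + tuple(v)), (q[0], -q[1], -q[2], -q[3]))
    return p[1:]

# Align a V-SLAM plane normal (local z) with a GPS plane normal (here: x).
q1 = quat_align((0, 0, 1), (1, 0, 0))
v = rotate(q1, (0, 0, 1))
print(tuple(round(c, 6) for c in v))
```

In the paper's pipeline, `q1` corresponds to the plane-alignment quaternion of step 1, a second `quat_align`-style quaternion about the shared normal handles step 3, and `quat_mul` of the two yields the final local-to-GPS rotation.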

AprilTag and Stereo Visual Inertial Odometry (A-SVIO) based Mobile Assets Localization at Indoor Construction Sites

  • Khalid, Rabia;Khan, Muhammad;Anjum, Sharjeel;Park, Junsung;Lee, Doyeop;Park, Chansik
    • International conference on construction engineering and project management / 2022.06a / pp.344-352 / 2022
  • Accurate indoor localization of construction workers and mobile assets is essential in safety management. Existing positioning methods based on GPS, wireless, vision, or sensor-based RTLS are erroneous or expensive in large-scale indoor environments. Tightly coupled sensor fusion mitigates these limitations. This paper proposes a state-of-the-art positioning methodology that addresses the existing limitations by integrating Stereo Visual Inertial Odometry (SVIO) with fiducial landmarks called AprilTags. SVIO determines the relative position of the moving assets or workers from the initial starting point; this relative position is transformed into an absolute position when an AprilTag placed at one of the entry points is decoded. The proposed solution is tested in the NVIDIA ISAAC SIM virtual environment, where the trajectory of an indoor moving forklift is estimated. The results show accurate localization of the moving asset within any indoor or underground environment. The system can be utilized in various use cases to increase productivity and improve safety at construction sites, contributing towards 1) indoor monitoring of man-machinery co-activity for collision avoidance and 2) precise real-time knowledge of who is doing what and where.
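
The relative-to-absolute transformation triggered by an AprilTag decode amounts to composing the tag's known site-frame pose with the SVIO-reported pose; restricted to 2D it can be sketched as follows (the poses and frame conventions are hypothetical, not from the paper):

```python
import math

def to_absolute(tag_pose, rel_pose):
    """Convert an SVIO pose (relative to the run's origin) into site coordinates.

    tag_pose: (x, y, heading) of the decoded AprilTag in the site frame,
              standing in here for the pose of the trajectory origin.
    rel_pose: (x, y, heading) from SVIO, relative to that origin.
    """
    tx, ty, th = tag_pose
    rx, ry, rh = rel_pose
    c, s = math.cos(th), math.sin(th)
    # Rotate the relative offset into the site frame, then translate.
    return (tx + c*rx - s*ry, ty + s*rx + c*ry, th + rh)

# Hypothetical: tag at entry point (10 m, 5 m), facing +90 degrees;
# the forklift has moved 2 m straight ahead since passing the tag.
abs_pose = to_absolute((10.0, 5.0, math.pi / 2), (2.0, 0.0, 0.0))
print(tuple(round(v, 3) for v in abs_pose))
```

The full system works in 3D (SE(3) instead of SE(2)), but the composition is the same idea: one rigid transform anchors the drifting odometry frame to the site map.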


Vehicular Cooperative Navigation Based on H-SPAWN Using GNSS, Vision, and Radar Sensors (GNSS, 비전 및 레이더를 이용한 H-SPAWN 알고리즘 기반 자동차 협력 항법시스템)

  • Ko, Hyunwoo;Kong, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.11 / pp.2252-2260 / 2015
  • In this paper, we propose a vehicular cooperative navigation system using the GNSS, vision sensors, and radar sensors that are frequently used in mass-produced cars. The proposed system is a variant of the Hybrid Sum-Product Algorithm over Wireless Network (H-SPAWN), where we use vision and radar sensors instead of radio ranging (i.e., UWB). The performance is compared and analyzed with respect to the sensors; in particular, the position estimation error decreased by about fifty percent when using radar compared to vision and radio ranging. In conclusion, the proposed system with these popular sensors can improve position accuracy compared to the conventional cooperative navigation system (i.e., H-SPAWN) and decrease implementation costs.

A Vision-based Position Estimation Method Using a Horizon (지평선을 이용한 영상기반 위치 추정 방법 및 위치 추정 오차)

  • Shin, Jong-Jin;Nam, Hwa-Jin;Kim, Byung-Ju
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.2 / pp.169-176 / 2012
  • GPS (Global Positioning System) is widely used for the position estimation of an aerial vehicle. However, GPS may not be available due to hostile jamming or strategic reasons. A vision-based position estimation method can be effective if GPS does not work properly. In mountainous areas without any man-made landmark, the horizon is a good feature for estimating the position of an aerial vehicle. In this paper, we present a new method to estimate the position of an aerial vehicle equipped with a forward-looking infrared camera. It is assumed that an INS (Inertial Navigation System) provides the attitudes of the aerial vehicle and the camera. The horizon extracted from an infrared image is compared with horizon models generated from a DEM (Digital Elevation Map). Because of the narrow field of view of the camera, two images with different camera views are utilized to estimate a position. The algorithm is tested using real infrared images acquired on the ground. The experimental results show that the method can be used for estimating the position of an aerial vehicle.
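
The matching step — comparing the extracted horizon with DEM-generated horizon models — can be sketched as a nearest-profile search over candidate positions; the profiles and coordinates below are hypothetical placeholders, not the paper's data or scoring function:

```python
def best_position(extracted, candidates):
    """Pick the candidate position whose DEM-rendered horizon profile best
    matches the horizon extracted from the infrared image.

    extracted:  horizon elevation per image column (e.g., pixel rows).
    candidates: dict mapping a candidate position to its rendered profile.
    Uses sum-of-squared-differences as the match score.
    """
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda pos: ssd(candidates[pos], extracted))

# Hypothetical horizon profiles (pixel row of the skyline per image column).
extracted = [120, 118, 115, 117, 121]
models = {
    (37.50, 127.00): [140, 138, 133, 130, 128],
    (37.51, 127.00): [121, 119, 114, 116, 122],
    (37.50, 127.01): [100, 102, 108, 111, 115],
}
print(best_position(extracted, models))
```

In the paper, two camera views are scored jointly to compensate for the narrow field of view, but the principle per view is this profile comparison.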

Pallet Measurement Method for Automatic Pallet Engaging in Real-Time (자동 화물처리를 위한 실시간 팔레트 측정 방법)

  • Byun, Sung-Min;Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.14 no.2 / pp.171-181 / 2011
  • A vision-based method for positioning and orienting pallets is presented in this paper, which guides autonomous forklifts to engage pallets automatically. The method uses a single camera mounted on the fork carriage instead of the two cameras of the stereo vision conventionally used for positioning objects in 3D space. An image back-projection technique for determining the orientation of a pallet without any fiducial marks is suggested, which projects two feature lines on the front plane of the pallet backward onto a virtual plane that can be rotated around a given axis in 3D space. We show that the rotation angle of the virtual plane on which the back-projected feature lines become parallel describes the orientation of the pallet front plane. The position of the pallet is then determined using the ratio between the distance separating the back-projected feature lines and their real distance on the pallet front plane. Through a test on real pallet images, we found that the proposed method is practically applicable in real environments in real time.
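
The distance recovery from the ratio of line spacings follows directly from the pinhole camera model; a one-function sketch with hypothetical focal length and spacings (the paper's full method also resolves orientation first):

```python
def pallet_distance(pixel_spacing, real_spacing, focal_px):
    """Distance to the (fronto-parallel) pallet front plane.

    Pinhole model: pixel_spacing = focal_px * real_spacing / Z,
    so Z = focal_px * real_spacing / pixel_spacing.
    """
    return focal_px * real_spacing / pixel_spacing

# Hypothetical numbers: feature lines 0.30 m apart appear 150 px apart
# through a lens with a focal length of 1000 px.
print(pallet_distance(150.0, 0.30, 1000.0))  # distance in metres
```

This is why a single camera suffices once the orientation is known: the known real spacing of the two feature lines replaces the second view of a stereo pair.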

Computer vision-based remote displacement monitoring system for in-situ bridge bearings robust to large displacement induced by temperature change

  • Kim, Byunghyun;Lee, Junhwa;Sim, Sung-Han;Cho, Soojin;Park, Byung Ho
    • Smart Structures and Systems / v.30 no.5 / pp.521-535 / 2022
  • Efficient management of deteriorating civil infrastructure is one of the most important research topics in many developed countries. In particular, the remote displacement measurement of bridges using linear variable differential transformers, global positioning systems, laser Doppler vibrometers, and computer vision technologies has been attempted extensively. This paper proposes a remote displacement measurement system using closed-circuit televisions (CCTVs) and a computer-vision-based method for in-situ bridge bearings having relatively large displacement due to temperature change in long term. The hardware of the system is composed of a reference target for displacement measurement, a CCTV to capture target images, a gateway to transmit images via a mobile network, and a central server to store and process transmitted images. The usage of CCTV capable of night vision capture and wireless data communication enable long-term 24-hour monitoring on wide range of bridge area. The computer vision algorithm to estimate displacement from the images involves image preprocessing for enhancing the circular features of the target, circular Hough transformation for detecting circles on the target in the whole field-of-view (FOV), and homography transformation for converting the movement of the target in the images into an actual expansion displacement. The simple target design and robust circle detection algorithm help to measure displacement using target images where the targets are far apart from each other. The proposed system is installed at the Tancheon Overpass located in Seoul, and field experiments are performed to evaluate the accuracy of circle detection and displacement measurements. The circle detection accuracy is evaluated using 28,542 images captured from 71 CCTVs installed at the testbed, and only 48 images (0.168%) fail to detect the circles on the target because of subpar imaging conditions. 
The accuracy of displacement measurement is evaluated using images captured for 17 days from three CCTVs; the average and root-mean-square errors are 0.10 and 0.131 mm, respectively, compared with a similar displacement measurement. The long-term operation of the system, as evaluated using 8-month data, shows high accuracy and stability of the proposed system.
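
The final homography step — converting target movement in the image into physical expansion displacement — can be sketched as follows; a real homography would be estimated from the known geometry of the circular target, whereas the matrix here is a hypothetical pure scale of 0.5 mm per pixel:

```python
def apply_homography(H, pt):
    """Map an image point into bridge-surface coordinates using a 3x3
    homography H (projective transform with perspective division)."""
    x, y = pt
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return ((H[0][0]*x + H[0][1]*y + H[0][2]) / w,
            (H[1][0]*x + H[1][1]*y + H[1][2]) / w)

# Hypothetical H: pure scale, 0.5 mm per pixel, no perspective.
H = [[0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0],
     [0.0, 0.0, 1.0]]

# Detected circle centre before and after bearing movement (pixels).
p_before = apply_homography(H, (100.0, 200.0))
p_after = apply_homography(H, (104.0, 200.0))
dx = p_after[0] - p_before[0]
print(dx)  # expansion displacement in mm
```

Mapping through the homography before differencing is what makes the measurement robust to the oblique CCTV viewpoint: pixel motion is converted to metric motion on the bearing plane itself.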