• Title/Summary/Keyword: vision-based method

Vision-based Navigation using Semantically Segmented Aerial Images (의미론적 분할된 항공 사진을 활용한 영상 기반 항법)

  • Hong, Kyungwoo;Kim, Sungjoong;Park, Junwoo;Bang, Hyochoong;Heo, Junhoe;Kim, Jin-Won;Pak, Chang-Ho;Seo, Songwon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.10 / pp.783-789 / 2020
  • This paper proposes a new method for vision-based navigation using semantically segmented aerial images. Vision-based navigation can compensate for the vulnerabilities of a GPS/INS integrated navigation system. However, due to visual and temporal differences between the aerial image and the database image, existing image matching algorithms are difficult to apply to aerial navigation problems. For this reason, this paper proposes a matching method suited to flight, composed of navigational feature extraction through semantic segmentation followed by template matching. The proposed method shows excellent performance in both simulation and actual flight.
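
A minimal sketch of the template-matching step described in this abstract, assuming the segmented onboard view and the segmented reference map are available as single-channel label images; the function and variable names are illustrative, not from the paper.

```python
import cv2
import numpy as np

def locate_in_map(segmented_view: np.ndarray, segmented_map: np.ndarray):
    """Template-match a semantically segmented aerial view against a
    segmented reference map and return the best-matching pixel offset."""
    # Normalised cross-correlation tolerates small label-value differences.
    result = cv2.matchTemplate(segmented_map.astype(np.float32),
                               segmented_view.astype(np.float32),
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val  # (x, y) offset in map coordinates and match score
```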

Cable Tension Measurement of Long-span Bridges Using Vision-based System (영상처리기법을 이용한 장대교량 케이블의 장력 측정)

  • Kim, Sung-Wan;Cheung, Jin-Hwan;Kim, Seong-Do
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.22 no.2 / pp.115-123 / 2018
  • In a long-span bridge, the cables are important elements that support the load of the bridge. Accordingly, the cable tension is a very important variable in evaluating the health and safety of the bridge. The most popular methods of estimating cable tension are the direct method, which measures the cable stresses directly using load cells, hydraulic jacking devices, etc., and the vibration method, which estimates the tension inversely from the cable geometry and the measured dynamic characteristics. Studies on the use of the electromagnetic (EM) sensor, which detects magnetic field variations caused by changes in the stress of the steel in the cable, are also increasing. In this study, the lift-off test, the EM sensor, and the vibration method (vision-based system and accelerometer) were used to measure cable tension, and their results were compared and analyzed.
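
For context, the vibration method referred to above typically rests on the taut-string relation between a cable's natural frequencies and its tension. A minimal sketch under that assumption (uniform cable, negligible bending stiffness and sag), with illustrative numbers:

```python
def cable_tension(freq_hz: float, mode_n: int, length_m: float, mass_per_m: float) -> float:
    """Estimate cable tension (N) from the n-th measured natural frequency
    using the taut-string formula f_n = (n / 2L) * sqrt(T / m)."""
    return 4.0 * mass_per_m * length_m**2 * (freq_hz / mode_n)**2

# Example: first-mode frequency of 0.85 Hz, 200 m cable, 60 kg/m
print(cable_tension(0.85, 1, 200.0, 60.0))  # ≈ 6.94e6 N
```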

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an arbitrary force field around the obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on robot modeling and a curved movement path. Recent research has improved these approaches by optimizing the algorithms for actual robots, but comparatively little work has used omni-directional vision SLAM to acquire surrounding information at once. The robot with the proposed algorithm avoids obstacles according to the avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the obstacles' surroundings. The experimental results confirm the reliability of the avoidance algorithm through a comparison between the positions obtained by the proposed algorithm and the real positions collected while avoiding the obstacles.
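
A minimal sketch of the Lucas-Kanade optical flow step used for obstacle detection, assuming consecutive grayscale fisheye frames; the parameter values are illustrative, not the paper's.

```python
import cv2
import numpy as np

def track_obstacle_points(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Track sparse corner features between consecutive fisheye frames with
    pyramidal Lucas-Kanade optical flow and return matched point pairs."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None,
                                                   winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1  # keep only successfully tracked points
    return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)
```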

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.33-41 / 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do this, we model an indoor environment with a stereo camera using the following visual cues: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image through which the optical axis passes, which is similar to the data from a 2D laser range finder. We can therefore build a hybrid local node for a topological map composed of an indoor metric map and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global localization of a mobile robot. The coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and its refined pose is then estimated with a particle filtering algorithm. Real experiments show that the proposed method can be an effective vision-based global localization algorithm.
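
A minimal sketch of the SVD-based least-squares fitting used for the coarse pose, i.e., the standard Kabsch/Procrustes rigid alignment between matched 3D model points and observed stereo points; it assumes the correspondences are already given.

```python
import numpy as np

def fit_rigid_transform(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Least-squares rigid transform (R, t) mapping model_pts to observed_pts
    via SVD of the cross-covariance matrix (Kabsch method)."""
    mu_m, mu_o = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```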

Target Tracking Control of a Quadrotor UAV using Vision Sensor (비전 센서를 이용한 쿼드로터형 무인비행체의 목표 추적 제어)

  • Yoo, Min-Goo;Hong, Sung-Kyung
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.2 / pp.118-128 / 2012
  • The goal of this paper is to design a target tracking controller for a quadrotor micro UAV using a vision sensor. First, the mathematical model of the quadrotor was estimated through the Prediction Error Method (PEM) using experimental input/output flight data, and the estimated model was then validated by comparison with new experimental flight data. Next, the target tracking controller was designed using the LQR (Linear Quadratic Regulator) method based on the estimated model. The relative distance between the object and the quadrotor was obtained by a vision sensor, and the altitude was obtained by an ultrasonic sensor. Finally, the performance of the designed target tracking controller was evaluated through flight tests.
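
A minimal sketch of the LQR design step on an identified discrete-time state-space model, using SciPy's Riccati solver; the matrices A, B, Q, R stand in for the identified quadrotor model and the chosen cost weights.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A: np.ndarray, B: np.ndarray, Q: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Discrete-time LQR gain K such that u = -K x minimises the quadratic cost."""
    P = solve_discrete_are(A, B, Q, R)                      # solve the Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)       # K = (R + B'PB)^-1 B'PA
    return K
```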

Vision-Based Train Position and Movement Estimation Using a Fuzzy Classifier (퍼지 분류기를 이용한 비전 기반 열차 위치 및 움직임 추정)

  • Song, Jae-Won;An, Tae-Ki;Lee, Dae-Ho
    • Journal of Digital Convergence / v.10 no.1 / pp.365-369 / 2012
  • We propose a vision-based method that estimates train position and movement for railway monitoring, in which a fuzzy classifier is used to determine train states. The proposed method employs frame differencing and background subtraction to estimate train motion and presence, respectively, and these features serve as the linguistic variables of the fuzzy classifier. Experimental results show that the proposed method correctly estimates train position and movement, so it can be used in railway monitoring systems that estimate crowd density or ensure safety.
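
A minimal sketch of the frame-difference and background-subtraction features that feed the fuzzy classifier; the MOG2 background model and the thresholds are assumptions, not necessarily what the paper used.

```python
import cv2
import numpy as np

# Background model for the "train presence" cue; frame differencing for "train motion".
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def train_features(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Return (motion, presence) ratios in [0, 1] used as fuzzy input variables."""
    motion = cv2.absdiff(curr_gray, prev_gray)
    presence = bg_subtractor.apply(curr_gray)
    motion_ratio = np.count_nonzero(motion > 25) / motion.size
    presence_ratio = np.count_nonzero(presence > 0) / presence.size
    return motion_ratio, presence_ratio
```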

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems use simple hand detection methods, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. We therefore propose a statistical method to recognize and detect hand regions in images using geometrical structures. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In our experiments, the proposed method achieves a recognition rate of 99.28%, an improvement of 3.91% over the conventional appearance-based method.
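
The abstract does not detail its statistical, geometry-based model, so the following is only a generic skin-color segmentation baseline (fixed YCrCb thresholding), not the paper's method; the threshold values are commonly used defaults and assume OpenCV 4.x.

```python
import cv2
import numpy as np

def detect_hand_region(frame_bgr: np.ndarray):
    """Segment candidate hand pixels with a fixed YCrCb skin-color range and
    return the largest contour's bounding box, or None if nothing is found."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # common skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)
```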

Development and Validation of a Vision-Based Needling Training System for Acupuncture on a Phantom Model

  • Trong Hieu Luu;Hoang-Long Cao;Duy Duc Pham;Le Trung Chanh Tran;Tom Verstraten
    • Journal of Acupuncture Research / v.40 no.1 / pp.44-52 / 2023
  • Background: Previous studies have investigated technology-aided needling training systems for acupuncture on phantom models using various measurement techniques. In this study, we developed and validated a vision-based needling training system (noncontact measurement) and compared its training effectiveness with that of the traditional training method. Methods: Needle displacements during manipulation were analyzed using OpenCV to derive three parameters, i.e., needle insertion speed, needle insertion angle (needle tip direction), and needle insertion length. The system was validated in a laboratory setting and in a needling training course. The performances of the novices (students) before and after training were compared with those of the experts, and the technology-aided training method was compared with the traditional training method. Results: Before the training, a significant difference in needle insertion speed was found between experts and novices. After the training, the novices approached the speed of the experts. Both training methods improved the insertion speed of the novices after 10 training sessions; however, the technology-aided training group already showed improvement after five sessions. Students and teachers showed positive attitudes toward the system. Conclusion: The results suggest that the technology-aided method using computer vision has training effectiveness similar to the traditional one and can potentially be used to speed up needling training.
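
A minimal sketch of deriving the three parameters (insertion speed, angle, and length) from tracked needle-tip positions, assuming per-frame pixel coordinates of the tip and a known mm-per-pixel scale; the names and the simple straight-line length estimate are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def needling_parameters(tip_xy: np.ndarray, fps: float, mm_per_px: float):
    """Derive insertion speed (mm/s), direction angle (deg), and insertion
    length (mm) from a sequence of tracked needle-tip pixel positions."""
    disp = np.diff(tip_xy, axis=0) * mm_per_px          # per-frame displacement in mm
    speeds = np.linalg.norm(disp, axis=1) * fps         # instantaneous speeds
    total = tip_xy[-1] - tip_xy[0]                      # overall displacement in pixels
    angle = np.degrees(np.arctan2(total[1], total[0]))  # tip direction vs. image x-axis
    length = np.linalg.norm(total) * mm_per_px          # straight-line insertion length
    return speeds.mean(), angle, length
```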

Vision Based Vehicle Detection and Traffic Parameter Extraction (비젼 기반 차량 검출 및 교통 파라미터 추출)

  • 하동문;이종민;김용득
    • Journal of KIISE: Computer Systems and Theory / v.30 no.11 / pp.610-620 / 2003
  • Various shadows are one of the main factors that cause errors in vision-based vehicle detection. In this paper, two simple methods, a landmark-based method and a BS & Edge method, are proposed for vehicle detection and shadow rejection. In the experiments, vehicle detection accuracy was higher than 96%, even while shadows cast by roadside buildings grew considerably. Based on these two methods, vehicle counting, tracking, classification, and speed estimation are achieved, so that real-time traffic parameters concerning traffic flow can be extracted to describe the load of each lane.
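
A minimal sketch in the spirit of the BS & Edge idea: keep foreground blobs only where edge evidence exists, since cast shadows produce foreground pixels but few internal edges. The background model choice, thresholds, and blob filter are assumptions, not the paper's exact method.

```python
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def detect_vehicles(frame_bgr: np.ndarray):
    """Combine background subtraction with edge evidence to suppress shadows
    and return bounding boxes of candidate vehicles."""
    fg = bg_model.apply(frame_bgr)
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]     # drop MOG2 shadow label (127)
    edges = cv2.Canny(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))
    candidate = cv2.bitwise_and(fg, edges)                     # foreground AND edge support
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 300]
```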

Aircraft Recognition from Remote Sensing Images Based on Machine Vision

  • Chen, Lu;Zhou, Liming;Liu, Jinming
    • Journal of Information Processing Systems / v.16 no.4 / pp.795-808 / 2020
  • Because the Yolov3 network yields poor evaluation indexes, such as detection accuracy and recall rate, when detecting aircraft in remote sensing images, this paper proposes a remote sensing image aircraft detection method based on machine vision. To improve target detection, the Inception module was introduced into the Yolov3 network structure, and the dataset was then cluster-analyzed using the k-means algorithm. To obtain the best aircraft detection model, we adjusted the network parameters of the pre-trained model, increased the resolution of the input image, and adopted a multi-scale training model. We conducted experiments on the aircraft data of the RSOD-Dataset and showed that our method improves several evaluation indicators; the experiments also indicate that the method has good detection and recognition ability for other ground objects.
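
A minimal sketch of the k-means anchor clustering step mentioned above, using the 1 − IoU distance commonly used for YOLO anchor selection; the (width, height) box list would come from the RSOD annotations, and the parameter values are illustrative.

```python
import numpy as np

def kmeans_anchors(wh: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0):
    """Cluster (width, height) pairs with k-means under a 1 - IoU distance
    to obtain anchor boxes for a YOLO-style detector."""
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # IoU between every box and every anchor, assuming shared top-left corners
        inter = (np.minimum(wh[:, None, 0], anchors[None, :, 0]) *
                 np.minimum(wh[:, None, 1], anchors[None, :, 1]))
        union = wh[:, 0:1] * wh[:, 1:2] + anchors[:, 0] * anchors[:, 1] - inter
        assign = np.argmax(inter / union, axis=1)        # max IoU == min (1 - IoU)
        anchors = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                            else anchors[i] for i in range(k)])
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sorted by area
```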