• Title/Abstract/Keywords: Vision data

Search results: 1,800 items (processing time 0.025 s)

비전 센서를 이용한 쿼드로터형 무인비행체의 목표 추적 제어 (Target Tracking Control of a Quadrotor UAV using Vision Sensor)

  • 유민구;홍성경
    • 한국항공우주학회지 / Vol. 40, No. 2 / pp.118-128 / 2012
  • This paper presents a vision-sensor-based target-tracking position controller for a quadrotor UAV, verified through simulation and experiments. Before the controller design, the quadrotor's dynamics were analyzed and a model was built from experimental data; the model coefficients were obtained with the Prediction Error Method (PEM) applied to actual flight data. Based on the estimated model, a position controller that follows an arbitrary target was designed using the Linear Quadratic Regulator (LQR) technique. The relative position between the quadrotor and the object was obtained through the vision sensor's color-tracking function, and altitude information was obtained with an ultrasonic sensor. Finally, tracking experiments with an actual moving object were carried out to evaluate the LQR controller's performance.
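
The LQR step above can be sketched in a few lines; the state-space matrices, weights, and reference interface below are illustrative placeholders, not the paper's PEM-identified quadrotor model.

```python
# Minimal LQR sketch, assuming a toy one-axis position/velocity model
# (not the paper's identified quadrotor dynamics).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -0.5]])        # assumed dynamics: position, velocity with drag
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])           # penalize position error more than velocity
R = np.array([[0.1]])              # control-effort weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P     # optimal state-feedback gain

def track(x, x_ref):
    """Control input u = -K(x - x_ref); x_ref would come from the vision tracker."""
    return -K @ (x - x_ref)
```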

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36, No. 6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object extraction method that combines Lucas-Kanade optical-flow motion detection with images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by all of the individual robots. Global mapping is time consuming because map data must be exchanged among the robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it measures all of the information around a robot simultaneously. The computational load of the correction algorithm is reduced relative to existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created for each robot using the omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing the maps it produces with the real maps.
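
A small sketch of the Lucas-Kanade motion-detection step that the object extraction relies on, written with OpenCV; the feature-detector parameters and motion threshold are assumptions, and fisheye undistortion is omitted.

```python
# Sketch: detect moving feature points between two grayscale frames using
# Lucas-Kanade optical flow (OpenCV). Parameters are illustrative.
import cv2
import numpy as np

def moving_feature_points(prev_gray, curr_gray, min_motion_px=1.0):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), dtype=np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    prev_ok = pts[ok].reshape(-1, 2)
    next_ok = nxt[ok].reshape(-1, 2)
    motion = np.linalg.norm(next_ok - prev_ok, axis=1)
    return next_ok[motion > min_motion_px]   # candidate moving-object points
```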

무인수상선의 단일 카메라를 이용한 VFH+ 기반 장애물 회피 기법 (VFH+ based Obstacle Avoidance using Monocular Vision of Unmanned Surface Vehicle)

  • 김태진;최진우;이영준;최현택
    • 한국해양공학회지 / Vol. 30, No. 5 / pp.426-430 / 2016
  • Recently, many unmanned surface vehicles (USVs) have been developed and studied for fields such as the military, the environment, and robotics. To perform purpose-specific tasks, common autonomous navigation technologies are needed, and obstacle avoidance is essential for safe autonomous navigation. This paper describes a vector field histogram+ (VFH+) based obstacle avoidance method that uses the monocular vision of an unmanned surface vehicle. After a polar histogram is created using VFH+, an open space free of histogram peaks is selected in the moving direction. Instead of distance-sensor data, monocular vision data are used to build the polar histogram containing the obstacle information. Because the method is intended for USVs, objects on the water are treated as obstacles. Simulation results with sea images confirm that the moving direction changes according to the positions of the detected objects.
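
A simplified, illustrative version of the VFH+-style direction selection described above: obstacle bearings estimated from the camera image are binned into a polar histogram, and the open sector closest to the goal heading is chosen. The sector width and occupancy threshold are assumptions.

```python
# Simplified polar-histogram direction selection (VFH+-style sketch).
import numpy as np

def select_heading(obstacle_bearings_deg, goal_deg, sector_deg=5, threshold=0):
    n_sectors = 360 // sector_deg
    hist = np.zeros(n_sectors)
    for bearing in obstacle_bearings_deg:
        hist[int((bearing % 360) // sector_deg)] += 1    # mark occupied sectors
    open_sectors = np.where(hist <= threshold)[0]
    if open_sectors.size == 0:
        return None                                      # no free direction
    centers = open_sectors * sector_deg + sector_deg / 2.0
    diff = (centers - goal_deg + 180.0) % 360.0 - 180.0  # wrapped angular distance
    return float(centers[np.argmin(np.abs(diff))])
```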

Implementation of a High-speed Template Matching System for Wafer-vision Alignment Using FPGA

  • Jae-Hyuk So;Minjoon Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 8 / pp.2366-2380 / 2024
  • In this study, a high-speed template matching system is proposed for wafer-vision alignment. The proposed system is designed to rapidly locate markers in semiconductor equipment used for wafer-vision alignment. We optimized and implemented a template-matching algorithm for the high-speed processing of high-resolution wafer images. Owing to the simplicity of wafer markers, we removed unnecessary components in the algorithm and designed the system using a field-programmable gate array (FPGA) to implement high-speed processing. The hardware blocks were designed using the Xilinx ZCU104 board, and the pyramid and matching blocks were designed using programmable logic for accelerated operations. To validate the proposed system, we established a verification environment using stage equipment commonly used in industrial settings and reference-software-based validation frameworks. The output results from the FPGA were transmitted to the wafer-alignment controller for system verification. The proposed system reduced the data-processing time by approximately 30% and achieved a level of accuracy in detecting wafer markers that was comparable to that achieved by reference software, with minimal deviation. This system can be used to increase precision and productivity during semiconductor manufacturing processes.
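
A software sketch of the coarse-to-fine (pyramid) template matching that the FPGA pipeline accelerates, using OpenCV for illustration; the pyramid depth, search margin, and matching metric are assumptions rather than the implemented hardware design.

```python
# Coarse-to-fine template matching sketch (software illustration of the idea).
import cv2

def pyramid_match(image, template, levels=2, margin=16):
    img, tpl = image, template
    for _ in range(levels):                       # build the coarse level
        img, tpl = cv2.pyrDown(img), cv2.pyrDown(tpl)
    res = cv2.matchTemplate(img, tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, coarse = cv2.minMaxLoc(res)
    x = coarse[0] * (2 ** levels)                 # map back to full resolution
    y = coarse[1] * (2 ** levels)
    h, w = template.shape[:2]
    x0, y0 = max(0, x - margin), max(0, y - margin)
    roi = image[y0:y + h + margin, x0:x + w + margin]
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, fine = cv2.minMaxLoc(res)
    return (x0 + fine[0], y0 + fine[1]), score    # marker location and confidence
```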

Evaluating Chest Abnormalities Detection: YOLOv7 and Detection Transformer with CycleGAN Data Augmentation

  • Yoshua Kaleb Purwanto;Suk-Ho Lee;Dae-Ki Kang
    • International journal of advanced smart convergence / Vol. 13, No. 2 / pp.195-204 / 2024
  • In this paper, we investigate the comparative performance of two leading object detection architectures, YOLOv7 and Detection Transformer (DETR), across varying levels of data augmentation using CycleGAN. Our experiments focus on chest scan images within the context of biomedical informatics, specifically targeting the detection of abnormalities. The study reveals that YOLOv7 consistently outperforms DETR across all levels of augmented data, maintaining better performance even with 75% augmented data. Additionally, YOLOv7 demonstrates significantly faster convergence, requiring approximately 30 epochs compared to DETR's 300 epochs. These findings underscore the superiority of YOLOv7 for object detection tasks, especially in scenarios with limited data and when rapid convergence is essential. Our results provide valuable insights for researchers and practitioners in the field of computer vision, highlighting the effectiveness of YOLOv7 and the importance of data augmentation in improving model performance and efficiency.
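
For context, assembling a training set at a given augmentation level (for example, 75% CycleGAN-generated images relative to the real set) can be sketched as below; the directory layout, file format, and sampling scheme are assumptions and do not reflect the authors' exact pipeline.

```python
# Sketch: mix real chest-scan images with CycleGAN-generated ones at a chosen
# augmentation fraction. Paths and the 0.75 ratio are illustrative assumptions.
import random
from pathlib import Path

def build_training_list(real_dir, synthetic_dir, augment_fraction=0.75, seed=0):
    real = sorted(Path(real_dir).glob("*.png"))
    synthetic = sorted(Path(synthetic_dir).glob("*.png"))
    n_synthetic = int(len(real) * augment_fraction)
    random.seed(seed)
    return real + random.sample(synthetic, min(n_synthetic, len(synthetic)))
```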

로봇 비전의 영상 인식 AI를 위한 전이학습 정량 평가 (Quantitative evaluation of transfer learning for image recognition AI of robot vision)

  • 정재학
    • 문화기술의 융합 / Vol. 10, No. 3 / pp.909-914 / 2024
  • This study presents a quantitative evaluation of transfer learning, which is widely used in various AI fields including image recognition for robot vision. Prior work reports quantitative and qualitative analyses of results obtained with transfer learning, but rarely examines transfer learning itself. This study therefore proposes a quantitative evaluation of transfer learning based on MNIST, a handwritten-digit database. For a reference network, the change in accuracy was tracked according to the depth of the frozen layers and the ratio of transfer-learning data to pre-training data. The results show that when layers up to the first layer are frozen, an accuracy of 90% or higher is stably maintained as long as the transfer-learning data amount to at least 3% of the pre-training data. The proposed quantitative evaluation method can be used to implement transfer learning optimized for network structures and data types, and will broaden the application of robot vision and image-analysis AI in various environments.
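
A minimal sketch of the freeze-and-fine-tune setup evaluated above: freeze the network up to the first layer and retrain on a small fraction of MNIST. The architecture, the pretrained-weight file, and the 3% split below are illustrative assumptions, not the paper's reference network.

```python
# Transfer-learning sketch: freeze the first layer, fine-tune on 3% of MNIST.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

n = int(len(x_train) * 0.03)                # 3% of the data for transfer learning
x_small, y_small = x_train[:n], y_train[:n]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# model.load_weights("pretrained.weights.h5")  # hypothetical pretrained weights
model.layers[0].trainable = False             # freeze up to the first layer
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_small, y_small, epochs=5, validation_split=0.1)
```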

A Study on Real-time Control of Bead Height and Joint Tracking Using Laser Vision Sensor

  • Kim, H. K.;Park, H.
    • International Journal of Korean Welding Society / Vol. 4, No. 1 / pp.30-37 / 2004
  • There have been continuous efforts to automate welding processes. This automation can be said to fall into two categories: weld seam tracking and weld quality evaluation. Recently, attempts to achieve both functions simultaneously have been increasing. For the study presented in this paper, a vision sensor was built, a vision system was constructed, and the three-dimensional geometry of the bead was measured on-line with it. Because welding is a nonlinear process, a fuzzy controller was designed. With this, an adaptive control system is proposed that acquires the bead height and the coordinates of points on the bead along the horizontal fillet joint, performs seam tracking with those data, and at the same time controls the bead geometry to a uniform shape. A communication system that enables communication with the industrial robot was designed to control the bead geometry and track the weld seam. Experiments were carried out with varied offset angles from the pre-taught weld path, and the adaptive system showed favorable results.

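A toy illustration of the kind of fuzzy rule evaluation such a bead-height controller uses: triangular memberships over the height error and a weighted-average defuzzification. The membership breakpoints and rule consequents are assumptions, not the paper's controller.

```python
# Toy fuzzy controller sketch for a bead-height error signal (illustrative only).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_correction(height_error_mm):
    neg = tri(height_error_mm, -2.0, -1.0, 0.0)   # bead too low
    zero = tri(height_error_mm, -1.0, 0.0, 1.0)   # bead about right
    pos = tri(height_error_mm, 0.0, 1.0, 2.0)     # bead too high
    consequents = np.array([0.5, 0.0, -0.5])      # assumed welding-parameter changes
    weights = np.array([neg, zero, pos])
    return float(weights @ consequents / (weights.sum() + 1e-9))  # defuzzify
```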

스테레오 비전을 이용한 포인팅 디바이스에 관한 연구 (A study on pointing device system using stereo vision)

  • 한승일;황용현;이병국;이준재
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 10, No. 2 / pp.67-80 / 2006
  • This paper proposes a new pointing-device method that uses stereo vision to replace the conventional mouse. The proposed method uses computer vision to overcome the drawbacks of existing recognition devices, namely the movement constraints imposed by markers and the cost of expensive equipment. In the same way that humans recognize information through sight, the system uses the computer's image information to track and match the position of an object in real time through color-region segmentation, computes its location from stereo geometry, and performs the pointing operation.

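The stereo-geometry step can be sketched as a simple disparity-to-depth computation once the colored marker region has been matched in the left and right images; the focal length, baseline, and principal-point values below are illustrative assumptions.

```python
# Stereo triangulation sketch: 3-D position from left/right image coordinates.
def triangulate(x_left, x_right, y, focal_px=700.0, baseline_m=0.12,
                cx=320.0, cy=240.0):
    disparity = float(x_left - x_right)
    if disparity <= 0.0:
        return None                              # invalid correspondence
    Z = focal_px * baseline_m / disparity        # depth along the optical axis
    X = (x_left - cx) * Z / focal_px
    Y = (y - cy) * Z / focal_px
    return X, Y, Z
```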

조명 변화에 강인한 로봇 축구 시스템의 색상 분류기 (Robust Color Classifier for Robot Soccer System under Illumination Variations)

  • 이성훈;박진현;전향식;최영규
    • 대한전기학회논문지:시스템및제어부문D / Vol. 53, No. 1 / pp.32-39 / 2004
  • Color-based vision systems have been used to recognize our team's robots, the opponent team's robots, and the ball in robot soccer systems. However, such systems are very sensitive to color variations caused by brightness changes. In this paper, a neural network trained with data obtained under various illumination conditions is used to classify colors in a modified YUV color space for the robot soccer vision system. For this, a new method to measure brightness using a color card is proposed. After the neural network is trained, a look-up table is generated to replace it in order to reduce the computation time. Experimental results show that the proposed color classification method is robust under illumination variations.
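
The look-up-table replacement described above can be sketched as follows: every quantized (Y, U, V) triple is classified once offline by the trained network, so that run-time classification becomes a single array lookup. The classifier interface and the quantization step are assumptions.

```python
# Sketch: bake a trained color classifier into a LUT over quantized YUV values.
import numpy as np

def build_color_lut(classify, step=4):
    """classify(y, u, v) -> class id; returns a LUT indexed by quantized Y, U, V."""
    bins = 256 // step
    lut = np.zeros((bins, bins, bins), dtype=np.uint8)
    for yi in range(bins):
        for ui in range(bins):
            for vi in range(bins):
                lut[yi, ui, vi] = classify(yi * step, ui * step, vi * step)
    return lut

def classify_pixel(lut, y, u, v, step=4):
    return lut[y // step, u // step, v // step]   # O(1) lookup at run time
```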

파이프 용접에서 다중 시각센서를 이용한 용접선 추적 및 용접결함 측정에 관한 연구 (A Study on Seam Tracking and Weld Defects Detecting for Automated Pipe Welding by Using Double Vision Sensors)

  • 송형진;이승기;강윤희;나석주
    • Journal of Welding and Joining / Vol. 21, No. 1 / pp.60-65 / 2003
  • At present, welding of most large-diameter pipes is carried out manually. Automation of the welding process is necessary for consistent weld quality and improved productivity. In this study, two vision sensors based on optical triangulation were used to obtain the information needed for seam tracking and weld-defect detection. Using the vision sensors, noise was removed, images and 3D information were obtained, and the positions of the feature points were detected. This process provided the seam and leg position data, calculated the gap size, fillet area, and leg length, and evaluated weld defects according to ISO 5817. Noise in the images was removed by using the gradient values of the laser stripe's coordinates, and the various feature points were detected with an algorithm based on the iterative polygon approximation method. Since processing time is critical, all of these steps must be carried out during welding.
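
The feature-point step can be illustrated with an iterative polygon approximation (OpenCV's Douglas-Peucker variant) applied to the extracted laser-stripe profile; the input array layout and the tolerance are assumptions, not the authors' exact algorithm.

```python
# Sketch: approximate the laser-stripe profile by a polygon; its vertices are
# candidate feature points (e.g., the joint corners). Tolerance is illustrative.
import cv2
import numpy as np

def stripe_feature_points(stripe_xy, epsilon_px=2.0):
    """stripe_xy: (N, 2) array of laser-stripe (x, y) coordinates along the joint."""
    curve = np.asarray(stripe_xy, dtype=np.float32).reshape(-1, 1, 2)
    approx = cv2.approxPolyDP(curve, epsilon_px, closed=False)
    return approx.reshape(-1, 2)
```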