• Title/Abstract/Keyword: vision model


Camera Calibration when the Accuracies of Camera Model and Data Are Uncertain

  • 도용태
    • 센서학회지 / Vol. 13 No. 1 / pp.27-34 / 2004
  • Camera calibration is an important and fundamental procedure for applying a vision sensor to 3D problems. Recently, many camera calibration methods have been proposed, particularly in the area of robot vision. However, the reliability of the data used in calibration has seldom been considered in spite of its importance. In addition, a camera model cannot guarantee good results consistently under various conditions. This paper proposes methods to overcome such uncertainty problems of data and camera models, which are often encountered in practical camera calibration. Using the RANSAC (Random Sample Consensus) algorithm, the few data points with excessively large errors are excluded. Artificial neural networks combined in a two-step structure are trained to compensate for the result of a calibration method with a particular model under a given condition. The proposed methods are useful because they can be employed in addition to most existing camera calibration techniques when needed. We applied them to a linear camera calibration method and obtained improved results.
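
The abstract above mentions using RANSAC to drop calibration points with excessive errors before fitting. Below is a minimal sketch of that idea for 3D-to-2D correspondences, assuming a standard linear DLT fit as the underlying calibration model; the authors' exact calibration method, sample size, and thresholds are not given in the abstract.

```python
# RANSAC-style outlier rejection for camera calibration correspondences.
# world: (N, 3) array of 3D points, image: (N, 2) array of pixel coordinates.
import numpy as np

def fit_dlt(world, image):
    """Fit a 3x4 projection matrix from 3D-2D correspondences (linear DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world, image):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def reproj_error(P, world, image):
    Xh = np.hstack([world, np.ones((len(world), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - image, axis=1)

def ransac_calibrate(world, image, n_iter=500, sample=6, thresh=2.0, rng=None):
    rng = rng or np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(world), size=sample, replace=False)
        P = fit_dlt(world[idx], image[idx])
        inliers = reproj_error(P, world, image) < thresh   # threshold in pixels
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers, excluding the few points with excessive errors.
    return fit_dlt(world[best_inliers], image[best_inliers]), best_inliers
```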

Aircraft Recognition from Remote Sensing Images Based on Machine Vision

  • Chen, Lu;Zhou, Liming;Liu, Jinming
    • Journal of Information Processing Systems / Vol. 16 No. 4 / pp.795-808 / 2020
  • Because the YOLOv3 network yields poor evaluation indexes, such as detection accuracy and recall rate, when detecting aircraft in remote sensing images, this paper proposes a remote sensing image aircraft detection method based on machine vision. To improve the target detection effect, the Inception module was introduced into the YOLOv3 network structure, and the dataset was cluster-analyzed using the k-means algorithm. To obtain the best aircraft detection model, we adjusted the network parameters of the pre-trained model and increased the resolution of the input image, and our method adopted a multi-scale training model. We conducted experiments on the remote sensing aircraft data of the RSOD-Dataset and showed that our method improves several evaluation indicators. The experiments also show that our method has good detection and recognition ability for other ground objects.
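
The abstract above mentions cluster-analyzing the dataset with k-means; in YOLO-family detectors this usually means clustering ground-truth box sizes into anchor boxes with an IoU-based distance. A generic sketch of that recipe follows; the paper's exact clustering setup, number of anchors, and update rule are assumptions here.

```python
# k-means clustering of ground-truth (width, height) pairs into anchor boxes,
# using 1 - IoU as the distance, as commonly done for YOLO-family detectors.
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, assuming boxes and anchors share a corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, None, 0] * boxes[:, None, 1] + \
            anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, n_iter=100, rng=None):
    rng = rng or np.random.default_rng(0)
    anchors = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)].astype(float)
    for _ in range(n_iter):
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)  # nearest = highest IoU
        for j in range(k):
            members = boxes_wh[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)   # the median is also a common choice
    return anchors[np.argsort(anchors.prod(axis=1))]  # sorted by area
```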

YOLOv7 Model Inference Time Complexity Analysis in Different Computing Environments

  • 박천수
    • 반도체디스플레이기술학회지 / Vol. 21 No. 3 / pp.7-11 / 2022
  • Object detection technology is one of the main research topics in the field of computer vision and has established itself as an essential base technology for implementing various vision systems. Recent DNN (Deep Neural Network)-based algorithms achieve much higher recognition accuracy than traditional algorithms. However, it is well known that DNN model inference requires relatively high computational power. In this paper, we analyze the inference time complexity of the state-of-the-art object detection architecture YOLOv7 in various environments. Specifically, we compare and analyze the time complexity of four variants of the YOLOv7 model, YOLOv7-tiny, YOLOv7, YOLOv7-X, and YOLOv7-E6, when performing inference on a CPU and a GPU. Furthermore, we analyze how the time complexity changes when running inference on the same models using the PyTorch framework and the ONNX Runtime engine.
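
A rough sketch of the kind of latency harness such a comparison needs is shown below: warm-up runs, explicit CUDA synchronization for the PyTorch path, and an ONNX Runtime session for the exported model. The model loading/export step (e.g. a YOLOv7 checkpoint exported to "model.onnx") and the exact measurement protocol used in the paper are assumptions, so treat this as an illustration only.

```python
# Latency measurement harness for PyTorch vs. ONNX Runtime inference.
import time
import numpy as np
import torch
import onnxruntime as ort

def bench_torch(model, device="cuda", shape=(1, 3, 640, 640), n=100):
    model = model.to(device).eval()
    x = torch.randn(*shape, device=device)
    with torch.no_grad():
        for _ in range(10):                       # warm-up iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(n):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()              # wait for queued GPU work
    return (time.perf_counter() - t0) / n * 1e3   # ms per inference

def bench_onnx(path="model.onnx", shape=(1, 3, 640, 640), n=100,
               providers=("CUDAExecutionProvider", "CPUExecutionProvider")):
    sess = ort.InferenceSession(path, providers=list(providers))
    name = sess.get_inputs()[0].name
    x = np.random.randn(*shape).astype(np.float32)
    for _ in range(10):                           # warm-up iterations
        sess.run(None, {name: x})
    t0 = time.perf_counter()
    for _ in range(n):
        sess.run(None, {name: x})
    return (time.perf_counter() - t0) / n * 1e3   # ms per inference
```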

Object Detection Performance Analysis between On-GPU and On-Board Analysis for Military Domain Images

  • Du-Hwan Hur;Dae-Hyeon Park;Deok-Woong Kim;Jae-Yong Baek;Jun-Hyeong Bak;Seung-Hwan Bae
    • 한국컴퓨터정보학회논문지 / Vol. 29 No. 8 / pp.157-164 / 2024
  • This paper discusses the feasibility of building deep learning-based detectors on boards with limited resources. Many studies evaluate detectors in high-performance GPU environments, but evaluation on boards with limited computational resources is still lacking. Therefore, in this work we implement and deploy deep learning-based detectors on a board by parsing and optimizing the detectors. To examine the performance of deep learning-based detectors under limited resources, we monitor several detectors on various hardware resources and compare and analyze on-board and on-GPU detection models on the COCO detection dataset in terms of mAP, power consumption, and execution speed (FPS). To assess the effect of applying detectors in the military domain, we also evaluate the detectors on our own dataset of thermal images reflecting aerial combat scenarios. As a result, this study investigates the strengths of deep learning-based detectors running on-board and shows that such detectors can contribute in battlefield situations.
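
A small sketch of the kind of on-device monitoring described above follows: measuring frames per second of a given detector callable and sampling GPU power draw via nvidia-smi. Embedded boards such as Jetson expose power through other tools (e.g. tegrastats), and the paper's actual measurement pipeline is not described in the abstract.

```python
# FPS measurement for a detector callable, plus a GPU power-draw sample.
import subprocess
import time

def measure_fps(detect, frames, warmup=10):
    """detect: callable taking one frame; frames: iterable of preloaded images."""
    frames = list(frames)
    for f in frames[:warmup]:          # warm-up before timing
        detect(f)
    t0 = time.perf_counter()
    for f in frames:
        detect(f)
    return len(frames) / (time.perf_counter() - t0)

def gpu_power_watts():
    """Current GPU power draw in watts, via nvidia-smi (desktop GPUs)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"], text=True)
    return float(out.strip().splitlines()[0])
```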

Hybrid Learning for Vision-and-Language Navigation Agents

  • 오선택;김인철
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 9 No. 9 / pp.281-290 / 2020
  • Vision-and-language navigation is a complex intelligence problem that requires both visual and language understanding. This paper proposes a new learning model for vision-and-language navigation agents. The model adopts hybrid learning, which combines imitation learning based on demonstration data with reinforcement learning based on action rewards. It can therefore complementarily resolve the problem of imitation learning, which can be biased toward the demonstration data, and the relatively low data efficiency of reinforcement learning. In addition, the proposed model uses a new path-based reward function designed to overcome the shortcomings of existing goal-based reward functions. Through various experiments using the Matterport3D simulation environment and the R2R benchmark dataset, we demonstrate the high performance of the proposed model.
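
The hybrid objective described above can be pictured as a weighted mix of an imitation (behavior cloning) loss on demonstration actions and a policy-gradient loss weighted by the (path-based) reward. The sketch below is a generic PyTorch illustration of that mix; the paper's actual network, reward shaping, and mixing schedule are not given in the abstract.

```python
# Generic hybrid imitation + reinforcement loss for a discrete-action policy.
import torch
import torch.nn.functional as F

def hybrid_loss(logits_demo, demo_actions, logits_rollout, sampled_actions,
                rewards, lam=0.5):
    """
    logits_demo:     (T, A) policy logits along a demonstration trajectory
    demo_actions:    (T,)   expert actions (imitation targets)
    logits_rollout:  (T, A) policy logits along the agent's own rollout
    sampled_actions: (T,)   actions the agent actually sampled
    rewards:         (T,)   path-based rewards (e.g. returns or advantages)
    lam:             weight balancing the imitation and reinforcement terms
    """
    il_loss = F.cross_entropy(logits_demo, demo_actions)       # behavior cloning
    log_probs = F.log_softmax(logits_rollout, dim=-1)
    chosen = log_probs.gather(1, sampled_actions.unsqueeze(1)).squeeze(1)
    rl_loss = -(rewards * chosen).mean()                        # REINFORCE-style term
    return lam * il_loss + (1.0 - lam) * rl_loss
```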

u-Cities: Vision, Model & Strategies

  • 오재인
    • 한국경영과학회:학술대회논문집 / 2005 Spring Joint Conference of 한국경영과학회 and 대한산업공학회 / pp.448-474 / 2005

Position estimation using combined vision and acceleration measurement

  • Nam, Yoonsu
    • 제어로봇시스템학회:학술대회논문집 / 1992 Korean Automatic Control Conference Proceedings (International Session); KOEX, Seoul; 19-21 Oct. 1992 / pp.187-192 / 1992
  • There are several potential error sources that can affect the estimation of the position of an object using combined vision and acceleration measurements. Two of the major sources, accelerometer dynamics and random noise in both sensor outputs, are considered. Using a second-order model, the errors introduced by the accelerometer dynamics are shown to be reduced by a smaller damping ratio and a larger natural frequency. A Kalman filter approach was developed to minimize the influence of random errors on the position estimate. Experimental results for the end-point movement of a flexible beam confirmed the efficacy of the Kalman filter algorithm.
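
A minimal one-dimensional illustration of the fusion idea follows: a kinematic Kalman filter that predicts with the accelerometer reading as a control input and corrects with the vision-based position measurement. The paper's actual filter design, accelerometer model, and noise parameters are not reproduced here.

```python
# 1-D Kalman filter fusing accelerometer (prediction input) and vision (measurement).
import numpy as np

def make_filter(dt, accel_var, vision_var):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
    B = np.array([[0.5 * dt**2], [dt]])     # effect of the measured acceleration
    H = np.array([[1.0, 0.0]])              # vision observes position only
    Q = accel_var * B @ B.T                 # process noise from accelerometer noise
    R = np.array([[vision_var]])            # vision measurement noise
    return F, B, H, Q, R

def kf_step(x, P, accel, z, F, B, H, Q, R):
    """One predict/correct cycle; x: (2,) state, P: (2, 2) covariance, z: position."""
    x = F @ x + B.flatten() * accel         # predict with the accelerometer
    P = F @ P @ F.T + Q
    y = z - H @ x                           # innovation from the vision measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).flatten()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```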


Camera Calibration Using the TSK Fuzzy System

  • 이희성;홍성준;오경세;김은태
    • 한국지능시스템학회:학술대회논문집 / 2006 Spring Conference Proceedings of 한국퍼지및지능시스템학회, Vol. 16 No. 1 / pp.56-58 / 2006
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system is a very popular fuzzy system that approximates any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules; it demonstrates not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple technique for camera calibration in machine vision using the TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses the clustered 3D geometric knowledge. The TSK fuzzy system is employed to estimate the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration.
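
For readers unfamiliar with TSK inference, the sketch below shows the generic mechanism the abstract relies on: Gaussian memberships fire a set of local linear models, and the output is their normalized weighted average. The specific rule partitioning by camera view used in the paper is not detailed in the abstract, so this is a generic illustration.

```python
# Generic first-order TSK fuzzy inference: Gaussian memberships over input
# regions and a weighted average of local linear consequents.
import numpy as np

def tsk_predict(x, centers, sigmas, coeffs):
    """
    x:       (d,)     input vector
    centers: (r, d)   Gaussian membership centers, one row per rule
    sigmas:  (r, d)   Gaussian membership widths
    coeffs:  (r, d+1) local linear consequents [w_0, w_1, ..., w_d] per rule
    """
    # Firing strength of each rule: product of per-dimension memberships.
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2).prod(axis=1)
    w = mu / mu.sum()                            # normalized firing strengths
    local = coeffs[:, 0] + coeffs[:, 1:] @ x     # each rule's linear output
    return w @ local                             # weighted aggregation
```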


A Study on DGPS Data Error Correction through Real-Time Coordinate Conversion Using the Vision System

  • 문성룡;채정수;박장훈;이호순;노도환
    • 대한전기학회:학술대회논문집 / 2003 Summer Conference Proceedings D of 대한전기학회 / pp.2310-2312 / 2003
  • This paper describes a navigation system for an autonomous vehicle in outdoor environments. The vehicle uses a vision system to detect coordinates and DGPS information to determine its initial position and orientation. The vision system detects coordinates in the environment by referring to an environment model. As the vehicle moves, it estimates its position from conventional DGPS data and matches the coordinates against the environment model in order to reduce the error in the vehicle's position estimate. The vehicle's initial position and orientation are calculated from the coordinate values of the first and second locations, which are acquired by DGPS; subsequent orientations and positions are derived from these. Experimental results in real environments have shown the effectiveness of the proposed navigation and real-time methods.
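
One concrete step mentioned above, deriving the initial orientation from the first two DGPS fixes, reduces to an atan2 over the displacement between the fixes once they are expressed in a local planar frame. A small sketch, with the local (east, north) conversion assumed to be done elsewhere:

```python
# Initial heading from two DGPS fixes given in a local planar (east, north) frame.
import math

def initial_heading(p0, p1):
    """Heading in radians, measured counter-clockwise from east, from fix p0 to p1."""
    de, dn = p1[0] - p0[0], p1[1] - p0[1]
    return math.atan2(dn, de)

# Example: moving 3 m east and 3 m north gives a heading of 45 degrees.
print(math.degrees(initial_heading((0.0, 0.0), (3.0, 3.0))))
```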


On low cost model-based monitoring of industrial robotic arms using standard machine vision

  • Karagiannidis, Aris;Vosniakos, George C.
    • Advances in robotics research / Vol. 1 No. 1 / pp.81-99 / 2014
  • This paper contributes towards the development of a computer vision system for telemonitoring of industrial articulated robotic arms. The system aims to provide precise real-time measurements of the joint angles by employing low-cost cameras and visual markers on the body of the robot. To achieve this, a mathematical model that connects image features and joint angles was developed, covering rotation of a single joint whose axis is parallel to the visual projection plane. The feature examined during image processing is the varying area of a given circular target placed on the body of the robot, as registered by the camera during rotation of the arm. In order to distinguish between rotation directions, four targets were used, placed every 90° and observed by two cameras at suitable angular distances. The results were deemed acceptable considering the camera cost and the lighting conditions of the workspace. A computational error analysis explored how deviations from the ideal camera positions affect the measurements and led to appropriate corrections. The method is deemed to be extensible to multiple joint motion of a known kinematic chain.
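
The geometric relation the method exploits can be stated simply: under near-orthographic viewing, a flat circular target of frontal area A0 tilted by an angle θ away from the image plane projects to an ellipse of area roughly A0·cos θ, so θ can be recovered from the measured area. A toy sketch follows; perspective and lens effects, which the paper analyzes and corrects for, are ignored here.

```python
# Recover a joint angle from the measured projected area of a circular target,
# assuming near-orthographic projection (area ratio = cos(theta)).
import math

def joint_angle_from_area(area_measured, area_reference):
    ratio = max(0.0, min(1.0, area_measured / area_reference))
    return math.acos(ratio)   # radians in [0, pi/2]; direction needs extra targets

# Example: a target seen at half its frontal area implies a rotation of ~60 degrees.
print(math.degrees(joint_angle_from_area(0.5, 1.0)))
```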