• Title/Abstract/Keyword: Vision Camera


Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera

  • 원대연;오현동;허성식;박봉균;안종선;심현철;탁민제
    • 한국항공우주학회지 / Vol. 37, No. 12 / pp.1209-1216 / 2009
  • This paper describes a system that uses image information acquired from multiple cameras installed in an indoor space for the attitude estimation and control of a small UAV. The proposed system is intended to overcome the limitations of outdoor flight testing and to build an efficient flight test environment; because no additional onboard sensors are needed to measure the position and attitude of the UAV, the testbed can be built from low-cost equipment. The camera calibration, marker detection, and attitude estimation techniques required to implement the system are introduced, and experimental results on the testbed demonstrate the validity and performance of the proposed method.
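The attitude-estimation step mentioned in the abstract can be illustrated with a standard technique: given marker positions known in the vehicle's body frame and the same markers reconstructed in the room frame from the cameras, the rotation (attitude) can be recovered by an SVD-based (Kabsch-style) fit. This is a generic sketch of the idea, not the paper's actual algorithm; all names and values are illustrative.

```python
import numpy as np

def estimate_attitude(body_pts, room_pts):
    """Recover the rotation R (and translation t) mapping body-frame marker
    positions to room-frame positions, via the SVD-based Kabsch method."""
    cb = body_pts.mean(axis=0)
    cr = room_pts.mean(axis=0)
    H = (body_pts - cb).T @ (room_pts - cr)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cb
    return R, t

# Synthetic check: rotate known markers by a known attitude and recover it.
rng = np.random.default_rng(0)
markers = rng.uniform(-1, 1, size=(4, 3))        # 4 markers on the UAV
yaw = np.deg2rad(30)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
room = markers @ R_true.T + np.array([1.0, 2.0, 0.5])
R_est, t_est = estimate_attitude(markers, room)
print(np.allclose(R_est, R_true))  # → True
```

With noisy marker reconstructions the same fit returns the least-squares attitude, which is why variants of it are common in multi-camera tracking setups.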

Vision Based Position Control of a Robot Manipulator Using an Elitist Genetic Algorithm

  • 박광호;김동준;기석호;기창두
    • 한국정밀공학회지 / Vol. 19, No. 1 / pp.119-126 / 2002
  • In this paper, we present a new approach based on an elitist genetic algorithm for the task of aligning the position of a robot gripper using CCD cameras. The vision-based control scheme for aligning the gripper with the desired position is implemented using image information. The relationship between camera-space locations and robot joint coordinates is estimated using a camera-space parameter model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation. To find the joint angles of the robot manipulator that reach the target position in image space, we apply an elitist genetic algorithm instead of a nonlinear least-squares error method. Since a GA performs a parallel search, it has good performance on optimization problems. To improve convergence speed, real coding and geometric constraint conditions are used. Experiments are carried out to exhibit the effectiveness of vision-based control using an elitist genetic algorithm with real coding.
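The two ingredients the abstract names, real coding and elitism, can be sketched in a few lines: genomes are real-valued vectors (here, joint angles), and the best individuals survive each generation unchanged. This is a minimal illustrative GA on a toy image-space error, not the paper's controller; population sizes, rates, and the error function are assumptions.

```python
import random

def elitist_ga(fitness, bounds, pop_size=30, gens=60, elite=2,
               mut_rate=0.2, mut_scale=0.1, seed=1):
    """Minimize `fitness` over real-coded genomes; the best `elite`
    individuals survive each generation unchanged (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:elite]                                  # elites pass unchanged
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)      # select from the better half
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:                # real-coded Gaussian mutation
                    child[i] += rng.gauss(0, mut_scale * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy task: drive two joint angles so the gripper's image-space error vanishes.
target = (0.3, -0.7)
err = lambda q: (q[0] - target[0]) ** 2 + (q[1] - target[1]) ** 2
best = elitist_ga(err, bounds=[(-3.14, 3.14)] * 2)
print(f"final error: {err(best):.2e}")
```

Because the elites are never replaced, the best error is monotonically non-increasing across generations, which is the property the paper leans on for reliable convergence.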

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • 한국정보통신학회 2021 Fall Conference Proceedings / pp.41-43 / 2021
  • The Robot Operating System (ROS) has been a prominent and successful framework in robotics industry and academia. However, the framework has long focused on, and been limited to, robot navigation and the manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. In this particular case, the system uses OpenCV libraries to manipulate the data from the camera, providing the robot with face-detection capability while it navigates an indoor environment. The whole system has been implemented and tested on the latest TurtleBot3 and Raspberry Pi 4 hardware.

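The abstract above pairs a face detector with a depth camera; the fusion step is simply reading the depth frame inside the detector's bounding box. The sketch below is a hypothetical NumPy illustration of that step (not the authors' code): it takes the median of the valid depth pixels in the box, skipping the zero-valued dropouts that depth cameras commonly report.

```python
import numpy as np

def distance_to_detection(depth_mm, bbox):
    """Median depth (in metres) inside a detector bounding box.
    `depth_mm` is an HxW depth image in millimetres; zeros mark
    invalid pixels and are excluded, as is typical for depth cameras."""
    x, y, w, h = bbox
    patch = depth_mm[y:y + h, x:x + w].astype(float)
    valid = patch[patch > 0]
    if valid.size == 0:
        return None                      # nothing measurable inside the box
    return float(np.median(valid)) / 1000.0

# Synthetic frame: background at 3 m, a "face" region at 1.2 m with a dropout hole.
depth = np.full((120, 160), 3000, dtype=np.uint16)
depth[40:80, 60:100] = 1200              # detected face region
depth[50:55, 70:75] = 0                  # sensor dropout inside the face
print(distance_to_detection(depth, (60, 40, 40, 40)))  # → 1.2
```

In a ROS setup the depth frame and the bounding box would arrive as messages on separate topics; the median is preferred over the mean because it is robust to background pixels caught at the box edges.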

Evaluation of Defects in the Bonded Area of Shoes using an Infrared Thermal Vision Camera

  • Kim, Jae-Yeol;Yang, Dong-Jo;Kim, Chang-Hyun
    • International Journal of Control, Automation, and Systems / Vol. 1, No. 4 / pp.511-514 / 2003
  • An infrared camera detects the infrared radiation emitted by an object in order to visualize its temperature distribution, and infrared diagnosis systems can be applied to a variety of fields. In a total inspection system for specialty shoes, defect discrimination can be automated or mechanized. This study introduces a method for the nondestructive total inspection of specialty shoes, and the performance of the proposed method is demonstrated through thermal images.
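A bonding defect shows up in the thermal image as a local temperature anomaly. As a minimal, hypothetical discrimination rule (not the paper's actual method), one can flag pixels whose temperature deviates from the frame mean by more than k standard deviations:

```python
import numpy as np

def flag_defects(thermal, k=3.0):
    """Mark pixels whose temperature deviates from the frame mean by more
    than k standard deviations -- a minimal anomaly rule for illustration."""
    mu, sigma = thermal.mean(), thermal.std()
    return np.abs(thermal - mu) > k * sigma

# Bond area at a uniform 40 C with one poorly bonded (cooler) spot.
frame = np.full((50, 50), 40.0)
frame += np.random.default_rng(3).normal(0, 0.1, frame.shape)  # sensor noise
frame[20:24, 30:34] -= 5.0                                     # defect: trapped air cools
mask = flag_defects(frame)
print(mask.sum())  # number of flagged pixels
```

An automated total-inspection line would apply such a rule (or a learned classifier) per frame and reject shoes whose flagged area exceeds a tolerance.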

An Efficient Camera Calibration Method for Head Pose Tracking

  • 박경수;임창주;이경태
    • 대한인간공학회지 / Vol. 19, No. 1 / pp.77-90 / 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to accommodate the user's head movements in such an interface. We propose an efficient camera calibration method to accurately track the 3D position and orientation of the user's head, and we evaluate its performance. The experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation (DLT) method that has been used for camera calibration. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual reality technology.

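The conventional DLT baseline that the head-tracking abstract compares against can be sketched concretely: each 3D↔2D correspondence contributes two linear equations in the 12 entries of the 3×4 projection matrix, and the SVD null vector gives the solution. This is the textbook DLT on synthetic data, not the paper's proposed method; the camera parameters below are made up.

```python
import numpy as np

def dlt_projection(world, image):
    """Estimate the 3x4 projection matrix P from n >= 6 world (X,Y,Z) <->
    image (u,v) correspondences via the direct linear transformation:
    two equations per point, solved as the SVD null vector."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)           # null vector = flattened P

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic camera: 800 px focal length, principal point (320, 240).
P_true = np.array([[800, 0, 320, 0],
                   [0, 800, 240, 0],
                   [0,   0,   1, 0]], float)
rng = np.random.default_rng(2)
world = rng.uniform([-1, -1, 4], [1, 1, 8], size=(8, 3))  # points in front of the camera
image = [project(P_true, X) for X in world]
P_est = dlt_projection(world, image)
err = max(np.linalg.norm(project(P_est, X) - u) for X, u in zip(world, image))
print(err < 1e-6)  # → True
```

On noise-free correspondences the DLT is exact; its sensitivity to noise and its neglect of lens distortion are exactly the weaknesses a tailored calibration method aims to improve on.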

A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect

  • 정회범
    • 대한기계학회 2000 Fall Conference Proceedings A / pp.596-601 / 2000
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for calibrating the camera and registering the sensor. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika; in this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model to include the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.


Control of an Underwater Stereo Camera Embedded in a Single Canister Capable of Measuring Distance

  • 이판묵;전봉환;이종무
    • 한국해양공학회 2000 Fall Conference Proceedings / pp.90-95 / 2000
  • This paper presents the vergence control of a parallel stereo camera and its application to an underwater stereo camera, to enhance the working efficiency of underwater vehicles equipped with manipulators in seabed operations. The stereo camera consists of two parallel lenses mounted on a laterally moving base and two CCD cameras mounted on a longitudinally moving base, embedded in a small pressure canister for underwater use. Because the lateral shift is related to the backward shift by a nonlinear relation, only one control input is needed to control the vergence and focus of the camera with a special driving device. The camera yields a clear stereo view over the full range of objects in air and in water, especially for short-range objects. The control system is simple enough that a small stereo camera system can be realized and applied as a stereo vision system for underwater vehicles. This paper also shows how to acquire distance information for an underwater object with this stereo camera: whenever the camera is focused on an underwater object, three-dimensional images and distance information are obtained in real time.


Control of an Underwater Stereo Camera Embedded in a Single Canister Capable of Measuring Distance

  • 이판묵;전봉환;이종무
    • 한국해양공학회지 / Vol. 15, No. 1 / pp.79-84 / 2001
  • This paper presents the control of the image disparity of a parallel stereo camera and its application to an underwater stereo camera, to enhance the working efficiency of underwater vehicles that are equipped with manipulators in seabed operations. The stereo camera consists of two parallel lenses mounted on a laterally moving base and two CCD cameras mounted on a longitudinally moving base, embedded in a small pressure canister for underwater use. Because the lateral shift is related to the backward shift by a nonlinear relation, only one control input is needed to control the vergence and focus of the camera with a special driving device. The camera yields clear stereo vision over the full range of objects in air and in water, especially for short-range objects. The control system is simple enough that a small stereo camera system can be realized and applied as a stereo vision system for underwater vehicles. This paper also shows how to acquire distance information for an underwater object with this stereo camera: whenever the camera is focused on an underwater object, three-dimensional images and distance information are obtained in real time.

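The distance readout described in the two stereo-camera abstracts above ultimately rests on stereo triangulation. For an idealized parallel stereo pair the relation is Z = f·B/d (focal length in pixels, baseline, disparity); the canister camera's nonlinear lens/CCD drive and underwater refraction modify the effective parameters, so the following is only the textbook relation with made-up numbers, not the authors' calibration.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Idealized parallel-stereo range: Z = f * B / d, with the focal
    length in pixels, the baseline in metres and the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A 700 px focal length and 0.1 m baseline: a 35 px disparity puts the
# object 2 m away.
print(depth_from_disparity(700, 0.1, 35))  # → 2.0
```

The inverse relation d = f·B/Z explains why the papers emphasize short-range objects: disparity (and hence range resolution) grows as the object gets closer.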

An Improved Fast Camera Calibration Method for Mobile Terminals

  • Guan, Fang-li;Xu, Ai-jun;Jiang, Guang-yu
    • Journal of Information Processing Systems / Vol. 15, No. 5 / pp.1082-1095 / 2019
  • Camera calibration is an important part of machine vision and close-range photogrammetry. Because current calibration methods fail to obtain ideal internal and external camera parameters efficiently with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Building on a traditional calibration method, the new method introduces second-order radial and tangential distortion models to establish a camera model with nonlinear distortion terms, and the nonlinear least-squares Levenberg-Marquardt (L-M) algorithm is used for iterative parameter optimization, so the method can quickly obtain high-precision internal and external camera parameters. The experimental results show that the new method improves both the efficiency and the precision of camera calibration: a terminal simulation on a PC indicates that the time consumed by parameter iteration was reduced from 0.220 s to 0.063 s (0.234 s on mobile terminals) and the average reprojection error was reduced from 0.25 to 0.15 pixels. The new method is therefore well suited to camera calibration on mobile terminals and can expand the application range of 3D reconstruction and close-range photogrammetry technology on such devices.
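The distortion model the abstract names, second-order radial plus tangential, has a standard closed form on normalized image coordinates; during calibration these terms enter the residual that the L-M solver minimizes. A minimal sketch of the forward model (the general formula, not the paper's specific implementation):

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply the second-order radial (k1, k2) and tangential (p1, p2)
    distortion model to a normalized image coordinate (x, y)."""
    x, y = xy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2          # radial scaling term
    dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # tangential shift in x
    dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y  # tangential shift in y
    return np.array([x * radial + dx, y * radial + dy])

# With all coefficients zero the model is the identity.
print(np.allclose(distort((0.3, -0.2), 0, 0, 0, 0), (0.3, -0.2)))  # → True
# Barrel distortion (negative k1) pulls the point toward the image centre.
print(np.linalg.norm(distort((0.3, -0.2), -0.2, 0, 0, 0)))
```

In the calibration loop, L-M adjusts (k1, k2, p1, p2) together with the intrinsic and extrinsic parameters so that the distorted projections of the calibration-target points best match the observed pixels, which is what drives the reported reprojection-error improvement.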

Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry

  • 이용주;김도완;박만우
    • 한국BIM학회 논문집 / Vol. 5, No. 2 / pp.46-54 / 2015
  • Vision-based tracking has been proposed as a means of efficiently tracking a large number of construction resources operating on a congested site. To obtain the 3D coordinates of an object, stereo-vision theory must be employed, and detecting and tracking multiple objects requires an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient entity matching method for tracking construction workers. The proposed method uses epipolar geometry, which represents the relationship between the two fixed cameras: each pixel coordinate in one camera view is projected onto the other camera view as an epipolar line, and the matching pair for a worker entity is found by comparing the proximity of all detected entities in the other view to that epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers.
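The matching rule the abstract describes can be sketched end to end: for two fixed cameras the fundamental matrix F maps a detection in view 1 to an epipolar line in view 2, and each entity is matched to the view-2 detection nearest its line. The sketch below builds F from two synthetic camera matrices via the standard F = [e₂]ₓ P₂ P₁⁺ construction; the cameras, worker positions, and threshold-free nearest-line rule are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def fundamental_from_cameras(P1, P2):
    """F = [e2]_x P2 P1^+ , relating the two fixed camera views."""
    C = np.linalg.svd(P1)[2][-1]          # camera-1 centre: P1 @ C = 0
    e2 = P2 @ C                           # epipole in view 2
    return skew(e2) @ P2 @ np.linalg.pinv(P1)

def epipolar_distance(F, x1, x2):
    """Distance (pixels) of x2 in view 2 from the epipolar line of x1."""
    l = F @ np.append(x1, 1.0)
    return abs(l @ np.append(x2, 1.0)) / np.hypot(l[0], l[1])

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two fixed cameras sharing intrinsics K; the second is shifted 0.5 m along x.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.5], [0], [0]]])
F = fundamental_from_cameras(P1, P2)

# Two workers on site, detected independently in each view.
workers = [np.array([0.2, 0.1, 5.0]), np.array([-1.0, 0.3, 7.0])]
view1 = [proj(P1, X) for X in workers]
view2 = [proj(P2, X) for X in workers]

# Match each view-1 entity to the view-2 entity nearest its epipolar line.
match = [min(range(len(view2)),
             key=lambda j: epipolar_distance(F, v1, view2[j]))
         for v1 in view1]
print(match)  # → [0, 1]
```

Once matched, each pair can be triangulated to recover the worker's 3D position; in practice a distance threshold would also be applied so that entities visible in only one view are left unmatched.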