• Title/Abstract/Keyword: vision camera

1,369 search results

홀위치 측정을 위한 레이져비젼 시스템 개발 (A Laser Vision System for the High-Speed Measurement of Hole Positions)

  • 노영식;서영수;최원태
    • 대한전기학회:학술대회논문집 / 대한전기학회 2006년도 심포지엄 논문집 정보 및 제어부문 / pp.333-335 / 2006
  • In this paper, we developed an inspection system for automobile parts using a laser vision sensor. The laser vision sensor obtains two-dimensional information from its vision camera and recovers the third dimension from the laser. A jig and a robot are used to move the sensor between inspection positions, and an integrated computer system controls the system components and manages the measurement data. The sensor measurements are compared with CAD data of the inspected object, and the validity of the measurement results is verified by using the CAD model as the reference for the measured object.


능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마 (Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control)

  • 한영모
    • 전자공학회논문지SC / Vol. 41, No. 4 / pp.39-43 / 2004
  • A typical theme in active vision systems is the gaze-fixation problem: adjusting the camera's pose so that a designated point on a moving object always stays at the center of the image. This requires combining two functions, analyzing the image captured by the camera and controlling the camera's pose. This paper proposes a gaze-fixation algorithm in which image analysis and pose control are designed within a single framework. To avoid implementation difficulties and to allow real-time application, the algorithm is designed in closed form and requires neither camera calibration nor reconstruction of 3-D distance information.
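The gaze-fixation idea above, steering the camera so a tracked point stays at the image center without calibration or depth recovery, can be illustrated with a simple proportional pan/tilt update. This is a hypothetical toy controller, not the paper's closed-form law; the gain and sign conventions are assumptions:

```python
import numpy as np

def gaze_step(target_px, image_size, pan_tilt, gain=0.001):
    """One proportional gaze-control step (illustrative only).

    target_px  : (u, v) pixel position of the tracked point
    image_size : (width, height) of the image
    pan_tilt   : current (pan, tilt) angles in radians
    Returns updated (pan, tilt) driving the point toward the image center.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err = np.array([target_px[0] - cx, target_px[1] - cy])
    # Proportional control needs no camera calibration or depth, only the
    # assumed sign/scale relation between pixel error and camera rotation.
    pan, tilt = pan_tilt
    return (pan - gain * err[0], tilt - gain * err[1])

# Target appears right of and below center -> camera rotates toward it.
pan, tilt = gaze_step(target_px=(420, 300), image_size=(640, 480),
                      pan_tilt=(0.0, 0.0))
```

Iterating this step on each frame keeps the point centered as the object moves; the paper's contribution is designing the analysis and control jointly rather than this naive split.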

Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안 (A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System)

  • 김종형;장경재;권혁동
    • 한국생산제조학회지 / Vol. 26, No. 2 / pp.223-229 / 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention because flexibility is required in robotic assembly tasks. Stereo camera systems have been used widely for robotic bin-picking, but they have two limitations. First, the computational burden of solving the correspondence problem on stereo images increases calculation time. Second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot's kinematic parameters directly affect gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system consisting of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with their wide viewing angle, measure the object's pose roughly. Then the third, hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.
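The coarse stereo stage of such a pipeline reduces to triangulating an object point from two calibrated views. A minimal sketch of standard linear (DLT) triangulation, not the paper's exact formulation, with toy camera matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 : 3x4 camera projection matrices; x1, x2 : (u, v) image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null vector of A = homogeneous X
    X = Vt[-1]
    return X[:3] / X[3]

# Two axis-aligned unit-focal cameras with a 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                              # view 1 projection
x2 = (X_true + np.array([-1.0, 0, 0]))[:2] / X_true[2]   # view 2 projection
X_est = triangulate(P1, P2, x1, x2)
```

In the paper's scheme this rough estimate would then be refined by the hand-eye camera once the robot has approached the object.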

카메라 Back Cover의 형상인식 및 납땜 검사용 Vision 기술 개발 (Development of Vision Technology for the Test of Soldering and Pattern Recognition of Camera Back Cover)

  • 장영희
    • 한국공작기계학회:학술대회논문집 / 한국공작기계학회 1999년도 추계학술대회 논문집 / pp.119-124 / 1999
  • This paper presents a new approach to pattern recognition of the camera back cover and inspection of its soldering. For real-time pattern recognition and soldering inspection, the MVB-03 vision board is used. Images can be captured from a standard CCD monochrome camera at resolutions up to 640$\times$480 pixels, and various options are available for color cameras, asynchronous camera reset, and line-scan cameras. Image processing is performed using a Texas Instruments TMS320C31 digital signal processor. Images are displayed on a standard composite video monitor with support for non-destructive color overlay. System-level processing is possible using C30 machine code, and application software can be written in Borland C++ or Visual C++.


Correction of Photometric Distortion of a Micro Camera-Projector System for Structured Light 3D Scanning

  • Park, Go-Gwang;Park, Soon-Yong
    • 센서학회지 / Vol. 21, No. 2 / pp.96-102 / 2012
  • This paper addresses photometric distortion problems of a compact 3D scanning sensor composed of a micro-size, inexpensive camera-projector system. Many micro-size cameras and projectors are now available, but erroneous 3D scanning results may arise from their poor, nonlinear photometric properties. This paper solves two inherent photometric distortions of such sensors. First, the response functions of the camera and the projector are derived from least-squares solutions of passive and active calibration, respectively. Second, vignetting of the vision camera is corrected with a conventional method, while the projector vignetting is corrected using the planar homography between the image planes of the projector and the camera. Experimental results show that the proposed technique enhances the linearity of the phase patterns generated by the sensor.
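The first correction step, fitting a nonlinear response function by least squares and then compensating for it, can be sketched as follows. The gamma-style response and the cubic model are assumptions for illustration, not the paper's measured curves:

```python
import numpy as np

# Synthetic projector response: observed intensity is a nonlinear
# (gamma-like) function of the commanded intensity. Gamma 2.2 is assumed.
cmd = np.linspace(0.01, 1.0, 50)
obs = cmd ** 2.2

# Least-squares fit of the *inverse* response with a cubic polynomial,
# standing in for the paper's passive/active response-function calibration.
inv_coeffs = np.polyfit(obs, cmd, deg=3)

# Photometric correction: precompensate the command so the observed
# intensity becomes approximately linear in the desired value.
desired = np.linspace(0.05, 0.95, 10)
corrected_cmd = np.polyval(inv_coeffs, desired)
linearized = corrected_cmd ** 2.2       # what the device would now show

raw_err = np.max(np.abs(desired ** 2.2 - desired))   # error without correction
corr_err = np.max(np.abs(linearized - desired))      # error with correction
```

Linearizing the response this way is what keeps the projected phase patterns of a structured-light scanner sinusoidal rather than distorted.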

로봇 비젼 제어기법에 사용된 카메라의 최적 배치에 대한 실험적 연구 (An Experimental Study on the Optimal Arrangement of Cameras Used for the Robot's Vision Control Scheme)

  • 민관웅;장완식
    • 한국생산제조학회지 / Vol. 19, No. 1 / pp.15-25 / 2010
  • The objective of this study is to investigate the optimal arrangement of cameras used in the robot's vision control scheme. The vision control scheme involves two estimation models: a parameter estimation model and a robot joint-angle estimation model. For this study, the robot's working region is divided into three work spaces (left, central, and right), and cameras are positioned on circular arcs with radii of 1.5 m, 2.0 m, and 2.5 m, with seven cameras placed on each arc. For the experiment, nine camera arrangements are selected in each work space, each using three cameras. Six parameters are estimated for each camera using the developed parameter estimation model to show the suitability of the vision system model for the nine arrangements in each work space. Finally, the robot's joint angles are estimated with the joint-angle estimation model according to the camera arrangement, and the effect of camera arrangement on the robot's point-position control is shown experimentally.

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems / Vol. 19, No. 6 / pp.730-744 / 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons and thereby improve their lives. Activity recognition from egocentric video remains difficult because of the large variations in how actions are executed. The camera acts as an external device, similar to a robot serving as a personal assistant: the inferred information is used both online to assist the person and offline to support the assistant. The main purpose of this paper is an efficient and simple recognition method that is robust to this variability and uses only egocentric camera data, based on a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric plus several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
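The per-frame computation of such a CNN-based recognizer boils down to convolution, a nonlinearity, pooling, and a softmax classifier. A toy single-channel forward pass in plain NumPy (the kernel, weights, and sizes are illustrative, not the paper's network):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One toy egocentric "frame" and one edge-like kernel: a single forward
# step of the conv -> ReLU -> pooling -> linear -> softmax pipeline.
frame = np.random.default_rng(0).random((8, 8))
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
feat = np.maximum(conv2d(frame, kernel), 0.0)    # ReLU activation
pooled = feat.max()                              # global max pooling
W = np.array([0.5, -0.5])                        # toy 2-class classifier
probs = softmax(W * pooled)                      # activity-class probabilities
```

A real recognizer stacks many such layers, learns the kernels and weights from labeled video, and classifies among dozens of activities rather than two.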

강체 이동타겟 추적을 위한 일괄처리방법을 이용한 로봇비젼 제어기법 개발 (Development of Robot Vision Control Schemes based on Batch Method for Tracking of Moving Rigid Body Target)

  • 김재명;최철웅;장완식
    • 한국기계가공학회지 / Vol. 17, No. 5 / pp.161-172 / 2018
  • This paper proposes a robot vision control method for tracking a moving rigid-body target, using a vision system model that can actively handle changes in camera parameters even when the relative position between the camera and the robot, the focal length, and the camera posture change. The proposed scheme uses a batch method that processes all the vision data acquired at each moving point of the robot, in two variants: one gives equal weight to all acquired data, while the other gives greater weight to recent data acquired near the target. Using the two proposed schemes, experiments were performed to estimate the positions of a moving rigid-body target whose spatial positions are unknown and for which only vision data are available. The efficiency of each scheme is evaluated by comparing accuracy across the experimental results.
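The two weighting variants can be sketched as weighted least squares over the batch of vision data. The linear model, noise, drift, and weight profile below are toy assumptions, not the paper's estimator:

```python
import numpy as np

def weighted_lsq(A, b, w):
    """Weighted least squares: minimize sum_i w_i * (A_i x - b_i)^2."""
    sw = np.sqrt(w)
    return np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]

# Toy batch of vision measurements of a linear model whose early samples
# are stale (biased): recent rows better reflect the current target state.
rng = np.random.default_rng(1)
A = np.column_stack([np.ones(20), np.arange(20.0)])
x_true = np.array([2.0, 0.5])
b = A @ x_true + rng.normal(0, 0.01, 20)
b[:10] += 0.5                              # bias on the older half

w_equal = np.ones(20)                      # scheme 1: equal weighting
w_recent = 0.7 ** np.arange(19, -1, -1)    # scheme 2: emphasize recent data
x_eq = weighted_lsq(A, b, w_equal)
x_re = weighted_lsq(A, b, w_recent)
```

When older data no longer describe the moving target, the recency-weighted batch estimate lands closer to the true parameters than the equally weighted one.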

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 1 / pp.438-444 / 2018
  • Structured light vision systems are widely used in 3D surface profiling. Typically such a system is composed of a camera and a laser that projects a line (stripe) onto the target, and calibration is necessary to acquire 3D information. Conventional calibration algorithms find the pose of the camera and the equation of the laser's stripe plane in the camera's coordinate system, so 3D reconstruction is only possible in the camera frame. In most cases this is sufficient, but these algorithms require multiple images acquired under different poses. In this paper, we propose a calibration algorithm that works with just one shot and yields 3D reconstruction in both the camera frame and the laser frame, using a newly designed calibration structure with multiple vertical planes on the ground plane. Reconstruction in both frames gives more flexibility in applications, and the proposed algorithm also improves the accuracy of 3D reconstruction.
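Once the camera intrinsics and the laser-stripe plane are calibrated, each stripe pixel is reconstructed by intersecting its viewing ray with that plane. A minimal sketch with assumed intrinsics and a toy plane (not the paper's calibration structure):

```python
import numpy as np

def reconstruct(pixel, K, plane):
    """Intersect the camera ray through `pixel` with the laser plane.

    K     : 3x3 camera intrinsic matrix
    plane : (n, d) with n . X + d = 0 describing the laser stripe plane
            in the camera frame (the output of calibration).
    """
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    n, d = plane
    t = -d / (n @ ray)          # the ray is X = t * ray
    return t * ray

# Assumed intrinsics (800 px focal length, 640x480 image) and a toy
# laser plane at z = 0.5 m facing the camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
plane = (np.array([0.0, 0.0, 1.0]), -0.5)
X = reconstruct((400, 240), K, plane)
```

Repeating this for every pixel on the detected stripe yields one 3D profile per image; sweeping the stripe over the surface builds the full scan.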

바이프리즘 스테레오 시각 센서를 이용한 GMA 용접 비드의 3차원 형상 측정 (Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor)

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining / Vol. 19, No. 2 / pp.200-207 / 2001
  • The three-dimensional bead profile in GMAW was measured using a biprism stereo vision sensor consisting of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, this system has various advantages over the conventional two-camera stereo vision system, such as finding corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find corresponding points along the pool boundary: the iso-intensity contour corresponding to the pool boundary was found at pixel precision, and a filter-based matching algorithm refined the corresponding points to subpixel precision. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
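Because the biprism produces two virtual views on a single CCD, matching reduces to a horizontal-scanline search, and depth then follows the standard stereo relation Z = fB/d. A toy sketch in which the focal length, virtual baseline, and disparities are assumed values, not the paper's calibration results:

```python
import numpy as np

# With a biprism, corresponding points lie on the same scanline, so each
# match gives a disparity d and a depth Z = f * B / d, where f is the
# focal length (pixels) and B the effective virtual baseline (meters).
f = 800.0                                # assumed focal length in pixels
B = 0.02                                 # assumed virtual baseline in meters
xl = np.array([350.0, 352.0, 356.0])     # left-view scanline positions
xr = np.array([330.0, 330.0, 332.0])     # matched right-view positions
disparity = xl - xr
Z = f * B / disparity                    # depth of each bead-profile point
```

Sweeping the scanline over the weld pool image turns such depth samples into the 3D bead profile reported in the paper.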
