• Title/Abstract/Keywords: Camera and Robot Calibration

Search results: 99 (processing time 0.037 s)

3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2004 / pp.1458-1463 / 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Many of these applications now require 3D information about objects in real time. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the camera's image formation process. Recently, many vision systems have adopted stereo cameras, especially for 3D tracking. 3D feature-based tracking (3DFBT), one of the 3D tracking approaches based on stereo vision, has many advantages over other tracking methods. Assuming the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, existing calibration methods rely on an accurate camera model, so modeling error and sensitivity to lens distortion are built in. This thesis therefore proposes a 3D feature-based tracking method that uses an SVM to solve the reconstruction problem.
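For context, the reconstruction step that the paper replaces with an SVM is classical stereo triangulation, whose output depends directly on the calibrated focal length and baseline. A minimal sketch for a rectified stereo pair (the function name and the idealized pinhole assumptions are mine, not the paper's):

```python
def triangulate(uL, uR, v, f, b):
    """Reconstruct a 3D point from a rectified stereo pair.

    uL, uR : horizontal pixel coordinates in the left/right image,
             measured from the principal point
    v      : vertical pixel coordinate (equal in both images after
             rectification)
    f      : focal length in pixels; b : baseline in metres
    """
    d = uL - uR              # disparity
    if d <= 0:
        raise ValueError("point must have positive disparity")
    Z = f * b / d            # depth grows as disparity shrinks
    X = uL * Z / f
    Y = v * Z / f
    return X, Y, Z
```

The SVM approach instead learns the mapping from stereo image features to 3D coordinates from training examples, sidestepping this explicit camera model and its calibration/distortion errors.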


Localization of Mobile Robot in Unstructured Environment Using Auto-Calibration Algorithm

  • 엄위섭;서대근;박재현;이장명
    • Journal of Institute of Control, Robotics and Systems / Vol. 15, No. 2 / pp.211-217 / 2009
  • This paper proposes a way to expand the usable area of beacon-based localization. We developed an auto-calibration algorithm that recognizes the location of a beacon attached at an arbitrary position by using the information from the existing beacons. In this way a mobile robot can overcome the limitation that localization is possible only within the area covered by pre-installed beacons, since beacons cannot be installed at accurate locations when passing through a dangerous or unknown zone. With the auto-calibration algorithm, the robot can move gradually into the unknown zone and later recognize its own location from a safe zone. Localization is essential for mobile robots, and a certain degree of reliability must be guaranteed. Mobile robots are generally designed to work well when the surroundings are well arranged, and localization techniques using cameras, lasers, and beacons are well developed; due to sensor characteristics, however, a place may be dark, suffer radio interference, or make beacon installation difficult. The effectiveness of the proposed method is demonstrated through experiments.
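The core of such an auto-calibration step can be sketched as trilateration: from several poses the robot already knows (via the existing beacons), it measures its range to the newly attached beacon and solves for the beacon's position. A minimal sketch using three range measurements (the linearization and function names are my illustration, not the paper's exact algorithm):

```python
def locate_beacon(poses, ranges):
    """Estimate an unknown beacon's 2D position from three known
    robot poses (x, y) and the measured ranges to the beacon.

    Subtracting the circle equation at poses[0] from those at
    poses[1] and poses[2] linearizes the problem into a 2x2 system,
    solved here with Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = poses
    r0, r1, r2 = ranges
    # rows of the linear system  A @ [x, y] = c
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    c1 = (x1**2 + y1**2 - x0**2 - y0**2) + (r0**2 - r1**2)
    c2 = (x2**2 + y2**2 - x0**2 - y0**2) + (r0**2 - r2**2)
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("measurement poses are collinear")
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return x, y
```

With more than three measurements the same linearized system would be solved in a least-squares sense, which is what makes the estimate reliable enough to extend the localization area.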

Obstacle Detection and Self-Localization without Camera Calibration Using Projective Invariants

  • 노경식;이왕헌;이준웅;권인소
    • Journal of Institute of Control, Robotics and Systems / Vol. 5, No. 2 / pp.228-236 / 1999
  • In this paper, we propose vision-based self-localization and obstacle detection algorithms for indoor mobile robots. The algorithms do not require calibration and can work with only a single image by using the projective invariant relationship between natural landmarks. We predefine an obstacle-free risk zone for the robot and update the image of the risk zone; obstacles inside the zone are detected by comparing the averaged image with the current image of a new risk zone. The positions of the robot and the obstacles are determined by relative positioning, which requires no prior information for positioning the robot. The robustness and feasibility of our algorithms have been demonstrated through experiments in hallway environments.
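The projective invariant underlying such calibration-free methods is the cross-ratio: for four collinear points it is preserved by any perspective imaging of the line. A small sketch (the scalar parameterization is my simplification; the paper works with image features of natural landmarks):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points, given as scalar
    positions along the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def project(x, p, q, r, s):
    """A 1D projective map x -> (p*x + q) / (r*x + s), standing in
    for the camera's perspective projection of the line."""
    return (p * x + q) / (r * x + s)
```

Because the cross-ratio survives projection, it can be compared between the stored landmark model and the current image without knowing any camera parameters.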


Implementation of Tracking and Grasping of a Moving Object Using Visual Feedback

  • 권철;강형진;박민용
    • Korean Institute of Electrical Engineers: Proceedings of the 1995 Autumn Conference / pp.579-582 / 1995
  • Recently, vision systems have found a wide and growing range of applications on account of the rich information the visual mechanism provides. In the control field in particular, vision systems have been applied to industrial robots. In this paper, object tracking and grasping tasks are accomplished by a robot vision system with a camera mounted in the robot hand. A camera setting method is proposed to implement the task in a simple way. Despite calibration error, a stable grasping task is achieved using a tracking control algorithm based on visual features.


Monitoring the Welding Gap/Profile with a Visual Sensor

  • 김창현;최태용;이주장;서정;박경택;강희신
    • Korean Society of Laser Processing: Proceedings of the 2005 Spring Conference / pp.3-8 / 2005
  • Robot systems are widely used in welding manufacturing. Essential tasks in operating a welding robot are acquiring the position and/or shape of the parent metal. While many kinds of contact and non-contact sensing exist for seam tracking and robot automation, this paper describes a system that monitors the shape of the welding part using a line-type structured laser diode and a visual sensor. It includes correction of the radial distortion often found in images from cameras with short focal lengths. Direct Linear Transformation (DLT) is used for camera calibration, and the three-dimensional shape of the parent metal is obtained by a simple linear transformation, so the system operates in real time. Experiments were carried out to evaluate the performance of the developed system.
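The radial-distortion correction mentioned above can be sketched with the usual one-term polynomial model (the coefficient name k1 and the single-term truncation are my simplification; the paper does not state its exact model):

```python
def correct_radial(xd, yd, k1, cx, cy):
    """Approximate removal of radial lens distortion.

    (xd, yd) : distorted pixel coordinates
    (cx, cy) : principal point; k1 : first radial coefficient
    The point is displaced along the radius from the principal
    point by a factor (1 + k1 * r^2).
    """
    x, y = xd - cx, yd - cy
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return cx + x * s, cy + y * s
```

After undistortion, DLT estimates the camera's projection matrix linearly from known 3D-2D correspondences, which is why the subsequent shape recovery reduces to a simple linear transformation and stays real-time.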


Utilization of Vision in Off-Line Teaching for an Assembly Robot

  • 안철기
    • Korean Society of Machine Tool Engineers: Proceedings of the 2000 Spring Conference / pp.543-548 / 2000
  • In this study, an interactive programming method for a robot in an electronic part assembly task is proposed. Many industrial robots are still taught and programmed with a teach pendant: the robot is guided by a human operator to the desired application locations, and these motions are recorded, later edited in the robotic language used by the robot controller, and played back repetitively to perform the task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator teaches the desired locations on an image acquired through a CCD camera mounted on the robot hand. A robotic-language program is automatically generated and downloaded to the robot controller. This teaching process is implemented through off-line programming (OLP) software developed for the robotic assembly system used in this study. A calibration process is established to transform locations in image coordinates into robot coordinates. The proposed teaching method is implemented and evaluated on an assembly system for soldering electronic parts on a circuit board, in which a six-axis articulated robot executes the assembly task according to the off-line teaching.
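When the circuit board lies in a plane roughly parallel to the image plane, the image-to-robot calibration can be sketched as fitting an affine map from three taught correspondences (the planar/affine assumption and all names below are my illustration; the paper does not state its exact calibration model):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_affine(img_pts, rob_pts):
    """Solve x = a*u + b*v + c, y = d*u + e*v + f from three
    (u, v) -> (x, y) correspondences via Cramer's rule."""
    A = [[u, v, 1.0] for (u, v) in img_pts]
    dA = det3(A)
    coeffs = []
    for k in (0, 1):                    # k=0: x-equation, k=1: y-equation
        rhs = [p[k] for p in rob_pts]
        row = []
        for col in (0, 1, 2):
            M = [r[:] for r in A]       # replace one column with rhs
            for i in range(3):
                M[i][col] = rhs[i]
            row.append(det3(M) / dA)
        coeffs.append(row)
    return coeffs                       # [[a, b, c], [d, e, f]]

def image_to_robot(u, v, coeffs):
    (a, b, c), (d, e, f) = coeffs
    return a * u + b * v + c, d * u + e * v + f
```

Once fitted, every location the operator clicks in the image can be converted to a robot-frame target for the generated program.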


Position Control of a Robot Manipulator Based on a Stereo Vision System

  • 조환진;박광호;기창두
    • Korean Society for Precision Engineering: Proceedings of the 2001 Spring Conference / pp.590-593 / 2001
  • In this paper we describe position determination for a six-axis robot using stereo vision and an image-based control method. A stereo vision system needs additional processing time compared with a mono vision system, so to reduce the time required we use the stereo pair not for image Jacobian estimation but for depth estimation. Image-based control does not require high-precision camera calibration because it uses an image Jacobian. The experiment is executed in two parts: depth estimation by stereo vision, and positioning of the robot manipulator.
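How the stereo depth estimate enters image-based control can be sketched for a single point feature under translation-only camera motion, where the pixel error maps to a camera velocity through a factor Z/f (the proportional form and the gain name lam are my simplification of image-Jacobian control, not the paper's controller):

```python
def ibvs_step(u, v, u_des, v_des, Z, f, lam=0.5):
    """One step of a simplified image-based control law.

    The pixel error is mapped to a camera translation command; the
    stereo depth estimate Z converts pixels to metres, so only a
    rough focal length f is needed rather than a full calibration.
    """
    ex, ey = u_des - u, v_des - v     # image-space error (pixels)
    vx = lam * ex * Z / f             # translation command (metres/step)
    vy = lam * ey * Z / f
    return vx, vy
```

This is why the paper can spend the stereo computation on depth alone: the image Jacobian of a point feature depends on Z, and a measured Z removes the need to estimate the Jacobian numerically.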


Development of a Robot Vision System and Algorithm for Measuring Object Distance and Width

  • 김회인;김갑순
    • Journal of Institute of Control, Robotics and Systems / Vol. 17, No. 2 / pp.88-92 / 2011
  • This paper presents the development of a robot vision system and an image-processing algorithm for measuring the size of an object and the distance from the robot to the object. Vision systems with a single camera cannot measure size and distance accurately when the location of the system changes or the objects are not on the ground. We therefore developed a vision system that measures object size and distance using two cameras and a two-degree-of-freedom robot mechanism, developed the corresponding image-processing algorithm, and carried out characterization tests of the developed system. The results indicate that the system can measure object size and distance accurately.
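One plausible geometry for such a two-camera head on a two-DOF mechanism (my assumption for illustration; the paper does not spell out its equations) is vergence ranging: each camera pans until the object is centred, the two pan angles plus the baseline give the distance, and the pinhole model then converts apparent pixel width to metric width:

```python
import math

def distance_from_pan(baseline, a1, a2):
    """Range to an object centred by both cameras, from the two
    inward pan angles (radians) and the camera baseline (metres)."""
    return baseline / (math.tan(a1) + math.tan(a2))

def object_width(pixel_width, Z, f):
    """Metric width from apparent pixel width at depth Z
    (pinhole model, focal length f in pixels)."""
    return pixel_width * Z / f
```

Because the range comes from the pan angles rather than ground-plane assumptions, the measurement remains valid when the head moves or the object is elevated.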

Development of a Camera Calibration Technique Using a Neural Network

  • 한성현;왕한홍;장영희
    • Institute of Control, Robotics and Systems: Proceedings of the 1997 Korea Automatic Control Conference (KACC); KEPCO Seoul Training Center; 17-18 Oct. 1997 / pp.1617-1620 / 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as camera assembly. Our purpose is to develop a vision system for pattern recognition and automatic inspection of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.
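The three distortion terms the model accounts for can be written out explicitly. A common formulation is sketched below (the coefficient names k1, k2, p1, p2, s1, s2 follow the usual convention and are my choice; the paper gives no symbols):

```python
def distortion(x, y, k1, k2, p1, p2, s1, s2):
    """Total displacement (dx, dy) of an ideal image point (x, y)
    in normalized coordinates.

    radial      : k1, k2 terms, displacement along the radius
    decentering : p1, p2 terms, from non-collinear lens centres
    thin prism  : s1, s2 terms, from lens/assembly imperfection
    """
    r2 = x * x + y * y
    radial_x = x * (k1 * r2 + k2 * r2 * r2)
    radial_y = y * (k1 * r2 + k2 * r2 * r2)
    decen_x = p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    decen_y = p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    prism_x = s1 * r2
    prism_y = s2 * r2
    return (radial_x + decen_x + prism_x,
            radial_y + decen_y + prism_y)
```

The neural network in the paper is trained to absorb these terms implicitly rather than estimating each coefficient individually.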


Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model

  • 임이지;최대선
    • Journal of the Korea Institute of Information Security and Cryptology / Vol. 33, No. 6 / pp.1099-1110 / 2023
  • Perception systems for autonomous driving and robot navigation fuse multiple sensors (multi-sensor fusion) to improve performance before carrying out vision tasks such as object recognition, tracking, and lane detection. Research on deep learning models that fuse camera and LiDAR sensors is currently active. However, deep learning models are vulnerable to adversarial attacks that manipulate the input data. Existing attacks on multi-sensor-based autonomous driving perception systems have focused on inducing obstacle misdetection by lowering the confidence score of the object recognition model, but they are limited to attacking a single target model. An attack on the sensor fusion stage, by contrast, can cascade errors into the vision tasks that follow fusion, and this risk needs to be considered. In addition, attacking the LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. This study proposes an attack method that degrades the accuracy of LCCNet, an image-scaling-based camera-LiDAR calibration model. The proposed method applies a scaling attack to the input LiDAR points. Experiments across scaling algorithms and attack sizes induced fusion errors of 77% or more on average.
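As a simplified illustration of the attack surface (not the paper's actual algorithm, which targets LCCNet's learned calibration), uniformly scaling the input LiDAR points leaves the cloud visually plausible while shifting its projection onto the camera image:

```python
def scale_attack(points, s):
    """Apply a uniform scaling perturbation to a LiDAR point cloud.

    points : list of (x, y, z, intensity) tuples
    s      : scale factor close to 1.0, so the attacked cloud is
             hard to distinguish from the clean one by eye
    """
    return [(x * s, y * s, z * s, i) for (x, y, z, i) in points]
```

Against a calibration model, even a few percent of scaling perturbs the predicted camera-LiDAR extrinsics, and that misalignment then propagates to every downstream fusion-based vision task.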