• Title/Abstract/Keywords: perspective camera model

Search results: 62 items (processing time 0.028 s)

SURF와 Label Cluster를 이용한 이동형 카메라에서 동적물체 추출 (Moving Object Detection Using SURF and Label Cluster Update in Active Camera)

  • 정용한;박은수;이형호;왕덕창;허욱열;김학일
    • 제어로봇시스템학회논문지 / Vol. 18, No. 1 / pp.35-41 / 2012
  • This paper proposes a moving object detection algorithm for an active camera system that can be applied to mobile robots and intelligent surveillance systems. Most moving object detection algorithms are based on a stationary camera system: either a fixed surveillance system that does not need to consider background motion, or a robot tracking system that tracks pre-learned objects. Unlike the stationary case, an active camera system makes it difficult to extract the moving object because of the errors introduced by the camera's own movement. To overcome this problem, the camera motion is compensated using SURF and a pseudo-perspective model, and the moving object is then extracted efficiently using a stochastic Label Cluster transport model. The method can detect moving objects because it minimizes the effect of background movement. Our approach proves robust and effective for moving object detection in an active camera system.
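The pseudo-perspective model used above for motion compensation is, in its common formulation, a quadratic polynomial mapping fitted to feature matches; the paper's exact parameterization may differ. A minimal least-squares fit of that model to SURF-style point correspondences can be sketched as:

```python
import numpy as np

def fit_pseudo_perspective(src, dst):
    """Fit the 8-parameter pseudo-perspective motion model to matched
    points (src, dst: (N, 2) arrays, e.g. from SURF matching).
    Model: x' = a0 + a1*x + a2*y + a6*x^2 + a7*x*y
           y' = a3 + a4*x + a5*y + a6*x*y + a7*y^2"""
    x, y = src[:, 0], src[:, 1]
    n = len(src)
    A = np.zeros((2 * n, 8))
    A[0::2, 0] = 1.0; A[0::2, 1] = x; A[0::2, 2] = y
    A[0::2, 6] = x * x; A[0::2, 7] = x * y
    A[1::2, 3] = 1.0; A[1::2, 4] = x; A[1::2, 5] = y
    A[1::2, 6] = x * y; A[1::2, 7] = y * y
    b = dst.reshape(-1)                    # interleaved x'0, y'0, x'1, ...
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

def apply_pseudo_perspective(params, pts):
    """Warp points with a fitted pseudo-perspective model."""
    a0, a1, a2, a3, a4, a5, a6, a7 = params
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a0 + a1 * x + a2 * y + a6 * x * x + a7 * x * y,
                     a3 + a4 * x + a5 * y + a6 * x * y + a7 * y * y], axis=1)
```

Warping the previous frame with the fitted model compensates the camera motion, so frame differencing afterwards highlights independently moving objects.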

새로운 선형의 외형적 카메라 보정 기법 (A New Linear Explicit Camera Calibration Method)

  • 도용태
    • 센서학회지 / Vol. 23, No. 1 / pp.66-71 / 2014
  • Vision is the most important sensing capability for both humans and sensory smart machines such as intelligent robots. The sensed real 3D world and its 2D camera image can be related mathematically by a process called camera calibration. In this paper, we present a novel linear solution to camera calibration. Unlike most existing linear calibration methods, the proposed technique can identify camera parameters explicitly. Through the step-by-step procedure of the proposed method, the real physical elements of the perspective projection transformation matrix between 3D points and the corresponding 2D image points can be identified. This explicit solution will be useful for many practical 3D sensing applications, including robotics. We verified the proposed method using various cameras under different conditions.
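Explicitly identifying physical parameters from the perspective projection matrix can be illustrated with the standard RQ-based factorization P = K[R|t]; this is a generic sketch, not necessarily the paper's step-by-step procedure, and the intrinsics/extrinsics below are assumed values:

```python
import numpy as np

def rq(M):
    """RQ decomposition of a 3x3 matrix via QR of the row-reversed transpose,
    with signs fixed so the triangular factor has a positive diagonal."""
    P = np.flipud(np.eye(3))               # row-reversal permutation
    q, r = np.linalg.qr((P @ M).T)
    R, Q = P @ r.T @ P, P @ q.T            # upper-triangular, orthogonal
    S = np.diag(np.sign(np.diag(R)))       # positive focal lengths
    return R @ S, S @ Q

# Assumed camera for the sketch: intrinsics K, rotation R, translation t.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, -0.2, 5.0])
Pmat = K @ np.hstack([R, t[:, None]])      # 3x4 perspective projection matrix

# Project a homogeneous 3D point to the image.
X = np.array([0.5, 0.3, 2.0, 1.0])
u = Pmat @ X
u = u[:2] / u[2]

# Recover the physical elements K and R back from the left 3x3 block.
K_est, R_est = rq(Pmat[:, :3])
K_est /= K_est[2, 2]
```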

모바일 자율 주행 로봇의 지면 표현을 위한 확장된 적응형 역투영 맵핑 방법 (Extended and Adaptive Inverse Perspective Mapping for Ground Representation of Autonomous Mobile Robot)

  • 박주용;조영근
    • 로봇학회논문지 / Vol. 18, No. 1 / pp.59-65 / 2023
  • This paper proposes an Extended and Adaptive Inverse Perspective Mapping (EA-IPM) model that can obtain an accurate bird's-eye view (BEV) from a forward-looking monocular camera on sidewalks with various curves. While Inverse Perspective Mapping (IPM) is a good way to obtain ground information, conventional methods assume a fixed relationship between the camera and the ground. Owing to the nature of mobile robot driving environments, walking environments with frequent motion changes are more common than flat roads, and these motions severely degrade IPM results. We therefore extend the existing adaptive IPM model, which is robust to pitch motion, with formulas for a complementary Y-derivation process and for roll motion, making IPM applicable on sidewalks. To demonstrate the performance of the proposed method, we evaluated our results on both synthetic and real road and sidewalk datasets.
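Under the conventional fixed camera-ground assumption that the paper extends, IPM reduces to inverting a ground-plane homography. A minimal sketch, assuming a known pitch and camera height with the ground plane at y = 0 (the adaptive variants instead update these per frame, and roll would enter as an extra rotation factor):

```python
import numpy as np

def ipm_homography(K, pitch, height):
    """Homography taking ground-plane coordinates (x, z, 1) to image pixels
    for a camera `height` above the plane y = 0, pitched down by `pitch`."""
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])          # world-to-camera rotation
    t = -R @ np.array([0.0, height, 0.0])    # camera center at (0, height, 0)
    return K @ np.column_stack([R[:, 0], R[:, 2], t])

def to_ground(H, pixels):
    """Inverse perspective mapping: (N, 2) pixels -> (x, z) on the ground."""
    g = np.linalg.inv(H) @ np.hstack([pixels, np.ones((len(pixels), 1))]).T
    return (g[:2] / g[2]).T
```

Resampling the image over a regular (x, z) grid of `to_ground` outputs produces the bird's-eye view.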

렌즈왜곡효과를 보상하는 새로운 Hand-eye 보정기법 (A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect)

  • 정회범
    • 대한기계학회:학술대회논문집 / 대한기계학회 2000년도 추계학술대회논문집A / pp.596-601 / 2000
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.


렌즈왜곡효과를 보상하는 새로운 hand-eye 보정기법 (A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect)

  • 정회범
    • 한국정밀공학회지 / Vol. 19, No. 1 / pp.172-179 / 2002
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.
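Folding lens distortion into the perspective model, as described above, typically means applying a radial polynomial to the normalized image coordinates before the intrinsic mapping. A hedged sketch (the coefficients k1, k2 are illustrative; the paper's exact distortion terms may differ):

```python
import numpy as np

def project_with_distortion(K, k1, k2, pts_cam):
    """Perspective projection of camera-frame 3D points with radial lens
    distortion folded into the model (k1, k2: radial coefficients)."""
    x = pts_cam[:, 0] / pts_cam[:, 2]      # normalized image coordinates
    y = pts_cam[:, 1] / pts_cam[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2       # radial distortion factor
    u = K[0, 0] * (x * d) + K[0, 2]        # apply intrinsics to distorted coords
    v = K[1, 1] * (y * d) + K[1, 2]
    return np.stack([u, v], axis=1)
```

With k1 < 0 (barrel distortion) off-axis points are pulled toward the principal point, while the on-axis point is unaffected.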

효율적인 파노라믹 영상 구축 (Construction of Efficient Panoramic Image)

  • 신성윤;백정욱;이양원
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2010년도 춘계학술대회 / pp.155-156 / 2010
  • A 'panoramic image' is a single new image generated by registering several related images; it is also commonly called a 'mosaic image'. In this paper, data are captured through a camera, so a perspective model is used to recover the camera parameters, and a method for measuring the discrepancy between frames is presented in order to minimize it. We also present methods for constructing panoramic images from both a fixed reference and a time-varying reference.
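The inter-frame discrepancy measurement can be illustrated with a minimal translation-only alignment sketch; real mosaicking applies the full perspective model, so this only shows the discrepancy-minimization idea:

```python
import numpy as np

def frame_discrepancy(a, b):
    """Mean squared intensity difference over the overlap of two frames."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def best_horizontal_shift(a, b, max_shift):
    """Pick the overlap width that minimizes the discrepancy between frame
    a's right strip and frame b's left strip (translation-only alignment)."""
    return min(range(1, max_shift + 1),
               key=lambda s: frame_discrepancy(a[:, -s:], b[:, :s]))
```

The chosen overlap then determines where the frames are pasted into the mosaic, e.g. `np.hstack([a, b[:, shift:]])`.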


Rectification of Perspective Text Images on Rectangular Planes

  • Le, Huy Phat;Madhubalan, Kavitha;Lee, Guee-Sang
    • International Journal of Contents / Vol. 6, No. 4 / pp.1-7 / 2010
  • Natural images often contain useful information about the scene, such as text or company logos placed on a rectangular plane. The 2D images captured from such objects by a camera are often distorted because of the effects of the perspective projection camera model. This distortion makes acquiring the text information difficult. In this study, we detect the rectangular object on which the text is written and then restore the image by removing the perspective distortion. The Hough transform is used to detect the boundary lines of the rectangular object, and a bilinear transformation is applied to restore the original image.
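The restoration step can be sketched as follows: once the detected boundary lines yield the quad's four corners, each pixel of the rectified rectangle is pulled from the source image via the bilinear map of the unit square onto the quad (nearest-neighbour sampling here for brevity; this is a generic sketch, not the paper's exact implementation):

```python
import numpy as np

def bilinear_map(corners, s, t):
    """Map normalized rectified coords (s, t) in [0, 1]^2 into the source
    quadrilateral `corners` = [TL, TR, BR, BL], each an (x, y) pair."""
    tl, tr, br, bl = [np.asarray(c, float) for c in corners]
    return ((1 - s) * (1 - t))[..., None] * tl + (s * (1 - t))[..., None] * tr \
         + (s * t)[..., None] * br + ((1 - s) * t)[..., None] * bl

def rectify(image, corners, out_w, out_h):
    """Resample the quad region into an out_h x out_w rectangle."""
    s, t = np.meshgrid(np.linspace(0, 1, out_w), np.linspace(0, 1, out_h))
    src = bilinear_map(corners, s, t)
    xi = np.clip(np.round(src[..., 0]).astype(int), 0, image.shape[1] - 1)
    yi = np.clip(np.round(src[..., 1]).astype(int), 0, image.shape[0] - 1)
    return image[yi, xi]
```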

저가 카메라를 이용한 스마트 장난감 게임을 위한 모형 자동차 인식 (Recognition of Model Cars Using Low-Cost Camera in Smart Toy Games)

  • 강민혜;홍원기;고재필
    • 대한임베디드공학회논문지 / Vol. 19, No. 1 / pp.27-32 / 2024
  • Recently, there has been growing interest in integrating physical toys into video gaming within the game content business. This paper introduces a novel method that leverages a low-cost camera, rather than sensor attachments, to meet this rising demand. We overcome the inherent limitations of low-cost cameras with an optical design tailored specifically to model car recognition. The approach focuses on recognizing the underside of the car and addresses the challenges associated with this particular perspective. Our method employs a transfer learning model trained specifically for this task. We achieved a 100% recognition rate, highlighting the importance of collecting data under various camera exposures. This paper serves as a valuable case study for incorporating low-cost cameras into vision systems.

Distortion Correction Modeling Method for Zoom Lens Cameras with Bundle Adjustment

  • Fang, Wei;Zheng, Lianyu
    • Journal of the Optical Society of Korea / Vol. 20, No. 1 / pp.140-149 / 2016
  • For visual measurement in dynamic scenarios, a zoom lens camera is more flexible than a fixed-lens one. However, the difficulty of predicting distortion across the whole focal range greatly limits the widespread application of zoom lens cameras. Thus, a novel sequential distortion correction method for a zoom lens camera is proposed in this study. In this paper, a distortion assessment method free of coupling effects is presented using an elaborated chessboard pattern. Then, the appropriate distortion correction model for a zoom lens camera is derived from comparisons of several existing models and methods. To obtain a rectified image at any zoom setting, a global distortion correction modeling method is developed with bundle adjustment. Based on selected zoom settings, optimized quadratic functions of the distortion parameters are obtained from a global perspective. Using the proposed method, we can rectify all images from the calibrated zoom lens camera. Experimental results on different zoom lens cameras validate the feasibility and effectiveness of the proposed method.
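The global modeling idea, quadratic functions of the distortion parameters over zoom settings, can be sketched with made-up calibration data (the zoom values and k1 estimates below are purely illustrative, and the paper jointly optimizes these curves with bundle adjustment rather than a plain polynomial fit):

```python
import numpy as np

# Assumed per-setting calibration results: zoom setting vs. estimated k1.
zoom = np.array([10.0, 20.0, 35.0, 50.0, 70.0])     # focal lengths, mm
k1 = np.array([-0.30, -0.18, -0.09, -0.05, -0.02])  # radial coefficient

# Global model: approximate k1 as a quadratic function of the zoom setting.
coeffs = np.polyfit(zoom, k1, 2)

def k1_at(f):
    """Predict the radial distortion coefficient at any zoom setting f."""
    return np.polyval(coeffs, f)
```

With such curves for each distortion parameter, images can be rectified at zoom settings that were never individually calibrated.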

CG와 동영상의 지적합성 (Intelligent Composition of CG and Dynamic Scene)

  • 박종일;정경훈;박경세;송재극
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 1995년도 학술대회 / pp.77-81 / 1995
  • Video composition integrates multiple image materials into one scene. It considerably enhances the degree of freedom in producing various scenes. However, for high-quality video composition we need to adjust the viewpoints and image planes of the image materials. In this paper, we propose an intelligent video composition technique concentrating on the composition of CG and real scenes. We first model the camera system: the projection is assumed to be perspective, and the camera motion is assumed to be 3D rotational and 3D translational. Then, we automatically extract the camera parameters of the camera model from the real scene with a dedicated algorithm. After that, the CG scene is generated according to the camera parameters of the real scene. Finally, the two are composed into one scene. Experimental results justify the validity of the proposed method.