Title/Summary/Keyword: Camera estimation method


Semi-Auto Camera Calibration Method for 3D Information Generation (3차원 공간정보 생성을 위한 반자동 카메라 교정 방법)

  • Kim, Hyungtae;Paik, Joonki
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.127-135
    • /
    • 2015
  • In this paper, we propose a semi-automatic camera calibration method that incorporates user input. The proposed method estimates vanishing points from user-defined reference lines and introduces a constraint that reduces outliers in the vanishing-point estimation process. By combining algebraic and geometric approaches, the proposed calibration method improves performance under difficult conditions in which existing methods fail to calibrate an image. Experimental results show that the proposed method achieves higher calibration accuracy than existing methods.
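The abstract gives no formulas, but the core step is standard projective geometry: a vanishing point is the intersection of the images of parallel scene lines. A minimal sketch (segment endpoints and tolerances are illustrative, not from the paper):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(seg1, seg2):
    """Intersect two user-drawn reference segments ((x1, y1), (x2, y2));
    returns the vanishing point (x, y), or None if the image lines are
    parallel (intersection at infinity)."""
    v = np.cross(line_through(*seg1), line_through(*seg2))
    if abs(v[2]) < 1e-9:
        return None
    return v[:2] / v[2]
```

With more than two reference lines, the pairwise intersections can be checked against each other, which is one natural place for the outlier-rejection constraint the abstract mentions.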

Sector Based Scanning and Adaptive Active Tracking of Multiple Objects

  • Cho, Shung-Han;Nam, Yun-Young;Hong, Sang-Jin;Cho, We-Duke
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.6
    • /
    • pp.1166-1191
    • /
    • 2011
  • This paper presents an adaptive active tracking system with sector-based scanning for a single PTZ camera. Dividing the image into sectors reduces the search space and shortens selection time, so the system can cover many targets. Upon selecting a target, the system estimates the target trajectory to predict the zooming location within the finite time required for camera movement. Advanced estimation techniques based on probabilistic reasoning suffer from unknown object dynamics, and inaccurate estimation compromises the zooming level needed to prevent tracking failure. The proposed system instead uses a simple piecewise estimation over a few frames to cope with fast-moving objects and/or slow camera movements. The target is tracked in multiple steps, and the zooming time for each step is determined by maximizing the zooming level within the expected variation of object velocity and detection. The number of zooming steps is adapted to the target speed. In addition, iterative estimation of the zooming location, compensated by the camera movement time, corrects the target prediction error caused by the difference between the speeds of the target and the camera. The effectiveness of the proposed method is validated by simulations and real-time experiments.
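The "simple piecewise estimation with a few frames" can be sketched as a linear extrapolation from the last two observations over the camera-movement time; the function and variable names below are illustrative, not from the paper:

```python
def predict_position(track, dt_ahead):
    """Piecewise-linear prediction: estimate velocity from the last two
    observations (t, x, y) and extrapolate over the camera-movement time
    dt_ahead to obtain the zooming location."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return x1 + vx * dt_ahead, y1 + vy * dt_ahead
```

Repeating this prediction as the camera moves (with `dt_ahead` shrinking each step) corresponds to the iterative compensation described in the abstract.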

Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning (딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정)

  • Kim, Hyunwoo;Park, Sanghyun
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.4
    • /
    • pp.357-362
    • /
    • 2019
  • This paper proposes a model and training method for real-time object detection and distance estimation from a monocular camera using deep learning. It uses the YOLOv2 model, which is applied in autonomous vehicles and robots because of its fast image processing speed. We modified and retrained the loss function so that the YOLOv2 model can detect objects and estimate distances at the same time. A term for learning the distance value z was added to the YOLOv2 loss alongside the bounding-box values x, y, w, h and the classification losses. In addition, the distance term was multiplied by a parameter to balance the learning. We trained the model on object locations and classes from camera images together with distance data measured by lidar, so that it can estimate distances and detect objects from a monocular camera even when the vehicle is going up or down a hill. To evaluate object detection and distance estimation, mAP (mean Average Precision) and adjusted R-squared were used, and performance was compared with previous research papers. We also compared the FPS (frames per second) of the original YOLOv2 model with that of our model to measure speed.
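A hedged sketch of the modified regression loss for one responsible box: squared errors over x, y and square-rooted w, h (as in the original YOLO loss) plus a distance term z weighted by a balancing parameter. The exact formulation and weight value are assumptions, not taken from the paper:

```python
import math

def box_distance_loss(pred, target, lambda_z=0.1):
    """Per-box regression loss: YOLO-style squared errors on (x, y) and
    on sqrt(w), sqrt(h), plus a distance term (z) scaled by lambda_z to
    balance it against the box terms."""
    px, py, pw, ph, pz = pred
    tx, ty, tw, th, tz = target
    box = ((px - tx) ** 2 + (py - ty) ** 2
           + (math.sqrt(pw) - math.sqrt(tw)) ** 2
           + (math.sqrt(ph) - math.sqrt(th)) ** 2)
    return box + lambda_z * (pz - tz) ** 2
```

In training, this term would be summed over cells and anchors and added to the confidence and classification losses of the standard YOLOv2 objective.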

A Distortion Correction Method of Wide-Angle Camera Images through the Estimation and Validation of a Camera Model (카메라 모델의 추정과 검증을 통한 광각 카메라 영상의 왜곡 보정 방법)

  • Kim, Kyeong-Im;Han, Soon-Hee;Park, Jeong-Seon
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.8 no.12
    • /
    • pp.1923-1932
    • /
    • 2013
  • In order to solve the problem of severely distorted images from a wide-angle camera, we propose a calibration method that corrects radial distortion in wide-angle images through the estimation and validation of a camera model. First, we estimate a camera model consisting of intrinsic and extrinsic parameters from calibration patterns, where the intrinsic parameters include the focal length and principal point, and the extrinsic parameters are the relative position and orientation of the calibration pattern with respect to the camera. Next, we validate the estimated camera model by re-extracting corner points through inverse application of the model to the images. Finally, we correct the distortion of the image using the validated camera model. Calibration experiments using lattice-pattern images captured by a general web camera and a wide-angle camera confirm that the proposed method corrects more than 80% of the distortion.
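The abstract does not state the distortion model, so as a minimal sketch the common two-coefficient radial model can be inverted per pixel by fixed-point iteration (coefficient names k1, k2 and the iteration count are illustrative assumptions):

```python
def undistort_point(xd, yd, k1, k2, cx, cy, f):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4) by
    fixed-point iteration, working in normalized image coordinates
    (principal point (cx, cy), focal length f in pixels)."""
    xn, yn = (xd - cx) / f, (yd - cy) / f
    xu, yu = xn, yn
    for _ in range(20):  # converges quickly for moderate distortion
        r2 = xu * xu + yu * yu
        d = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xn / d, yn / d
    return cx + f * xu, cy + f * yu
```

Applying this to every pixel (or to the re-extracted corner points, for validation against the lattice pattern) yields the corrected image geometry.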

Estimation of the position and orientation of the mobile robot using camera calibration (카메라 캘리브레이션을 이용한 이동로봇의 위치 및 자세 추정)

  • 정기주;최명환;이범희;고명삼
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1992.10a
    • /
    • pp.786-791
    • /
    • 1992
  • When a mobile robot moves from one place to another, position error occurs due to the limited accuracy of the robot and the effect of environmental noise. In this paper, an accurate method of estimating the position and orientation of a mobile robot using camera calibration is proposed. A Kalman filter is used as the estimation algorithm. The uncertainty in the position of the camera with respect to the robot base frame is considered, as well as the position error of the robot. Besides developing a mathematical model for the mobile robot calibration system, the effect of the relative position between the camera and the calibration points is analyzed, and a method to select the most accurate calibration points is presented.

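The Kalman-filter correction at the heart of such an estimator is standard; a minimal linear measurement update, where a camera-derived pose measurement z corrects the predicted robot state x (matrix names follow textbook convention, not the paper):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: state x with covariance P is
    corrected by measurement z, with measurement model H and
    measurement noise covariance R."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected state
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P
```

In the paper's setting, R would also absorb the uncertainty of the camera's mounting pose relative to the robot base frame.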

A Switched Visual Servoing Technique Robust to Camera Calibration Errors for Reaching the Desired Location Following a Straight Line in 3-D Space (카메라 교정 오차에 강인한 3차원 직선 경로 추종을 위한 전환 비주얼 서보잉 기법)

  • Kim, Do-Hyoung;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.2
    • /
    • pp.125-134
    • /
    • 2006
  • The problem of establishing a servo system that reaches the desired location while keeping all features in the field of view and following a straight line is considered. Robustness to camera calibration errors is also addressed in this paper. The proposed approach is based on switching from position-based visual servoing (PBVS) to image-based visual servoing (IBVS) and allows the camera path to follow a straight line. To achieve this objective, a pose estimation method is required: the camera's target pose is estimated from the obtained images without prior knowledge of the object. A switched control law moves a camera mounted on a robot end-effector near the desired location along a straight line in Cartesian space and then positions it at the desired pose with robustness to camera calibration error. Finally, simulation results show the feasibility of the proposed visual servoing technique.

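The switching logic can be sketched as a simple rule: PBVS drives a straight Cartesian path while far from the goal, and the controller switches to IBVS near the goal, or whenever a feature approaches the field-of-view boundary, since IBVS tolerates calibration error better. The thresholds and exact conditions are illustrative assumptions, not the paper's criterion:

```python
def choose_controller(image_error_px, fov_margin_px, eps=5.0):
    """Switching sketch for a PBVS-to-IBVS scheme: fall back to IBVS if
    any feature is within eps pixels of leaving the image, or once the
    image-space error is small; otherwise keep PBVS for the straight
    Cartesian path."""
    if fov_margin_px < eps:
        return "IBVS"  # keep features in view
    return "PBVS" if image_error_px > eps else "IBVS"
```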

Offline Camera Movement Tracking from Video Sequences

  • Dewi, Primastuti;Choi, Yeon-Seok;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.05a
    • /
    • pp.69-72
    • /
    • 2011
  • In this paper, we propose a method to track the movement of a camera from video sequences. The method is useful for video analysis and can be applied as a pre-processing step in applications such as video stabilization and marker-less augmented reality. First, we extract features in each frame using corner point detection. The features in the current frame are then compared with those in adjacent frames to calculate the optical flow, which represents the relative movement of the camera. The optical flow is then analyzed to obtain the camera movement parameters. The final step is camera movement estimation and correction to increase accuracy. The method's performance is verified by generating a 3D map of the camera movement and embedding a 3D object into the video. The examples demonstrated in this paper show that the method has high accuracy and rarely produces jitter.

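Once features are matched between adjacent frames, the simplest camera-motion estimate from the flow is the negated robust average of the feature displacements; a minimal sketch (assuming matching is already done, which the paper performs via corner detection):

```python
import numpy as np

def camera_translation(prev_pts, curr_pts):
    """Estimate in-plane camera translation as the negated median of
    matched-feature displacements; the median gives robustness against
    independently moving foreground objects."""
    flow = np.asarray(curr_pts, float) - np.asarray(prev_pts, float)
    return -np.median(flow, axis=0)
```

Accumulating these per-frame translations over the sequence gives the camera trajectory that the paper visualizes as a 3D movement map.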

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.54-61
    • /
    • 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) is used to fuse the MEMS IMU with a vision system consisting of a single camera and infrared LEDs serving as ceiling landmarks. The fusion filter generally uses the positions of feature points in the image as measurements. However, this approach can cause position error due to the bias of the MEMS IMU when no camera image is obtained, if the bias is not properly estimated by the filter. Therefore, we propose a fusion method that uses both the positions of feature points and the camera velocity determined from the optical flow of the feature points. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only the positions of feature points.

Skin Condition Estimation Using Mobile Handheld Camera

  • Bae, Ji-Sang;Jeon, Jae-Ho;Lee, Jae-Young;Kim, Jong-Ok
    • ETRI Journal
    • /
    • v.38 no.4
    • /
    • pp.776-786
    • /
    • 2016
  • The fairly recent standard of equipping mobile devices with advanced imaging sensors has opened the possibility of conveniently diagnosing skin conditions, anywhere, anytime. For this application, we attempted to estimate skin conditions from a skin image taken by a mobile handheld camera. To estimate the skin conditions, we specifically identified three skin features (pigmentation, pores, and roughness) that can be measured quantitatively from a skin image. The experimental data indicate that the existing thresholding methods are inappropriate for extracting the pigmentation and pore skin features. Thus, we propose a new line-fitting based thresholding method for skin feature detection. We thoroughly evaluated our proposed skin condition estimation method using our skin image database. The experimental results show that our proposed thresholding method can better determine the threshold leading to the most visually plausible detection, when compared to existing methods. We also confirmed that skin conditions can be feasibly estimated using a common mobile handheld camera (for example, a smartphone).
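The abstract does not specify the line-fitting rule; as a stand-in sketch, the classical triangle method is one line-fitting thresholding scheme: draw a line from the histogram peak to the end of its tail and pick the bin farthest from that line. This is not necessarily the authors' exact formulation:

```python
import numpy as np

def line_fit_threshold(hist):
    """Triangle-style threshold: fit a line from the histogram peak to
    the last nonzero bin, and return the bin whose height lies farthest
    from that line (maximum of the point-to-line distance numerator)."""
    hist = np.asarray(hist, dtype=float)
    p = int(np.argmax(hist))            # peak bin
    e = int(np.nonzero(hist)[0][-1])    # end of the tail
    xs = np.arange(p, e + 1)
    x1, y1, x2, y2 = p, hist[p], e, hist[e]
    num = np.abs((y2 - y1) * xs - (x2 - x1) * hist[p:e + 1] + x2 * y1 - y2 * x1)
    return int(xs[np.argmax(num)])
```

For pigmentation and pore maps, whose histograms typically have one dominant background peak and a long tail of feature pixels, such a line-fit threshold adapts to the tail shape where a fixed global threshold would not.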

Calibration Method of Plenoptic Camera using CCD Camera Model (CCD 카메라 모델을 이용한 플렌옵틱 카메라의 캘리브레이션 방법)

  • Kim, Song-Ran;Jeong, Min-Chang;Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.2
    • /
    • pp.261-269
    • /
    • 2018
  • This paper presents a convenient method to estimate the internal parameters of a plenoptic camera using a CCD (charge-coupled device) camera model. The images used for plenoptic camera calibration generally contain the checkerboard pattern used in CCD camera calibration. Based on the CCD camera model, the defining equations of the plenoptic camera model can be derived through its relationship with the CCD camera model. We formulate four equations expressing the focal length, the principal point, the baseline, and the distance between the virtual camera and the object. By performing a nonlinear optimization technique, we solve the equations to estimate the parameters. We compare the estimation results with the actual parameters and evaluate the reprojection error. Experimental results show that the MSE (mean square error) is 0.309 and the estimated values are very close to the actual values.
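The nonlinear optimization step can be sketched as a small Gauss-Newton loop over reprojection residuals. For brevity this toy version estimates only a pinhole focal length and principal point from known camera-frame points; the paper's actual unknowns also include the baseline and the virtual-camera-to-object distance:

```python
import numpy as np

def reproj_residuals(params, X, uv):
    """Pinhole reprojection residuals for intrinsics params = (f, cx, cy);
    X holds 3-D points in the camera frame, uv the observed pixels."""
    f, cx, cy = params
    proj = np.column_stack((f * X[:, 0] / X[:, 2] + cx,
                            f * X[:, 1] / X[:, 2] + cy))
    return (proj - uv).ravel()

def gauss_newton(params, X, uv, iters=20):
    """Tiny Gauss-Newton loop with a forward-difference Jacobian,
    standing in for the paper's nonlinear optimization."""
    params = np.asarray(params, float)
    for _ in range(iters):
        r = reproj_residuals(params, X, uv)
        J = np.empty((r.size, params.size))
        for j in range(params.size):
            d = np.zeros_like(params)
            d[j] = 1e-6
            J[:, j] = (reproj_residuals(params + d, X, uv) - r) / 1e-6
        params -= np.linalg.lstsq(J, r, rcond=None)[0]
    return params
```

The final reprojection error reported in the abstract corresponds to the mean squared value of `reproj_residuals` at the optimum.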