• Title/Summary/Keyword: Lens Distortion (렌즈왜곡)

177 search results

Development of Close Range Photogrammetric Model for Measuring the Size of Objects (피사체의 크기 측정을 위한 근접사진측량모델 개발)

  • Hwang, Jin Sang;Yun, Hong Sic;Kang, Ji Hun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.1D
    • /
    • pp.129-134
    • /
    • 2009
  • This study develops a close-range photogrammetric method for measuring the size of an object without control points. The model is composed of interior orientation parameters, which consist of the CCD camera specifications and lens distortion parameters, and exterior orientation parameters, which are calculated through relative orientation and scale adjustment. We evaluated the accuracy of the model and found that it is possible to measure the size of an object using the model.
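The interior orientation described above typically models radial lens distortion with a Brown-style polynomial. The following is a minimal Python sketch of that correction only; the focal lengths, principal point, and distortion coefficients are illustrative placeholders, not values from the paper.

```python
import numpy as np

def undistort_points(xy, fx, fy, cx, cy, k1, k2):
    """Remove radial (Brown-model) lens distortion from pixel coordinates.

    xy : (N, 2) array of distorted pixel coordinates.
    Returns an (N, 2) array of corrected pixel coordinates.
    """
    # Convert to normalized image coordinates relative to the principal point.
    x = (xy[:, 0] - cx) / fx
    y = (xy[:, 1] - cy) / fy
    r2 = x**2 + y**2

    # First-order approximation: divide out the radial distortion factor.
    factor = 1.0 + k1 * r2 + k2 * r2**2
    x_corr, y_corr = x / factor, y / factor

    # Back to pixel coordinates.
    return np.column_stack((x_corr * fx + cx, y_corr * fy + cy))

# Illustrative usage with placeholder interior orientation parameters.
pts = np.array([[1020.0, 760.0], [330.0, 240.0]])
print(undistort_points(pts, fx=1200.0, fy=1200.0, cx=960.0, cy=540.0,
                       k1=-0.12, k2=0.03))
```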

Design Anamorphic Lens Thermal Optical System that Focal Length Ratio is 3:1 (초점거리 비가 3:1인 아나모픽 렌즈 열상 광학계 설계)

  • Kim, Se-Jin;Ko, Jung-Hui;Lim, Hyeon-Seon
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.16 no.4
    • /
    • pp.409-415
    • /
    • 2011
  • Purpose: To design a thermal optical system using an anamorphic lens with a 3:1 focal length ratio in order to improve detection distance. Methods: The boundary conditions were defined as a viewing angle of 50°~60°, focal lengths of 36 mm in the horizontal direction and 12 mm in the vertical direction, f-number 4, a pixel size of 15 μm × 15 μm, and a limiting resolution of 25% at 33 lp/mm. Si, ZnS, and ZnSe were used as materials, and the design wavelengths were set to 4.8 μm, 4.2 μm, and 3.7 μm. The optical performance of the designed camera was analyzed in terms of detection distance, narcissus, and athermalization. Results: The thermal optical system satisfied f-number 4 and focal lengths of 12 mm in the y direction and 36 mm in the x direction. The total length of the system was 76 mm, so its overall volume was reduced. Astigmatism and spherical aberration were within ±0.10, which is less than two pixel sizes, and distortion was within 10%, which poses no problem for use as a thermal camera. The MTF was over 25% at 33 lp/mm out to full field, satisfying the boundary condition. The designed optical system was able to detect targets up to 2.9 km away and reduced the diffuse image by decreasing the narcissus value on all surfaces except the fourth. Sensitivity analysis showed that MTF resolution improved under temperature change when the fifth lens was used as the compensator. Conclusions: The designed optical system using the anamorphic lens satisfied the boundary conditions; improved resolution over temperature, a longer detection distance, and reduced narcissus were verified.

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.7-18
    • /
    • 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately while simultaneously building a map of the environment. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing toward the ceiling is attached to the robot to acquire high-level, scale-invariant features, which are used in the map building and localization processes. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, and labeling and convex hull techniques are then used to segment the ceiling region from the wall region. During initial map building, features are calculated for the segmented regions and stored in the map database. Features are continuously calculated from sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map. Localization is performed simultaneously with feature matching: when features match the existing map, the robot pose is estimated and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes, with a positioning accuracy of ±13 cm and an average heading error of ±3 degrees.

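The pre-processing and matching steps described above map naturally onto standard OpenCV calls. Below is a minimal, hedged sketch of that pipeline, undistorting a ceiling image and matching SIFT features against a stored map; the intrinsic matrix, distortion coefficients, and map database are illustrative placeholders rather than values or components from the paper.

```python
import cv2
import numpy as np

# Placeholder intrinsics and radial distortion coefficients (k1, k2, p1, p2, k3).
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])

def preprocess(frame):
    """Remove radial distortion, then keep the ceiling region via a convex hull mask."""
    undist = cv2.undistort(frame, K, dist)
    gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hull = cv2.convexHull(max(contours, key=cv2.contourArea))
        mask = cv2.drawContours(np.zeros_like(mask), [hull], -1, 255, -1)
    return gray, mask

def match_to_map(gray, mask, map_descriptors):
    """Extract SIFT features inside the ceiling mask and match them to the stored map."""
    sift = cv2.SIFT_create()
    keypoints, desc = sift.detectAndCompute(gray, mask)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc, map_descriptors, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return good, keypoints
```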

Single Photo Resection Using Cosine Law and Three-dimensional Coordinate Transformation (코사인 법칙과 3차원 좌표 변환을 이용한 단사진의 후방교회법)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.3
    • /
    • pp.189-198
    • /
    • 2019
  • In photogrammetry, single photo resection is a method of determining the exterior orientation parameters, that is, the position and attitude of the camera at the time of exposure, using known interior orientation parameters, ground coordinates, and image coordinates. In this study, we proposed a single photo resection algorithm that determines the exterior orientation parameters of the camera using the cosine law and a linear-equation-based three-dimensional coordinate transformation. The proposed algorithm first calculates the scale between the ground coordinates and the corresponding normalized image coordinates using the cosine law. The exterior orientation parameters are then determined by applying the linear-equation-based three-dimensional coordinate transformation between the scaled normalized coordinates and the ground coordinates. Although the nonlinear equations require partial derivatives, the algorithm was not sensitive to the initial values because each ground coordinate was normalized by the longest distance among the combinations of ground points. In addition, since the exterior orientation parameters can be determined from only three points, the method remains stable with respect to the geometric arrangement of the control points.
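Once the ray scales are known, the second step reduces to estimating a rotation, translation, and scale between the scaled camera-frame points and the ground coordinates. The sketch below uses the standard SVD-based (Kabsch/Umeyama) solution purely as an illustration; the paper's own linear formulation may differ in detail.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst ≈ s * R @ src + t.

    src, dst : (N, 3) arrays of corresponding 3D points (N >= 3, not collinear).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)

    # Cross-covariance and SVD give the optimal rotation (Kabsch/Umeyama).
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt

    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t

# Illustrative check with synthetic data.
rng = np.random.default_rng(0)
src = rng.random((4, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
dst = 2.5 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_transform(src, dst)
print(round(s, 3))   # ≈ 2.5
```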

Gaze Tracking Using a Modified Starburst Algorithm and Homography Normalization (수정 Starburst 알고리즘과 Homography Normalization을 이용한 시선추적)

  • Cho, Tai-Hoon;Kang, Hyun-Min
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.5
    • /
    • pp.1162-1170
    • /
    • 2014
  • In this paper, an accurate remote gaze tracking method with two cameras is presented using a modified Starburst algorithm and homography normalization. The Starburst algorithm, which was originally developed for head-mounted systems, often fails to detect accurate pupil centers in remote tracking systems with a larger field of view because of heavy noise. A region of interest for the pupil is first found using template matching, and the Starburst algorithm is applied only within this region to yield pupil boundary candidate points. These points are then used in an improved RANSAC ellipse fitting to produce the pupil center. For gaze estimation robust to head movement, an improved homography normalization using four LEDs and a calibration based on high-order polynomials is proposed. Finally, it is shown that the accuracy and robustness of the system are improved by using two cameras rather than one.
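Homography normalization maps the four corneal glints produced by the four LEDs to a canonical rectangle and then maps the detected pupil center through the same homography, which largely cancels head movement. Below is a small illustrative sketch using OpenCV; the glint coordinates and the canonical square are placeholders, and the subsequent polynomial calibration from the paper is omitted.

```python
import cv2
import numpy as np

def normalize_pupil(glints_px, pupil_px):
    """Map the pupil center into a canonical space defined by the four LED glints.

    glints_px : (4, 2) glint positions in the eye image (order: TL, TR, BR, BL).
    pupil_px  : (2,) pupil center in the same image.
    """
    canonical = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]],
                         dtype=np.float32)
    H = cv2.getPerspectiveTransform(np.asarray(glints_px, dtype=np.float32), canonical)
    src = np.array([[pupil_px]], dtype=np.float32)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(src, H)[0, 0]    # normalized (u, v)

# Placeholder measurements for one frame.
glints = [(412.0, 301.0), (447.0, 303.0), (445.0, 332.0), (410.0, 330.0)]
pupil = (428.0, 318.0)
u, v = normalize_pupil(glints, pupil)
# A polynomial regression (not shown) would then map (u, v) to screen coordinates.
print(u, v)
```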

A Study on Parallax Registration for User Location on the Transparent Display using the Kinect Sensor (키넥트 센서를 활용한 투명 디스플레이에서의 사용자 위치에 대한 시계 정합 연구)

  • Nam, Byeong-Wook;Lee, Kyung-Ho;Lee, Jung-Min;Wu, Yuepeng
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.28 no.6
    • /
    • pp.599-606
    • /
    • 2015
  • The International Hydrographic Organization (IHO) adopted S-100 as the international Geographic Information System (GIS) standard for general use in the maritime sector, and next-generation systems that provide navigation information based on this GIS standard technology are being developed. One such system is an AR-based navigation information system that supports navigation by overlaying navigation information on CCTV images. In this study, we considered a transparent display as a way to support this system more efficiently. When a transparent display is applied, we examined the image distortion caused by the wide-angle lens used to secure parallax, along with the related registration issues, and demonstrated the applicability of the technology by developing a prototype.

Active 3D Shape Acquisition on a Smartphone (스마트폰에서의 능동적 3차원 형상 취득 기법)

  • Won, Jae-Hyun;Yoo, Jin-Woo;Park, In-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.27-34
    • /
    • 2011
  • In this paper, we propose an active 3D shape acquisition method based on photometric stereo using the camera and flash of a smartphone. Two smartphones are used as master and slave: the slave projects illumination from different locations while the master captures the images and runs the photometric stereo algorithm to reconstruct the 3D shape. In order to reduce error, the smartphone camera is calibrated to compensate for lens distortion and the nonlinear camera sensor response. We apply the 5-point algorithm to estimate the pose between the smartphone cameras and then estimate the lighting direction vectors used in the photometric stereo algorithm. Experimental results show that the proposed system enables a smartphone to be used as a low-cost, high-quality 3D camera.
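At its core, photometric stereo recovers a per-pixel surface normal (scaled by albedo) by solving a least-squares system that relates known lighting directions to observed intensities. The sketch below shows that Lambertian least-squares step in isolation; the lighting directions and image stack are synthetic placeholders, and the calibration and pose estimation steps from the paper are omitted.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover albedo-scaled normals from images under known distant lights.

    intensities : (M, H, W) stack of grayscale images (Lambertian assumption).
    light_dirs  : (M, 3) unit lighting direction vectors.
    Returns normals (H, W, 3) and albedo (H, W).
    """
    M, H, W = intensities.shape
    I = intensities.reshape(M, -1)                      # (M, H*W)
    # Solve L @ G = I for G = albedo * normal at every pixel simultaneously.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)

# Synthetic placeholder data: 4 lights, 8x8 images.
rng = np.random.default_rng(1)
L = rng.standard_normal((4, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)
imgs = np.clip(rng.random((4, 8, 8)), 0.0, 1.0)
n, rho = photometric_stereo(imgs, L)
print(n.shape, rho.shape)
```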

Simple Camera Calibration Using Neural Networks (신경망을 이용한 간단한 카메라교정)

  • 전정희;김충원
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.3 no.4
    • /
    • pp.867-873
    • /
    • 1999
  • Camera calibration is a procedure that calculates the internal and external parameters of a camera from the known world coordinates of control points. Accurate camera calibration is required for accurate visual measurements. In this paper, we propose a simple and flexible camera calibration method using neural networks that does not require special knowledge of 3D geometry or camera optics. Some applications do not need the explicit values of the internal and external parameters, and the proposed method is very useful for such applications. The proposed calibration also has the advantage of resolving the ill-conditioning that arises when the object plane is nearly parallel to the image plane, a situation frequently met in product inspection. For somewhat more accurate calibration, the acquired image is divided into two regions according to the radial distortion of the lens, and a neural network is applied to each region. Experimental results and a comparison with Tsai's algorithm demonstrate the validity of the proposed camera calibration.

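The idea above, mapping image coordinates directly to world coordinates with a small network instead of an explicit parametric camera model, can be sketched with a generic regressor. The example below uses scikit-learn's MLPRegressor on synthetic correspondences purely as an illustration; the paper's network architecture, training data, and two-region split are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic correspondences: image points (pixels) -> planar world points (mm).
# In the paper these would come from control points on a calibration plane.
rng = np.random.default_rng(2)
world = rng.uniform(0.0, 100.0, size=(500, 2))
# A made-up mapping with mild nonlinearity standing in for projection plus distortion.
image = 4.0 * world + 0.002 * (world ** 2) + rng.normal(0.0, 0.2, world.shape)

# Learn image -> world directly, with no explicit camera parameters.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(image, world)

query = image[:3]
print(net.predict(query))   # should approximate world[:3]
```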

Development of Calibration Target for Infrared Thermal Imaging Camera (적외선 열화상 카메라용 캘리브레이션 타겟 개발)

  • Kim, Su Un;Choi, Man Yong;Park, Jeong Hak;Shin, Kwang Yong;Lee, Eui Chul
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.34 no.3
    • /
    • pp.248-253
    • /
    • 2014
  • Camera calibration is an indispensable process for improving measurement accuracy in industrial fields such as machine vision. However, existing calibration targets cannot be applied to the calibration of mid-wave and long-wave infrared cameras. With the growing use of infrared thermal cameras that can detect defects from thermal properties, development of an applicable calibration target has become necessary. Thus, based on heat conduction analysis using the finite element method, we developed a calibration target that can be used with both existing visible cameras and infrared thermal cameras by deriving optimal design conditions that consider factors such as thermal conductivity, emissivity, color, and material. We performed comparative experiments on calibration target images from infrared thermal cameras and visible cameras, and the results demonstrated the effectiveness of the proposed calibration target.
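For context, once the proposed target is imaged by either camera type and its pattern points are detected, the calibration step itself follows the standard procedure. The sketch below shows that step with OpenCV, assuming a hypothetical chessboard-style pattern and board geometry that are not taken from the paper.

```python
import cv2
import numpy as np

# Board geometry is a placeholder, not the paper's target layout.
BOARD = (7, 5)          # inner corners per row and column (hypothetical)
SQUARE_MM = 20.0        # square size of the hypothetical target

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

def calibrate(images_gray):
    """Calibrate from a list of grayscale views of the target (visible or thermal)."""
    obj_pts, img_pts = [], []
    for gray in images_gray:
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    h, w = images_gray[0].shape
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (w, h), None, None)
    return rms, K, dist
```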

Calibrating Stereoscopic 3D Position Measurement Systems Using Artificial Neural Nets (3차원 위치측정을 위한 스테레오 카메라 시스템의 인공 신경망을 이용한 보정)

  • Do, Yong-Tae;Lee, Dae-Sik;Yoo, Seog-Hwan
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.6
    • /
    • pp.418-425
    • /
    • 1998
  • Stereo cameras are the most widely used sensing systems for automated machines, including robots, to interact with their three-dimensional (3D) working environments. The position of a target point in 3D world coordinates can be measured with stereo cameras, and camera calibration is an important preliminary step for this task. Existing camera calibration techniques fall into two broad categories: linear and nonlinear. Linear techniques are simple but somewhat inaccurate, while nonlinear ones require a modeling process to compensate for lens distortion and a rather complicated procedure to solve the nonlinear equations. In this paper, a method employing a neural network is described to tackle the problems that arise when existing techniques are applied, and the results are reported. In particular, it is shown experimentally that by using the function approximation capability of multi-layer neural networks trained with the back-propagation (BP) algorithm to learn the error pattern of a linear technique, the measurement accuracy can be increased simply and efficiently.

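The key idea above, letting a back-propagation network learn the residual error pattern of a simple linear calibration, can be illustrated independently of the stereo geometry. The sketch below trains a small network to predict the 3D error left by a hypothetical linear estimator and adds that correction back; the data and the linear estimator are synthetic stand-ins, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Synthetic ground-truth 3D points and a crude "linear calibration" estimate of them
# that suffers from a smooth, position-dependent error (standing in for lens
# distortion and model mismatch).
true_xyz = rng.uniform(-1.0, 1.0, size=(2000, 3))
systematic = 0.05 * true_xyz ** 3 + 0.02 * np.roll(true_xyz, 1, axis=1)
linear_xyz = true_xyz + systematic + rng.normal(0.0, 0.002, true_xyz.shape)

# Train an MLP (back-propagation) to learn the error pattern of the linear estimate.
residual = true_xyz - linear_xyz
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(linear_xyz[:1500], residual[:1500])

# Corrected measurement = linear estimate + predicted residual.
test_lin, test_true = linear_xyz[1500:], true_xyz[1500:]
corrected = test_lin + net.predict(test_lin)
print("mean error before:", np.linalg.norm(test_lin - test_true, axis=1).mean())
print("mean error after: ", np.linalg.norm(corrected - test_true, axis=1).mean())
```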