• Title/Summary/Keyword: Lens Distortions


Improved Image Restoration Algorithm about Vehicle Camera for Corresponding of Harsh Conditions (가혹한 조건에 대응하기 위한 차량용 카메라의 개선된 영상복원 알고리즘)

  • Jang, Young-Min;Cho, Sang-Bock;Lee, Jong-Hwa
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.2, pp.114-123, 2014
  • A vehicle black box (Event Data Recorder, EDR) only records the general road environment around the vehicle. A typical EDR also has difficulty capturing usable images under sudden illumination changes, and its lens introduces severe distortion. As a result, it often fails to provide clues about the circumstances of an accident. To solve this problem, we first estimate a Normalized Luminance Descriptor (NLD) and a Normalized Contrast Descriptor (NCD), and correct illumination changes using a Normalized Image Quality (NIQ) measure. Second, we correct lens distortion using a Field of View (FOV) model based on the fisheye lens design. Finally, we propose an integrated algorithm that applies the two corrections, gamma correction and lens correction, in parallel.
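
The two corrections the abstract combines can be sketched in a few lines. This is a minimal pure-Python sketch, assuming the Devernay-Faugeras form of the FOV model with field-of-view parameter ω (the abstract does not give the exact formula, so the functional form and parameter are assumptions):

```python
import math

def fov_undistort_radius(r_d, omega):
    """Map a distorted radius r_d to its undistorted radius under the
    FOV fisheye model (omega = field-of-view parameter in radians)."""
    if omega == 0.0:
        return r_d
    return math.tan(r_d * omega) / (2.0 * math.tan(omega / 2.0))

def fov_distort_radius(r_u, omega):
    """Inverse mapping: undistorted radius -> distorted radius."""
    if omega == 0.0:
        return r_u
    return math.atan(2.0 * r_u * math.tan(omega / 2.0)) / omega

def gamma_correct(value, gamma):
    """Gamma-correct a normalized intensity in [0, 1], as used for the
    illumination (luminance/contrast) branch of the algorithm."""
    return value ** (1.0 / gamma)
```

Each pixel radius is remapped by the FOV model while, in parallel, intensities pass through the gamma branch; the two results are then merged, matching the paper's parallel structure.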

Performance Improvement of Soccer Robot by Vision Calibration and Patch Change in Real Time Environment (실시간 환경에서의 영상조정 및 패치 변경에 의한 축구로봇의 성능개선)

  • Choi, Jeong-Won;Kim, Duk-Hyun
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, v.23 no.1, pp.156-161, 2009
  • This paper proposes a method to improve the performance of a soccer robot system by correcting the lens distortion that commonly occurs in cameras and the position and angle errors introduced by the patch used to determine robot position. Among lens distortions, we correct geometric distortion and apply the correction to the soccer robot system in a real-time environment. The patch used to recognize a robot's coordinates and direction introduces position and angle errors that depend on its shape. We propose an improved patch that reduces these errors and verify its validity through experiments.
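
Recovering a robot's coordinates and direction from a patch reduces, in the simplest case, to locating two markers on the patch. A minimal sketch, assuming a hypothetical front/back marker layout (the paper's actual patch design differs):

```python
import math

def robot_pose(front, back):
    """Estimate a robot's position (patch center) and heading angle from
    the centers of two markers on its patch. front/back are (x, y)
    tuples; heading is in radians, measured from the +x axis."""
    cx = (front[0] + back[0]) / 2.0
    cy = (front[1] + back[1]) / 2.0
    heading = math.atan2(front[1] - back[1], front[0] - back[0])
    return (cx, cy), heading
```

Errors in the detected marker centers propagate directly into position and angle error, which is why the patch shape matters.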

Cosmological parameter constraints from galaxy-galaxy lensing with the Deep Lens Survey

  • Yoon, Mijin;Jee, Myungkook James
    • The Bulletin of The Korean Astronomical Society, v.42 no.2, pp.54.3-55, 2017
  • The Deep Lens Survey (DLS), a precursor to the Large Synoptic Survey Telescope (LSST), is a 20 deg² survey carried out with NOAO's Blanco and Mayall telescopes. DLS is unique in its depth, reaching down to ~27th mag in BVRz bands. This enables a broad redshift baseline and is optimal for investigating the cosmological evolution of the large scale structure. Galaxy-galaxy lensing is a powerful tool to estimate the averaged matter distribution around lens galaxies by measuring shape distortions of background galaxies. The signal from galaxy-galaxy lensing is sensitive not only to galaxy halo properties, but also to the cosmological environment at large scales. In this study, we measure galaxy-galaxy lensing and galaxy clustering, which together put strong constraints on the cosmological parameters. We obtain significant galaxy-galaxy lensing signals out to ~20 Mpc while tightly controlling systematics. The B-mode signals are consistent with zero. Our lens-source flip test indicates that minimal systematic errors are present in DLS photometric redshifts. Shear calibration is performed using high-fidelity galaxy image simulations. We demonstrate that the overall shape of the galaxy-galaxy lensing signal is well described by the halo model comprised of central and non-central halo contributions. Finally, we present our preliminary constraints on the matter density and the normalization parameters.
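
The shape-distortion measurement behind galaxy-galaxy lensing decomposes each source galaxy's ellipticity into a tangential component (the signal) and a cross component (the B-mode null test mentioned above). A minimal sketch of that decomposition, with the caveat that sign conventions vary between surveys and this is only one common choice, not necessarily the DLS pipeline's:

```python
import math

def tangential_cross_shear(e1, e2, phi):
    """Decompose a source ellipticity (e1, e2) into tangential and cross
    (B-mode) components about a lens, where phi is the position angle of
    the source relative to the lens."""
    e_t = -(e1 * math.cos(2 * phi) + e2 * math.sin(2 * phi))
    e_x = -(e2 * math.cos(2 * phi) - e1 * math.sin(2 * phi))
    return e_t, e_x
```

Averaging e_t over many lens-source pairs in radial bins yields the lensing profile; the stacked e_x should be consistent with zero, which is the B-mode check.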


Study on Distortion Compensation of Underwater Archaeological Images Acquired through a Fisheye Lens and Practical Suggestions for Underwater Photography - A Case of Taean Mado Shipwreck No. 1 and No. 2 -

  • Jung, Young-Hwa;Kim, Gyuho;Yoo, Woo Sik
    • Journal of Conservation Science, v.37 no.4, pp.312-321, 2021
  • Underwater archaeology relies heavily on photography and video recording during surveys and excavations, like ordinary archaeological studies on land. All underwater images suffer from poor image quality and distortions due to poor visibility, low contrast, and blur, caused by the difference in refractive index between water and air, the properties of the selected lenses, and the shapes of viewports. In the Yellow Sea (between mainland China and the Korean peninsula), underwater visibility is far less than 1 m, typically in the range of 30 cm to 50 cm even on a clear day, due to very high turbidity. For photographing 1 m x 1 m grids underwater, a very wide view angle (180°) fisheye lens with an 8 mm focal length is intentionally used despite the severe barrel-shaped image distortion it produces, even with a dome-port camera housing. It is very difficult to map wide underwater archaeological excavation sites by combining severely distorted images, so a practical compensation method for distorted underwater images acquired through a fisheye lens is strongly desired. In this study, the source of image distortion in underwater photography is investigated. We identified the source as the mismatch, in optical axis and focal points, between the dome-port housing and the fisheye lens. A practical image distortion compensation method, using customized image processing software, was explored and verified on archived underwater excavation images for effectiveness in underwater archaeological applications. To minimize the area left unusable by severe distortion after compensation, practical underwater photography guidelines are suggested.
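
Barrel distortion compensation of the kind described above can be illustrated with a one-parameter radial model. This is only a sketch, assuming Fitzgibbon's division model; the paper uses its own customized software, and the parameter k would have to be estimated per lens/housing combination:

```python
def undistort_point(x, y, k):
    """Single-parameter division model: maps a distorted image point
    (centered on the optical axis, in normalized coordinates) to its
    undistorted position. k < 0 pushes points outward, correcting the
    barrel distortion a fisheye lens produces behind a dome port."""
    r2 = x * x + y * y
    scale = 1.0 / (1.0 + k * r2)
    return x * scale, y * scale
```

Applying the inverse mapping over the whole frame straightens grid lines, after which adjacent 1 m x 1 m grid images can be mosaicked with far less mismatch.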

Investigation on Image Quality of Smartphone Cameras as Compared with a DSLR Camera by Using Target Image Edges

  • Seo, Suyoung
    • Korean Journal of Remote Sensing, v.32 no.1, pp.49-60, 2016
  • This paper presents a set of methods to evaluate the image quality of smartphone cameras in comparison with that of a DSLR camera. In recent years, smartphone cameras have been used broadly for many purposes, and as their performance has improved considerably, they can be considered for precise mapping in place of metric cameras. To evaluate this possibility, we tested the quality of one DSLR camera and three smartphone cameras. In the first step, we compared the amount of lens distortion inherent in each camera using camera calibration sheet images. Then we acquired target sheet images, extracted reference lines from them, and evaluated the geometric quality of the smartphone cameras from the errors that occur when fitting a straight line to the observed points. In addition, we present a method to evaluate the radiometric quality of the images taken by each camera based on planar fitting errors, and we propose a method to quantify the geometric quality of a selected camera using edge displacements observed in target sheet images. The experimental results show that the geometric and radiometric qualities of smartphone cameras are comparable to those of a DSLR camera, except for the lens distortion parameters.
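
The straight-line fitting error used above as a geometric quality metric is simple to reproduce. A minimal sketch (ordinary least squares on vertical residuals; the paper's exact residual definition may differ):

```python
def fit_line_rms(points):
    """Least-squares fit of y = a*x + b to edge points extracted from a
    straight target line; returns (a, b, rms), where rms is the root
    mean square of the vertical residuals. A distortion-free camera
    should yield rms near zero for a truly straight target line."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    rms = (sum((y - (a * x + b)) ** 2 for x, y in points) / n) ** 0.5
    return a, b, rms
```

Comparing the rms across cameras, for lines imaged at the frame edges where lens distortion is worst, gives exactly the kind of ranking the paper reports.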

Motion Analysis of a Moving Object using one Camera and Tracking Method (단일 카메라와 Tracking 기법을 이용한 이동 물체의 모션 분석)

  • Shin, Myong-Jun;Son, Young-Ik;Kim, Kab-Il
    • Proceedings of the KIEE Conference, 2005.07d, pp.2821-2823, 2005
  • When dealing with image data captured through a camera lens, much work is necessary to remove image distortions and obtain accurate information from the raw data, and the calibration process is very complicated and requires much trial and error. In this paper, a new approach to image processing is presented by developing a hardware vision system with a tracking camera. Using motor control with encoders, the proposed tracking method gives the exact displacement of a moving object, so no calibration for pincushion distortion is required. Owing to its mobility, one camera covers a wide range, and by lowering its height the camera also obtains a high-resolution image. We first introduce the structure of the motion analysis system; the constructed vision system is then investigated through experiments.


Distortion Calibration and FOV Adjustment in Video See-through AR using Mobile Phones (모바일 폰을 사용한 비디오 투과식 증강현실에서의 왜곡 보정과 시야각 조정)

  • Widjojo, Elisabeth Adelia;Hwang, Jae-In
    • Journal of Broadcast Engineering, v.21 no.1, pp.43-50, 2016
  • In this paper, we present distortion correction for wearable Augmented Reality (AR) on mobile phones. Head-mounted displays (HMDs) built around mobile phones, such as the Samsung Gear VR or Google Cardboard, introduce lens distortion into the rendered image. In the case of AR, the distortion is more complicated because two optical systems are stacked: the mobile phone's camera and the HMD's lens. Such distortions cause mismatches in the user's visual perception. A transparent wearable display can be regarded as the ideal visual system, producing the least misperception; therefore, the image on the mobile phone must be corrected to cancel the distortion so that a phone-based HMD behaves like a transparent AR display. We developed such a transparent-like display for the mobile wearable AR environment, focusing on two issues: pincushion distortion and field of view. We implemented our technique and evaluated its performance.
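
Cancelling an HMD lens's pincushion distortion is usually done by pre-warping the rendered image with the opposite (barrel) distortion. A minimal sketch, assuming the Brown radial model with per-lens coefficients k1, k2 (the values and model choice are illustrative, not the paper's calibration):

```python
def predistort(x, y, k1, k2):
    """Barrel pre-warp of a normalized, lens-centered render coordinate
    so that the HMD lens's pincushion distortion cancels it. With
    negative k1, points are pulled toward the center before the lens
    pushes them back out."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

In a phone-based AR pipeline this pre-warp is composed with the camera's own undistortion, which is why the stacked optics make the problem harder than plain VR rendering.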

Implementation of Multiview Calibration System for An Effective 3D Display (효과적인 3차원 디스플레이를 위한 다시점 영상왜곡 보정처리 시스템 구현)

  • Bae Kyung-Hoon;Park Jae-Sung;Yi Dong-Sik;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences, v.31 no.1C, pp.36-45, 2006
  • In this paper, a multiview calibration system for effective 3D display is proposed. The system obtains four-view images from a multiview camera system; it rectifies lens and camera distortion as well as brightness and color errors, and calibrates geometric distortion. We propose signal-processing techniques to calibrate the camera distortions that can occur in the acquired multiview images. Color mismatches are calibrated by a color transform based on extracted feature and correspondence points, and brightness differences are calibrated using a differential map of brightness between the camera images. Spherical lens distortion is corrected by extracting patterns from the multiview camera images. Finally, camera errors and size differences among the multiview cameras are calibrated by removing the distortion. The proposed rectification and calibration system thus enables effective 3D display and the acquisition of natural multiview 3D images.
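
The brightness calibration between cameras can be illustrated with a simple linear transfer. This is a stand-in sketch, assuming mean/standard-deviation matching rather than the paper's differential-map method:

```python
def match_brightness(src, ref):
    """Linear gain/offset transfer: remap one camera's intensity samples
    so their mean and standard deviation match a reference camera's.
    src and ref are lists of intensity values from corresponding
    regions seen by both cameras."""
    n_s, n_r = len(src), len(ref)
    m_s = sum(src) / n_s
    m_r = sum(ref) / n_r
    sd_s = (sum((v - m_s) ** 2 for v in src) / n_s) ** 0.5
    sd_r = (sum((v - m_r) ** 2 for v in ref) / n_r) ** 0.5
    gain = sd_r / sd_s if sd_s else 1.0
    return [(v - m_s) * gain + m_r for v in src]
```

After each view is remapped against a chosen reference camera, residual per-pixel differences can then be handled by a spatial differential map as the paper describes.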

Automatic Target Recognition for Camera Calibration (카메라 캘리브레이션을 위한 자동 타겟 인식)

  • Kim, Eui Myoung;Kwon, Sang Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.6, pp.525-534, 2018
  • Camera calibration is the process of determining parameters such as the focal length of a camera, the position of the principal point, and lens distortions. Checkerboard images have mainly been used for this purpose. For automatic target recognition in checkerboard images, existing studies had limitations: the user needed a good understanding of the input parameters for recognizing targets, or the entire checkerboard had to appear in the image. In this study, a methodology for automatic target recognition is proposed in which target indices can be assigned automatically, even if only part of the checkerboard is captured, by using rectangles containing eight blobs, four each at the center and at the outer edge of the checkerboard; no input parameters are needed. Three conditions are used to automatically extract the center points of checkerboard targets: the distortion of the black-and-white pattern, the frequency of edge changes, and the ratio of black to white pixels. The direction and numbering of the checkerboard targets are determined from the blobs. In experiments on two types of checkerboard, the checkerboard targets in 36 images were recognized automatically within a minute.
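
Two of the three conditions listed above (edge-change frequency and black/white ratio) can be sketched as a ring test around a candidate corner: intensities sampled on a circle around a true checkerboard corner form two black and two white arcs. The thresholds below are assumptions for illustration, not the paper's values:

```python
def is_checker_corner(ring, threshold=0.5):
    """Heuristic test for a checkerboard (X-) corner. ring is a list of
    normalized intensities sampled on a circle around the candidate
    point. At a true corner the thresholded samples flip black/white
    exactly four times and the black:white ratio is near 1:1."""
    bits = [1 if v >= threshold else 0 for v in ring]
    # Count circular transitions (bits[-1] vs bits[0] included).
    flips = sum(1 for i in range(len(bits)) if bits[i] != bits[i - 1])
    white_ratio = sum(bits) / len(bits)
    return flips == 4 and 0.35 <= white_ratio <= 0.65
```

A full detector would combine this with a pattern-distortion check and then index the surviving corners via the eight orientation blobs, as the paper describes.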

Research for Calibration and Correction of Multi-Spectral Aerial Photographing System(PKNU 3) (다중분광 항공촬영 시스템(PKNU 3) 검정 및 보정에 관한 연구)

  • Lee, Eun Kyung;Choi, Chul Uong
    • Journal of the Korean Association of Geographic Information Studies, v.7 no.4, pp.143-154, 2004
  • Researchers seeking geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions and expensive equipment restrict where and when data can be collected. To allow for better flexibility, we developed a compact, multi-spectral automatic aerial photographic system (PKNU 2). This system's multi-spectral camera captures visible (RGB) and near-infrared (NIR) band images (3032 × 2008 pixels). Visible and infrared images were obtained from separate cameras and combined into color-infrared composite images for environmental monitoring, but the resulting data were not very good. Moreover, although the PKNU 2 system could take high-capacity photographs, the 12 s storage time per image prevented the 60% stereoscopic overlap requirement from being met. We have therefore been developing an advanced system (PKNU 3) consisting of a color-infrared spectral camera that photographs the visible and near-infrared bands with a single sensor, a thermal infrared camera, two 40 GB computers to store images, and an MPEG board to compress and transfer data to the computers in real time; the system can be attached to and detached from a helicopter. Verification and calibration of each sensor (REDLAKE MS 4000, Raytheon IRPro) were conducted before taking the aerial photographs in order to obtain more valuable data, and corrections for the spectral characteristics and radial lens distortion of the sensors were carried out.
