• Title/Summary/Keyword: Camera system calibration

Distortion Calibration and FOV Adjustment in Video See-through AR using Mobile Phones (모바일 폰을 사용한 비디오 투과식 증강현실에서의 왜곡 보정과 시야각 조정)

  • Widjojo, Elisabeth Adelia;Hwang, Jae-In
    • Journal of Broadcast Engineering, v.21 no.1, pp.43-50, 2016
  • In this paper, we present a distortion correction method for wearable Augmented Reality (AR) on mobile phones. Head Mounted Displays (HMDs) that use mobile phones, such as the Samsung Gear VR or Google Cardboard, introduce lens distortion into the rendered image presented to the user. In AR the distortion is more complicated because two optical systems are involved: the mobile phone's camera and the HMD's lens. Furthermore, such distortions create mismatches in the user's visual perception. It is natural to assume that a transparent wearable display is the ideal visual system, producing the least misperception. Therefore, the image from the mobile phone must be corrected to cancel this distortion so that a mobile-phone-based HMD acts as a transparent-like AR display. We developed a transparent-like display in the mobile wearable AR environment, focusing on two issues: pincushion distortion and field of view. We implemented our technique and evaluated its performance.
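
The abstract does not give the exact correction model, so the sketch below only illustrates the common approach for Cardboard/Gear-VR style optics: pre-warping the rendered frame with a radial barrel model so that the lens's pincushion distortion cancels it. The coefficients k1, k2, the normalization, and the file names are assumptions for illustration, not the paper's calibrated values.

```python
# Hedged sketch: pre-warp a rendered frame with a radial (barrel) model so the
# HMD lens's pincushion distortion cancels it. k1, k2 are illustrative guesses.
import cv2
import numpy as np

def barrel_predistort(image, k1=0.22, k2=0.24):
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Normalized pixel coordinates in [-1, 1] relative to the image center.
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    xn = (xs - cx) / cx
    yn = (ys - cy) / cy
    r2 = xn ** 2 + yn ** 2
    # Sample farther from the center (scale > 1) so edge content is pulled
    # inward; the lens then magnifies the edges back, cancelling the distortion.
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    map_x = (xn * scale * cx + cx).astype(np.float32)
    map_y = (yn * scale * cy + cy).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

frame = cv2.imread("rendered_eye_view.png")   # hypothetical input frame
if frame is not None:
    cv2.imwrite("predistorted.png", barrel_predistort(frame))
```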

MTF Assessment and Image Restoration Technique for Post-Launch Calibration of DubaiSat-1 (DubaiSat-1의 발사 후 검보정을 위한 MTF 평가 및 영상복원 기법)

  • Hwang, Hyun-Deok;Park, Won-Kyu;Kwak, Sung-Hee
    • Korean Journal of Remote Sensing, v.27 no.5, pp.573-586, 2011
  • The MTF (modulation transfer function) is one of the parameters used to evaluate the performance of imaging systems. It can also be used to restore information lost to the harsh space environment (radioactivity, extreme cold/heat, electromagnetic fields, etc.), atmospheric effects, and degradation of system performance. This paper evaluates the MTF of images taken by the DubaiSat-1 satellite, launched in 2009 by EIAST (Emirates Institute for Advanced Science and Technology) and Satrec Initiative. The MTF is generally assessed using methods such as the point-source method and the knife-edge method; this paper uses the slanted-edge method, the ISO 12233 standard for MTF measurement of electronic still-picture cameras, adapted here to estimate the MTF of line-scanning telescopes. After assessing the MTF, we performed MTF compensation by generating an MTF convolution kernel based on the PSF (point spread function), combined with image denoising, to enhance image quality.
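
As a rough illustration of the slanted-edge idea the abstract refers to, the sketch below builds an oversampled edge spread function (ESF) across a tilted edge, differentiates it into a line spread function (LSF), and takes the normalized Fourier magnitude as the MTF. Production ISO 12233 implementations add windowing, edge-angle refinement, and noise handling; the oversampling factor and ROI handling here are simplifying assumptions.

```python
# Hedged sketch of slanted-edge MTF estimation from a small edge ROI.
import numpy as np

def slanted_edge_mtf(roi, oversample=4):
    """roi: 2D array containing a near-vertical dark/bright edge."""
    rows, cols = roi.shape
    # 1) Sub-pixel edge position per row from the gradient centroid.
    grad = np.abs(np.diff(roi.astype(float), axis=1))
    x = np.arange(cols - 1)
    edge_pos = (grad * x).sum(axis=1) / grad.sum(axis=1)
    # 2) Fit a straight line to the edge so each pixel gets a signed distance to it.
    fit = np.polyfit(np.arange(rows), edge_pos, 1)
    dist = np.arange(cols)[None, :] - np.polyval(fit, np.arange(rows))[:, None]
    # 3) Bin intensities by distance on an oversampled grid -> ESF.
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    esf = np.bincount(bins.ravel(), weights=roi.ravel().astype(float))
    counts = np.bincount(bins.ravel())
    esf = esf[counts > 0] / counts[counts > 0]
    # 4) Differentiate (LSF) and take the normalized FFT magnitude (MTF).
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles per pixel
    return freq, mtf
```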

A Study on Gaze Tracking Based on Pupil Movement, Corneal Specular Reflections and Kalman Filter (동공 움직임, 각막 반사광 및 Kalman Filter 기반 시선 추적에 관한 연구)

  • Park, Kang-Ryoung;Ko, You-Jin;Lee, Eui-Chul
    • The KIPS Transactions:PartB, v.16B no.3, pp.203-214, 2009
  • In this paper, we compute the user's gaze position simply from the 2D relations between the pupil center and four corneal specular reflections formed by four IR illuminators attached to the corners of a monitor, without considering the complex 3D relations among the camera, the monitor, and the pupil coordinates. The objectives of this paper are therefore to detect the pupil center and the four corneal specular reflections accurately and to compensate for the error factors that affect gaze accuracy. In our method, we compensated for the kappa error between the gaze position calculated from the pupil center and the actual gaze vector by performing a one-time user calibration when the system starts. Also, using a Kalman filter, we robustly detected the four corneal specular reflections, which are essential for calculating the gaze position, irrespective of abrupt eye movements. Experimental results showed that the gaze detection error was about 1.0 degree even under abrupt changes of eye movement.
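
A minimal sketch of the reflection-tracking idea: a constant-velocity Kalman filter per specular reflection keeps its 2D position stable through abrupt eye movements and brief detection failures. The process/measurement noise levels and the sample measurements are illustrative guesses, not the paper's tuned values.

```python
# Hedged sketch: constant-velocity Kalman filter for one corneal reflection.
import numpy as np

class Point2DKalman:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)   # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q     # process noise (assumed)
        self.R = np.eye(2) * r     # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, measurement):
        z = np.asarray(measurement, dtype=float)
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Usage: predict every frame, update only when the reflection is detected.
kf = Point2DKalman(320.0, 240.0)
for detection in [(322, 241), None, (330, 244)]:    # None = missed detection
    estimate = kf.predict()
    if detection is not None:
        estimate = kf.update(detection)
```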

Vision-Based Self-Localization of Autonomous Guided Vehicle Using Landmarks of Colored Pentagons (컬러 오각형을 이정표로 사용한 무인자동차의 위치 인식)

  • Kim Youngsam;Park Eunjong;Kim Joonchoel;Lee Joonwhoan
    • The KIPS Transactions:PartB, v.12B no.4 s.100, pp.387-394, 2005
  • This paper describes a method for determining self-localization using visual landmarks. The critical geometric dimensions of a pentagon are used to locate the relative position of the mobile robot with respect to the pattern. This method has the advantages of simplicity and flexibility. Each pentagon also carries a unique identification, encoded with invariant features and colors, which enables the system to find the absolute location of the patterns. The algorithm determines the correspondence between observed landmarks and a stored sequence, computes the absolute location of the observer from those correspondences, and calculates the relative position from a pentagon using its five vertices. The algorithm has been implemented and tested; in several trials it computed the location to within 5 centimeters in less than 0.3 seconds.
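
Once the five vertices are detected and matched to the known landmark geometry, the relative pose can be recovered with a standard planar PnP solve, roughly as sketched below. The pentagon side length, camera matrix, and detected corner coordinates are placeholders, and the paper's invariant- and color-based identification step is not reproduced.

```python
# Hedged sketch: camera pose relative to a pentagon landmark via PnP.
import cv2
import numpy as np

side = 0.20  # assumed pentagon side length in meters
# 3D vertices of a regular pentagon lying in the landmark plane (Z = 0).
angles = np.pi / 2 + 2 * np.pi * np.arange(5) / 5
radius = side / (2 * np.sin(np.pi / 5))             # circumradius
object_pts = np.stack([radius * np.cos(angles),
                       radius * np.sin(angles),
                       np.zeros(5)], axis=1).astype(np.float32)

image_pts = np.array([[412, 198], [338, 251], [366, 342],
                      [458, 342], [486, 251]], dtype=np.float32)  # detected corners
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    print("camera position relative to the pentagon (m):", tvec.ravel())
```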

Detection Range Improvement of Radiation Sensor for Radiation Contamination Distribution Imaging (방사선 오염분포 영상화를 위한 방사선 센서의 탐지 범위 개선에 관한 연구)

  • Song, Keun-Young;Hwang, Young-Gwan;Lee, Nam-Ho;Na, Jun-Hee
    • Journal of the Korea Institute of Information and Communication Engineering, v.23 no.12, pp.1535-1541, 2019
  • To carry out safe and rapid decontamination in radiological accident areas, various information about the radiation sources must be acquired. In particular, knowing the location and distribution of radiation sources is essential for rapid follow-up and removal of contaminants, as well as for minimizing worker exposure. A radiation distribution detection device is used to obtain this position and distribution information. Its detection unit is generally composed of a single sensor, and the physical characteristics of that single sensor limit the detection range. We applied a calibration detector to control the detection sensitivity of the single radiation sensor and thereby improved the limited dose-rate detection range. Gamma irradiation tests confirmed the improvement in the radiation distribution detection range.

Design of a Mapping Framework on Image Correction and Point Cloud Data for Spatial Reconstruction of Digital Twin with an Autonomous Surface Vehicle (무인수상선의 디지털 트윈 공간 재구성을 위한 이미지 보정 및 점군데이터 간의 매핑 프레임워크 설계)

  • Suhyeon Heo;Minju Kang;Jinwoo Choi;Jeonghong Park
    • Journal of the Society of Naval Architects of Korea, v.61 no.3, pp.143-151, 2024
  • In this study, we present a mapping framework for 3D spatial reconstruction of a digital twin model using the navigation and perception sensors mounted on an Autonomous Surface Vehicle (ASV). To improve the realism of digital twin models, 3D spatial information should be reconstructed as a digitized spatial model and integrated with the component and system models of the ASV. In particular, for 3D spatial reconstruction, the color images and 3D point cloud data acquired from the camera and LiDAR sensors must be mapped to the navigation information at the corresponding time while minimizing noise. To ensure clear and accurate reconstruction of the acquired data, the proposed mapping framework includes an image preprocessing step that enhances the brightness of low-light images and a preprocessing step for the 3D point cloud data that filters out unnecessary data. Subsequently, point matching between consecutive 3D point clouds is performed using the Generalized Iterative Closest Point (G-ICP) approach, and the color information is mapped onto the matched 3D point cloud data. The feasibility of the proposed mapping framework was validated with a data set acquired from field experiments in an inland water environment, and the results are described.
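
A minimal sketch of the point-matching stage, under the assumption that a recent Open3D release (which exposes a Generalized ICP registration) is available: two consecutive LiDAR clouds are downsampled, statistically de-noised, and aligned with G-ICP. The file names, voxel size, and correspondence distance are placeholders, and the color-mapping step described in the abstract is omitted.

```python
# Hedged sketch: preprocess two consecutive LiDAR scans and align them with G-ICP.
import numpy as np
import open3d as o3d

def preprocess(cloud, voxel=0.2):
    down = cloud.voxel_down_sample(voxel)                       # reduce density
    down, _ = down.remove_statistical_outlier(nb_neighbors=20,  # drop noisy points
                                              std_ratio=2.0)
    return down

source = preprocess(o3d.io.read_point_cloud("scan_t0.pcd"))     # hypothetical files
target = preprocess(o3d.io.read_point_cloud("scan_t1.pcd"))

result = o3d.pipelines.registration.registration_generalized_icp(
    source, target, 1.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationForGeneralizedICP())

print("fitness:", result.fitness)
source.transform(result.transformation)   # bring the earlier scan into the new frame
```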

Development of an Image Processing Algorithm for Paprika Recognition and Coordinate Information Acquisition using Stereo Vision (스테레오 영상을 이용한 파프리카 인식 및 좌표 정보 획득 영상처리 알고리즘 개발)

  • Hwa, Ji-Ho;Song, Eui-Han;Lee, Min-Young;Lee, Bong-Ki;Lee, Dae-Weon
    • Journal of Bio-Environment Control, v.24 no.3, pp.210-216, 2015
  • The purpose of this study was to develop an image processing algorithm that recognizes paprika and acquires its 3D coordinates from stereo images, in order to precisely control the end-effector of a paprika auto-harvester. First, H and S thresholds were set using HSI histogram analysis to extract the ROI (region of interest) from raw paprika cultivation images. Next, the fundamental matrix of the stereo camera system was calculated to match the extracted ROIs of corresponding images. Epipolar lines were obtained from the F matrix, and an 11×11 mask was used to compare pixels along each line. Distances between the extracted corresponding points were calibrated using the 3D coordinates of a calibration board. Nonlinear regression analysis was used to model the relation between the pixel disparity of corresponding points and depth (Z). Finally, the program calculates the horizontal (X) and vertical (Y) coordinates using the stereo camera geometry. The average error was 5.3 mm in the horizontal coordinate, 18.8 mm in the vertical coordinate, and 5.4 mm in depth. Most of the error occurred at depths of 400-450 mm and in distorted regions of the image.
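
A small sketch of the disparity-to-depth step: for a rectified stereo pair the ideal model is Z = f·B/d, and a regression against calibration-board measurements absorbs residual deviations, similar in spirit to the nonlinear fit described above. The focal length, baseline, and sample measurements below are illustrative placeholders, not the paper's values.

```python
# Hedged sketch: depth from disparity with a calibration-based correction fit.
import numpy as np

f_px = 1200.0    # assumed focal length in pixels
baseline = 60.0  # assumed stereo baseline in mm

def depth_from_disparity(d_px):
    return f_px * baseline / d_px          # ideal pinhole stereo model, depth in mm

# Calibration-board samples: (measured disparity [px], ground-truth depth [mm]).
samples = np.array([[180.0, 400.0], [160.0, 450.0], [144.0, 500.0], [131.0, 550.0]])
# Fit Z = a / d + b to absorb deviations from the ideal model (distortion etc.).
A = np.stack([1.0 / samples[:, 0], np.ones(len(samples))], axis=1)
a, b = np.linalg.lstsq(A, samples[:, 1], rcond=None)[0]

d = 150.0                                  # disparity of a matched paprika point
print("ideal model :", depth_from_disparity(d))
print("fitted model:", a / d + b)
```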

Intermediate View Image and its Digital Hologram Generation for an Virtual Arbitrary View-Point Hologram Service (임의의 가상시점 홀로그램 서비스를 위한 중간시점 영상 및 디지털 홀로그램 생성)

  • Seo, Young-Ho;Lee, Yoon-Hyuk;Koo, Ja-Myung;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.1, pp.15-31, 2013
  • This paper proposes a method for generating an intermediate image at the viewer's viewpoint by tracking the viewer's face; the intermediate image is then converted into a digital hologram. The purpose is to increase the viewing angle of a digital hologram, a topic of growing interest. The method assumes that the image information for the leftmost and rightmost viewpoints within the viewing angle to be covered is given. It uses stereo matching between the leftmost and rightmost depth images to obtain the pseudo-disparity increment per depth value. With this increment, positional information is generated from both the leftmost and rightmost viewpoints and blended to obtain the information at the desired intermediate viewpoint. The dis-occlusion regions that can occur in this case are defined and an inpainting method is proposed for them. Experiments with the implemented method showed that the average image qualities of the generated depth and RGB images were 33.83[dB] and 29.5[dB], respectively, and the average execution time was 250[ms] per frame. We also propose a prototype system that serves digital holograms to the viewer interactively using the proposed intermediate-view generation method. It includes data acquisition for the leftmost and rightmost viewpoints, camera calibration and image rectification, intermediate-view image generation, computer-generated hologram (CGH) generation, and reconstruction of the hologram image. The system is implemented in the LabView(R) environment, in which CGH generation and hologram image reconstruction run on GPGPUs while the other operations are implemented in software. The implemented system processes about 5 frames per second.
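
As a rough illustration of the CGH stage that the described system runs on GPGPUs, the sketch below accumulates a Fresnel zone pattern on the hologram plane for each object point of a tiny synthetic scene. The wavelength, pixel pitch, and object points are assumptions; the face-tracking and intermediate-view steps are not shown.

```python
# Hedged sketch: point-source CGH generation under the Fresnel approximation.
import numpy as np

wavelength = 532e-9          # assumed green laser, in meters
pitch = 10e-6                # assumed hologram pixel pitch, in meters
k = 2 * np.pi / wavelength
H, W = 512, 512

ys, xs = np.meshgrid(np.arange(H) * pitch, np.arange(W) * pitch, indexing="ij")

# Synthetic object points: (x [m], y [m], depth z [m], amplitude).
points = [(W * pitch / 2, H * pitch / 2, 0.10, 1.0),
          (W * pitch / 3, H * pitch / 3, 0.12, 0.8)]

hologram = np.zeros((H, W))
for px, py, pz, amp in points:
    # Fresnel approximation of the spherical wave from one object point.
    phase = k * ((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * pz)
    hologram += amp * np.cos(phase)

# Normalize to an 8-bit amplitude pattern, e.g. for display on an SLM.
hologram = ((hologram - hologram.min()) / np.ptp(hologram) * 255).astype(np.uint8)
```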

Development and Performance Evaluation of an Animal SPECT System Using Philips ARGUS Gamma Camera and Pinhole Collimator (Philips ARGUS 감마카메라와 바늘구멍조준기를 이용한 소동물 SPECT 시스템의 개발 및 성능 평가)

  • Kim, Joong-Hyun;Lee, Jae-Sung;Kim, Jin-Su;Lee, Byeong-Il;Kim, Soo-Mee;Choung, In-Soon;Kim, Yu-Kyeong;Lee, Won-Woo;Kim, Sang-Eun;Chung, June-Key;Lee, Myung-Chul;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine, v.39 no.6, pp.445-455, 2005
  • Purpose: We developed an animal SPECT system using a clinical Philips ARGUS scintillation camera and a pinhole collimator with specially manufactured small apertures. In this study, we evaluated the physical characteristics of this system and its biological feasibility for animal experiments. Materials and Methods: A rotating station for small animals using a step motor, together with operating software, was developed. Pinhole inserts with small apertures (0.5, 1.0, and 2.0 mm in diameter) were manufactured, and physical parameters including planar spatial resolution, sensitivity, and reconstructed resolution were measured for some apertures. To measure the size of the usable field of view as a function of the distance from the focal point, multiple line sources separated by equal distances were scanned and the number of lines within the field of view was counted. Using a Tc-99m line source 0.5 mm in diameter and 12 mm in length placed at the exact center of the field of view, planar spatial resolution was measured as a function of distance. A calibration factor to express FWHM values in mm was calculated from the planar image of two separated line sources. A Tc-99m point source 1 mm in diameter was used to measure system sensitivity. In addition, SPECT data of a micro phantom with cold and hot line inserts and of a rat brain after intravenous injection of [I-123]FP-CIT were acquired and reconstructed using a filtered back-projection algorithm for the pinhole collimator. Results: The size of the usable field of view increased with the distance from the focal point, and the relationship could be fitted with a linear equation (y = 1.4x + 0.5, x: distance). System sensitivity and planar spatial resolution at 3 cm, measured using the 1.0 mm aperture, were 71 cps/MBq and 1.24 mm, respectively. In the SPECT image of the rat brain with [I-123]FP-CIT acquired using the 1.0 mm aperture, the distribution of dopamine transporter in the striatum was well identified in each hemisphere. Conclusion: We verified that this new animal SPECT system with the Philips ARGUS scanner and small apertures has sufficient performance for small animal imaging.
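
A minimal sketch of the resolution measurement described above: the FWHM of a line-source profile is measured in pixels and converted to millimetres with a calibration factor derived from two line sources a known distance apart. The synthetic Gaussian profile, the 12 mm spacing, and the 40-pixel separation are illustrative assumptions, not the study's data.

```python
# Hedged sketch: pixel-domain FWHM measurement with a mm calibration factor.
import numpy as np

def fwhm_pixels(profile):
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return above[-1] - above[0] + 1        # coarse width at half maximum, in pixels

# Calibration: two line sources a known 12 mm apart appear 40 px apart (assumed).
mm_per_pixel = 12.0 / 40.0

x = np.arange(200)
profile = np.exp(-0.5 * ((x - 100) / 2.0) ** 2)   # synthetic line-source profile
print("planar resolution:", fwhm_pixels(profile) * mm_per_pixel, "mm")
```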

Development of High Dynamic Range Panorama Environment Map Production System Using General-Purpose Digital Cameras (범용 디지털 카메라를 이용한 HDR 파노라마 환경 맵 제작 시스템 개발)

  • Park, Eun-Hea;Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society, v.18 no.2, pp.1-8, 2012
  • High dynamic range (HDR) images represent a far wider numerical range of exposures than common digital images, so they can accurately store the intensity levels of light found in real-world scenes. Although professional HDR cameras that support fast, accurate capture have been developed, their high cost prevents their use in general working environments. The common lower-cost method for producing an HDR image is to take a set of photos of the target scene at a range of exposures with general-purpose cameras and then transform them into an HDR image with commercial software. However, this method requires complicated and accurate camera calibration. Furthermore, creating the HDR environment maps used to produce high-quality imaging content involves delicate, time-consuming manual work. In this paper, we present an automatic HDR panorama environment map generation system built to simplify this complicated capture process. We show that the system can be effectively applied to photo-realistic compositing tasks that combine 3D graphics models with a 2D background scene using image-based lighting techniques.
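
A small sketch of the exposure-bracketing step using OpenCV's Debevec routines: the camera response curve is recovered from a bracketed set, and the shots are merged into a 32-bit radiance map. The file names and exposure times are placeholders, and stitching the result into a panoramic environment map is a separate step not shown here.

```python
# Hedged sketch: merge a bracketed exposure set into an HDR radiance map.
import cv2
import numpy as np

files = ["env_ev-2.jpg", "env_ev0.jpg", "env_ev+2.jpg"]       # hypothetical bracket
exposure_times = np.array([1 / 400.0, 1 / 100.0, 1 / 25.0], dtype=np.float32)
images = [cv2.imread(f) for f in files]

calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)          # camera response curve

merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)         # 32-bit radiance map
cv2.imwrite("environment.hdr", hdr)                           # Radiance .hdr output
```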