• Title/Summary/Keyword: lens calibration


SPECKLE OBSERVATION OF VISUAL DOUBLE STARS AT BOSSCHA OBSERVATORY: SEPARATION AND MAGNITUDE DIFFERENCE LIMITS

  • HADIPUTRAWAN, I PUTU WIRA;PUTRA, MAHASENA;IRFAN, MOCHAMAD;YUSUF, MUHAMMAD
    • Publications of The Korean Astronomical Society
    • /
    • v.30 no.2
    • /
    • pp.223-224
    • /
    • 2015
  • We present the results of speckle observations of visual double stars from 2013, made with the 60 cm Zeiss double refractor (visual focal length f = 1,078 cm) and an SBIG ST-402 ME CCD camera. A Bessel V filter with λ = 550 nm was placed in front of the CCD camera to reduce the chromatic aberration of the objective lens. The objects selected for this observation were calibration candidates and program stars with separations ranging from 0.9 to 6 arcsec, located in both the northern and southern hemispheres. Seeing at Bosscha Observatory is generally 1-2 arcsec, imposing a limit on visual double star separation below which a system cannot be resolved by long-exposure imaging (longer than ~50 ms). Speckle interferometry is used to resolve double stars with separations below the typical seeing disk. A series of images was captured in fast, short exposures (~50 ms) with the CCD camera. Our experiment shows that the system can measure separations down to 0.9 arcsec (for systems with small Δm) and magnitude differences up to Δm ≈ 3.7 (for wide systems).
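The core of the speckle technique described above (Labeyrie's method) can be sketched numerically: averaging the power spectra of many short exposures preserves the binary's high-frequency signal that long exposures smear out, and the mean autocorrelation then shows companion peaks at the pair's separation. This is a minimal illustration with synthetic frames; the function name and parameters are my own, not from the paper.

```python
import numpy as np

def speckle_autocorrelation(frames):
    """Labeyrie speckle interferometry sketch: average the power spectra of
    many short-exposure frames, then inverse-transform to get the mean
    autocorrelation. A binary star appears as two secondary peaks offset
    from the center by +/- the pair's separation vector."""
    mean_power = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        F = np.fft.fft2(f - f.mean())   # remove DC so the central peak stays compact
        mean_power += np.abs(F) ** 2
    mean_power /= len(frames)
    # Wiener-Khinchin: inverse FFT of the power spectrum is the autocorrelation
    acf = np.fft.fftshift(np.real(np.fft.ifft2(mean_power)))
    return acf / acf.max()

# Synthetic test: each frame is an independent speckle pattern plus a
# fainter copy shifted by 5 px, mimicking a binary with small delta-m.
rng = np.random.default_rng(0)
frames = []
for _ in range(50):
    s = rng.random((64, 64))
    frames.append(s + 0.5 * np.roll(s, 5, axis=1))
acf = speckle_autocorrelation(frames)
```

In the averaged autocorrelation the companion peaks at ±5 px survive even though every individual frame has a different random speckle pattern, which is what lets the method beat the seeing limit.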

Single Camera 3D-Particle Tracking Velocimetry-Measurements of the Inner Flows of a Water Droplet (단일카메라 3차원 입자영상추적유속계-액적내부 유동측정)

  • Doh, Deog-Hee;Sung, Hyung-Jin;Kim, Dong-Hyuk;Cho, Kyeong-Rae;Pyeon, Yong-Beom;Cho, Yong-Beom
    • Proceedings of the Korean Society of Visualization
    • /
    • 2006.12a
    • /
    • pp.1-6
    • /
    • 2006
  • A single-camera stereoscopic vision three-dimensional measurement system has been developed based on a 3D-PTV algorithm. The system consists of one camera (1k × 1k) and a host computer. To attain three-dimensional measurements, a plate with stereo holes was installed inside the lens system. Three-dimensional measurement was successfully attained by adopting conventional 3D-PTV camera calibration methods. As an application of the constructed system, a water droplet mixed with alcohol was placed on a transparent plastic plate with a contact diameter of 4 mm, and the particle motions inside the droplet were investigated. The measurement uncertainty of the constructed system was 0.04 mm, 0.04 mm, and 0.09 mm for the X, Y, and Z coordinates, respectively.
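Once the two stereo views (here produced by one camera through the stereo-hole plate) are calibrated, a 3D particle position follows from two-view triangulation. The sketch below is the generic linear (DLT) triangulation step, not the paper's specific calibration; the projection matrices are illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each view i with 3x4 projection matrix Pi
    and observed pixel (u, v) contributes two rows, u*p3 - p1 and v*p3 - p2,
    to a homogeneous system A X = 0; the 3D point is the null vector of A,
    found via SVD."""
    rows = []
    for P, (u, v) in ((P1, x1), (P2, x2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize

# Illustrative setup: two views sharing intrinsics K, second view shifted
# along X (roughly what a stereo-hole plate provides).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

def project(P, Xw):
    x = P @ np.append(Xw, 1.0)
    return (x[0] / x[2], x[1] / x[2])

X_true = np.array([0.2, -0.1, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free projections the estimate matches the true point exactly; with real particle images, the residual of this least-squares system is what drives the per-axis uncertainties quoted in the abstract.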


Through-field Investigation of Stray Light for the Fore-optics of an Airborne Hyperspectral Imager

  • Cha, Jae Deok;Lee, Jun Ho;Kim, Seo Hyun;Jung, Do Hwan;Kim, Young Soo;Jeong, Yumee
    • Current Optics and Photonics
    • /
    • v.6 no.3
    • /
    • pp.313-322
    • /
    • 2022
  • Remote-sensing optical payloads, especially hyperspectral imagers, have particular issues with stray light because they often encounter high-contrast target/background conditions, such as sun glint. While developing an optical payload, we usually apply several stray-light analysis methods, including forward and backward analyses, separately or in combination, to support lens design and optomechanical design. In addition, we often characterize the stray-light response over a full field to support calibration, or when developing an algorithm to correct stray-light errors. For this purpose, we usually use forward analysis across the entire field, but this requires a tremendous amount of computational time. In this paper, we propose a sequence of forward-backward-forward analyses to more effectively investigate the through-field response of stray light, utilizing the combined advantages of the individual methods. The application is an airborne hyperspectral imager for creating hyperspectral maps from 900 to 1700 nm in a 5-nm-continuous band. With the proposed method, we have investigated the through-field response of stray light to an effective accuracy of 0.1°, while reducing computation time to 1/17th of that for a conventional, forward-only stray-light analysis.

Research for Calibration and Correction of Multi-Spectral Aerial Photographing System(PKNU 3) (다중분광 항공촬영 시스템(PKNU 3) 검정 및 보정에 관한 연구)

  • Lee, Eun Kyung;Choi, Chul Uong
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.7 no.4
    • /
    • pp.143-154
    • /
    • 2004
  • Researchers who seek geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions and expensive equipment can restrict where and when researchers collect their data. To allow for better flexibility, we previously developed a compact, automatic multi-spectral aerial photographic system (PKNU 2). This system's multi-spectral camera captures visible (RGB) and near-infrared (NIR) band images (3,032 × 2,008 pixels). Visible and infrared images were obtained from separate cameras and combined into color-infrared composite images for environmental monitoring, but the resulting data quality was unsatisfactory. Moreover, although the PKNU 2 system could record large-capacity images, the 12 s storage time per frame prevented it from achieving the 60% stereoscopic overlap required for stereo coverage. Therefore, we have been developing an advanced system (PKNU 3) that consists of a color-infrared spectral camera capable of photographing the visible and near-infrared bands with a single sensor, a thermal infrared camera, two 40 GB computers to store images, and an MPEG board to compress data and transfer it to the computer in real time; the system can be attached to and detached from a helicopter. Verification and calibration of each sensor (REDLAKE MS 4000, Raytheon IRPro) were conducted before the aerial photographs were taken in order to obtain more reliable data, and corrections for the spectral characteristics and radial lens distortion of the sensors were carried out.
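The radial lens distortion correction mentioned at the end of the abstract is commonly modeled with the Brown polynomial model; inverting it (distorted pixel → undistorted pixel) has no closed form, so a fixed-point iteration in normalized coordinates is typical. This is a generic sketch under that assumption, not the paper's calibration procedure; all coefficient values below are illustrative.

```python
import numpy as np

def undistort_points(pts, k1, k2, fx, fy, cx, cy, iters=10):
    """Invert the radial (Brown) distortion model
        x_d = x_u * (1 + k1*r^2 + k2*r^4),  r^2 = x_u^2 + y_u^2,
    by fixed-point iteration in normalized image coordinates, then map
    back to pixels with the focal lengths (fx, fy) and principal point."""
    pts = np.asarray(pts, dtype=float)
    xn = (pts[:, 0] - cx) / fx          # normalized distorted coordinates
    yn = (pts[:, 1] - cy) / fy
    xu, yu = xn.copy(), yn.copy()       # initial guess: distorted == undistorted
    for _ in range(iters):
        r2 = xu**2 + yu**2
        scale = 1 + k1 * r2 + k2 * r2**2
        xu, yu = xn / scale, yn / scale
    return np.column_stack([xu * fx + cx, yu * fy + cy])

# Round trip: distort a known point with the forward model, then recover it.
k1, k2, fx, fy, cx, cy = -0.2, 0.05, 1000.0, 1000.0, 640.0, 480.0
xu, yu = 0.3, 0.2
r2 = xu * xu + yu * yu
s = 1 + k1 * r2 + k2 * r2 * r2
distorted_px = [[xu * s * fx + cx, yu * s * fy + cy]]
recovered_px = undistort_points(distorted_px, k1, k2, fx, fy, cx, cy)
```

For the mild distortion typical of mapping cameras the iteration contracts quickly; ten iterations recover the undistorted pixel to well under a hundredth of a pixel here.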


Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.7-18
    • /
    • 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately while simultaneously building a map of the environment. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance; these features are used in the map building and localization process. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, then labeling and convex hull techniques are used to segment the ceiling region from the wall region. During initial map building, features are calculated for the segmented regions and stored in a map database. Features are then continuously calculated from sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map. Localization is performed simultaneously with feature matching: when features match the existing map, the robot pose is estimated and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes, with a positioning accuracy of ±13 cm and an average heading error of ±3 degrees.
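The "match against the existing map" step above is, at its core, nearest-neighbor search over descriptor vectors; for scale-invariant (SIFT-style) descriptors, Lowe's ratio test is the standard way to reject ambiguous matches. The sketch below assumes that setup; it is not the authors' implementation, and the descriptors are toy 2-D vectors for illustration.

```python
import numpy as np

def match_features(desc_query, desc_map, ratio=0.8):
    """Match query descriptors against a map database by nearest-neighbor
    search with Lowe's ratio test: accept a match only when the best
    distance is clearly smaller than the second-best, i.e. the match is
    unambiguous. Returns (query_index, map_index) pairs."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_map - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((int(i), int(best)))
    return matches

# Toy map of three well-separated descriptors; the query is the same
# features observed with small noise, as in sequential frames.
desc_map = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
desc_query = desc_map + 0.1
matches = match_features(desc_query, desc_map)
```

Unmatched query features would then be appended to the map database, exactly the "add to the existing map" branch the abstract describes.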


Heterogeneous Sensor Coordinate System Calibration Technique for AR Whole Body Interaction (AR 전신 상호작용을 위한 이종 센서 간 좌표계 보정 기법)

  • Hangkee Kim;Daehwan Kim;Dongchun Lee;Kisuk Lee;Nakhoon Baek
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.7
    • /
    • pp.315-324
    • /
    • 2023
  • A simple and accurate whole-body rehabilitation interaction technology using immersive digital content is needed for elderly patients, whose age-related diseases are steadily increasing. In this study, we introduce whole-body interaction technology using HoloLens and Kinect for this purpose. To achieve this, we propose three coordinate transformation methods: mesh-feature-point-based, AR-marker-based, and body-recognition-based transformation. The mesh-feature-point-based transformation aligns the coordinate systems by designating three feature points on the spatial mesh and computing a transform matrix. This method requires manual work and has lower usability, but relatively high accuracy of 8.5 mm. The AR-marker-based method uses AR and QR markers recognized by HoloLens and Kinect simultaneously to achieve an acceptable accuracy of 11.2 mm. The body-recognition-based transformation aligns the coordinate systems using the head or HMD position recognized by both devices together with the positions of both hands or controllers. This method has lower accuracy, but requires no additional tools or manual work, making it the most user-friendly. Additionally, we reduced the error by more than 10% using RANSAC as a post-processing step. These three methods can be applied selectively depending on the usability and accuracy required by the content. In this study, we validated the technology by applying it to the "Thunder Punch" game and rehabilitation therapy content.
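All three alignment methods above reduce to the same underlying estimation problem: given corresponding 3D points seen in both the HoloLens and Kinect coordinate systems, find the rigid transform (R, t) between them. The standard closed-form solution is the Kabsch/Procrustes algorithm, sketched below; this is the generic estimator, not the paper's code, and the sample points are invented. A RANSAC loop, as in the abstract's post-processing, would simply run this estimator on random point subsets and keep the consensus.

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch/Procrustes: estimate rotation R and translation t such that
    dst ~= R @ src + t, from corresponding 3D points. Center both sets,
    SVD the covariance, and fix the sign so det(R) = +1 (proper rotation)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Illustrative check: points in one device's frame, rotated 30 degrees
# about Z and shifted, as if seen by the second device.
ang = np.pi / 6
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0,            0,           1]])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(src, dst)
```

Three non-collinear correspondences are the minimum (matching the three mesh feature points the abstract designates); more points over-determine the fit and average out per-point sensor noise.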

Image Processing Algorithms for DI-method Multi Touch Screen Controllers (DI 방식의 대형 멀티터치스크린을 위한 영상처리 알고리즘 설계)

  • Kang, Min-Gu;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.1-12
    • /
    • 2011
  • Large multi-touch screens are usually built using infrared light, because the alternative approaches, such as resistive overlays, capacitive overlays, or acoustic waves, face technical constraints or cost problems at large sizes. Building a multi-touch screen with infrared is easy, but it runs into its own technical limits. To address these problems, two methods were proposed in Microsoft's Surface project, a next-generation user-interface concept: Frustrated Total Internal Reflection (FTIR), which uses infrared cameras, and Diffuse Illumination (DI). Both FTIR and DI scale easily to large screens and are not affected by the number of touch points. Although FTIR has an advantage in detecting touch points, it also has many disadvantages: screen size limits, material quality requirements, the need for infrared LED array modules, and high power consumption. DI, on the other hand, has difficulty detecting touch points because of its structural characteristics, but it avoids the problems of FTIR. In this thesis, we study algorithms for effectively correcting the distortion of the optical lens, along with image processing algorithms that solve the touch-detection problem of the original DI method. Moreover, we propose calibration algorithms for improving multi-touch accuracy and a new tracking technique for accurately following the movement and gestures of the touching device. To verify our approach, we implemented a table-based multi-touch screen.
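The touch-detection problem in a DI setup typically starts with thresholding the infrared camera frame and extracting touch blobs via connected-component labeling, with each blob's centroid becoming a touch point fed to the tracker. The sketch below shows that baseline step only (a plain flood-fill labeler); it is a generic illustration, not the thesis's algorithm, and the threshold values are invented.

```python
import numpy as np
from collections import deque

def detect_touches(frame, threshold, min_area=4):
    """Threshold a DI camera frame and find touch blobs by flood-fill
    connected-component labeling (4-connectivity); return each blob's
    centroid as (x, y). Tiny blobs below min_area are treated as noise."""
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    touches = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                q, pix = deque([(y, x)]), []
                seen[y, x] = True
                while q:                      # flood fill one component
                    cy, cx = q.popleft()
                    pix.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(pix) >= min_area:
                    ys, xs = zip(*pix)
                    touches.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return touches

# Synthetic frame with two bright 3x3 touch spots.
frame = np.zeros((20, 20))
frame[2:5, 2:5] = 255
frame[10:13, 14:17] = 255
touches = detect_touches(frame, threshold=100)
```

In a real DI pipeline the frame would first pass through background subtraction and the lens-distortion correction the abstract mentions; tracking then links these centroids across frames, e.g. by nearest-neighbor association.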

PRODUCTION OF KSR-III AIRGLOW PHOTOMETERS TO MEASURE MUV AIRGLOWS OF THE UPPER ATMOSPHERE ABOVE THE KOREAN PENINSULA (한반도 상공의 고층대기 중간 자외선 대기광 측정을 위한 KSR-III 대기광도계 제작)

  • Oh, T.H.;Park, K.C.;Kim, Y.H.;Yi, Y.;Kim, J.
    • Journal of Astronomy and Space Sciences
    • /
    • v.19 no.4
    • /
    • pp.305-318
    • /
    • 2002
  • We have constructed two flight models of an airglow photometer (AGP) system to be carried onboard the Korea Sounding Rocket-III (KSR-III) for detection of MUV dayglow above the Korean peninsula. The AGP system is designed to detect the dayglow emissions of OI 2972 Å, N₂ VK(0,6) 2780 Å, and N₂ 2PG 3150 Å, plus the background at 3070 Å, toward the horizon at altitudes between 100 km and 300 km. The AGP system consists of a photometer body, a baffle, an electronic control unit, and a battery unit. The MUV dayglow emissions enter through a narrow-band interference filter and the focusing lens of the photometer, which contains an ultraviolet-sensitive photomultiplier tube. The photometer is equipped with an in-flight calibration light source on a circular plate that rotates at the rocket's apogee. A baffle tube is installed at the entrance of the photometer to block strong scattered light from the lower atmosphere. We have carried out laboratory measurements of the sensitivity and the in-flight calibration light source for the AGP flight models. Although the absolute sensitivities of the flight models could not be determined domestically, the relative sensitivities among channels are well measured, so that observation data from the future rocket flight can be analyzed with confidence.

IMAGING SPECTROMETRY FOR DETECTING FECES AND INGESTA ON POULTRY CARCASSES

  • Park, Bo-Soon;William R.Windham;Kurt C.Lawrence;Smith, Douglas-P
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.3106-3106
    • /
    • 2001
  • Imaging spectrometry, or hyperspectral imaging, is a recent development that makes quantitative and qualitative measurement possible for food quality and safety. This paper presents research results showing that a hyperspectral imaging system can be used effectively for detecting fecal (duodenum, cecum, and colon) and ingesta contamination on poultry carcasses from different feed meals (wheat, milo, and corn with soybean) for poultry safety inspection. A hyperspectral imaging system has been developed and tested for identifying fecal and ingesta surface contamination on poultry carcasses. Hypercube image data covering both spectral and spatial domains between 430 and 900 nm were acquired from poultry carcasses with fecal and ingesta contamination. A transportable hyperspectral imaging system, including fiber-optic line lights, motorized lens control for line scans, and hypercube image data from carcasses contaminated by different feeds, is presented. A calibration method for the hyperspectral imaging system is demonstrated using different lighting sources and reflectance panels. Principal Component and Minimum Noise Fraction transformations are discussed to characterize the hyperspectral images, and further image processing algorithms, such as the band ratio of dual-wavelength images and histogram stretching with thresholding, are demonstrated to identify fecal and ingesta material on the carcasses. This algorithm could be applied further to real-time classification of fecal and ingesta contamination in the poultry processing line.
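The dual-wavelength band-ratio step the abstract names is simple to state: divide the image at one wavelength by the image at another, so contaminants with a distinctive spectral slope stand out, then threshold the ratio into a binary contamination mask. A minimal sketch, with invented band values and threshold:

```python
import numpy as np

def band_ratio_mask(band_a, band_b, thresh, eps=1e-6):
    """Dual-wavelength band ratio: pixels whose reflectance rises (or falls)
    between the two wavelengths differently from clean skin show an
    anomalous ratio; thresholding the ratio image yields a binary
    contamination mask. eps guards against division by zero."""
    ratio = band_a.astype(float) / (band_b.astype(float) + eps)
    return ratio, ratio > thresh

# Toy two-pixel example: the first pixel is dark in band_a relative to
# band_b (ratio 0.2), the second is bright (ratio 1.2).
band_a = np.array([[10.0, 60.0]])
band_b = np.array([[50.0, 50.0]])
ratio, mask = band_ratio_mask(band_a, band_b, thresh=0.5)
```

In practice the two bands would be chosen where fecal material and clean carcass reflectance diverge most, and the histogram stretching mentioned in the abstract would precede the threshold to make it easier to place.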


A Study of an OMM System for Machined Spherical form Using the Volumetric Error Calibration of Machining Center (머시닝센터의 체적오차 보상을 통한 구면 가공형상 측정 OMM시스템 연구)

  • Kim, Sung-Chung;Kim, Ok-Hyun;Lee, Eung-Suk;Oh, Chang-Jin;Lee, Chan-Ho
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.18 no.7
    • /
    • pp.98-105
    • /
    • 2001
  • Machining accuracy is affected by the geometric and volumetric errors of the machine tools, so improving product quality requires enhancing machining accuracy. From this point of view, the measurement and inspection of finished parts for error analysis of machine tools has been studied for the last several decades. This paper suggests a method of enhancing machining accuracy for the precision machining of high-quality metal reflective mirrors, optical lenses, and similar parts. In this paper, we study 1) compensation of the linear pitch error using the NC controller's compensation function and laser interferometer measurements, 2) a method for enhancing the accuracy of NC milling by modeling and compensating the volumetric error, 3) spherical surface manufacturing using this volumetric error model and compensation, 4) the development of an OMM (on-machine measurement) system that measures the workpiece without detaching it from the machine bed after machining, and 5) generation of the finished-part profile by OMM. Furthermore, the output of the OMM is compared with that of a CMM, verifying the feasibility of the measurement system.
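The linear pitch-error compensation in item 1) amounts to a lookup: the laser interferometer gives the axis error (actual minus commanded position) at a set of calibration positions, and the controller subtracts the interpolated error from each commanded position. A minimal sketch of that idea, with invented calibration numbers; real NC controllers apply this through their built-in pitch-error tables rather than user code.

```python
import numpy as np

def make_pitch_compensator(positions, errors):
    """Build a linear pitch-error compensator from laser interferometer
    calibration data: `errors[i]` is the measured error (actual - commanded)
    at axis position `positions[i]`, in the same units (e.g. mm). The
    returned function corrects a commanded position by subtracting the
    linearly interpolated error."""
    positions = np.asarray(positions, dtype=float)
    errors = np.asarray(errors, dtype=float)

    def corrected(cmd):
        return cmd - np.interp(cmd, positions, errors)

    return corrected

# Illustrative calibration: the axis overshoots by 0.01 mm at 100 mm and
# 0.03 mm at 200 mm; between measured points the error is interpolated.
corrected = make_pitch_compensator([0.0, 100.0, 200.0], [0.0, 0.01, 0.03])
pos = corrected(150.0)   # commanded 150 mm, expected error 0.02 mm
```

Volumetric compensation (items 2 and 3) generalizes this from one axis to a 3D error map, but the correct-by-interpolated-error structure is the same.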
