• Title/Summary/Keyword: Camera Calibration


MTF Assessment and Image Restoration Technique for Post-Launch Calibration of DubaiSat-1 (DubaiSat-1의 발사 후 검보정을 위한 MTF 평가 및 영상복원 기법)

  • Hwang, Hyun-Deok;Park, Won-Kyu;Kwak, Sung-Hee
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.5
    • /
    • pp.573-586
    • /
    • 2011
  • The MTF (modulation transfer function) is one of the parameters used to evaluate the performance of an imaging system. It can also be used to restore information lost to the harsh space environment (radiation, extreme cold/heat, electromagnetic fields, etc.), atmospheric effects, and degradation of system performance. This paper evaluated the MTF of images taken by the DubaiSat-1 satellite, launched in 2009 by EIAST (Emirates Institute for Advanced Science and Technology) and Satrec Initiative. The MTF is generally assessed using methods such as the point-source method and the knife-edge method; this paper used the slanted-edge method, which is the ISO 12233 standard for MTF measurement of electronic still-picture cameras, adapted here to estimate the MTF of line-scanning telescopes. After assessing the MTF, we performed MTF compensation by generating an MTF convolution kernel based on the PSF (point spread function), combined with image denoising to enhance image quality.
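The slanted-edge procedure summarized above can be sketched in a few lines of NumPy. This is a simplified illustration of the ISO 12233 idea, not the authors' DubaiSat-1 pipeline; it assumes the edge passes through the image center at a known angle:

```python
import numpy as np

def slanted_edge_mtf(image, edge_angle_deg, oversample=4):
    """Estimate the MTF from a slanted-edge image (simplified ISO 12233 sketch).

    Pixels are binned by their signed distance from the edge to build an
    oversampled edge spread function (ESF); differentiating gives the line
    spread function (LSF), whose FFT magnitude is the MTF.
    """
    h, w = image.shape
    theta = np.deg2rad(edge_angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # signed distance of each pixel from an edge line through the center
    dist = (xs - w / 2) * np.cos(theta) + (ys - h / 2) * np.sin(theta)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=image.ravel())
    esf = sums[counts > 0] / counts[counts > 0]      # oversampled ESF
    lsf = np.gradient(esf) * np.hanning(esf.size)    # windowed LSF
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                              # normalize so MTF(0) = 1
```

A blurrier edge yields a faster-falling curve, which is what MTF compensation then tries to boost back.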

Fast Vehicle Detection based on Haarlike and Vehicle Tracking using SURF Method (Haarlike 기반의 고속 차량 검출과 SURF를 이용한 차량 추적 알고리즘)

  • Yu, Jae-Hyoung;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.1
    • /
    • pp.71-80
    • /
    • 2012
  • This paper proposes a vehicle detection and tracking algorithm that uses a CCD camera. The algorithm detects vehicle features with a Haar-like wavelet edge detector and estimates the vehicle's location using the calibration information of the image. It then accumulates vehicle information over k consecutive frames to improve reliability. Finally, the detected vehicle region becomes a template image used to find the same object in subsequent frames with SURF (Speeded Up Robust Features); the template image is updated every frame. To reduce the SURF processing time, the ROI (Region of Interest) is limited to an expanded area around the vehicle location detected in the previous frame. The algorithm repeats the detection and tracking process until no corresponding points are found. Experimental results on images obtained on the road show the efficiency of the proposed algorithm.
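The ROI-limiting step is the part that buys the speed-up: rather than matching SURF features over the whole frame, the search window is the previous detection grown by some factor. A minimal sketch of that step (the box format `(x, y, w, h)` and the scale factor are assumptions for illustration, not values from the paper):

```python
def expand_roi(box, scale, img_w, img_h):
    """Expand a detection box (x, y, w, h) by `scale` about its center,
    clipped to the image bounds, so feature matching in the next frame
    only searches a small region around the last known vehicle location."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * scale, h * scale
    nx = max(0, int(round(cx - nw / 2)))
    ny = max(0, int(round(cy - nh / 2)))
    nx2 = min(img_w, int(round(cx + nw / 2)))
    ny2 = min(img_h, int(round(cy + nh / 2)))
    return nx, ny, nx2 - nx, ny2 - ny
```

Tracking then alternates: detect once, expand the ROI, match SURF features inside it, and fall back to full-frame detection when no correspondences are found.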

An Efficient Pedestrian Recognition Method based on PCA Reconstruction and HOG Feature Descriptor (PCA 복원과 HOG 특징 기술자 기반의 효율적인 보행자 인식 방법)

  • Kim, Cheol-Mun;Baek, Yeul-Min;Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.10
    • /
    • pp.162-170
    • /
    • 2013
  • In recent years, interest in the Pedestrian Protection System (PPS), mounted on vehicles to improve traffic safety, has been increasing. In this paper, we propose a pedestrian-candidate-window extraction method and a unit-cell histogram-based HOG descriptor calculation method. At the candidate-window extraction stage, the brightness ratio between the pedestrian and its surrounding region, vertical edge projection, edge factor, and PCA reconstruction image are used. Dalal's HOG requires pixel-based histogram calculation with Gaussian weights and trilinear interpolation on overlapping blocks, but our method applies a Gaussian down-weight and computes the histogram on a per-cell basis, then combines the histogram with adjacent cells, so it can be calculated faster than Dalal's method. Our PCA reconstruction-error-based candidate-window extraction efficiently rejects background based on the difference between the pedestrian's head and shoulder areas. The proposed method improves detection speed compared to conventional HOG while using only the image, without prior information from camera calibration or a depth map obtained from stereo cameras.
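The per-cell shortcut described above amounts to a plain orientation histogram per cell, skipping Dalal's per-pixel trilinear interpolation. A minimal sketch of one cell's histogram (bin count and unsigned 0–180° orientation convention follow standard HOG; this is not the authors' code):

```python
import numpy as np

def cell_hog(cell_mag, cell_ang, n_bins=9):
    """Orientation histogram for one cell: each pixel's gradient magnitude
    votes for the single bin containing its unsigned orientation, without
    the per-pixel trilinear interpolation of Dalal's original HOG."""
    bins = (cell_ang % 180) * n_bins / 180.0
    idx = np.floor(bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, idx, cell_mag)   # accumulate magnitudes per bin
    return hist
```

Neighboring cell histograms are then concatenated and normalized per block, which is where the speed advantage over per-pixel interpolation comes from.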

Performance Comparison of Pressure Sensitive Paint and Pressure Field Measurement of Oblique Impinging Jet (Pressure Sensitive Paint의 성능비교 및 경사충돌분류의 압력장 측정)

  • Lee, Sang-Ik;Lee, Sang-Joon
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.26 no.7
    • /
    • pp.1031-1038
    • /
    • 2002
  • The pressure sensitive paint (PSP) technique has recently received considerable attention in aerodynamics and fluid mechanics as a revolutionary optical technique for measuring pressure fields on a body surface. In this study, the feasibility and effectiveness of the PSP pressure-field measurement technique were investigated experimentally. Seven different PSP formulations, combining two porphyrins (PtOEP and PtTFPP) and four polymers (polystyrene, cellulose acetate butyrate, GP-197, and Silicon-708), were tested to check the performance and characteristics of each combination. The static calibration of each PSP formulation was carried out in a constant-pressure chamber. The PSP technique was then applied to an oblique impinging jet to measure the variation of the pressure field on the impinging plate at an oblique jet angle of ${\theta}=60^{\circ}$. Pressure-field images were captured with a 12-bit intensified CCD (ICCD, $1K{\times}1K$) camera. The results show that the dynamic response of a PSP depends on the oxygen permeability of the polymer and the photochemical interaction between luminophore and polymer, as well as the reaction of the luminophore itself; the luminophore's reaction changed when different polymers were employed. Among the seven PSP formulations tested, the combination of PtTFPP and cellulose acetate butyrate showed the best performance. In addition, the detailed pressure field of an oblique high-speed impinging jet was measured effectively using the PSP technique.
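Static PSP calibration of the kind described is commonly expressed through the Stern-Volmer relation, I_ref/I = A + B·(P/P_ref). The sketch below is a generic least-squares illustration of that fit and its inversion, assuming the linear two-coefficient form; it is not the authors' actual calibration procedure:

```python
import numpy as np

def fit_stern_volmer(p_ratio, i_ratio):
    """Fit the Stern-Volmer calibration I_ref/I = A + B * (P/P_ref)
    by least squares; returns the coefficients (A, B)."""
    B, A = np.polyfit(p_ratio, i_ratio, 1)
    return A, B

def pressure_from_intensity(i_ratio, A, B):
    """Invert the calibration to recover P/P_ref from measured I_ref/I."""
    return (i_ratio - A) / B
```

Once A and B are known from the pressure-chamber data, every pixel's intensity ratio in the jet images maps directly to a surface pressure.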

A Study on Gaze Tracking Based on Pupil Movement, Corneal Specular Reflections and Kalman Filter (동공 움직임, 각막 반사광 및 Kalman Filter 기반 시선 추적에 관한 연구)

  • Park, Kang-Ryoung;Ko, You-Jin;Lee, Eui-Chul
    • The KIPS Transactions:PartB
    • /
    • v.16B no.3
    • /
    • pp.203-214
    • /
    • 2009
  • In this paper, we compute the user's gaze position simply from the 2D relations between the pupil center and four corneal specular reflections formed by four IR illuminators attached to the corners of a monitor, without modeling the complex 3D relations among the camera, the monitor, and the pupil coordinates. The objectives of this paper are therefore to detect the pupil center and the four corneal specular reflections accurately and to compensate for the error factors that affect gaze accuracy. In our method, we compensate for the kappa error between the gaze position calculated through the pupil center and the actual gaze vector, performing a one-time user calibration at system start-up. We also robustly detect the four corneal specular reflections, which are essential for calculating the gaze position, using a Kalman filter that tolerates abrupt eye movements. Experimental results showed a gaze detection error of about 1.0 degree even under abrupt eye movements.
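One common way to realize the 2D mapping described above is a four-point homography from the glint quadrilateral to the monitor corners, through which the pupil center is projected. The sketch below illustrates that idea with a standard DLT solve; it is an assumed formulation for illustration, not necessarily the exact mapping used in the paper:

```python
import numpy as np

def homography_4pt(src, dst):
    """Direct linear transform for the 3x3 homography mapping four source
    points (e.g. the four corneal glints) to four destination points
    (the monitor corners)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)   # null-space vector = homography entries

def gaze_point(pupil, glints, screen_w, screen_h):
    """Map the pupil center through the glint-to-screen homography."""
    corners = [(0, 0), (screen_w, 0), (screen_w, screen_h), (0, screen_h)]
    H = homography_4pt(glints, corners)
    p = H @ np.array([pupil[0], pupil[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Because the glints move with the eye image, this mapping sidesteps explicit camera-monitor-eye 3D geometry; the Kalman filter's role is to keep the four glint positions stable between frames.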

A Preliminary Study on UAV Photogrammetry for the Hyanho Coast Near the Military Reservation Zone, Eastern Coast of Korea (동해안 군사시설보호구역 주변 향호 연안역을 대상으로 무인항공사진측량에 관한 예비 연구)

  • Kim, Baeck-Oon;Yun, Kong-Hyun;Chang, Tae-Soo;Bahk, Jang-Jun;Kim, Seong-Pil
    • Ocean and Polar Research
    • /
    • v.39 no.2
    • /
    • pp.159-168
    • /
    • 2017
  • To evaluate the accuracy of UAV photogrammetry for the Hyangho coast on the eastern coast of Korea, we conducted a field experiment in which a UAV photogrammetry test was repeated three times. Since the Hyangho coast lies within a military reservation zone, it was necessary to obtain permission to access the beach and to have sensitive aerial photographs showing military facilities inspected and cropped. The standard deviation of the UAV shooting position across the three tests was less than 1 m, but the repeatability of the footprint on the ground was low due to wind-driven variability of the UAV pose. Self-calibrating bundle adjustment (SCBA), which implements non-metric camera calibration, failed in one test. In the other two tests, the vertical error was twice the pixel size, except in the areas subject to security inspection and cropping. Given the problems with the repeatability of the shooting area and the possibility of SCBA failure, we strongly recommend that UAV photogrammetry in coastal areas be repeated at least twice.

Silhouette-based Gait Recognition Using Homography and PCA (호모그래피와 주성분 분석을 이용한 실루엣 기반 걸음걸이 인식)

  • Jeong Seung-Do;Kim Su-Sun;Cho Tae-Kyung;Choi Byung-Uk;Cho Jung-Won
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.1
    • /
    • pp.31-40
    • /
    • 2006
  • In this paper, we propose a gait recognition method based on gait silhouette sequences. Gait features are affected by variation in gait direction, so we synthesize silhouettes into a canonical form using a planar homography to reduce this effect. The planar homography is estimated using only information within the gait sequences, without complicated operations such as camera calibration. Even though the gait silhouettes come from a single person, fragments beyond the common characteristics exist because of errors caused by the inaccuracy of the background subtraction algorithm. We therefore use Principal Component Analysis to analyze the deviating characteristics of each person. The PCA used here, however, differs from the traditional strategy in pattern classification: we use it as a criterion to measure the amount of deviation from the common characteristic. Experimental results show that the proposed method is robust to variation in gait direction and improves the separability of the test-data groups.
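Using PCA as a deviation criterion, as opposed to a classifier, boils down to measuring how poorly a sample is reconstructed from the top principal components of the common silhouette data. A minimal sketch of that measure (the data layout, one flattened silhouette per row, is an assumption for illustration):

```python
import numpy as np

def pca_reconstruction_error(X, x, k):
    """Project sample `x` onto the top-k principal components of data
    matrix X (one flattened silhouette per row) and return the
    reconstruction error, used as a measure of deviation from the
    common characteristics."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]                            # top-k principal axes
    recon = mu + (x - mu) @ W.T @ W       # project and reconstruct
    return np.linalg.norm(x - recon)
```

Samples that fit the common gait pattern reconstruct with small error; fragments caused by background-subtraction mistakes stand out with large error.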


A Study on Underwater-Pipe Video Image Mosaicking using Digital Photogrammetry (수치사진측량을 이용한 수중 파이프 비디오 모자익 영상 제작에 관한 연구)

  • Kang, Jin-A;Kwon, Kwang-Seok;Kim, Byung-Guk;Oh, Yoon-Seuk
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.11 no.4
    • /
    • pp.150-160
    • /
    • 2008
  • Management of domestic underwater and ocean facilities currently depends on visual inspection. This study performs quantitative analysis to improve on conventional methods and to analyze the spatial situation of underwater facilities. The research is divided into two steps: underwater image distortion correction and image mosaicking. In the first step, an underwater calibration target is used to calculate the correction parameters, and a method is developed to convert the original image points into distortion-corrected ones. In the second step, images of a pipe installed underwater are obtained, their distortion is corrected, and the coordinates of the corrected pipe images are transformed; after the coordinate transformation, a mosaic image is generated from the matched feature points. As a result, when the distance between the pipe and the underwater ground was measured and compared with the value calculated on the mosaic image, the RMSE was 0.3 cm.
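The distortion-correction step typically applies a polynomial radial model once the parameters have been calibrated from the target. The sketch below illustrates a simple two-term version of that model; the parameter names and the direct application of the polynomial as an undistortion approximation are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def undistort_points(pts, k1, k2, cx, cy, f):
    """Apply a two-term radial distortion correction to pixel coordinates,
    using the polynomial model x_corrected = x * (1 + k1*r^2 + k2*r^4)
    in normalized coordinates (a simple approximation; the rigorous model
    inverts the distortion mapping iteratively)."""
    pts = np.asarray(pts, float)
    x = (pts[:, 0] - cx) / f              # normalize about the principal point
    y = (pts[:, 1] - cy) / f
    r2 = x ** 2 + y ** 2
    scale = 1 + k1 * r2 + k2 * r2 ** 2
    return np.stack([x * scale * f + cx, y * scale * f + cy], axis=1)
```

With every frame corrected this way, the coordinate transformation and mosaicking steps can treat the pipe images as if they came from an ideal pinhole camera.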


Optimization of Image Merging Conditions for Lumber Scanning System (제재목 화상입력시스템의 최적 화상병합 조건 구명)

  • Kim, Kwang-Mo;Kim, Byoung-Nam;Shim, Kug-Bo
    • Journal of the Korean Wood Science and Technology
    • /
    • v.38 no.6
    • /
    • pp.498-506
    • /
    • 2010
  • To use domestic softwood as structural lumber, an appropriate grading system suited to the quality, production, and distribution conditions of domestic lumber should be prepared. Kim et al. (2009a, b) developed an automatic image processing system for grading domestic structural lumber. This study investigated the optimal image merging conditions for improving the performance of the image input system, the key component of the image processing system developed in the previous work. For merging digital images of Korean larch lumber, using the green-channel information of the acquired image data gave the most accurate merging performance. As a pre-treatment, applying a Y-derivative Scharr kernel filter improved the image merging accuracy, but the effect of camera calibration was negligible. The optimal template image size was found to be 30 pixels wide and 150 pixels high. Under these conditions, the error length of the merged images was 3.1 mm and the processing time averaged 9.7 seconds on average.
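Image merging with a template of a fixed size usually means sliding the template over the overlap region and picking the offset with the highest normalized cross-correlation. A one-dimensional sketch of that search (the paper's templates are 2-D, 30x150 px; this reduced version only illustrates the matching criterion):

```python
import numpy as np

def best_match_offset(strip, template):
    """Slide `template` along a 1-D overlap strip and return the offset
    with the highest zero-mean normalized cross-correlation score."""
    t = template - template.mean()
    best, best_score = 0, -np.inf
    for off in range(len(strip) - len(template) + 1):
        w = strip[off:off + len(template)]
        w = w - w.mean()
        denom = np.linalg.norm(w) * np.linalg.norm(t)
        score = (w @ t) / denom if denom > 0 else -np.inf
        if score > best_score:
            best, best_score = off, score
    return best
```

The quality of this match is what the template size and the pre-treatment filtering (e.g. the green channel, a derivative kernel) are tuned to improve.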

First Light Results of IGRINS Instrument Control Software

  • Lee, Hye-In;Pak, Soojong;Sim, Chae Kyung;Le, Huynh Anh N.;Jeong, Ueejeong;Chun, Moo-Young;Park, Chan;Yuk, In-Soo;Kim, Kangmin;Pavel, Michael;Jaffe, Daniel T.
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.1
    • /
    • pp.54.2-54.2
    • /
    • 2014
  • IGRINS (Immersion GRating Infrared Spectrograph) is a high-spectral-resolution near-infrared spectrograph developed in a collaboration between the Korea Astronomy & Space Science Institute and the University of Texas at Austin. By using a silicon immersion echelle grating, the size of the fore-optics is reduced by a factor of three, allowing a more compact instrument. One exposure covers the whole H- and K-band spectrum at R = 40,000. While operation of and data reduction for this instrument are relatively simple compared to other grating spectrographs, we still need to operate three infrared arrays, cryostat sensors, calibration lamp units, and the telescope during astronomical observations. The IGRINS Instrument Control Software consists of a Housekeeping Package (HKP), Slit Camera Package (SCP), Data Taking Package (DTP), and Quick Look Package (QLP). The SCP performs auto-guiding using a center-finding algorithm, the DTP takes the echellogram images of the H and K bands, and the QLP provides fast confirmation of the processed data. Commissioning observations are planned for March 2014. In this poster, we present the performance of the software during the test observations.
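The center-finding step in an auto-guiding loop of the kind the SCP performs is often an intensity-weighted centroid of the star image above a background threshold. A generic sketch of that measurement (not the IGRINS code; the threshold handling is an assumption):

```python
import numpy as np

def centroid(image, threshold):
    """Intensity-weighted centroid of above-threshold pixels: the kind of
    center-finding measurement an auto-guiding loop uses to track how far
    the guide star has drifted from its reference position."""
    mask = image > threshold
    ys, xs = np.nonzero(mask)
    w = image[mask] - threshold           # background-subtracted weights
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```

The guider compares this centroid against the reference slit position each frame and sends the offset to the telescope as a correction.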
