• Title/Summary/Keyword: Pixel error

Search Results: 480

An Efficiency Assessment for Reflectance Normalization of RapidEye Employing BRD Components of Wide-Swath satellite

  • Kim, Sang-Il;Han, Kyung-Soo;Yeom, Jong-Min
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.3
    • /
    • pp.303-314
    • /
    • 2011
  • Surface albedo is an important parameter of the surface energy budget, and its accurate quantification is of major interest to the global climate modeling community. In this paper, we therefore consider the direct solution of kernel-based bidirectional reflectance distribution function (BRDF) models for retrieving normalized reflectance from a high-resolution satellite. BRDF effects can be estimated from wide-swath satellite data such as SPOT/VGT (VEGETATION), which provide sufficient angular sampling, but high-resolution satellites cannot obtain sufficient angular sampling over a pixel within a short period because of their narrow swaths, which makes it difficult to run a semi-empirical BRDF model directly for reflectance normalization of high-resolution imagery. The principal purpose of this study is to estimate the normalized reflectance of a high-resolution satellite (RapidEye) using BRDF components derived from SPOT/VGT. We use a semi-empirical BRDF model to estimate BRDF components from SPOT/VGT and to normalize RapidEye reflectance. The study used SPOT/VGT S1 (daily) data, and the high-resolution sensor considered is the multispectral RapidEye. The isotropic value, i.e., the normalized reflectance, is closely related to the BRDF parameters and the kernels. We also show a scatter plot of the relationship between the SPOT/VGT and RapidEye isotropic values. Because BRDF parameters are difficult to calculate directly from high-resolution satellites, we use the BRDF parameters of SPOT/VGT: a linear regression analysis is performed with the SPOT/VGT parameters (isotropic, geometric, and volumetric scattering values) and the RapidEye kernel values (geometric and volumetric scattering kernels). We also determine weightings for the geometric value, volumetric scattering value, and error term through the regression models. As a result, the weightings obtained through linear regression produced good agreement. For all sites, the SPOT/VGT and RapidEye isotropic values were highly correlated (in terms of RMSE and bias) and generally very consistent.
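
As a rough illustration of the kernel-driven (semi-empirical) BRDF fitting and normalization described in the abstract above, and not the authors' code, the sketch below fits R = f_iso + f_vol·K_vol + f_geo·K_geo to multi-angular samples and then removes the angular terms from a single observation using externally supplied parameters; the function names and synthetic data are assumptions.

```python
# Hypothetical sketch: kernel-driven (semi-empirical) BRDF fitting and
# reflectance normalization, assuming kernel values are already computed.
import numpy as np

def fit_brdf_parameters(reflectance, k_vol, k_geo):
    """Least-squares fit of R = f_iso + f_vol*K_vol + f_geo*K_geo
    over a multi-angular sample set (e.g. wide-swath SPOT/VGT)."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    params, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    f_iso, f_vol, f_geo = params
    return f_iso, f_vol, f_geo

def normalize_reflectance(r_obs, k_vol_obs, k_geo_obs, f_vol, f_geo):
    """Remove the angular (volumetric + geometric) terms from a single
    narrow-swath observation (e.g. RapidEye) using externally supplied
    BRDF parameters; the residual approximates the isotropic reflectance."""
    return r_obs - f_vol * k_vol_obs - f_geo * k_geo_obs

# Toy usage with synthetic angular samples
rng = np.random.default_rng(0)
k_vol = rng.uniform(-0.2, 0.6, 30)
k_geo = rng.uniform(-1.5, 0.0, 30)
true = 0.25 + 0.05 * k_vol + 0.02 * k_geo
f_iso, f_vol, f_geo = fit_brdf_parameters(true + rng.normal(0, 0.005, 30), k_vol, k_geo)
print(normalize_reflectance(0.27, 0.3, -0.8, f_vol, f_geo))  # approximately f_iso
```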

Machine Vision Technique for Rapid Measurement of Soybean Seed Vigor

  • Lee, Hoonsoo;Huy, Tran Quoc;Park, Eunsoo;Bae, Hyung-Jin;Baek, Insuck;Kim, Moon S.;Mo, Changyeun;Cho, Byoung-Kwan
    • Journal of Biosystems Engineering
    • /
    • v.42 no.3
    • /
    • pp.227-233
    • /
    • 2017
  • Purpose: Morphological properties of soybean roots are important indicators of the vigor of the seed, which determines the survival rate of the seedlings grown. The current vigor test for soybean seeds is manual measurement with the human eye. This study describes an application of a machine vision technique for rapid measurement of soybean seed vigor to replace the time-consuming and labor-intensive conventional method. Methods: A CCD camera was used to obtain color images of seeds during germination. Image processing techniques were used to obtain root segmentation. The various morphological parameters, such as primary root length, total root length, total surface area, average diameter, and branching points of roots were calculated from a root skeleton image using a customized pixel-based image processing algorithm. Results: The measurement accuracy of the machine vision system ranged from 92.6% to 98.8%, with accuracies of 96.2% for primary root length and 96.4% for total root length, compared to manual measurement. The correlation coefficient for each measurement was 0.999 with a standard error of prediction of 1.16 mm for primary root length and 0.97 mm for total root length. Conclusions: The developed machine vision system showed good performance for the morphological measurement of soybean roots. This image analysis algorithm, combined with a simple color camera, can be used as an alternative to the conventional seed vigor test method.
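
A minimal sketch of the kind of pixel-based skeleton measurement the abstract above mentions, assuming a pre-computed 1-pixel-wide binary root skeleton; the length rule (1 unit per axis-aligned step, √2 per diagonal step) is a common approximation, not necessarily the authors' algorithm.

```python
# Illustrative sketch (not the authors' code): estimating total root length
# from a binarized root skeleton by summing pixel-to-pixel step distances.
import numpy as np

def skeleton_length(skeleton, pixel_size_mm=1.0):
    """Approximate curve length of a 1-pixel-wide skeleton.
    Horizontal/vertical neighbour pairs contribute 1 unit,
    diagonal pairs contribute sqrt(2); each pair is counted once."""
    sk = skeleton.astype(bool)
    length = 0.0
    # right and down neighbours (axis-aligned steps)
    length += np.count_nonzero(sk[:, :-1] & sk[:, 1:])
    length += np.count_nonzero(sk[:-1, :] & sk[1:, :])
    # the two diagonal directions
    length += np.sqrt(2) * np.count_nonzero(sk[:-1, :-1] & sk[1:, 1:])
    length += np.sqrt(2) * np.count_nonzero(sk[:-1, 1:] & sk[1:, :-1])
    return length * pixel_size_mm

demo = np.zeros((5, 5), dtype=bool)
demo[2, :] = True             # a straight 5-pixel root segment
print(skeleton_length(demo))  # 4.0 pixel units
```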

SURE-based À-Trous Wavelet Filter for Interactive Monte Carlo Rendering (몬테카를로 렌더링을 위한 슈어기반 실시간 에이트러스 웨이블릿 필터)

  • Kim, Soomin;Moon, Bochang;Yoon, Sung-Eui
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.835-840
    • /
    • 2016
  • Monte Carlo ray tracing has been widely used for simulating a diverse set of photo-realistic effects. However, this technique typically produces noise when an insufficient number of samples is used. As the number of samples allocated per pixel increases, the rendered images converge, but generating a sufficient number of samples requires prohibitive rendering time. To address this problem, image filtering can be applied to the rendered output: instead of naively generating additional rays, a noisy image rendered with low sample counts is filtered to obtain a smoothed image. In this paper, we propose a Stein's Unbiased Risk Estimator (SURE)-based À-Trous wavelet filter that removes the noise in rendered images at a near-interactive rate. Based on SURE, we can estimate the filtering error associated with the À-Trous wavelet and identify the wavelet coefficients that reduce this error. Our approach showed improvements of up to 6:1 over the original À-Trous filter on various regions of the image, while incurring only a minor computational overhead. We integrated the proposed filtering method into the recent interactive ray tracing system Embree and demonstrated its benefits.
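
The sketch below illustrates the undecimated à-trous decomposition that the filter above builds on, using the usual B3-spline kernel; the SURE-based coefficient selection itself is only hinted at in comments, since it depends on per-pixel variance estimates not given here.

```python
# A minimal sketch of the a-trous (undecimated) wavelet decomposition;
# the SURE-driven coefficient selection is described only in comments.
import numpy as np

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline kernel

def atrous_smooth(img, level):
    """Separable convolution with the B3 kernel dilated by 2**level
    ('holes' between taps), with edge clamping."""
    step = 2 ** level
    offsets = np.array([-2, -1, 0, 1, 2]) * step
    h, w = img.shape
    tmp = np.zeros((h, w))
    for wgt, off in zip(B3, offsets):  # horizontal pass
        idx = np.clip(np.arange(w) + off, 0, w - 1)
        tmp += wgt * img[:, idx]
    out = np.zeros((h, w))
    for wgt, off in zip(B3, offsets):  # vertical pass
        idx = np.clip(np.arange(h) + off, 0, h - 1)
        out += wgt * tmp[idx, :]
    return out

def atrous_decompose(img, levels=3):
    """Return detail (wavelet) layers and the final coarse layer.
    A SURE-style filter would estimate the error of keeping or shrinking
    each detail coefficient and keep only those that reduce the estimated
    mean squared error of the reconstruction."""
    current = img.astype(float)
    details = []
    for lv in range(levels):
        smooth = atrous_smooth(current, lv)
        details.append(current - smooth)
        current = smooth
    return details, current

img = np.random.default_rng(0).normal(size=(64, 64))
details, coarse = atrous_decompose(img, levels=3)
print(len(details), coarse.shape)
```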

Slit-light Laser Range Finding Using Perspective Warping Calibration (원근 와핑 보정을 이용한 선광원 레이저 거리 검출)

  • Ahn, Hyun-Sik
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.3
    • /
    • pp.232-237
    • /
    • 2010
  • In this paper, a slit-light laser range finding method using perspective warping calibration is proposed. The approach achieves relatively high accuracy even though the optical system is nonlinear. In the calibration step, we detect the calibration points marked on a calibration panel and locate the center of the slit-light laser stripe in the image, which are used to compute the real positions of the slit light by perspective warping. A calibration file is obtained by integrating the calibration data over translations of the panel. Range data are then acquired by interpolating the center position of the slit-light laser against the calibration coordinates. Experimental results show that the proposed method achieves an accuracy of 0.08 mm error over a depth range of 130 mm with a low-cost optical system.
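
As a hedged illustration of perspective-warping calibration (not the paper's exact procedure or calibration-file format), the sketch below estimates a homography from four panel marks via the direct linear transform and uses it to map a detected laser-stripe pixel to metric panel coordinates; all coordinate values are invented.

```python
# Illustrative perspective-warp calibration: a homography estimated from four
# known calibration marks maps a detected laser-stripe pixel to metric
# coordinates on the calibration panel.
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: solve H (3x3) so that dst ~ H @ src."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# four image-space corners of the panel vs. their metric positions (mm)
img_pts = [(102, 88), (530, 95), (521, 410), (110, 402)]
panel_mm = [(0, 0), (200, 0), (200, 150), (0, 150)]
H = homography_from_points(img_pts, panel_mm)
print(warp_point(H, 315, 250))   # laser-stripe centre pixel -> panel mm
```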

Data Hiding Using Sequential Hamming + k with m Overlapped Pixels

  • Kim, Cheonshik;Shin, Dongkyoo;Yang, Ching-Nung;Chen, Yi-Cheng;Wu, Song-Yu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.12
    • /
    • pp.6159-6174
    • /
    • 2019
  • Recently, Kim et al. introduced Hamming + k data hiding with m overlapped pixels (Hk_mDH), based on matrix encoding. The embedding rate (ER) of this method is 0.54, which is better than Hamming code HC(n, n − k) and HC(n, n − k) + 1 data hiding (H1DH), but still not sufficient. Hamming-code data hiding (HDH) uses a covering function COV(1, n = 2^k − 1, k), and H1DH has better embedding efficiency than HDH. The drawback of these methods is that they do not exploit the pixel space fully enough to increase the ER. In this paper, we increase the ER using sequential Hk_mDH (SHk_mDH), which fully exploits every pixel in a cover image. In SHk_mDH, a collision may occur when the same pixel position falls within two overlapped blocks. To solve this collision problem, we ensure that the number of modifications does not exceed 2 bits even when a collision occurs, by using OPAP and LSB substitution. Theoretical estimates of the average mean square error (AMSE) for these schemes demonstrate the advantage of our SHk_mDH scheme, and experimental results show that the proposed method is superior to previous schemes.
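
A small sketch of the matrix-encoding building block behind Hamming-code data hiding, using the (7,4) parity-check matrix to embed 3 message bits into 7 cover LSBs with at most one flipped bit; the sequential overlap and collision handling of SHk_mDH is not reproduced here.

```python
# Matrix encoding with the (7,4) Hamming parity-check matrix: 3 message bits
# are embedded in 7 cover LSBs by flipping at most one pixel LSB.
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])   # column j is j (1..7) in binary

def embed(cover_lsbs, msg_bits):
    syndrome = (H @ cover_lsbs) % 2
    diff = syndrome ^ msg_bits                   # which syndrome bits disagree
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]    # column index (1-based), 0 = no change
    stego = cover_lsbs.copy()
    if pos:
        stego[pos - 1] ^= 1                      # flip exactly one LSB
    return stego

def extract(stego_lsbs):
    return (H @ stego_lsbs) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
assert np.array_equal(extract(stego), msg)
print(stego)
```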

The Operational Comparison of SPOT GCP Acquisition and Accuracy Evaluation

  • Kim, Kam-Lae;Kim, Uk-Nam;Chun, Ho-Woun;Lee, Ho-Nam
    • Korean Journal of Geomatics
    • /
    • v.1 no.1
    • /
    • pp.1-5
    • /
    • 2001
  • This paper presents an operational comparison of SPOT triangulation for building a GCP library with an analytical plotter and a digital photogrammetric workstation (DPW). A GCP database derived from current SPOT images can be reused for other satellite image sensors when, for example, topographic maps or GCPs are lacking. However, a photogrammetric GCP-measurement workflow must deal with the scene-interpretation problem. There are two classical methods, depending on whether an analytical plotter or a DPW is used; regardless of the method, GCP measurement is the weakest point in automating photogrammetric orientation procedures. For the operational comparison, five models of SPOT panchromatic images (level 1A) and negative films (level 1AP) were used: ten images and film products covering five GRS areas. Photogrammetric measurements were carried out manually on a P2 analytical plotter and an LH Systems DPW770. We applied an exterior-orientation approach for the SPOT images based on approximately eighty national geodetic control points, located on mountain summits, as GCPs. Using sixteen well-spaced geodetic control points per model, all segments consistently showed RMS errors just below one pixel at the check points on the analytical instrument. In the DPW case, half of the ground control points could not be found or distinguished exactly when the image was displayed on the computer monitor. The experiments showed that the RMS errors of the DPW test fluctuated from case to case, and the magnitudes of the errors reached more than three pixels owing to the lack of image-interpretation capability. This indicates that geodetic control points are not suitable as ground control points for modeling SPOT images in a DPW.
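
A minimal sketch of the check-point accuracy measure reported above: the RMS error between model-predicted and manually measured image coordinates, expressed in pixels; the coordinate values shown are invented.

```python
# RMS error of predicted vs. measured check-point image coordinates, in pixels.
import numpy as np

def rms_error(predicted_xy, measured_xy):
    d = np.asarray(predicted_xy, float) - np.asarray(measured_xy, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

pred = [(101.2, 240.8), (355.9, 98.1), (512.4, 430.6)]
meas = [(100.7, 241.3), (356.6, 97.5), (511.9, 431.2)]
print(rms_error(pred, meas))   # below one pixel, as reported for the analytical plotter
```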

Ship Detection by Satellite Data: Radiometric and Geometric Calibrations of RADARSAT Data (위성 데이터에 의한 선박 탐지: RADARSAT의 대기보정과 기하보정)

  • Yang, Chan-Su
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.10 no.1 s.20
    • /
    • pp.1-7
    • /
    • 2004
  • RADARSAT is one of many possible data sources that can play an important role in marine surveillance, including ship detection, because radar sensors have two primary advantages: all-weather and day-or-night imaging. However, atmospheric effects on SAR imaging cannot be ignored, and any remote sensing image contains various geometric distortions. In this study, radiometric and geometric calibrations of RADARSAT data are carried out using SGX products georeferenced at level 1. Even a comparison of the near- and far-range sections of the same image requires such calibration. Radiometric calibration is performed by compensating for the effects of the local illuminated area and the incidence angle on the local backscatter, and the conversion of pixel DNs to beta nought and sigma nought is also investigated. Finally, automatic geometric calibration based on the four pixels given in the header file is compared against a marine chart; the errors in the latitude and longitude directions are 300 m and 260 m, respectively. This error extent is acceptable for open-sea applications and can be further reduced using a ground control point.
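
A hedged sketch of the DN-to-beta-nought / sigma-nought conversion mentioned above, in the usual RADARSAT lookup-table form beta0 = 10·log10((DN² + A3)/A2) and sigma0 = beta0 + 10·log10(sin θ); the gain and incidence-angle values below are placeholders, not real calibration constants.

```python
# Hedged sketch of DN -> beta nought -> sigma nought conversion with
# per-range-column gains A2 and fixed offset A3 (placeholder values only).
import numpy as np

def dn_to_beta0_db(dn, a2, a3=0.0):
    """Radar brightness in dB: beta0 = 10*log10((DN^2 + A3) / A2)."""
    return 10.0 * np.log10((dn.astype(float) ** 2 + a3) / a2)

def beta0_to_sigma0_db(beta0_db, incidence_deg):
    """Backscatter coefficient: sigma0 = beta0 + 10*log10(sin(incidence))."""
    return beta0_db + 10.0 * np.log10(np.sin(np.radians(incidence_deg)))

dn = np.array([[1200, 800], [950, 2100]])
a2 = np.array([3.5e6, 3.6e6])          # hypothetical per-range-column gains
incidence = np.array([32.0, 36.5])     # hypothetical local incidence angles (deg)
sigma0 = beta0_to_sigma0_db(dn_to_beta0_db(dn, a2), incidence)
print(sigma0)
```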

Comparison between Possibilistic c-Means (PCM) and Artificial Neural Network (ANN) Classification Algorithms in Land use/ Land cover Classification

  • Ganbold, Ganchimeg;Chasia, Stanley
    • International Journal of Knowledge Content Development & Technology
    • /
    • v.7 no.1
    • /
    • pp.57-78
    • /
    • 2017
  • There are several statistical classification algorithms available for land use/land cover classification; however, each involves a certain bias or compromise. Some methods, such as the parallelepiped approach in supervised classification, cannot classify continuous regions within a feature. On the other hand, while unsupervised classification takes maximum advantage of the spectral variability in an image, the maximally separable clusters in spectral space may not correspond well to the classes of interest in a given study area. In this research, the output of an ANN algorithm was compared with possibilistic c-means (PCM), an improvement on fuzzy c-means, on both a moderate-resolution Landsat 8 image and a high-resolution Formosat 2 image. The Formosat 2 multispectral data have an 8 m spatial resolution and were resampled to 10 m to maintain a uniform 1:3 ratio against the Landsat 8 image. Six classes were chosen for analysis: dense forest, eucalyptus, water, grassland, wheat, and riverine sand. In a standard false color composite (FCC), the six features reflect differently in the infrared region, with wheat producing the brightest pixel values, so signature collection per class was easily obtained for all classifications. The outputs of ANN and PCM were analyzed separately for accuracy, and an error matrix was generated to assess the quality and accuracy of the classification algorithms. Comparing the results of the two methods on a per-class basis, ANN produced a crisper output than PCM, which yielded clusters containing mixed pixels, especially on the moderate-resolution Landsat 8 imagery.
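
A rough sketch of the possibilistic c-means (PCM) typicality update, which is what allows a pixel to have low membership in every class rather than being forced into one cluster (in contrast with a crisp ANN output); the toy two-band pixels, centers, and bandwidth values are assumptions.

```python
# PCM typicality update and center re-estimation on toy two-band pixels.
import numpy as np

def pcm_typicality(pixels, centers, eta, m=2.0):
    """t[i, j] = 1 / (1 + (||x_j - v_i||^2 / eta_i)^(1/(m-1)))"""
    d2 = ((pixels[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
    return 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))

def pcm_update_centers(pixels, t, m=2.0):
    w = t ** m
    return (w @ pixels) / w.sum(axis=1, keepdims=True)

pixels = np.array([[0.1, 0.2], [0.12, 0.22], [0.8, 0.75], [0.82, 0.7]])
centers = np.array([[0.1, 0.2], [0.8, 0.72]])
eta = np.array([0.01, 0.01])            # per-class bandwidth (assumed)
t = pcm_typicality(pixels, centers, eta)
print(np.round(t, 3))
print(pcm_update_centers(pixels, t))
```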

Automatic generation of reliable DEM using DTED level 2 data from high resolution satellite images (고해상도 위성영상과 기존 수치표고모델을 이용하여 신뢰성이 향상된 수치표고모델의 자동 생성)

  • Lee, Tae-Yoon;Jung, Jae-Hoon;Kim, Tae-Jung
    • Spatial Information Research
    • /
    • v.16 no.2
    • /
    • pp.193-206
    • /
    • 2008
  • When stereo images are used for Digital Elevation Model (DEM) generation, a DEM is generally produced by matching the left image against the right image. In stereo matching, tie-points are used as initial match candidates, and their number and distribution influence the matching result. A DEM produced from the matching result contains errors such as holes and peaks, which are usually interpolated from neighboring pixel values. In this paper, we propose a DEM generation method that combines automatic tie-point extraction using an existing DEM and an image pyramid with interpolation of the new DEM from the existing DEM, in order to obtain a more reliable DEM. For testing, we used IKONOS, QuickBird, and SPOT5 stereo images together with DTED level 2 data. The results show that the proposed method automatically produces reliable DEMs. For validation, we compared the heights of the DEM generated by the proposed method with those of the existing DTED level 2 data; the RMSE was under 15 m.
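
A simplified sketch of the hole/peak repair idea described above: cells of the newly matched DEM that are missing or deviate strongly from the existing DTED level 2 surface are replaced by the existing-DEM values; the 30 m threshold and the tiny grids are illustrative only, not the paper's settings.

```python
# Repair holes and spikes in a matched DEM using a co-registered reference DEM.
import numpy as np

def repair_dem(new_dem, existing_dem, max_diff_m=30.0):
    """new_dem: matched DEM with np.nan holes; existing_dem: co-registered
    reference DEM on the same grid. Returns a blended, more reliable DEM."""
    repaired = new_dem.copy()
    holes = np.isnan(new_dem)
    # compare only valid cells against the reference surface
    diff = np.where(holes, existing_dem, new_dem) - existing_dem
    spikes = np.abs(diff) > max_diff_m
    bad = holes | spikes
    repaired[bad] = existing_dem[bad]
    return repaired

new_dem = np.array([[120.0, np.nan], [410.0, 118.5]])
dted = np.array([[119.0, 121.0], [122.0, 118.0]])
print(repair_dem(new_dem, dted))   # hole filled, 410 m spike replaced
```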

Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.535-543
    • /
    • 2002
  • In this paper, we propose a new gaze detection method using 2-D facial images captured by a camera mounted on top of the monitor. We consider only facial rotation and translation, not eye movements. The proposed method computes the gaze shift caused by facial rotation and the amount of facial translation separately, and by combining the two, the final gaze point on the monitor screen is obtained. The gaze shift caused by facial rotation is detected by a neural network (a multi-layer perceptron) whose inputs are the 2-D geometric changes of the facial feature points, and the amount of facial translation is estimated with image processing algorithms in real time. Experimental results show that the gaze detection accuracy between the computed positions and the real ones is about 2.11 inches RMS error when the distance between the user and a 19-inch monitor is about 50-70 cm. The processing time is about 0.7 seconds on a Pentium PC (233 MHz) with 320×240-pixel images.
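
An illustrative stand-in (not the authors' network) for the idea of regressing a monitor gaze point from the 2-D geometric changes of facial feature points with a multi-layer perceptron; scikit-learn's MLPRegressor and the synthetic training data are assumptions here.

```python
# Toy MLP regression from facial-feature displacements to a 2-D gaze point.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# features: e.g. (dx, dy) of several tracked facial points w.r.t. a frontal pose
X = rng.normal(size=(500, 8))
# targets: gaze position on the screen in inches (synthetic linear data + noise)
W = rng.normal(size=(8, 2))
y = X @ W + rng.normal(scale=0.1, size=(500, 2))

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
pred = model.predict(X)
rms = np.sqrt(np.mean(np.sum((pred - y) ** 2, axis=1)))
print(f"training RMS error: {rms:.2f} in")   # the paper reports ~2.11 in on real data
```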