• Title/Summary/Keyword: Sampling points

Number of sampling leaves for reflectance measurement of Chinese cabbage and kale

  • Chung, Sun-Ok; Ngo, Viet-Duc; Kabir, Md. Shaha Nur; Hong, Soon-Jung; Park, Sang-Un; Kim, Sun-Ju; Park, Jong-Tae
    • Korean Journal of Agricultural Science / v.41 no.3 / pp.169-175 / 2014
  • The objective of this study was to investigate the effects of pre-processing method and number of sampling leaves on the stability of reflectance measurements for Chinese cabbage and kale leaves. Chinese cabbage and kale were transplanted and cultivated in a plant factory, and leaf samples were collected 4 weeks after transplanting of the seedlings. Spectral data were collected with a UV/VIS/NIR spectrometer in the wavelength region from 190 to 1130 nm. All leaves (mature and young) were measured at 9 and 12 points on the upper blade area for kale and cabbage leaves, respectively. To reduce spectral noise, the raw spectral data were pre-processed by different methods: i) moving average, ii) Savitzky-Golay filter, iii) local regression using weighted linear least squares and a 1st-degree polynomial model (lowess), iv) local regression using weighted linear least squares and a 2nd-degree polynomial model (loess), v) a robust version of lowess, and vi) a robust version of loess, each with 7, 11, and 15 smoothing points. Effects of the number of sampling leaves were investigated by reflectance difference (RD) and cross-correlation (CC) methods. Results indicated that spectral data collected from 4 sampling leaves were adequate for both crops, as adding more leaves did not change the stability of the measurement much. Furthermore, the moving average method with 11 smoothing points was judged to provide reliable pre-processed data for further analysis.
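
As a rough illustration of the smoothing step this abstract describes, the sketch below applies a centered moving average with 11 smoothing points (the setting the authors favored) and, for comparison, a Savitzky-Golay filter to a synthetic reflectance spectrum. The spectrum, noise level, and peak shape are illustrative stand-ins, not the study's data.

```python
# Two of the compared pre-processing methods: a centered moving average and a
# Savitzky-Golay filter, both with an 11-point window, on a synthetic spectrum.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(190, 1130, 941)            # nm, as in the study
true = np.exp(-((wavelengths - 550) / 120.0) ** 2)   # idealized reflectance peak
raw = true + np.random.normal(scale=0.02, size=wavelengths.size)  # noisy spectrum

# i) moving average with 11 smoothing points
window = 11
moving_avg = np.convolve(raw, np.ones(window) / window, mode="same")

# ii) Savitzky-Golay filter: 11-point window, 2nd-degree local polynomial
sg = savgol_filter(raw, window_length=window, polyorder=2)

for name, smoothed in [("moving average", moving_avg), ("Savitzky-Golay", sg)]:
    rmse = np.sqrt(np.mean((smoothed - true) ** 2))
    print(f"{name}: RMSE vs. noise-free spectrum = {rmse:.4f}")
```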

A Random Sampling Method in Estimating the Mean Areal Precipitation Using Kriging (임의 추출방식 크리깅을 이용한 평균면적우량의 추정)

  • 이상일
    • Water for future / v.26 no.2 / pp.79-87 / 1993
  • A new method to estimate the mean areal precipitation using kriging is developed. Unlike the conventional approach, the points for the double and quadruple numerical integrations in the kriging equations are selected randomly within the boundary of the area of interest. This eliminates the conventional need to divide the area into subareas and to compute the center of each subarea, which makes the developed method more powerful for complex boundaries. An algorithm to select random points within an arbitrary boundary, based on the theory of complex variables, is described. Monte Carlo simulation showed that the estimation error using randomly selected points is inversely proportional to the square root of the number of sampling points.
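
The closing claim, that the estimation error decays as the inverse square root of the number of random points, is easy to demonstrate. The sketch below is not the paper's kriging code: it rejection-samples uniform points inside an arbitrary polygon (a stand-in for the basin boundary; the paper's own point-selection algorithm uses complex-variable theory) and integrates a synthetic rainfall field.

```python
# Monte Carlo mean-areal estimate over an arbitrary polygon; the error should
# fall off roughly as 1 / sqrt(N) on average across repetitions.
import numpy as np
from matplotlib.path import Path

# An arbitrary non-convex boundary standing in for a watershed outline.
boundary = Path([(0, 0), (4, 0), (4, 2), (2, 1), (0, 3)])

def random_points_inside(path, n, rng):
    """Rejection-sample n uniform points inside the polygon."""
    lo, hi = path.get_extents().min, path.get_extents().max
    pts = np.empty((0, 2))
    while len(pts) < n:
        cand = rng.uniform(lo, hi, size=(2 * n, 2))
        pts = np.vstack([pts, cand[path.contains_points(cand)]])
    return pts[:n]

rng = np.random.default_rng(0)
precip = lambda x, y: 50 + 10 * np.sin(x) + 5 * y   # synthetic rainfall field

# High-N reference value, then error for increasing sample sizes:
ref_pts = random_points_inside(boundary, 200_000, rng)
ref = precip(ref_pts[:, 0], ref_pts[:, 1]).mean()
for n in (100, 400, 1600, 6400):   # quadrupling N roughly halves the error
    p = random_points_inside(boundary, n, rng)
    print(f"N={n:5d}: |error| = {abs(precip(p[:, 0], p[:, 1]).mean() - ref):.4f}")
```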

A Study on ECG Data Compression Algorithm Using Neural Network (신경회로망을 이용한 심전도 데이터 압축 알고리즘에 관한 연구)

  • 김태국; 이명호
    • Journal of Biomedical Engineering Research / v.12 no.3 / pp.191-202 / 1991
  • This paper describes an ECG data compression algorithm using a neural network trained with the error back-propagation algorithm; compression is performed by exploiting the learning ability of the network. The CSE database, digitized at 12 bits and 500 samples/sec, is used as the input signal. To reduce the number of units in the input layer, the sampling rate is modified to 250 samples/sec in the QRS complex and 125 samples/sec in the P and T waves, respectively. As the input pattern of the neural network, the samples from 35 points before to 45 points after each R peak are used.
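
A minimal sketch of the input-pattern construction the abstract describes: an 81-sample window running from 35 points before to 45 points after each R peak. The ECG trace and R-peak indices below are synthetic stand-ins, not CSE records, and R-peak detection itself is assumed to be done elsewhere.

```python
# Build fixed-length input patterns around R peaks for the compression network.
import numpy as np

fs = 500                                   # CSE sampling rate, samples/sec
ecg = np.random.randn(5000)                # stand-in for a digitized ECG lead
r_peaks = [400, 900, 1400, 1900, 2400]     # stand-in R-peak sample indices

def input_patterns(signal, peaks, back=35, forward=45):
    """Collect one (back + 1 + forward)-sample window per usable R peak."""
    windows = []
    for r in peaks:
        if r - back >= 0 and r + forward + 1 <= len(signal):
            windows.append(signal[r - back : r + forward + 1])
    return np.array(windows)

patterns = input_patterns(ecg, r_peaks)
print(patterns.shape)   # (5, 81): one 81-sample input vector per R peak
```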

A Study on Modeling of Search Space with GA Sampling

  • Banno, Yoshifumi; Ohsaki, Miho; Yoshikawa, Tomohiro; Shinogi, Tsuyoshi; Tsuruoka, Shinji
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.86-89 / 2003
  • To model a numerical problem space under the limitation of available data, we need to extract sparse but key points from the space and to approximate the space efficiently with them. This study proposes a sampling method based on the search process of a genetic algorithm and a space-modeling method based on least-squares approximation using a summation of Gaussian functions. We conducted simulations to evaluate them on several kinds of problem spaces: De Jong's, Schaffer's, and our original one. We then compared the performance of our sampling method against sampling at regular intervals, and of our modeling method against modeling with a polynomial. The results showed that the error between a problem space and its model was smallest for the combination of our sampling and modeling methods on many problem spaces when the number of samples was small.
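
The modeling half of the method, least-squares approximation with a summation of Gaussian functions, can be sketched as follows. The basis centers, width, and the toy oscillatory test function are illustrative choices, not the authors' settings; the uniformly drawn samples stand in for points visited by the GA.

```python
# Fit a weighted sum of Gaussian basis functions to sampled points by solving
# the linear least-squares problem ||G w - y||^2 with G_ij = exp(-(x_i-c_j)^2/w^2).
import numpy as np

def gaussian_sum_fit(x_samples, y_samples, centers, width):
    """Return a callable model x -> sum_j w_j * exp(-((x - c_j)/width)^2)."""
    G = np.exp(-((x_samples[:, None] - centers[None, :]) / width) ** 2)
    w, *_ = np.linalg.lstsq(G, y_samples, rcond=None)
    return lambda x: np.exp(-((x[:, None] - centers[None, :]) / width) ** 2) @ w

# Toy 1-D "problem space" with a Schaffer-like oscillatory shape.
x = np.random.uniform(-5, 5, 40)            # stand-in for GA-sampled points
y = np.sin(3 * x) * np.exp(-0.1 * x ** 2)   # fitness values at those points

model = gaussian_sum_fit(x, y, centers=np.linspace(-5, 5, 15), width=0.8)
grid = np.linspace(-5, 5, 200)
true = np.sin(3 * grid) * np.exp(-0.1 * grid ** 2)
print("max |model - true| on grid:", np.max(np.abs(model(grid) - true)))
```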

Study on the Soil Sample Number of Total Petroleum Hydrocarbons Fractionation for Risk Assessment in Contaminated Site (석유계총탄화수소의 위해성평가 시 적정 분획 시료수 결정에 대한 고찰)

  • Jeon, Inhyeong; Kim, Sang Hyun; Chung, Hyeonyong; Jeong, Buyun; Noh, Hoe-Jung; Kim, Hyun-Koo; Nam, Kyoungphile
    • Journal of Soil and Groundwater Environment / v.24 no.5 / pp.11-16 / 2019
  • In this study, a reliable number of soil samples for TPH fractionation was investigated in order to perform risk assessment. TPH was fractionated into volatile petroleum hydrocarbons (VPH) with three subgroups and extractable petroleum hydrocarbons (EPH) with four subgroups. At the study site, concentrations of each fraction were determined at 18 sampling points, and the 95% upper confidence limit (UCL) was used as the exposure concentration of each fraction. Then, 5 sampling points were randomly selected out of the 18 and an exposure concentration was calculated; this process was repeated 30 times, and the results were compared statistically. Exposure concentrations of EPH obtained from the 18 points were 99.9, 339.1, 27.3, and 85.9 mg/kg for aliphatic C9-C18, C19-C36, and C37-C40, and aromatic C11-C22, respectively. The corresponding exposure concentrations obtained from 5 points were 139.8, 462.8, 35.1, and 119.4 mg/kg, which were significantly higher than the 18-point results (p < 0.05). Our results suggest that a limited number of samples for TPH fractionation may bias the estimated exposure concentrations of TPH fractions, and it is recommended that more than 30 samples be analyzed for TPH fractionation when performing risk assessment.
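
A minimal sketch of the comparison the abstract describes: a 95% upper confidence limit (UCL) of the mean computed from all 18 points versus UCLs from 30 random subsamples of 5 points. A one-sided t-based UCL and lognormal stand-in concentrations are assumed here; the study does not state which UCL estimator it used.

```python
# Compare the full-data 95% UCL of the mean with UCLs from small subsamples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
conc = rng.lognormal(mean=4.5, sigma=0.6, size=18)   # stand-in fraction, mg/kg

def ucl95(x):
    """One-sided 95% UCL of the mean: mean + t(0.95, n-1) * s / sqrt(n)."""
    n = len(x)
    return x.mean() + stats.t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

full = ucl95(conc)
sub = [ucl95(rng.choice(conc, size=5, replace=False)) for _ in range(30)]
print(f"UCL from 18 points: {full:.1f} mg/kg")
print(f"mean UCL from 30 subsamples of 5 points: {np.mean(sub):.1f} mg/kg")
```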

3D Reconstruction using three vanishing points from a single image

  • Yoon, Yong-In; Im, Jang-Hwan; Kim, Dae-Hyun; Park, Jong-Soo
    • Proceedings of the IEEK Conference / 2002.07b / pp.1145-1148 / 2002
  • This paper presents a new method that uses only three vanishing points to compute the dimensions of an object and its pose from a single perspective image taken by a camera, addressing the problem of recovering 3D models from the three vanishing points of a box scene. Our approach computes the three vanishing points without information such as the focal length, rotation matrix, or translation. We assume that the object can be modeled as a linear function of a dimension vector ν. The input to the reconstruction is a set of correspondences between features in the model and features in the image. The resulting optimization over the dimensions of the parameterized model can be solved by standard nonlinear optimization techniques with a multi-start method, which generates multiple starting points for the optimizer by sampling the parameter space uniformly.
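
The multi-start strategy mentioned at the end is a generic technique and can be sketched independently of the paper's cost function: start a standard nonlinear optimizer from points sampled uniformly over the parameter space and keep the best result. The toy objective below is a hypothetical stand-in for the reprojection-error cost of the dimension vector ν.

```python
# Multi-start nonlinear optimization: uniform starting points, best result kept.
import numpy as np
from scipy.optimize import minimize

def objective(v):
    """Toy non-convex cost with several local minima (stand-in objective)."""
    return np.sum((v - 1.0) ** 2) + 0.5 * np.sum(np.sin(5 * v) ** 2)

rng = np.random.default_rng(0)
bounds = (-3.0, 3.0)
starts = rng.uniform(*bounds, size=(20, 3))   # 20 uniform starts in 3-D

best = min((minimize(objective, x0, method="BFGS") for x0 in starts),
           key=lambda res: res.fun)
print("best cost:", best.fun, "at v =", best.x)
```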

Effects of Latin hypercube sampling on surrogate modeling and optimization

  • Afzal, Arshad; Kim, Kwang-Yong; Seo, Jae-won
    • International Journal of Fluid Machinery and Systems / v.10 no.3 / pp.240-253 / 2017
  • Latin hypercube sampling (LHS) is a widely used design-of-experiments technique for selecting the design points at which simulations are run to construct a surrogate model. The exploration/exploitation properties of surrogate models depend on the size and distribution of the design points in the chosen design space. The present study evaluated the performance characteristics of various surrogate models as a function of the LHS procedure (sample size and spatial distribution) for a diverse set of optimization problems. The analysis covered two types of problems: (1) thermal-fluid design problems (optimization of a convergent-divergent micromixer coupled with pulsatile flow, and of boot-shaped ribs), and (2) analytical test functions (six-hump camel back, Branin-Hoo, Hartman 3, and Hartman 6). Three surrogate models were tested: response surface approximation, Kriging, and radial basis neural networks. The important findings are illustrated using box plots. The surrogate models were analyzed in terms of global exploration (accuracy over the domain space) and local exploitation (ease of finding the global optimum point). Radial basis neural networks showed the best overall global exploration characteristics as well as the strongest tendency to find the approximate optimal solution for the majority of tested problems. To build a surrogate model, it is recommended to use an initial sample size equal to 15 times the number of design variables. The study provides useful guidelines on the effect of initial sample size and distribution on surrogate construction and subsequent optimization using an LHS sampling plan.
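
A minimal sketch of the recommended sampling plan, an initial Latin hypercube design with 15 samples per design variable, using SciPy's quasi-Monte Carlo module. The number of variables and their bounds are illustrative, not those of the micromixer or rib problems.

```python
# Latin hypercube design sized at 15 samples per design variable.
import numpy as np
from scipy.stats import qmc

n_vars = 4                     # number of design variables (assumed)
n_samples = 15 * n_vars        # the paper's recommended initial sample size

sampler = qmc.LatinHypercube(d=n_vars, seed=0)
unit_samples = sampler.random(n=n_samples)             # points in [0, 1)^d

lower, upper = np.zeros(n_vars), np.ones(n_vars) * 10.0   # assumed bounds
design = qmc.scale(unit_samples, lower, upper)         # map to the design space
print(design.shape)   # (60, 4): one row per design point to be simulated
```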

Research on the Stability of Soil Contaminated by Heavy Metals and Oil as a Fill Material (중금속 및 유류로 오염된 토질의 성토재료로서의 안정성에 관한 연구)

  • Lee, Chung-Sook; Eom, Tae-Kyu; Choi, Yong-Kyu; Lee, Min-Hee
    • Journal of the Korean GEO-environmental Society / v.5 no.2 / pp.5-13 / 2004
  • At an apartment construction site, soils contaminated with heavy metals and oil were found. Representative soil samples were taken at 7 sampling points, and environmental tests for heavy metals and oil were carried out to assess the geotechnical stability of the contaminated soils. The soils at 2 sampling points were heavily contaminated, so it was determined that these soils must be discarded. For 1 sampling point with slightly contaminated soil, a stability analysis was performed to confirm its re-applicability as a fill material, and it was concluded that this soil can be re-used.

RPC MODEL FOR ORTHORECTIFYING VHRS IMAGE

  • Ke, Luong Chinh
    • Proceedings of the KSRS Conference / v.2 / pp.631-634 / 2006
  • The three main sources for establishing a GIS are the orthomap at scale 1:5 000 with a Ground Sampling Distance (GSD) of 0.5 m, DEM/DTM data with a height error of ±1.0 m, and the topographic map at scale 1:10 000. The new era of Very High Resolution Satellite (VHRS) images such as IKONOS, QuickBird, EROS, OrbView, and others, with GSD even lower than 1 m, has the potential for producing orthomaps at the large scale of 1:5 000, updating existing maps, compiling general-purpose or thematic maps, and supporting GIS. The accuracy of an orthomap generated from a VHRS image strongly affects GIS reliability, and this accuracy depends first of all on the chosen sensor geometric model. This paper first presents the theoretical basis of the Rational Polynomial Coefficient (RPC) model implemented in the commercial ImageStation systems for orthorectifying VHRS images. The RPC model of a VHRS image is a replacement camera model that represents the indirect relation between the terrain and its image acquired along the flight orbit. Finally, the practical accuracies of IKONOS and QuickBird images orthorectified by the RPC model in the Canadian PCI Geomatica system are presented; these are an important indication for the practical production of digital orthomaps.
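
The RPC form itself is compact: each image coordinate is the ratio of two polynomials in normalized ground coordinates. Real RPC models use 20 coefficients per polynomial (all monomials up to cubic); the sketch below truncates to first-order terms for readability, and the coefficients are made up for illustration.

```python
# Evaluate one image coordinate as a ratio of polynomials in normalized
# ground coordinates, the core of the RPC replacement camera model.
import numpy as np

def rpc_coordinate(num_coef, den_coef, X, Y, Z):
    """r = P_num(X, Y, Z) / P_den(X, Y, Z) over the first-order basis
    [1, X, Y, Z]; full RPCs extend the basis to all cubic monomials."""
    basis = np.array([1.0, X, Y, Z])
    return (num_coef @ basis) / (den_coef @ basis)

# Illustrative coefficients (a real sensor's RPCs ship with the image).
line_num = np.array([0.002, -0.98, 0.01, 0.05])
line_den = np.array([1.0, 0.001, -0.002, 0.0005])

# Normalized ground coordinates (lat, lon, height scaled to roughly [-1, 1]).
print(rpc_coordinate(line_num, line_den, X=0.3, Y=-0.1, Z=0.05))
```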

Bayesian Multiple Change-Point for Small Data (소량자료를 위한 베이지안 다중 변환점 모형)

  • Cheon, Soo-Young; Yu, Wenxing
    • Communications for Statistical Applications and Methods / v.19 no.2 / pp.237-246 / 2012
  • Bayesian methods have recently been used to identify multiple change-points, but studies for small data are limited. This paper suggests a Bayesian noncentral t distribution change-point model for small data and applies the Metropolis-Hastings-within-Gibbs sampling algorithm to the proposed model. Numerical results on simulated and real data show the performance of the new model in terms of the quality of the resulting estimates of the number and positions of change-points for small data.
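
A heavily simplified sketch of the Metropolis-Hastings-within-Gibbs idea for a single change-point: Gibbs steps draw the segment means, and an MH step moves the change-point position. A normal likelihood with known variance is substituted for the paper's noncentral t model, and priors are flat; this illustrates the sampler structure, not the authors' model.

```python
# MH-within-Gibbs for one change-point in a small series (simplified model).
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 15), rng.normal(2, 1, 10)])  # small data
n = len(y)

def log_lik(k, mu1, mu2):
    """Log-likelihood of a change at index k with segment means mu1, mu2."""
    return -0.5 * (np.sum((y[:k] - mu1) ** 2) + np.sum((y[k:] - mu2) ** 2))

k, draws = n // 2, []
for it in range(5000):
    # Gibbs: segment means given k (flat prior, known unit variance).
    mu1 = rng.normal(y[:k].mean(), 1 / np.sqrt(k))
    mu2 = rng.normal(y[k:].mean(), 1 / np.sqrt(n - k))
    # MH: random-walk proposal for the change-point position.
    k_prop = int(np.clip(k + rng.integers(-2, 3), 2, n - 2))
    if np.log(rng.uniform()) < log_lik(k_prop, mu1, mu2) - log_lik(k, mu1, mu2):
        k = k_prop
    if it > 1000:
        draws.append(k)
print("posterior mode of change-point position:", np.bincount(draws).argmax())
```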