• Title/Summary/Keyword: Least-squares-based method


Variance function estimation with LS-SVM for replicated data

  • Shim, Joo-Yong;Park, Hye-Jung;Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / v.20 no.5 / pp.925-931 / 2009
  • In this paper we propose a variance function estimation method for replicated data based on averages of squared residuals obtained from the mean function estimated by the least squares support vector machine (LS-SVM). The Newton-Raphson method is used to obtain the parameter vector associated with the variance function estimate. Furthermore, cross-validation functions are introduced to select the hyper-parameters that affect the performance of the proposed estimation method. Experimental results are then presented which illustrate the performance of the proposed procedure.
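
The procedure can be illustrated with a minimal sketch: fit the mean function with an LS-SVM (kernel ridge form), average the squared residuals within each replicate group, and smooth those averages to obtain a variance function. The synthetic data, the RBF kernel, and the second LS-SVM smoothing step (used here in place of the paper's Newton-Raphson update for the variance parameter vector) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_mean(X, y, lam=1e-2, gamma=1.0):
    """LS-SVM / kernel ridge estimate of the mean function (simplified, no bias term)."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha

# replicated data: several y-values observed at each distinct x
X = np.repeat(np.linspace(0, 1, 20), 5)[:, None]
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.1 + 0.4 * X[:, 0])

mean_hat = lssvm_mean(X, y)
resid2 = (y - mean_hat(X)) ** 2

# average the squared residuals within each replicate group, then smooth the
# averages (again with an LS-SVM fit) to obtain a variance function estimate
xs, inv = np.unique(X[:, 0], return_inverse=True)
avg_r2 = np.bincount(inv, weights=resid2) / np.bincount(inv)
var_hat = lssvm_mean(xs[:, None], avg_r2, lam=1e-1)
print(var_hat(xs[:, None])[:5])
```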

Estimation on a two-parameter Rayleigh distribution under the progressive Type-II censoring scheme: comparative study

  • Seo, Jung-In;Seo, Byeong-Gyu;Kang, Suk-Bok
    • Communications for Statistical Applications and Methods / v.26 no.2 / pp.91-102 / 2019
  • In this paper, we propose a new estimation method based on a weighted linear regression framework to obtain estimators for the unknown parameters of a two-parameter Rayleigh distribution under a progressive Type-II censoring scheme. We also provide unbiased estimators of the location and scale parameters that involve a nuisance parameter, and an estimator based on a pivotal quantity that does not depend on the other parameter. The proposed weighted least squares estimator (WLSE) of the location parameter does not depend on the scale parameter, and likewise the WLSE of the scale parameter does not depend on the location parameter. The results are compared with the maximum likelihood and pivot-based estimation methods. The assessments and comparisons are carried out using Monte Carlo simulations and real data analysis. The simulation results show that the estimators $\hat{\mu}_u(\hat{\theta}_p)$ and $\hat{\theta}_p(\hat{\mu}_u)$ are superior to the other estimators in terms of mean squared error (MSE) and bias.
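
As a rough illustration of the weighted-linear-regression idea, the sketch below fits a two-parameter Rayleigh distribution to a complete (uncensored) synthetic sample by regressing order statistics on standard Rayleigh quantiles with user-supplied weights; the progressive Type-II censoring adjustments and the paper's specific weight choices are not reproduced.

```python
import numpy as np

def rayleigh_wls_complete(x, weights=None):
    """Weighted least squares fit of a two-parameter Rayleigh distribution
    (cdf F(x) = 1 - exp(-(x - mu)^2 / (2 theta^2)), x >= mu) to a COMPLETE
    sample, by regressing order statistics on standard Rayleigh quantiles.
    Illustrative only: the paper handles progressive Type-II censoring and
    derives its own weights, which are not reproduced here."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1.0)      # plotting positions
    q = np.sqrt(-2.0 * np.log1p(-p))         # standard Rayleigh quantiles
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    # weighted simple linear regression x_(i) ~ mu + theta * q_i
    W = np.diag(w)
    A = np.column_stack([np.ones(n), q])
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ x)
    mu_hat, theta_hat = beta
    return mu_hat, theta_hat

rng = np.random.default_rng(1)
sample = 2.0 + 1.5 * np.sqrt(-2.0 * np.log(rng.uniform(size=500)))
print(rayleigh_wls_complete(sample))   # roughly (2.0, 1.5)
```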

Construction of a Ginsenoside Content-predicting Model based on Hyperspectral Imaging

  • Ning, Xiao Feng;Gong, Yuan Juan;Chen, Yong Liang;Li, Hongbo
    • Journal of Biosystems Engineering / v.43 no.4 / pp.369-378 / 2018
  • Purpose: The aim of this study was to construct a saponin content-predicting model using shortwave infrared imaging spectroscopy. Methods: The experiment used a shortwave infrared imaging spectrometer and the ENVI spectral acquisition software, sampling spectra over the 910 nm to 2500 nm range. The corresponding preprocessing and mathematical modeling were performed with the Unscrambler 9.7 software to establish a nondestructive spectral prediction model for ginsenoside content. Results: The optimal preprocessing method was determined to be a standard normal variate transformation combined with a second-order derivative. The coefficient of determination, $R^2$, of the model established by the partial least squares method was 0.9999, while the root mean squared error of prediction (RMSEP) was 0.0043 and the root mean squared error of calibration (RMSEC) was 0.0041. The residuals of the majority of the prediction samples were within $\pm 1$. Conclusion: The experiment showed that the predicted values were highly correlated with the real values and gave a good prediction result, so that this technique can be applied to the nondestructive testing of ginseng quality.
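
A hedged sketch of this kind of pipeline, using scikit-learn's PLSRegression on synthetic stand-in spectra: SNV normalization followed by a Savitzky-Golay second derivative, then a partial least squares fit and an RMSEP on held-out samples. The data, derivative window length, and number of latent variables below are assumptions for illustration only, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# synthetic stand-in for the SWIR spectra (910-2500 nm); real data not available here
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 200)).cumsum(axis=1)           # 60 samples x 200 "wavelengths"
y = X[:, 50] * 0.01 + rng.normal(scale=0.05, size=60)   # pseudo ginsenoside content

# preprocessing reported as optimal in the paper: SNV followed by a 2nd derivative
Xp = savgol_filter(snv(X), window_length=11, polyorder=2, deriv=2, axis=1)

pls = PLSRegression(n_components=5).fit(Xp[:40], y[:40])
rmsep = np.sqrt(np.mean((pls.predict(Xp[40:]).ravel() - y[40:]) ** 2))
print(f"RMSEP on held-out samples: {rmsep:.4f}")
```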

Comparison of Different Schemes for Speed Sensorless Control of Induction Motor Drives by Neural Network (유도전동기의 속도 센서리스 제어를 위한 신경회로망 알고리즘의 추정 특성 비교)

  • 이경훈;국윤상;김윤호;최원범
    • Proceedings of the KIPE Conference / 1999.07a / pp.526-530 / 1999
  • This paper presents a newly developed speed sensorless drive using neural network algorithms. The neural network algorithms can be divided into three categories. The first is a back-propagation-based NN algorithm, which is the well-known gradient descent method. The second is an extended Kalman filter-based NN algorithm, which uses a time-varying learning rate. The third is a recursive least squares-based NN algorithm, which is faster and more stable than the classical back-propagation algorithm for training multilayer perceptrons. The number of iterations required to converge and the mean-squared error between the desired and actual outputs are compared for each method. The theoretical analysis and experimental results are discussed.
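
The recursive least squares (RLS) rule mentioned above can be sketched in its simplest form, a single linear neuron with a forgetting factor; the full multilayer-perceptron training and the induction-motor speed estimation of the paper are not reproduced here.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least squares step for a linear-in-parameters model
    d ≈ w·x. lam is the forgetting factor and P the inverse correlation matrix.
    This is the generic RLS rule for a single linear neuron, not the paper's
    full multilayer-perceptron training."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    return w, P, e

rng = np.random.default_rng(3)
true_w = np.array([0.7, -1.2, 0.3])
w, P = np.zeros(3), np.eye(3) * 1e3
for _ in range(500):
    x = rng.normal(size=3)
    d = true_w @ x + rng.normal(scale=0.01)
    w, P, _ = rls_update(w, P, x, d)
print(w)   # converges to true_w far faster than plain gradient descent
```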

Determination of Design Width for Medium Streams in the Han River Basin (한강유역의 중소하천에 대한 계획하폭 산정)

  • Jeon, Se-Jin;An, Tae-Jin;Park, Jeong-Eung
    • Journal of Korea Water Resources Association / v.31 no.6 / pp.675-684 / 1998
  • This paper presents empirical formulas for determining the design width for medium rivers in the Han River basin. The design flood, watershed area, and channel slope of 216 medium rivers in the Han River basin were collected. The design-width formulas are then determined by 1) the least squares (LS) method, 2) the least median of squares (LMS) method, and 3) the reweighted least squares method based on the LMS (RLS). Six types of formulas are considered in order to determine an acceptable form for medium streams in the Han River basin. The root mean squared errors (RMSE), absolute mean errors (AME), and mean errors (ME) are computed to test the formulas derived by the three regression methods. It is found that the equation relating stream width to watershed area and channel slope is acceptable for determining the design width of medium streams in the Han River basin. The proposed equations are expected to serve as an index for determining the design width of medium streams in the Han River basin.
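
A minimal sketch of the least squares step: fitting a power-law design-width formula of an assumed form B = a·A^b·S^c in log space and reporting the RMSE. The synthetic data merely stand in for the 216 streams, and the LMS and reweighted-LS variants used in the paper are only noted in the comments.

```python
import numpy as np

def fit_width_formula(A, S, B):
    """Ordinary least squares fit of a power-law design-width formula
    B = a * A^b * S^c in log space. A: watershed area, S: channel slope,
    B: observed width. Illustrative only; the paper also applies the least
    median of squares (LMS) and reweighted LS (RLS) estimators."""
    X = np.column_stack([np.ones_like(A), np.log(A), np.log(S)])
    coef, *_ = np.linalg.lstsq(X, np.log(B), rcond=None)
    a, b, c = np.exp(coef[0]), coef[1], coef[2]
    pred = a * A**b * S**c
    rmse = np.sqrt(np.mean((pred - B) ** 2))
    return (a, b, c), rmse

rng = np.random.default_rng(4)
A = rng.uniform(10, 500, size=216)        # km^2, stand-in for the 216 streams
S = rng.uniform(1e-4, 1e-2, size=216)
B = 4.0 * A**0.5 * S**-0.1 * rng.lognormal(sigma=0.1, size=216)
print(fit_width_formula(A, S, B))
```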

A Hierarchical Image Mosaicing using Camera and Object Parameters for Efficient Video Database Construction (효율적인 비디오 데이터베이스 구축을 위해 카메라와 객체 파라미터를 이용한 계층형 영상 모자이크)

  • 신성윤;이양원
    • Journal of Korea Multimedia Society / v.5 no.2 / pp.167-175 / 2002
  • Image mosaicing creates a new image by composing related video frames or still images; it is performed through the arrangement, composition, and redundancy analysis of images. This paper proposes a hierarchical image mosaicing system using camera and object parameters for efficient video database construction. A tree-based image mosaicing scheme is implemented to reduce computation time and to construct both static and dynamic image mosaics. Camera parameters are measured using the least sum of squared differences and an affine model. A dynamic object detection algorithm is proposed for extracting dynamic objects. For object extraction, difference image, macro block, region splitting, and 4-split detection methods are proposed and used. Also, a dynamic positioning method is used for presenting dynamic objects, and a blurring method is used for creating a flexible mosaic image.
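
The least squares estimation of an affine camera model from point correspondences, as mentioned in the abstract, can be sketched as follows; the correspondences here are synthetic, and the block-matching (least sum of squared differences) stage that would supply them in the paper is omitted.

```python
import numpy as np

def fit_affine_lsq(src, dst):
    """Least squares estimate of a 2D affine model mapping src -> dst point
    correspondences between consecutive frames: dst ≈ A @ src + t.
    A toy stand-in for the camera-parameter step described in the abstract."""
    n = len(src)
    # design matrix for the 6 affine parameters (a11 a12 tx a21 a22 ty)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2], M[0::2, 2] = src, 1.0
    M[1::2, 3:5], M[1::2, 5] = src, 1.0
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    A = p[[0, 1, 3, 4]].reshape(2, 2)
    t = p[[2, 5]]
    return A, t

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
A_true, t_true = np.array([[1.0, 0.1], [-0.1, 1.0]]), np.array([2.0, -1.0])
dst = src @ A_true.T + t_true
print(fit_affine_lsq(src, dst))   # recovers A_true and t_true
```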

Interpolation method of head-related transfer function based on the least squares method and an acoustic modeling with a small number of measurement points (최소자승법과 음향학적 모델링 기반의 적은 개수의 측정점에 대한 머리전달함수 보간 기법)

  • Lee, Seokjin
    • The Journal of the Acoustical Society of Korea / v.36 no.5 / pp.338-344 / 2017
  • In this paper, an interpolation method for HRTFs (Head-Related Transfer Functions) is proposed, especially for small measurement data sets. The proposed algorithm is based on an acoustic model of the HRTFs and interpolates them by estimating the model coefficients. However, estimating the model coefficients is difficult when measurement points are scarce, so the algorithm addresses this problem with data augmentation based on VBAP (Vector Based Amplitude Panning). The proposed algorithm therefore consists of two steps: a data augmentation step based on VBAP and a model coefficient estimation step based on the least squares method. The algorithm was evaluated in a simulation with measured data from the CIPIC (Center for Image Processing and Integrated Computing) HRTF database, and the simulation results show that it reduces the mean-squared error by 1.5 dB to 4 dB compared with the conventional algorithms.
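
A simplified sketch of the least squares model-coefficient step: HRTF magnitudes on a sparse azimuth grid are fitted with a low-order circular-harmonic basis (an assumption standing in for the paper's acoustic model), and the fitted coefficients are evaluated at denser query directions. The VBAP augmentation step is only indicated in a comment.

```python
import numpy as np

def fourier_basis(az, order=3):
    """Low-order circular-harmonic basis on azimuth (radians); a simplified
    stand-in for the acoustic HRTF model used in the paper."""
    cols = [np.ones_like(az)]
    for m in range(1, order + 1):
        cols += [np.cos(m * az), np.sin(m * az)]
    return np.column_stack(cols)

def interpolate_hrtf(az_meas, h_meas, az_query, order=3):
    """Least squares estimate of the model coefficients from the measured
    points, then evaluation at the query directions. The VBAP-based
    augmentation of the paper is omitted; with very few measurements one
    would first synthesize extra (azimuth, response) pairs by panning."""
    Phi = fourier_basis(az_meas, order)
    coef, *_ = np.linalg.lstsq(Phi, h_meas, rcond=None)
    return fourier_basis(az_query, order) @ coef

az_meas = np.deg2rad(np.arange(0, 360, 30.0))         # sparse measurement grid
h_meas = np.cos(az_meas) + 0.3 * np.sin(2 * az_meas)  # toy magnitude response
az_query = np.deg2rad(np.arange(0, 360, 5.0))
print(interpolate_hrtf(az_meas, h_meas, az_query)[:5])
```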

Sound Field Reconstruction Technology Using a Three Dimensional Loudspeaker Array (3차원 라우드스피커 어레이를 이용한 음장재현기술)

  • Seo, Jeong-Il;Kang, Kyeong-Ok;Fazi, Filippo M.;Nelson, Philip A.
    • The Journal of the Acoustical Society of Korea / v.28 no.8 / pp.723-731 / 2009
  • In this paper, we propose a novel sound field reconstruction algorithm using a three-dimensional loudspeaker array to provide a realistic sound field to multiple listeners. The proposed algorithm minimizes the squared error between the original sound field and the field reconstructed by a loudspeaker array surrounding the listening area, over a predefined three-dimensional region of space. To evaluate the proposed algorithm, we constructed a three-dimensional array composed of 40 loudspeakers and discuss the relevant experimental results.
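
Minimizing the squared error over a control region amounts to a (regularized) least squares problem for the loudspeaker gains. The sketch below uses free-field monopole Green's functions, a random 40-loudspeaker sphere, and a Tikhonov term as illustrative assumptions; it is not the authors' exact formulation.

```python
import numpy as np

def loudspeaker_gains(G, p_target, beta=1e-3):
    """Regularized least squares loudspeaker gains minimizing
    ||G q - p_target||^2 + beta ||q||^2 over the control points.
    G: (points x loudspeakers) transfer matrix. A generic least squares
    sketch of the idea, not the authors' exact formulation."""
    L = G.conj().T @ G + beta * np.eye(G.shape[1])
    return np.linalg.solve(L, G.conj().T @ p_target)

# toy setup: 40 loudspeakers on a 2 m sphere, control points inside the region
rng = np.random.default_rng(5)
k = 2 * np.pi * 500 / 343.0                      # wavenumber at 500 Hz
spk = rng.normal(size=(40, 3))
spk = 2.0 * spk / np.linalg.norm(spk, axis=1, keepdims=True)
pts = rng.uniform(-0.2, 0.2, size=(100, 3))
r = np.linalg.norm(pts[:, None, :] - spk[None, :, :], axis=-1)
G = np.exp(-1j * k * r) / (4 * np.pi * r)        # monopole Green's functions

src = np.array([3.0, 0.0, 0.0])                  # virtual source to reproduce
rs = np.linalg.norm(pts - src, axis=1)
p_target = np.exp(-1j * k * rs) / (4 * np.pi * rs)

q = loudspeaker_gains(G, p_target)
print(np.linalg.norm(G @ q - p_target) / np.linalg.norm(p_target))  # relative error
```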

Hybrid Closed-Form Solution for Wireless Localization with Range Measurements (거리정보 기반 무선위치추정을 위한 혼합 폐쇄형 해)

  • Cho, Seong Yun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.7 / pp.633-639 / 2013
  • Several estimation methods used in range-measurement-based wireless localization have individual problems. These problems may not occur in certain application areas, but they can cause serious trouble in particular applications. In this paper, three methods are considered: the ILS (Iterative Least Squares), DS (Direct Solution), and DSRM (Difference of Squared Range Measurements) methods. The problems that can occur in these methods are defined, and a simple hybrid solution is presented to solve them. The ILS method is the most frequently used method in wireless localization; it suffers from local minima and a large computational burden compared with closed-form solutions. The DS method requires less processing time than the ILS method, but its solution may include a complex number, depending on the geometry of the reference nodes and the range measurement errors, and large estimation errors occur in the near-field region of such complex solutions. In the DSRM method, large effective measurement errors occur when the mobile node is far from the reference nodes, due to the combination of the range measurement error and the range data itself, which leads to large localization errors. In this paper, these problems are defined and a hybrid localization method that integrates the DS and DSRM methods is presented to avoid them. The defined problems are confirmed and the performance of the presented method is verified by a Monte Carlo simulation.
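
The DSRM idea can be shown in textbook form: subtracting the squared range to one anchor cancels the quadratic term and leaves a linear least squares problem in the node position. The anchor layout and noise level below are assumptions, and the paper's hybrid DS/DSRM switching logic, which avoids each method's failure modes, is not implemented.

```python
import numpy as np

def dsrm_localize(anchors, ranges):
    """Difference-of-squared-range-measurements style closed-form fix:
    subtracting the squared range to the first anchor removes the quadratic
    term ||x||^2 and leaves a linear least squares problem in the position."""
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - ranges[1:] ** 2 + d0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rng = np.random.default_rng(6)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(scale=0.05, size=4)
print(dsrm_localize(anchors, ranges))   # close to (3, 7)
```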

Estimation for the Half Logistic Distribution Based on Double Hybrid Censored Samples

  • Kang, Suk-Bok;Cho, Young-Seuk;Han, Jun-Tae
    • Communications for Statistical Applications and Methods / v.16 no.6 / pp.1055-1066 / 2009
  • Many articles have considered a hybrid censoring scheme, which is a mixture of the Type-I and Type-II censoring schemes. We introduce a double hybrid censoring scheme and derive approximate maximum likelihood estimators (AMLEs) of the scale parameter for the half logistic distribution under the proposed double hybrid censored samples. The scale parameter is estimated by the approximate maximum likelihood estimation method using two different types of Taylor series expansion. We also obtain the maximum likelihood estimator (MLE) and the least squares estimator (LSE) of the scale parameter under the proposed double hybrid censored samples. We compare the proposed estimators in terms of mean squared error. The simulation procedure is repeated 10,000 times for sample sizes n = 20(10)40 and various censored samples. The performances of the AMLEs and the MLE are very similar in all respects, but since the MLE and LSE do not have closed-form expressions, a numerical method must be employed.
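
To illustrate the least squares estimator (LSE) of the scale parameter in the simplest setting, the sketch below regresses the order statistics of a complete half-logistic sample through the origin on the standard half-logistic quantiles; the double hybrid censoring of the paper, which alters the plotting positions and the likelihood, is not handled here.

```python
import numpy as np

def half_logistic_lse_scale(x):
    """Least squares estimator of the half-logistic scale parameter from a
    COMPLETE sample, regressing order statistics through the origin on the
    standard half-logistic quantiles q_i = ln((1 + p_i) / (1 - p_i)).
    Shown only to illustrate the LSE idea, not the paper's censored setting."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1.0)
    q = np.log((1 + p) / (1 - p))
    return np.sum(q * x) / np.sum(q * q)

rng = np.random.default_rng(7)
sigma_true = 2.0
u = rng.uniform(size=1000)
sample = sigma_true * np.log((1 + u) / (1 - u))   # inverse-cdf sampling
print(half_logistic_lse_scale(sample))            # roughly 2.0
```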