• Title/Summary/Keyword: penalized least squares

A new classification method using penalized partial least squares

  • Kim, Yun-Dae;Jun, Chi-Hyuck;Lee, Hye-Seon
    • Journal of the Korean Data and Information Science Society, v.22 no.5, pp.931-940, 2011
  • Classification generates a rule for assigning objects to one of several categories based on a learning sample. A good classification model should classify new objects with a low misclassification error. Many classification methods have been developed, including logistic regression, discriminant analysis, and trees. This paper presents a new classification method using penalized partial least squares, which makes the model more robust and remedies the multicollinearity problem. The paper compares the proposed method with logistic regression and PCA-based discriminant analysis on real and artificial data, and concludes that the new method has better classification performance than the competing methods. A hedged code sketch of the penalized PLS idea follows this entry.
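
The abstract does not spell out the penalization scheme, so the sketch below is only a stand-in: a single penalized PLS component in the style of Krämer-type penalized PLS, using a second-difference (smoothness) penalty on the weight vector and classes coded ±1. The penalty choice and all function names are assumptions, not the authors' formulation.

```python
import numpy as np

def second_diff_penalty(p):
    # P = D'D with D the second-order difference operator (smoothness penalty)
    D = np.diff(np.eye(p), n=2, axis=0)
    return D.T @ D

def penalized_pls_fit(X, y, lam=10.0):
    """One-component penalized PLS for two-class data (y in {-1, +1}).

    Weight vector solves a penalized PLS criterion: w ~ (I + lam*P)^{-1} X'y.
    """
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    P = second_diff_penalty(X.shape[1])
    w = np.linalg.solve(np.eye(X.shape[1]) + lam * P, Xc.T @ yc)
    w /= np.linalg.norm(w)
    t = Xc @ w                      # latent score
    q = (t @ yc) / (t @ t)          # regress the response on the score
    return xm, ym, w, q

def penalized_pls_predict(model, Xnew):
    xm, ym, w, q = model
    return np.sign(ym + ((Xnew - xm) @ w) * q)
```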

Cox proportional hazard model with L1 penalty

  • Hwang, Chang-Ha;Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society, v.22 no.3, pp.613-618, 2011
  • The proposed method is based on a penalized log partial likelihood of the Cox proportional hazard model with an L1 penalty. We use the iteratively reweighted least squares procedure to solve the L1-penalized log partial likelihood function of the Cox proportional hazard model. It provides efficient computation, including variable selection, and leads to the generalized cross-validation function for model selection. Experimental results are then presented to indicate the performance of the proposed procedure. An off-the-shelf L1 Cox fit is sketched below.
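
The paper's IRLS derivation is not reproduced here, but an L1-penalized Cox fit is available off the shelf. The sketch below uses the lifelines package on its bundled Rossi recidivism dataset; this is an assumed substitute for the authors' implementation, and the `l1_ratio` argument requires a reasonably recent lifelines version.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# penalizer * l1_ratio scales the lasso part of an elastic-net penalty
# added to the Cox log partial likelihood; l1_ratio=1.0 gives a pure L1 penalty.
df = load_rossi()                                  # built-in survival dataset
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()        # coefficients shrunk to ~0 are effectively deselected
```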

Automatic Selection of Optimal Parameter for Baseline Correction using Asymmetrically Reweighted Penalized Least Squares

  • Park, Aaron;Baek, Sung-June;Park, Jun-Qyu;Seo, Yu-Gyung;Won, Yonggwan
    • Journal of the Institute of Electronics and Information Engineers, v.53 no.3, pp.124-131, 2016
  • Baseline correction is very important because it influences the performance of spectral analysis in spectroscopy applications. The baseline is often estimated by selecting a parameter through visual inspection of the analyte spectrum. This is a highly subjective procedure and can be tedious, especially with a large amount of data. For these reasons, an objective and automatic procedure for selecting the optimal parameter value is necessary. Asymmetrically reweighted penalized least squares (arPLS), based on penalized least squares, was proposed for baseline correction in our previous study; the method uses a new weighting scheme based on the generalized logistic function. In this study, we present an automatic selection of the optimal parameter for baseline correction using arPLS. The method computes fitness and smoothness values of the fitted baseline over the available parameter range and selects the parameter at which the sum of the normalized fitness and smoothness reaches its minimum. Experimental results using simulated data with varying baselines (sloping, curved, and doubly curved) and real Raman spectra confirm that the proposed method can be effectively applied to optimal parameter selection for baseline correction using arPLS. A code sketch of the arPLS recipe and this selection rule follows.
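
The arPLS update itself is published (Baek et al., Analyst, 2015): solve a Whittaker-style penalized least squares problem with the current weights, then reweight the residuals through a logistic function of their statistics below the baseline. The sketch below follows that recipe and adds a toy version of this paper's selection rule; the exact fitness and smoothness definitions are assumptions (residual sum of squares and squared second differences).

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def arpls(y, lam=1e4, ratio=1e-6, n_iter=50):
    """Baseline estimate by asymmetrically reweighted penalized least squares."""
    N = len(y)
    E = sparse.eye(N, format="csr")
    D = E[1:] - E[:-1]
    D = D[1:] - D[:-1]                         # second-order difference operator
    H = lam * (D.T @ D)
    w = np.ones(N)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + H).tocsc(), w * y)    # weighted penalized LS solve
        d = y - z
        dn = d[d < 0]                          # residuals below the fitted baseline
        if dn.size == 0:
            break
        m, s = dn.mean(), dn.std() + 1e-12
        w_new = 1.0 / (1.0 + np.exp(2.0 * (d - (2.0 * s - m)) / s))
        if np.linalg.norm(w - w_new) / np.linalg.norm(w) < ratio:
            break
        w = w_new
    return z

def select_lambda(y, lambdas=np.logspace(2, 8, 13)):
    """Pick lambda minimizing normalized fitness + smoothness (assumed forms)."""
    fit, smooth = [], []
    for lam in lambdas:
        z = arpls(y, lam)
        fit.append(np.sum((y - z) ** 2))            # fitness: residual sum of squares
        smooth.append(np.sum(np.diff(z, 2) ** 2))   # smoothness: squared 2nd differences
    f, s = np.asarray(fit), np.asarray(smooth)
    f = (f - f.min()) / (np.ptp(f) + 1e-12)         # rescale both terms to [0, 1]
    s = (s - s.min()) / (np.ptp(s) + 1e-12)
    return lambdas[np.argmin(f + s)]
```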

Variable selection in L1 penalized censored regression

  • Hwang, Chang-Ha;Kim, Mal-Suk;Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society, v.22 no.5, pp.951-959, 2011
  • The proposed method is based on a penalized censored regression model with an L1 penalty. We use the iteratively reweighted least squares procedure to solve the L1-penalized log likelihood function of the censored regression model. It provides efficient computation of the regression parameters, including variable selection, and leads to the generalized cross-validation function for model selection. Numerical results are then presented to indicate the performance of the proposed method. The reweighting trick is sketched in code below.
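
The censored-likelihood details are beyond the abstract, but the IRLS device it relies on is standard: majorize each |β_j| by a quadratic around the current iterate, so every step reduces to a weighted ridge solve. Below is a minimal sketch of that idea for plain L1-penalized least squares (the censoring part is omitted; names are illustrative).

```python
import numpy as np

def l1_irls(X, y, lam=1.0, n_iter=100, eps=1e-8):
    """L1-penalized least squares via iteratively reweighted ridge steps.

    Each step majorizes lam*|b_j| by (lam/2)*(b_j^2/|b_old_j| + |b_old_j|),
    so the update solves (X'X + diag(lam / (2|b_old|))) b = X'y.
    """
    b = np.linalg.lstsq(X, y, rcond=None)[0]        # unpenalized start
    for _ in range(n_iter):
        D = np.diag(lam / (2.0 * (np.abs(b) + eps)))
        b_new = np.linalg.solve(X.T @ X + D, X.T @ y)
        if np.max(np.abs(b_new - b)) < 1e-10:
            break
        b = b_new
    b[np.abs(b) < 1e-6] = 0.0                       # hard-threshold near-zeros
    return b
```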

Decision function for optimal smoothing parameter of asymmetrically reweighted penalized least squares

  • Park, Aa-Ron;Park, Jun-Kyu;Ko, Dae-Young;Kim, Sun-Geum;Baek, Sung-June
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.3, pp.500-506, 2019
  • In this study, we present a decision function for the optimal smoothing parameter in baseline correction using asymmetrically reweighted penalized least squares (arPLS). Baseline correction is very important because it influences the performance of spectral analysis in spectroscopy applications. The baseline is often estimated by selecting a parameter through visual inspection of the analyte spectrum, a highly subjective procedure that can be tedious, especially with a large amount of data. For these reasons, an objective procedure is necessary to determine the optimal parameter value. The proposed function is defined by modeling the median value of the feasible parameter range as a function of the length and order of the background signal: the median value increases as the length of the signal increases and decreases as the order of the signal increases. The simulated data comprise a total of 112 signals, combining 7 signal lengths with analyte signals and linear, quadratic, cubic, and 4th-order curved baselines. Experimental results on these simulated data and on real Raman spectra confirm that the proposed function can be effectively applied to optimal parameter selection for baseline correction using arPLS (a purely illustrative sketch follows).
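
The abstract gives only the qualitative shape of the decision function, not its formula, so the sketch below is hypothetical: a power-law placeholder that merely encodes the stated monotonicity (λ grows with signal length, shrinks with baseline order). The functional form and constants are not the published fit.

```python
def decision_lambda(n_points, order, c=1e-2, a=2.0, b=1.0):
    """Hypothetical arPLS smoothing-parameter decision function.

    Only the monotonicity is taken from the paper: lambda increases with the
    signal length and decreases with the baseline order. The power-law form
    and the constants c, a, b are placeholders, not the published model.
    """
    return c * n_points ** a / order ** b
```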

Estimation and variable selection in censored regression model with smoothly clipped absolute deviation penalty

  • Shim, Jooyong;Bae, Jongsig;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society, v.27 no.6, pp.1653-1660, 2016
  • The smoothly clipped absolute deviation (SCAD) penalty is known to satisfy the desirable properties for penalty functions, such as unbiasedness, sparsity, and continuity. In this paper, we deal with regression function estimation and variable selection based on the SCAD-penalized censored regression model. We use the local linear approximation and the iteratively reweighted least squares algorithm to solve the SCAD-penalized log likelihood function. The proposed method provides efficient variable selection and regression function estimation, and the generalized cross-validation function is presented for model selection. Applications of the proposed method are illustrated through simulated and real examples. The SCAD derivative and an LLA step are sketched below.
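
The SCAD penalty and its local linear approximation (LLA) are standard (Fan & Li, 2001; Zou & Li, 2008): linearize the penalty at the current estimate so each step is a weighted L1 problem, which can in turn be handled by the reweighted ridge trick shown earlier. The sketch below does this for uncensored least squares; the censored likelihood of the paper is not reproduced, and the scaling convention is one of several in use.

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty at |t| (Fan & Li, 2001; a = 3.7 default)."""
    t = np.abs(t)
    return lam * ((t <= lam)
                  + np.maximum(a * lam - t, 0.0) / ((a - 1.0) * lam) * (t > lam))

def scad_lla(X, y, lam=0.5, n_iter=20, eps=1e-8):
    """SCAD-penalized least squares via LLA plus reweighted ridge steps."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        w = scad_deriv(b, lam)                        # LLA weights p'_lam(|b_j|)
        D = np.diag(w / (2.0 * (np.abs(b) + eps)))    # quadratic majorization
        b = np.linalg.solve(X.T @ X + len(y) * D, X.T @ y)
    b[np.abs(b) < 1e-6] = 0.0
    return b
```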

Deep LS-SVM for regression

  • Hwang, Changha;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society, v.27 no.3, pp.827-833, 2016
  • In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, consisting of an input layer and a hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and perturbed responses. For the final output, the main LS-SVM is trained with the outputs of the hidden-layer LS-SVMs as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are trained to minimize a penalized objective function; the learning dynamics are therefore entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not use any combination weights but trains all LS-SVMs in the architecture. Experimental results on real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems. The closed-form LS-SVM solve and the stacking scheme are sketched below.
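
An LS-SVM regressor has a closed form: solve the linear system [[0, 1ᵀ], [1, K + I/γ]]·[b; α] = [0; y]. The sketch below implements that solve with an RBF kernel and stacks a small hidden layer trained on noise-perturbed responses, following the abstract's description; the number of hidden LS-SVMs, the perturbation scale, and the hyperparameters are assumptions.

```python
import numpy as np

def rbf(A, B, s=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    """Closed-form LS-SVM: solve [[0,1'],[1,K+I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return (X, sol[0], sol[1:], s)          # support points, bias, alphas, width

def lssvm_predict(model, Xnew):
    Xtr, b, alpha, s = model
    return rbf(Xnew, Xtr, s) @ alpha + b

def deep_lssvm_fit(X, y, n_hidden=5, noise=0.1, seed=0):
    """Hidden layer: LS-SVMs on perturbed responses; main LS-SVM on their outputs."""
    rng = np.random.default_rng(seed)
    hidden = [lssvm_fit(X, y + noise * rng.standard_normal(len(y)))
              for _ in range(n_hidden)]
    Z = np.column_stack([lssvm_predict(m, X) for m in hidden])
    return hidden, lssvm_fit(Z, y)

def deep_lssvm_predict(model, Xnew):
    hidden, main = model
    Z = np.column_stack([lssvm_predict(m, Xnew) for m in hidden])
    return lssvm_predict(main, Z)
```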

Penalized maximum likelihood estimation with symmetric log-concave errors and LASSO penalty

  • Park, Seo-Young;Kim, Sunyul;Seo, Byungtae
    • Communications for Statistical Applications and Methods, v.29 no.6, pp.641-653, 2022
  • Penalized least squares methods are important tools for simultaneously selecting variables and estimating parameters in linear regression. Penalized maximum likelihood can be used for the same purpose under the assumption that the error distribution falls in a certain parametric family. However, using a particular parametric family can suffer from a misspecification problem that undermines estimation accuracy. To give sufficient flexibility to the error distribution, we propose to use a symmetric log-concave error distribution with the LASSO penalty. A feasible algorithm to estimate both the nonparametric and parametric components of the proposed model is provided. Numerical studies show that the proposed method produces more efficient estimators than some existing methods with similar variable selection performance. The penalized least squares baseline is sketched below.
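
The log-concave density estimation in this paper needs specialized machinery, but the penalized least squares baseline it builds on is the ordinary LASSO; a minimal scikit-learn version is sketched below for reference. This is the baseline, not the authors' method, and the simulated data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])   # sparse truth
y = X @ beta + rng.standard_normal(100)   # Gaussian errors here; the paper
                                          # targets flexible non-Gaussian ones
lasso = Lasso(alpha=0.1).fit(X, y)
print(np.round(lasso.coef_, 2))           # several coefficients shrink to 0
```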

Parametric Blind Restoration of Bi-level Images with Unknown Intensities

  • Kim, Daeun;Ahn, Sohyun;Kim, Jeongtae
    • IEIE Transactions on Smart Processing and Computing, v.5 no.5, pp.319-322, 2016
  • We propose a parametric blind deconvolution method for bi-level images with unknown intensity levels. The method estimates the unknown parameters of the point spread function and the image by minimizing a penalized nonlinear least squares objective function based on normalized correlation coefficients and two regularization functions. Unlike conventional methods, the proposed method does not require knowledge of the true intensity values. Moreover, the objective function can be minimized effectively, since it has the special structure of nonlinear least squares. We demonstrate the effectiveness of the proposed method through simulations and experiments. A one-dimensional toy of the penalized nonlinear least squares structure follows.
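
The structural point worth illustrating is that a penalized nonlinear least squares objective can be handed to a Gauss-Newton-type solver by appending square-rooted penalty terms to the residual vector. The 1-D toy below estimates the blur width, edge position, and the two unknown intensity levels of a blurred bi-level signal with scipy.optimize.least_squares; it only mirrors the objective structure, not the paper's correlation-based formulation, and the penalty weight is an assumption kept tiny so it does not distort the fit.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import least_squares
from scipy.special import erf

# Synthetic observation: a 0/1 step at x = 90, Gaussian-blurred (sigma = 4), plus noise.
x = np.arange(200, dtype=float)
rng = np.random.default_rng(1)
g = gaussian_filter1d((x > 90).astype(float), sigma=4.0) + 0.01 * rng.standard_normal(200)

def model(theta):
    edge, sigma, lo, hi = theta
    # Gaussian-blurred ideal bi-level edge between two unknown intensity levels.
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - edge) / (np.sqrt(2.0) * sigma)))

def residuals(theta, lam=1e-4):
    reg = np.sqrt(lam) * np.array([theta[1]])   # penalty term as an extra residual
    return np.concatenate([model(theta) - g, reg])

fit = least_squares(residuals, x0=[100.0, 2.0, -0.1, 0.9],
                    bounds=([0.0, 0.1, -10.0, -10.0], [200.0, 50.0, 10.0, 10.0]))
print(np.round(fit.x, 2))                       # approximately [90, 4, 0, 1]
```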

Semi-Supervised Learning Using Kernel Estimation

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society, v.18 no.3, pp.629-636, 2007
  • A kernel-type semi-supervised estimate is proposed. The proposed estimate is based on the penalized least squares loss and the principle of the Gaussian random fields model. As a result, we can estimate the label of new unlabeled data without re-running the algorithm, in contrast to existing transductive semi-supervised learning. Our estimate can also be viewed as a general form of the Gaussian random fields model. We give experimental evidence suggesting that the estimate is able to use unlabeled data effectively and yields good classification performance. The harmonic closed form is sketched below.
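
The Gaussian random fields estimate has a closed form: with graph weights W, the labels on the unlabeled block solve the harmonic system f_u = (D_uu - W_uu)^{-1} W_ul y_l. The sketch below computes it with an RBF affinity and then labels a brand-new point by kernel smoothing, so no linear system needs re-solving, which mirrors the abstract's point; the kernel and bandwidth choices are assumptions.

```python
import numpy as np

def rbf_affinity(A, B, s=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s ** 2))

def harmonic_labels(Xl, yl, Xu, s=1.0):
    """Gaussian random fields solution: f_u = (D_uu - W_uu)^{-1} W_ul y_l."""
    X = np.vstack([Xl, Xu])
    W = rbf_affinity(X, X, s)
    nl = len(Xl)
    L_uu = np.diag(W.sum(1))[nl:, nl:] - W[nl:, nl:]   # unlabeled graph Laplacian block
    return np.linalg.solve(L_uu, W[nl:, :nl] @ yl)     # soft labels for Xu

def label_new_point(xnew, X, f, s=1.0):
    """Out-of-sample label by kernel smoothing; no re-solve of the system."""
    k = rbf_affinity(xnew[None, :], X, s)[0]
    return (k @ f) / k.sum()
```

Here `f` is the concatenation of the known labels and the solved soft labels; for 0/1 labels, thresholding the smoothed value at 0.5 gives the predicted class of the new point.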
