• Title/Summary/Keyword: regularization method


Weighted Least Absolute Deviation Lasso Estimator

  • Jung, Kang-Mo
    • Communications for Statistical Applications and Methods, v.18 no.6, pp.733-739, 2011
  • The least absolute shrinkage and selection operator (Lasso) improves on the low prediction accuracy and poor interpretability of the ordinary least squares (OLS) estimate through the use of $L_1$ regularization on the regression coefficients. However, the Lasso is not robust to outliers because it minimizes the sum of squared residuals. Even though the least absolute deviation (LAD) estimator is an alternative to the OLS estimate, it is sensitive to leverage points. We propose a robust Lasso estimator that is not sensitive to outliers, heavy-tailed errors, or leverage points.
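
The abstract does not spell out the estimator's weighting scheme, but the kind of objective it describes can be written as a linear program. Below is a minimal sketch, assuming observation weights `w` are supplied externally (e.g., chosen small for leverage points); the function `wlad_lasso` and all parameter choices are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def wlad_lasso(X, y, w, lam):
    """Weighted LAD-lasso sketch: minimize sum_i w_i|y_i - x_i'b| + lam*||b||_1.

    Both the residuals and the coefficients are split into nonnegative
    parts, turning the nonsmooth objective into a linear program.
    """
    n, p = X.shape
    # Decision vector: [b_plus (p), b_minus (p), u (n), v (n)]
    c = np.concatenate([lam * np.ones(2 * p), w, w])
    # Residual constraint: X(b+ - b-) + u - v = y, all variables >= 0
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:2 * p]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 0.0]) + rng.standard_t(df=2, size=50)
print(wlad_lasso(X, y, w=np.ones(50), lam=1.0).round(2))
```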

A MODIFIED BFGS BUNDLE ALGORITHM BASED ON APPROXIMATE SUBGRADIENTS

  • Guo, Qiang;Liu, Jian-Guo
    • Journal of Applied Mathematics & Informatics, v.28 no.5_6, pp.1239-1248, 2010
  • In this paper, an implementable BFGS bundle algorithm for solving a nonsmooth convex optimization problem is presented. The method minimizes an approximate Moreau-Yosida regularization using a BFGS algorithm with inexact function values and approximate gradient values, which are generated by a finite inner bundle algorithm. The approximate subgradient of the objective function is used in the algorithm, which makes the algorithm easier to implement. The convergence of the algorithm is proved under some additional assumptions.
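
The central object here, the Moreau-Yosida regularization, has a closed form for simple functions, which makes the smoothing idea easy to see in code. The toy sketch below uses f = ||x||_1, whose proximal map is soft-thresholding, and hands the smooth surrogate to an off-the-shelf BFGS routine; the paper's bundle machinery for approximate subgradients is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def moreau_l1(x, lam):
    """Moreau-Yosida regularization of f(x) = ||x||_1.

    F_lam(x) = min_z ||z||_1 + ||z - x||^2/(2*lam); the minimizer z is the
    soft-thresholding prox, and grad F_lam(x) = (x - z)/lam is Lipschitz,
    so the smoothed surrogate can be handed to a BFGS routine.
    """
    z = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # prox of lam*||.||_1
    val = np.abs(z).sum() + np.sum((z - x) ** 2) / (2 * lam)
    return val, (x - z) / lam

res = minimize(lambda x: moreau_l1(x, lam=0.1), x0=np.array([3.0, -2.0, 0.5]),
               jac=True, method="BFGS")
print(res.x.round(4))  # close to 0, the minimizer of ||x||_1 itself
```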

Regularized Optimization of Collaborative Filtering for Recommender System based on Big Data (빅데이터 기반 추천시스템을 위한 협업필터링의 최적화 규제)

  • Park, In-Kyu;Choi, Gyoo-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.21 no.1, pp.87-92, 2021
  • Bias, variance, error, and learning are important factors for the performance of a big data based recommendation system. The recommendation model in such a system must reduce complexity while maintaining explanatory power. In addition, the sparsity of the dataset and the prediction accuracy of the system tend to be inversely related. Therefore, a product recommendation model is proposed that learns the similarity between products through a matrix factorization of the sparse dataset. In this paper, the generalization ability of the model is improved by applying max-norm regularization as an optimization method for the model's loss function, solved with a stochastic projected gradient descent method. Extensive experiments confirmed that the sparser the data became, the more effective the proposed regularization method was compared to the existing method.
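
As a rough illustration of the max-norm step: bounding the largest row norm of the two factor matrices bounds the max-norm of their product, so each stochastic gradient update can be followed by projecting the touched rows back onto a Euclidean ball. The sketch below, with assumed hyperparameters (`radius`, `lr`), is not the authors' model.

```python
import numpy as np

def project_ball(v, radius):
    """Project a factor row onto the Euclidean ball of the given radius."""
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

def maxnorm_mf(ratings, n_users, n_items, k=8, radius=1.5, lr=0.05,
               epochs=30, seed=0):
    """Stochastic projected gradient for max-norm-regularized factorization.

    `ratings` holds the observed (user, item, value) triples of a sparse
    rating matrix; each SGD step on the squared error is followed by a
    projection that enforces the max-norm-style row constraint.
    """
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for idx in rng.permutation(len(ratings)):
            u, i, r = ratings[idx]
            err = U[u] @ V[i] - r
            U[u], V[i] = (project_ball(U[u] - lr * err * V[i], radius),
                          project_ball(V[i] - lr * err * U[u], radius))
    return U, V

data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
U, V = maxnorm_mf(data, n_users=3, n_items=2)
print((U @ V.T).round(2))  # predicted ratings, including unobserved cells
```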

Application of Effective Regularization to Gradient-based Seismic Full Waveform Inversion using Selective Smoothing Coefficients (선택적 평활화 계수를 이용한 그래디언트기반 탄성파 완전파형역산의 효과적인 정규화 기법 적용)

  • Park, Yunhui;Pyun, Sukjoon
    • Geophysics and Geophysical Exploration, v.16 no.4, pp.211-216, 2013
  • In general, smoothing filters regularize functions by reducing differences between adjacent values. Smoothing filters can therefore regularize inverse solutions and produce more accurate subsurface structures when applied to full waveform inversion. If we apply a smoothing filter with a constant coefficient to a subsurface image or velocity model, it makes layer interfaces and fault structures vague because it does not consider any information about geologic structures or velocity variations. In this study, we develop a selective smoothing regularization technique, which adapts smoothing coefficients over the inversion iterations, to overcome this weakness of constant-coefficient smoothing regularization. First, we determine appropriate frequencies and analyze the corresponding wavenumber coverage. Then, we define the effective maximum wavenumber as the 99th percentile of the wavenumber spectrum in order to choose smoothing coefficients that effectively limit the wavenumber coverage. By adapting the chosen smoothing coefficients over the iterations, we can implement multi-scale full waveform inversion while inverting multi-frequency components simultaneously. A successful inversion example on a salt model with high-contrast velocity structures shows that our method effectively regularizes the inverse solution. We also verify that our scheme is applicable to field data through a numerical example on synthetic data containing random noise.
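
The 99th-percentile rule translates naturally into a per-iteration filter choice: estimate the effective maximum wavenumber from the current gradient's spectrum, then size the smoothing kernel to attenuate everything above it. The sketch below assumes a Gaussian smoother with standard deviation on the order of 1/k_max; the paper's exact coefficient rule is not stated in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def effective_kmax(grad, dx):
    """Effective maximum wavenumber: the wavenumber below which 99% of the
    gradient's amplitude spectrum lies (one reading of the 99th-percentile
    rule in the abstract)."""
    spec = np.abs(np.fft.rfft2(grad)).ravel()
    ky = np.fft.fftfreq(grad.shape[0], d=dx)
    kx = np.fft.rfftfreq(grad.shape[1], d=dx)
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij")).ravel()
    order = np.argsort(k)
    cum = np.cumsum(spec[order])
    return k[order][np.searchsorted(cum, 0.99 * cum[-1])]

def selective_smooth(grad, dx):
    """Smooth an FWI gradient with a coefficient re-chosen each iteration
    so that wavenumbers above the effective maximum are attenuated."""
    kmax = effective_kmax(grad, dx)
    sigma = 1.0 / (2.0 * np.pi * kmax * dx)  # Gaussian std in grid points
    return gaussian_filter(grad, sigma=sigma)

grad = np.random.default_rng(1).normal(size=(64, 64))  # stand-in gradient
print(selective_smooth(grad, dx=10.0).std())
```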

Compressive Sensing Recovery of Natural Images Using Smooth Residual Error Regularization (평활 잔차 오류 정규화를 통한 자연 영상의 압축센싱 복원)

  • Trinh, Chien Van;Dinh, Khanh Quoc;Nguyen, Viet Anh;Park, Younghyeon;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.6, pp.209-220, 2014
  • Compressive sensing (CS) is a new signal acquisition paradigm which enables sampling below the Nyquist rate for a special kind of signal called a sparse signal. There are plenty of CS recovery methods, but their performance is still limited, especially at low sub-rates. For CS recovery of natural images, regularizations exploiting prior information can be used to enhance CS performance. In this context, this paper addresses improving the quality of reconstructed natural images based on the Dantzig selector and smooth filters (i.e., a Gaussian filter and a nonlocal means filter), yielding a new regularization called smooth residual error regularization. Moreover, total variation has proven successful in preserving edges and boundaries of reconstructed images. Therefore, the effectiveness of the proposed regularization is verified by experiments using augmented Lagrangian total variation minimization. This framework can be viewed as a new CS recovery that seeks smoothness in residual images. Experimental results demonstrate significant improvement of the proposed framework over several other CS recovery methods in both subjective and objective quality. In the best case, our algorithm gains up to 9.14 dB over CS recovery using a Bayesian framework.
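
The full framework (Dantzig selector plus augmented Lagrangian total variation) is involved, but the "smoothness in residual images" idea can be seen in a stripped-down iteration that alternates a gradient step on the data fidelity with a relaxation toward a smooth-filtered estimate. The sketch below is a simplified stand-in, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cs_recover(A, y, shape, n_iter=300, strength=0.1):
    """CS recovery sketch with a smooth-residual-style regularizer.

    Alternates a gradient step on ||y - Ax||^2 with a relaxation toward a
    Gaussian-smoothed version of x, keeping the residual image
    x - smooth(x) small. Illustration only.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step for the smooth term
    x = A.T @ y                              # back-projection initial guess
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)     # data-fidelity step
        smooth = gaussian_filter(x.reshape(shape), sigma=1.0).ravel()
        x = (1 - strength) * x + strength * smooth  # smooth-residual step
    return x.reshape(shape)

rng = np.random.default_rng(2)
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0   # piecewise-smooth target
A = rng.normal(size=(128, 256)) / np.sqrt(128.0)  # 0.5 sub-rate measurements
rec = cs_recover(A, A @ img.ravel(), img.shape)
print(np.abs(rec - img).mean().round(3))          # mean absolute error
```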

The Joint Effect of factors on Generalization Performance of Neural Network Learning Procedure (신경망 학습의 일반화 성능향상을 위한 인자들의 결합효과)

  • Yoon YeoChang
    • The KIPS Transactions: Part B, v.12B no.3 s.99, pp.343-348, 2005
  • The goal of this paper is to study the joint effect of factors in the neural network learning procedure. There are many factors which may affect the generalization ability and learning speed of neural networks, such as the initial values of the weights, the learning rate, and the regularization coefficient. We apply a constructive training algorithm for the neural network, in which patterns are trained incrementally by considering them one by one. First, we investigate the effect of these factors on generalization performance and learning speed. Based on these effects, we propose a joint method that simultaneously considers all three factors and dynamically tunes the learning rate and regularization coefficient. We then present experimental comparisons among these methods on several simulated nonlinear datasets. Finally, we draw conclusions and outline plans for future work.
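
A small sketch of the three factors acting together: random initial weights, a learning rate, and an L2 regularization coefficient, with the latter two adjusted from validation error during training. The adaptation rule used here is an assumed placeholder, since the paper's joint method is not specified in the abstract.

```python
import numpy as np

def train(X, y, Xv, yv, hidden=8, lr=0.1, reg=1e-3, epochs=200, seed=0):
    """One-hidden-layer regression net illustrating the three factors:
    initial weights (seed/scale), learning rate, and L2 regularization
    coefficient, with lr and reg tuned dynamically from validation error."""
    rng = np.random.default_rng(seed)               # factor 1: initialization
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    best = np.inf
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        err = H @ W2 - y
        gW2 = H.T @ err / len(X) + reg * W2         # L2-regularized gradients
        gW1 = X.T @ (err @ W2.T * (1 - H ** 2)) / len(X) + reg * W1
        W1 -= lr * gW1                              # factor 2: learning rate
        W2 -= lr * gW2
        val = np.mean((np.tanh(Xv @ W1) @ W2 - yv) ** 2)
        if val < best:
            best, lr = val, lr * 1.05               # reward progress
        else:
            lr *= 0.7                               # cool down, and ...
            reg *= 1.1                              # factor 3: penalize more
    return W1, W2, best

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X.sum(axis=1, keepdims=True)) + 0.1 * rng.normal(size=(200, 1))
print(train(X[:150], y[:150], X[150:], y[150:])[2])  # validation MSE
```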

A Spline-Regularized Sinogram Smoothing Method for Filtered Backprojection Tomographic Reconstruction

  • Lee, S.J.;Kim, H.S.
    • Journal of Biomedical Engineering Research, v.22 no.4, pp.311-319, 2001
  • Statistical reconstruction methods in the context of a Bayesian framework have played an important role in emission tomography, since they allow one to incorporate a priori information into the reconstruction algorithm. Given the ill-posed nature of tomographic inversion and the poor quality of projection data, the Bayesian approach uses regularizers to stabilize solutions by incorporating suitable prior models. In this work we show that, while the quantitative performance of the standard filtered backprojection (FBP) algorithm is not as good as that of Bayesian methods, applying spline-regularized smoothing in the sinogram space allows the FBP algorithm to improve its performance by inheriting the advantages of the spline priors used in Bayesian methods. We first show how to implement the spline-regularized smoothing filter by deriving the mathematical relationship between the regularization and lowpass filtering. We then compare the quantitative performance of our new FBP algorithms using bias/variance quantitation and the total squared error (TSE) measured over noise trials. Our numerical results show that the second-order spline filter applied to FBP yields the best results in terms of TSE among the three spline orders considered in our experiments.
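
The derived equivalence between regularization and lowpass filtering can be stated compactly: penalizing the m-th-order differences of each sinogram row yields a closed-form frequency response. The sketch below implements that filter under an assumed periodic boundary (which gives the clean FFT form); the smoothed sinogram would then be passed to a standard FBP routine.

```python
import numpy as np

def spline_smooth_rows(sino, lam, order=2):
    """Spline-regularized smoothing of each sinogram row (projections along
    axis 1). Minimizing ||g - s||^2 + lam*||D^m g||^2 per row has, for a
    periodic m-th-order difference operator D^m, the frequency response
        H(w) = 1 / (1 + lam * (2 sin(w/2))^(2m)),
    i.e. the regularization acts as a lowpass filter on the sinogram."""
    n = sino.shape[1]
    w = 2 * np.pi * np.fft.rfftfreq(n)
    H = 1.0 / (1.0 + lam * (2.0 * np.sin(w / 2.0)) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(sino, axis=1) * H, n=n, axis=1)

sino = np.random.default_rng(3).poisson(50.0, size=(180, 128)).astype(float)
smoothed = spline_smooth_rows(sino, lam=5.0)  # second-order spline filter
# A standard FBP (e.g., skimage.transform.iradon) can then reconstruct
# from `smoothed`, inheriting the noise suppression of the spline prior.
print(smoothed.shape)
```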


ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society, v.28 no.6, pp.1229-1244, 2017
  • In recent years, as demand for data-based analytical methodologies has increased in various fields, optimization methods have been developed to handle it. In particular, various constrained problems in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) can effectively deal with linear constraints and can be used as a parallel optimization algorithm. ADMM is an approximation algorithm that solves a complex original problem by splitting it into partial problems that are easier to optimize and combining their solutions. It is useful for optimizing non-smooth or composite objective functions, and it is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. Along the way, we introduce methodologies that utilize regularization. Simulation results are presented to demonstrate the effectiveness of the lasso.
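
The closing lasso example makes both points concrete: the objective splits into a smooth least-squares term, handled by a closed-form update, and an L1 term, handled by its proximal operator (soft-thresholding). A minimal sketch of the standard scaled-form ADMM iteration for the lasso follows.

```python
import numpy as np

def admm_lasso(X, y, lam, rho=1.0, n_iter=200):
    """ADMM for the lasso: min_b 0.5*||y - Xb||^2 + lam*||z||_1, s.t. b = z.

    The splitting isolates the smooth piece in the b-update (a linear
    solve, factored once) and the nonsmooth piece in the z-update, which
    is the proximal operator of the L1 norm (soft-thresholding)."""
    p = X.shape[1]
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(p))  # factor once, reuse
    Xty = X.T @ y
    z = np.zeros(p)
    u = np.zeros(p)
    for _ in range(n_iter):
        b = np.linalg.solve(L.T, np.linalg.solve(L, Xty + rho * (z - u)))
        z = np.sign(b + u) * np.maximum(np.abs(b + u) - lam / rho, 0.0)
        u = u + b - z                                  # scaled dual update
    return z

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
y = X @ np.r_[3.0, -2.0, np.zeros(8)] + rng.normal(size=100)
print(admm_lasso(X, y, lam=5.0).round(2))  # sparse coefficient estimate
```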