• Title/Summary/Keyword: Regression problem


A Bayesian Approach to Detecting Outliers Using Variance-Inflation Model

  • Lee, Sangjeen;Chung, Younshik
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.3
    • /
    • pp.805-814
    • /
    • 2001
  • The problem of 'outliers', observations that look suspicious in some way, has long been one of the chief concerns of experimenters and data analysts working with statistical models. We propose a model for the outlier problem and analyze it in the linear regression setting using a Bayesian approach with the variance-inflation model. We use Geweke's (1996) ideas, based on the data augmentation method, for detecting outliers in the linear regression model. The advantage of the proposed method is that it finds the subset of the data that is most suspicious in the given model by its posterior probability. A sampling-based approach is used to carry out the complicated Bayesian computation. Finally, the proposed methodology is applied to simulated and real data.

  • PDF
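
As an illustration of the variance-inflation idea, the sketch below runs a small Gibbs sampler in which each observation carries a latent indicator for whether its error variance is inflated, and the indicator's posterior mean gives a per-point outlier probability. The fixed error variance, inflation factor `k`, prior outlier probability, and data are all illustrative assumptions, not settings from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression data with one planted outlier (all settings illustrative)
n = 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
y[0] += 5.0                                  # the planted outlier
X = np.column_stack([np.ones(n), x])

sigma2, k, p = 0.25, 25.0, 0.05              # fixed error variance, inflation factor, prior outlier prob
beta = np.zeros(2)
z = np.zeros(n)                              # z_i = 1 marks observation i as variance-inflated
z_sum = np.zeros(n)
draws = 2000

for it in range(draws):
    # beta | z: weighted least squares, each point's variance is sigma2 * k**z_i
    w = 1.0 / (sigma2 * np.where(z == 1, k, 1.0))
    V = np.linalg.inv(X.T @ (w[:, None] * X))
    beta = rng.multivariate_normal(V @ X.T @ (w * y), V)
    # z_i | beta: Bernoulli, comparing the inflated and non-inflated likelihoods
    r = y - X @ beta
    l1 = p * np.exp(-r ** 2 / (2 * sigma2 * k)) / np.sqrt(k)
    l0 = (1 - p) * np.exp(-r ** 2 / (2 * sigma2))
    z = (rng.random(n) < l1 / (l0 + l1)).astype(float)
    if it >= draws // 2:                     # keep the second half as posterior draws
        z_sum += z

post = z_sum / (draws - draws // 2)          # posterior outlier probability per point
print(post[0])                               # the planted outlier receives high probability
```

The posterior probabilities directly rank the observations, which is how a "most suspicious subset" can be read off.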

Integration of Customer Response Prediction Models for Direct Marketing Using Case-Based Reasoning (사례기반추론을 이용한 다이렉트 마케팅의 고객반응예측모형의 통합)

  • Hong, Taeho;Park, Jiyoung
    • The Journal of Information Systems
    • /
    • v.18 no.3
    • /
    • pp.375-399
    • /
    • 2009
  • In this study, we propose an integrated model that combines logistic regression, artificial neural networks, and support vector machines (SVM) with case-based reasoning (CBR). Predicting respondents in direct marketing is a binary classification problem, like bankruptcy prediction, intrusion detection, and churn management. To solve this binary problem, we employed logistic regression, artificial neural networks, SVM, and CBR. CBR is a problem-solving technique that shows significant promise for improving the effectiveness of complex and unstructured decision making, and it yields excellent results in this study. Experimental results show that the classification accuracy of the integrated model using CBR is superior to that of logistic regression, artificial neural networks, and SVM alone. When applying the customer response model to predict respondents in direct marketing, the profit and cost consequences of misclassification must also be considered.

  • PDF
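
The integration scheme described above can be sketched as follows, with CBR approximated by a majority vote over the nearest stored cases and the integration rule (defer to CBR where the logistic model is unsure) chosen purely for illustration; the data, threshold, and training settings are all assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class "respondent" data (all settings illustrative)
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)
idx = rng.permutation(n)
Xtr, ytr = X[idx[:150]], y[idx[:150]]
Xte, yte = X[idx[150:]], y[idx[150:]]

# Logistic regression fitted by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xtr @ w + b)))
    w -= 0.1 * Xtr.T @ (p - ytr) / len(ytr)
    b -= 0.1 * (p - ytr).mean()

def cbr_vote(q, k=5):
    """CBR approximated as a majority vote over the k nearest stored cases."""
    d = np.linalg.norm(Xtr - q, axis=1)
    return ytr[np.argsort(d)[:k]].mean() >= 0.5

# Integration rule (illustrative): defer to CBR where the model is unsure
probs = 1 / (1 + np.exp(-(Xte @ w + b)))
unsure = np.abs(probs - 0.5) < 0.2
pred = np.where(unsure, [cbr_vote(q) for q in Xte], probs >= 0.5)
acc = (pred == yte).mean()
print(acc)
```

In the same spirit, an SVM or neural network score could replace the logistic probability without changing the deferral logic.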

Optimization of Regression Model Using Genetic Algorithm and Desirability Function (유전 알고리즘과 호감도 함수를 이용한 회귀모델의 최적화)

  • 안홍락;이세헌
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1997.10a
    • /
    • pp.450-453
    • /
    • 1997
  • There are many studies on optimization using genetic algorithms and desirability functions. Finding the optimal value of a response surface or regression model is very important. In this study, I indicate a problem with the old type of desirability function, suggest a new type of desirability function that fixes the problem, and simulate the model. I then suggest a form of desirability function for finding the optimum of response surfaces built from the mean and standard deviation, using a genetic algorithm together with the new desirability function.

  • PDF
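
A minimal sketch of the genetic-algorithm-plus-desirability idea: two illustrative response models (for the mean and the standard deviation) are combined into an overall desirability, which a simple GA maximizes. The Derringer-style one-sided desirability below is a generic stand-in, not the paper's new desirability function, which is not specified here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative fitted response models on x in [0, 1]^2
mean_model = lambda x: 10 - 20 * (x[..., 0] - 0.6) ** 2 - 20 * (x[..., 1] - 0.4) ** 2
std_model = lambda x: 1 + 3 * np.abs(x[..., 0] - x[..., 1])

def desirability(y, lo, hi, maximize=True):
    """Derringer-style one-sided desirability, clipped to [0, 1]."""
    d = (y - lo) / (hi - lo) if maximize else (hi - y) / (hi - lo)
    return np.clip(d, 0.0, 1.0)

def fitness(pop):
    d1 = desirability(mean_model(pop), 0.0, 10.0, maximize=True)   # larger mean is better
    d2 = desirability(std_model(pop), 1.0, 4.0, maximize=False)    # smaller std is better
    return np.sqrt(d1 * d2)                                        # overall: geometric mean

pop = rng.random((40, 2))
for _ in range(60):
    f = fitness(pop)
    # tournament selection
    a, b = rng.integers(0, 40, 40), rng.integers(0, 40, 40)
    parents = np.where((f[a] > f[b])[:, None], pop[a], pop[b])
    # uniform crossover and Gaussian mutation
    mask = rng.random((40, 2)) < 0.5
    children = np.where(mask, parents, parents[::-1])
    children += rng.normal(scale=0.05, size=(40, 2))
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax(fitness(pop))]
print(best)
```

The GA settles near the point that trades off a high predicted mean against a low predicted standard deviation.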

Performance Evaluation of Linear Regression, Back-Propagation Neural Network, and Linear Hebbian Neural Network for Fitting Linear Function (선형함수 fitting을 위한 선형회귀분석, 역전파신경망 및 선형 Hebbian 신경망의 성능 비교)

  • 이문규;허해숙
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.20 no.3
    • /
    • pp.17-29
    • /
    • 1995
  • Recently, neural network models have been employed as an alternative to regression analysis for point estimation and function fitting in various fields. Thus far, however, no theoretical or empirical guidelines seem to exist for selecting the most suitable tool for a specific function-fitting problem. In this paper, we evaluate the performance of three major function-fitting techniques: regression analysis and two neural network models, back-propagation and linear-Hebbian-learning neural networks. The functions to be fitted are simple linear functions of a single independent variable. The factors considered are the size of the noise in both the dependent and independent variables, the proportion of outliers, and the size of the data set. Based on the computational results of this study, some guidelines are suggested for choosing the best technique for a specific problem.

  • PDF
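
A small numerical comparison in the spirit of this study: closed-form least squares versus an iteratively trained linear unit on noisy samples of a linear function. The delta (Widrow-Hoff) rule below is a stand-in for the paper's linear-Hebbian-learning network, whose exact formulation is not given here, and the data and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples from y = 2x + 1 (noise level illustrative)
n = 100
x = rng.uniform(-1, 1, n)
y = 2 * x + 1 + rng.normal(scale=0.2, size=n)
X = np.column_stack([np.ones(n), x])

# Closed-form least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# A single linear unit trained online with the delta (Widrow-Hoff) rule
w = np.zeros(2)
for _ in range(50):                # epochs
    for xi, yi in zip(X, y):
        w += 0.01 * (yi - w @ xi) * xi

print(beta_ols, w)                 # both end up close to (1, 2)
```

On clean linear data the two estimates essentially agree; the paper's comparisons concern how that agreement degrades under noise and outliers.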

Features Reduction using Logistic Regression for Spam Filtering (로지스틱 회귀 분석을 이용한 스펨 필터링의 특징 축소)

  • Jung, Yong-Gyu;Lee, Bum-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.2
    • /
    • pp.13-18
    • /
    • 2010
  • Today, the large amount of spam occupying mail servers and network storage causes problems such as overload, and users must spend time and resources deleting it. Automatic spam filtering is therefore essential, and many spam filters have emerged to address the problem. Unlike traditional methods such as Naive Bayes, we first reduce the high-dimensional spam data set to a few principal dimensions through PCA, lowering the computational burden, and then apply logistic regression analysis to classify the reduced data and filter spam. In terms of both speed and performance, positive results were obtained.
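
The PCA-then-logistic-regression pipeline can be sketched as below; the synthetic "term-count" features, the number of retained components, and the training settings are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic mail features: 200 mails x 50 terms (purely illustrative)
n, d, k = 200, 50, 5
y = np.repeat([0, 1], n // 2)                      # 0 = ham, 1 = spam
X = rng.normal(size=(n, d)) + y[:, None] * np.linspace(0.0, 1.0, d)

# PCA: project onto the top-k principal axes via SVD of the centered data
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                                  # d = 50 features reduced to k = 5

# Logistic regression on the reduced features, fitted by gradient descent
w, b = np.zeros(k), 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(Z @ w + b)))
    w -= 0.1 * Z.T @ (p - y) / n
    b -= 0.1 * (p - y).mean()

acc = (((Z @ w + b) > 0) == (y == 1)).mean()
print(acc)
```

Classifying in 5 dimensions instead of 50 is where the reduced computational burden comes from.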

A statistical study of mathematical thinkings and problem-solving abilities for logical-type problems with reference to secondary talented students (중등영재학생들의 수학적 사고 선호도와 논리형 문제의 해결능력에 관한 통계적 검증 연구)

  • Pak, Hong-Kyung
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.14 no.4
    • /
    • pp.198-204
    • /
    • 2009
  • It is one of the important and interesting topics in mathematics education to study the processes of logical thinking and intuitive thinking in mathematical problem solving from the viewpoint of mathematical thinking. The main purpose of the present paper is to investigate this problem with reference to secondary talented students (aged 16~17 years). In particular, we focus on the relationship between their preference of mathematical thinking and their problem-solving abilities on logical-type problems by applying logistic regression analysis.

Robust Nonparametric Regression Method using Rank Transformation

  • Park, Dongryeon
    • Communications for Statistical Applications and Methods
    • /
    • v.7 no.2
    • /
    • pp.575-583
    • /
    • 2000
  • Consider the problem of estimating a regression function from a set of data contaminated by a long-tailed error distribution. A linear smoother is a kind of local weighted average of the response, so it is not robust against outliers. The kernel M-smoother and the lowess attain robustness against outliers by down-weighting them. However, both require iteration to compute the robustness weights, and as Wang and Scott (1994) pointed out, the requirement of iteration is not a desirable property. In this article, we propose a robust nonparametric regression method that does not require iteration. Robustness can be achieved not only by down-weighting outliers but also by transforming them. The rank transformation is a simple procedure in which the data are replaced by their corresponding ranks. Iman and Conover (1979) showed that the rank transformation is a robust and powerful procedure in linear regression. In this paper, we show that the rank transformation can also be used in nonparametric regression to achieve robustness.

  • PDF
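
The effect of the rank transformation can be illustrated with a Nadaraya-Watson smoother: smoothing the raw response lets a single gross outlier drag the fit, while smoothing the ranks bounds its influence. The kernel, bandwidth, and data below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 100
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=n)
y[50] += 20.0                               # a single gross outlier

def nw(xg, x, v, h=0.08):
    """Nadaraya-Watson smoother with a Gaussian kernel (bandwidth illustrative)."""
    W = np.exp(-0.5 * ((xg[:, None] - x[None, :]) / h) ** 2)
    return (W * v).sum(axis=1) / W.sum(axis=1)

# Rank transformation: each response replaced by its normalized rank
ranks = y.argsort().argsort() / (n - 1.0)

fit_raw = nw(x, x, y)                       # not robust: outlier drags the curve
fit_rank = nw(x, x, ranks)                  # robust: the outlier's rank is bounded

err_raw = np.abs(fit_raw - np.sin(2 * np.pi * x)).max()
corr = np.corrcoef(fit_rank, np.sin(2 * np.pi * x))[0, 1]
print(err_raw, corr)
```

No iteration is needed: the ranks are computed once and then smoothed by the same non-robust linear smoother.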

GACV for partially linear support vector regression

  • Shim, Jooyong;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.2
    • /
    • pp.391-399
    • /
    • 2013
  • Partially linear regression is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In support vector regression (SVR), the hyper-parameters are known to affect the performance of the regression. In this paper, we propose an iteratively reweighted least squares (IRWLS) procedure to solve the quadratic problem of partially linear support vector regression with a modified loss function, which enables us to use the generalized approximate cross-validation function to select the hyper-parameters. Experimental results are presented that illustrate the performance of the partially linear SVR using the IRWLS procedure.
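
As a rough illustration of a partially linear model (not the paper's IRWLS solution of the SVR quadratic problem, which involves a modified loss and GACV-based hyper-parameter selection), the sketch below backfits between an unpenalized linear part and a kernel-ridge nonlinear part; all data and settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Partially linear truth: y = 2*u + sin(2*pi*t) + noise (illustrative)
n = 150
u = rng.normal(size=n)                 # covariate entering linearly
t = rng.uniform(0, 1, n)               # covariate entering nonlinearly
y = 2 * u + np.sin(2 * np.pi * t) + rng.normal(scale=0.1, size=n)

# RBF kernel on the nonlinear covariate (bandwidth illustrative)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.1 ** 2))

beta, alpha, lam = 0.0, np.zeros(n), 0.1
for _ in range(20):                    # backfit between the two parts
    # nonlinear part: kernel ridge on the partial residual
    alpha = np.linalg.solve(K + lam * np.eye(n), y - beta * u)
    # linear part: least squares on the remaining residual
    beta = u @ (y - K @ alpha) / (u @ u)

resid = y - beta * u - K @ alpha
print(beta, np.abs(resid).mean())
```

The estimated linear coefficient stays interpretable while the kernel part absorbs the smooth nonlinear trend, which is the appeal of the partially linear formulation.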

Semiparametric Kernel Fisher Discriminant Approach for Regression Problems

  • Park, Joo-Young;Cho, Won-Hee;Kim, Young-Il
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.3 no.2
    • /
    • pp.227-232
    • /
    • 2003
  • Recently, support vector learning has attracted an enormous amount of interest in the areas of function approximation, pattern classification, and novelty detection. One of the main reasons for the success of support vector machines (SVMs) seems to be the availability of global and sparse solutions. Among the approaches sharing the same reasons for success and exhibiting similarly good performance is the KFD (kernel Fisher discriminant) approach. In this paper, we consider the problem of function approximation utilizing both predetermined basis functions and the KFD approach for regression. After reviewing support vector regression, the semi-parametric approach for including predetermined basis functions, and KFD regression, this paper presents an extension of the conventional KFD approach for regression in a direction that can utilize predetermined basis functions. The applicability of the presented method is illustrated via a regression example.
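
A simplified sketch of combining predetermined basis functions with a kernel expansion, in the spirit of the semi-parametric approach: the basis coefficients are left unpenalized while the kernel coefficients are ridge-penalized. Kernel ridge regression stands in here for KFD regression, and all data and settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 120
x = np.sort(rng.uniform(-1, 1, n))
y = 1 + 2 * x + 0.5 * np.sin(4 * x) + rng.normal(scale=0.1, size=n)

B = np.column_stack([np.ones(n), x])      # predetermined basis functions
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.2 ** 2))  # RBF kernel

# Minimize ||y - B c - K alpha||^2 + lam * alpha' K alpha:
# normal equations penalize only the kernel expansion coefficients
lam = 0.5
A = np.block([[B.T @ B, B.T @ K],
              [K.T @ B, K.T @ K + lam * K]])
rhs = np.concatenate([B.T @ y, K.T @ y])
coef = np.linalg.solve(A, rhs)
c, alpha = coef[:2], coef[2:]

fit = B @ c + K @ alpha
print(np.abs(fit - y).mean())
```

The predetermined basis carries the known linear structure, and the kernel expansion only has to model what the basis cannot, which is the motivation behind the semi-parametric extension.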