• Title/Summary/Keyword: 설명 변수 (explanatory variable)


A Deep Learning Model for Identifying The Time Lag Between Explanatory Variables and Response Variable in Regression Analysis (회귀분석에서 설명변수와 반응변수 간의 시차를 파악하는 딥러닝 모델)

  • Kim, Chaehyeon;Ryoo, Euirim;Lee, Ki Yong
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.868-871 / 2021
  • In regression analyses across fields such as climate, business, and economics, an explanatory variable often affects the response variable only after a certain time lag. Most regression analyses to date, however, assume that explanatory variables affect the response variable immediately, and little research has explored the time lag between explanatory and response variables. Yet identifying this lag is important for more accurate regression analysis. This paper proposes a deep learning model that, given regression data, identifies the time lag between the explanatory variables and the response variable. The proposed model expresses, as weights between nodes, which past value of an explanatory variable most strongly affects the current response variable, and searches for the weights that minimize the error of the regression model. After training, these weights are used to identify the lag between each explanatory variable and the response variable. Experiments confirm that, by accounting for the lag, the proposed method finds a more accurate regression model, with error about 1/100 that of conventional regression models that ignore the lag.
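
The lag-weight idea can be illustrated with a much simpler linear stand-in than the paper's deep learning model: regress the response on lagged copies of the explanatory variable and read off the lag whose fitted weight dominates. Everything below (the series, the lag depth, the coefficients) is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, max_lag, true_lag = 500, 5, 3

# Synthetic series: y depends on x shifted by `true_lag` steps.
x = rng.normal(size=n + max_lag)
y = 2.0 * x[max_lag - true_lag : n + max_lag - true_lag] + 0.1 * rng.normal(size=n)

# Design matrix whose columns are x at lags 0..max_lag; the fitted
# weight vector plays the role of the model's per-lag weights.
X = np.column_stack([x[max_lag - k : n + max_lag - k] for k in range(max_lag + 1)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The lag whose weight dominates is the estimated time lag.
estimated_lag = int(np.argmax(np.abs(w)))
print(estimated_lag)
```

The paper learns such weights inside a network trained by error minimization; the least-squares fit here is only the simplest instance of the same "largest weight marks the lag" readout.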

A Study on the Modal Split Model Using Zonal Data (존 데이터 기반 수단분담모형에 관한 연구)

  • Ryu, Si-Kyun;Rho, Jeong-Hyun;Kim, Ji-Eun
    • Journal of Korean Society of Transportation / v.30 no.1 / pp.113-123 / 2012
  • This study introduces a new type of modal split model that uses zonal data instead of cost data as independent variables. Models using cost data have been shown to suffer from multicollinearity between the travel time and cost variables and from the unpredictability of the independent variables. The zonal data employed in this study include (1) socioeconomic data, (2) land-use data, and (3) transportation system data. The test results showed that the proposed modal split model using zonal data performs better than the cost-based one.

Subset Selection in the Poisson Models - A Normal Predictors case - (포아송 모형에서의 설명변수 선택문제 - 정규분포 설명변수하에서 -)

  • Park, Chong-Sun
    • The Korean Journal of Applied Statistics / v.11 no.2 / pp.247-255 / 1998
  • In this paper, a new subset selection problem in the Poisson model is considered under normal predictors. It turns out that the subset model has a bigger variance than that of the Poisson model with random predictors, and this has been used to derive a new subset selection method similar to Mallows' $C_p$.


Variable Selection in PLS Regression with Penalty Function (벌점함수를 이용한 부분최소제곱 회귀모형에서의 변수선택)

  • Park, Chong-Sun;Moon, Guy-Jong
    • Communications for Statistical Applications and Methods / v.15 no.4 / pp.633-642 / 2008
  • A variable selection algorithm for partial least squares regression using a penalty function is proposed. We use the fact that the usual partial least squares regression problem can be expressed as a maximization problem with appropriate constraints, and we add a penalty function to this maximization problem. A simulated annealing algorithm can then be used to search for optimal solutions of the maximization problem with the penalty function added. The HARD penalty function is suggested as the best in several respects. Illustrations with real and simulated examples are provided.
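
As a rough sketch of the search strategy only (not the paper's PLS objective or its HARD penalty), the following runs simulated annealing over variable-inclusion masks, with an ordinary least-squares fit plus a fixed per-variable penalty standing in for the penalized criterion; all data and tuning constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta = np.array([3.0, 0, 0, 2.0, 0, 0, 0, 0])   # only x0 and x3 matter
y = X @ beta + rng.normal(size=n)

def penalized_cost(mask, lam=10.0):
    """RSS of an OLS fit on the selected columns plus a hard penalty
    (lam per included variable), a stand-in for the paper's criterion."""
    if not mask.any():
        return float(y @ y)
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return float(resid @ resid) + lam * mask.sum()

# Simulated annealing: flip one variable at a time, always accept
# improvements, accept worse moves with temperature-dependent probability.
mask = rng.random(p) < 0.5
cost = penalized_cost(mask)
T = 50.0
for step in range(2000):
    cand = mask.copy()
    cand[rng.integers(p)] = ~cand[rng.integers(p)] if False else ~cand[rng.integers(p)]
    c = penalized_cost(cand)
    if c < cost or rng.random() < np.exp((cost - c) / T):
        mask, cost = cand, c
    T *= 0.995          # geometric cooling schedule

selected = sorted(int(j) for j in np.flatnonzero(mask))
print(selected)
```

The annealer should settle on the truly relevant variables because dropping one of them raises the residual sum of squares far more than the per-variable penalty saves.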

Note on the estimation of informative predictor subspace and projective-resampling informative predictor subspace (다변량회귀에서 정보적 설명 변수 공간의 추정과 투영-재표본 정보적 설명 변수 공간 추정의 고찰)

  • Yoo, Jae Keun
    • The Korean Journal of Applied Statistics / v.35 no.5 / pp.657-666 / 2022
  • An informative predictor subspace is useful for estimating the central subspace when the conditions required by usual sufficient dimension reduction methods fail. Recently, for multivariate regression, Ko and Yoo (2022) newly defined a projective-resampling informative predictor subspace, instead of the informative predictor subspace, by adopting the projective-resampling method (Li et al., 2008). The new space is contained in the informative predictor subspace but contains the central subspace. In this paper, a method to directly estimate the informative predictor subspace is proposed, and it is compared with the method of Ko and Yoo (2022) through theoretical aspects and numerical studies. The numerical studies confirm that the Ko-Yoo method is better at estimating the central subspace than the proposed method and is more efficient in the sense that it has less variation in the estimation.

A study on the comparison of descriptive variables reduction methods in decision tree induction: A case of prediction models of pension insurance in life insurance company (생명보험사의 개인연금 보험예측 사례를 통해서 본 의사결정나무 분석의 설명변수 축소에 관한 비교 연구)

  • Lee, Yong-Goo;Hur, Joon
    • Journal of the Korean Data and Information Science Society / v.20 no.1 / pp.179-190 / 2009
  • In the financial industry, the decision tree algorithm has been widely used for classification analysis. One of the major difficulties in this setting is that there are very many explanatory variables to consider for modeling, so an effective method is needed for reducing the number of explanatory variables without seriously affecting the modeling results. In this research, we compare various variable-reduction methods to find the best one based on the modeling accuracy of the tree algorithm. We applied the methods to the pension insurance data of an insurance company to obtain empirical results. We found that selecting variables using the sensitivity analysis of a neural network is the most effective method for reducing the number of variables while keeping accuracy.
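
Sensitivity-based selection of this kind can be sketched with a permutation-importance analogue: fit any predictive model (a plain least-squares fit below stands in for the paper's neural network), then rank each variable by how much the error grows when that variable's column is shuffled. The data and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 6
X = rng.normal(size=(n, p))
y = 4.0 * X[:, 1] - 2.0 * X[:, 4] + rng.normal(size=n)   # only x1, x4 matter

# Stand-in predictive model: OLS fit (the paper uses a neural network).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((y - X @ beta) ** 2)

# Sensitivity of each input: increase in error when that column is
# permuted, which breaks its link to y.
importance = []
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((y - Xp @ beta) ** 2) - base_mse)

# Keep only the most influential variables.
ranking = np.argsort(importance)[::-1]
print(list(ranking[:2]))
```

Variables whose shuffling barely moves the error are the candidates for removal, which mirrors the reduction step described in the abstract.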


Sample-spacing Approach for the Estimation of Mutual Information (SAMPLE-SPACING 방법에 의한 상호정보의 추정)

  • Huh, Moon-Yul;Cha, Woon-Ock
    • The Korean Journal of Applied Statistics / v.21 no.2 / pp.301-312 / 2008
  • Mutual information is a measure of the association of an explanatory variable for predicting a target variable. It is used for variable ranking and variable subset selection. This study concerns the sample-spacing approach, which can be used to estimate mutual information from data consisting of continuous explanatory variables and a categorical target variable without estimating a joint probability density function. The results of Monte Carlo simulations and experiments with real-world data show that m = 1 is preferable when using sample spacing.
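
A minimal sketch of the approach, assuming a Vasicek-type m-spacing entropy estimate and computing MI between a continuous predictor and a categorical target as H(X) minus the class-weighted conditional entropies; the synthetic data are invented.

```python
import numpy as np

def spacing_entropy(x, m=1):
    """m-spacing estimate of differential entropy: average of
    log((n + 1) / m * (x_(i+m) - x_(i))) over the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    gaps = np.maximum(x[m:] - x[:-m], 1e-12)   # guard against ties
    return float(np.mean(np.log((n + 1) / m * gaps)))

def spacing_mi(x, y, m=1):
    """MI between continuous x and categorical y:
    H(x) - sum_c p(c) * H(x | y = c), each entropy by sample spacing.
    The additive bias of the m=1 estimator cancels in the difference."""
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    h = spacing_entropy(x, m)
    for c in np.unique(y):
        pc = float(np.mean(y == c))
        h -= pc * spacing_entropy(x[y == c], m)
    return h

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=4000)
informative = rng.normal(loc=3.0 * y, scale=1.0)   # mean shifts with class
noise = rng.normal(size=4000)                      # unrelated to y

mi_inf, mi_noise = spacing_mi(informative, y), spacing_mi(noise, y)
print(mi_inf, mi_noise)
```

The informative variable should score a clearly positive MI while the noise variable scores near zero, which is exactly the ranking behaviour the abstract describes.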

Effects of Multicollinearity in Logit Model (로짓모형에 있어서 다중공선성의 영향에 관한 연구)

  • Ryu, Si-Kyun
    • Journal of Korean Society of Transportation / v.26 no.1 / pp.113-126 / 2008
  • This research aims to explore the effects of multicollinearity on the reliability and goodness of fit of the logit model. To investigate the effects of multicollinearity on the multinomial logit model, numerical experiments were performed. Explanatory variables (attributes of the utility functions) with correlations ranging from rho = 0.0 to rho = 0.9 were generated, and the rho-squared values and t-statistics, which are the indices of goodness of fit and reliability of the logit model, were traced. From the well-designed numerical experiments, the following findings were validated: 1) When a new explanatory variable is added, some of the rho-squared values increase while the others decrease. 2) Higher correlations between generic variables make a logit model worse with respect to goodness of fit. 3) Multicollinearity tends to produce over-evaluated parameters. 4) The reliability of the estimated parameters tends to decrease when the correlations between attributes are high. These results suggest that we should examine the existence of multicollinearity and apply proper treatments to diminish it when developing a logit model.
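
The loss of reliability the experiments trace can be reproduced in a small linear-model analogue (the paper works with a multinomial logit): generate two predictors with correlation rho and watch the standard error of a slope estimate grow as rho rises, which is what drives the t-statistics down. Sample size and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

def coef_se(rho):
    """Standard error of the first slope in y ~ x1 + x2 when
    corr(x1, x2) = rho, from the usual OLS covariance formula."""
    z1, z2 = rng.normal(size=(2, n))
    x1 = z1
    x2 = rho * z1 + np.sqrt(1 - rho**2) * z2       # correlated predictor
    X = np.column_stack([np.ones(n), x1, x2])
    y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return float(np.sqrt(cov[1, 1]))

ses = {r: coef_se(r) for r in (0.0, 0.5, 0.9)}
print({r: round(s, 4) for r, s in ses.items()})
```

The inflation follows the variance inflation factor 1 / (1 - rho^2), so the rho = 0.9 standard error is roughly 2.3 times the uncorrelated one.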

A credit classification method based on generalized additive models using factor scores of mixtures of common factor analyzers (공통요인분석자혼합모형의 요인점수를 이용한 일반화가법모형 기반 신용평가)

  • Lim, Su-Yeol;Baek, Jang-Sun
    • Journal of the Korean Data and Information Science Society / v.23 no.2 / pp.235-245 / 2012
  • Logistic discrimination is a useful statistical technique for quantitative analysis in the financial service industry; it is not only easy to implement but also has a good classification rate. The generalized additive model is useful for credit scoring since it has the same advantages as logistic discrimination as well as the ability to account for nonlinear effects of the explanatory variables. It may, however, need too many additive terms when the number of explanatory variables is very large and there are dependencies among the variables. Mixtures of factor analyzers can be used for dimension reduction of high-dimensional features. This study proposes using the low-dimensional factor scores of mixtures of common factor analyzers as the new features in the generalized additive model. Its application is demonstrated in the classification of real credit scoring data. A comparison of the correct classification rates of competing techniques shows the superiority of the generalized additive model using factor scores.

Comments on the regression coefficients (다중회귀에서 회귀계수 추정량의 특성)

  • Kahng, Myung-Wook
    • The Korean Journal of Applied Statistics / v.34 no.4 / pp.589-597 / 2021
  • In simple and multiple regression, regression coefficients have different meanings; not only do the estimates differ, but they can also have different signs. Understanding the relative contribution of the explanatory variables in a regression model is an important part of regression analysis. In a standardized regression model, a regression coefficient can be interpreted as the change in the response variable, in standard deviation units, when the corresponding explanatory variable increases by one standard deviation while the other explanatory variables are held fixed. However, the size of a standardized regression coefficient is not a proper measure of the relative importance of each explanatory variable. In this paper, the estimator of the regression coefficient in multiple regression is expressed as a function of the correlation coefficients and the coefficient of determination. Furthermore, it is considered in terms of the effect of an additional explanatory variable and the additional increase in the coefficient of determination. We also explore the relationship between estimates of regression coefficients and correlation coefficients in various plots. These results are applied specifically to the case of two explanatory variables.
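
For two standardized predictors, the coefficient-as-function-of-correlations expression the abstract refers to has the familiar closed form b1 = (r_y1 - r_y2 * r_12) / (1 - r_12^2). The check below confirms it numerically against a direct least-squares fit on invented data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
# Two correlated predictors and a response (coefficients invented).
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)
y = 0.7 * x1 - 0.3 * x2 + rng.normal(size=n)

def std(v):
    """Center and scale to unit standard deviation."""
    return (v - v.mean()) / v.std()

x1s, x2s, ys = std(x1), std(x2), std(y)
r_y1 = np.mean(ys * x1s)     # sample correlation of y with x1
r_y2 = np.mean(ys * x2s)     # sample correlation of y with x2
r_12 = np.mean(x1s * x2s)    # sample correlation between predictors

# Standardized coefficient of x1 from the correlation formula...
b1_formula = (r_y1 - r_y2 * r_12) / (1 - r_12**2)
# ...and from an actual least-squares fit on the standardized data.
b1_fit = np.linalg.lstsq(np.column_stack([x1s, x2s]), ys, rcond=None)[0][0]

print(b1_formula, b1_fit)
```

With standardized variables the normal equations reduce to the correlation matrix, so the two numbers agree to machine precision; the formula also makes visible how a strong r_12 can flip the sign of b1 relative to r_y1, the phenomenon the abstract highlights.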