• Title/Abstract/Keyword: regression function


INFLUENCE ANALYSIS FOR GENERALIZED ESTIMATING EQUATIONS

  • Jung Kang-Mo
    • Journal of the Korean Statistical Society
    • /
    • Vol. 35, No. 2
    • /
    • pp.213-224
    • /
    • 2006
  • We investigate the influence of subjects or observations on regression coefficients of generalized estimating equations using the influence function and the derivative influence measures. The influence function for regression coefficients is derived and its sample versions are used for influence analysis. The derivative influence measures under certain perturbation schemes are derived. It can be seen that the influence function method and the derivative influence measures yield the same influence information. An illustrative example in longitudinal data analysis is given and we compare the results provided by the influence function method and the derivative influence measures.
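The sample influence function the abstract refers to has a simple closed form in the ordinary least squares case, which is the usual starting point before the GEE extension. Below is a minimal numpy sketch of that OLS analogue; the data, model, and sample sizes are illustrative assumptions, not the paper's longitudinal example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0])
y = X @ beta + rng.normal(size=n)

# Fit ordinary least squares.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Empirical (sample) influence function for the coefficients:
#   IF_i = n * (X'X)^{-1} x_i e_i  -- the effect of observation i.
XtX_inv = np.linalg.inv(X.T @ X)
IF = n * (X @ XtX_inv) * resid[:, None]   # shape (n, p)

# For OLS the influence values sum to (numerically) zero,
# because X'e = 0 at the least squares solution.
print(np.abs(IF.sum(axis=0)).max())
```

Large rows of `IF` point at influential observations; the GEE version replaces `(X'X)^{-1}` and the residuals with their estimating-equation counterparts.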

NONPARAMETRIC ESTIMATION OF THE VARIANCE FUNCTION WITH A CHANGE POINT

  • Kang Kee-Hoon;Huh Jib
    • Journal of the Korean Statistical Society
    • /
    • Vol. 35, No. 1
    • /
    • pp.1-23
    • /
    • 2006
  • In this paper we consider an estimation of the discontinuous variance function in nonparametric heteroscedastic random design regression model. We first propose estimators of the change point in the variance function and then construct an estimator of the entire variance function. We examine the rates of convergence of these estimators and give results for their asymptotics. Numerical work reveals that using the proposed change point analysis in the variance function estimation is quite effective.
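A crude way to see why change point detection helps variance function estimation is a split-based estimator applied to squared residuals. The sketch below is a simplified stand-in for the authors' nonparametric estimators: the moving-average mean estimate and the left/right split criterion are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.sort(rng.uniform(size=n))
# The variance function jumps from 0.2^2 to 1.0^2 at x = 0.5.
sigma = np.where(x < 0.5, 0.2, 1.0)
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)

# Crude mean estimate: moving average over k neighbours.
k = 50
m_hat = np.convolve(y, np.ones(k) / k, mode="same")
r2 = (y - m_hat) ** 2            # squared residuals track the variance

# Drop boundary points where the moving average is unreliable.
xi, ri = x[k:-k], r2[k:-k]
m = len(ri)

# Estimate the change point as the split maximizing the jump in the
# mean squared residual between the left and right segments.
best, tau_hat = -1.0, None
for i in range(100, m - 100):
    jump = abs(ri[i:].mean() - ri[:i].mean())
    if jump > best:
        best, tau_hat = jump, xi[i]
print(round(tau_hat, 2))         # close to the true change point 0.5
```

With the jump located, one can smooth the squared residuals separately on each side of `tau_hat` to estimate the entire variance function, mirroring the paper's two-stage strategy.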

Variable Selection in Sliced Inverse Regression Using Generalized Eigenvalue Problem with Penalties

  • Park, Chong-Sun
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 14, No. 1
    • /
    • pp.215-227
    • /
    • 2007
  • A variable selection algorithm for sliced inverse regression (SIR) using penalty functions is proposed. We note that SIR models can be expressed as generalized eigenvalue decompositions and incorporate penalty functions into them. A small simulation study suggests that the HARD penalty function is the best at preserving the original directions compared with other well-known penalty functions, and that it is effective in forcing the coefficient estimates of irrelevant predictors to zero. Results from illustrative examples with simulated and real data sets are provided.
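Plain SIR, without the penalty step, can be sketched in a few lines: slice on the response, average the predictors within slices, and eigen-decompose the between-slice covariance. The single-index model and slice count below are illustrative assumptions, and the paper's penalized eigenvalue step is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 6
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0, 0, 0, 0]) / np.sqrt(2)
y = (X @ beta) ** 3 + 0.5 * rng.normal(size=n)

# Sliced Inverse Regression: slice on y, average X within slices,
# and eigen-decompose the covariance of the slice means.
H = 10                                   # number of slices
order = np.argsort(y)
slices = np.array_split(order, H)
means = np.array([X[s].mean(axis=0) for s in slices])
M = np.cov(means.T, bias=True)           # between-slice covariance (p x p)

# X is already standardized here, so the leading eigenvector of M
# estimates the central subspace direction (up to sign). A penalty
# such as HARD thresholding would shrink its small entries to zero.
vals, vecs = np.linalg.eigh(M)
d = vecs[:, -1]
print(abs(d @ beta))                     # near 1 => direction recovered
```

The penalized version of the paper adds a penalty to this eigenvalue problem so that the entries of `d` belonging to irrelevant predictors are estimated as exactly zero.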

Censored Kernel Ridge Regression

  • Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 16, No. 4
    • /
    • pp.1045-1052
    • /
    • 2005
  • This paper deals with the estimation of kernel ridge regression when the responses are subject to random right censoring. Weighted data are formed by redistributing the weights of the censored observations to the uncensored ones, and kernel ridge regression is then carried out on the weighted data. The hyperparameters of the model, which affect the performance of the proposed procedure, are selected by a generalized approximate cross validation (GACV) function. Experimental results indicating the performance of the proposed procedure are presented.
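The weighted fit itself has a closed form. The sketch below solves a weighted kernel ridge regression with a Gaussian kernel; the uniform redistribution of the censored weights is a simplified stand-in for the paper's Kaplan-Meier-based scheme, and the data, kernel width, and ridge parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
x = rng.uniform(-3, 3, size=n)
t = np.sin(x) + 2.0 + 0.1 * rng.normal(size=n)   # true lifetimes
c = rng.uniform(1.0, 4.0, size=n)                # censoring times
y = np.minimum(t, c)
delta = (t <= c).astype(float)                   # 1 = uncensored

# Toy weights: censored observations get weight 0 and their total mass
# is spread uniformly over the uncensored ones (the paper redistributes
# via Kaplan-Meier; this uniform scheme is a simplified stand-in).
w = delta * (n / delta.sum())

# Weighted kernel ridge regression with a Gaussian kernel:
#   minimize  sum_i w_i (y_i - f(x_i))^2 + lam * ||f||^2
#   solution: f = K @ alpha,  alpha = (W K + lam I)^{-1} W y
def gauss_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = gauss_kernel(x, x)
lam = 0.1
alpha = np.linalg.solve(np.diag(w) @ K + lam * np.eye(n), w * y)
f_hat = K @ alpha

mask = delta == 1
mse = np.mean((f_hat[mask] - (np.sin(x[mask]) + 2.0)) ** 2)
print(round(mse, 3))    # small: the fit tracks the uncensored curve
```

In the paper, `gamma` and `lam` are the hyperparameters chosen by minimizing the GACV function rather than being fixed as above.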


New Dispersion Function in the Rank Regression

  • Choi, Young-Hun
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 9, No. 1
    • /
    • pp.101-113
    • /
    • 2002
  • In this paper we introduce a new score generating function for rank regression in the linear regression model. The score function compares the $\gamma$'th and $s$'th powers of the tail probabilities of the underlying probability distribution. We show that the rank estimate asymptotically converges to a multivariate normal distribution. Further, we derive the asymptotic Pitman relative efficiencies and the most efficient values of $\gamma$ and $s$ under symmetric distributions such as the uniform, normal, Cauchy and double exponential distributions and under asymmetric distributions such as the exponential and lognormal distributions, respectively.
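The rank estimate in question minimizes Jaeckel's dispersion function, a weighted sum of residuals with rank-based scores. The sketch below uses the classical Wilcoxon score, not the paper's new $(\gamma, s)$ tail-probability score, and estimates a single slope by grid search; data and grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + 0.5 * rng.standard_cauchy(size=n)  # heavy-tailed errors

# Jaeckel's dispersion with Wilcoxon scores a(i) = sqrt(12)*(i/(n+1) - 1/2).
# (The paper proposes a new score generating function; the Wilcoxon score
#  is the classical choice, used here only for illustration.)
def dispersion(b):
    e = y - b * x
    ranks = e.argsort().argsort() + 1          # ranks of the residuals
    a = np.sqrt(12) * (ranks / (n + 1) - 0.5)  # centered scores, sum to 0
    return np.sum(a * e)

grid = np.linspace(0.0, 3.0, 601)
b_hat = grid[np.argmin([dispersion(b) for b in grid])]
print(b_hat)   # near the true slope 1.5 despite the Cauchy noise
```

Because the scores sum to zero, the dispersion is invariant to the intercept, which is why only the slope is estimated this way; the robustness under Cauchy errors is exactly what the Pitman efficiency comparisons in the paper quantify.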

Estimation of Asymmetric Bell Shaped Probability Curve using Logistic Regression

  • 박성현;김기호;이소형
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 14, No. 1
    • /
    • pp.71-80
    • /
    • 2001
  • Logistic regression is the most common generalized linear model for binary response data and is used to estimate the probability function of an explanatory variable. In many practical situations the probability function takes a bell-shaped form. In such cases, analysis with a logistic regression model that includes a quadratic term has two limitations: the implied symmetry assumption degrades the reliability of the estimated probability function when the bell curve is asymmetric, and the quadratic term makes the effect of the explanatory variable difficult to interpret. To overcome these problems, this paper proposes a method that combines logistic regression with iterative bisection to estimate the probability curve regardless of the shape of the bell, and compares it with the quadratic logistic regression model through simulation.
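The baseline the paper improves upon is easy to reproduce: a logistic regression with a quadratic term, whose fitted probability curve is forced to be a symmetric bell. The sketch below fits that baseline by damped Newton-Raphson on simulated data; the true curve, sample size, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000
x = rng.uniform(-3, 3, size=n)
p_true = 0.8 * np.exp(-1.5 * x ** 2)      # bell-shaped, peaked at x = 0
y = (rng.uniform(size=n) < p_true).astype(float)

# Quadratic logistic regression, P(y=1|x) = sigmoid(b0 + b1 x + b2 x^2):
# the model whose built-in symmetry limits it on asymmetric bells.
X = np.column_stack([np.ones(n), x, x ** 2])
b = np.zeros(3)
for _ in range(100):                      # damped Newton-Raphson
    p = 1.0 / (1.0 + np.exp(-X @ b))
    H = (X * (p * (1 - p))[:, None]).T @ X
    b += 0.5 * np.linalg.solve(H, X.T @ (y - p))

peak = -b[1] / (2 * b[2])                 # vertex of the fitted parabola
print(round(peak, 2))                     # near the true peak at 0
```

Whatever the data, the fitted curve here is symmetric about `peak`; the paper's iterative-bisection method removes that restriction.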


Support vector expectile regression using IRWLS procedure

  • Choi, Kook-Lyeol;Shim, Jooyong;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 25, No. 4
    • /
    • pp.931-939
    • /
    • 2014
  • In this paper we propose an iteratively reweighted least squares (IRWLS) procedure to solve the quadratic programming problem of support vector expectile regression with an asymmetrically weighted squared loss function. The proposed procedure makes it easy to select appropriate hyperparameters via the generalized cross validation function. Through numerical studies on artificial and real data sets we demonstrate the estimation performance of the proposed method.
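The IRWLS idea is easiest to see in the plain linear case: alternate between computing residual-dependent asymmetric weights and refitting weighted least squares. The sketch below is that linear simplification, not the paper's kernel/support-vector formulation; data and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Expectile regression minimizes the asymmetrically weighted squared loss
#   loss(r) = |tau - 1(r < 0)| * r^2,
# solved here by iteratively reweighted least squares (IRWLS).
def expectile_fit(X, y, tau, iters=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(r < 0, 1 - tau, tau)        # asymmetric weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

b_50 = expectile_fit(X, y, 0.5)   # tau = 0.5 reduces to least squares
b_90 = expectile_fit(X, y, 0.9)   # the 90% expectile lies above the mean
print(b_50.round(2), b_90.round(2))
```

Each IRWLS step is a closed-form weighted least squares solve, which is what makes hyperparameter selection by cross validation cheap in the paper's procedure.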

Variable Selection in PLS Regression with Penalty Function

  • 박종선;문규종
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 15, No. 4
    • /
    • pp.633-642
    • /
    • 2008
  • This paper considers the problem of selecting the explanatory variables needed in a partial least squares (PLS) regression model, which is widely used when there are one or more response variables and the number of explanatory variables is large relative to the number of observations, by applying a penalty function. The required explanatory variables are selected by adding a penalty function to the optimization problem for each latent variable and then applying simulated annealing. Application to real data shows that the method effectively removes unnecessary variables without substantially reducing the explanatory and predictive power of the model, so it can be used to select an optimal subset of explanatory variables in PLS regression.
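The object being penalized is the PLS weight vector for each latent variable. A one-component PLS fit makes this concrete: the weight vector maximizing the covariance of the score with the response is proportional to X'y. The penalty and the simulated annealing search are omitted here; the data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 1.5]         # only 3 relevant predictors
y = X @ beta + 0.5 * rng.normal(size=n)

# One-component PLS (the building block the paper penalizes):
# the weight vector maximizing Cov(Xw, y) is w proportional to X'y.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)              # PLS direction; a penalty would force
                                    # the small (irrelevant) entries to zero
t_score = Xc @ w                    # latent variable (score)
q = (t_score @ yc) / (t_score @ t_score)
y_hat = y.mean() + q * t_score

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum(yc ** 2)
print(round(r2, 2))
```

Note that the entries of `w` for the irrelevant predictors are small but nonzero; the paper's penalty plus simulated annealing is what turns this into actual variable selection.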

Competing Risks Regression Analysis

  • 백재욱
    • Journal of Applied Reliability
    • /
    • Vol. 18, No. 2
    • /
    • pp.130-142
    • /
    • 2018
  • Purpose: The purpose of this study is to introduce a regression method in the presence of competing risks and to show how the method can be used with hypothetical data. Methods: Survival analysis has been widely used in biostatistics but much less so in reliability. Competing risks, where two or more causes of failure are present and the occurrence of one event precludes the occurrence of the others, arise throughout the reliability field, yet such data are often ignored or analysed incorrectly. In particular, the Kaplan-Meier method is sometimes used to calculate the probability of failure in the presence of competing risks, thereby overestimating the true probability of failure. Hence, the cumulative incidence function is introduced, and sample competing risks data are analysed using it along with some graphs. Lastly, the cumulative incidence function is briefly compared with regression-type analysis. Results: We used the cumulative incidence function to calculate survival and failure probabilities in the presence of competing risks, and drew graphs depicting the failure trend over the lifetime. Conclusion: This research shows that the Kaplan-Meier method is not appropriate for evaluating survival or failure over the lifetime in the presence of competing risks; the cumulative incidence function is useful instead, and graphs based on it are informative.
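The overestimation the abstract describes can be shown in a few lines: compute the cumulative incidence function (CIF) for one cause and compare it with the naive "1 minus Kaplan-Meier treating the other cause as censoring". The exponential failure times below are illustrative assumptions, and censoring is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 400
t1 = rng.exponential(10.0, size=n)     # time to failure from cause 1
t2 = rng.exponential(15.0, size=n)     # time to failure from cause 2
t = np.minimum(t1, t2)
cause = np.where(t1 <= t2, 1, 2)       # observed cause of each failure

# Cumulative incidence function for cause k:
#   CIF_k(t) = sum over event times s <= t of S(s-) * d_k(s) / n(s),
# where S is the overall Kaplan-Meier survivor function.
order = np.argsort(t)
t, cause = t[order], cause[order]
at_risk = np.arange(n, 0, -1)          # n(s) just before each event
s_minus = np.concatenate([[1.0], np.cumprod(1 - 1 / at_risk)[:-1]])
cif1 = np.cumsum(s_minus * (cause == 1) / at_risk)
cif2 = np.cumsum(s_minus * (cause == 2) / at_risk)

# Naive estimate: 1 - KM with cause-2 failures treated as censoring.
km1 = 1 - np.cumprod(1 - (cause == 1) / at_risk)
print(cif1[-1], km1[-1])   # km1 overstates the cause-1 failure probability
```

The cause-specific CIFs add up to the overall failure probability, while the naive 1-KM curves do not, which is exactly why the paper recommends the CIF in the presence of competing risks.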

A Study of Freshman Dropout Prediction Model Using Logistic Regression with Shift-Sigmoid Classification Function

  • 김동형
    • Journal of the Korea Society of Digital Industry and Information Management
    • /
    • Vol. 19, No. 4
    • /
    • pp.137-146
    • /
    • 2023
  • The dropout of university freshmen is a very important issue for the finances of universities, and the dropout rate is one of the key indicators among the external evaluation items of universities. Universities therefore need to identify likely dropouts in advance and apply various dropout prevention programs to them. This paper proposes a method to select such students by applying logistic regression with a shift-sigmoid classification function, using only the quantitative data from the first semester of the first year that most universities already have. Because the shift-sigmoid function serves as the classification function, both the number of predicted subjects and the prediction accuracy can be adjusted. In experiments, the number of predicted dropout subjects varied from 100% down to 20% of the actual number of dropouts, with a prediction accuracy of 75% to 98%.
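The coverage-versus-accuracy trade-off described above comes from shifting the sigmoid's argument, which is equivalent to raising the decision threshold on the linear score. The sketch below illustrates this on simulated GPA data; the predictor, the true dropout model, and the shift values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000
gpa = rng.normal(3.0, 0.5, size=n)
# Simulated truth: dropout probability decreases with first-semester GPA.
p_true = 1.0 / (1.0 + np.exp(4.0 * (gpa - 2.5)))
drop = (rng.uniform(size=n) < p_true).astype(int)

# Ordinary logistic regression fitted by damped Newton-Raphson.
X = np.column_stack([np.ones(n), gpa])
b = np.zeros(2)
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-X @ b))
    H = (X * (p * (1 - p))[:, None]).T @ X
    b += 0.5 * np.linalg.solve(H, X.T @ (drop - p))

# Shift-sigmoid classification: predict dropout when sigmoid(z - s) > 0.5,
# i.e. when the linear score z exceeds the shift s. Raising s flags fewer,
# higher-risk students, trading coverage for precision.
z = X @ b
for s in [0.0, 1.0, 2.0]:
    flagged = z > s
    print(s, int(flagged.sum()), round(drop[flagged].mean(), 2))
```

Each printed row shows the shift, how many students are flagged, and the fraction of flagged students who actually drop out: larger shifts flag fewer students with higher accuracy, matching the 100%-to-20% coverage and 75%-to-98% accuracy ranges reported in the abstract.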