• Title/Summary/Keyword: 로그 모형 (log model)

A Brief Efficiency Measurement Way for the Korean Container Terminals Using Stochastic Frontier Analysis (확률프론티어분석을 통한 국내컨테이너 터미널의 효율성 측정방법 소고)

  • Park, Ro-Kyung
    • Journal of Korea Port Economic Association
    • /
    • v.26 no.4
    • /
    • pp.63-87
    • /
    • 2010
  • The purpose of this paper is to measure the efficiency of Korean container terminals using SFA (Stochastic Frontier Analysis). Inputs [number of employees, quay length, container terminal area, number of gantry cranes] and output [TEU] for 8 Korean container terminals over 3 years (2002, 2003, and 2004) are analyzed with both SFA and DEA models. The main empirical results are as follows. First, the null hypothesis that technical inefficiency does not exist is rejected, and in the translog model the estimate is significant. Second, the time-series models show significant results. Third, the average technical efficiency of Korean container terminals is 73.49% in the Cobb-Douglas model and 79.04% in the translog model. Fourth, to enhance technical efficiency, Korean container terminals should increase the amount of TEU handled. Fifth, the SFA and DEA models show a high Spearman rank correlation coefficient (84.45%). The main policy implication of these findings is that the managers of port investment and management at the Ministry of Land, Transport and Maritime Affairs in Korea should introduce SFA alongside DEA models for measuring the efficiency of Korean ports and terminals.
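
A minimal sketch of the kind of Cobb-Douglas stochastic frontier estimation the abstract describes, using the standard normal/half-normal composed error and maximum likelihood via scipy. The data, variable names, and starting values are illustrative placeholders; the paper's translog terms and panel structure are not reproduced.

```python
# Cobb-Douglas stochastic frontier with normal/half-normal composed error,
# estimated by maximum likelihood. All data below are simulated placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 24  # e.g. 8 terminals x 3 years
X = np.column_stack([np.ones(n),
                     np.log(rng.uniform(100, 500, n)),    # log employees
                     np.log(rng.uniform(300, 1500, n)),   # log quay length
                     np.log(rng.uniform(5, 30, n))])      # log gantry cranes
beta_true = np.array([1.0, 0.4, 0.3, 0.2])
y = X @ beta_true + rng.normal(0, 0.1, n) - np.abs(rng.normal(0, 0.2, n))  # log TEU

def neg_loglik(theta):
    beta, log_sv, log_su = theta[:-2], theta[-2], theta[-1]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta
    # Aigner-Lovell-Schmidt half-normal frontier log-likelihood
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [-2.0, -1.5]])
fit = minimize(neg_loglik, start, method="BFGS")
print("frontier coefficients:", fit.x[:-2])
```

Terminal-level technical efficiency is typically recovered afterwards from the conditional distribution of the inefficiency term given the composed residual, and the SFA and DEA rankings can then be compared with `scipy.stats.spearmanr`, which is the comparison the abstract reports.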

A Study on Developing Crash Prediction Model for Urban Intersections Considering Random Effects (임의효과를 고려한 도심지 교차로 교통사고모형 개발에 관한 연구)

  • Lee, Sang Hyuk;Park, Min Ho;Woo, Yong Han
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.14 no.1
    • /
    • pp.85-93
    • /
    • 2015
  • Previous studies have estimated crash prediction models with fixed effect models, which assume fixed coefficient values and do not consider the characteristics of individual intersections. However, the fixed effect model tends to underestimate the standard errors, which results in overestimated t-values. To overcome these shortcomings, a random effect model can be used that accounts for heterogeneity in AADT, geometric information, and unobserved factors. In this study, data were collected from 89 intersections in Daejeon, and crash prediction models were estimated using both random and fixed effect negative binomial regression for comparison and analysis. In the estimated models, AADT, speed limits, number of lanes, exclusive right turn pockets, and front traffic signals were found to be significant. Comparing the statistical fit of the two models, the random effect model performed better, with a log-likelihood at convergence of -1537.802 versus -1691.327 for the fixed effect model. The likelihood ratio index was computed as 0.279 for the random effect model and 0.207 for the fixed effect model. This means that the random effect model improves on the statistical fit of the fixed effect model.
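
A small worked check of the model comparison using only the numbers quoted in the abstract, assuming the reported "likelihood ratio index" is McFadden's rho-squared (1 - LL/LL0); that interpretation, and the one-degree-of-freedom chi-square reference for adding the random effect, are assumptions, not statements from the paper.

```python
# Compare the fixed and random effects fits from the reported log-likelihoods,
# and check that the two rho-squared values imply a common null log-likelihood.
from scipy.stats import chi2

ll_random, ll_fixed = -1537.802, -1691.327
rho2_random, rho2_fixed = 0.279, 0.207

# Likelihood-ratio statistic for adding the random effect (1 extra parameter;
# the test is on a boundary, so the plain chi-square p-value is conservative).
lr = 2 * (ll_random - ll_fixed)
print(f"LR = {lr:.1f}, p = {chi2.sf(lr, df=1):.3g}")   # ~307.1, p ~ 0

# Both rho-squared values imply LL0 of roughly -2133, which is consistent.
print(ll_random / (1 - rho2_random))   # ~ -2132.9
print(ll_fixed / (1 - rho2_fixed))     # ~ -2132.8
```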

A study on non-response bias adjusted estimation in business survey (사업체조사에서의 무응답 편향보정 추정에 관한 연구)

  • Chung, Hee Young;Shin, Key-Il
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.1
    • /
    • pp.11-23
    • /
    • 2020
  • A sampling design should provide statistics that meet a given accuracy while saving cost and time. However, a large number of non-responses occur due to deteriorating survey conditions, which significantly reduces the accuracy of the survey results. Non-responses occur for a variety of reasons. Chung and Shin (2017, 2019) and Min and Shin (2018) found that the accuracy of estimation is improved by removing the bias caused by non-response when the response rate is an exponential or linear function of the variable of interest. In those cases they assumed that the error of the superpopulation model follows a normal distribution. In this study, we propose a non-response bias adjusted estimator for the case where the error of the superpopulation model follows a gamma or log-normal distribution in a business survey. We confirm the superiority of the proposed estimator through simulation studies.
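
A toy simulation of the problem the abstract addresses: when the response probability is an exponential function of the variable of interest, the respondent-only mean is biased. The superpopulation and response-mechanism parameters below are invented for illustration, and the paper's actual bias-adjusted estimator is not reproduced.

```python
# Non-ignorable non-response demo: response probability decays exponentially in y,
# so respondents systematically under-represent large values and the naive mean is biased.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.uniform(1, 5, n)                                     # auxiliary variable (e.g. firm size)
y = 2.0 * x * rng.lognormal(mean=0.0, sigma=0.5, size=n)     # variable of interest, log-normal error

resp_prob = np.exp(-0.05 * y)                                # larger y -> less likely to respond
responded = rng.uniform(size=n) < resp_prob

print("true mean           :", y.mean())
print("respondent-only mean:", y[responded].mean())          # noticeably smaller -> bias
```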

A Study on the Factors Affecting the Arson (방화 발생에 영향을 미치는 요인에 관한 연구)

  • Kim, Young-Chul;Bak, Woo-Sung;Lee, Su-Kyung
    • Fire Science and Engineering
    • /
    • v.28 no.2
    • /
    • pp.69-75
    • /
    • 2014
  • This study derives the factors that affect the occurrence of arson from statistical data (population, economic, and social factors) by multiple regression analysis. Four functional forms are considered: linear, semi-log, inverse log, and double-log functions. Each functional form is analyzed using stepwise selection, which considers the addition and removal of independent variables at each step. To address two common problems of multiple regression analysis, autocorrelation and multicollinearity, the Variance Inflation Factor (VIF) and the Durbin-Watson statistic were examined. The optimal model was selected by adjusted R-squared; the linear function scored 0.935 (93.5%), the highest of the four functional forms, and was therefore chosen as the optimal model in this study. The selected model was then interpreted. As a result of the analysis, the factors affecting arson were the incidence of crime (0.829), the general divorce rate (0.151), the financial autonomy rate (0.149), and the consumer price index (0.099).
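
A minimal statsmodels sketch of the diagnostics named in the abstract: fitting an OLS model and checking VIF (multicollinearity) and the Durbin-Watson statistic (autocorrelation). The column names and simulated data are hypothetical placeholders, and the stepwise selection step is not shown.

```python
# OLS with VIF and Durbin-Watson diagnostics; data and variable names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "crime_incidence": rng.normal(100, 20, 60),
    "divorce_rate":    rng.normal(2.5, 0.5, 60),
    "fiscal_autonomy": rng.normal(50, 10, 60),
    "cpi":             rng.normal(105, 3, 60),
})
df["arson"] = (0.8 * df["crime_incidence"] + 5 * df["divorce_rate"]
               + rng.normal(0, 10, 60))

X = sm.add_constant(df.drop(columns="arson"))
model = sm.OLS(df["arson"], X).fit()

print("adjusted R-squared:", model.rsquared_adj)
print("Durbin-Watson     :", durbin_watson(model.resid))
for i, name in enumerate(X.columns[1:], start=1):   # skip the constant
    print(name, "VIF =", variance_inflation_factor(X.values, i))
```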

Estimation of Occurrence Probability of Socioeconomic Damage Caused by Meteorological Drought Using Categorical Data Analysis (범주형 자료 분석을 활용한 사회경제적 가뭄 피해 발생확률 산정 : 충청북도의 적용사례를 중심으로)

  • Yu, Ji Soo;Yoo, Jiyoung;Kim, Min-ji;Kim, Tae-Woong
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.348-348
    • /
    • 2021
  • The ultimate goal of drought research is to improve understanding of the mechanisms of drought occurrence and to advance prediction techniques so that proactive responses become possible. Drought indices used in drought analysis are generally treated as continuous variables when building probabilistic models, but drought state and drought damage data are ordinal and discrete variables, so categorical data analysis techniques are more appropriate. Therefore, this study uses two categorical data analysis methods, the log-linear model and the logistic regression model, to identify the relationship between meteorological drought and the occurrence of damage. Collecting drought damage information for damage prediction is very difficult, because the types of damage caused by drought are diverse and stakeholders in different sectors perceive drought damage differently. In this study, drought damage data for Chungcheongbuk-do were collected from the National Drought Information Portal (drought.go.kr). Over 30 years (1991-2020), a total of 272 drought damage events were confirmed in 34 of the 238 administrative districts (eup/myeon/dong). The average annual number of drought occurrences per district, analyzed using the Standardized Precipitation Index (SPI), was about 8.44, and the year with the most drought occurrences was 2001 (18.7 occurrences on average). It takes several weeks to several months for a meteorological drought caused by a lack of precipitation to propagate into a hydrological drought that causes socioeconomic damage. To capture this relationship, the occurrence of drought damage was set as the variable to be predicted and the drought state prior to the damage as the explanatory variable, and the probability of damage occurrence given meteorological drought was estimated. The results show that when a continued drought state preceded the damage, the probability of drought damage occurrence was about 2.5 times higher than when only the drought state at the time of damage was considered.
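
A minimal sketch of the logistic-regression half of the analysis described above: the probability of drought damage as a function of the preceding drought state. The binary indicators are simulated placeholders; the SPI computation and the log-linear model are not reproduced.

```python
# Logistic regression of damage occurrence on a prior-drought indicator (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
prior_drought = rng.integers(0, 2, n)              # 1 = continued drought before the event
logit_p = -2.0 + 1.0 * prior_drought               # assumed true model
damage = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(pd.DataFrame({"prior_drought": prior_drought}))
fit = sm.Logit(damage, X).fit(disp=0)
print(fit.params)
print("odds ratio for prior drought:", np.exp(fit.params["prior_drought"]))
```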

An Analysis on Vehicle Accident Factors of Intersections using Random Effects Tobit Regression Model (Random Effects Tobit 회귀모형을 이용한 교차로 교통사고 요인 분석)

  • Lee, Sang Hyuk;Lee, Jung-Beom
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.1
    • /
    • pp.26-37
    • /
    • 2017
  • This study develops safety performance functions (SPFs) for urban intersections using a random effects Tobit regression model and analyzes the correlations between crashes and contributing factors. A fixed effects Tobit regression model was also estimated to compare model validity with the random effects model. As a result, AADT, speed limits, number of lanes, land usage, exclusive right turn lanes, and front traffic signals were found to be significant. Comparing the statistical fit of the two models, the random effects Tobit regression model for the total crash rate performed better, with $R^2_p$: 0.418, log-likelihood at convergence: -3210.103, ${\rho}^2$: 0.056, MAD: 19.533, MAPE: 75.725, and RMSE: 26.886, compared with $R^2_p$: 0.298, log-likelihood at convergence: -3276.138, ${\rho}^2$: 0.037, MAD: 20.725, MAPE: 82.473, and RMSE: 27.267 for the fixed effects model. The random effects Tobit regression model for the injury crash rate showed similar results.
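
A minimal sketch of the core model family named in the abstract: a pooled Tobit model with the crash rate left-censored at zero, estimated by maximum likelihood with scipy. The random effects panel structure and the paper's covariates are not reproduced; the data and coefficients below are simulated.

```python
# Pooled Tobit (left-censored at zero) log-likelihood, maximized with scipy. Simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n),
                     np.log(rng.uniform(5e3, 5e4, n)),   # log AADT
                     rng.integers(2, 7, n)])             # number of lanes
beta_true = np.array([-20.0, 2.0, 0.5])
y_star = X @ beta_true + rng.normal(0, 2.0, n)
y = np.clip(y_star, 0, None)                             # observed crash rate, censored at 0

def neg_loglik(theta):
    beta, log_sigma = theta[:-1], theta[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    ll = np.where(y > 0,
                  norm.logpdf((y - xb) / sigma) - np.log(sigma),  # uncensored observations
                  norm.logcdf(-xb / sigma))                       # observations censored at zero
    return -ll.sum()

start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [1.0]])
fit = minimize(neg_loglik, start, method="BFGS")
print("Tobit coefficients:", fit.x[:-1], "sigma:", np.exp(fit.x[-1]))
```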

Applications of Diamond Graph (다이아몬드 그래프의 활용 방법)

  • Hong C.S.;Ko Y.S.
    • The Korean Journal of Applied Statistics
    • /
    • v.19 no.2
    • /
    • pp.361-368
    • /
    • 2006
  • There are many two- and three-dimensional graphs for representing two-dimensional categorical data. Among them, Li et al. (2003) proposed the Diamond Graph, which projects a three-dimensional graph into two dimensions: the third dimension is replaced with a diamond shape whose area and central vertical and horizontal lengths represent the outcome. In this paper, we use the Diamond Graph to test the independence of two predictor variables for two-dimensional data. This graph can also be applied to finding the best-fitting log-linear model for three-dimensional data.
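
A rough matplotlib sketch of the idea as described in the abstract: one diamond per cell of a two-way table, with its horizontal and vertical half-widths (and hence its area) scaled by the cell outcome. This is only an interpretation for illustration, not Li et al.'s exact construction, and the contingency table is invented.

```python
# One diamond per cell of a two-way table, sized by the cell value (illustrative only).
import numpy as np
import matplotlib.pyplot as plt

table = np.array([[20, 5, 10],
                  [8, 25, 12]])            # hypothetical 2x3 contingency table
scaled = 0.45 * table / table.max()        # half-widths, kept inside each grid cell

fig, ax = plt.subplots()
for (i, j), s in np.ndenumerate(scaled):
    x, y = j, table.shape[0] - 1 - i       # row 0 drawn on top
    ax.fill([x - s, x, x + s, x], [y, y + s, y, y - s], color="steelblue")
ax.set_xticks(range(table.shape[1]))
ax.set_xticklabels(["col A", "col B", "col C"])
ax.set_yticks(range(table.shape[0]))
ax.set_yticklabels(["row 2", "row 1"])
ax.set_aspect("equal")
plt.show()
```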

The Comparative Study of Software Optimal Release Time of Finite NHPP Model Considering Log Linear Learning Factor (로그선형 학습요인을 이용한 유한고장 NHPP모형에 근거한 소프트웨어 최적방출시기 비교 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Convergence Security Journal
    • /
    • v.12 no.6
    • /
    • pp.3-10
    • /
    • 2012
  • This paper studies the decision problem of optimal release policies that arises after testing a software system in the development phase and transferring it to the user. For correcting or modifying the software, a finite-failure non-homogeneous Poisson process (NHPP) model with a learning factor is presented, and release policies are proposed for the log-linear type life distribution model, which is used in reliability work because of its flexible shape and scale parameters. Optimal software release policies are discussed that minimize the total average software cost of development and maintenance under the constraint of satisfying a software reliability requirement. In a numerical example, the parameters are estimated by maximum likelihood from failure time data and the optimal software release time is estimated.
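
A generic sketch of the optimal-release-time calculation for a finite-failure NHPP: total expected cost = cost of faults fixed during testing + cost of faults fixed in operation + cost of testing time, minimized over the release time T. A Goel-Okumoto style mean value function m(t) = a(1 - e^(-bt)) is used as a stand-in for the paper's log-linear learning-factor model, and all parameter values and unit costs are illustrative.

```python
# Cost-minimizing software release time for a finite-failure NHPP (illustrative parameters).
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 100.0, 0.05           # expected total faults, fault detection rate
c1, c2, c3 = 1.0, 5.0, 0.5   # fix cost in testing, fix cost in operation, cost per unit test time

def m(t):
    return a * (1.0 - np.exp(-b * t))          # expected faults detected by time t

def total_cost(T):
    return c1 * m(T) + c2 * (a - m(T)) + c3 * T

res = minimize_scalar(total_cost, bounds=(0.0, 500.0), method="bounded")
print("optimal release time:", res.x, "expected total cost:", res.fun)
```

With these numbers the first-order condition m'(T) = c3 / (c2 - c1) gives T around 74, which the numerical minimizer reproduces; a reliability-requirement constraint would simply truncate the feasible range of T.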

Assessing the accuracy of the maximum likelihood estimator in logistic regression models (로지스틱 회귀모형에서 최우추정량의 정확도 산정)

  • 이기원;손건태;정윤식
    • The Korean Journal of Applied Statistics
    • /
    • v.6 no.2
    • /
    • pp.393-399
    • /
    • 1993
  • When we compute the maximum likelihood estimators of the parameters of logistic regression models, which are useful for studying the relationship between a binary response variable and explanatory variables, the standard error calculations are usually based on the second derivative of the log-likelihood function. On the other hand, an estimator of the Fisher information motivated by the fact that the expectation of the cross-product of the first derivatives of the log-likelihood function gives the Fisher information is expected to have similar asymptotic properties. These estimators of the Fisher information are closely related to the iterative algorithm used to obtain the maximum likelihood estimator. The average numbers of iterations needed to reach the maximum likelihood estimator are compared to find out which method is more efficient, and the variance estimators from each method are compared as estimators of the asymptotic variance.
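
A minimal numpy sketch contrasting the two information estimators described in the abstract for logistic regression: the one based on the second derivative of the log-likelihood (X'WX) and the one based on the outer product of the per-observation score vectors. The data are simulated and the iteration scheme is plain Newton-Raphson, not necessarily the algorithm the paper compares.

```python
# Logistic MLE via Newton-Raphson, then standard errors from two information estimators.
import numpy as np

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.2])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ beta_true)))).astype(float)

beta = np.zeros(2)
for _ in range(25):                               # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-(X @ beta)))
    hessian = X.T @ ((p * (1 - p))[:, None] * X)  # negative Hessian = X' W X
    score = X.T @ (y - p)
    beta = beta + np.linalg.solve(hessian, score)

p = 1 / (1 + np.exp(-(X @ beta)))
info_hessian = X.T @ ((p * (1 - p))[:, None] * X)
g = (y - p)[:, None] * X                          # per-observation score contributions
info_opg = g.T @ g                                # outer-product-of-gradients estimator

print("SE from Hessian-based information:", np.sqrt(np.diag(np.linalg.inv(info_hessian))))
print("SE from OPG-based information    :", np.sqrt(np.diag(np.linalg.inv(info_opg))))
```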

The Bayesian Approach of Software Optimal Release Time Based on Log Poisson Execution Time Model (포아송 실행시간 모형에 의존한 소프트웨어 최적방출시기에 대한 베이지안 접근 방법에 대한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.7
    • /
    • pp.1-8
    • /
    • 2009
  • This paper studies the decision problem of optimal release policies that arises after testing a software system in the development phase and transferring it to the user. Optimal software release policies that minimize the total average software cost of development and maintenance under the constraint of satisfying a software reliability requirement are generally adopted. Bayesian parametric inference for the log Poisson execution time model is carried out using Markov chain Monte Carlo tools (Gibbs sampling and the Metropolis algorithm). A numerical example using the T1 data set is illustrated, and the optimal software release time is estimated from both the maximum likelihood and Bayesian parametric estimates.
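
A minimal random-walk Metropolis sketch for the logarithmic Poisson execution time (Musa-Okumoto) model, assuming intensity lambda(t) = l0 / (1 + l0*theta*t) and mean value function m(t) = ln(1 + l0*theta*t) / theta. The priors, the Gibbs steps, and the T1 data set used in the paper are not reproduced; the failure times below are simulated and the flat prior on the log scale is purely illustrative.

```python
# Random-walk Metropolis for the Musa-Okumoto logarithmic Poisson model (simulated data).
import numpy as np

rng = np.random.default_rng(6)
T = 100.0
times = np.sort(rng.uniform(0, T, 40))          # stand-in failure times observed up to T

def log_lik(l0, theta):
    if l0 <= 0 or theta <= 0:
        return -np.inf
    lam = l0 / (1 + l0 * theta * times)          # intensity at each failure time
    m_T = np.log(1 + l0 * theta * T) / theta     # expected number of failures by time T
    return np.sum(np.log(lam)) - m_T             # NHPP log-likelihood

cur = np.log([0.5, 0.05])                        # work on (log l0, log theta)
cur_ll = log_lik(*np.exp(cur))
samples = []
for _ in range(20_000):
    prop = cur + rng.normal(0, 0.1, 2)           # random-walk proposal
    prop_ll = log_lik(*np.exp(prop))
    if np.log(rng.uniform()) < prop_ll - cur_ll: # Metropolis accept/reject
        cur, cur_ll = prop, prop_ll
    samples.append(np.exp(cur))

samples = np.array(samples[5000:])               # drop burn-in
print("posterior means (lambda0, theta):", samples.mean(axis=0))
```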