• Title/Summary/Keyword: Benefit Estimation

Arrival Delay Estimation in Bottleneck Section of Gyeongbu Line (철도선로용량 부족에 따른 지체발생 연구 - 경부선 서울~금천구청 구간을 대상으로)

  • Lee, Jang-Ho
    • Journal of the Korean Society for Railway
    • /
    • v.18 no.4
    • /
    • pp.374-390
    • /
    • 2015
  • This research shows the relationship between the number of trains and the probability of arrival delay and suggests a way to estimate the benefits of improved punctuality in a bottleneck section of the Gyeongbu Line. The arrival delays of high-speed and conventional trains were estimated using KORAIL train operation data. Linear regression models for the probability of arrival delay by train type are presented. The delay probabilities were affected more by the number of conventional trains than by the number of high-speed trains. For the empirical analysis, a project to increase capacity in the Seoul~Geumcheon-gu Office section was tested. The benefits of improved punctuality were estimated at 4.2~4.5 billion Korean won per year. Although this research has some limitations, it can help evaluate more precisely the feasibility of capacity-expansion projects in bottleneck sections.
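
A minimal sketch of the kind of per-train-type regression the abstract describes, assuming entirely hypothetical daily traffic counts and delay shares rather than the paper's KORAIL data:

```python
# Sketch: regress the share of delayed trains on the daily counts of
# high-speed and conventional trains (all numbers are hypothetical).
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: [high-speed trains/day, conventional trains/day]
X = np.array([[100, 120], [110, 125], [120, 135], [130, 150], [140, 160]])
# Hypothetical share of trains with an arrival delay on each day
p_delay = np.array([0.08, 0.10, 0.14, 0.19, 0.24])

model = LinearRegression().fit(X, p_delay)
print(model.coef_)       # per-train-type sensitivity of the delay probability
print(model.intercept_)
```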

Estimation of lapse rate of variable annuities by using Cox proportional hazard model (Cox 비례위험모형을 이용한 변액연금 해지율의 추정)

  • Kim, Yumi;Lee, Hangsuck
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.4
    • /
    • pp.723-736
    • /
    • 2013
  • The importance of the lapse rate is increasing sharply due to the introduction of the Cash Flow Pricing system, non-refund-of-reserve insurance policies, and IFRS (International Financial Reporting Standards) to the Korean insurance market. Research on lapse rates has mainly focused on simple data analysis, regression analysis, and the like. However, lapse rates can be studied with survival analysis and are well explained in terms of several covariates by the Cox proportional hazards model. Guaranteed minimum benefits embedded in variable annuities call for a more rigorous statistical analysis of lapse rates. Hence, this paper analyzes data on variable-annuity policyholders using the Cox proportional hazards model. The key policyholder variables that influence the lapse rate are payment method, premium, lapse insured to term insured, reserve-GMXB ratio, and age.
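
A minimal sketch of a Cox proportional hazards fit in the spirit of the paper, assuming the Python lifelines package and entirely hypothetical policyholder records (the covariate names echo those the abstract lists):

```python
# Sketch: fit lapse timing on policyholder covariates with a Cox PH model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_lapse": [12, 30, 45, 60, 24, 50, 36, 18],  # observed duration
    "lapsed":          [1, 1, 0, 0, 1, 0, 1, 1],          # 1 = lapsed, 0 = censored
    "monthly_payment": [1, 0, 1, 1, 1, 0, 1, 0],          # 1 = monthly, 0 = lump sum
    "premium":         [50, 120, 200, 80, 60, 150, 90, 70],
    "age":             [35, 52, 47, 60, 29, 55, 41, 38],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_lapse", event_col="lapsed")
cph.print_summary()  # hazard ratios show each covariate's effect on lapse risk
```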

Federated Filter Approach for GNSS Network Processing

  • Chen, Xiaoming;Vollath, Ulrich;Landau, Herbert
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • v.1
    • /
    • pp.171-174
    • /
    • 2006
  • A large number of service providers in countries all over the world have established GNSS reference station networks in recent years and today use network software to provide a correction stream to users as a routine service. In current GNSS network processing, all the geometry-related information, such as ionospheric-free carrier phase ambiguities from all stations and satellites, tropospheric effects, orbit errors, and receiver and satellite clock errors, is estimated in one centralized Kalman filter. Although this approach provides an optimal solution to the estimation problem, the processing time increases cubically with the number of reference stations in the network. Until now, a single personal computer with a 3.06 GHz Pentium CPU could process data in real time only from a network of no more than 50 stations. In order to process data from larger networks in real time and to lower the computational load, a federated filter approach can be considered. The main benefit of this approach is that each local filter runs with a reduced number of states, so the computation time for the whole system increases only linearly with the number of local sensors, significantly reducing the computational load compared to the centralized approach. This paper presents the technical aspects and a performance analysis of the federated filter approach. Test results show that for a network of 100 reference stations, the centralized approach needs approximately 60 hours on a 3.06 GHz computer to process 24 hours of network data, including ionospheric modeling and network ambiguity fixing, which means the network cannot be run in real time. With the federated filter approach, less than 1 hour is needed, 66 times faster than the centralized approach, while the availability and reliability of network processing remain at the same high level.
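
A minimal sketch of the fusion step that makes a federated filter scale linearly: each local filter contributes its own estimate and covariance of the common states, and the master combines them by inverse-covariance (information) weighting. Dimensions and values are hypothetical:

```python
# Sketch: information-weighted fusion of local filter outputs.
import numpy as np

def fuse(estimates, covariances):
    """Fuse local (x_i, P_i) pairs: P = (sum P_i^-1)^-1, x = P * sum(P_i^-1 x_i)."""
    info = sum(np.linalg.inv(P) for P in covariances)
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ sum(np.linalg.inv(P) @ x
                            for x, P in zip(estimates, covariances))
    return x_fused, P_fused

# Two hypothetical local filters estimating a shared 2-state vector
x1, P1 = np.array([1.02, -0.48]), np.diag([0.04, 0.09])
x2, P2 = np.array([0.97, -0.52]), np.diag([0.06, 0.05])
print(fuse([x1, x2], [P1, P2]))
```

Because the master only inverts small per-filter covariances, total cost grows with the number of local filters rather than cubically with the number of stations.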

Investigation on the Main Exposure Sources of Nanomaterials for Nanohazards Assessment (나노위해성 관리를 위한 나노물질 주요 배출원 파악)

  • Kim, Young-Hun;Park, Jun-Su;Kim, He-Ro;Lee, Jeong-Jin;Bae, Eun-Joo;Lee, Su-Seung;Kwak, Byoung-Kyu;Choi, Kyung-Hee;Park, Kwang-Sik;Yi, Jong-Heop
    • Environmental Analysis Health and Toxicology
    • /
    • v.23 no.4
    • /
    • pp.257-265
    • /
    • 2008
  • Nanotechnology is emerging as one of the key technologies of the 21st century and is expected to broaden applicability across a wide range of sectors that can benefit the public and improve industrial competitiveness. Consumer products containing nanomaterials are already available in markets, including coatings, computers, clothing, cosmetics, sports equipment, and medical devices. Recently, the Institute of Occupational Medicine in the UK published an occupational hygiene review of nanoparticles from the viewpoint of nanotoxicity. It reported that exposure control in the workplace is a very important issue for exposure assessment, but that no proper methods are available to measure the extent of workplace exposure to nanoparticles. Therefore, to estimate exposure to nanomaterials, we adopt a material-balance methodology similar to that used in the TRI (Toxic Release Inventory) for hazardous chemicals. To use this methodology, the exposure sources of nanomaterials must be determined first. Herein, we therefore investigated the main sources and processes of exposure to nanomaterials by conducting a survey. The results can be used to define and assess nanohazard sources.
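
A minimal sketch of the TRI-style material balance the abstract points to, with entirely hypothetical mass figures; the mass not accounted for by products or collected waste is treated as a potential release:

```python
# Sketch: material balance for a nanomaterial process line (hypothetical kg).
input_kg   = 1000.0  # nanomaterial purchased or produced
in_product = 920.0   # mass incorporated into shipped products
waste_kg   = 60.0    # mass captured as collected waste

release_kg = input_kg - in_product - waste_kg  # unaccounted-for mass
print(f"Estimated release: {release_kg:.1f} kg")  # 20.0 kg
```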

Corrupted Region Restoration based on 2D Tensor Voting (2D 텐서 보팅에 기반 한 손상된 텍스트 영상의 복원 및 분할)

  • Park, Jong-Hyun;Toan, Nguyen Dinh;Lee, Guee-Sang
    • The KIPS Transactions:PartB
    • /
    • v.15B no.3
    • /
    • pp.205-210
    • /
    • 2008
  • A new approach is proposed for the restoration of corrupted regions and segmentation in natural text images. The challenge is to fill in corrupted regions on the basis of color feature analysis with second-order symmetric stick tensors. We show how feature analysis can benefit from tensor voting over chromatic and achromatic components. The proposed method is applied to text images corrupted by various types of noise. First, we decompose an image into chromatic and achromatic components. Second, the selected feature vectors are analyzed with second-order symmetric stick tensors, and the tensors are refined through voting with neighboring voters while the corrupted regions are restored. Last, mode estimation and segmentation are performed by adaptive mean shift and a separate clustering method, respectively. The approach is fully automatic, making it easy to fill in corrupted regions containing completely different structures and surrounding backgrounds. Applications of the proposed method include the restoration of damaged text images and the removal of superimposed noise or streaks. The results show that the proposed approach is efficient and robust in restoring and segmenting corrupted text images.
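
A minimal sketch of the second-order symmetric stick tensor that such voting operates on, with hypothetical orientations and vote weights; the eigen-decomposition separates stick (curve-like) from ball (junction-like) saliency, which is what guides the fill-in:

```python
# Sketch: build and accumulate 2D stick tensors, then read off saliencies.
import numpy as np

def stick_tensor(theta):
    """Second-order symmetric stick tensor T = v v^T for v = (cos t, sin t)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Accumulate votes from two hypothetical neighboring voters
T = stick_tensor(0.1) + 0.8 * stick_tensor(0.2)

eigvals, _ = np.linalg.eigh(T)          # ascending eigenvalues
stick_saliency = eigvals[1] - eigvals[0]  # strength of a dominant orientation
ball_saliency = eigvals[0]                # orientation-free (junction) evidence
print(stick_saliency, ball_saliency)
```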

Estimating Ancillary Benefits of GHG Reduction Using Contingent Valuation Method (온실가스 감축의 부수적 가치 추정)

  • Kim, Chung-Sil;Lee, Sang-Ho;Jung, Sang-Ok;Yeo, Jun-Ho;Lee, Sun-Seok
    • Journal of agriculture & life science
    • /
    • v.44 no.3
    • /
    • pp.89-97
    • /
    • 2010
  • In the contingent valuation method (CVM) survey, we employed double-bounded discrete choice (DBDC) questions to investigate willingness to pay (WTP). The estimation results for the bivariate logit model show that respondents are willing to pay 329,256 won per year. The model with covariates suggests that the covariate effects help describe behavioral or preference tendencies. Double-bounded models are more efficient than single dichotomous choice models because yes-no or no-yes answers yield clear bounds on WTP.
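
A minimal sketch of how double-bounded answers bound WTP, using hypothetical bid amounts; this interval structure is the source of the efficiency gain over single dichotomous choice:

```python
# Sketch: map a respondent's two answers to an interval for their WTP.
def wtp_bounds(bid1, bid_up, bid_down, ans1, ans2):
    if ans1 and ans2:        # yes-yes: WTP at least the higher follow-up bid
        return (bid_up, float("inf"))
    if ans1 and not ans2:    # yes-no: bid1 <= WTP < bid_up
        return (bid1, bid_up)
    if not ans1 and ans2:    # no-yes: bid_down <= WTP < bid1
        return (bid_down, bid1)
    return (0.0, bid_down)   # no-no: WTP below the lower follow-up bid

# Hypothetical bids in won: first bid 300,000; follow-ups 450,000 / 150,000
print(wtp_bounds(300_000, 450_000, 150_000, True, False))  # (300000, 450000)
```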

Estimation of Weights in Water Management Resilience Index Using Principal Component Analysis(PCA) (주성분 분석(PCA)을 이용한 물관리 탄력성 지수의 가중치 산정)

  • Park, Jung Eun;Lim, Kwang Suop;Lee, Eul Rae
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2016.05a
    • /
    • pp.583-583
    • /
    • 2016
  • A composite index that aggregates diverse evaluation indicators is a useful tool for setting priorities in water management policy and monitoring policy outcomes. The weights expressing the importance of each indicator can affect the final index value, and many methods exist to determine them, including Data Envelopment Analysis (DEA), the Benefit of the Doubt approach (BOD), the Unobserved Component Model (UCM), the Budget Allocation Process (BAP), the Analytic Hierarchy Process (AHP), and Conjoint Analysis (CA). In this study, among these weighting methods, the statistical method of Principal Component Analysis (PCA) was used to calculate weights for the Water Management Resilience Index (WMRI) proposed by Park et al. (2016), and the results were compared with the earlier results obtained with equal weights. The WMRI consists of three sub-indices, the vulnerability of water management under natural conditions, the robustness of existing water resources infrastructure, and the redundancy of water crisis adaptation strategies, comprising 13, 11, and 7 indicators, respectively. It was applied to 117 mid-sized watersheds classified into main-stem basins downstream of multipurpose dams (category 1), tributaries where water supply and flow regulation are not possible (category 2), and tributaries where they are possible (category 3). The 3, 5, and 3 principal components extracted for the respective sub-indices were found to explain 76.4%, 71.2%, and 63.2% of the total variance; the eigenvectors and eigenvalues of each sub-index's principal components were computed and the weight of each indicator was calculated. Compared with equal weighting, the PCA-based weights changed the vulnerability sub-index by 1.9%, the robustness sub-index by 1.9%, the redundancy sub-index by 2.1%, and the overall WMRI by 0.4%, confirming the adequacy of the results presented by Park et al. PCA is one statistical approach to objective weighting and can be applied when computing various water-policy indices; applying other weighting methods in the future would allow analysis of the sensitivity of the index results to each method and of their respective strengths and weaknesses.
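
A minimal sketch of one common way to turn PCA eigenvectors and eigenvalues into indicator weights, assuming synthetic data in place of the 117-watershed indicator matrix; the paper's exact weighting rule may differ:

```python
# Sketch: PCA-based weights for a composite index (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(117, 13))  # 117 regions x 13 vulnerability indicators

pca = PCA(n_components=3).fit(X)
print(pca.explained_variance_ratio_.sum())  # share of total variance retained

# Weight each indicator by its squared loadings, averaged over components in
# proportion to the variance each component explains, then normalise to 1.
loadings_sq = pca.components_**2                     # shape (3, 13)
w = pca.explained_variance_ratio_ @ loadings_sq      # shape (13,)
w /= w.sum()
print(w)  # one weight per indicator
```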

A Study on the Building Height Estimation and Accuracy Using Unmanned Aerial Vehicles (무인비행장치기반 건축물 높이 산출 및 정확도에 관한 연구)

  • Lee, Seung-weon;Kim, Min-Seok;Seo, Dong-Min;Baek, Seung-Chan;Hong, Won-Hwa
    • Journal of the Architectural Institute of Korea Planning & Design
    • /
    • v.36 no.2
    • /
    • pp.79-86
    • /
    • 2020
  • In order to accommodate the increase in the urban population driven by government-led national planning and economic growth, many buildings such as houses and commercial buildings were supplied. Although the building law was revised to manage these buildings, some have been enlarged or reconstructed without building permits for the sake of economic benefit. On-site surveys have been conducted to manage such buildings, but they consume considerable manpower. To address this, satellite imagery and manned aircraft have been used, but these are uneconomical and have long renewal cycles. In addition, they do not capture building height directly; height can be judged from building shadows, but each building must then be calculated individually, which is a limitation. In this study, spatial information was constructed using an unmanned aerial vehicle with low manpower consumption, building heights were calculated from it, and the accuracy was verified by comparing the calculated heights with the building register and measured values.
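
A minimal sketch of the height computation that underlies UAV photogrammetry surveys, assuming hypothetical surface-model (DSM) and terrain-model (DTM) rasters rather than the study's data:

```python
# Sketch: building height as the DSM-minus-DTM difference over a footprint.
import numpy as np

dsm = np.array([[55.2, 55.4], [55.3, 55.5]])  # roof elevations (m), hypothetical
dtm = np.array([[30.1, 30.1], [30.2, 30.2]])  # ground elevations (m), hypothetical

heights = dsm - dtm                       # per-pixel height above ground
print(f"Estimated building height: {heights.mean():.2f} m")
```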

Appropriate Adjustment according to the Supply and Demand Status and Trend of Doctors (의사 인력의 수급 현황과 추세에 따른 적정 조정)

  • Yun Hwa Jung;Ye-Seul Jang;Hyunkyu Kim;Eun-Cheol Park;Sung-In Jang
    • Health Policy and Management
    • /
    • v.33 no.4
    • /
    • pp.457-478
    • /
    • 2023
  • Background: This study aims to contribute to the appropriate adjustment of the physician workforce by analyzing its distribution, supply and demand, and projections. Methods: This study used medical personnel data from the Ministry of Health and Welfare, population trend data from the National Statistical Office, and health insurance benefit data from the National Health Insurance Service. With 2021 as the base year, we compared the number of doctors actually supplied with the number of doctors demanded according to medical utilization by gender and age across 250 regions. Logistic regression and scenario analyses were performed to estimate the future medical workforce, considering the demand for doctors under the future demographic structure, the medical school quota, and the retirement rate. Results: In 186 regions the supply of doctors was below average, and the average regional ratio of supplied to demanded doctors was 62.1%. Conclusion: To raise the number of active doctors nationwide to at least 80% of demand, 7,756 doctors must be allocated. The demand for doctors is estimated to rise to 1.492 times the current level by 2059 and then decline. Taking the medical school quota and the retirement rate into account, the projected number of doctors is expected to rise to 1.349 times the current level by 2050 and then decrease.
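
A minimal sketch of the regional supply-to-demand ratio and the shortfall to the 80% threshold the abstract discusses, with entirely hypothetical regional figures:

```python
# Sketch: per-region supply/demand ratio and doctors needed to reach 80%.
supply = {"Region A": 620, "Region B": 450, "Region C": 910}    # active doctors
demand = {"Region A": 1000, "Region B": 800, "Region C": 1000}  # demand by usage

for r in supply:
    ratio = supply[r] / demand[r]
    shortfall = max(0, round(0.8 * demand[r] - supply[r]))
    print(f"{r}: {ratio:.1%} of demand met, {shortfall} more doctors for 80%")
```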

Real-time private consumption prediction using big data (빅데이터를 이용한 실시간 민간소비 예측)

  • Seung Jun Shin;Beomseok Seo
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.1
    • /
    • pp.13-38
    • /
    • 2024
  • As economic uncertainty has increased recently due to COVID-19, there is a growing need to quickly grasp private consumption trends, which directly reflect the economic situation of private economic agents. This study proposes a method for estimating private consumption in real time by comprehensively utilizing big data as well as existing macroeconomic indicators. In particular, it aims to improve the accuracy of private consumption estimation by comparing various machine learning methods capable of fitting ultra-high-dimensional big data. The empirical analysis demonstrates that when the number of covariates, including big data, is large, selecting variables in advance and using them in the model fit improves private consumption prediction performance. In addition, because including big data greatly improves predictive performance after COVID-19, the benefit of big data, which reflects new information in a timely manner, is shown to increase when economic uncertainty is high.
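
A minimal sketch of a screen-then-fit pipeline of the kind the abstract describes, pre-selecting covariates with the lasso before the final fit; the data are synthetic and the paper's actual method may differ:

```python
# Sketch: lasso variable pre-selection followed by a refit on the survivors.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 500))  # many candidate big-data indicators
y = X[:, :5] @ np.array([2.0, -1.0, 1.0, 0.5, -0.5]) + rng.normal(size=200)

lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # covariates that survive the screen
print(len(selected), "covariates selected")

model = LinearRegression().fit(X[:, selected], y)  # refit on the subset
print(model.score(X[:, selected], y))              # in-sample fit of the refit
```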