• Title/Summary/Keyword: Adequacy decision


The Study on the Elaboration of Technology Valuation Model and the Adequacy of Volatility based on Real Options (실물옵션 기반 기술가치 평가모델 정교화와 변동성 유효구간에 관한 연구)

  • Sung, Tae-Eung;Lee, Jongtaik;Kim, Byunghoon;Jun, Seung-Pyo;Park, Hyun-Woo
    • Journal of Korea Technology Innovation Society / v.20 no.3 / pp.732-753 / 2017
  • Recently, when evaluating technology values in the fields of biotechnology, pharmaceuticals, and medicine, there has been a growing need to estimate those values in consideration of the period and cost required for future commercialization. The existing discounted cash flow (DCF) method is limited in that it cannot account for consecutive investments and does not reflect the probabilistic nature of the commercialization cost of technology-applied products. Since the value of technology and investment should be treated as an opportunity value, and information on decision-making for resource allocation should be taken into account, it is regarded as desirable to apply the concept of real options. To reflect the characteristics of the target technology's business model in the concept of volatility, which is usually defined in terms of stock price when evaluating a firm's value, we need to consider the 'continuity of stock price (relatively minor change)' and the 'positive' condition. Thus, as discussed in much of the literature, it is necessary to investigate the relationship among volatility, underlying asset value, and commercialization cost in the Black-Scholes model for estimating technology value based on real options. This study is expected to provide a more elaborate real options model by mathematically deriving whether the ratio of the present value of the underlying asset to the present value of the commercialization cost, which reflects the uncertainty in the option pricing model (OPM), falls into a "no action taken" (NAT) region under certain threshold conditions, and by presenting the estimation logic for option values according to the observed variables (input values).
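
Since the abstract centers on the Black-Scholes formulation and the ratio of underlying asset value to commercialization cost, a minimal sketch may make the mechanics concrete. The function and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def real_option_value(S, K, sigma, T, r):
    """Black-Scholes call value used as a real-option proxy:
    S = present value of the underlying asset (expected cash flows),
    K = present value of the commercialization cost,
    sigma = volatility of the underlying asset value,
    T = time to commercialization (years), r = risk-free rate."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative: how the option value responds to the S/K ratio.
for ratio in (0.5, 0.8, 1.0, 1.5):
    v = real_option_value(S=ratio * 100, K=100, sigma=0.3, T=3, r=0.03)
    print(f"S/K = {ratio:.1f} -> option value = {v:.2f}")
```

As S/K shrinks, the option value decays toward zero, which parallels the "no action taken" intuition the abstract describes.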

Comparison of Deterministic and Probabilistic Approaches through Cases of Exposure Assessment of Child Products (어린이용품 노출평가 연구에서의 결정론적 및 확률론적 방법론 사용실태 분석 및 고찰)

  • Jang, Bo Youn;Jeong, Da-In;Lee, Hunjoo
    • Journal of Environmental Health Sciences / v.43 no.3 / pp.223-232 / 2017
  • Objectives: In response to increased interest in the safety of children's products, a risk management system is being prepared through exposure assessment of hazardous chemicals. To estimate exposure levels, risk assessors are using deterministic and probabilistic statistical approaches, along with commercially available Monte Carlo simulation-based tools (MCTools), to efficiently support calculation of probability density functions. This study was conducted to analyze and discuss the usage patterns and problems associated with these two approaches, and with the MCTools used in the probabilistic cases, by reviewing research reports related to exposure assessment for children's products. Methods: We collected six research reports on exposure and risk assessment of children's products and summarized the exposure dose and concentration results estimated through deterministic and probabilistic approaches, together with the corresponding underlying distributions. We focused on the mechanisms of, and differences between, the MCTools used for fitting probabilistic distributions, in order to validate the simulation adequacy in detail. Results: The exposure dose and concentration estimates from the deterministic approaches ranged from 0.19 to 3.98 times the results from the probabilistic approach. For the probabilistic approach, the lognormal, Student's t, and Weibull distributions were most frequently used as underlying distributions of the input parameters. However, we could not examine the reasons for the selection of each distribution because of the absence of test statistics. In addition, there were cases where a discrete probability distribution model was fitted as the underlying distribution for continuous variables such as body weight. To find the cause of these abnormal simulations, we applied the two MCTools used across all reports and traced the improper usage paths of the MCTools. Conclusions: For transparent and realistic exposure assessment, it is necessary to 1) establish standardized guidelines for the proper use of the two statistical approaches, including notes per MCTool, and 2) consider the development of a new software tool with proper configurations and features specialized for risk assessment. Such guidelines and software will make exposure assessment more user-friendly, consistent, and rapid in the future.
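
The deterministic-versus-probabilistic contrast the abstract analyzes can be sketched in a few lines; the distributions and parameter values below are invented for illustration, not taken from the reviewed reports.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic: single point estimates for each input parameter.
conc, intake, bw = 5.0, 0.02, 15.0           # mg/kg, kg/day, kg body weight
det_dose = conc * intake / bw                # mg/kg-bw/day

# Probabilistic: sample each input from an assumed underlying distribution
# (lognormal for concentration and intake, normal for body weight).
n = 100_000
conc_s   = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)
intake_s = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=n)
bw_s     = rng.normal(loc=15.0, scale=2.0, size=n).clip(min=1.0)
prob_dose = conc_s * intake_s / bw_s

print(f"deterministic dose: {det_dose:.4f}")
print(f"probabilistic mean: {prob_dose.mean():.4f}, "
      f"95th percentile: {np.percentile(prob_dose, 95):.4f}")
```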

Actual Disinfection and Sterilization Control in Korean Healthcare Facilities (국내 의료기관의 소독과 멸균 관리 실태)

  • Jeong, Sun Young;Choi, Jeong Hwa;Kim, Eun Kyoung;Kim, Su Mi;Son, Hee;Cho, Nan Hyoung;Choi, Ji Youn;Park, Eun Suk;Park, Jin Hee;Lee, Ji Young;Choi, Soon Im;Woo, Jin Ha;Kim, Og Son
    • Journal of Korean Academy of Fundamentals of Nursing / v.21 no.4 / pp.392-402 / 2014
  • Purpose: This study was done to investigate the status of disinfection and sterilization in healthcare facilities. Methods: A survey of 193 Korean healthcare facilities was conducted from February 8 to March 7, 2013. Data were analyzed using descriptive statistics, the chi-square test, Fisher's exact test, one-way ANOVA, and the Scheffé test with SPSS WIN 18.0. Results: Of the healthcare facilities, 93.2% had specific guidelines for disinfection/sterilization, but only 47.9% had a decision-making committee on disinfection/sterilization and less than half (42.7%) conducted regular monitoring of actual practices, while 83.9% had established procedures for recovery in case of problems with the disinfection process and 89.0% kept records and archives of disinfection practices. The cleaning process, the selection of chemical disinfectants, and the disinfection and sterilization process were found to be inadequate in some healthcare facilities. Perception scores for adequacy, out of a possible 10, were 8.10 for medical instruments, 7.20 for environmental disinfection, and 8.45 for sterilizer management. Conclusion: Compared to larger institutions, smaller healthcare facilities had less effective disinfection and sterilization management systems, and some facilities showed inadequate practices for medical equipment and general sterilization. Greater academic and state-level support is recommended for smaller facilities in order to establish a sound system-wide management structure.
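
The tests named in Methods map onto standard routines; below is a minimal sketch with made-up example data, using SciPy's chi-square and one-way ANOVA functions (the Scheffé post-hoc test is not part of SciPy and is omitted here).

```python
import numpy as np
from scipy import stats

# Hypothetical counts: facilities with/without a disinfection committee,
# by facility size (small, medium, large).
table = np.array([[20, 35], [30, 25], [42, 41]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# One-way ANOVA on hypothetical adequacy perception scores by facility size.
small  = [7.1, 7.8, 6.9, 7.5]
medium = [8.0, 8.3, 7.9, 8.1]
large  = [8.4, 8.7, 8.2, 8.6]
f, p = stats.f_oneway(small, medium, large)
print(f"ANOVA F = {f:.2f}, p = {p:.3f}")
```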

Gabor Wavelet Analysis for Face Recognition in Medical Asset Protection (의료자산보호에서 얼굴인식을 위한 가보 웨이블릿 분석)

  • Jun, In-Ja;Chung, Kyung-Yong;Lee, Young-Ho
    • The Journal of the Korea Contents Association / v.11 no.11 / pp.10-18 / 2011
  • Medical asset protection is important in every medical institution, especially because of the law on private medical record protection, and face recognition for this protection is one of the most interesting and challenging problems. In recognizing human faces, the distortion of face images can be caused by changes in pose, illumination, expression, and scale. In particular, the locations and directions of light sources make recognition difficult. In order to overcome these problems, this paper presents an analysis of Gabor wavelet coefficients, covering kernel selection, feature points, and kernel size, for face recognition in CCTV surveillance. The proposed method consists of three analyses: the first selects the kernel from images, the second is a coefficient analysis for kernel sizes, and the last measures changes in Gabor kernel size according to changes in image size. Face recognition is performed using the coefficients obtained from these experiments, achieving a success rate of 97.3%. Ultimately, this paper presents an empirical application to verify the adequacy and validity of the proposed method. Accordingly, the satisfaction and quality of services in the face recognition area will be improved.
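
As a rough sketch of the kernel-analysis step, OpenCV can generate Gabor kernels across sizes and orientations and convolve them with a face image; all parameter values and the file path here are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

# Load a face image in grayscale (path is illustrative).
img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
if img is None:  # fall back to dummy data so the sketch runs standalone
    img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)

# Build a bank of Gabor kernels over several sizes and orientations.
responses = []
for ksize in (9, 15, 21):                         # candidate kernel sizes
    for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations
        kernel = cv2.getGaborKernel(
            (ksize, ksize), sigma=4.0, theta=theta,
            lambd=10.0, gamma=0.5, psi=0.0)
        # Filter response; its coefficients serve as face features.
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))

# Stack responses into a per-pixel feature vector.
features = np.stack(responses, axis=-1)
print("feature tensor shape:", features.shape)
```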

Study on Variability of WTP Estimates by the Estimation Methods using Dichotomous Choice Contingent Valuation Data (양분선택형 조건부가치측정(CV) 자료의 추정방법에 따른 지불의사금액의 변동성 연구)

  • Shin, Youngchul
    • Environmental and Resource Economics Review / v.25 no.1 / pp.1-25 / 2016
  • This study investigated the variability of WTP estimates (i.e., mean or median) under ad hoc assumptions of specific parametric probability distributions (normal, logistic, lognormal, and exponential) when estimating a WTP function from dichotomous choice CV data on mortality risk reduction. From the perspective of policy decisions, the variability of these WTP estimates is intolerable compared with that of the Turnbull nonparametric estimation method, which is free from ad hoc distributional assumptions. Turnbull nonparametric estimation avoids the kind of misspecification bias that arises from assuming a specific parametric distribution. Furthermore, the WTP estimates from Turnbull nonparametric estimation are robust, in that similar estimates are elicited from single or double dichotomous choice CV data, and statistically significant WTP estimates can be obtained even where parametric estimation methods fail. When WTP estimates vary considerably across parametric estimation methods and no criterion of model adequacy is available, the mean WTP from Turnbull nonparametric estimation can serve as a robust estimate free from ad hoc assumptions, avoiding controversy from the perspective of policy decisions.
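
A minimal sketch of the Turnbull lower-bound mean WTP estimator the abstract advocates, using hypothetical bid data; the pooling step enforces the non-increasing acceptance curve the estimator requires.

```python
import numpy as np

# Hypothetical dichotomous-choice CV data: bid levels (ascending) and the
# share of respondents accepting (willing to pay) each bid.
bids   = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
accept = np.array([0.82, 0.70, 0.74, 0.40, 0.15])  # non-monotone at 50

# Simple iterative pooling of adjacent violators (equal-weight variant)
# so the acceptance (survival) curve is non-increasing in the bid.
s = accept.copy()
changed = True
while changed:
    changed = False
    for i in range(len(s) - 1):
        if s[i] < s[i + 1]:
            s[i] = s[i + 1] = (s[i] + s[i + 1]) / 2.0
            changed = True

# Turnbull lower-bound mean WTP: value each interval's probability mass at
# its lower endpoint, plus the top bid times the remaining survival.
survival = np.concatenate(([1.0], s))        # S(0) = 1
grid = np.concatenate(([0.0], bids))
mass = survival[:-1] - survival[1:]          # P(bid_k <= WTP < bid_{k+1})
lb_mean = float(np.sum(grid[:-1] * mass) + grid[-1] * survival[-1])
print(f"Turnbull lower-bound mean WTP = {lb_mean:.2f}")
```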

Evaluation of Basin-Specific Water Use through Development of Water Use Assessment Index (이수평가지수 개발을 통한 유역별 물이용 특성 평가)

  • Baeck, Seung Hyub;Choi, Si Jung
    • Journal of Wetlands Research / v.15 no.3 / pp.367-380 / 2013
  • In this study, sub-indicators and thematic mid-level indexes for evaluating water use characteristics were selected through historical data analysis and factor analysis, and organized into a thematic framework. An integrated index was then developed to evaluate the water use characteristics of a watershed. Using the developed index, water use characteristics were assessed for 812 standard basins (excluding North Korea) using data from 1990 to 2007 obtained from the relevant agencies. A sensitivity analysis was conducted to determine the proper combination of normalization and weighting methods. To increase the objectivity of the developed index, damage-history indicators were excluded from the analysis; to ensure its reliability, results with and without consideration of damage history were compared. The index was also applied to 2008 data for the Gangwon region to verify its field applicability. Through this validation process, the adequacy of the indicator selection and the calculation method was confirmed. The results were analyzed in terms of the spatial and temporal vulnerability of each basin's water use, and can be applied to areas such as priority setting for water projects and policies, mitigation of a basin's vulnerable components, and support for establishing countermeasures by providing the relevant information.
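
The normalization-and-weighting machinery behind such a composite index can be sketched briefly; the indicator names, values, and weights below are invented for illustration.

```python
import numpy as np

# Hypothetical sub-indicator matrix: rows = basins, columns = indicators
# (e.g., water supply ratio, demand density, drought frequency).
X = np.array([
    [0.75, 120.0, 3.0],
    [0.60, 300.0, 5.0],
    [0.90,  80.0, 1.0],
])

# Min-max normalization so every indicator lies on [0, 1].
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Invert indicators where larger means less vulnerable; here assume
# column 0 (supply ratio) is "higher is better".
Xn[:, 0] = 1.0 - Xn[:, 0]

# Weighted linear aggregation into a single water-use vulnerability index.
weights = np.array([0.4, 0.35, 0.25])   # illustrative weights
index = Xn @ weights
for i, v in enumerate(index):
    print(f"basin {i}: vulnerability index = {v:.3f}")
```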

A Study on the Classification of Unstructured Data through Morpheme Analysis

  • Kim, SungJin;Choi, NakJin;Lee, JunDong
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.105-112 / 2021
  • In the era of big data, interest in data is exploding. In particular, the development of the Internet and social media has led to the creation of new data, realizing the era of big data and artificial intelligence and opening a new chapter in convergence technology. There is also growing demand for analyzing data that conventional programs could not handle. In this paper, an analysis model was designed and verified for the classification of unstructured data, which is frequently required in the era of big data. We crawled thesis abstracts, main keywords, and sub-keywords from DBpia, built a database using KoNLP's data dictionary, and tokenized words through morpheme analysis. Nouns were then extracted using KAIST's 9-category part-of-speech classification system, TF-IDF values were generated, and an analysis dataset was created by combining the training data with the Y values. Finally, the adequacy of classification was measured by applying three analysis algorithms (random forest, SVM, decision tree) to the generated dataset. The classification model proposed in this paper can be usefully applied in various fields, such as civil complaint classification and other text-related analyses, in addition to thesis classification.
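
The pipeline described here (TF-IDF features fed to random forest, SVM, and decision tree classifiers) is straightforward to sketch with scikit-learn; the documents and labels below are toy stand-ins for the crawled DBpia data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical tokenized abstracts (nouns joined by spaces) and labels.
docs = ["data classification model", "wetland water index basin",
        "option value volatility asset", "face recognition wavelet kernel"]
labels = ["cs", "env", "econ", "cs"]

# TF-IDF features over the noun tokens.
X = TfidfVectorizer().fit_transform(docs)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                          random_state=0)

# The three algorithms named in the abstract.
for clf in (RandomForestClassifier(random_state=0),
            SVC(kernel="linear"),
            DecisionTreeClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{type(clf).__name__}: accuracy = {acc:.2f}")
```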

Extended Adaptation Database Construction for Oriental Medicine Prescriptions Based on Academic Information (학술 정보 기반 한의학 처방을 위한 확장 적응증 데이터베이스 구축)

  • Lee, So-Min;Baek, Yeon-Hee;Song, Sang-Ho;CHRISTOPHER, RETITI DIOP EMANE;Han, Xuan-Zhong;Hong, Seong-Yeon;Kim, Ik-Su;Lim, Jong-Tea;Bok, Kyoung-Soo;TRAN, MINH NHAT;NGUYEN, QUYNH HOANG NGAN;Kim, So-Young;Kim, An-Na;Lee, Sang-Hun;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.21 no.8 / pp.367-375 / 2021
  • The quality of medical care can be defined along four dimensions: effectiveness, efficiency, adequacy, and scientific-technical quality. To manage the scientific-technical aspect, medical institutions disseminate the latest knowledge annually in the form of refresher education. However, one-off refresher courses have an obvious limit in how quickly the latest knowledge can be distributed to the clinical field. If intelligent information processing technologies such as big data and artificial intelligence are applied to the medical field, they can overcome the limitation of conducting research with only a small amount of information. In this paper, we construct databases through which existing oriental medicine prescription adaptations (indications) can be extended. To do this, we collect, store, manage, and analyze information related to oriental medicine from domestic and international journals. We design a processing and analysis technique for oriental medicine evidence research data to build a database of extended adaptations for oriental medicine prescriptions. The results can be used as basic content for evidence-based prescription information in oriental medicine-related decision support services.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.

The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.

Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.

Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
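
A minimal sketch of the gravity-model trip distribution and the SELINK link adjustment factor described above; the zone data, the negative-exponential friction function, and the ground count are invented for illustration.

```python
import numpy as np

# Hypothetical zones: productions P, attractions A, and travel costs c_ij.
P = np.array([100.0, 200.0, 150.0])          # truck trip productions
A = np.array([120.0, 180.0, 150.0])          # truck trip attractions
cost = np.array([[1.0, 3.0, 5.0],
                 [3.0, 1.0, 2.0],
                 [5.0, 2.0, 1.0]])

# Friction factor curve (assumed negative-exponential form).
F = np.exp(-0.5 * cost)

# Gravity model: T_ij = P_i * (A_j * F_ij) / sum_k (A_k * F_ik).
weights = A * F
T = P[:, None] * weights / weights.sum(axis=1, keepdims=True)
print("trip table:\n", T.round(1))

# SELINK-style adjustment: scale the zones using a selected link by the
# ratio of the link's ground count to its assigned volume.
assigned_volume = T[0, 1] + T[1, 0]          # trips assigned to the link
ground_count = 95.0                          # observed count (hypothetical)
adjustment = ground_count / assigned_volume
print(f"link adjustment factor = {adjustment:.2f}")
```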
