• Title/Summary/Keyword: Probabilistic Evaluation

Capacity Credit and Reasonable ESS Evaluation of Power System Including WTG combined with Battery Energy Storage System (에너지저장장치와 결합한 WTG를 포함하는 전력계통의 Capacity Credit 평가 및 ESS 적정규모 평가방안)

  • Oh, Ungjin;Lee, Yeonchan;Choi, Jaeseok;Lim, Jintaek
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.6 / pp.923-933 / 2016
  • This paper proposes a new method for evaluating the Effective Load Carrying Capability (ELCC) and capacity credit (C.C.) of a power system that includes a Wind Turbine Generator (WTG) combined with a Battery Energy Storage System (BESS). A WTG can generate electric power only when its fuel (wind) is available, so fluctuating wind speed makes its output intermittent. From the viewpoint of power system reliability, this intermittency resembles the probabilistic on-off behavior caused by the mechanical availability of a conventional generator. High penetration of WTGs therefore complicates power system operation: numerous large-capacity WTGs pose risks to system adequacy, quality and stability, which is why WTG penetration is limited worldwide. Recently, it has been expected that a BESS installed at a wind farm may smooth the wind power fluctuation. This study develops a new method to assess how far WTG penetration can be extended when the WTG is combined with a BESS, and newly formulates the assessment equation for the capacity credit of a WTG combined with a BESS. A simulation program, called GNRL_ESS, was developed for this study. The paper presents various case studies of ELCC and capacity credit for a power system containing WTGs combined with BESSs, using a model system similar to the Jeju Island power system. The case studies demonstrate that the method can determine a reasonable BESS capacity for a WTG, the permissible penetration of WTGs combined with BESSs, and a reasonable WTG capacity for a BESS.
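As a rough illustration of the ELCC computation this abstract describes, the sketch below builds a capacity outage table for a few hypothetical units and finds, by bisection, the extra load a WTG+BESS (crudely modeled as a single derated unit) can carry at the original risk level. All unit data and the simplified reliability index are invented; this is not the paper's GNRL_ESS program.

```python
import itertools

def lole(units, loads):
    """Loss-of-load expectation, counted here as the expected number of
    load levels that cannot be served.  units: list of (capacity_MW, FOR)."""
    total = 0.0
    for states in itertools.product([0, 1], repeat=len(units)):
        prob, cap = 1.0, 0.0
        for (c, forced_out), up in zip(units, states):
            prob *= (1 - forced_out) if up else forced_out
            cap += c if up else 0.0
        total += prob * sum(1 for load in loads if load > cap)
    return total

def elcc(base_units, new_units, loads, tol=0.01):
    """ELCC of new_units: the uniform load increase that brings LOLE back
    to the base system's value, found by bisection (LOLE grows with load)."""
    target = lole(base_units, loads)
    lo, hi = 0.0, sum(c for c, _ in new_units)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if lole(base_units + new_units, [load + mid for load in loads]) <= target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

base = [(100, 0.05), (100, 0.05), (80, 0.08)]   # conventional units (MW, FOR)
wtg_bess = [(40, 0.60)]   # 40 MW WTG+BESS treated as one unit with high outage rate
loads = [150, 180, 200, 220]                    # sampled load levels, MW
cc = elcc(base, wtg_bess, loads) / 40           # capacity credit as a fraction of rating
```

With these invented numbers the capacity credit lands strictly between 0 and 1: the derated unit carries some extra load, but far less than its nameplate rating.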

Probabilistic analysis of tunnel collapse: Bayesian method for detecting change points

  • Zhou, Binghua;Xue, Yiguo;Li, Shucai;Qiu, Daohong;Tao, Yufan;Zhang, Kai;Zhang, Xueliang;Xia, Teng
    • Geomechanics and Engineering / v.22 no.4 / pp.291-303 / 2020
  • The deformation of the rock surrounding a tunnel manifests due to the stress redistribution within the surrounding rock. By observing the deformation of the surrounding rock, we can not only determine the stability of the surrounding rock and supporting structure but also predict the future state of the surrounding rock. In this paper, we used grey system theory to analyse the factors that affect the deformation of the rock surrounding a tunnel. The results show that the 5 main influencing factors are longitudinal wave velocity, tunnel burial depth, groundwater development, surrounding rock support type and construction management level. Furthermore, we used seismic prospecting data, preliminary survey data and excavated section monitoring data to establish a neural network learning model to predict the total amount of deformation of the surrounding rock during tunnel collapse. Subsequently, the probability of a change in deformation in each predicted section was obtained by using a Bayesian method for detecting change points. Finally, through an analysis of the distribution of the change probability and a comparison with the actual situation, we deduced the survey mark at which collapse would most likely occur. Surface collapse suddenly occurred when the tunnel was excavated to this predicted distance. This work further proved that the Bayesian method can accurately detect change points for risk evaluation, enhancing the accuracy of tunnel collapse forecasting. This research provides a reference and a guide for future research on the probability analysis of tunnel collapse.
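The change-point step can be illustrated with a minimal Bayesian posterior over a single change in the mean of a deformation series (Gaussian noise with known sigma, uniform prior over the change location). The data are invented; the paper's model is richer than this sketch.

```python
import math

def change_point_posterior(data, sigma=1.0):
    """Posterior probability that a single change in mean occurs after index k,
    assuming Gaussian noise and a uniform prior over k; segment means are set
    to their maximum-likelihood (sample) values."""
    n = len(data)
    log_like = []
    for k in range(1, n):            # change after position k = 1 .. n-1
        ll = 0.0
        for seg in (data[:k], data[k:]):
            mu = sum(seg) / len(seg)
            ll += sum(-(x - mu) ** 2 / (2 * sigma ** 2) for x in seg)
        log_like.append(ll)
    m = max(log_like)                # subtract max for numerical stability
    weights = [math.exp(v - m) for v in log_like]
    total = sum(weights)
    return [w / total for w in weights]

# invented deformation readings: mean shifts after the 4th point
series = [0.1, -0.2, 0.0, 0.1, 2.1, 1.9, 2.2, 2.0]
post = change_point_posterior(series)
best_k = post.index(max(post)) + 1   # most probable change location
```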

Markov's Modeling for Screening Strategies for Colorectal Cancer

  • Barouni, Mohsen;Larizadeh, Mohammad Hassan;Sabermahani, Asma;Ghaderi, Hossien
    • Asian Pacific Journal of Cancer Prevention / v.13 no.10 / pp.5125-5129 / 2012
  • Economic decision models are increasingly used to assess medical interventions. Advances in this field are mainly due to the enhanced processing capacity of computers, the availability of specialized software, and refined mathematical techniques. We estimated the incremental cost-effectiveness of ten colon cancer screening strategies, as well as no screening, incorporating quality of life, noncompliance, and data on the costs and benefits of chemotherapy in Iran. We used a Markov model to measure the costs and quality-adjusted life expectancy of a 50-year-old average-risk Iranian without screening and with screening by each test, and tested the model with data from the Ministry of Health and the published literature. Costs were considered from the perspective of a health insurance organization, inflated to 2011, with Iranian Rials converted into US dollars. We focused on three tests among the 10 strategies currently used for population screening in some Iranian provinces (Kerman, Golestan, Mazandaran, Ardabil, and Tehran): a low-sensitivity guaiac fecal occult blood test performed annually, a fecal immunochemical test performed annually, and colonoscopy performed every 10 years. These strategies reduced the incidence of colorectal cancer by 39%, 60% and 76%, and mortality by 50%, 69% and 78%, respectively, compared with no screening, generating incremental cost-effectiveness ratios (ICERs) of $9067, $654 and $8700 per quality-adjusted life year (QALY), respectively. Sensitivity analyses were conducted to assess the influence of various parameters on the economic evaluation of screening, and the results were sensitive in probabilistic sensitivity analysis. Colonoscopy every ten years yielded the greatest net health value. Screening for colon cancer is economical and cost-effective at conventional levels of willingness to pay (WTP).
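The ICER figures quoted above are simply the extra cost of a strategy divided by the extra health it buys. A minimal sketch with hypothetical per-person figures (not the paper's data):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of the new strategy over its comparator."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# invented per-person figures (USD, QALYs): screening vs. no screening
screening_icer = icer(cost_new=1200.0, effect_new=10.5,
                      cost_old=900.0, effect_old=10.0)  # $ per QALY gained
```

A strategy is then judged cost-effective when its ICER falls below the willingness-to-pay threshold per QALY.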

On Unicast Routing Algorithm Based on Estimated Path for Delay Constrained Least Cost (경로 추정 기반의 지연시간을 고려한 저비용 유니캐스트 라우팅 알고리즘)

  • Kim, Moon-Seong;Bang, Young-Cheol;Choo, Hyun-Seung
    • Journal of Internet Computing and Services / v.8 no.1 / pp.25-31 / 2007
  • The development of efficient Quality of Service (QoS) routing algorithms in high-speed networks is very difficult, since divergent services require various quality conditions. If the QoS parameter of concern measures the delay on each link, the routing algorithm obtains the Least Delay (LD) path; if it measures the link cost, it calculates the Least Cost (LC) path. The Delay Constrained Least Cost (DCLC) path problem, which mixes the LD and LC objectives, has been shown to be NP-hard. In the DCLC problem, the cost of the LD path is relatively higher than that of the LC path, and the delay of the LC path is relatively higher than that of the LD path. In this paper, we propose an algorithm for the DCLC problem based on an estimated path and investigate its performance. It employs a new parameter that is a probabilistic combination of cost and delay. We have performed an empirical evaluation comparing our proposed algorithm with DCUR in various network situations.
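One simple way to realize a combined cost-delay parameter (a stand-in illustration, not necessarily the authors' exact estimator) is to run Dijkstra on a mixed edge weight a*cost + (1-a)*delay for several values of a and keep the cheapest path that satisfies the delay bound. The toy graph is invented.

```python
import heapq

def shortest_path(graph, src, dst, weight):
    """Dijkstra over weight(cost, delay); graph: {u: [(v, cost, delay), ...]}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, c, dl in graph.get(u, []):
            nd = d + weight(c, dl)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def path_metrics(graph, path):
    """Total (cost, delay) along a node path."""
    cost = delay = 0.0
    for u, v in zip(path, path[1:]):
        for w, c, dl in graph[u]:
            if w == v:
                cost, delay = cost + c, delay + dl
                break
    return cost, delay

def dclc(graph, src, dst, delay_bound, steps=10):
    """Sweep the mixing weight from pure least-delay (a=0) to pure least-cost
    (a=1); keep the cheapest delay-feasible path found."""
    best = None
    for i in range(steps + 1):
        a = i / steps
        p = shortest_path(graph, src, dst, lambda c, d, a=a: a * c + (1 - a) * d)
        cost, delay = path_metrics(graph, p)
        if delay <= delay_bound and (best is None or cost < best[0]):
            best = (cost, delay, p)
    return best

# invented graph: s-a-t is cheap but slow, s-b-t is costly but fast
g = {"s": [("a", 1, 5), ("b", 4, 1)],
     "a": [("t", 1, 5)],
     "b": [("t", 4, 1)]}
best = dclc(g, "s", "t", delay_bound=4)   # only s-b-t meets the bound
```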

Ingestion Dose Evaluation of Korean Based on Dynamic Model in a Severe Accident

  • Kwon, Dahye;Hwang, Won-Tae;Jae, Moosung
    • Journal of Radiation Protection and Research / v.43 no.2 / pp.50-58 / 2018
  • Background: In terms of Level 3 probabilistic safety assessment (Level 3 PSA), the ingestion of food exposed to radioactive materials is important for assessing the intermediate- and long-term radiological dose. Because the ingestion dose depends considerably on the agricultural and dietary characteristics of each country, assessment results may lose reliability if a foreign country's characteristics are used. Thus, this study evaluates and analyzes the ingestion dose of Koreans during a severe accident by fully considering the available agricultural and dietary characteristics of Korea. Materials and Methods: This study uses COMIDA2, a program based on a dynamic food chain model, with parameters set to Korean characteristics so that the ingestion dose specific to Koreans can be evaluated. The results were analyzed by accident date and food category with regard to ¹³⁷Cs. Results and Discussion: The dose and the contribution of each food category differed distinctly with the accident date. In particular, the ingestion dose during the first and second years varied considerably with the accident date. After the third year, however, the effect of foliar absorption was negligible, and the doses followed the order of the root uptake rates of the food categories. Conclusion: In this study, the agricultural and dietary characteristics of Korea were analyzed, and the ingestion dose of Koreans during a severe accident was evaluated using COMIDA2. By considering these inherent characteristics, the results of this study should contribute significantly to the reliability of Level 3 PSA.
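A full dynamic food chain model such as COMIDA2 is far beyond a snippet, but the underlying ingestion-dose arithmetic (intake × concentration × dose coefficient, with concentrations declining over the years) can be sketched. The diet figures and the single effective half-life are invented simplifications; 1.3e-8 Sv/Bq is the standard ICRP adult ingestion dose coefficient for ¹³⁷Cs.

```python
import math

CS137_DOSE_COEFF = 1.3e-8   # Sv per Bq ingested (adult, Cs-137, ICRP value)

def annual_ingestion_dose(foods, year, eff_half_life_y=2.0):
    """Toy dose estimate for one year after deposition.
    foods: list of (annual intake kg, initial concentration Bq/kg).
    Concentrations decline with a single effective half-life, a crude
    stand-in for the long-term (root uptake) phase of a food chain model."""
    decay = math.exp(-math.log(2) * year / eff_half_life_y)
    intake_bq = sum(intake * conc * decay for intake, conc in foods)
    return intake_bq * CS137_DOSE_COEFF   # Sv

# invented Korean-style diet entries, e.g. rice, vegetables, fruit
diet = [(60.0, 50.0), (90.0, 20.0), (15.0, 100.0)]
doses = [annual_ingestion_dose(diet, y) for y in range(5)]
```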

Ranked Web Service Retrieval by Keyword Search (키워드 질의를 이용한 순위화된 웹 서비스 검색 기법)

  • Lee, Kyong-Ha;Lee, Kyu-Chul;Kim, Kyong-Ok
    • The Journal of Society for e-Business Studies / v.13 no.2 / pp.213-223 / 2008
  • The efficient discovery of services from a large-scale collection has become an important issue [7, 24]. We studied a syntactic, rather than semantic, method for Web service discovery. We regarded service discovery as a retrieval problem over proprietary XML formats, i.e., the service descriptions stored in a registry DB, and we modeled services and queries as probabilistic values and devised similarity-based retrieval techniques. The benefits of our approach are as follows. First, our system supports ranked service retrieval by keyword search. Second, it considers both the UDDI data and the WSDL definitions of services at query evaluation time. Last, our technique can be easily implemented on an off-the-shelf DBMS and can exploit the maintenance features a DBMS provides.
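Ranked keyword retrieval over service descriptions can be sketched with a standard query-likelihood (language-model) scorer using Jelinek-Mercer smoothing. This stands in for, and is not claimed to be, the paper's probabilistic model; the service texts are invented.

```python
import math
from collections import Counter

def query_likelihood(query, docs, lam=0.5):
    """Rank documents by smoothed query likelihood P(q|d): a Jelinek-Mercer
    mix of document-level and collection-level term frequencies."""
    tokenized = {d: text.lower().split() for d, text in docs.items()}
    coll = Counter(t for toks in tokenized.values() for t in toks)
    coll_len = sum(coll.values())
    scores = {}
    for d, toks in tokenized.items():
        tf = Counter(toks)
        score = 0.0
        for q in query.lower().split():
            p = lam * tf[q] / len(toks) + (1 - lam) * coll[q] / coll_len
            score += math.log(p) if p > 0 else float("-inf")
        scores[d] = score
    return sorted(scores, key=scores.get, reverse=True)

# invented service descriptions standing in for UDDI/WSDL text
services = {
    "svc1": "weather forecast service returns temperature by city",
    "svc2": "currency exchange rate lookup service",
    "svc3": "city map and weather overlay service",
}
ranking = query_likelihood("weather city", services)
```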

FAULT DETECTION COVERAGE QUANTIFICATION OF AUTOMATIC TEST FUNCTIONS OF DIGITAL I&C SYSTEM IN NPPS

  • Choi, Jong-Gyun;Lee, Seung-Jun;Kang, Hyun-Gook;Hur, Seop;Lee, Young-Jun;Jang, Seung-Cheol
    • Nuclear Engineering and Technology / v.44 no.4 / pp.421-428 / 2012
  • Analog instrumentation and control systems in nuclear power plants have recently been replaced with digital systems for safer and more efficient operation. Digital instrumentation and control systems have adopted various fault-tolerant techniques that help the system correctly and safely perform its required functions regardless of the presence of faults. Each fault-tolerant technique has a different inspection period, from real-time monitoring to monthly testing, and covers a different range of faults. A digital instrumentation and control system therefore adopts multiple barriers, consisting of various fault-tolerant techniques, to increase the total fault detection coverage. Even though these techniques are adopted to ensure and improve system safety, their effects on system safety have not yet been properly considered in most probabilistic safety analysis models, so an evaluation method that can describe these features of digital instrumentation and control systems is needed. Several issues must be considered in estimating the fault coverage of such a system, and two of them are addressed in this work: first, quantifying the fault coverage of each fault-tolerant technique implemented in the system; and second, excluding the duplicated effect of fault-tolerant techniques implemented simultaneously at each level of the system's hierarchy, since a fault occurring in the system might be detected by more than one technique. A fault injection experiment was used to obtain the exact relations between faults and the multiple barriers of fault-tolerant techniques, and it was applied to a bistable processor of a reactor protection system.
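The two estimation issues named above (per-technique coverage, and removing double counting across overlapping techniques) reduce to set arithmetic over fault-injection results. A minimal sketch with invented fault sets:

```python
def coverage(injected, detections):
    """Fault detection coverage per technique and combined.
    detections: {technique: set of detected fault ids}.  Taking the union
    removes double counting when several techniques catch the same fault."""
    per_technique = {t: len(d) / injected for t, d in detections.items()}
    union = set().union(*detections.values())
    return per_technique, len(union) / injected

# invented fault-injection outcome: 1000 faults, two overlapping barriers
per, total = coverage(1000, {
    "watchdog":  set(range(0, 400)),     # detects faults 0-399
    "self_test": set(range(300, 700)),   # overlaps the watchdog on 300-399
})
```

Summing the per-technique coverages would give 0.8, but the combined coverage is only 0.7 because 100 faults are caught twice.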

The Evaluation of Non-Destructive Formulas on Compressive Strength Using the Reliability Based on Probability (확률 기반의 신뢰도를 이용한 비파괴 압축강도 추정식 평가)

  • Park, Jin-Woo;Choo, Jin-Ho;Park, Gwang-Rim;Hwang, In-Baek;Shin, Yong-Suk
    • Journal of the Korea institute for structural maintenance and inspection / v.19 no.4 / pp.25-34 / 2015
  • Proposed equations are often used to calculate concrete compressive strength from non-destructive testing in precision safety diagnosis. Most of these equations were proposed abroad and carry errors when estimating concrete compressive strength in Korea. The proposed equations therefore have low reliability for estimating compressive strength, which significantly affects the reliability of precision safety diagnosis. Nevertheless, because this problem occurs only in some localized cases, the reliability can be increased through a number of experiments. This paper proposes an assessment formula for reliability with respect to core compressive strength in order to increase that reliability. The usefulness of the proposed assessment formula is verified with probabilistic techniques, and it is compared against graphs of concrete compressive strength produced by the existing proposed equations. The present method was found to be very efficient.
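A probability-based reliability for a strength formula can be expressed as the fraction of specimens whose predicted strength lies within a tolerance of the core (measured) strength. This is a generic sketch, not the paper's assessment formula; the tolerance and data are invented.

```python
def reliability(predicted, measured, tol=0.15):
    """Fraction of specimens whose non-destructive prediction falls within
    +/- tol (relative) of the core compressive strength."""
    hits = sum(1 for p, m in zip(predicted, measured) if abs(p - m) <= tol * m)
    return hits / len(measured)

pred = [21.0, 24.0, 28.0, 12.0]   # strengths from a non-destructive formula, MPa
core = [20.0, 25.0, 35.0, 19.0]   # core test (measured) strengths, MPa
r = reliability(pred, core)       # two of four predictions are within 15%
```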

Proposal of the Modified Management Criteria Value in Earth Retaining Structure using Measured Data (계측자료를 이용한 흙막이 구조물의 수정된 관리기준치 제안)

  • Kim, Jueng-Kyu;Park, Heung-Gyu;Nam, Jin-Won
    • Journal of the Korea institute for structural maintenance and inspection / v.20 no.1 / pp.95-103 / 2016
  • The absolute value management method, which evaluates safety by comparing measurement results against management criteria, is widely used in most earth retaining construction. The management criteria are thus the standard for evaluating site safety; in other words, they are a direct factor in the evaluation. This means that site safety cannot be assured if the management criteria are not appropriate, even when the measurement system is perfectly set up. However, many field technicians do not rely on the current management criteria and recognize the need for their revision. Therefore, this study examines the necessity of that revision; the selection and application of optimum criteria were performed based on measured earth retaining deflections and probabilistic theory, using the absolute value management method. The details are tabulated.
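One plausible reading of deriving a revised criterion probabilistically from measured data (not the paper's exact procedure) is a normal-distribution percentile over field readings: exceedances of the resulting limit then flag genuinely unusual behavior rather than everyday scatter. The readings below are invented.

```python
import statistics

def management_criterion(deflections, z=1.645):
    """Revised management criterion from field measurements: assuming the
    deflections are roughly normal, mean + z*std covers about 95% of
    readings for z = 1.645 (one-sided)."""
    mu = statistics.mean(deflections)
    sd = statistics.stdev(deflections)
    return mu + z * sd

readings = [12.1, 13.4, 11.8, 12.9, 13.1, 12.5, 14.0, 12.2]   # wall deflections, mm
limit = management_criterion(readings)
```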

Probability-based Deep Learning Clustering Model for the Collection of IoT Information (IoT 정보 수집을 위한 확률 기반의 딥러닝 클러스터링 모델)

  • Jeong, Yoon-Su
    • Journal of Digital Convergence / v.18 no.3 / pp.189-194 / 2020
  • Recently, various clustering techniques have been studied to efficiently handle data generated by heterogeneous IoT devices. However, existing clustering techniques are not suitable for mobile IoT devices because they focus on statically dividing networks. This paper proposes a probabilistic deep-learning-based dynamic clustering model for collecting and analyzing information on IoT devices using edge networks. The proposed model establishes subnets by applying the frequencies of the probabilistically collected attribute values to deep learning. The established subnets are used to group information extracted from seeds into hierarchical structures and to improve the speed and accuracy of dynamic clustering for IoT devices. Performance evaluation showed that, compared with the existing model, the proposed model improved data processing time by 13.8% on average, reduced server overhead by 10.5% on average, and improved the accuracy of extracting IoT information from servers by 8.7% on average.
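The frequency-based subnet idea can be caricatured without the deep-learning part: group each device by the attribute value it reports most often, and send devices with no sufficiently dominant value to a re-clustering pool. All names, values, and the threshold are invented.

```python
from collections import Counter, defaultdict

def probabilistic_clusters(readings, threshold=0.5):
    """Group device readings into subnets keyed by each device's most
    frequent attribute value; devices whose dominant value falls below the
    frequency threshold go into an 'unstable' group for re-clustering."""
    by_dev = defaultdict(list)
    for dev, value in readings:
        by_dev[dev].append(value)
    clusters = defaultdict(list)
    for dev, vals in by_dev.items():
        value, count = Counter(vals).most_common(1)[0]
        if count / len(vals) >= threshold:
            clusters[value].append(dev)
        else:
            clusters["unstable"].append(dev)
    return dict(clusters)

# invented (device, observed edge attribute) readings
data = [("d1", "edge-A"), ("d1", "edge-A"), ("d1", "edge-B"),
        ("d2", "edge-B"), ("d2", "edge-B"),
        ("d3", "edge-A"), ("d3", "edge-B")]
groups = probabilistic_clusters(data, threshold=0.6)
```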