• Title/Summary/Keyword: probabilistic modeling

Search Results: 228

Stability Analysis of Embankment Overtopping by Initial Fluctuating Water Level (초기 변동수위를 고려한 제방 월류에 따른 안정성 분석)

  • Kim, Jin-Young; Kim, Tae-Heon; Kim, You-Seong; Kim, Jae-Hong
    • Journal of the Korean Geotechnical Society / v.31 no.8 / pp.51-62 / 2015
  • Geotechnical engineering has lacked reasonable evidence for embankment (or dam) overtopping, and conventional hydrologic design analysis has not provided such evidence for overflow. However, hydrologic design analysis using a Copula function demonstrates that dam overflow can occur when rainfall probability is estimated from 40 years of rainfall data together with the fluctuating water level of a dam. Hydrologic dam risk analysis relies on complex hydrologic analyses, in that probabilistic relationships must be established to quantify the various uncertainties associated with the modeling process and its inputs; systematic approaches to such uncertainty analysis have not yet been addressed. The initial water level of a dam is generally set at the normal pool level or the restricted flood level for stability assessment, but the overflow probability and the instability of the dam depend on a sensitivity analysis of this initial level. To estimate the initial level, a Copula function and the HEC-5 rainfall-runoff model are used to estimate posterior distributions of the model parameters. On the geotechnical side, slope stability analysis was performed to compare rapid drawdown with overtopping of a dam. The results show that slope instability under overtopping is more dangerous than under the rapid drawdown condition.
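
To make the copula step concrete, the sketch below (Python) couples two assumed marginals (Gumbel annual-maximum rainfall, Normal initial pool level) with a Gaussian copula and estimates a joint exceedance probability by Monte Carlo. All distributions, parameters, and thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical marginals: annual-maximum rainfall (Gumbel) and initial
# reservoir level (Normal). Parameters are illustrative only.
rain_marginal = stats.gumbel_r(loc=180.0, scale=45.0)   # mm/day
level_marginal = stats.norm(loc=195.0, scale=3.5)       # m (elevation)

rho, n = 0.6, 100_000      # assumed correlation, Monte Carlo sample size

# Gaussian copula: correlated normals -> uniforms -> marginal quantiles
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)
rain = rain_marginal.ppf(u[:, 0])
level = level_marginal.ppf(u[:, 1])

# Overtopping proxy: joint exceedance of a design rainfall and high pool level
p_joint = np.mean((rain > 300.0) & (level > 200.0))
print(f"Joint exceedance probability: {p_joint:.4f}")
```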

A Design and Implementation of Reliability Analyzer for Embedded Software using Markov Chain Model and Unit Testing (내장형 소프트웨어 마르코프 체인 모델과 단위 테스트를 이용한 내장형 소프트웨어 신뢰도 분석 도구의 설계와 구현)

  • Kwak, Dong-Gyu; Yoo, Chae-Woo; Choi, Jae-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.1-10 / 2011
  • As the requirements of embedded systems become more complicated, tools for analyzing the reliability of embedded software are needed. Probabilistic modeling is used to analyze software reliability; to apply it to embedded software that controls multiple devices, it must be specialized for the embedded domain. Moreover, existing reliability analyzers require the transition probability of each state to be measured separately and do not consider reusing a model once it has been built. In this paper, we propose a reliability analyzer for embedded software that uses an embedded-software Markov chain model and a unit testing tool. The embedded-software Markov chain model specializes the Markov chain model commonly used for reliability analysis to embedded software, and the unit testing tool has a host-target structure suited to embedded development environments. The tool analyzes reliability more easily than existing tools by automatically measuring the transition probabilities between units from unit test results. Because the software model is represented as an XML document, test results updated by the unit testing tool can be applied directly; a web-based interface and an SVN repository also let many developers access the tool easily. We demonstrate the usefulness of the analyzer by analyzing the reliability of an example.
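
As a rough illustration of Markov-chain reliability analysis of this kind, the sketch below implements a Cheung-style user-oriented reliability computation (an assumption; the paper's specialized embedded-software model may differ): unit-to-unit transition probabilities, as would be measured from unit testing, are combined with per-unit reliabilities in an absorbing Markov chain.

```python
import numpy as np

# States 0..2 are software units. Control moves between units with the
# probabilities in P (as would be measured from unit-test traces); each
# unit executes correctly with per-unit reliability R (from unit testing).
P = np.array([[0.0, 0.7, 0.2],    # P[i, j]: prob. control moves unit i -> j
              [0.0, 0.0, 0.8],
              [0.1, 0.0, 0.0]])
R = np.array([0.99, 0.98, 0.995])

# Absorbing-chain computation (Cheung-style): Q[i, j] = R[i] * P[i, j],
# expected-visit matrix N = (I - Q)^-1.
Q = R[:, None] * P
N = np.linalg.inv(np.eye(len(R)) - Q)

# A unit terminates correctly with its leftover transition mass times R[i];
# overall reliability is reaching correct termination from unit 0.
exit_correct = R * (1.0 - P.sum(axis=1))
reliability = N[0] @ exit_correct
print(f"Estimated software reliability: {reliability:.4f}")
```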

Fast Bayesian Inversion of Geophysical Data (지구물리 자료의 고속 베이지안 역산)

  • Oh, Seok-Hoon; Kwon, Byung-Doo; Nam, Jae-Cheol; Kee, Duk-Kee
    • Journal of the Korean Geophysical Society / v.3 no.3 / pp.161-174 / 2000
  • Bayesian inversion is a stable approach to inferring subsurface structure from the limited data of geophysical exploration. Because field data and the modeling process are finite and discrete, some uncertainty is inherent in the geophysical inverse process, and a probabilistic approach to inversion is therefore required. The Bayesian framework provides the theoretical basis for confidence and uncertainty analysis of the inference. Most Bayesian inversion, however, requires high-dimensional integration, and massive computations such as Monte Carlo integration are needed to carry it out. Although this makes the method suitable for geophysical problems, which are highly nonlinear, promptness and convenience are demanded in field work. In this study, a fast Bayesian inversion scheme is developed using Gaussian approximations for the observed data and the a priori information, and it is applied to model problems with electric well logging and dipole-dipole resistivity data. The covariance matrices are derived by geostatistical methods, and an optimization technique yields the maximum a posteriori estimate; in particular, the a priori information is evaluated by cross-validation. Finally, uncertainty analysis of the resistivity structure is performed by simulation from the a posteriori covariance matrix.
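
The Gaussian approximation makes the posterior available in closed form for a linearized problem. The sketch below shows that computation (the forward matrix, covariances, and data are synthetic stand-ins, not the well-logging or dipole-dipole setup): a MAP estimate, the a posteriori covariance, and uncertainty analysis by simulation from that covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear(ized) forward problem d = G m + noise with Gaussian data and
# prior covariances; everything here is a synthetic stand-in.
G = rng.normal(size=(20, 5))                 # forward / sensitivity matrix
m_true = np.array([1.0, -0.5, 0.3, 0.8, -1.2])
Cd = 0.05 * np.eye(20)                       # data covariance
Cm = 1.0 * np.eye(5)                         # a priori model covariance
m_prior = np.zeros(5)
d = G @ m_true + rng.multivariate_normal(np.zeros(20), Cd)

# Closed-form Gaussian posterior:
#   C_post = (G^T Cd^-1 G + Cm^-1)^-1
#   m_map  = C_post (G^T Cd^-1 d + Cm^-1 m_prior)
Cd_inv, Cm_inv = np.linalg.inv(Cd), np.linalg.inv(Cm)
C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
m_map = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ m_prior)

# Uncertainty analysis: simulate models from the a posteriori covariance
samples = rng.multivariate_normal(m_map, C_post, size=1000)
print("MAP estimate:", np.round(m_map, 3))
print("Posterior std:", np.round(samples.std(axis=0), 3))
```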


Relative Navigation Study Using Multiple PSD Sensor and Beacon Module Based on Kalman Filter (복수 PSD와 비콘을 이용한 칼만필터 기반 상대항법에 대한 연구)

  • Song, Jeonggyu; Jeong, Junho; Yang, Seungwon; Kim, Seungkeun; Suk, Jinyoung
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.46 no.3 / pp.219-229 / 2018
  • This paper proposes Kalman filter-based relative navigation algorithms for proximity operations such as rendezvous, docking, and cluster operation of spacecraft using PSD sensors and infrared beacon modules. Numerical simulations are performed for a comparative analysis of the performance of each relative navigation technique. A measurement model for the Kalman filter is constructed from the operating principle and optical model of the PSD sensor and the infrared beacon module used in the relative navigation algorithm. The extended Kalman filter (EKF) and the unscented Kalman filter (UKF) are used for probabilistic, measurement-fusion relative navigation that exploits kinematic and dynamic information on the translational and rotational motion of the satellites. The relative position and relative attitude estimation performance of the two filters is compared. In particular, simulations of various scenarios examine how performance changes with the number of PSD sensors and IR beacons on the target and chaser satellites.
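
A minimal EKF sketch in the same spirit is given below, assuming 2D constant-velocity relative motion and a range-and-bearing measurement as a stand-in for the PSD/beacon optical model; the dynamics, noise levels, and measurement model are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1

# State [x, y, vx, vy]: 2D constant-velocity relative motion (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
Q = 1e-4 * np.eye(4)                       # process noise (assumed)
R = np.diag([0.05**2, 0.01**2])            # range/bearing noise (assumed)

def h(x):                                  # nonlinear measurement model
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def H_jac(x):                              # Jacobian of h at x
    r2 = x[0]**2 + x[1]**2
    r = np.sqrt(r2)
    return np.array([[x[0]/r, x[1]/r, 0, 0],
                     [-x[1]/r2, x[0]/r2, 0, 0]])

def ekf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q          # predict
    H = H_jac(x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - h(x))                 # update with measurement residual
    P = (np.eye(4) - K @ H) @ P
    return x, P

x_est, P_est = np.array([10.0, 5.0, -0.1, 0.0]), np.eye(4)
z = h(np.array([9.9, 5.1, 0, 0])) + rng.multivariate_normal([0, 0], R)
x_est, P_est = ekf_step(x_est, P_est, z)
print("Updated relative position estimate:", x_est[:2])
```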

Simulation-Based Stochastic Markup Estimation System $(S^2ME)$ (시뮬레이션을 기반(基盤)으로 하는 영업이윤율(營業利潤率) 추정(推定) 시스템)

  • Yi, Chang-Yong; Kim, Ryul-Hee; Lim, Tae-Kyung; Kim, Wha-Jung; Lee, Dong-Eun
    • Proceedings of the Korean Institute of Building Construction Conference / 2007.11a / pp.109-113 / 2007
  • This paper introduces the Simulation-based Stochastic Markup Estimation System (S2ME) for estimating the optimum markup for a project. The system was designed and implemented to better represent the real-world system involved in construction bidding. Findings from an analysis of the assumptions used in previous quantitative markup estimation methods were incorporated to improve the accuracy and predictability of S2ME. The existing methods rest on four categories of assumption: (1) the number of competitors, and which firms they are, is known; (2) a fictitious "typical competitor" is assumed for ease of computation; (3) the ratio of bid price to cost estimate (B/C) is assumed to follow a normal distribution; and (4) the deterministic output of the probabilistic equations in existing models is assumed to be acceptable. These assumptions compromise prediction accuracy, because in practice bidders' bidding patterns in competitive bidding are randomized. To compensate, each simulation experiment randomly selects a bidding project from a pool of historical bidding records, and the probability of winning the competitive bid is computed from the profiles of the competitors appearing in the selected record. The expected profit and the probability of winning are calculated by randomly selecting a bidding record in each iteration of the simulation, under the assumption that the bidding patterns retained in the historical bidding database will recur. The existing deterministic computations were converted into a stochastic model using simulation modeling and analysis techniques as follows: (1) estimating the probability distribution functions of competitors' B/C ratios from the historical bidding database; (2) analyzing sensitivity to markup increments using both the normal distribution and the actual probability distribution obtained by distribution fitting; and (3) estimating the maximum expected profit and the optimum markup range. In the case study, the best-fitted probability distribution was estimated from the historical bidding database of competitors' bidding behavior, improving the reliability of the simulation output.
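
A toy version of the simulation loop might look like the sketch below: competitors' B/C ratios are resampled from a stand-in historical database, and the win probability and expected profit are estimated for each candidate markup. The distributions and the lowest-bid-wins rule are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the historical bidding DB: each entry holds one past
# project's competitor B/C ratios (bid price / cost estimate).
historical_bc = [rng.lognormal(mean=0.08, sigma=0.05,
                               size=rng.integers(3, 8))
                 for _ in range(200)]

markups = np.linspace(0.0, 0.25, 26)
n_iter = 5000
expected_profit = []

for m in markups:
    our_bc = 1.0 + m                       # our bid as a multiple of cost
    wins = 0
    for _ in range(n_iter):
        project = historical_bc[rng.integers(len(historical_bc))]
        wins += our_bc < project.min()     # win if we undercut every rival
    expected_profit.append((wins / n_iter) * m)

best = int(np.argmax(expected_profit))
print(f"Optimum markup ~ {markups[best]:.0%}, "
      f"expected profit {expected_profit[best]:.4f} of cost")
```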


Model Development Determining Probabilistic Ramp Merge Capacity Including Forced Merge Type (강제합류 형태를 포함한 확률적 연결로 합류용량 산정 모형 개발)

  • KIM, Sang Gu
    • Journal of Korean Society of Transportation / v.21 no.3 / pp.107-120 / 2003
  • Over the decades, many studies have dealt with the traffic characteristics and phenomena at merging areas. However, relatively few analytical techniques have been developed to evaluate traffic flow there, and ramp merging capacity in particular has rarely been estimated. This study focused on merging behaviors characterized by the relationship between the shoulder-lane flow and the on-ramp flow, and modeled these behaviors to determine ramp merge capacity using gap acceptance theory. In building the model, both ideal merges and forced merges were considered for ramp vehicles entering the gaps provided by the shoulder-lane flow. In addition, a model for the critical gap was proposed, because the critical gap is the most influential factor in determining merging capacity in the developed models. The models showed that merging capacity increases as the critical gap decreases and as the shoulder-lane volume increases. The contribution of this study is to model merging behaviors, including forced merging, so that ramp merging capacity can be determined more precisely. The findings help explain traffic phenomena and behaviors at merging areas, and may be applicable in setting the primary parameters of on-ramp control by considering the effects of ramp merging flow.
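
For reference, the classical gap-acceptance capacity formula under exponential headways (the Harders form) behaves as the abstract describes for the critical gap: capacity rises as the critical gap t_c falls. The sketch below evaluates it; note this is the textbook formula, not the paper's model (which also covers forced merges), and the t_c and follow-up time values are illustrative assumptions.

```python
import numpy as np

def merge_capacity(q_shoulder_vph, t_c=3.5, t_f=2.0):
    """Minor-stream (ramp) capacity in veh/h for a shoulder-lane flow in
    veh/h, with critical gap t_c and follow-up time t_f in seconds."""
    q = q_shoulder_vph / 3600.0            # shoulder-lane flow in veh/s
    return 3600.0 * q * np.exp(-q * t_c) / (1.0 - np.exp(-q * t_f))

for q_main in (600, 1200, 1800):
    for t_c in (3.0, 3.5, 4.0):
        print(f"shoulder {q_main} veh/h, t_c={t_c}s -> "
              f"capacity {merge_capacity(q_main, t_c=t_c):.0f} veh/h")
```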

Quantitative Microbial Risk Assessment Model for Staphylococcus aureus in Kimbab (김밥에서의 Staphylococcus aureus에 대한 정량적 미생물위해평가 모델 개발)

  • Bahk, Gyung-Jin; Oh, Deog-Hwan; Ha, Sang-Do; Park, Ki-Hwan; Joung, Myung-Sub; Chun, Suk-Jo; Park, Jong-Seok; Woo, Gun-Jo; Hong, Chong-Hae
    • Korean Journal of Food Science and Technology / v.37 no.3 / pp.484-491 / 2005
  • Quantitative microbial risk assessment (QMRA) analyzes the potential hazard of microorganisms to public health and offers a structured approach to assessing risks associated with microorganisms in foods. This paper addresses specific risk management questions associated with Staphylococcus aureus in kimbab, along with the improvement and dissemination of QMRA methodology. The QMRA model was developed by constructing four nodes along the retail-to-table pathway. A predictive microbial growth model and survey data were combined with probabilistic modeling to simulate the level of S. aureus in kimbab at the time of consumption. Because dose-response models are lacking, the final level of S. aureus in kimbab was used as a proxy for the potential hazard level; on that basis, the probability of contamination above this level and the consumption-time level of S. aureus in kimbab were estimated as 30.7% and 3.67 log CFU/g, respectively. Regression sensitivity results showed that the time-temperature history during retail storage was the most significant factor. These results suggest that temperature control below $10^{\circ}C$ is a critical control point in kimbab production for preventing the growth of S. aureus, and they show that QMRA is useful for evaluating the factors that influence potential risk and can be applied directly to risk management.
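
A Monte Carlo sketch of the retail-to-table idea follows: uncertain storage time and temperature are propagated through a simple growth model, and the fraction of servings above a hazard-proxy level is computed. The distributions, growth parameters, and threshold are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Uncertain inputs (illustrative distributions)
initial = rng.normal(2.0, 0.5, n)          # initial level, log CFU/g
temp = rng.triangular(5.0, 15.0, 30.0, n)  # storage temperature, deg C
hours = rng.uniform(1.0, 8.0, n)           # storage time before eating, h

# Square-root-type growth model: no growth below a minimum temperature
t_min, b = 7.0, 0.02
rate = np.where(temp > t_min, (b * (temp - t_min))**2, 0.0)  # log CFU/g/h
final = initial + rate * hours

threshold = 5.0                            # hazard-proxy level, log CFU/g
print(f"Mean level at consumption: {final.mean():.2f} log CFU/g")
print(f"P(level > {threshold} log CFU/g): {np.mean(final > threshold):.3%}")
```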

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami; Kim, Jaeseok; Kim, Gi-Nam; Heo, Jong-Uk; On, Byung-Won; Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach is to collect opinions from experts and scholars through online or offline surveys. However, this method is not always effective. Owing to cost, a large number of survey replies is seldom gathered, and for some social issues it is hard to find professionals at all, so the sample set is often small and may be biased. Furthermore, several experts may reach totally different conclusions about the same social issue, because each expert has a subjective point of view and a different background; in that case it is very hard to figure out what the current social issues are and which of them are really important. To overcome these shortcomings, in this paper we develop a prototype system that semi-automatically detects keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting the news articles and extracting their text, (2) identifying only the articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics corresponding to social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing the social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we obtain a set of topics, each consisting of relevant terms and their probability values. Given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each cluster is then labeled by human annotators, each topic label standing for a social keyword. For example, suppose there is a topic Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)} and a human annotator labels Topic1 "Unemployment Problem". From the social keyword alone it is non-trivial to understand what actually happened to the unemployment problem in our society; we have no idea of the detailed events. To tackle this, our matching algorithm computes the probability of a paragraph given a topic from (i) the topic's terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs, extract a set of topics with LDA, and assign each paragraph to the topic it best matches, so that each topic ends up with several best-matched paragraphs. For instance, suppose the topic "Unemployment Problem" has the best-matched paragraph "Up to 300 workers lost their jobs in XXX company at Seoul"; we can then grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, researchers can detect social issues easily and quickly. With this prototype system we have detected various social issues appearing in our society, and our experimental results show the effectiveness of the proposed methods. A proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
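
A compact sketch of the LDA-then-match idea is given below (Python with scikit-learn, on a toy corpus): LDA yields topic-term distributions, and each paragraph is scored against each topic by the log-likelihood of its words under the topic's term distribution. The scoring rule is a plausible stand-in for the paper's generative matching algorithm, not its exact formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the news articles
docs = [
    "unemployment layoff business jobs factory closed workers",
    "welfare pension elderly support budget policy",
    "unemployment workers lost jobs company seoul",
    "economy crisis budget policy welfare spending",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Topic-term probabilities (normalize LDA's unnormalized components)
topic_term = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
vocab = vec.get_feature_names_out()

def match_score(paragraph, k):
    """Log-likelihood of a paragraph's words under topic k."""
    counts = vec.transform([paragraph]).toarray()[0]
    return float(counts @ np.log(topic_term[k] + 1e-12))

para = "300 workers lost their jobs in a company at seoul"
scores = [match_score(para, k) for k in range(2)]
best = int(np.argmax(scores))
top_terms = [vocab[i] for i in topic_term[best].argsort()[::-1][:3]]
print(f"Paragraph best matches topic {best}: {top_terms}")
```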