• Title/Summary/Keyword: Markov

Search Results: 2,414

Enhancing the radar-based mean areal precipitation forecasts to improve urban flood predictions and uncertainty quantification

  • Nguyen, Duc Hai;Kwon, Hyun-Han;Yoon, Seong-Sim;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.123-123
    • /
    • 2020
  • The present study aims to correct radar-based mean areal precipitation forecasts to improve urban flood predictions, and to quantify the uncertainty contributed by each stage of the process. To this end, a long short-term memory (LSTM) network is used to reproduce three-hour mean areal precipitation (MAP) forecasts from the quantitative precipitation forecasts (QPFs) of the McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE). The Gangnam urban catchment in Seoul, South Korea, was selected as the case study. A database was established from 24 heavy rainfall events, 22 grid points of the MAPLE system, and observed MAP values estimated from five ground rain gauges of the KMA Automatic Weather System. The corrected MAP forecasts were fed into a coupled 1D/2D model to predict water levels and the associated inundation areas. The results indicate the viability of the proposed framework for generating three-hour MAP forecasts and urban flood predictions. To analyze the uncertainty contributions of each source in the process, Bayesian Markov Chain Monte Carlo (MCMC) with the delayed rejection and adaptive Metropolis algorithm is applied, and the contributions of the QPE input, the QPF MAP source, the LSTM-corrected MAP input, and the coupled model are discussed.

  • PDF
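The delayed rejection and adaptive Metropolis (DRAM) sampler named in the abstract can be sketched as follows. This is a minimal one-dimensional illustration on a toy Gaussian posterior, with a single delayed-rejection stage and a simplified acceptance rule (the full DRAM second-stage correction term is omitted for brevity); all names and values are illustrative, not the study's implementation.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, s0=1.0, seed=0):
    """Adaptive Metropolis with one delayed-rejection stage (simplified DRAM sketch)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples, scale = [], s0
    for i in range(n_iter):
        # First-stage proposal
        y1 = x + scale * rng.normal()
        lp1 = log_post(y1)
        if np.log(rng.random()) < lp1 - lp:
            x, lp = y1, lp1
        else:
            # Delayed rejection: retry with a narrower proposal
            y2 = x + 0.5 * scale * rng.normal()
            lp2 = log_post(y2)
            if np.log(rng.random()) < lp2 - lp:
                x, lp = y2, lp2
        samples.append(x)
        # Adapt the proposal scale from the accumulated sample spread
        if i > 100:
            scale = 2.4 * np.std(samples) + 1e-6
    return np.array(samples)

# Toy target: standard normal posterior, chain started far from the mode
chain = adaptive_metropolis(lambda x: -0.5 * x**2, x0=5.0)
```

The second-stage proposal is deliberately narrower, which is what lets the chain recover from an overly bold first-stage jump.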

Gas dynamics and star formation in dwarf galaxies: the case of DDO 210

  • Oh, Se-Heon;Zheng, Yun;Wang, Jing
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.2
    • /
    • pp.75.4-75.4
    • /
    • 2019
  • We present a quantitative analysis of the relationship between the gas dynamics and star formation history of DDO 210, an irregular dwarf galaxy in the local Universe. We perform profile analysis of a high-resolution neutral hydrogen (HI) data cube of the galaxy taken with the large Very Large Array (VLA) survey LITTLE THINGS, using a newly developed algorithm based on a Bayesian Markov Chain Monte Carlo (MCMC) technique. The complex HI structure and kinematics of the galaxy are quantitatively decomposed into multiple kinematic components: 1) bulk motions, which most likely follow the underlying circular rotation of the disk; 2) non-circular motions deviating from the bulk motions; and 3) kinematically cold and warm components with narrower and wider velocity dispersions, respectively. The decomposed kinematic components are then spatially correlated with the distribution of stellar populations obtained from the color-magnitude diagram (CMD) fitting method. The velocity dispersions of the cold and warm gas components show negative and positive correlations with the surface star formation rates of the populations with ages of < 40 Myr and 100-400 Myr, respectively. The cold gas is most likely associated with the young stellar populations, whose stellar feedback could in turn influence the warm gas. The age difference between the populations showing the correlations indicates the time delay of this stellar feedback.

  • PDF
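The paper's profile decomposition uses a Bayesian MCMC technique; as a rough stand-in, the core idea of separating a kinematically cold (narrow) and warm (broad) component in a single line-of-sight HI profile can be illustrated with a least-squares double-Gaussian fit. All values below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, m1, s1, a2, m2, s2):
    """Sum of a narrow (cold) and a broad (warm) Gaussian component."""
    g = lambda a, m, s: a * np.exp(-0.5 * ((v - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

v = np.linspace(-100, 100, 400)               # velocity axis [km/s]
truth = (1.0, 0.0, 8.0, 0.4, 5.0, 30.0)       # cold + warm components
rng = np.random.default_rng(1)
profile = two_gauss(v, *truth) + 0.02 * rng.normal(size=v.size)

popt, _ = curve_fit(two_gauss, v, profile, p0=(0.8, 0.0, 10.0, 0.3, 0.0, 40.0))
cold_sigma, warm_sigma = sorted((abs(popt[2]), abs(popt[5])))
```

A full Bayesian treatment would sample the same parameter space with MCMC and return posterior distributions rather than point estimates.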

A novel Metropolis-within-Gibbs sampler for Bayesian model updating using modal data based on dynamic reduction

  • Ayan Das;Raj Purohit Kiran;Sahil Bansal
    • Structural Engineering and Mechanics
    • /
    • v.87 no.1
    • /
    • pp.1-18
    • /
    • 2023
  • The paper presents a Bayesian finite element (FE) model updating methodology utilizing modal data. A dynamic condensation technique is adopted to reduce the full system model to a smaller version in which the degrees of freedom (DOFs) correspond to the observed DOFs, which facilitates the model updating procedure without any mode-matching. The present work considers both the MPV and the covariance matrix of the modal parameters as the modal data. Moreover, modal data identified from multiple setups is considered in the updating procedure, reflecting the realistic scenario in which a limited number of sensors cannot measure the response of all DOFs of interest in a large structure. A relationship is established between the modal data and the structural parameters based on the eigensystem equation, through the introduction of additional uncertain parameters in the form of modal frequencies and partial mode shapes. A novel sampling strategy, the Metropolis-within-Gibbs (MWG) sampler, is proposed to sample from the posterior probability density function (PDF). The effectiveness of the proposed approach is demonstrated on both simulated and experimental examples.
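A Metropolis-within-Gibbs sweep of the kind named in the title updates one parameter at a time with a Metropolis step, conditional on the rest. Below is a minimal sketch on a toy two-parameter posterior (a correlated Gaussian, not the paper's eigensystem-based posterior); all values are illustrative.

```python
import numpy as np

def log_post(theta):
    """Toy posterior: zero-mean bivariate normal with correlation 0.8."""
    x, y = theta
    return -0.5 * (x**2 - 2 * 0.8 * x * y + y**2) / (1 - 0.8**2)

def metropolis_within_gibbs(log_post, theta0, n_iter=8000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        for j in range(theta.size):         # Gibbs sweep: one coordinate at a time
            prop = theta.copy()
            prop[j] += step * rng.normal()  # Metropolis step for coordinate j
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis_within_gibbs(log_post, [3.0, -3.0])
```

The coordinate-wise structure is what makes the approach attractive when some conditional distributions (here, of modal frequencies or partial mode shapes) are easier to handle than the joint posterior.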

Model-independent Constraints on Type Ia Supernova Light-curve Hyperparameters and Reconstructions of the Expansion History of the Universe

  • Koo, Hanwool;Shafieloo, Arman;Keeley, Ryan E.;L'Huillier, Benjamin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.45 no.1
    • /
    • pp.48.4-49
    • /
    • 2020
  • We reconstruct the expansion history of the universe using type Ia supernovae (SN Ia) in a manner independent of any cosmological model assumptions. To do so, we implement a nonparametric iterative smoothing method on the Joint Light-curve Analysis (JLA) data while exploring the SN Ia light-curve hyperparameter space by Markov Chain Monte Carlo (MCMC) sampling. We test how the posteriors of these hyperparameters depend on cosmology, i.e., whether using different dark energy models or reconstructions shifts these posteriors. Our constraints on the SN Ia light-curve hyperparameters from this model-independent analysis are highly consistent with the constraints obtained using different parameterizations of the dark energy equation of state, namely the flat ΛCDM cosmology, the Chevallier-Polarski-Linder model, and the Phenomenologically Emergent Dark Energy (PEDE) model. This implies that the distance moduli constructed from the JLA data are largely independent of the cosmological models. We also study the possibility that the light-curve parameters evolve with redshift, and our results are consistent with no evolution. The reconstructed expansion history of the universe and dark energy properties are likewise in good agreement with the expectations of the standard ΛCDM model. However, our results also indicate that the data still allow considerable flexibility in the expansion history of the universe. This work is published in ApJ.

  • PDF
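The nonparametric iterative smoothing method referenced here repeatedly adds kernel-smoothed residuals to a starting guess, so that the reconstruction is driven by the data rather than by a cosmological model. Below is a toy sketch with a synthetic distance-modulus curve; the kernel width, data, and starting guess are illustrative assumptions.

```python
import numpy as np

def iterative_smooth(z, mu, mu0, delta=0.15, n_iter=30):
    """Iteratively add kernel-smoothed residuals to the current reconstruction."""
    w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / delta) ** 2)
    w = w / w.sum(axis=1, keepdims=True)        # row-normalized Gaussian smoother
    fit = mu0.copy()
    for _ in range(n_iter):
        fit = fit + w @ (mu - fit)              # add smoothed residuals
    return fit

rng = np.random.default_rng(2)
z = np.linspace(0.1, 1.2, 200)
true_mu = 5 * np.log10((1 + z) * z) + 43        # toy distance-modulus curve
data = true_mu + 0.1 * rng.normal(size=z.size)
recon = iterative_smooth(z, data, mu0=np.full_like(z, 40.0))
```

In the actual analysis this smoothing loop would sit inside an MCMC exploration of the light-curve hyperparameters; here only the smoothing kernel itself is shown.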

Bayesian model update for damage detection of a steel plate girder bridge

  • Xin Zhou;Feng-Liang Zhang;Yoshinao Goi;Chul-Woo Kim
    • Smart Structures and Systems
    • /
    • v.31 no.1
    • /
    • pp.29-43
    • /
    • 2023
  • This study investigates the possibility of damage detection on a real bridge by means of a modal-parameter-based finite element (FE) model update. Field moving-vehicle experiments were conducted on an actual steel plate girder bridge, and cracks were introduced to simulate damage states. A fast Bayesian FFT method was employed to identify the modal parameters and quantify their uncertainties, and these modal parameters were then used in the Bayesian model update. Material properties and boundary conditions are treated as uncertain and updated in the process. Observations showed that, although some differences existed among the results obtained from different model classes, the discrepancy between the modal parameters of the FE model and those obtained experimentally was reduced after the update, and the updated parameters in the numerical model were indeed affected by the damage. The importance of boundary conditions in the model updating process is also observed. The study assesses the capability of the MCMC model update method for an actual bridge structure and notes the limitations of FE model updating for bridge damage detection when only modal parameters are used.
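A Bayesian model update of the sort described can be illustrated in miniature: treat a stiffness scale factor as uncertain and sample its posterior given a "measured" natural frequency. The 1-DOF surrogate, the plain Metropolis sampler, and all numbers below are illustrative assumptions, not the study's FE model or sampler.

```python
import numpy as np

k0, m = 1.0e6, 250.0                  # nominal stiffness [N/m] and mass [kg]
omega_obs, sigma = 55.0, 0.5          # "measured" natural frequency [rad/s] and noise

def log_post(theta):
    """Posterior over the stiffness scale factor, uniform prior on (0, 2)."""
    if not 0.0 < theta < 2.0:
        return -np.inf
    omega = np.sqrt(theta * k0 / m)   # 1-DOF surrogate for the FE model
    return -0.5 * ((omega - omega_obs) / sigma) ** 2

rng = np.random.default_rng(3)
theta, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):                 # plain Metropolis random walk
    prop = theta + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
theta_hat = np.mean(chain[2000:])
```

The posterior concentrates below the nominal value of 1, consistent with the intuition that damage reduces stiffness and lowers the measured frequency.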

Improving LTC using Markov Chain Model of Sensory Neurons and Synaptic Plasticity (감각 뉴런의 마르코프 체인 모델과 시냅스 가소성을 이용한 LTC 개선)

  • Lee, Junhyeok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.150-152
    • /
    • 2022
  • In this work, we propose a model based on the Liquid Time-constant Network (LTC) that incorporates the behavior and synaptic plasticity of sensory neurons. Four variants of the neuron connection structure, with increasing or decreasing numbers of neurons across layers, were tested. We experimented on a time series prediction dataset to see whether the modified model outperforms the original LTC. The experimental results show that modeling sensory neurons does not always improve performance, but performance does improve when the learning rule is chosen appropriately for the type of dataset. In addition, the connection structure of neurons showed improved performance with fewer than four layers.

  • PDF
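A Markov chain model of a sensory neuron, as invoked in the title, can be sketched as a small discrete-state chain whose long-run firing fraction matches its stationary distribution. The states and transition probabilities below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state sensory-neuron model: resting, depolarized, firing
STATES = ["rest", "depolarized", "firing"]
P = np.array([[0.90, 0.09, 0.01],     # transition probabilities per time step
              [0.30, 0.50, 0.20],
              [0.60, 0.20, 0.20]])

def simulate(n_steps, seed=0):
    rng = np.random.default_rng(seed)
    s, path = 0, []
    for _ in range(n_steps):
        s = rng.choice(3, p=P[s])     # sample the next state from row s of P
        path.append(s)
    return np.array(path)

path = simulate(10000)
firing_rate = (path == 2).mean()

# Compare with the stationary distribution (left eigenvector of P for eigenvalue 1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
```

The empirical firing fraction converging to the stationary probability of the firing state is exactly the property that makes such chains usable as stochastic input models for a network like LTC.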

A Study on the Implementation of Crawling Robot using Q-Learning

  • Hyunki KIM;Kyung-A KIM;Myung-Ae CHUNG;Min-Soo KANG
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.4
    • /
    • pp.15-20
    • /
    • 2023
  • Machine learning comprises supervised learning, unsupervised learning, and reinforcement learning, depending on the type of data and the processing mechanism. Because the inputs and outputs of a crawling robot are unclear and concrete mathematical modeling is difficult, a reinforcement learning method is applied in this paper. In particular, Q-learning is among the most effective techniques in model-free reinforcement learning. This paper presents a method to implement a crawling robot that finds the optimal crawling motion through trial and error in a dynamic environment using the Q-learning algorithm. The goal is to find, by reinforcement learning, the pair of motor angles giving the best performance, and finally to maintain mature and stable motion of the EV3 crawling robot. The robot was built with LEGO Mindstorms using two motors, an ultrasonic sensor, a brick, and switches, and was implemented with the EV3 Classroom software. By repeating learning three times, a total of 60 data points were acquired, and a graph of the two motor angles versus crawling distance was plotted for better understanding. Applying the Q-learning reinforcement learning algorithm, it was confirmed that the crawling robot found the optimal motor angles and operated according to the trained policy, indicating a direction for future research.
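The trial-and-error search over the two motor angles can be sketched as a bandit-style Q-learning loop. The reward surface, discretization, and hyperparameters below are illustrative stand-ins for the robot's measured crawling distance, not the paper's actual setup.

```python
import numpy as np

# Hypothetical setup: each motor angle discretized into 6 bins; the reward is a
# synthetic "crawl distance" peaked at an optimal pair unknown to the agent.
N_ANGLES = 6
BEST = (2, 4)

def reward(a1, a2, rng):
    dist = -((a1 - BEST[0]) ** 2 + (a2 - BEST[1]) ** 2)
    return dist + 0.1 * rng.normal()          # noisy distance measurement

rng = np.random.default_rng(4)
Q = np.zeros((N_ANGLES, N_ANGLES))
alpha, eps = 0.2, 0.2
for episode in range(3000):
    if rng.random() < eps:                    # epsilon-greedy exploration
        a1, a2 = rng.integers(N_ANGLES), rng.integers(N_ANGLES)
    else:                                     # exploit the current best estimate
        a1, a2 = np.unravel_index(Q.argmax(), Q.shape)
    r = reward(a1, a2, rng)
    # Bandit-style Q-update (single-step task, so no bootstrapped next state)
    Q[a1, a2] += alpha * (r - Q[a1, a2])

best = np.unravel_index(Q.argmax(), Q.shape)
```

On the real robot, the reward function would be replaced by the crawling distance measured with the ultrasonic sensor after executing the chosen angle pair.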

CRFNet: Context ReFinement Network used for semantic segmentation

  • Taeghyun An;Jungyu Kang;Dooseop Choi;Kyoung-Wook Min
    • ETRI Journal
    • /
    • v.45 no.5
    • /
    • pp.822-835
    • /
    • 2023
  • Recent semantic segmentation frameworks usually combine low-level and high-level context information to achieve improved performance, and post-level context information is also considered. In this study, we present a Context ReFinement Network (CRFNet) and its training method to improve the semantic predictions of segmentation models with an encoder-decoder structure. Our study is based on postprocessing that directly considers the relationship between spatially neighboring pixels of a label map, as in Markov and conditional random fields. CRFNet comprises two modules: a refiner, which refines the context information from the output features of a conventional semantic segmentation model, and a combiner, which combines the refined features with intermediate features from the decoding process of the segmentation model to produce the final output. To train CRFNet to refine the semantic predictions more accurately, we propose a sequential training scheme. Using various backbone networks (ENet, ERFNet, and HyperSeg), we extensively evaluate our model on three large-scale real-world datasets to demonstrate the effectiveness of our approach.
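The Markov/conditional random field postprocessing that motivates CRFNet can be illustrated with a classic MRF baseline: iterated conditional modes (ICM) over a label map with a Potts smoothness term. This is the traditional hand-crafted technique that learned modules like CRFNet aim to replace, not the paper's method; the toy unary scores are synthetic.

```python
import numpy as np

def icm_refine(unary, beta=1.0, n_iter=5):
    """Iterated conditional modes: greedily relabel each pixel using its unary
    score plus a Potts bonus for agreeing with its 4-connected neighbors."""
    labels = unary.argmax(axis=-1)
    H, W, _ = unary.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                score = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        score[labels[ni, nj]] += beta   # reward neighbor agreement
                labels[i, j] = score.argmax()
    return labels

# Synthetic two-class unary scores: left half favors class 0, right half class 1
rng = np.random.default_rng(5)
unary = np.zeros((16, 16, 2))
unary[:, :8, 0] = 1.0
unary[:, 8:, 1] = 1.0
unary += 0.8 * rng.normal(size=unary.shape)
refined = icm_refine(unary)
```

Smoothing away isolated label noise while keeping the region boundary is the behavior CRFNet's refiner and combiner modules learn end-to-end.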

A refinement and abstraction method of the SPZN formal model for intelligent networked vehicles systems

  • Yang Liu;Yingqi Fan;Ling Zhao;Bo Mi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.1
    • /
    • pp.64-88
    • /
    • 2024
  • Security and reliability are of the utmost importance in intelligent networked vehicles. Stochastic Petri Net and Z (SPZN), an excellent formal verification tool for modeling concurrent systems, can effectively handle concurrent operations within a system, establish relationships among components, and carry out verification and reasoning to ensure the system's safety and reliability in practical applications. However, applying Petri Nets to a system with numerous nodes often leads to state explosion. To tackle this challenge, a refinement and abstraction method based on SPZN is proposed in this paper. The approach can not only refine and abstract the Stochastic Petri Net but also establish a corresponding relationship with the Z language. In determining the firing rates of transitions in the Stochastic Petri Net, we employ interval-average and weighted-average methods, which significantly reduce time and space complexity compared to alternative techniques and are suitable for expert systems at various levels. This reduction facilitates subsequent whole-system and module-level analysis. Furthermore, by analyzing the Markov Chain isomorphism properties in the case study, recommendations can be put forward for minimizing system risks in intelligent parking applications within the intelligent networked vehicle system.
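The Markov chain obtained from a stochastic Petri net (states as reachable markings, rates from transition firings) is typically analyzed through its stationary distribution. Below is a minimal sketch with a hypothetical 3-state generator matrix; the states and rates are illustrative, not from the paper's case study.

```python
import numpy as np

# Hypothetical 3-state continuous-time Markov chain abstracted from a
# stochastic Petri net (rows sum to zero, as required of a generator matrix)
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.9,  0.5],
              [ 0.1,  0.6, -0.7]])

# Stationary distribution: solve pi @ Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The long-run fraction of time spent in each marking, read off from `pi`, is what risk recommendations of the kind mentioned in the abstract would be based on.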

Packet loss pattern modeling of cdma2000 mobile Internet channel for network-adaptive multimedia service (cdma2000 통신망에서 적응적인 멀티미디어 서비스를 위한 패킷 손실 모델링)

  • Suh Won-Bum;Park Sung-Hee;Suh Doug-Young;Shin Ji-Tae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.1B
    • /
    • pp.52-63
    • /
    • 2004
  • The packet loss process of the cdma2000 mobile Internet channel deployed in Korea is modeled as a two-state Markov process known as the Gilbert model. This paper proposes procedures to derive the four parameters of our modified Gilbert model from packet loss traces taken from the two major cdma2000 networks in Korea. The four parameters are derived in various situations: with fixed and moving terminals, and in open-field and urban areas. They can be used to produce synthetic packet loss patterns for studying the channel; moreover, if they are calculated online during a multimedia service, they can be used to make loss-protection controls adaptive to network conditions.
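The two-state Gilbert model described here can be sketched directly: a good/bad state chain with per-state loss probabilities, which is what makes the synthetic loss patterns bursty rather than independent. The four parameter values below are illustrative, not the measured cdma2000 values from the paper.

```python
import numpy as np

def gilbert_loss_trace(n, p, q, loss_good, loss_bad, seed=0):
    """Synthetic packet-loss pattern from a two-state Gilbert model.
    p: P(good -> bad), q: P(bad -> good); loss_* are per-state loss probabilities."""
    rng = np.random.default_rng(seed)
    state, losses = 0, np.empty(n, dtype=bool)   # 0 = good, 1 = bad
    for i in range(n):
        loss_p = loss_bad if state else loss_good
        losses[i] = rng.random() < loss_p
        if state == 0 and rng.random() < p:      # state transition for next packet
            state = 1
        elif state == 1 and rng.random() < q:
            state = 0
    return losses

trace = gilbert_loss_trace(100000, p=0.05, q=0.4, loss_good=0.01, loss_bad=0.5)
# Long-run loss rate: P(bad) * loss_bad + P(good) * loss_good,
# with P(bad) = p / (p + q)
expected = (0.05 / 0.45) * 0.5 + (0.4 / 0.45) * 0.01
```

Fitting `p`, `q`, and the two loss probabilities to a measured trace, as the paper does per terminal mobility and environment, then lets the same generator reproduce the channel's burst behavior offline.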