• Title/Summary/Keyword: Bayesian Error Rate


Asset Price, the Exchange Rate, and Trade Balances in China: A Sign Restriction VAR Approach

  • Kim, Wongi
    • East Asian Economic Review / v.22 no.3 / pp.371-400 / 2018
  • Although asset prices are an important factor in determining changes in external balances, no studies have investigated them from the Chinese perspective. In this study, I empirically examine the underlying driving forces of China's trade balance, particularly the roles of asset prices and the real exchange rate. To this end, I estimate a sign-restricted structural vector autoregressive model on quarterly time series data for China, using Bayesian methods. The results show that changes in asset prices affect China's trade balance through private consumption and investment. Also, an appreciation of the real exchange rate tends to worsen China's trade balance. Furthermore, forecast error variance decomposition indicates that changes in asset prices (stock prices and housing prices) explain about 20% of the variability in the trade balance, while changes in the real exchange rate explain about 10%.
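
The sign-restriction step this abstract describes is commonly implemented by accept/reject sampling over random rotations of a Cholesky factor. Below is a minimal, hedged sketch of that idea; the covariance matrix, variable ordering, and the particular sign restrictions are illustrative assumptions, not the paper's actual specification.

```python
# Sketch of the accept/reject step in a sign-restricted SVAR.
# Sigma, the variable ordering, and the restrictions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced-form residual covariance for
# [asset price, real exchange rate, trade balance]
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.1],
                  [0.2, 0.1, 1.0]])
P = np.linalg.cholesky(Sigma)

def random_rotation(n):
    """Draw Q uniformly (Haar measure) via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)))
    return q * np.sign(np.diag(r))          # fix column signs

accepted = []
for _ in range(10_000):
    A0 = P @ random_rotation(3)             # candidate impact matrix
    shock = A0[:, 0]                        # impact column of the first shock
    # Hypothetical restriction: an asset-price shock raises asset prices
    # and appreciates the real exchange rate on impact.
    if shock[0] > 0 and shock[1] > 0:
        accepted.append(A0)

print(f"kept {len(accepted)} of 10000 candidate rotations")
```

Only draws whose impulse responses satisfy the restrictions are retained; posterior statistics such as the forecast error variance decomposition are then computed over the accepted set.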

An Analysis on Incident Cases of Dynamic Positioning Vessels (Dynamic Positioning 선박들의 사고사례 분석)

  • Chae, Chong-Ju;Jung, Yun-Chul
    • Journal of Navigation and Port Research / v.39 no.3 / pp.149-156 / 2015
  • The Dynamic Positioning (DP) system consists of seven elements: the power system, human-machine interface, DP computer, position reference system (PRS), sensors, thruster system, and the DP operator. Incidents such as loss of position (LOP) on DP vessels usually arise from errors in these seven elements. The purpose of this study is to identify safe operating methods for DP vessels through qualitative and quantitative analysis of the DP LOP incidents reported to IMCA every year. The 612 DP LOP incidents submitted from 2001 to 2010 were analyzed to determine the main causes of incidents and their rates relative to other causes. The element most frequently involved in incidents was PRS error, followed by the DP computer, power system, human error, and thruster system. The PRS was analyzed further, and a flowchart was drawn through expert brainstorming; conditional probabilities were then computed with a Bayesian network based on this flowchart. The main causes of drive-off incidents were found to be the DGPS, microwave radar, and HPR. The study also identified the main causes of DGPS errors through the Bayesian network: blocked signals, electronic component failure, relative-mode errors, and weak or failed signals.
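
The conditional-probability reasoning described above amounts to applying Bayes' theorem over the network of causes. Here is a toy sketch of that diagnosis step; all probability values are illustrative placeholders, not IMCA statistics.

```python
# Toy Bayes'-theorem diagnosis in the spirit of the paper's Bayesian
# network: posterior probability of each PRS cause given a drive-off.
# All numbers below are hypothetical placeholders, not IMCA data.
priors = {"DGPS": 0.5, "microwave radar": 0.3, "HPR": 0.2}
p_driveoff_given = {"DGPS": 0.4, "microwave radar": 0.25, "HPR": 0.15}

evidence = sum(priors[c] * p_driveoff_given[c] for c in priors)
posterior = {c: priors[c] * p_driveoff_given[c] / evidence for c in priors}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({cause} | drive-off) = {p:.3f}")
```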

Development of Quantitative Risk Assessment Methodology for the Maritime Transportation Accident of Merchant Ship (상선 운항 사고의 양적 위기평가기법 개발)

  • Yim, Jeong-Bin
    • Journal of Navigation and Port Research / v.33 no.1 / pp.9-19 / 2009
  • This paper describes an empirical methodology for the quantitative risk assessment of maritime transportation accidents (MTA) of merchant ships. The principal aim is to estimate the risk of an MTA that could degrade ship safety by analyzing the underlying factors contributing to MTAs, based on the IMO's Formal Safety Assessment (FSA) techniques, and by assessing the probabilistic risk level of an MTA with a quantitative risk assessment methodology. The probabilistic risk level, expressed as a Risk Index (RI) composed of a Probability Index (PI) and a Severity Index (SI), is estimated from the proposed Maritime Transportation Accident Model (MTAM), a Bayesian network built on Bayes' theorem. The applicability of the proposed MTAM is then evaluated using a scenario group of 355 core damage accident histories. In the evaluation, the correct-estimation rate of PI, $r_{Acc}$, was 82.8%; the rate of PI variables with out-of-range sensitivity ($S_p \gg 1.0$ or $S_p \ll 1.0$) was within 10%; the average error of the estimated SI, $\bar{d}_{SI}$, was 0.0195; and the correct-estimation rate of RI, $r_{Acc}$(%), was 91.8%. These results clearly show that the proposed accident model and methodology can be used in practical maritime transportation.
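
In FSA-style risk ranking, the probability and severity indices are conventionally placed on logarithmic scales so that the Risk Index can be formed additively. The sketch below illustrates that convention; the index offsets and values are illustrative assumptions, not the paper's calibration.

```python
# Sketch of combining a Probability Index and a Severity Index into a
# Risk Index on log scales (FSA convention: RI = PI + SI). The offset
# in probability_index and the SI value are hypothetical.
import math

def probability_index(freq_per_ship_year: float) -> float:
    """Log-scale index; the offset of 5 is an illustrative choice."""
    return 5 + math.log10(freq_per_ship_year)

def risk_index(pi: float, si: float) -> float:
    # Adding log-scale indices corresponds to multiplying
    # frequency by severity.
    return pi + si

pi = probability_index(1e-3)   # one accident per 1,000 ship-years
si = 2.0                       # hypothetical severity index
print(f"PI = {pi:.1f}, RI = {risk_index(pi, si):.1f}")
```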

A Comparison of Analysis Methods for Work Environment Measurement Databases Including Left-censored Data (불검출 자료를 포함한 작업환경측정 자료의 분석 방법 비교)

  • Park, Ju-Hyun;Choi, Sangjun;Koh, Dong-Hee;Park, Donguk;Sung, Yeji
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.32 no.1 / pp.21-30 / 2022
  • Objectives: The purpose of this study is to suggest an optimal method by comparing analysis methods for work environment measurement datasets that include left-censored data, i.e., datasets in which one or more measurements fall below the limit of detection (LOD). Methods: A computer program was used to generate left-censored datasets for various combinations of censoring rate (1% to 90%) and sample size (30 to 300). For the analysis of the censored data, the simple substitution method (LOD/2), β-substitution method, maximum likelihood estimation (MLE) method, Bayesian method, and regression on order statistics (ROS) were compared. Each method was used to estimate four parameters of the log-normal distribution for the censored dataset: (1) geometric mean (GM), (2) geometric standard deviation (GSD), (3) 95th percentile (X95), and (4) arithmetic mean (AM). The performance of each method was evaluated using relative bias and relative root mean squared error (rMSE). Results: For the largest sample size (n=300) and censoring rates below 40%, relative bias and rMSE were small for all five methods. When the censoring rate was high (70%, 90%), the simple substitution method was inappropriate because its relative bias was the largest regardless of sample size. When the sample size was small and the censoring rate was high, the Bayesian, β-substitution, and MLE methods showed the smallest relative bias. Conclusions: The accuracy and precision of all methods tended to increase with larger sample sizes and lower censoring rates. The simple substitution method is inappropriate when the censoring rate is high, whereas the β-substitution, MLE, and Bayesian methods can be widely applied.
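
To make the comparison concrete, here is a minimal sketch of the MLE method for left-censored lognormal data, contrasted with simple substitution (LOD/2). The true GM/GSD, sample size, and censoring level are hypothetical simulation settings, not the study's.

```python
# MLE for a left-censored lognormal sample: detected values contribute
# the log-density, non-detects contribute the log-CDF at the LOD.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
true_gm, true_gsd = 1.0, 2.5                       # hypothetical truth
x = rng.lognormal(np.log(true_gm), np.log(true_gsd), size=100)
lod = np.quantile(x, 0.4)                          # ~40% censoring
detected, n_censored = x[x >= lod], int((x < lod).sum())

def neg_log_lik(theta):
    mu, sigma = theta[0], np.exp(theta[1])         # sigma > 0 via log-param
    ll = stats.norm.logpdf(np.log(detected), mu, sigma).sum()
    ll += n_censored * stats.norm.logcdf(np.log(lod), mu, sigma)
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print("MLE   GM =", np.exp(mu_hat), " GSD =", np.exp(sigma_hat))

# Simple substitution for contrast: replace non-detects with LOD/2.
x_sub = np.where(x < lod, lod / 2, x)
print("LOD/2 GM =", np.exp(np.log(x_sub).mean()))
```

Running both at high censoring rates reproduces the qualitative finding above: the substitution estimate drifts while the censored-likelihood estimate stays close to the truth.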

A Development of Markov Chain Monte Carlo History Matching Technique for Subsurface Characterization (지하 불균질 예측 향상을 위한 마르코프 체인 몬테 카를로 히스토리 매칭 기법 개발)

  • Jeong, Jina;Park, Eungyu
    • Journal of Soil and Groundwater Environment / v.20 no.3 / pp.51-64 / 2015
  • In the present study, we develop two history matching techniques based on the Markov chain Monte Carlo method, in which a radial basis function network and Gaussian fields generated by unconditional geostatistical simulation are employed as the random-walk transition kernels. As Bayesian inverse methods for aquifer characterization, the developed models can be applied effectively even when direct information such as hydraulic conductivity is absent and only transient hydraulic head records from imposed stresses at observation wells are available. The model that uses unconditional simulation as its transition kernel has the advantage that spatial statistics can be directly associated with the predictions. The model using a radial basis function network shares the same advantages, yet it does not require external geostatistical techniques; moreover, by employing radial basis functions as the transition kernel, multi-scale nested structures can be rigorously addressed. In validation, the overall predictability of both models is sound, showing high correlation coefficients between the reference and predicted fields. In terms of performance, the model with the radial basis function network achieves a higher error-reduction rate and greater computational efficiency than the one with unconditional geostatistical simulation.
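
The core loop of such a history matching scheme is a Metropolis-Hastings random walk: propose a perturbed parameter field, run the forward model, and accept or reject by likelihood against observed heads. The sketch below uses a toy forward model and a plain Gaussian kernel as stand-ins for the paper's RBF-network and geostatistical kernels.

```python
# Bare-bones Metropolis-Hastings history matching loop. The forward
# model, observations, and kernel are illustrative toys, not the
# paper's RBF-network or unconditional-simulation kernels.
import numpy as np

rng = np.random.default_rng(2)
obs = np.array([1.2, 0.8, 1.0])                    # "observed" heads (toy)

def forward(field):
    """Toy forward model mapping a 3-cell logK field to heads."""
    return 1.0 + 0.1 * (field - field.mean())

def log_lik(field, sigma=0.05):
    r = forward(field) - obs
    return -0.5 * np.sum(r**2) / sigma**2

field = np.zeros(3)
chain, ll = [], log_lik(field)
for _ in range(5000):
    proposal = field + rng.normal(scale=0.1, size=3)   # random-walk kernel
    ll_new = log_lik(proposal)
    if np.log(rng.uniform()) < ll_new - ll:            # accept/reject
        field, ll = proposal, ll_new
    chain.append(field.copy())

print("posterior mean field:", np.mean(chain[1000:], axis=0))
```

Swapping the Gaussian proposal for draws from an RBF network or an unconditional geostatistical simulator is what distinguishes the paper's two kernels.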

Genotype-Calling System for Somatic Mutation Discovery in Cancer Genome Sequence (암 유전자 배열에서 체세포 돌연변이 발견을 위한 유전자형 조사 시스템)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.12 / pp.3009-3015 / 2013
  • Next-generation sequencing (NGS) has enabled whole-genome and transcriptome single nucleotide variant (SNV) discovery in cancer, and the most fundamental step is determining an individual's genotype from multiple aligned short reads at a position. A Bayesian algorithm estimates parameters using posterior genotype probabilities, whereas the EM algorithm estimates parameters by maximum likelihood from the observed data. Here, we propose a novel genotype-calling system and compare and analyze the effect of sample size (S = 50, 100, and 500) on the posterior estimates of the sequencing error rate, somatic mutation status, and genotype probability. The result is that the Bayesian estimates, even at the small sample size of 50, approached the true parameters more accurately than the EM estimates on small samples.
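
The basic posterior-genotype computation underlying such callers can be sketched in a few lines: given read counts at a site and a sequencing error rate, score each genotype by a binomial likelihood times a prior. The error rate and priors below are illustrative, not the paper's estimates.

```python
# Posterior genotype probabilities at one site from read counts,
# assuming a symmetric per-base error rate e. Priors are hypothetical.
from math import comb

def genotype_posteriors(alt, depth, e=0.01,
                        priors={"RR": 0.985, "RA": 0.01, "AA": 0.005}):
    # Probability a single read shows the ALT allele under each genotype.
    p_alt = {"RR": e, "RA": 0.5, "AA": 1 - e}
    lik = {g: comb(depth, alt) * p**alt * (1 - p)**(depth - alt)
           for g, p in p_alt.items()}
    z = sum(priors[g] * lik[g] for g in lik)
    return {g: priors[g] * lik[g] / z for g in lik}

# A 30x site with 14 ALT reads: the heterozygote should dominate.
for g, p in genotype_posteriors(alt=14, depth=30).items():
    print(g, round(p, 4))
```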

Compressive Sensing Recovery of Natural Images Using Smooth Residual Error Regularization (평활 잔차 오류 정규화를 통한 자연 영상의 압축센싱 복원)

  • Trinh, Chien Van;Dinh, Khanh Quoc;Nguyen, Viet Anh;Park, Younghyeon;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.6 / pp.209-220 / 2014
  • Compressive sensing (CS) is a new signal acquisition paradigm that enables sampling below the Nyquist rate for a special kind of signal, the sparse signal. There are plenty of CS recovery methods, but their performance remains challenging, especially at low sub-rates. For CS recovery of natural images, regularizations exploiting prior information can be used to enhance performance. In this context, this paper improves the quality of reconstructed natural images by combining the Dantzig selector with smooth filters (a Gaussian filter and a nonlocal means filter) to form a new regularization called smooth residual error regularization. Moreover, because total variation has proven successful at preserving edges and object boundaries in reconstructed images, the effectiveness of the proposed regularization is verified by applying it within augmented Lagrangian total variation minimization. This framework can be viewed as a new CS recovery that seeks smoothness in the residual image. Experimental results demonstrate significant improvement over several other CS recovery methods in both subjective and objective quality; in the best case, our algorithm gains up to 9.14 dB over CS recovery using a Bayesian framework.
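
For readers new to CS recovery, the sketch below shows the basic pipeline on a synthetic 1-D sparse signal: random sub-Nyquist measurements followed by an iterative soft-thresholding (ISTA) l1 solver. This plain l1 recovery is a simplified stand-in for the paper's augmented-Lagrangian TV solver with smooth residual regularization, which is considerably more involved.

```python
# Minimal CS pipeline: random measurements of a k-sparse signal,
# then ISTA for l1-regularized least squares (a stand-in solver).
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 64, 8                      # length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # sub-rate m/n = 0.25
y = A @ x_true                            # compressive measurements

L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of gradient
lam, x = 0.01, np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```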

Forecasting Long-Term Streamflow from a Small Watershed Using Artificial Neural Networks (인공신경망 이론을 이용한 소유역에서의 장기 유출 해석)

  • 강문성;박승우
    • Magazine of the Korean Society of Agricultural Engineers / v.43 no.2 / pp.69-77 / 2001
  • An artificial neural network model was developed to analyze and forecast daily streamflow from a small watershed. Error back-propagation neural networks (EBPN) trained on daily rainfall and runoff data were found to perform well in simulating streamflow. The model adopts a gradient descent method in which momentum and an adaptive learning rate are employed to mitigate local-minimum problems and speed up the convergence of the EBP method. The number of hidden nodes was optimized using the Bayesian information criterion. The resulting optimal EBPN model for forecasting daily streamflow uses three rainfall and four runoff inputs (Model 34), and the best number of hidden nodes was found to be 13. The proposed model simulates daily streamflow satisfactorily compared with observed data at the HS#3 watershed of the Baran watershed project, which covers 391.8 ha and has relatively steep topography and complex land use.
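
The hidden-node selection step can be illustrated with the standard Gaussian-error form of the criterion, BIC = n·ln(SSE/n) + k·ln(n), where k counts the network weights. The SSE values below are hypothetical placeholders used only to show how the comparison works.

```python
# Sketch of hidden-node selection by BIC for a one-hidden-layer
# network with 7 inputs (3 rainfall + 4 runoff) and 1 output.
# The SSE values are illustrative, not the study's results.
import math

def bic(sse: float, n_obs: int, n_params: int) -> float:
    return n_obs * math.log(sse / n_obs) + n_params * math.log(n_obs)

n_obs, n_inputs = 1000, 7
sse_by_hidden = {9: 152.0, 11: 138.0, 13: 131.0, 15: 130.5}  # hypothetical

for h, sse in sse_by_hidden.items():
    k = (n_inputs + 1) * h + (h + 1) * 1   # weights incl. biases, 1 output
    print(f"hidden={h:2d}  BIC={bic(sse, n_obs, k):8.1f}")
```

With numbers like these, the penalty term k·ln(n) stops the marginal SSE gain at 15 nodes from paying off, so an intermediate size is selected, which is the logic behind the study's choice of 13.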

SVM Based Speaker Verification Using Sparse Maximum A Posteriori Adaptation

  • Kim, Younggwan;Roh, Jaeyoung;Kim, Hoirin
    • IEIE Transactions on Smart Processing and Computing / v.2 no.5 / pp.277-281 / 2013
  • Modern speaker verification systems based on support vector machines (SVMs) use Gaussian mixture model (GMM) supervectors as their input feature vectors, and maximum a posteriori (MAP) adaptation is the conventional method for generating speaker-dependent GMMs by adapting a universal background model (UBM). MAP adaptation requires an adequate amount of input speech because of the number of model parameters to be estimated. With limited utterances, MAP adaptation can be unreliable and introduce adaptation noise, even though the Bayesian priors used in MAP adaptation smooth the movement between the UBM and the speaker-dependent GMMs. This paper proposes a sparse MAP adaptation method, which is known to perform well in automatic speech recognition. By introducing sparse MAP adaptation into the GMM-SVM-based speaker verification system, the adaptation noise can be mitigated effectively. The proposed method uses the L0 norm as a regularizer to induce sparsity. Experimental results on the TIMIT database showed that the sparse MAP-based GMM-SVM speaker verification system yields a 42.6% relative reduction in equal error rate with few additional computations.
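
A hedged sketch of the idea: standard MAP adaptation shifts each UBM mean toward the speaker data with weight α = n/(n + r), and an L0-style step then keeps only the largest shifts, zeroing the unreliable ones. The relevance factor, counts, and top-q thresholding rule below are illustrative assumptions, not the paper's exact formulation.

```python
# MAP mean adaptation of a UBM with an L0-inspired sparsification step.
# Mixture counts, relevance factor, and q are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
C, D, r = 8, 4, 16.0                       # mixtures, feature dim, relevance
ubm_means = rng.normal(size=(C, D))
n_c = rng.integers(0, 40, size=C)          # soft counts per mixture (toy)
data_means = ubm_means + rng.normal(scale=0.3, size=(C, D))

alpha = (n_c / (n_c + r))[:, None]         # standard MAP adaptation weight
shift = alpha * (data_means - ubm_means)

# L0-style sparsification: zero all but the q largest shifts by magnitude.
q = 8
flat = np.abs(shift).ravel()
thresh = np.partition(flat, -q)[-q]
shift[np.abs(shift) < thresh] = 0.0

adapted = ubm_means + shift                # sparse speaker-dependent means
print("nonzero shift entries:", int((shift != 0).sum()))
```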

Application of the Weibull-Poisson long-term survival model

  • Vigas, Valdemiro Piedade;Mazucheli, Josmar;Louzada, Francisco
    • Communications for Statistical Applications and Methods / v.24 no.4 / pp.325-337 / 2017
  • In this paper, we propose a new four-parameter long-term lifetime distribution, set in a competing-risk scenario with decreasing, increasing, and unimodal hazard rate functions: the Weibull-Poisson long-term distribution. This distribution arises from a scenario of competing latent risks, in which the lifetime associated with each particular risk is not observable and only the minimum lifetime among all risks is observed, in a long-term (cure fraction) context. However, it can also be used in any other situation as long as it fits the data well. The exponential-Poisson long-term distribution and the Weibull long-term distribution are obtained as particular cases. The properties of the proposed distribution are discussed, including its probability density, survival, and hazard functions and explicit algebraic formulas for its order statistics. Assuming censored data, we consider the maximum likelihood approach for parameter estimation. For different parameter settings, sample sizes, and censoring percentages, simulation studies were performed to assess the mean squared error of the maximum likelihood estimates and to compare the proposed model with its particular cases. The Akaike information criterion, Bayesian information criterion, and likelihood ratio test were used for model selection. The relevance of the approach is illustrated on two real datasets, where the new model is compared with its particular cases, demonstrating its potential and competitiveness.
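
One standard way to realize the competing-latent-risk construction the abstract describes is the promotion-time form: a Poisson(θ) number of latent risks, each with a Weibull lifetime, with only the minimum observed, giving population survival S(t) = exp(-θ·F(t)) and cure fraction exp(-θ). Whether this matches the paper's exact parameterization is an assumption; the sketch below is illustrative.

```python
# Sketch of a Weibull-Poisson long-term (cure fraction) survival
# function under the promotion-time construction. Parameter values
# are hypothetical, and the paper's parameterization may differ.
import math

def weibull_cdf(t: float, shape: float, scale: float) -> float:
    return 1.0 - math.exp(-((t / scale) ** shape))

def long_term_survival(t: float, theta: float,
                       shape: float, scale: float) -> float:
    # S(t) = exp(-theta * F(t)); S(inf) = exp(-theta) is the cure fraction.
    return math.exp(-theta * weibull_cdf(t, shape, scale))

theta, shape, scale = 1.5, 2.0, 10.0       # hypothetical parameters
print("cure fraction:", math.exp(-theta))
for t in (0.0, 5.0, 10.0, 50.0):
    print(f"S({t:5.1f}) = {long_term_survival(t, theta, shape, scale):.4f}")
```

Setting shape = 1 recovers an exponential-Poisson long-term case, which is the sense in which the simpler models nest inside the proposed one.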