• Title/Summary/Keyword: Probabilistic variation


Development of Pedestrian Fatality Model using Bayesian-Based Neural Network (베이지안 신경망을 이용한 보행자 사망확률모형 개발)

  • O, Cheol;Gang, Yeon-Su;Kim, Beom-Il
    • Journal of Korean Society of Transportation
    • /
    • v.24 no.2 s.88
    • /
    • pp.139-145
    • /
    • 2006
  • This paper develops pedestrian fatality models capable of producing the probability of pedestrian fatality in collisions between vehicles and pedestrians. A probabilistic neural network (PNN) and binary logistic regression (BLR) are employed in modeling pedestrian fatality. Pedestrian age, vehicle type, and collision speed, obtained from reconstructing the collected accidents, are used as independent variables in the fatality models. One notable feature of this study is that an iterative sampling technique is used to construct various training and test datasets for a better performance comparison, and a statistical comparison considering the variation of model performances is conducted. The results show that the PNN-based fatality model outperforms the BLR-based model. The models developed in this study, which allow the pedestrian fatality probability to be predicted, would be useful tools for deriving various safety policies and technologies to enhance pedestrian safety.
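The PNN side of the comparison can be illustrated with a short Parzen-window sketch. Everything here is hypothetical: the training records, the feature encoding (age, vehicle-type code, collision speed), and the kernel width `sigma` are fabricated for the example, not taken from the paper:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=1.0):
    """Probabilistic neural network (Parzen-window) classification:
    one Gaussian kernel per training pattern; the class with the
    largest kernel-density sum wins."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.exp(-d2 / (2.0 * sigma ** 2)).sum() / len(Xc))
    return classes[int(np.argmax(scores))]

# Fabricated records: [pedestrian age, vehicle-type code, collision speed km/h]
X = np.array([[30, 0, 20], [65, 1, 60], [25, 0, 15], [70, 1, 70]], float)
y = np.array([0, 1, 0, 1])  # 0 = survived, 1 = fatality
print(pnn_predict(X, y, np.array([68, 1, 65.0]), sigma=10.0))
```

In a real calibration the kernel width would be tuned on held-out data, which is where the paper's iterative sampling of training/test sets comes in.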

Constructing Database and Probabilistic Analysis for Ultimate Bearing Capacity of Aggregate Pier (쇄석다짐말뚝의 극한지지력 데이터베이스 구축 및 통계학적 분석)

  • Park, Joon-Mo;Kim, Bum-Joo;Jang, Yeon-Soo
    • Journal of the Korean Geotechnical Society
    • /
    • v.30 no.8
    • /
    • pp.25-37
    • /
    • 2014
  • In the load and resistance factor design (LRFD) method, resistance factors are typically calibrated using resistance bias factors obtained from either only the data within ${\pm}2{\sigma}$ or the data excluding the tail values of an assumed probability distribution, in order to increase the reliability of the database. However, this data selection approach has a shortcoming: any low-quality data inadvertently included in the database may not be removed. In this study, a data quality evaluation method, developed based on the quality of static load test results, the engineering characteristics of the in-situ soil, and the dimensions of the aggregate piers, is proposed for use in constructing the database. For the evaluation of the method, a total of 65 static load test results collected from the literature, including static load test reports, were analyzed. Depending on the quality of the database, the comparison of bias factors, coefficients of variation, and resistance factors showed that the uncertainty in estimating bias factors can be reduced by using the proposed data quality evaluation method when constructing the database.
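The quantities being compared (resistance bias factors, their coefficient of variation, and the ${\pm}2{\sigma}$ screening the abstract mentions) can be sketched in a few lines; the measured and predicted capacities below are fabricated illustrative numbers, not the study's database:

```python
import numpy as np

# Hypothetical measured vs. predicted ultimate bearing capacities (kPa)
measured  = np.array([820., 650., 910., 700., 560., 880.])
predicted = np.array([760., 700., 850., 640., 600., 800.])

bias = measured / predicted                    # resistance bias factors
lam = bias.mean()                              # mean bias factor
cov = bias.std(ddof=1) / lam                   # coefficient of variation

# Conventional +/- 2 sigma screening criticized in the abstract:
sd = bias.std(ddof=1)
kept = bias[np.abs(bias - lam) <= 2 * sd]
print(round(lam, 3), round(cov, 3), len(kept))
```

The paper's point is that such purely statistical screening cannot flag a low-quality load test whose bias happens to fall near the mean, which is what the proposed quality-rating of each test addresses.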

Reliability analysis of reinforced concrete haunched beams shear capacity based on stochastic nonlinear FE analysis

  • Albegmprli, Hasan M.;Cevik, Abdulkadir;Gulsan, M. Eren;Kurtoglu, Ahmet Emin
    • Computers and Concrete
    • /
    • v.15 no.2
    • /
    • pp.259-277
    • /
    • 2015
  • The lack of experimental studies on the mechanical behavior of reinforced concrete (RC) haunched beams leads to difficulties in statistical and reliability analyses. This study performs stochastic and reliability analyses of the ultimate shear capacity of RC haunched beams based on nonlinear finite element analysis. The main aim is to investigate the influence of uncertainty in material properties and geometry parameters on the mechanical performance and shear capacity of RC haunched beams. First, 65 experimentally tested RC haunched beams and prismatic beams are analyzed via a deterministic nonlinear finite element method using a dedicated program (ATENA) to verify the efficiency of the numerical models, the shear capacity, and the crack pattern. The accuracy of the nonlinear finite element analyses is verified by comparing the numerical and experimental results, which are found to be in good agreement. Afterwards, stochastic analyses are performed for each beam, in which the RC material properties and geometry parameters are assigned probabilistic values using an advanced simulation procedure. From the stochastic analyses, the statistical parameters are determined: the resistance bias factor and the coefficient of variation are found to be 1.053 and 0.137, respectively. Finally, reliability analyses are carried out using the limit state functions of ACI-318 and ASCE-7 with the calculated statistical parameters. The results show that RC haunched beams are more sensitive to these uncertainties, and thus riskier, than RC prismatic beams.
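Given the reported statistics (bias factor 1.053, COV 0.137), the simplest kind of reliability check can be sketched by Monte Carlo sampling of a lognormal resistance against an assumed lognormal load. The load statistics here are invented for illustration; this is not the ACI-318/ASCE-7 limit-state calibration performed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

lam_R, cov_R = 1.053, 0.137    # resistance statistics from the abstract
lam_Q, cov_Q = 0.65, 0.20      # assumed normalized load statistics

def lognormal(rng, mean, cov, size):
    """Sample a lognormal variable parameterized by its mean and COV."""
    s2 = np.log(1.0 + cov ** 2)
    mu = np.log(mean) - 0.5 * s2
    return rng.lognormal(mu, np.sqrt(s2), size)

R = lognormal(rng, lam_R, cov_R, n)
Q = lognormal(rng, lam_Q, cov_Q, n)
pf = np.mean(R - Q < 0.0)      # P(limit state g = R - Q < 0)
print(pf)
```

A larger COV of resistance widens the tail of R and raises `pf`, which is the mechanism behind the haunched beams' higher "riskiness" relative to prismatic beams.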

Response Variability of Laminated Composite Plates with Random Elastic Modulus (탄성계수의 불확실성에 의한 복합적층판 구조의 응답변화도)

  • Noh, Hyuk-Chun
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.21 no.4
    • /
    • pp.335-345
    • /
    • 2008
  • In this study, we suggest a stochastic finite element scheme for the probabilistic analysis of composite laminated plates, which have been applied to a variety of mechanical structures due to their high strength-to-weight ratios. The formulation employs the weighted integral method, which has been shown to give the most accurate results among comparable approaches. We take the elastic modulus and the in-plane shear modulus as random. For the individual random parameters, independent stochastic field functions are assumed, and the effect of these parameters on the response is estimated based on exponentially varying auto- and cross-correlation functions. Based on example analyses, we observe that composite plates show a smaller coefficient of variation than plates of isotropic and orthotropic materials. For validation of the proposed scheme, a Monte Carlo analysis is also performed, and the results are compared with each other.
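The qualitative point, that the response variability tracks the variability of the random modulus, can be illustrated with a one-variable Monte Carlo stand-in (the full weighted integral formulation over a random field is far richer). The deflection is reduced to its proportionality w ~ 1/E, and the modulus statistics are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: the center deflection of a plate scales as
# w ~ q a^4 / D with bending stiffness D proportional to E, so w ~ 1/E.
E_mean, E_cov, n = 70e9, 0.10, 100_000
s2 = np.log(1.0 + E_cov ** 2)
E = rng.lognormal(np.log(E_mean) - 0.5 * s2, np.sqrt(s2), n)

w = 1.0 / E                          # response up to a deterministic constant
resp_cov = w.std() / w.mean()
print(round(resp_cov, 3))            # close to the input COV of E
```

In the laminated case the response depends on several partially correlated stiffness fields, which is why the paper finds a response COV smaller than for a single isotropic modulus.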

Comparison of Machine Learning-Based Radioisotope Identifiers for Plastic Scintillation Detector

  • Jeon, Byoungil;Kim, Jongyul;Yu, Yonggyun;Moon, Myungkook
    • Journal of Radiation Protection and Research
    • /
    • v.46 no.4
    • /
    • pp.204-212
    • /
    • 2021
  • Background: Identification of radioisotopes with plastic scintillation detectors is challenging because their spectra have poor energy resolution and lack photopeaks. To overcome this weakness, many researchers have conducted radioisotope identification studies using machine learning algorithms; however, the effect of data normalization on radioisotope identification has not yet been addressed, and studies on machine learning-based radioisotope identifiers for plastic scintillation detectors remain limited. Materials and Methods: In this study, machine learning-based radioisotope identifiers were implemented, and their performances under different data normalization methods were compared. Eight classes of radioisotopes, consisting of combinations of 22Na, 60Co, and 137Cs plus the background, were defined. The training set was generated by random sampling based on probability density functions acquired from experiments and simulations, and the test set was acquired by experiments. A support vector machine (SVM), an artificial neural network (ANN), and a convolutional neural network (CNN) were implemented as radioisotope identifiers with six data normalization methods and trained on the generated training set. Results and Discussion: The implemented identifiers were evaluated on test sets acquired with and without gain shifts to confirm their robustness against the gain shift effect. Among the three identifiers, both the prediction accuracy and the training-time ranking followed the order SVM > ANN > CNN. Conclusion: The prediction accuracy for the combined test sets was highest with the SVM, whose training time was also the shortest. The CNN exhibited the smallest variation in prediction accuracy across classes, even though it had the lowest accuracy for the combined test sets among the three identifiers.
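A dependency-free sketch of the normalization comparison: synthetic photopeak-free spectra are generated for two classes, and a nearest-centroid classifier stands in for the SVM/ANN/CNN identifiers. The peak positions, count rates, and the two normalizations are all assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(2)

def spectra(peak, n):
    """Fabricated stand-in for plastic-scintillator spectra: a broad,
    photopeak-free bump whose position depends on the isotope class."""
    x = np.arange(64)
    shape = np.exp(-0.5 * ((x - peak) / 8.0) ** 2)
    return rng.poisson(200 * shape + 5, size=(n, 64)).astype(float)

X = np.vstack([spectra(20, 100), spectra(40, 100)])
y = np.repeat([0, 1], 100)

def l2_norm(a):                      # two of the normalizations compared
    return a / np.linalg.norm(a, axis=1, keepdims=True)

def minmax_norm(a):
    lo = a.min(axis=1, keepdims=True)
    hi = a.max(axis=1, keepdims=True)
    return (a - lo) / (hi - lo)

def centroid_acc(Xn, y):
    """Nearest-centroid classifier standing in for the SVM/ANN/CNN."""
    c = np.stack([Xn[y == k].mean(axis=0) for k in (0, 1)])
    pred = np.argmin(((Xn[:, None, :] - c) ** 2).sum(axis=2), axis=1)
    return float((pred == y).mean())

accs = {name: centroid_acc(f(X), y)
        for name, f in [("l2", l2_norm), ("minmax", minmax_norm)]}
print(accs)
```

Normalization matters precisely because a gain shift rescales the energy axis: a normalization that is stable under that rescaling preserves accuracy on the gain-shifted test sets.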

Reliability Analysis of Final Settlement Using Terzaghi's Consolidation Theory (테르자기 압밀이론을 이용한 최종압밀침하량에 관한 신뢰성 해석)

  • Chae, Jong Gil;Jung, Min Su
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6C
    • /
    • pp.349-358
    • /
    • 2008
  • In performing a reliability analysis for predicting the settlement with time of the alluvial clay layer at Kobe airport, the uncertainties of the geotechnical properties were examined based on stochastic and probabilistic theory. Using Terzaghi's consolidation theory as the objective function, the failure probability was normalized based on the AFOSM method. The reliability analysis showed that the occurrence probabilities for target settlements within ${\pm}10%$ and ${\pm}25%$ of the total settlement from the deterministic analysis were 30~50% and 60~90%, respectively. Considering that the coefficients of variation of the input variables are similar to those reported in past studies, the acceptable error range of the total settlement would be about 10% of the predicted total settlement. The sensitivity analysis showed that the factors that most significantly affect the settlement analysis are the uncertainties of the compression index Cc, the pre-consolidation stress Pc, and the prediction model employed. Accordingly, for a prediction with high reliability, it is very important to obtain reliable soil properties such as Cc and Pc by performing laboratory tests in which the in-situ stress and strain conditions are properly simulated.
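Because Terzaghi's final settlement is linear in the compression index, a mean-value FOSM calculation (a simpler cousin of the AFOSM iteration used in the paper) reduces to a few lines; all soil parameters below are hypothetical:

```python
import numpy as np

# Terzaghi 1-D consolidation settlement:
#   S = Cc / (1 + e0) * H * log10((p0 + dp) / p0)
H, e0, p0, dp = 10.0, 1.2, 100.0, 80.0     # m, -, kPa, kPa (assumed)
Cc_mean, Cc_cov = 0.45, 0.25               # assumed compression index stats

def settlement(Cc):
    return Cc / (1 + e0) * H * np.log10((p0 + dp) / p0)

# Mean-value FOSM: S is linear in Cc, so COV(S) = COV(Cc), and the
# reliability index for exceeding 1.25 * (mean settlement) is:
S_mean = settlement(Cc_mean)
S_std = S_mean * Cc_cov
beta = (1.25 * S_mean - S_mean) / S_std
print(round(S_mean, 3), round(beta, 2))
```

With only Cc random, a ±25% target band sits one standard deviation away, i.e. a substantial exceedance probability, consistent in spirit with the 60~90% occurrence probabilities the study reports for the ±25% band.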

Prediction of Expected Residual Useful Life of Rubble-Mound Breakwaters Using Stochastic Gamma Process (추계학적 감마 확률과정을 이용한 경사제의 기대 잔류유효수명 예측)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.31 no.3
    • /
    • pp.158-169
    • /
    • 2019
  • A probabilistic model that can predict the residual useful lifetime of a structure is formulated using the gamma process, one of the stochastic processes. The formulated model can take into account both the sampling uncertainty associated with the damages measured up to now and the temporal uncertainty of cumulative damage over time. A method for estimating the parameters of the stochastic model is also proposed, based on the least squares method and the method of moments, so that the age of a structure, the operational environment, and the evolution of damage with time can be considered. Features related to the residual useful lifetime are first investigated through a sensitivity analysis on the parameters under a simple setting of a single damage value measured at the current age. The stochastic model is then applied directly to a rubble-mound breakwater. The parameters of the gamma process are estimated from several experimental datasets on the damage processes of the armor rocks of a rubble-mound breakwater. The expected damage levels over time, numerically simulated with the estimated parameters, are in very good agreement with those from the flume tests. Various numerical calculations show that the probabilities of exceeding the failure limit converge, after a long time, to the constraint that the model must satisfy. Meanwhile, the expected residual useful lifetimes evaluated from the failure probabilities differ depending on the behavior of the damage history. In particular, as the coefficient of variation of cumulative damage becomes large, the expected residual useful lifetimes show significant discrepancies from those of the deterministic regression model. This is mainly due to the effect of the sampling and temporal uncertainties associated with damage, by which the first time to failure tends to be widely distributed. Therefore, the stochastic model presented in this paper for predicting the residual useful lifetime of a structure can properly implement a probabilistic assessment of the current damage state as well as take account of the temporal uncertainty of future cumulative damage.
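The first-passage idea behind the residual-lifetime prediction can be sketched by simulating a stationary gamma process up to a failure limit; the shape and scale parameters and the limit below are assumed, not the values fitted to the armor-rock data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stationary gamma process: damage increments over dt ~ Gamma(a*dt, b).
# Failure occurs at the first passage of the cumulative damage limit.
a, b, limit = 0.8, 0.05, 1.0               # assumed shape rate, scale, limit
dt, horizon, n_paths = 0.1, 100.0, 20_000
n_steps = int(horizon / dt)

inc = rng.gamma(a * dt, b, size=(n_paths, n_steps))
damage = np.cumsum(inc, axis=1)
t = (np.argmax(damage >= limit, axis=1) + 1) * dt   # first passage time
t[damage[:, -1] < limit] = np.inf                   # survived the horizon

mean_life = t[np.isfinite(t)].mean()
print(round(mean_life, 1))
```

Raising the COV of the increments (larger `b` at fixed mean rate `a*b`) spreads the first-passage distribution, which is exactly why the expected residual lifetimes diverge from the deterministic regression when the damage COV is large.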

Classification of Axis-symmetric Flaws with Non-Symmetric Cross-Sections using Simulated Eddy Current Testing Signals (모사 와전류 탐상신호를 이용한 비대칭 단면을 갖는 축대칭 결함의 형상분류)

  • Song, S.J.;Kim, C.H.;Shin, Y.K.;Lee, H.B.;Park, Y.W.;Yim, C.J.
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.21 no.5
    • /
    • pp.510-517
    • /
    • 2001
  • This paper describes an initial study on the application of eddy current pattern recognition approaches to more realistic flaw characterization in steam generator tubes. For this purpose, finite element model-based theoretical eddy current testing (ECT) signals are simulated for 5 types of OD flaws with variation in flaw size parameters and testing frequency. In addition, three kinds of software are developed for convenience in applying the steps of the pattern recognition approach: feature extraction, feature selection, and classification by probabilistic neural networks (PNNs). The crossing point of the ECT signals simulated for flaws with non-symmetric cross-sections deviates from the origin of the impedance plane. New features taking advantage of this phenomenon are added to complete the feature set, with a total of 18 features. Classification with PNNs is then performed based on this feature set. The PNN classifiers show high performance in identifying the symmetry of a flaw's cross-section; however, they show very limited success in interrogating the sharpness of flaw tips.
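The impedance-plane features the abstract alludes to, including the deviation of the signal's central point from the origin for non-symmetric cross-sections, can be sketched on a fabricated complex trajectory; the signal model and the three features below are assumptions of the example, not the paper's 18-feature set:

```python
import numpy as np

# Fabricated ECT impedance-plane trajectory: an asymmetric amplitude
# modulation plus a constant offset stands in for a flaw with a
# non-symmetric cross-section, whose trace is shifted off the origin.
theta = np.linspace(0, 2 * np.pi, 400)
signal = (1.0 + 0.3 * np.cos(theta)) * np.exp(1j * theta) + (0.1 + 0.05j)

features = {
    "max_amplitude": np.abs(signal).max(),
    "phase_at_max": np.angle(signal[np.argmax(np.abs(signal))]),
    "centroid_offset": abs(signal.mean()),   # deviation from the origin
}
print({k: round(float(v), 3) for k, v in features.items()})
```

For a symmetric cross-section the trajectory is balanced about the origin and the centroid offset is near zero, so this single scalar already separates the two symmetry classes that the PNN identified well.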


Variation of probability of sonar detection by internal waves in the South Western Sea of Jeju Island (제주 서남부해역에서 내부파에 의한 소나 탐지확률 변화)

  • An, Sangkyum;Park, Jungyong;Choo, Youngmin;Seong, Woojae
    • The Journal of the Acoustical Society of Korea
    • /
    • v.37 no.1
    • /
    • pp.31-38
    • /
    • 2018
  • Based on data measured in the south western sea of Jeju Island during SAVEX15 (Shallow Water Acoustic Variability EXperiment 2015), the effect of internal waves on the PPD (Predictive Probability of Detection) of a sonar system was analyzed. The south western sea of Jeju Island has complex flows due to internal waves and the USC (Underwater Sound Channel). In this paper, sonar performance is predicted by a probabilistic approach. The LFM (Linear Frequency Modulation) and MLS (Maximum Length Sequence) signals in the 11 kHz - 31 kHz band of the SAVEX15 data were processed to calculate the TL (Transmission Loss) and NL (Noise Level) at a source-receiver distance of approximately 2.8 km. The PDFs (Probability Density Functions) of TL and NL are convolved to obtain the PDF of the SE (Signal Excess), and the PPD according to the depths of the source and receiver is calculated. Analysis of the changes in the PPD over time in the presence of internal waves, such as soliton packets and the internal tide, confirmed that each type of internal wave affects the PPD in a different way.
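The convolution step (building the PDF of the signal excess from the PDFs of TL and NL, then taking the PPD as the probability of positive signal excess) can be sketched with histogram densities; the Gaussian TL/NL statistics and the sonar-equation constant `K` are assumed stand-ins for the SAVEX15 measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical TL and NL samples (dB); the sonar equation is reduced
# to SE = K - TL - NL with a fixed source-level constant K.
TL = rng.normal(60.0, 3.0, 50_000)
NL = rng.normal(50.0, 2.0, 50_000)

h = 0.5                                        # grid spacing (dB)
bins = np.arange(0.0, 200.0, h)
p_tl, _ = np.histogram(TL, bins=bins, density=True)
p_nl, _ = np.histogram(NL, bins=bins, density=True)

# PDF of TL + NL via discrete convolution of the two densities
p_sum = np.convolve(p_tl, p_nl) * h
x_sum = np.arange(len(p_sum)) * h              # sum-axis grid (approximate)

K = 115.0                                      # assumed constant (dB)
ppd = p_sum[x_sum < K].sum() * h               # PPD = P(SE > 0) = P(TL+NL < K)
print(round(ppd, 2))
```

Internal waves enter this picture by reshaping the TL histogram over time; the convolution then propagates that change directly into the PPD.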

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized. The time delays for computing the projection, the errors between measured and estimated data, and the backprojection within one iteration were measured. The total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1,024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold, caused by delays in the CPU-based computation after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time delay per iteration owing to the use of shared memory. Conclusion: The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
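The iteration that the GPU implementation parallelizes is the multiplicative ML-EM update x <- x * A^T(y / Ax) / A^T 1, whose projection (Ax) and backprojection (A^T r) steps are the parallelized kernels. A toy CPU version with a random system matrix and fabricated dimensions shows the algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy emission system: A maps a 16-pixel image to 24 detector bins.
n_pix, n_det = 16, 24
A = rng.random((n_det, n_pix))
x_true = rng.random(n_pix) + 0.1
y = rng.poisson(A @ x_true * 50) / 50.0        # noisy projection data

x = np.ones(n_pix)                             # uniform initial estimate
sens = A.T @ np.ones(n_det)                    # sensitivity image A^T 1
for _ in range(200):
    ratio = y / (A @ x)                        # projection + error ratio
    x *= (A.T @ ratio) / sens                  # backprojection + update

print(round(float(np.linalg.norm(A @ x - y) / np.linalg.norm(y)), 3))
```

Each update is a pair of matrix-vector products, which is why mapping them to per-ray and per-pixel GPU threads yields the large speedups reported above.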