• Title/Summary/Keyword: Benchmark Data

696 search results

Current Status of ACE Format Libraries for MCNP at Nuclear Data Center of KAERI

  • Kim, Do Heon;Gil, Choong-Sup;Lee, Young-Ouk
    • Journal of Radiation Protection and Research
    • /
    • v.41 no.3
    • /
    • pp.191-195
    • /
    • 2016
  • Background: The current status of the ACE-format MCNP/MCNPX libraries produced by the NDC of KAERI is presented with a short description of each library. Materials and Methods: Validation calculations with the recent nuclear data evaluations ENDF/B-VII.0, ENDF/B-VII.1, JEFF-3.2, and JENDL-4.0 have been carried out with the MCNP5 code for 119 criticality benchmark problems taken from the expanded criticality validation suite supplied by LANL. The overall performance of the ACE-format KN-libraries has been analyzed in comparison with results calculated with the ENDF/B-VII.0-based ENDF70 library of LANL. Results and Discussion: By comparing the RMS errors and $\chi^2$ values for five benchmark categories as well as for the whole benchmark set, it was confirmed that the ENDF/B-VII.1-based KNE71 library performed better than the others. ENDF/B-VII.1 and JEFF-3.2 tend to yield more reliable MCNP results within certain confidence intervals regarding the total uncertainties of the $k_{eff}$ values. Conclusion: Adopting the latest evaluated nuclear data may ensure better outcomes in various research and development areas.
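As a rough illustration of the RMS-error and $\chi^2$ comparison described above, the following sketch computes both metrics for a handful of made-up benchmark cases; the $k_{eff}$ values and uncertainties are invented for illustration, not taken from the KN-library results.

```python
import math

# Hypothetical (calculated keff, benchmark keff, total 1-sigma uncertainty) triples
cases = [
    (0.99850, 1.00000, 0.0012),
    (1.00210, 1.00000, 0.0030),
    (0.99760, 1.00000, 0.0020),
]

def rms_error(cases):
    # Root-mean-square of the C/E - 1 deviations over all benchmarks
    return math.sqrt(sum((c / e - 1.0) ** 2 for c, e, _ in cases) / len(cases))

def chi_square(cases):
    # Squared deviations weighted by the benchmark uncertainties, summed
    return sum(((c - e) / s) ** 2 for c, e, s in cases)

print(rms_error(cases))   # overall spread of C/E around unity
print(chi_square(cases))  # deviation in units of benchmark uncertainty
```

A smaller $\chi^2$ over a benchmark category indicates that the library's deviations are small relative to the stated benchmark uncertainties, which is the sense in which KNE71 outperformed the other libraries.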

Verification of OpenMC for fast reactor physics analysis with China experimental fast reactor start-up tests

  • Guo, Hui;Huo, Xingkai;Feng, Kuaiyuan;Gu, Hanyang
    • Nuclear Engineering and Technology
    • /
    • v.54 no.10
    • /
    • pp.3897-3908
    • /
    • 2022
  • High-fidelity nuclear data libraries and neutronics simulation tools are essential for the development of fast reactors. The IAEA coordinated research project on "Neutronics Benchmark of CEFR Start-Up Tests" offers valuable data for the qualification of nuclear data libraries and neutronics codes. This paper focuses on the verification and validation of CEFR start-up modelling with the OpenMC Monte Carlo code against the experimental measurements. The OpenMC simulation results agree well with the measurements in criticality, control rod worth, sodium void reactivity, temperature reactivity, subassembly swap reactivity, and reaction distribution. In the feedback coefficient evaluations, an additional-state method shows high consistency with lower uncertainty. Of the 122 relative errors in the nuclear reaction distribution benchmark, 104 are below 10% and 84 are below 5%. The results demonstrate the high reliability of OpenMC for fast reactor simulations. In a companion paper, the influence of cross-section libraries is investigated using the neutronics models developed here.
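Feedback coefficient evaluation of the kind mentioned above typically reduces to differencing the reactivities of two steady states; whether this matches the paper's "additional state" method exactly is an assumption, and the $k_{eff}$ and temperature values below are purely illustrative, not CEFR data.

```python
def reactivity_pcm(k):
    # Reactivity in pcm from an effective multiplication factor
    return (k - 1.0) / k * 1e5

def feedback_coefficient(k_ref, t_ref, k_pert, t_pert):
    # Difference the reactivities of a reference state and a perturbed
    # state, divided by the temperature change, giving pcm/K
    return (reactivity_pcm(k_pert) - reactivity_pcm(k_ref)) / (t_pert - t_ref)

# Illustrative keff values at two coolant temperatures (K)
coef = feedback_coefficient(1.00000, 523.0, 0.99850, 573.0)
print(coef)  # negative: reactivity falls as temperature rises
```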

ASUSD nuclear data sensitivity and uncertainty program package: Validation on fusion and fission benchmark experiments

  • Kos, Bor;Cufar, Aljaz;Kodeli, Ivan A.
    • Nuclear Engineering and Technology
    • /
    • v.53 no.7
    • /
    • pp.2151-2161
    • /
    • 2021
  • Nuclear data (ND) sensitivity and uncertainty (S/U) quantification in shielding applications is performed using deterministic and probabilistic approaches. In this paper, the validation of the newly developed deterministic program package ASUSD (ADVANTG + SUSD3D) is presented. ASUSD was developed with the aim of automating the process of ND S/U analysis while retaining the computational efficiency of the deterministic approach. The paper includes a detailed description of each of the programs contained within ASUSD, the computational workflow, and validation results. ASUSD was validated on two shielding benchmark experiments from the Shielding Integral Benchmark Archive and Database (SINBAD): the fission-relevant ASPIS Iron 88 experiment and the fusion-relevant Frascati Neutron Generator (FNG) Helium Cooled Pebble Bed (HCPB) Test Blanket Module (TBM) mock-up experiment. The validation process was performed in two stages. First, the Denovo discrete ordinates transport solver was validated as a standalone solver. Second, the ASUSD program package as a whole was validated as an ND S/U analysis tool. Both stages yielded excellent results, with a maximum difference of 17% in the final ND-induced uncertainties between ASUSD and the stochastic ND S/U approach. Based on these results, ASUSD has proven to be a user-friendly and computationally efficient tool for deterministic ND S/U analysis of shielding geometries.
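First-order deterministic S/U analysis of the kind ASUSD automates combines sensitivity coefficients with an ND covariance matrix through the "sandwich rule"; the toy 3-group sensitivity vector and relative covariance values below are invented for illustration.

```python
import math

# Sensitivity vector S (relative change in response per relative change in
# each cross-section group) and relative covariance matrix C -- toy values
S = [0.8, -0.3, 0.1]
C = [
    [0.0025, 0.0010, 0.0000],
    [0.0010, 0.0040, 0.0005],
    [0.0000, 0.0005, 0.0016],
]

def sandwich(S, C):
    # First-order uncertainty propagation: var = S^T C S
    n = len(S)
    var = sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))
    return math.sqrt(var)

print(sandwich(S, C))  # relative 1-sigma uncertainty due to ND
```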

A Benchmark Test of Spatial Big Data Processing Tools and a MapReduce Application

  • Nguyen, Minh Hieu;Ju, Sungha;Ma, Jong Won;Heo, Joon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.5
    • /
    • pp.405-414
    • /
    • 2017
  • Spatial data processing often poses challenges due to the unique characteristics of spatial data, and this becomes more complex with spatial big data. Some tools have been developed and provided to users; however, they are not common for a regular user. This paper presents a benchmark test between two notable spatial big data processing tools: GIS Tools for Hadoop and SpatialHadoop. In addition, a MapReduce application is introduced as a baseline to evaluate the effectiveness of the two tools and to examine the impact of the number of map/reduce tasks on performance. Using these tools and New York taxi trajectory data, we perform a spatial processing task that filters the drop-off locations within the Manhattan area, and observe the performance of the tools with respect to increasing data size and varying numbers of worker nodes. The results of this study are as follows: 1) GIS Tools for Hadoop automatically creates a Quadtree index for each spatial processing task, so its performance improves significantly; however, users should be familiar with Java to handle the tool conveniently. 2) SpatialHadoop does not automatically create a spatial index for the data, so its performance is much lower than that of GIS Tools for Hadoop on the same spatial processing task; however, SpatialHadoop achieved the best result in performing a range query. 3) The performance of our MapReduce application improved fourfold when the number of reducers was increased from 1 to 12.
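The drop-off filtering task can be sketched as a map phase that emits a count for each point inside the target region and a reduce phase that sums the counts. This is a minimal single-process sketch, and the bounding box standing in for the Manhattan polygon is illustrative, not the real boundary.

```python
# Toy bounding box (min_lon, min_lat, max_lon, max_lat) -- illustrative only
MANHATTAN = (-74.02, 40.70, -73.93, 40.88)

def map_phase(record):
    # Emit ("manhattan", 1) for drop-off points inside the bounding box
    lon, lat = record
    min_lon, min_lat, max_lon, max_lat = MANHATTAN
    if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
        yield ("manhattan", 1)

def reduce_phase(pairs):
    # Sum the emitted counts per key, as a single reducer would
    counts = {}
    for key, value in pairs:
        counts[key] = counts.get(key, 0) + value
    return counts

dropoffs = [(-73.98, 40.75), (-73.90, 40.85), (-74.00, 40.72)]
mapped = [pair for rec in dropoffs for pair in map_phase(rec)]
print(reduce_phase(mapped))  # two of the three points fall inside the box
```

In the real application the map tasks run in parallel over HDFS splits of the trajectory data, which is why the number of map/reduce tasks affects throughput.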

Criticality benchmarking of ENDF/B-VIII.0 and JEFF-3.3 neutron data libraries with RMC code

  • Zheng, Lei;Huang, Shanfang;Wang, Kan
    • Nuclear Engineering and Technology
    • /
    • v.52 no.9
    • /
    • pp.1917-1925
    • /
    • 2020
  • New versions of the ENDF/B and JEFF data libraries have been released during the past two years, with significant updates to the neutron reaction sublibrary and the thermal neutron scattering sublibrary. To obtain a more comprehensive impression of the criticality quality of these two latest neutron data libraries, and to provide a reference for selecting evaluated nuclear data libraries for science and engineering applications of the Reactor Monte Carlo code RMC, criticality benchmarking of the two libraries has been performed. RMC was employed as the computational tool, and its processing capability for the continuous-representation ENDF/B-VIII.0 thermal neutron scattering laws was developed. An RMC criticality validation suite consisting of 116 benchmarks was established for the benchmarking work. The latest ACE-format data libraries of the neutron reactions and thermal neutron scattering laws for ENDF/B-VIII.0, ENDF/B-VII.1, and JEFF-3.3 were downloaded from the corresponding official sites. The ENDF/B-VII.0 data library was also employed to provide code-to-code validation for RMC. All calculations for the four data libraries were performed with a parallel version of RMC, and all calculated standard deviations are below 30 pcm. Comprehensive analyses, including the C/E values with uncertainties, the δk/σ values, and the χ² and ⟨|Δ|⟩ metrics, were conducted and presented. The calculated keff eigenvalues based on the four data libraries generally agree well with the benchmark evaluations for most cases. Among the 116 criticality benchmarks, the numbers of calculated keff eigenvalues that agree with the benchmark evaluations within the 3σ interval (a confidence level of 99.6%) are 107, 109, 112, and 113 for ENDF/B-VII.0, ENDF/B-VII.1, ENDF/B-VIII.0, and JEFF-3.3, respectively. These results indicate that the ENDF/B-VIII.0 neutron data library performs better on average.
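The δk/σ screening used above, counting how many benchmarks agree within 3σ, can be sketched as follows; the calculated and benchmark values are illustrative, not results from the RMC suite.

```python
# Hypothetical (calculated keff, benchmark keff, benchmark 1-sigma) triples
results = [
    (1.00120, 1.00000, 0.0010),
    (0.99700, 1.00000, 0.0008),
    (1.00050, 1.00000, 0.0030),
]

def delta_k_over_sigma(calc, bench, sigma):
    # Deviation from the benchmark in units of the benchmark uncertainty
    return (calc - bench) / sigma

within_3sigma = sum(
    1 for c, e, s in results if abs(delta_k_over_sigma(c, e, s)) <= 3.0
)
print(within_3sigma)  # how many cases agree within the 3-sigma interval
```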

Performance Evaluation of Data Archive System for High-Speed Saving of Correlated Result of Daejeon Correlator (대전상관기의 상관결과 고속저장을 위한 데이터아카이브 시스템의 성능시험)

  • Roh, Duk-Gyoo;Oh, Se-Jin;Yeom, Jae-Hwan;Oh, Chung-Sik;Yun, Young-Joo;Jung, Jin-Seung;Jung, Dong-Kyu
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.2
    • /
    • pp.55-63
    • /
    • 2014
  • In this paper, we present a performance evaluation of a data archive system for saving the correlation results of the Daejeon correlator at high data rates. The Daejeon correlator supports various correlation modes, but the output rate of the correlation results depends on the integration time of each mode. The maximum data rate of the Daejeon correlator is 1.4 GB/s in the C1 mode with a 25.6 ms integration time. In this study, the performance of the proposed data archive system was evaluated while saving correlation results over four 10 GbE optical links connected to the VCS (VLBI Correlation Subsystem), the core system of the Daejeon correlator. Data archive systems from two vendors were selected for the experiments, and a benchmark test was performed. This paper describes the data generation program developed to produce VCS correlation result files for the benchmark test, together with the evaluation results.
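A quick sanity check of the quoted figures, assuming 8 bits per byte and ignoring protocol overhead: at 1.4 GB/s, each 25.6 ms integration period produces roughly 36 MB of correlation output, while four 10 GbE links provide 5 GB/s of aggregate raw capacity.

```python
required_rate_gb_s = 1.4   # peak correlation output in C1 mode, GB/s
integration_s = 0.0256     # 25.6 ms integration time
links = 4
link_rate_gbit_s = 10.0    # one 10 GbE optical link

# Output produced per integration period, in megabytes
per_dump_mb = required_rate_gb_s * 1000.0 * integration_s

# Aggregate raw link capacity in GB/s (8 bits per byte, overhead ignored)
capacity_gb_s = links * link_rate_gbit_s / 8.0

print(per_dump_mb, capacity_gb_s)
```

The raw capacity thus exceeds the peak output rate; the benchmark question is whether the archive's storage back end can sustain the sequential write rate.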

A Bayesian joint model for continuous and zero-inflated count data in developmental toxicity studies

  • Hwang, Beom Seuk
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.2
    • /
    • pp.239-250
    • /
    • 2022
  • In many applications, we frequently encounter correlated multiple outcomes measured on the same subject. Joint modeling of such outcomes can improve the efficiency of inference compared to independent modeling. For instance, in developmental toxicity studies, fetal weight and the number of malformed pups are measured on pregnant dams exposed to different levels of a toxic substance, and the association between these outcomes should be taken into account in the model. The number of malformations may have many zeros, which should be analyzed with zero-inflated count models. Motivated by applications in developmental toxicity studies, we propose a Bayesian joint modeling framework for continuous and count outcomes with excess zeros. In our model, a zero-inflated Poisson (ZIP) regression model describes the count data, and subject-specific random effects account for the correlation across the two outcomes. We implement a Bayesian approach using an MCMC procedure with data augmentation and adaptive rejection sampling. We apply the proposed model to dose-response analysis in a developmental toxicity study to estimate the benchmark dose for risk assessment.
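The zero-inflated Poisson piece of such a joint model has a simple closed-form likelihood: with probability π the count is a structural zero, otherwise it is Poisson. A minimal sketch with arbitrary parameter values:

```python
import math

def zip_logpmf(y, pi, lam):
    # Zero-inflated Poisson log-probability: a zero can arise either from
    # the structural-zero component (prob. pi) or from Poisson(lam)
    if y == 0:
        return math.log(pi + (1.0 - pi) * math.exp(-lam))
    return math.log(1.0 - pi) + y * math.log(lam) - lam - math.lgamma(y + 1)

# Excess zeros make y = 0 more likely than under a plain Poisson(2)
print(zip_logpmf(0, 0.3, 2.0), zip_logpmf(3, 0.3, 2.0))
```

In the full Bayesian model, π and λ would be linked to dose and to the shared subject-specific random effect that couples the count outcome to fetal weight.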

Construction of Virtual Images for a Benchmark Test of 3D-PTV Algorithms for Flows

  • Hwang, Tae-Gyu;Doh, Deog-Hee;Hong, Seong-Dae;Kihm, Kenneth D.
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.28 no.8
    • /
    • pp.1185-1194
    • /
    • 2004
  • Virtual images for PIV are produced to construct a benchmark test tool for PTV systems. Camera parameters obtained from an actual experiment are used to construct the virtual images, and LES (Large Eddy Simulation) data sets of a channel flow are used to generate them. Using the virtual images and the camera parameters, three-dimensional velocity vectors are obtained for the channel flow. The capabilities of a 3D-PTV algorithm are investigated by comparing the results obtained from the virtual images with those from an actual measurement of the channel flow.
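Rendering a virtual particle image amounts to projecting each 3D particle position through a camera model onto the image plane. The sketch below uses a generic pinhole model with invented intrinsics; the actual system uses the calibrated parameters obtained from the experiment, which may include distortion terms not shown here.

```python
# Illustrative pinhole-camera intrinsics (not the calibrated values)
FOCAL = 1000.0          # focal length in pixels
CX, CY = 320.0, 240.0   # principal point in pixels

def project(point3d):
    # Perspective projection of a 3D particle position (camera coordinates,
    # z pointing along the optical axis) onto the image plane
    x, y, z = point3d
    u = FOCAL * x / z + CX
    v = FOCAL * y / z + CY
    return (u, v)

print(project((0.1, -0.05, 2.0)))  # pixel location of one particle
```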

Analytical Coexistence Benchmark for Assessing the Utmost Interference Tolerated by IEEE 802.20

  • Abdulla, Mouhamed;Shayan, Yousef R.
    • Journal of Information Processing Systems
    • /
    • v.7 no.1
    • /
    • pp.43-52
    • /
    • 2011
  • Whether it is crosstalk, harmonics, or in-band operation of wireless technologies, interference between a reference system and a host of offenders is virtually unavoidable. In past contributions, a benchmark has been established and considered for coexistence analysis with a number of technologies, including FWA, UMTS, and WiMAX. However, the previously presented model does not take into account the mobility factor of the reference node, in addition to a number of interdependent requirements regarding the link direction, channel state, data rate, and system factors, limiting its applicability to the MBWA (IEEE 802.20) standard. Thus, in this correspondence we analytically derive, over diverse modes, the greatest aggregate interference level tolerated for high-fidelity transmission, tailored specifically to the MBWA standard. Our results, in the form of benchmark indicators, should be of particular interest to peers analyzing and researching RF coexistence scenarios with this new protocol.
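In link-budget terms, the greatest tolerable aggregate interference is the level at which the signal-to-interference-plus-noise ratio just meets the receiver's requirement. This sketch is a generic illustration of that idea, not the paper's derivation, and the power levels are invented values rather than IEEE 802.20 benchmark figures.

```python
import math

def max_interference_dbm(signal_dbm, required_sinr_db, noise_dbm):
    # Largest I such that S / (N + I) still meets the required SINR.
    # Convert dBm to linear milliwatts, solve, and convert back.
    s = 10 ** (signal_dbm / 10.0)
    n = 10 ** (noise_dbm / 10.0)
    sinr = 10 ** (required_sinr_db / 10.0)
    i_max = s / sinr - n
    return 10.0 * math.log10(i_max)

# Illustrative: -80 dBm signal, 10 dB required SINR, -100 dBm noise floor
print(max_interference_dbm(-80.0, 10.0, -100.0))
```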

Benchmark for Deep Learning based Visual Odometry and Monocular Depth Estimation (딥러닝 기반 영상 주행기록계와 단안 깊이 추정 및 기술을 위한 벤치마크)

  • Choi, Hyukdoo
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.2
    • /
    • pp.114-121
    • /
    • 2019
  • This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply it to VO and MDE. Just a couple of years ago, the two problems were studied independently in a supervised way, but now they are coupled and trained together in an unsupervised way. However, before designing fancy models and losses, researchers have to customize datasets for training and testing, and after training, a model has to be compared with the existing models, which is also a huge burden. The benchmark provides a ready-to-use input dataset for VO and MDE research in 'tfrecords' format, and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In the experiments, the existing models were evaluated to verify the performances presented in the corresponding papers, and we found that the evaluation results fall short of the reported performances.
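Evaluation in MDE benchmarks of this kind commonly reports metrics such as absolute relative error and RMSE between predicted and ground-truth depths; the depth values below are illustrative only.

```python
import math

# Illustrative ground-truth and predicted depths (meters)
gt = [2.0, 4.0, 8.0]
pred = [2.2, 3.6, 8.4]

# Absolute relative error: mean of |pred - gt| / gt over valid pixels
abs_rel = sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(gt)

# Root-mean-square error in depth units
rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt))

print(abs_rel, rmse)
```

Recomputing such metrics on a fixed dataset split is exactly the kind of comparison the benchmark automates, and discrepancies against the numbers in the original papers are what the experiments above surfaced.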