• Title/Summary/Keyword: Benchmark verification

71 search results

A Minimization Technique for BDD based on Microcanonical Optimization (Microcanonical Optimization을 이용한 BDD의 최소화 기법)

  • Lee, Min-Na;Jo, Sang-Yeong
    • The KIPS Transactions:PartA
    • /
    • v.8A no.1
    • /
    • pp.48-55
    • /
    • 2001
  • Using BDDs, we can represent Boolean functions uniquely and compactly; hence, BDDs have become widely used in CAD applications such as logic synthesis and formal verification. The size of the BDD representation of a function is very sensitive to the choice of ordering on the input variables. Therefore, it is very important to find a good variable ordering that minimizes the size of the BDD. Since finding an optimal ordering is NP-complete, several heuristic algorithms have been proposed to find good variable orderings. In this paper, we propose a variable ordering algorithm based on μO (microcanonical optimization). μO consists of two distinct procedures that are applied alternately: initialization and sampling. The initialization phase executes a fast local search; the sampling phase leaves the local optimum obtained in the previous initialization while remaining close to that area of the search space. The proposed algorithm has been tested on well-known benchmark circuits and shows superior performance compared to an algorithm based on simulated annealing.

  • PDF
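
The alternating initialization/sampling loop described in the abstract can be sketched generically. This is a minimal sketch, assuming a toy cost function in place of the BDD size; the move set, kick count, and acceptance rule are illustrative, not the paper's implementation.

```python
import random

def initialization(order, cost):
    """Fast local search: accept adjacent swaps that lower the cost."""
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            c0 = cost(order)
            order[i], order[i + 1] = order[i + 1], order[i]
            if cost(order) < c0:
                improved = True
            else:
                order[i], order[i + 1] = order[i + 1], order[i]  # undo swap
    return order

def sampling(order, rng, kicks=2):
    """Perturb the local optimum with a few random swaps, staying nearby."""
    order = order[:]
    for _ in range(kicks):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def micro_opt(n, cost, rounds=20, seed=0):
    """Alternate initialization and sampling, keeping the best ordering seen."""
    rng = random.Random(seed)
    best = initialization(list(range(n)), cost)
    for _ in range(rounds):
        candidate = initialization(sampling(best, rng), cost)
        if cost(candidate) <= cost(best):
            best = candidate
    return best

# Toy cost standing in for BDD size: penalize variables far from a target slot.
target = [3, 0, 2, 1, 4]
cost = lambda order: sum(abs(p - target[v]) for p, v in enumerate(order))
best_order = micro_opt(5, cost)
```

In a real BDD setting, `cost` would evaluate the node count of the diagram under the candidate ordering (an expensive operation, which is why the local search must be fast).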

Search space pruning technique for optimization of decision diagrams (결정 다이어그램의 최적화를 위한 탐색공간 축소 기법)

  • Song, Moon-Bae;Dong, Gyun-Tak;Chang, Hoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.8
    • /
    • pp.2113-2119
    • /
    • 1998
  • The optimization problem of BDDs plays an important role in logic synthesis and formal verification. Since the variable ordering has a great impact on the size and form of a BDD, finding a good variable order is a very important problem. In this paper, a new variable ordering scheme called the incremental optimization algorithm is presented. The proposed algorithm reduces the search space to less than half of that of the conventional sifting algorithm, and computing time is greatly reduced without degrading performance. Moreover, the incremental optimization algorithm is much simpler than other variable reordering algorithms, including the sifting algorithm. The proposed algorithm has been implemented, and its efficiency has been shown using many benchmark circuits.

  • PDF
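
The conventional sifting algorithm that the abstract uses as its baseline can be sketched as follows. A real implementation moves a variable through a live BDD via adjacent-swap operations and reads off the node count; the toy cost function here is a stand-in assumption.

```python
def sift(order, cost):
    """Classical sifting: for each variable in turn, try every position in
    the ordering and keep the position with the lowest cost."""
    order = order[:]
    for var in list(order):
        best_order, best_cost = order, cost(order)
        rest = [v for v in order if v != var]
        for pos in range(len(order)):
            trial = rest[:pos] + [var] + rest[pos:]
            c = cost(trial)
            if c < best_cost:
                best_order, best_cost = trial, c
        order = best_order
    return order

# Toy cost standing in for BDD size: distance of each variable from a target slot.
target = [2, 0, 1]
cost = lambda order: sum(abs(p - target[v]) for p, v in enumerate(order))
best = sift([0, 1, 2], cost)
```

Each variable visits every position, which is the search space the incremental algorithm described above aims to cut by more than half.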

A TWO-DIMENSIONAL FINITE VOLUME METHOD FOR TRANSIENT SIMULATION OF TIME- AND SCALE-DEPENDENT TRANSPORT IN HETEROGENEOUS AQUIFER SYSTEMS

  • Liu, F.;Turner, I.;Ahn, V.;Su, N.
    • Journal of applied mathematics & informatics
    • /
    • v.11 no.1_2
    • /
    • pp.215-241
    • /
    • 2003
  • In this paper, solute transport in heterogeneous aquifers is investigated using a modified Fokker-Planck equation (MFPE). This newly developed mathematical model is characterised by a time- and scale-dependent dispersivity. A two-dimensional finite volume quadrilateral mesh method (FVQMM) based on a quadrilateral background interpolation mesh is developed for analysing the model. The FVQMM transforms the coupled non-linear partial differential equations into a system of ordinary differential equations, which is solved using backward differentiation formulae of orders one through five to advance the solution in time. Three examples are presented to demonstrate model verification and utility. Henry's classic benchmark problem is used to show that the MFPE captures significant features of transport phenomena in heterogeneous porous media, including enhanced transport of salt in the upper layer due to its parameters that represent the dependence of transport processes on scale and time. The time and scale effects are investigated. Numerical results are compared with published results on the same problems.
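
The time-stepping strategy described above, backward differentiation formulae advancing a semi-discretized system, can be illustrated on a scalar test equation. For a linear problem each implicit step has a closed form; the FVQMM system would instead require a (non)linear solve per step. A minimal sketch, not the authors' solver:

```python
import math

def bdf_decay(lam, u0, h, steps):
    """Integrate du/dt = lam*u with one BDF1 (backward Euler) startup step
    followed by BDF2 steps. For this linear scalar problem each implicit
    step is solved in closed form."""
    us = [u0]
    # BDF1 startup: u1 = u0 / (1 - h*lam)
    us.append(us[-1] / (1.0 - h * lam))
    for _ in range(steps - 1):
        # BDF2: (3/2*u_next - 2*u_n + 1/2*u_prev)/h = lam*u_next
        u_next = (2.0 * us[-1] - 0.5 * us[-2]) / (1.5 - h * lam)
        us.append(u_next)
    return us

# 100 steps of size 0.01 integrate to t = 1; compare against exp(-1).
us = bdf_decay(-1.0, 1.0, 0.01, 100)
```

Higher-order BDF members (up to order five, as used in the paper) follow the same pattern with more history points, trading startup complexity for accuracy on smooth solutions.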

Analysis of Encryption Algorithm Performance by Workload in BigData Platform (빅데이터 플랫폼 환경에서의 워크로드별 암호화 알고리즘 성능 분석)

  • Lee, Sunju;Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.6
    • /
    • pp.1305-1317
    • /
    • 2019
  • Although encryption for data protection is essential in the big data platform environments of public institutions and corporations, few performance verification studies on encryption algorithms have considered actual big data workloads. In this paper, we analyze the performance changes of AES, ARIA, and 3DES for each of six big data workloads while adding data and nodes in a MongoDB environment. This enables us to identify the optimal block cipher algorithm for each workload in a big data platform environment, testing various workloads over different data and node configurations with the Yahoo! Cloud Serving Benchmark (YCSB). Based on these results, we propose an optimized architecture that takes the workload characteristics into account.
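
The workload-by-workload measurement described above relies on YCSB's core operation mixes. Below is a minimal sketch of generating such a mix to replay against a database; the proportions follow the published YCSB defaults for workloads A-C, but the generator itself is illustrative, not the paper's harness.

```python
import random

# Operation mixes of the core YCSB workloads (read/update fractions);
# A = update-heavy (50/50), B = read-mostly (95/5), C = read-only.
WORKLOADS = {
    "A": {"read": 0.5, "update": 0.5},
    "B": {"read": 0.95, "update": 0.05},
    "C": {"read": 1.0},
}

def generate_ops(workload, n, seed=0):
    """Draw a reproducible sequence of n operations for the given workload."""
    rng = random.Random(seed)
    ops, weights = zip(*WORKLOADS[workload].items())
    return rng.choices(ops, weights=weights, k=n)

ops = generate_ops("A", 10_000)
```

Timing each operation of such a sequence against plaintext and encrypted collections, per algorithm and per workload, is the shape of the comparison the abstract describes.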

Development and verification of PWR core transient coupling calculation software

  • Li, Zhigang;An, Ping;Zhao, Wenbo;Liu, Wei;He, Tao;Lu, Wei;Li, Qing
    • Nuclear Engineering and Technology
    • /
    • v.53 no.11
    • /
    • pp.3653-3664
    • /
    • 2021
  • In the PWR three-dimensional transient coupling calculation software CORCA-K, the nodal Green's function method and the diagonally implicit Runge-Kutta method are used to solve the spatiotemporal neutron dynamic diffusion equation, and a single-phase closed channel model and a one-dimensional cylindrical transient heat conduction model are used to calculate the coolant and fuel temperatures. The LMW, NEACRP and PWR MOX/UO2 benchmarks and FangJiaShan (FJS) nuclear power plant (NPP) transient control rod movement cases are used to verify CORCA-K. The effects of burnup, fuel effective temperature and ejection rate on the control rod ejection process of a PWR are analyzed. The conclusions are as follows: (1) the core relative power and fuel Doppler temperature are in good agreement with the results of the benchmarks and ADPRES, and the deviation from the reference results is within 3.0% for the LMW and NEACRP benchmarks; (2) the variation trend of the FJS NPP core transient parameters is consistent with the results of SMART and ADPRES, and the core relative power is in better agreement with SMART when the weighting coefficient is 0.7. Compared with SMART, the maximum deviation is -5.08% in the rod ejection condition and -5.09% in the control rod complex movement condition.
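
The diagonally implicit Runge-Kutta integration mentioned above can be illustrated with the classic two-stage, L-stable SDIRK scheme on a scalar test equation. The coefficients are the standard γ = 1 − 1/√2 choice; the closed-form stage solves below hold only for this linear toy problem, whereas the diffusion system would need a linear solve per stage.

```python
import math

GAMMA = 1.0 - 1.0 / math.sqrt(2.0)  # L-stable two-stage SDIRK coefficient

def sdirk2_step(lam, u, h):
    """One step of the two-stage SDIRK scheme for u' = lam*u.
    Both stages share the diagonal coefficient GAMMA, so each implicit
    stage inverts the same operator (1 - h*lam*GAMMA)."""
    k1 = lam * u / (1.0 - h * lam * GAMMA)
    k2 = lam * (u + h * (1.0 - GAMMA) * k1) / (1.0 - h * lam * GAMMA)
    return u + h * ((1.0 - GAMMA) * k1 + GAMMA * k2)

# Integrate u' = -2u from u(0) = 1 to t = 1; compare against exp(-2).
u, h = 1.0, 0.01
for _ in range(100):
    u = sdirk2_step(-2.0, u, h)
```

The shared diagonal coefficient is what makes DIRK methods attractive for stiff reactor kinetics: one factorized operator serves every stage of the step.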

Validation of MCS code for shielding calculation using SINBAD

  • Feng, XiaoYong;Zhang, Peng;Lee, Hyunsuk;Lee, Deokjung;Lee, Hyun Chul
    • Nuclear Engineering and Technology
    • /
    • v.54 no.9
    • /
    • pp.3429-3439
    • /
    • 2022
  • The MCS code is a computer code developed by the Ulsan National Institute of Science and Technology (UNIST) for the simulation and calculation of nuclear reactor systems based on the Monte Carlo method. The code is currently used to solve two main types of reactor physics problems, namely criticality problems and radiation shielding problems. In this paper, the radiation shielding capability of the MCS code is validated by simulating selected SINBAD (Shielding Integral Benchmark Archive and Database) experiments. The validation was performed in two ways. First, the functionality and computational soundness of the MCS code were verified by comparing the simulation results with those of the MCNP code. Second, the validity and computational accuracy of the MCS code were confirmed by comparing the simulation results with the experimental results of SINBAD. The simulation results of the MCS code are highly consistent with those of the MCNP code, and they are within the 2σ error bound of the experimental results. This shows that the results of the MCS code are reliable when simulating radiation shielding problems.
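
The acceptance criterion quoted above (simulation within the 2σ error bound of the experiment) reduces to a one-line check; the function names here are illustrative, not part of either code.

```python
def within_2sigma(calc, meas, sigma):
    """Code-vs-experiment acceptance test: the calculated tally C agrees
    with the measurement E if |C - E| <= 2*sigma of the experiment."""
    return abs(calc - meas) <= 2.0 * sigma

def ce_ratios(calcs, meas):
    """Calculated-to-experimental (C/E) ratios, the usual shielding
    benchmark summary statistic."""
    return [c / e for c, e in zip(calcs, meas)]
```

Code-to-code comparison (MCS vs MCNP) typically uses the same machinery with the combined statistical uncertainty of the two Monte Carlo tallies in place of the experimental σ.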

Verification of neutronics and thermal-hydraulic coupled system with pin-by-pin calculation for PWR core

  • Zhigang Li;Junjie Pan;Bangyang Xia;Shenglong Qiang;Wei Lu;Qing Li
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3213-3228
    • /
    • 2023
  • As an important part of the digital reactor, pin-by-pin fine coupling calculation has been a research hotspot in the field of nuclear engineering in recent years. It provides more precise and realistic simulation results for reactor design, operation and safety evaluation. CORCA-K, a nodal code, is redeveloped as a robust pin-by-pin neutronics and thermal-hydraulics coupled calculation code for pressurized water reactor (PWR) cores. The nodal Green's function method (NGFM) is used to solve the three-dimensional space-time neutron dynamics equation, and a single-phase single-channel model and a one-dimensional heat conduction model are used to solve the fluid field and the fuel temperature field. The mesh scale of the core simulation is refined from nodal-wise to pin-wise. The code is verified against two benchmarks: NEACRP 3D PWR and PWR MOX/UO2. The results show that: (1) the pin-by-pin coupling calculation system has good accuracy and can accurately simulate the key parameters in steady-state and transient coupling conditions, in good agreement with the reference results; (2) compared with the nodal-wise coupling calculation, the pin-by-pin coupling calculation yields a higher fuel peak temperature and a wider power distribution range, with a lower minimum.
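
The outer neutronics/thermal-hydraulics loop that such coupled codes run can be sketched as a fixed-point (Picard) iteration. The two solver callbacks and the linear feedback model below are toy stand-ins, not CORCA-K's models.

```python
def coupled_solve(power_of_temp, temp_of_power, p0=1.0, tol=1e-8, max_iter=100):
    """Fixed-point iteration between a neutronics solve and a
    thermal-hydraulics solve, the usual outer loop of a coupled code."""
    p = p0
    for _ in range(max_iter):
        t = temp_of_power(p)        # T-H: fuel/coolant temperature from power
        p_new = power_of_temp(t)    # neutronics: power with Doppler-like feedback
        if abs(p_new - p) < tol:
            return p_new, t
        p = p_new
    raise RuntimeError("coupling iteration did not converge")

# Toy feedback: power decreases linearly with temperature,
# temperature increases linearly with power (a contraction, so it converges).
power = lambda t: 2.0 - 0.001 * t
temp = lambda p: 300.0 + 50.0 * p
p, t = coupled_solve(power, temp)
```

At the pin scale the same loop runs with fields instead of scalars, which is why the per-iteration cost of each solve dominates the design of such codes.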

Development of Radiation Shielding Analysis Program Using Discrete Elements Method in X-Y Geometry (2차원 직각좌표계에서 DEM을 이용한 방사선차폐해석 프로그램개발)

  • Park, Ho-Sin;Kim, Jong-Kyung
    • Nuclear Engineering and Technology
    • /
    • v.25 no.1
    • /
    • pp.51-62
    • /
    • 1993
  • A computational program, TDET, for the particle transport equation is developed for radiation shielding problems in two-dimensional Cartesian geometry based on the discrete element method. Unlike the ordinary discrete ordinates method, the quadrature set of angles is not fixed but is steered by the spatially dependent angular fluxes. The angular dependence of the scattering source term in the particle transport equation is described by a series expansion in spherical harmonics, and the energy dependence of the particles is considered as well. Three different benchmark tests are made to verify TDET. For the ray-effect analysis on a square absorber with a flat isotropic source, the results of the TDET calculation conform quite well to those of the MORSE-CG calculation, while TDET mitigates the ray effect more effectively than the S$_{N}$ calculation. In the analysis of streaming leakage through a narrow vacuum duct in a shield, TDET captures the streaming leakage through the duct as clearly as MORSE-CG does, and considerably better than the S$_{N}$ calculation. In a realistic reactor shielding situation, treated both with isotropic scattering and with linearly anisotropic scattering in two energy groups, the TDET calculations show a local ray effect between neighboring meshes, compared with the S$_{N}$ calculations in which the ray effect extends broadly over several meshes.

  • PDF
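
The series expansion of the scattering source mentioned above reduces, in slab geometry, to a Legendre sum; the 2-D X-Y case uses the full spherical-harmonics family. A minimal sketch with illustrative moment names:

```python
def legendre(l, mu):
    """P_l(mu) via the Bonnet three-term recurrence."""
    p0, p1 = 1.0, mu
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * mu * p1 - n * p0) / (n + 1)
    return p1

def scattering_source(moments, mu):
    """Slab-geometry scattering source at direction cosine mu:
    sum_l (2l+1)/2 * m_l * P_l(mu), where each m_l stands for the
    product of the l-th scattering cross-section moment and flux moment
    (an illustrative packaging of the physical quantities)."""
    return sum((2 * l + 1) / 2.0 * m * legendre(l, mu)
               for l, m in enumerate(moments))
```

Isotropic scattering keeps only the l = 0 term; the linearly anisotropic case in the abstract adds the l = 1 term.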

Round robin analysis of vessel failure probabilities for PTS events in Korea

  • Jhung, Myung Jo;Oh, Chang-Sik;Choi, Youngin;Kang, Sung-Sik;Kim, Maan-Won;Kim, Tae-Hyeon;Kim, Jong-Min;Kim, Min Chul;Lee, Bong Sang;Kim, Jong-Min;Kim, Kyuwan
    • Nuclear Engineering and Technology
    • /
    • v.52 no.8
    • /
    • pp.1871-1880
    • /
    • 2020
  • Round robin analyses of vessel failure probabilities due to PTS events are proposed for plant-specific analyses of all types of reactors developed in Korea. Four organizations, responsible for the regulation, operation, research and design of nuclear power plants in Korea, participated in the round robin analysis. The vessel failure probabilities from the probabilistic fracture mechanics (PFM) analyses are calculated to assure the structural integrity of the reactor pressure vessel during transients that are expected to initiate PTS events. The failure probabilities due to various parameters are compared with each other. All results are obtained based on several assumptions about material properties, flaw distribution data, and transient data such as pressure, temperature, and heat transfer coefficient; more realistic input data can be used to obtain more realistic failure probabilities. The various results presented in this study will be helpful not only for benchmark calculations, result comparisons, and verification of the PFM codes developed, but also as a contribution to knowledge management for future generations.
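
A PFM failure probability of the kind compared above is, at its core, a Monte Carlo count of the applied stress intensity exceeding the fracture toughness. The distributions and parameters below are illustrative, not plant data.

```python
import random

def failure_probability(n, seed=0):
    """Crude Monte Carlo estimate of P(K_applied > K_Ic): sample a fracture
    toughness and a transient load per trial and count exceedances.
    Normal distributions and the numbers used are illustrative only."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        k_ic = rng.gauss(80.0, 15.0)       # toughness, MPa*sqrt(m) (assumed)
        k_applied = rng.gauss(50.0, 10.0)  # PTS transient load (assumed)
        if k_applied > k_ic:
            failures += 1
    return failures / n

p = failure_probability(100_000)
```

Real PFM codes sample flaw sizes and locations, embrittlement state, and the full transient pressure/temperature history instead of a single load number, which is exactly where the round robin inputs the abstract lists come in.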

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4903-4929
    • /
    • 2018
  • In this study, a fully automatic pose- and expression-invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For facial recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier-based face identification (FI) algorithm is employed, which combines results from seven base classifiers, two parallel face recognition algorithms and an exponential rank combiner, all in a hierarchical manner. The performance figures of the proposed methodology are corroborated by extensive experiments performed on four benchmark datasets: GavabDB, Bosphorus, UMB-DB and FRGC v2.0. Results show a marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis has been carried out for the proposed algorithm, which reveals its superiority in terms of computational efficiency as well.
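
The exponential rank combiner named above suggests fusing ranked candidate lists with exponentially decaying rank weights. The abstract does not give the exact weighting, so the exp(−rank) rule below is an assumption for illustration only.

```python
import math
from collections import defaultdict

def exponential_rank_combine(rankings):
    """Fuse ranked candidate lists from several classifiers: each candidate
    earns exp(-rank) from every list it appears in, and candidates are
    re-ranked by total score. The decay rule is an illustrative assumption."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, candidate in enumerate(ranking):
            scores[candidate] += math.exp(-rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three classifiers rank the same gallery of identities.
fused = exponential_rank_combine([
    ["alice", "bob", "carol"],
    ["bob", "alice", "carol"],
    ["alice", "carol", "bob"],
])
```

The exponential decay makes top positions dominate, so a candidate ranked first by most classifiers wins even if one classifier demotes it, which matches the intuition behind hierarchical multi-classifier fusion.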