• Title/Summary/Keyword: System code benchmark


The design of a 32-bit Microprocessor for a Sequence Control using an Application Specific Integrated Circuit (ASIC) (ICEIC'04)

  • Oh Yang
    • Proceedings of the IEEK Conference / 2004.08c / pp.486-490 / 2004
  • Programmable logic controllers (PLCs) are widely used in manufacturing systems and process control. This paper presents the design of a 32-bit microprocessor for sequence control using an Application Specific Integrated Circuit (ASIC). The 32-bit microprocessor was designed in VHDL with a top-down method, and the program memory was separated from the data memory for high-speed execution of the 274 specified sequence instructions; this made it possible to fetch the next sequence instruction while the current one was executing. To reduce the instruction decoding time and the data memory interface time, the instruction code size was fixed at 32 bits. Real-time debugging features such as single-step run and breakpoint run were implemented. Pulse instructions, step controllers, master controllers, BIN- and BCD-type arithmetic instructions, and barrel shift instructions were implemented for functions commonly used in PLC systems. The designed microprocessor was synthesized on the SEIKO EPSON S1L50000 series, which provides 70,000 gates in 0.65um technology. Finally, a benchmark was performed to show that the designed 32-bit microprocessor outperforms the Mitsubishi Q4A PLC.
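The speed-up the abstract attributes to separating program and data memory can be illustrated with a toy cycle count: in a Harvard-style organization the fetch of instruction i+1 overlaps the execution of instruction i. A minimal sketch, with illustrative one-cycle fetch and execute times (not figures from the paper):

```python
# Toy comparison of a unified-memory machine (fetch then execute,
# serialized) against a Harvard-style machine (separate program and
# data memories, so fetch of the next instruction overlaps execution
# of the current one). Cycle times are illustrative assumptions.

def cycles_unified(n_instr, fetch=1, execute=1):
    # Single memory port: every instruction pays fetch + execute.
    return n_instr * (fetch + execute)

def cycles_harvard(n_instr, fetch=1, execute=1):
    # Two-stage overlap: after the first fetch, one instruction
    # completes every max(fetch, execute) cycles.
    if n_instr == 0:
        return 0
    return fetch + n_instr * max(fetch, execute)

# With the paper's 274 sequence instructions, overlap roughly halves
# the cycle count in this idealized model.
print(cycles_unified(274))   # 548
print(cycles_harvard(274))   # 275
```

The overlap is only possible because instruction fetch and data access never contend for the same memory port, which is exactly what the separated memories provide.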


Three-dimensional Rotordynamic Analysis Considering Bearing Support Effects (베어링 지지 효과를 고려한 3차원 로터동역학 해석)

  • Park, Hyo-Keun;Kim, Dong-Man;Kim, Yu-Sung;Kim, Myung-Kuk;Chen, Seung-Bae;Kim, Dong-Hyun
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.17 no.2 s.119 / pp.105-113 / 2007
  • In this study, three-dimensional rotordynamic analyses have been conducted using equivalent-beam, hybrid, and full three-dimensional models. The present computational method is based on the general finite element method including the rotating gyroscopic effects of the rotor system. The general-purpose commercial finite element code SAMCEF, which includes a practical rotordynamics module with various rotor analysis tools and bearing elements, is applied. For numerical verification, a comparison study for a benchmark rotor model with support bearings is performed first. Detailed finite element models based on the three different modeling concepts are then constructed, and computational analyses are conducted for a realistic and complex three-dimensional rotor system. The results for rotor stability and mass unbalance response are presented and compared with the experimental vibration test data obtained herein.
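As a back-of-the-envelope companion to the mass unbalance response the study measures (and not the SAMCEF finite element models it actually uses), the single-disc Jeffcott rotor gives the classic closed-form whirl amplitude; all parameter values below are illustrative:

```python
import math

# Textbook Jeffcott-rotor unbalance response: the simplest member of
# the "equivalent beam" model class compared in the paper. All inputs
# here are illustrative, not the rotor studied by the authors.
def unbalance_amplitude(omega, k, m, c, e):
    """Steady whirl radius at spin speed omega (rad/s).
    k: shaft stiffness, m: disc mass, c: damping, e: eccentricity."""
    wn = math.sqrt(k / m)                  # undamped critical speed
    zeta = c / (2.0 * math.sqrt(k * m))    # damping ratio
    r = omega / wn                         # speed ratio
    return e * r**2 / math.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

# Far above the critical speed the disc whirls about its mass centre,
# so the amplitude approaches the eccentricity e; at the critical
# speed it peaks at roughly e / (2 * zeta).
print(unbalance_amplitude(10_000.0, k=1e6, m=10.0, c=200.0, e=1e-4))
```

Full three-dimensional models refine this picture with gyroscopic stiffening and bearing support flexibility, which is precisely what the paper's model comparison quantifies.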

A Comparative Study on the Performance of Intrusion Detection using Decision Tree and Artificial Neural Network Models (의사결정트리와 인공 신경망 기법을 이용한 침입탐지 효율성 비교 연구)

  • Jo, Seongrae;Sung, Haengnam;Ahn, Byunghyuk
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.4 / pp.33-45 / 2015
  • Currently, the Internet is an essential tool in the business area. Despite this importance, there is a risk of network attacks aiming at fraud, collection of private information, and cyber terrorism. Firewalls and intrusion detection systems (IDS) are tools against those attacks. An IDS is used to determine whether network data constitutes an attack, analyzing the data with various techniques including expert systems, data mining, and state transition analysis. This paper compares the performance of two data mining models in detecting network attacks: a decision tree (C4.5) and a neural network (FANN). I trained and tested these models and measured their effectiveness in terms of detection accuracy, detection rate, and false alarm rate, in order to find out which model is more effective for intrusion detection. In the analysis, I used the KDD Cup 99 data set, a benchmark in intrusion detection research, along with the open-source Weka software for the C4.5 model and the C++ code available for the FANN model.
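C4.5, one of the two models compared, selects its split attributes by gain ratio. A minimal sketch of that criterion on a toy set of (protocol, label) records; the records and attribute are illustrative, not KDD Cup 99 data:

```python
import math
from collections import Counter

# Gain-ratio split criterion as used by C4.5, on toy records of the
# form (attribute_value, ..., class_label). Illustrative data only.
def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(records, attr_idx):
    labels = [r[-1] for r in records]
    base = entropy(labels)                      # entropy before the split
    groups = {}
    for r in records:
        groups.setdefault(r[attr_idx], []).append(r[-1])
    n = len(records)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = entropy([r[attr_idx] for r in records])
    # Normalizing by split_info penalizes many-valued attributes,
    # which is C4.5's refinement over plain information gain.
    return (base - cond) / split_info if split_info else 0.0

data = [("tcp", "attack"), ("tcp", "attack"),
        ("udp", "normal"), ("icmp", "normal")]
print(gain_ratio(data, 0))  # protocol separates the classes cleanly here
```

Weka's J48 classifier applies this criterion recursively; the FANN side of the comparison instead fits weights by backpropagation.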

Numerical investigation of turbulent lid-driven flow using weakly compressible smoothed particle hydrodynamics CFD code with standard and dynamic LES models

  • Tae Soo Choi;Eung Soo Kim
    • Nuclear Engineering and Technology / v.55 no.9 / pp.3367-3382 / 2023
  • Smoothed Particle Hydrodynamics (SPH) is a Lagrangian computational fluid dynamics method that has been widely used to analyze physical phenomena characterized by large deformation or multi-phase flow, including free surfaces. Despite the recent implementation of eddy-viscosity models in the SPH methodology, sophisticated turbulence analysis with Lagrangian methods has been limited by the lack of computational performance and numerical consistency. In this study, we implement the standard and dynamic Smagorinsky models and the dynamic Vreman model as sub-particle-scale models based on a weakly compressible SPH solver. The large eddy simulation method is numerically identical to the spatial discretization of SPH, enabling an intuitive implementation of the turbulence models. Furthermore, no additional filtering of the physical variables is required, since the sub-grid-scale filtering is inherently performed by the kernel interpolation. We simulate lid-driven flow under transitional and turbulent conditions as a benchmark. The simulation results show that the dynamic Vreman model produces results consistent with experimental and numerical research regarding Reynolds-averaged physical quantities and flow structure. Spectral analysis also confirms that the dynamic Vreman model can resolve turbulent eddies of smaller length scale with the same particle size.
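The two ingredients the abstract combines can be shown in one dimension: the SPH kernel interpolation (whose smoothing supplies the implicit sub-grid filter) and a standard Smagorinsky eddy viscosity. A minimal sketch under illustrative parameter values, not the authors' solver:

```python
import math

# 1-D cubic spline SPH kernel plus the standard Smagorinsky closure.
# Particle positions, spacing and the strain-rate value are illustrative.

def cubic_spline_w(r, h):
    """Standard 1-D cubic spline kernel with smoothing length h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)           # 1-D normalization constant
    if q < 1.0:
        return sigma * (1 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2 - q)**3
    return 0.0

def smagorinsky_nu_t(strain_rate, delta, cs=0.12):
    """Standard model: nu_t = (Cs * delta)^2 * |S|."""
    return (cs * delta)**2 * abs(strain_rate)

# Kernel interpolation of a field at x = 0 from three neighbours:
# the weighted sum is itself a spatial filter, so no extra explicit
# filtering step is needed for LES quantities.
xs = [-0.1, 0.0, 0.1]       # particle positions
fs = [1.0, 2.0, 3.0]        # field samples carried by the particles
h, dx = 0.1, 0.1            # smoothing length and particle spacing
f0 = sum(f * cubic_spline_w(x, h) * dx for x, f in zip(xs, fs))
print(f0, smagorinsky_nu_t(50.0, dx))
```

The dynamic Smagorinsky and Vreman variants the paper studies compute the model coefficient from the resolved field instead of fixing `cs`, but the filtering-by-kernel structure is the same.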

Criticality Analyses of Spent Fuel Shipping Cask (핵연료(核燃料) 수송용기(輸送容器)에 대(對)한 핵림계분석(核臨界分析))

  • Min, Duck-Kee;Ro, Seung-Gy;Kwack, Eun-Ho
    • Journal of Radiation Protection and Research / v.9 no.2 / pp.97-102 / 1984
  • Criticality analyses of the KSC-1 (Korean Shipping Cask-1) spent fuel shipping cask have been performed with the KENO-IV Monte Carlo computer code and the 19-group CSLIB 19 cross-section set generated from the AMPX modular system. The analyses followed a benchmark calculation performed for the B & W CX-10 criticality facility in order to validate the Monte Carlo code and cross-section set described above. The KSC-1 shipping cask appears to be safe from the criticality point of view for the transport of one PWR spent fuel assembly under normal conditions as well as under the hypothetical accident conditions.
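The quantity such an analysis bounds is the effective multiplication factor k-eff: the generation-to-generation neutron population ratio, which must stay below 1 for the cask to be subcritical. A toy generation-batch sketch of how a Monte Carlo criticality code estimates it; the interaction probabilities below are illustrative stand-ins, not cross-section data, and this is not KENO-IV's actual tracking:

```python
import random

# Toy Monte Carlo k-eff estimator: start each generation with a fixed
# batch of source neutrons, count the fission neutrons they produce,
# and average the population ratio over the active generations.
def estimate_keff(p_fission, nu, batches=200, pop=1000, seed=1):
    rng = random.Random(seed)          # fixed seed for reproducibility
    ratios = []
    for _ in range(batches):
        next_pop = 0.0
        for _ in range(pop):
            if rng.random() < p_fission:
                next_pop += nu         # fission yields nu neutrons on average
            # otherwise the neutron is captured or leaks: it is lost
        ratios.append(next_pop / pop)
        # the next generation is renormalized back to `pop` source neutrons
    # discard the first half as inactive generations, as criticality
    # codes conventionally do, and average the rest
    active = ratios[batches // 2:]
    return sum(active) / len(active)

k = estimate_keff(p_fission=0.4, nu=2.5)
print(k)  # should be close to 0.4 * 2.5 = 1.0 in this toy model
```

A real code like KENO-IV tracks neutron positions, energies and the 19-group cross sections through the cask geometry, but the generation-ratio estimate of k-eff has this same shape.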


Three Dimensional Finite Element Analysis of Filling Stage in Casting Process Using Adaptive Grid Refinement Technique (3차원 적응 격자 세분화를 이용한 주조 공정의 충전 해석)

  • Kim Ki Don;Jeong Jun Ho;Yang Dong Yol
    • Transactions of the Korean Society of Mechanical Engineers B / v.29 no.5 s.236 / pp.568-576 / 2005
  • A 3-D finite element model combined with a volume tracking method is presented in this work to simulate mold filling in casting processes. In particular, the analysis involves an adaptive grid method in which, at each time step, elements over the whole domain are categorized by their filling state and location. By using an adaptive grid in which elements finer than those of the internal and external regions are distributed at the surface region through refinement and coarsening procedures, a more efficient analysis of transient fluid flow with a free surface is achieved. The adaptive grid based on the VOF method is developed for a tetrahedral element system. Through a 3-D analysis of a benchmark casting-process test, the efficiency of the proposed adaptive grid method is verified. The developed FE code is then applied to a typical industrial casting such as an aluminum road wheel.
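The element-categorization step can be sketched directly: each element's VOF fill fraction classifies it as external (empty), internal (full), or surface (partially filled), and only surface elements are flagged for refinement. Threshold values here are illustrative assumptions, not the paper's:

```python
# Classify elements by fill fraction and flag only the free-surface
# band for refinement, so fine elements track the advancing melt front
# instead of covering the whole domain. eps is an illustrative tolerance.
def categorize(vof, eps=1e-3):
    if vof <= eps:
        return "external"   # empty element, ahead of the melt front
    if vof >= 1.0 - eps:
        return "internal"   # completely filled element
    return "surface"        # partially filled: the free surface is here

def refine_flags(vof_field):
    """True for elements that should be refined this time step."""
    return [categorize(v) == "surface" for v in vof_field]

print(refine_flags([0.0, 0.2, 0.95, 1.0]))  # [False, True, True, False]
```

Re-running this classification every time step, with coarsening of elements the front has left behind, is what keeps the fine mesh concentrated at the moving surface.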

Microarchitectural Defense and Recovery Against Buffer Overflow Attacks (버퍼 오버플로우 공격에 대한 마이크로구조적 방어 및 복구 기법)

  • Choi, Lynn;Shin, Yong;Lee, Sang-Hoon
    • Journal of KIISE:Computer Systems and Theory / v.33 no.3 / pp.178-192 / 2006
  • The buffer overflow attack is the single most dominant and lethal form of security exploit, as evidenced by recent worm outbreaks such as Code Red and SQL Slammer. In this paper, we propose microarchitectural techniques that can detect and recover from such malicious code attacks. The idea is that buffer overflow attacks usually exhibit abnormal behavior in the system. Such unusual signs can be easily detected by checking the safety of memory references at runtime, avoiding the potential data or control corruption caused by such attacks. Both the hardware cost and the performance penalty of enforcing the safety guards are negligible. In addition, we propose a more aggressive technique called the corruption recovery buffer (CRB), which can further increase the level of security. Combined with the safety guards, the CRB can save suspicious writes made by an attack and restore the original architectural state from before the attack. By performing detailed execution-driven simulations on programs selected from the SPEC CPU2000 benchmark, we evaluate the effectiveness of the proposed microarchitectural techniques. Experimental data show that enforcing a single safety guard can substantially reduce the number of system failures by protecting the stack against the return-address corruption made by the attacks. Furthermore, a small 1KB CRB can nullify additional data corruption made by stack-smashing attacks with less than a 2% performance penalty.
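One common way to realize a return-address safety guard in hardware is a shadow stack: record each return address at call time and compare at return time. The sketch below is a conceptual model of that idea, not the authors' exact microarchitecture:

```python
# Conceptual shadow-stack model of a return-address safety guard:
# a mismatch between the address saved at call time and the address
# read back from the architectural stack reveals that a buffer
# overflow overwrote the return address, before control transfers.
class ShadowStack:
    def __init__(self):
        self._stack = []

    def on_call(self, return_addr):
        """Record the return address as the call instruction retires."""
        self._stack.append(return_addr)

    def on_return(self, addr_from_arch_stack):
        """Check the popped return address; raise on corruption."""
        expected = self._stack.pop()
        if addr_from_arch_stack != expected:
            raise RuntimeError("return address corrupted")
        return addr_from_arch_stack

guard = ShadowStack()
guard.on_call(0x400123)
print(hex(guard.on_return(0x400123)))   # clean return passes the check
guard.on_call(0x400456)
try:
    guard.on_return(0xdeadbeef)         # smashed stack: address differs
except RuntimeError as err:
    print("detected:", err)
```

The paper's CRB goes one step further than detection: it buffers the suspicious writes themselves so the pre-attack state can be restored rather than merely flagging the failure.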

Parallel Multithreaded Processing for Data Set Summarization on Multicore CPUs

  • Ordonez, Carlos;Navas, Mario;Garcia-Alvarado, Carlos
    • Journal of Computing Science and Engineering / v.5 no.2 / pp.111-120 / 2011
  • Data mining algorithms should exploit new hardware technologies to accelerate computations. This goal is difficult to achieve in a database management system (DBMS) because of its complex internal subsystems and because data mining numeric computations on large data sets are difficult to optimize. This paper explores taking advantage of the multithreaded capabilities of multicore CPUs, as well as caching in RAM, to efficiently compute summaries of a large data set, a fundamental data mining problem. We introduce parallel algorithms working on multiple threads, which overcome the row-aggregation bottleneck of accessing secondary storage while maintaining linear time complexity with respect to data set size. Our proposal is based on a combination of table scans and parallel multithreaded processing among the CPU cores. We introduce several database-style and hardware-level optimizations: caching row blocks of the input table, managing available RAM, interleaving I/O and CPU processing, and tuning the number of working threads. We experimentally benchmark our algorithms with large data sets on a DBMS running on a computer with a multicore CPU. We show that our algorithms outperform existing DBMS mechanisms in computing aggregations of multidimensional data summaries, especially as dimensionality grows. Furthermore, we show that local memory allocation (RAM block size) does not have a significant impact when the thread management algorithm distributes the workload among a fixed number of threads. Our proposal is unique in that we do not modify or require access to the DBMS source code; instead, we extend the DBMS with analytic functionality by developing User-Defined Functions.
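The block-scan-and-merge pattern can be shown in miniature: each thread scans a block of rows and accumulates the sufficient statistics n, L (linear sum) and Q (sum of squares), and the partial results are merged after one pass over the table. Block size and thread count below are illustrative, and this sketch omits the paper's DBMS/UDF integration:

```python
from concurrent.futures import ThreadPoolExecutor

# One-pass, multithreaded computation of the sufficient statistics
# (n, L, Q) of a 1-D data set: each worker summarizes a row block,
# then the partial sums are merged. Mergeability is what makes the
# parallel block decomposition correct.
def block_stats(rows):
    n, L, Q = 0, 0.0, 0.0
    for x in rows:
        n += 1
        L += x
        Q += x * x
    return n, L, Q

def summarize(rows, n_threads=4):
    size = max(1, len(rows) // n_threads)
    blocks = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        parts = list(pool.map(block_stats, blocks))
    n = sum(p[0] for p in parts)
    L = sum(p[1] for p in parts)
    Q = sum(p[2] for p in parts)
    return n, L, Q

data = [float(i) for i in range(1, 101)]
print(summarize(data))  # (100, 5050.0, 338350.0)
```

From (n, L, Q) the mean and variance follow directly, which is why these summaries are a building block for many mining algorithms.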

An Efficient Test Data Compression/Decompression for Low Power Testing (저전력 테스트를 고려한 효율적인 테스트 데이터 압축 방법)

  • Chun Sunghoon;Im Jung-Bin;Kim Gun-Bae;An Jin-Ho;Kang Sungho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.2 s.332 / pp.73-82 / 2005
  • Test data volume and power consumption for scan vectors are two major problems in system-on-a-chip testing. Therefore, this paper proposes a new test data compression/decompression method for low-power testing. The method is based on an analysis of the factors that influence the test parameters: compression ratio, power reduction, and hardware overhead. To improve the compression ratio and the power reduction ratio, the proposed method builds on Modified Statistical Coding (MSC), an Input Reduction (IR) scheme, and algorithms for reordering the scan flip-flops and the test pattern sequence in a preprocessing step. Unlike previous approaches using the CSR architecture, the proposed method compresses the original test data, not $T_{diff}$, and decompresses the compressed test data without the CSR architecture. Therefore, the proposed method achieves a better compression ratio with lower hardware overhead and lower power consumption than previous works. An experimental comparison on the ISCAS '89 benchmark circuits validates the proposed method.
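The principle behind statistical coding of test data can be sketched briefly: split the test vector into fixed-width blocks and give the most frequent block patterns the shortest prefix-free codewords. The sketch below uses a plain Huffman construction on 4-bit blocks as a stand-in; it is not the paper's exact MSC scheme, and the example vector is illustrative:

```python
import heapq
from itertools import count
from collections import Counter

# Build a prefix-free code over the 4-bit blocks of a test vector,
# shortest codewords for the most frequent blocks (plain Huffman as
# an illustration of the statistical-coding principle behind MSC).
def huffman_code(blocks):
    tie = count()                       # tiebreaker so the heap never
    freq = Counter(blocks)              # compares the symbol lists
    if len(freq) == 1:
        return {blocks[0]: "0"}
    heap = [(n, next(tie), [sym]) for sym, n in freq.items()]
    heapq.heapify(heap)
    code = {sym: "" for sym in freq}
    while len(heap) > 1:
        n1, _, syms1 = heapq.heappop(heap)
        n2, _, syms2 = heapq.heappop(heap)
        for sym in syms1:
            code[sym] = "0" + code[sym]
        for sym in syms2:
            code[sym] = "1" + code[sym]
        heapq.heappush(heap, (n1 + n2, next(tie), syms1 + syms2))
    return code

# A skewed vector (repeated all-zero blocks) compresses well, which is
# why filling don't-care bits to make blocks repeat helps.
vector = "0000" * 6 + "1111" * 2 + "1010"
blocks = [vector[i:i + 4] for i in range(0, len(vector), 4)]
code = huffman_code(blocks)
compressed_bits = sum(len(code[b]) for b in blocks)
print(compressed_bits, "<", len(vector))  # 12 < 36
```

The on-chip decompressor then only needs a small prefix-matching circuit, which is where the hardware-overhead trade-off the paper analyzes comes from.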