• Title/Summary/Keyword: Supercomputer Performance


A Study on Scalability of Profiling Method Based on Hardware Performance Counter for Optimal Execution of Supercomputer (슈퍼컴퓨터 최적 실행 지원을 위한 하드웨어 성능 카운터 기반 프로파일링 기법의 확장성 연구)

  • Choi, Jieun;Park, Guenchul;Rho, Seungwoo;Park, Chan-Yeol
    • KIPS Transactions on Computer and Communication Systems / v.9 no.10 / pp.221-230 / 2020
  • A supercomputer that shares limited resources among multiple users needs a way to optimize application execution. To that end, it is useful for system administrators to obtain prior information and hints about the applications to be executed. In most high-performance computing operations, administrators try to increase system productivity by collecting information about execution duration and resource requirements from users when jobs are submitted, and they use profiling techniques that generate the necessary information from statistics such as system usage to raise utilization. In a previous study, we proposed a scheduling optimization technique built on a hardware performance counter-based profiling method that characterizes applications without any understanding of their source code. In this paper, we construct a profiling testbed cluster to support optimal execution on the supercomputer and examine the scalability of the profiling method for analyzing application characteristics in that cluster environment. We also show experimentally that the method remains useful for actual scheduling optimization even when the application problem class is reduced or the number of profiling nodes is minimized. When the number of nodes used for profiling was cut to a quarter, application execution time increased by only 1.08% compared with profiling on all nodes, while scheduling optimization improved performance by up to 37% over sequential execution. Profiling with a reduced problem size likewise cut the cost of collecting profiling data to a quarter and yielded a performance improvement of up to 35%.
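As a rough illustration of the counter-collection step this abstract describes, the sketch below gathers a few hardware performance counters for an arbitrary command via Linux `perf stat`. The event list and the use of `perf` are assumptions for illustration; the paper's actual profiling pipeline is not given in the abstract.

```python
#!/usr/bin/env python3
"""Minimal sketch of hardware-counter profiling via Linux `perf stat`.

Assumptions (not from the paper): the target runs on a Linux node with
`perf` installed, and the event names below exist on the CPU.
"""
import subprocess

EVENTS = ["instructions", "cycles", "cache-misses", "branch-misses"]

def profile(cmd):
    """Run `cmd` under perf stat and return {event: count}."""
    perf = ["perf", "stat", "-x", ",", "-e", ",".join(EVENTS), "--"] + cmd
    # perf stat writes its CSV report to stderr, not stdout.
    result = subprocess.run(perf, capture_output=True, text=True)
    counters = {}
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[2] in EVENTS:
            try:
                counters[fields[2]] = int(fields[0])
            except ValueError:
                pass  # "<not counted>" or similar placeholders
    return counters

if __name__ == "__main__":
    c = profile(["sleep", "1"])
    if c.get("cycles"):
        # Instructions-per-cycle is a common one-number characterization.
        print("IPC:", c.get("instructions", 0) / c["cycles"])
```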

Analysis of Traffic and Attack Frequency in the NURION Supercomputing Service Network (누리온 슈퍼컴퓨팅서비스 네트워크에서 트래픽 및 공격 빈도 분석)

  • Lee, Jae-Kook;Kim, Sung-Jun;Hong, Taeyoung
    • KIPS Transactions on Computer and Communication Systems / v.9 no.5 / pp.113-120 / 2020
  • KISTI (Korea Institute of Science and Technology Information) provides HPC (High Performance Computing) services to users in universities, institutes, government, affiliated organizations, companies, and so on. NURION, the supercomputer that launched its official service on January 1, 2019, is the fifth supercomputer established by KISTI and delivers 25.7 petaflops of computational performance. Understanding how the supercomputing services are used and how researchers use them is critical for system operators and managers, and monitoring and analyzing network traffic is central to that. In this paper, we briefly introduce the NURION system and the supercomputing service network with its security configuration, and we describe the monitoring system that checks the status of the supercomputing services in real time. We analyze inbound/outbound traffic and abnormal (attack) IP address data collected on the NURION supercomputing service network over 11 months (January to November 2019) using time series and correlation analysis.
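The abstract mentions time-series and correlation analysis of traffic and attack data. A minimal sketch of that kind of analysis, assuming the logs were exported to a CSV with a hypothetical schema (`timestamp`, `inbound_bytes`, `outbound_bytes`, `attack_events`); the file name and columns are illustrative, not from the paper:

```python
"""Sketch of time-series/correlation analysis of service-network logs."""
import pandas as pd

df = pd.read_csv("nurion_traffic.csv", parse_dates=["timestamp"])
daily = df.set_index("timestamp").resample("D").sum()

# Pearson correlation between traffic volume and observed attack events.
print(daily[["inbound_bytes", "outbound_bytes", "attack_events"]].corr())

# Simple anomaly flag: days more than 3 sigma above mean inbound volume.
mu, sigma = daily["inbound_bytes"].mean(), daily["inbound_bytes"].std()
print(daily[daily["inbound_bytes"] > mu + 3 * sigma])
```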

A Hybrid Cloud Testing System Based on Virtual Machines and Networks

  • Chen, Jing;Yan, Honghua;Wang, Chunxiao;Liu, Xuyan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1520-1542 / 2020
  • Traditional software testing typically uses many physical resources to manually build various test environments, resulting in high resource costs and long test times, especially for small enterprises with limited resources. Cloud computing can provide ample low-cost virtual resources that alleviate these problems through the virtualization of physical resources. However, rapidly and conveniently provisioning the various test environments and services needed for software testing on cloud computing remains challenging. This paper proposes a multilayer cloud testing model based on cloud computing and implements a hybrid cloud testing system based on virtual machines (VMs) and networks. The system realizes automatic, rapid creation of test environments and remote use of test tools and test services. We conduct experiments on the system and evaluate its applicability in terms of VM provisioning time, VM performance, and virtual network performance. The experimental results demonstrate that the performance of the VMs and virtual networks is satisfactory and that the system can improve test efficiency and reduce test costs through rapid virtual resource provisioning and convenient test services.
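The VM-provisioning measurements suggest the core operation is programmatic VM creation. A minimal sketch using the libvirt Python bindings, assuming a QEMU/KVM host and a prebuilt disk image at the path shown; the domain XML is bare-bones and the paper's own provisioning code is not published in the abstract:

```python
"""Sketch of programmatic test-VM provisioning on a libvirt/QEMU host."""
import libvirt  # pip install libvirt-python

DOMAIN_XML = """
<domain type='kvm'>
  <name>test-env-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/test-env.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='default'/></interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.createXML(DOMAIN_XML, 0)   # boots a transient test VM
print("created:", dom.name(), "state:", dom.state())
conn.close()
```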

Intelligent u-Learning and Research Environment for Computational Science on Mobile Device

  • Park, Sun-Rae;Jin, Duseok;Lee, Jongsuk Ruth;Cho, Kum Won;Lee, Kyu-Chul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.2 / pp.709-722 / 2014
  • In the 21st century, IT reform has led to the development of cyberinfrastructure, owing to outstanding advances in computer and network performance, and the ripple effects continue to grow. Accordingly, this study proposes a new computational research environment on mobile devices. To simplify access to the supercomputer, a Science AppStore, task management, and virtualization technologies were developed for mobile devices. Users can carry out research with computational science software installed on the supercomputer, such as a compressible flow solver and a nano-device simulation tool, from a mobile environment. The environment also makes it possible to monitor simulation results, and it serves 14 universities, 33 subjects, and 1,202 individuals.
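The abstract describes mobile access to supercomputer-hosted solvers through a Science AppStore and task management. A sketch of one plausible submit-and-monitor flow over a REST gateway; the endpoint, fields, token, and application name are hypothetical, not the paper's actual API:

```python
"""Illustrative mobile-to-supercomputer job submission and monitoring."""
import time
import requests

GATEWAY = "https://gateway.example.org/api"    # hypothetical gateway URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

# Submit a solver run offered through the "Science AppStore".
job = requests.post(f"{GATEWAY}/jobs", headers=HEADERS, json={
    "app": "compressible-flow-solver",
    "input": "wing_mesh.msh",
    "nodes": 4,
}).json()

# Poll so a mobile client can monitor the simulation result.
while True:
    status = requests.get(f"{GATEWAY}/jobs/{job['id']}", headers=HEADERS).json()
    print(status["state"])
    if status["state"] in ("FINISHED", "FAILED"):
        break
    time.sleep(30)
```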

Study of Dark Matter at e+e- Collider using KISTI-5 Supercomputer

  • Park, Kihong;Cho, Kihyeon
    • International Journal of Contents / v.17 no.3 / pp.67-73 / 2021
  • Dark matter is barely understood because it cannot be explained by the Standard Model, and it has not yet been detected; it is currently being explored in various ways. In this paper, we studied dark matter at an electron-positron collider using MadGraph5. The signal channel is e+e- → 𝜇+𝜇-A', where A' decays to a dimuon pair. We studied the cross-section while increasing the center-of-mass energy, and compared the central processing unit (CPU) time of the simulation on a local Linux machine and on the KISTI-5 supercomputer (Knights Landing and Skylake). Furthermore, one or more cores were used to compare CPU time across machines. The results of this study will enable the exploration of dark matter in electron-positron experiments and serve as a reference for optimizing high-energy physics simulation toolkits.
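For readers reproducing this kind of CPU-time comparison, a sketch of driving MadGraph5_aMC@NLO from Python and timing the run is given below. The UFO model name `DarkPhoton_UFO` and the particle label `ap` are placeholders; the abstract does not name the exact model files used, and wall-clock time is only a proxy for CPU time:

```python
"""Sketch of timing a MadGraph5_aMC@NLO run for cross-machine comparison."""
import os
import subprocess
import tempfile
import time

CARD = """\
import model DarkPhoton_UFO
generate e+ e- > mu+ mu- ap, ap > mu+ mu-
output dm_scan
launch
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(CARD)
    card = f.name

t0 = time.perf_counter()
subprocess.run(["mg5_aMC", card], check=True)  # same card on each machine
print(f"wall-clock proxy for CPU time: {time.perf_counter() - t0:.1f} s")
os.unlink(card)
```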

Development of Pre- and Post-processing System for Supercomputing-based Large-scale Structural Analysis (슈퍼컴퓨팅 기반의 대규모 구조해석을 위한 전/후처리 시스템 개발)

  • Kim, Jae-Sung;Lee, Sang-Min;Lee, Jae-Yeol;Jeong, Hee-Seok;Lee, Seung-Min
    • Korean Journal of Computational Design and Engineering / v.17 no.2 / pp.123-131 / 2012
  • The computational resources required to perform structural analysis are increasing rapidly. Analysis problems arising in industry today are typically large-scale, with more than a million degrees of freedom (DOFs). Such problems demand high-performance analysis codes as well as hardware such as supercomputers or cluster systems. In this paper, a pre- and post-processing system for supercomputing-based large-scale structural analysis is presented. The proposed system has a 3-tier architecture with three main components: a geometry viewer, a pre-/post-processor, and a supercomputing manager. To handle large-scale problems, the ADVENTURE solid solver was adopted as a general-purpose finite element solver, and the supercomputer named 'Tachyon' was adopted as the parallel computational platform. The problem-solving performance and scalability of the system are demonstrated by illustrative examples with different numbers of degrees of freedom.
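A sketch of what the supercomputing-manager tier might do: stage a domain-decomposed model on the cluster and submit the parallel solver as a batch job. The host name, scheduler commands, and solver binary name are illustrative assumptions; the abstract does not specify the system's job-submission interface:

```python
"""Illustrative supercomputing-manager step: stage inputs, submit solver."""
import subprocess

def submit_analysis(model_dir, parts):
    # Stage the pre-processor's domain-decomposed input to the cluster.
    subprocess.run(["scp", "-r", model_dir, "tachyon:/scratch/job/"],
                   check=True)
    # Submit the MPI solver; "adventure_solid" stands in for the real binary.
    script = (
        "#!/bin/sh\n"
        f"mpirun -np {parts} adventure_solid /scratch/job/{model_dir}\n"
    )
    subprocess.run(["ssh", "tachyon", "qsub"],
                   input=script, text=True, check=True)

submit_analysis("bracket_4M_dof", parts=256)
```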

Modeling and Performance Analysis of MAC Protocol for WBAN with Finite Buffer

  • Shu, Minglei;Yuan, Dongfeng;Chen, Changfang;Wang, Yinglong;Zhang, Chongqing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.11 / pp.4436-4452 / 2015
  • The IEEE 802.15.6 standard was introduced to satisfy all the requirements of monitoring systems operating in, on, or around the human body. In this paper, analytical models are developed for evaluating the performance of the IEEE 802.15.6 CSMA/CA-based medium access control protocol for wireless body area networks (WBANs) under unsaturated conditions. We employ a three-dimensional Markov chain to model the backoff procedure and an M/G/1/K queueing system to describe the packet queues in the buffer. The throughput and delay of a WBAN operating in beacon mode are analyzed in a heterogeneous network comprising different user priorities. Simulation results are included to demonstrate the accuracy of the proposed analytical model.
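As a tiny numerical companion to the queueing side of the model, the sketch below evaluates the M/M/1/K special case (exponential service) rather than the paper's full M/G/1/K analysis; blocking probability and throughput follow from the standard birth-death steady state:

```python
"""M/M/1/K queue metrics: a simplified stand-in for the M/G/1/K model."""
def mm1k(lam, mu, K):
    """Return (blocking probability, throughput) for an M/M/1/K queue."""
    rho = lam / mu
    # Steady-state probabilities p_n are proportional to rho**n, n = 0..K.
    norm = sum(rho**n for n in range(K + 1))
    p_block = rho**K / norm          # probability the K-slot buffer is full
    return p_block, lam * (1 - p_block)

pb, thr = mm1k(lam=0.8, mu=1.0, K=5)
print(f"blocking={pb:.3f}, throughput={thr:.3f} packets/unit time")
```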

A Study of Dark Photon at the Electron-Positron Collider Experiments Using KISTI-5 Supercomputer

  • Park, Kihong;Cho, Kihyeon
    • Journal of Astronomy and Space Sciences / v.38 no.1 / pp.55-63 / 2021
  • The universe is well known to consist of dark energy, dark matter, and Standard Model (SM) particles, with dark matter dominating the matter density of the universe. Dark matter is thought to be linked to the dark photon, a hypothetical hidden-sector particle similar to the photon of electromagnetism and proposed as a force carrier. Because of the extremely small cross-section of dark matter, a large amount of data must be processed, so the central processing unit (CPU) time needs to be optimized. In this work, using MadGraph5 as a simulation toolkit, we examined the CPU time and the cross-section of dark matter at an electron-positron collider over three parameters: the center-of-mass energy, the dark photon mass, and the coupling constant. The signal process involves a dark photon that couples only to heavy leptons, and we considered only the case of the dark photon decaying into two muons. We used a simplified model that includes dark matter and dark photon particles as well as the SM particles. To compare the CPU time of the simulation, one or more cores of the KISTI-5 supercomputer Nurion (Knights Landing and Skylake) and a local Linux machine were used. Our results can help optimize high-energy physics software through high-performance computing and enable users to exploit parallel processing.
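The three-parameter scan described here (center-of-mass energy, dark photon mass, coupling constant) can be organized as generated launch cards. In the sketch below the beam-energy settings are standard MadGraph5 run-card parameters, while `MAP` and `gAP` are placeholder names for whatever the actual UFO model calls the mass and coupling; the grid values are illustrative:

```python
"""Generate MadGraph5 launch cards for a (sqrt(s), mass, coupling) scan."""
from itertools import product

energies  = [3.0, 5.0, 10.0]   # sqrt(s) in GeV (illustrative values)
masses    = [0.5, 1.0, 2.0]    # dark photon mass hypotheses in GeV
couplings = [1e-3, 1e-2]       # dark photon-lepton coupling

for sqrt_s, m_ap, g in product(energies, masses, couplings):
    with open(f"launch_{sqrt_s}_{m_ap}_{g}.txt", "w") as f:
        f.write("launch dm_scan\n")
        f.write(f"set ebeam1 {sqrt_s / 2}\n")  # symmetric e+e- beams
        f.write(f"set ebeam2 {sqrt_s / 2}\n")
        f.write(f"set MAP {m_ap}\n")           # placeholder parameter name
        f.write(f"set gAP {g}\n")              # placeholder parameter name
```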

HPC(High Performance Computer) Linux Clustering for UltraSPARC(64bit-RISC processor) (UltraSPARC(64bit-RISC processor)을 위한 고성능 컴퓨터 리눅스 클러스터링)

  • 김기영;조영록;장종권
    • Proceedings of the IEEK Conference / 2003.11b / pp.45-48 / 2003
  • High-performance microprocessors and network systems are now readily available, and advances in computer architecture have brought high bandwidth and low latency. Coupling PC-based commodity technology with distributed computing methodologies is an important advance in the development of single-user dedicated systems. Networks of high-performance, low-cost PCs and workstations have recently made cluster systems competitive with supercomputers. Unix, Linux, BSD, and NT (the Windows series) can all serve as the cluster operating system; we chose Linux for its low cost, high performance, and open technical documentation. This paper benchmarks the performance of a Beowulf cluster built on UltraSPARC-1K (64-bit RISC) processors, using MPI (Message Passing Interface) and NetPIPE as benchmark tools. Beowulf is a class of experimental parallel workstations developed to evaluate and characterize the design space of this new operating point in price-performance.
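In the spirit of the MPI/NetPIPE measurements the paper reports, here is a minimal ping-pong bandwidth sketch with mpi4py (a stand-in, not the original benchmark code). Run it with `mpirun -np 2 python pingpong.py`:

```python
"""Two-rank MPI ping-pong bandwidth measurement using mpi4py."""
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg = bytearray(1 << 20)   # 1 MiB payload
reps = 100

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(msg, source=1)
    elif rank == 1:
        comm.Recv(msg, source=0)
        comm.Send(msg, dest=0)
dt = time.perf_counter() - t0

if rank == 0:
    # Each repetition moves the payload twice (one round trip).
    print(f"bandwidth ~ {2 * reps * len(msg) / dt / 1e6:.1f} MB/s")
```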
