• Title/Summary/Keyword: high-performance computing


DEVELOPMENT OF SUPERCOMPUTING APPLICATION TECHNOLOGY AND ITS ACHIEVEMENTS (슈퍼컴퓨팅 응용기술 개발 및 성과)

  • Kim, J.H.
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference
    • /
    • 2006.10a
    • /
    • pp.207-207
    • /
    • 2006
  • Hardware technologies for high-performance computing have been developing continuously. However, the actual performance of software cannot keep up with the pace of hardware development, because hardware architectures are becoming more complicated and hardware scales are growing larger. Software techniques that utilize high-performance computing systems more efficiently therefore play an increasingly important role in realizing high-performance computing for computational science. In this paper, we present our efforts to enhance software performance on large and complex high-performance computing systems, such as performance optimization and parallelization. We also introduce our effort to serve high-performance computational kernels, such as high-performance sparse solvers, and the achievements obtained through this effort. (A minimal illustrative sketch of a sparse solve follows this entry.)
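The abstract above mentions high-performance sparse solvers. As a minimal, hedged illustration (not the authors' actual kernels), the sketch below assembles a small sparse Poisson-type matrix and solves it with a Krylov method using SciPy; the problem size and right-hand side are arbitrary assumptions.

```python
# Minimal sketch of a sparse linear solve of the kind served by HPC kernels.
# Illustration only; not the solver library described in the paper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 1000                                   # assumed problem size
# 1D Poisson (tridiagonal) matrix in compressed sparse row format
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")

b = np.ones(n)                             # assumed right-hand side
x, info = cg(A, b)                         # conjugate gradient (Krylov) solver
residual = np.linalg.norm(A @ x - b)
print("converged" if info == 0 else f"cg returned {info}", "residual =", residual)
```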


The Effect of Mesh Reordering on Laplacian Smoothing for Nonuniform Memory Access Architecture-based High Performance Computing Systems (NUMA구조를 가진 고성능 컴퓨팅 시스템에서의 메쉬 재배열의 라플라시안 스무딩에 대한 효과)

  • Kim, Jibum
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.3
    • /
    • pp.82-88
    • /
    • 2014
  • We study the effect of mesh reordering on Laplacian smoothing for parallel high-performance computing systems. Specifically, we use the Reverse Cuthill-McKee algorithm to reorder meshes and use Laplacian smoothing to improve mesh quality on nonuniform memory access (NUMA) architecture-based parallel high-performance computing systems. First, we investigate the effect of mesh reordering on Laplacian smoothing for a single-core system, and then extend the idea to NUMA-based high-performance computing systems. (A toy sketch of the reordering and smoothing steps follows this entry.)
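As a toy illustration of the two ingredients named in the abstract, Reverse Cuthill-McKee reordering and Laplacian smoothing, the sketch below applies both to a small assumed triangle mesh using SciPy. It does not reproduce the paper's NUMA-aware parallel implementation; the mesh and the single smoothing pass are assumptions for demonstration.

```python
# Toy sketch: Reverse Cuthill-McKee reordering followed by one Laplacian
# smoothing pass on a small triangle mesh (serial, illustration only).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Assumed toy mesh: vertex coordinates and triangle connectivity.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1],
                  [0.2, 1.0], [1.3, 0.9], [0.7, 1.8]])
tris = np.array([[0, 1, 3], [1, 4, 3], [1, 2, 4], [3, 4, 5]])

# Build the vertex adjacency (graph) matrix from the triangle edges.
i = tris[:, [0, 1, 2]].ravel()
j = tris[:, [1, 2, 0]].ravel()
adj = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(len(verts), len(verts))).tocsr()
adj = adj + adj.T
adj.data[:] = 1.0                          # unit weights

# Reverse Cuthill-McKee: a permutation that reduces graph bandwidth, which
# also improves memory locality when vertices are traversed in order.
perm = reverse_cuthill_mckee(adj, symmetric_mode=True)
inv = np.empty_like(perm)
inv[perm] = np.arange(len(perm))
verts = verts[perm]
tris = inv[tris]
adj = adj[perm][:, perm]

# One Laplacian smoothing pass: move each vertex to the average of its
# neighbors (here every vertex is moved, for brevity).
deg = np.asarray(adj.sum(axis=1)).ravel()
verts = adj @ verts / deg[:, None]
print(verts)
```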

Simulation-based Design Verification for High-performance Computing System

  • Jeong Taikyeong T.
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.12
    • /
    • pp.1605-1612
    • /
    • 2005
  • This paper presents the knowledge and experience we obtained by employing multiprocessor systems for simulation-based design verification in the study of high-performance computing systems. It also describes a case study of a symmetric multiprocessor (SMP) kernel on a 32-CPU CC-NUMA architecture using an actual architecture. In the CC-NUMA high-performance computer system, a small group of CPUs is clustered into a processing node or cluster. Using simulation-based design verification tools, we show that the SMP OS kernel on the CC-NUMA multiprocessor architecture accounts for 32% of the total execution time, and that remote memory access latency occupies 43% of the OS time. We demonstrate our simulation results for multiprocessor, high-performance computing system performance using simulation-based design verification.


High Performance Computing: Infrastructure, Application, and Operation

  • Park, Byung-Hoon;Kim, Youngjae;Kim, Byoung-Do;Hong, Taeyoung;Kim, Sungjun;Lee, John K.
    • Journal of Computing Science and Engineering
    • /
    • v.6 no.4
    • /
    • pp.280-286
    • /
    • 2012
  • The last decades have witnessed an increasingly indispensable role of high performance computing (HPC) in the science, business, and financial sectors, as well as in military and national security areas. To introduce key aspects of HPC to a broader community, an HPC session was organized for the first time at the United States and Korea Conference (UKC) in 2012. This paper summarizes four invited talks, which respectively cover scientific HPC applications, large-scale parallel file systems, administration and maintenance of supercomputers, and green technology towards building power-efficient supercomputers of the next generation.

On the Performance of Oracle Grid Engine Queuing System for Computing Intensive Applications

  • Kolici, Vladi;Herrero, Albert;Xhafa, Fatos
    • Journal of Information Processing Systems
    • /
    • v.10 no.4
    • /
    • pp.491-502
    • /
    • 2014
  • In this paper we present research results on computing-intensive applications using modern high-performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain and have been studied using different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using genetic algorithms. In such an application, a rather large number of simulation runs is needed to extract meaningful statistics about the behavior of the simulated system. We study the performance of Oracle Grid Engine for this application running on a cluster with high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster. (A hedged sketch of array-job submission and queuing-time measurement follows this entry.)
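As a rough, hedged illustration of how a batch of independent simulation runs might be submitted to a Grid Engine queue and how queuing time can be measured, the sketch below uses only basic, widely documented qsub options. The script name, task count, and log directory are assumptions; this is not the setup used in the paper.

```python
# Hedged sketch: submit an array of independent simulation jobs to a Grid
# Engine queue with qsub and record the wall-clock submission time, so that
# queuing time can later be estimated from each task's own start timestamp.
# Assumptions: a runnable "run_simulation.sh" exists in the current directory
# and the Grid Engine client tools (qsub) are on the PATH.
import os
import subprocess
import time

N_TASKS = 100  # assumed number of independent simulation runs
os.makedirs("logs", exist_ok=True)

submit_time = time.time()
result = subprocess.run(
    ["qsub",
     "-N", "ga_sched_sim",        # job name
     "-cwd",                      # run tasks in the current working directory
     "-t", f"1-{N_TASKS}",        # array job: tasks 1..N_TASKS
     "-o", "logs/", "-e", "logs/",
     "run_simulation.sh"],
    capture_output=True, text=True, check=True)

print("submitted at", submit_time)
print(result.stdout.strip())
# Each task can write its own start time; the queuing time of a task is then
# (task start time - submit_time). Aggregating these over tasks and workloads
# gives response-time and queuing-time statistics of the kind studied here.
```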

A Study on Knowledge Unit for High-Performance Computing in Computational Science (계산과학분야의 고성능컴퓨팅에 관한 지식단위 연구)

  • Yoon, Heejun;Ahn, Seongjin
    • Journal of Digital Contents Society
    • /
    • v.19 no.5
    • /
    • pp.1021-1026
    • /
    • 2018
  • Computational science is at an early stage and is not yet fully active, and the high-performance computing required in the field is currently treated only as a specialized topic of parallel and distributed computing within computer science. In addition, there are too few courses that teach high-performance computing from basic to advanced levels. In this study, we derive the knowledge units needed to learn high-performance computing, an important research tool in computational science. Using the ACM Computer Science Curricula 2013 (CS2013), we examine the validity and reliability of 89 knowledge units, identify eleven knowledge units with high validity and reliability, and from these propose nine core knowledge units and two optional knowledge units. The eleven proposed knowledge units are expected to contribute to the development of the high-performance computing curriculum needed for teaching computational science.

Performance Measurement and Analysis of Virtual Desktop Service using Benchmarking Tool (벤치마킹 도구를 이용한 가상 데스크탑 서비스 성능 측정 및 분석)

  • Kim, Sun-Wook;Oh, Soo-Cheol;Choi, Ji-Hyeok;Kim, Seong-Woon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.10a
    • /
    • pp.18-21
    • /
    • 2016
  • To provide a highly satisfying experience to users of VDI-based virtual desktops, it is necessary to provide high-performance resources such as fast CPUs, sufficient memory and storage, and adequate network bandwidth. However, building a VDI is costly, so the scale of the infrastructure must be decided carefully in view of the number of service users. In particular, in a cloud-based VDI service, as the number of virtual desktops running on each server increases, the hypervisor's management overhead grows and server availability decreases. In this paper, based on the cloud DaaS system developed by ETRI (Electronics and Telecommunications Research Institute), we use Login VSI, an industry-standard VDI performance testing tool, to present a performance measurement method for identifying and deploying the optimal VDI solution for each service scale, and we analyze the results.

Modeling the Growth of Bulk Single Crystals via High Performance Computing

  • Andrew Yeckel;Kwon, Yong-Il;Jeffrey J. Derby
    • Proceedings of the Korea Association of Crystal Growth Conference
    • /
    • 1997.06a
    • /
    • pp.115-120
    • /
    • 1997
  • We have developed new algorithms for the solution of the three-dimensional, time-dependent Navier-Stokes equations that utilize massively parallel supercomputing implemented on the Connection Machine CM-5. Here, we apply these techniques to analyze the fluid flows that occur during the growth of two nonlinear optical crystals: potassium dihydrogen phosphate (KDP), which is produced in a novel rapid-growth system under development by the Lawrence Livermore National Laboratory Laser Division, and potassium titanyl phosphate (KTP), which is grown from a high-temperature aqueous solution. (The governing equations are stated, in their standard form, after this entry.)
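For reference, the governing equations named in the abstract are given below in their standard incompressible form. The paper's exact formulation (for example, any Boussinesq buoyancy or species-transport coupling used for solution growth) is not reproduced here.

```latex
% Incompressible, time-dependent Navier-Stokes equations (standard form):
% momentum balance and the incompressibility (continuity) constraint.
\begin{aligned}
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  &= -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{g}, \\
\nabla\cdot\mathbf{u} &= 0 .
\end{aligned}
```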


A Performance Comparison of Parallel Programming Models on Edge Devices (엣지 디바이스에서의 병렬 프로그래밍 모델 성능 비교 연구)

  • Dukyun Nam
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.4
    • /
    • pp.165-172
    • /
    • 2023
  • Heterogeneous computing is a technology that utilizes different types of processors to perform parallel processing. It maximizes task throughput and energy efficiency by leveraging various computing resources such as CPUs, GPUs, and FPGAs. Edge computing, on the other hand, has developed alongside IoT and 5G technologies. It is a distributed computing approach that utilizes computing resources close to clients, thereby offloading the central server, and it has evolved into intelligent edge computing combined with artificial intelligence. Intelligent edge computing enables overall data processing, such as context awareness, prediction, control, and simple processing, for the data collected at the edge. If heterogeneous computing can be successfully applied at the edge, it is expected to maximize job processing efficiency while minimizing dependence on the central server. In this paper, experiments were conducted to verify the feasibility of various parallel programming models on high-end and low-end edge devices using benchmark applications. We analyzed the performance of five parallel programming models on the Raspberry Pi 4 and the Jetson Orin Nano as the low-end and high-end devices, respectively. In the experiments, OpenACC showed the best performance on the low-end edge device and OpenSYCL on the high-end device, owing to the stability and optimization of the system libraries. (A generic timing-harness sketch follows this entry.)
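As a generic illustration of the benchmarking methodology (repeated timed runs of the same benchmark build on each device), the sketch below times arbitrary commands several times and reports the mean and standard deviation. The binary names and repeat count are assumptions; the actual OpenACC/OpenSYCL builds compared in the paper are not shown.

```python
# Generic timing harness for comparing benchmark builds on an edge device:
# run each benchmark executable several times and report mean/stddev runtime.
# The executable names and repeat count are assumptions for illustration.
import statistics
import subprocess
import time

def time_benchmark(cmd, repeats=5):
    """Run `cmd` `repeats` times and return per-run wall-clock times (seconds)."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    builds = {
        "openacc_build": ["./benchmark_openacc"],    # assumed binary names
        "opensycl_build": ["./benchmark_opensycl"],
    }
    for label, cmd in builds.items():
        t = time_benchmark(cmd)
        print(f"{label}: mean={statistics.mean(t):.3f}s stdev={statistics.stdev(t):.3f}s")
```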

The development of the high effective and stoppageless file system for high performance computing (High Performance Computing 환경을 위한 고성능, 무정지 파일시스템 구현)

  • Park, Yeong-Bae;Choe, Seung-Hwan;Lee, Sang-Ho;Kim, Gyeong-Su;Gong, Yong-Jun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2004.11a
    • /
    • pp.395-401
    • /
    • 2004
  • In today's highly network-centralized computing and enterprise environment, it is essential to transmit data reliably at very high rates. Previous client/server-model-based file systems such as NFS (Network File System) and AFS (Andrew File System) have met various demands so far, but they can no longer satisfy the requirements of today's scalable high-performance computing environments. Not only performance but also the redundancy of the data-sharing service has become a serious problem. In the case of NFS, locking and caching issues force the file system to be rebooted and cause problems when it is used simply with IP takeover for high-availability (H/A) service. In the case of AFS, file-sharing redundancy is provided, but only when storage and equipment supporting redundancy are in place. Lustre is an open-source cluster file system developed to meet both demands. Lustre consists of three types of subsystems: MDSs (Meta-Data Servers), which offer the metadata services; OSTs (Object Storage Targets), which provide file I/O; and Lustre clients, which interact with the OSTs and MDSs. These subsystems communicate by exchanging messages to provide a scalable, high-performance file system service. In this paper, we compare the transmission speed of gigabyte-sized files between Lustre and NFS as a function of the number of concurrent users, and we also demonstrate the high availability of the file system by removing one or more OSTs during operation. (A minimal concurrent-write measurement sketch follows this entry.)
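A minimal, hedged sketch of the kind of measurement described above: several concurrent client processes each write a large file under a given mount point and the elapsed time is recorded. The mount path, file size, and writer count are assumptions, and this is not the authors' benchmark.

```python
# Hedged sketch: measure how long concurrent writers take to each write a
# large file under a mount point (e.g. a Lustre or NFS mount). Mount path,
# file size, and writer count are illustrative assumptions only.
import os
import time
from multiprocessing import Pool

MOUNT = "/mnt/lustre"          # assumed mount point ("/mnt/nfs" for comparison)
FILE_SIZE = 1 * 1024**3        # 1 GiB per writer
CHUNK = 4 * 1024**2            # write in 4 MiB chunks
WRITERS = 8                    # assumed number of concurrent clients

def write_file(idx):
    path = os.path.join(MOUNT, f"bench_{idx}.dat")
    buf = b"\0" * CHUNK
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # ensure the data actually reaches the servers
    return time.perf_counter() - start

if __name__ == "__main__":
    with Pool(WRITERS) as pool:
        times = pool.map(write_file, range(WRITERS))
    total_gib = WRITERS * FILE_SIZE / 1024**3
    print(f"slowest writer: {max(times):.1f}s, "
          f"aggregate throughput ~ {total_gib / max(times):.2f} GiB/s")
```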
