• Title/Summary/Keyword: HPC cluster


Improving Performance of HPC Clusters by Including Non-Dedicated Nodes on a LAN (LAN상의 비전용 노드를 포함한 HPC 클러스터의 확장에 의한 성능 향상)

  • Park, Pil-Seong
    • Journal of Information Technology Services
    • /
    • v.7 no.4
    • /
    • pp.209-219
    • /
    • 2008
  • Recently, the number of Internet firms providing useful information, such as weather forecast data, has been growing. However, most such information is not prepared in accordance with customers' demands, resulting in relatively low customer satisfaction. To upgrade service quality, it is recommended to devise a system that lets customers get involved in the process of service production, which normally requires a huge investment in supporting computer systems such as clusters. In this paper, as a way to cut the budget for computer systems while improving performance, we extend an HPC cluster system to include other Internet servers working independently on the same LAN, to make use of their idle time. We also deal with issues resulting from the extension, such as the security problem and a possible deadlock caused by overload on some non-dedicated nodes. Finally, we apply the technique to the solution of a 2D grid problem.

Expansion of An HPC Cluster Over SSH Tunnel (SSH 터널을 이용한 HPC 클러스터의 확장)

  • Park, Pil-Seong;Kumar, Harshit
    • Korea Society of IT Services: Conference Proceedings
    • /
    • 2009.11a
    • /
    • pp.539-543
    • /
    • 2009
  • PC clusters are widely used to process data in real time and provide fast services. In this paper, instead of adding dedicated nodes to an HPC cluster (a type of PC cluster), we extend the cluster to exploit the idle time of non-dedicated nodes located on the network outside the firewall, thereby improving performance. We present a method that uses SSH tunneling to resolve the security problems, such as those of NFS, that arise from this extension, and we experimentally evaluate the performance of the encrypted NFS.

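The NFS-over-SSH-tunnel approach summarized in the abstract can be sketched with standard OpenSSH port forwarding (a minimal illustration, not the paper's setup; the host name, local port 3049, and export path are hypothetical, and NFSv4 is assumed since it uses only TCP port 2049):

```shell
# Forward local port 3049 to the NFS server's port 2049 through SSH
# (-f: go to background, -N: no remote command, -L: local port forwarding)
ssh -f -N -L 3049:localhost:2049 user@nfs-server.example.org

# Mount the export through the encrypted tunnel; NFS traffic sent to
# localhost:3049 is relayed over SSH to port 2049 on the server
sudo mount -t nfs -o port=3049,tcp,nfsvers=4 localhost:/export /mnt/secure
```

NFSv4 is convenient to tunnel because it multiplexes everything over a single port; NFSv3 would additionally require forwarding the portmapper and mountd ports.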

Deployment and Performance Analysis of Data Transfer Node Cluster for HPC Environment (HPC 환경을 위한 데이터 전송 노드 클러스터 구축 및 성능분석)

  • Hong, Wontaek;An, Dosik;Lee, Jaekook;Moon, Jeonghoon;Seok, Woojin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.9
    • /
    • pp.197-206
    • /
    • 2020
  • Collaborative research in science applications based on HPC services needs rapid transfer of massive data between research colleagues over wide-area networks. With regard to this requirement, research on enhancing data transfer performance between major superfacilities in the U.S. has been conducted recently. In this paper, we deploy multiple data transfer nodes (DTNs) over high-speed science networks in order to move large amounts of data rapidly in the parallel filesystem of KISTI's Nurion supercomputer, and perform transfer experiments between endpoints with approximately 130 ms round-trip time. We present and compare the transfer throughput achieved for file sets of different sizes. In addition, it is confirmed that a DTN cluster with three nodes can provide about 1.8 and 2.7 times higher transfer throughput than a single node under two types of concurrency and parallelism settings.

Comparative and Combined Performance Studies of OpenMP and MPI Codes (OpenMP와 MPI 코드의 상대적, 혼합적 성능 고찰)

  • Lee, Myung-Ho
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.157-162
    • /
    • 2006
  • Recent High Performance Computing (HPC) platforms can be classified as Shared-Memory Multiprocessors (SMP), Massively Parallel Processors (MPP), and clusters of computing nodes. These platforms are deployed in many scientific and engineering applications that place very high demands on computing power. In order to realize optimal performance for these applications, it is crucial to find and use suitable computing platforms and programming paradigms. In this paper, we use the SPEC HPC 2002 benchmark suite, developed in various parallel programming models (MPI, OpenMP, and a hybrid of MPI/OpenMP), to find optimal computing environments and programming paradigms through performance analyses.

Workflow-based Bio Data Analysis System for HPC (HPC 환경을 위한 워크플로우 기반의 바이오 데이터 분석 시스템)

  • Ahn, Shinyoung;Kim, ByoungSeob;Choi, Hyun-Hwa;Jeon, Seunghyub;Bae, Seungjo;Choi, Wan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.2
    • /
    • pp.97-106
    • /
    • 2013
  • Since the Human Genome Project finished, the cost of human genome analysis has decreased very rapidly, resulting in a sharp increase in the amount of human genome data to be analyzed. As the need for fast analysis of very large bio data such as the human genome grows, non-IT researchers such as biologists should be able to execute many kinds of bio applications, with a variety of characteristics, quickly and effectively in an HPC environment. To accomplish this, a biologist needs to be able to easily define a sequence of bio applications as a workflow, because bio applications generally must be combined and executed in a certain order. This bio workflow should then be executed in a distributed and parallel fashion by allocating computing resources efficiently on an HPC cluster system, which promises better performance and faster response times for very large bio data analysis. This paper proposes a workflow-based data analysis system specialized for bio applications. Using this system, non-IT scientists and researchers can analyze very large bio data easily in an HPC environment.
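The core of such a workflow system, as described in the abstract, is scheduling applications by their dependencies so that independent steps can run in parallel on the cluster. A minimal sketch of that idea (illustrative only; the stage names are hypothetical and this is not the paper's actual system):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A hypothetical genome-analysis workflow: each key lists the steps it depends on
workflow = {
    "align": {"qc"},               # alignment needs quality control first
    "call_variants": {"align"},
    "annotate": {"call_variants"},
    "coverage": {"align"},         # independent of variant calling
}

ts = TopologicalSorter(workflow)
ts.prepare()
batches = []
while ts.is_active():
    ready = list(ts.get_ready())   # steps whose dependencies are all satisfied
    batches.append(sorted(ready))  # each batch could be dispatched in parallel
    ts.done(*ready)

# batches: [['qc'], ['align'], ['call_variants', 'coverage'], ['annotate']]
```

In a real system each batch would be submitted to the cluster's resource manager; here the point is only that "call_variants" and "coverage" become runnable at the same time.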

Assessment of sediment profiles applying nuclear techniques: use of a nucleonic gauge in Panama Canal watershed

  • Xavier Sanchez;Henry Hoo;Patrick Brisset;Reinhardt Pinzon
    • Nuclear Engineering and Technology
    • /
    • v.54 no.11
    • /
    • pp.4236-4243
    • /
    • 2022
  • An industrial nuclear technique based on an X-ray profiler was implemented to estimate the densities, or concentrations, of sediments present in an Atlantic maritime zone, in areas subject to dredging under the governance of the Panama Canal Authority (ACP). The sediment profiles show that in most areas the density lies between 1.00 and 1.15 g/cm3; in one area in particular, however, the density starts at 1.20 g/cm3 and even reaches values greater than 1.50 g/cm3, indicating an already consolidated sediment whose density depends on the depth. Values of 1.265 g/cm3, 1.297 g/cm3, and 1.185 g/cm3 obtained in previous ACP studies are within the range of 1.20-1.30 g/cm3 measured with the nucleonic gauge. However, it should be noted that during the tests with the X-ray profiler, sediment densities greater than the aforementioned limit were also obtained, varying with depth: at depths close to 12 m and 18 m, values reached up to 1.513 g/cm3 and 1.60 g/cm3, respectively. This demonstrates that sediment accumulation depends on depth. The nucleonic gauge is a feasible technique for studying the sedimentation phenomenon in channel basins and even in other projects nationwide.

Fuzzy modeling using HPC-MEANS algorithm and genetic algorithm

  • Ryu, Kye-Won;Lee, Won-Gyu;Kim, Seong-Hwan;Noh, Heung-Sik;Park, Mignon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1994.10a
    • /
    • pp.113-116
    • /
    • 1994
  • In this paper, we suggest a new fuzzy modeling algorithm, which can be easily implemented, by combining the HPC-MEANS algorithm and a genetic algorithm. HPC-MEANS is used to cluster the sample data in input-output space with hyperplanes and to perform a rough structure identification, and the genetic algorithm is used to tune the premise and consequent parameters. To validate the suggested method, we model a system with I/O data from a known system and then compare the two systems.

  • PDF

A Striped Checkpointing Scheme for the Cluster System with the Distributed RAID (분산 RAID 기반의 클러스터 시스템을 위한 분할된 결함허용정보 저장 기법)

  • Chang, Yun-Seok
    • The KIPS Transactions:PartA
    • /
    • v.10A no.2
    • /
    • pp.123-130
    • /
    • 2003
This paper presents a new striped checkpointing scheme for serverless cluster computers, where the local disks attached to the cluster nodes collectively form a distributed RAID with a single I/O space. Striping enables parallel I/O on the distributed disks, and staggering avoids a network bottleneck in the distributed RAID. We demonstrate how to reduce the checkpointing overhead and increase availability by striping and staggering dynamically for communication-intensive applications. The Linpack HPC benchmark and MPI programs are applied to these checkpointing schemes for performance evaluation on a 16-node cluster system. The benchmark results prove the benefits of the striped checkpointing scheme compared to existing schemes, and they are useful for designing an efficient checkpointing scheme for fast rollback recovery from any single-node failure in a cluster system.
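The striping idea in the abstract can be illustrated with a small sketch (an assumption-laden toy, not the paper's implementation: the round-robin layout, function names, and in-memory "disks" are all hypothetical): a checkpoint image is cut into fixed-size stripes assigned round-robin across the nodes' local disks, so each disk can be written in parallel, and the image is reassembled in the same order for rollback recovery.

```python
def stripe(data: bytes, n_disks: int, stripe_size: int) -> list[list[bytes]]:
    """Split a checkpoint image into stripes assigned round-robin to disks."""
    chunks = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    disks = [[] for _ in range(n_disks)]
    for idx, chunk in enumerate(chunks):
        disks[idx % n_disks].append(chunk)  # each disk gets every n_disks-th stripe
    return disks

def reassemble(disks: list[list[bytes]]) -> bytes:
    """Rebuild the checkpoint by reading stripes back in round-robin order."""
    out, i = [], 0
    while True:
        disk = disks[i % len(disks)]
        pos = i // len(disks)
        if pos >= len(disk):   # round-robin layout: first empty slot ends the image
            break
        out.append(disk[pos])
        i += 1
    return b"".join(out)
```

Staggering, i.e. offsetting when each node starts writing so the network is not hit all at once, is a scheduling concern layered on top of this layout and is omitted here.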