• Title/Summary/Keyword: high-performance computing (HPC)

Deployment and Performance Analysis of Data Transfer Node Cluster for HPC Environment (HPC 환경을 위한 데이터 전송 노드 클러스터 구축 및 성능분석)

  • Hong, Wontaek;An, Dosik;Lee, Jaekook;Moon, Jeonghoon;Seok, Woojin
    • KIPS Transactions on Computer and Communication Systems / v.9 no.9 / pp.197-206 / 2020
  • Collaborative research in science applications based on HPC services requires rapid transfer of massive data between research collaborators over wide-area networks. To meet this requirement, research on enhancing data transfer performance between major superfacilities in the U.S. has been conducted in recent years. In this paper, we deploy multiple data transfer nodes (DTNs) over high-speed science networks in order to move large amounts of data rapidly out of the parallel filesystem of KISTI's Nurion supercomputer, and we perform transfer experiments between endpoints with approximately 130 ms round-trip time. We present and compare transfer throughput results for file sets of different sizes. In addition, we confirm that a DTN cluster with three nodes provides about 1.8 and 2.7 times higher transfer throughput than a single node under two different concurrency and parallelism settings.
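
In common data-transfer tooling, concurrency usually means the number of files in flight at once and parallelism the number of streams per file. The paper's transfer software is not reproduced here; the following minimal Python sketch illustrates only that distinction, using local byte-range copies as stand-ins for network streams (the CONCURRENCY/PARALLELISM constants and copy_range helper are invented for illustration).

```python
import os
from concurrent.futures import ThreadPoolExecutor

CONCURRENCY = 3   # files in flight at once ("concurrency")
PARALLELISM = 4   # byte-range streams per file ("parallelism")

def copy_range(src, dst, offset, length):
    """Copy one byte range; stands in for a single network stream."""
    with open(src, "rb") as fin, open(dst, "r+b") as fout:
        fin.seek(offset)
        fout.seek(offset)
        fout.write(fin.read(length))

def transfer_file(src, dst):
    size = os.path.getsize(src)
    with open(dst, "wb") as f:   # pre-size destination so ranges
        f.truncate(size)         # can be written independently
    chunk = -(-size // PARALLELISM)  # ceiling division
    with ThreadPoolExecutor(max_workers=PARALLELISM) as pool:
        for i in range(PARALLELISM):
            off = i * chunk
            pool.submit(copy_range, src, dst, off, min(chunk, size - off))

def transfer_set(pairs):
    """Move a set of (src, dst) files with bounded concurrency."""
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        for src, dst in pairs:
            pool.submit(transfer_file, src, dst)
```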

Implementation and Performance Analysis of Hadoop MapReduce over Lustre Filesystem (러스터 파일 시스템 기반 하둡 맵리듀스 실행 환경 구현 및 성능 분석)

  • Kwak, Jae-Hyuck;Kim, Sangwan;Huh, Taesang;Hwang, Soonwook
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.561-566 / 2015
  • Hadoop is being widely adopted in scientific and commercial fields as an open-source distributed data-processing framework. Recently, attempts have been made to apply high-performance computing technologies to Hadoop for real-time data processing and analysis. In this paper, we extend the Hadoop filesystem library to support Lustre, a popular high-performance parallel distributed filesystem, and implement a Hadoop MapReduce execution environment over the Lustre filesystem. We analyzed Hadoop MapReduce over Lustre using the standard Hadoop benchmark tools and found that it performs 2 to 13 times better than a typical Hadoop MapReduce execution.
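
The paper's Lustre adapter for the Hadoop filesystem library is not shown here. As a rough, hedged sketch of the general idea — running MapReduce against a POSIX-mounted parallel filesystem instead of HDFS — the snippet below generates a core-site.xml whose default filesystem points at a Lustre mount (the path /mnt/lustre/hadoop is an assumption, and the paper's own adapter would register its own URI scheme rather than plain file://).

```python
# Sketch: write a core-site.xml that points Hadoop's default
# filesystem at a POSIX-mounted Lustre directory. The mount path
# is hypothetical; a dedicated Lustre adapter (as in the paper)
# would plug in its own FileSystem implementation instead.
CORE_SITE = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///mnt/lustre/hadoop</value>
  </property>
</configuration>
"""

with open("core-site.xml", "w") as f:
    f.write(CORE_SITE)
```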

HPC(High Performance Computer) Linux Clustering for UltraSPARC(64bit-RISC processor) (UltraSPARC(64bit-RISC processor)을 위한 고성능 컴퓨터 리눅스 클러스터링)

  • 김기영;조영록;장종권
    • Proceedings of the IEEK Conference / 2003.11b / pp.45-48 / 2003
  • High-performance microprocessors and network systems are now easy to obtain, and advances in computer architecture have brought high bandwidth and low latency. Coupling PC-based commodity technology with distributed computing methodologies provides an important advance in the development of single-user dedicated systems. Networks of high-performance, low-cost PCs and workstations can now be joined into cluster systems that rival supercomputers. Unix, Linux, BSD, and Windows NT can all serve as the cluster operating system; we chose Linux for its low cost, high performance, and open technical documentation. This paper benchmarks the performance of Beowulf clustering on UltraSPARC (64-bit RISC) processors, using MPI (Message Passing Interface) and NetPIPE as benchmark tools. Beowulf is a class of experimental parallel workstations developed to evaluate and characterize the design space of this new operating point in price-performance.
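
The abstract names MPI and NetPIPE as the benchmark tools. As a minimal sketch in that spirit (not the paper's actual benchmark), the following mpi4py ping-pong measures round-trip latency and derived bandwidth between two ranks:

```python
# Minimal mpi4py ping-pong in the spirit of NetPIPE-style benchmarks:
# rank 0 bounces messages of increasing size off rank 1 and reports
# round-trip time and derived bandwidth. Run with:
#   mpiexec -n 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
REPS = 100

for size in (1 << p for p in range(0, 21, 4)):  # 1 B .. 1 MiB
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()
    t0 = time.perf_counter()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1); comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0); comm.Send(buf, dest=0)
    dt = (time.perf_counter() - t0) / REPS
    if rank == 0:
        print(f"{size:>8} B  rtt {dt*1e6:8.1f} us  "
              f"bw {2*size/dt/1e6:8.2f} MB/s")
```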


Spark Framework Based on a Heterogenous Pipeline Computing with OpenCL (OpenCL을 활용한 이기종 파이프라인 컴퓨팅 기반 Spark 프레임워크)

  • Kim, Daehee;Park, Neungsoo
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.2 / pp.270-276 / 2018
  • Apache Spark is one of the high-performance in-memory computing frameworks for big-data processing. Recently, general-purpose computing on graphics processing units (GPGPU) has been applied to the Apache Spark framework to improve its performance. Previous Spark-GPGPU frameworks focus on overcoming the implementation difficulties that arise from the differences between the computation environments of GPGPU and the Spark framework. In this paper, we propose a Spark framework based on heterogeneous pipeline computing with OpenCL to further improve performance. The proposed framework overlaps the CPU's Java-to-native memory copies with CPU-GPU communication (DMA) and GPU kernel computation to hide CPU idle time. In addition, the CPU-GPU communication buffers are implemented as switching dual buffers, which reduce the size of the mapped memory region and thereby the memory-mapping overhead. Experimental results showed that the proposed Spark framework based on heterogeneous pipeline computing with OpenCL was up to 2.13 times faster than the previous Spark framework using OpenCL.
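
The core idea — keep two buffers in flight so the copy-in stage of one batch overlaps the consume stage of the other — can be sketched independently of Spark or OpenCL. The producer/consumer stand-ins below are invented for illustration and are not the paper's implementation:

```python
# Conceptual dual-buffer pipeline: while one buffer is being filled
# (standing in for the Java-to-native copy stage), the other is being
# consumed (standing in for the DMA + GPU kernel stage). With a single
# buffer these stages would serialize; with two they overlap.
import threading
import queue

free = queue.Queue()   # buffers ready to be filled
ready = queue.Queue()  # buffers ready to be consumed
for _ in range(2):     # the switching dual buffers
    free.put(bytearray(1 << 20))

def producer(n_batches):
    for i in range(n_batches):
        buf = free.get()
        buf[:8] = i.to_bytes(8, "little")  # "copy" a batch in
        ready.put(buf)
    ready.put(None)  # sentinel: no more batches

def consumer():
    while (buf := ready.get()) is not None:
        _ = sum(buf[:8])  # stand-in for DMA + kernel work
        free.put(buf)     # recycle: switch back to the free pool

t = threading.Thread(target=consumer)
t.start()
producer(8)
t.join()
```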

A Study on the Revitalization of High Performance Computing in Korea

  • Choi, Younkeun;Lee, Hyungjin;Jeong, Hyonam;Cho, Jaehyuk
    • Journal of Internet Computing and Services / v.17 no.3 / pp.129-136 / 2016
  • Successfully realizing the re-emergence of a contemporary and sustainable supercomputing community in South Korea will require devoted effort and support from key government and R&D organizations. We suggest various ways to supplement the support roles defined in the statutory plan, including the committee and plans that often lack the support systems needed for competent ministries to plan properly according to the missions of the research center. This paper argues that adjusting to HPC trends will depend on exposing and correcting problems in the law, as well as on its overall improvement, and that the supercomputing market as a whole must be developed. Following these guidelines should spread demand for supercomputing for national IT resource sharing and foster the development of supercomputing specialists. Other major outcomes include significant increases in research productivity and faster product development.

Big Data Security and Privacy: A Taxonomy with Some HPC and Blockchain Perspectives

  • Alsulbi, Khalil;Khemakhem, Maher;Basuhail, Abdullah;Eassa, Fathy;Jambi, Kamal Mansur;Almarhabi, Khalid
    • International Journal of Computer Science & Network Security / v.21 no.7 / pp.43-55 / 2021
  • The amount of Big Data generated from multiple sources is continuously increasing. Traditional storage methods lack the capacity for such massive amounts of data. Consequently, most organizations have shifted to the use of cloud storage as an alternative option to store Big Data. Despite the significant developments in cloud storage, it still faces many challenges, such as privacy and security concerns. This paper discusses Big Data, its challenges, and different classifications of security and privacy challenges. Furthermore, it proposes a new classification of Big Data security and privacy challenges and offers some perspectives to provide solutions to these challenges.

PARALLEL CFD SIMULATIONS OF PROJECTILE FLOW FIELDS WITH MICROJETS

  • Sahu Jubaraj;Heavey Karen R.
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference / 2006.05a / pp.94-99 / 2006
  • As part of a Department of Defense Grand Challenge Project, advanced high-performance computing (HPC) time-accurate computational fluid dynamics (CFD) techniques have been developed and applied to a new area of aerodynamic research: microjets for the control of small- and medium-caliber projectiles. This paper describes a computational study undertaken to determine the aerodynamic effect of flow control in the afterbody regions of spin-stabilized projectiles at subsonic and low transonic speeds, using an advanced scalable unstructured flow solver on various parallel computers such as the IBM SP4 and Linux clusters. High efficiency is achieved for both steady and time-accurate unsteady flow field simulations using advanced scalable Navier-Stokes computational techniques. Results relating to the code's portability and its performance on the Linux clusters are also addressed. Numerical simulations with the unsteady microjets show that the jets substantially alter the flow field both near the jet and in the base region of the projectile, which in turn affects the forces and moments even at zero degrees angle of attack. The results show the potential of HPC CFD simulations on parallel machines to provide insight into jet-interaction flow fields, leading to improved designs.


Analysis of Traffic and Attack Frequency in the NURION Supercomputing Service Network (누리온 슈퍼컴퓨팅서비스 네트워크에서 트래픽 및 공격 빈도 분석)

  • Lee, Jae-Kook;Kim, Sung-Jun;Hong, Taeyoung
    • KIPS Transactions on Computer and Communication Systems / v.9 no.5 / pp.113-120 / 2020
  • KISTI (Korea Institute of Science and Technology Information) provides HPC (High Performance Computing) services to users from universities, institutes, government, affiliated organizations, companies, and so on. NURION, the supercomputer that launched its official service on January 1, 2019, is the fifth supercomputer established by KISTI and delivers 25.7 petaflops of computational performance. Understanding how supercomputing services are used, and how researchers use them, is critical for system operators and managers, and monitoring and analyzing network traffic is central to this task. In this paper, we briefly introduce the NURION system and its supercomputing service network with its security configuration, and we describe the monitoring system that checks the status of supercomputing services in real time. We analyze inbound/outbound traffic and abnormal (attack) IP address data collected in the NURION supercomputing service network over 11 months (January to November 2019) using time-series and correlation analysis methods.
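
As a hedged illustration of the kind of time-series and correlation analysis the abstract mentions (the column names and CSV layout below are assumptions, not the paper's actual data format), a pandas sketch might look like:

```python
# Sketch: daily-resampled inbound/outbound traffic and attack-IP
# counts, cross-correlated, with a simple 3-sigma anomaly flag.
# "traffic_log.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("traffic_log.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp")

daily = df[["inbound_bytes", "outbound_bytes", "attack_ips"]].resample("D").sum()

# Pearson correlation between traffic volume and attack frequency
print(daily.corr())

# Flag days more than 3 sigma above mean inbound traffic
mu, sigma = daily["inbound_bytes"].mean(), daily["inbound_bytes"].std()
print(daily[daily["inbound_bytes"] > mu + 3 * sigma])
```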

Digitalization as an aggregate performance in the energy transition for nuclear industry

  • Florencia de los Angeles Renteria del Toro;Chen Hao;Akira Tokuhiro;Mario Gomez-Fernandez;Armando Gomez-Torres
    • Nuclear Engineering and Technology / v.56 no.4 / pp.1267-1276 / 2024
  • Emerging industrial technologies have been deployed rapidly as part of energy-transition process innovations. The nuclear industry incorporates several technologies, such as Artificial Intelligence (AI), Machine Learning (ML), Digital Twins, High-Performance Computing (HPC), and Quantum Computing (QC), among others. Factor identification is explained in order to set up a regulatory framework for the digitalization era, providing new capability paths for nuclear technologies in the coming years. The Analytic Network Process (ANP) integrates quantitative and qualitative decision-making analysis to assess the implementation of different aspects of the digital transformation for the New-Energy Transition Era (NETE) with Nuclear Power Infrastructure Development (NPID).
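
ANP builds on pairwise-comparison matrices whose principal eigenvector yields priority weights for the compared factors. As a minimal numeric sketch of that step (the 3x3 matrix below is invented for illustration, not taken from the paper):

```python
# Priority-vector step common to AHP/ANP: the principal eigenvector
# of a reciprocal pairwise-comparison matrix gives relative weights.
# The comparison values here are illustrative only.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # factor 1 compared to factors 1, 2, 3
    [1/3, 1.0, 2.0],   # factor 2
    [1/5, 1/2, 1.0],   # factor 3
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)        # index of principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                       # normalize to priority weights
print("weights:", w)

# Saaty consistency index: CI = (lambda_max - n) / (n - 1)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("consistency index:", ci)
```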

Development of CAE Service Platform Based on Cloud Computing Concept (클라우드 컴퓨팅기반 CAE서비스 플랫폼 개발)

  • Cho, Sang-Hyun
    • Journal of Korea Foundry Society / v.31 no.4 / pp.218-223 / 2011
  • Computer-Aided Engineering (CAE) is very helpful for every manufacturing industry, including the foundry industry. It covers CAD, CAM, and simulation technology, and has become common practice in developing new products and processes. In South Korea, more than 600 foundries exist, their average number of employees is less than 40, and the average age of their workforce is increasing. Software tools can help foundries break out of this situation, and many commercial tools have already been introduced, but their high cost and investment risk make them difficult for SMEs (Small and Medium-sized Enterprises) to adopt. We therefore developed a cloud computing platform to propagate CAE technologies to foundries. It includes HPC (High Performance Computing) resources, platforms, and software, so that users can try out and utilize CAE software online without any up-front investment. In addition, we developed platform APIs (Application Programming Interfaces) to import not only our own CAE codes but also third-party packages into the cloud computing platform, so that CAE developers can upload their products to the platform and distribute them over the Internet.
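
The abstract describes platform APIs through which third-party CAE packages can be imported. A hypothetical sketch of such a registration-and-dispatch API is shown below; every name in it (CaeSolver, register_solver, submit_job) is invented for illustration and is not the paper's actual API:

```python
# Hypothetical plugin-style API for registering CAE solvers on a
# cloud platform. All names are invented; this is not the paper's
# actual platform interface.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CaeSolver:
    name: str
    version: str
    run: Callable[[dict], dict]  # takes job params, returns results

_REGISTRY: Dict[str, CaeSolver] = {}

def register_solver(solver: CaeSolver) -> None:
    """Third-party vendors call this to publish a solver."""
    _REGISTRY[solver.name] = solver

def submit_job(solver_name: str, params: dict) -> dict:
    """Platform entry point: dispatch a job to a registered solver."""
    return _REGISTRY[solver_name].run(params)

# Example: a vendor registers a toy casting-simulation solver.
register_solver(CaeSolver(
    name="toy-casting",
    version="0.1",
    run=lambda p: {"fill_time_s": p.get("volume_cm3", 0) * 0.01},
))
print(submit_job("toy-casting", {"volume_cm3": 500}))
```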