• Title/Summary/Keyword: supercomputer


Development of Grid Service Based Molecular Docking Application (그리드 서비스 기반 분자 도킹 어플리케이션 개발)

  • Lee, HwaMin;Chin, SungHo;Lee, JongHyuk;Park, Seongbin;Yu, HeonChang
    • The Journal of Korean Association of Computer Education / v.9 no.4 / pp.63-74 / 2006
  • Molecular docking is the process of reducing an unmanageable number of compounds to a limited number of compounds for a target of interest by means of computational simulation. It is a large-scale scientific application that requires both large computing power and large data storage capability. Previous applications or software packages for molecular docking were developed to run on a supercomputer, a workstation, or a cluster computer. However, virtual screening on a supercomputer suffers from the supercomputer's high cost, while virtual screening on a workstation or a cluster computer requires a long execution time. We therefore propose a Grid-service-based molecular docking application. We designed a resource broker and a data broker to support an efficient molecular docking service and developed various services for molecular docking.
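The resource-broker role described in the abstract can be illustrated with a minimal sketch: pick the least-loaded node that still has enough storage for a docking job. The node names, the load metric, and the storage threshold below are hypothetical illustrations, not part of the paper's design:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_load: float       # fraction of CPU in use, 0.0..1.0 (assumed metric)
    free_storage_gb: float

def pick_node(nodes, min_storage_gb):
    """Return the least-loaded node with enough free storage, or None."""
    eligible = [n for n in nodes if n.free_storage_gb >= min_storage_gb]
    return min(eligible, key=lambda n: n.cpu_load) if eligible else None

nodes = [Node("grid-a", 0.80, 120.0),
         Node("grid-b", 0.35, 40.0),
         Node("grid-c", 0.10, 5.0)]
best = pick_node(nodes, min_storage_gb=20.0)
print(best.name)  # grid-c is lighter but lacks storage, so grid-b wins
```

A real broker would also weigh queue length and network distance to the data broker; this sketch only shows the selection step.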


Performance Characterization of Tachyon Supercomputer using Hybrid Multi-zone NAS Parallel Benchmarks (하이브리드 병렬 프로그램을 이용한 타키온 슈퍼컴퓨터의 성능)

  • Park, Nam-Kyu;Jeong, Yoon-Su;Yi, Hong-Suk
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.1 / pp.138-144 / 2010
  • Tachyon, a recently introduced system, is a high-performance supercomputer composed of AMD Barcelona nodes. In this paper, we verify the performance and parallel scalability of Tachyon using the multi-zone NAS Parallel Benchmarks (NPB), a benchmark suite that supports the hybrid parallel programming method. To test hybrid parallel execution performance, classes B and C of BT-MZ in NPB version 3.3 were used, and the parallel scalability test was carried out with up to 1,024 processes on Tachyon. This is the first hybrid parallel computing result obtained in Korea using more than 1,024 processes. The results show that the hybrid parallel method can be very efficient and useful on high-performance computing systems with multi-core technology such as Tachyon.
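The scalability claim can be put in context with a toy Amdahl's-law estimate of the speedup a partially serial code can reach at 1,024 processes. This is not the paper's measured data, and the 1% serial fraction below is an assumption chosen only for illustration:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Ideal speedup of a code whose serial_fraction cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even a 1% serial fraction caps speedup well below the process count,
# which is why measured scalability tests like the paper's matter.
for p in (16, 256, 1024):
    print(p, round(amdahl_speedup(0.01, p), 1))
```

Hybrid MPI+OpenMP codes such as BT-MZ try to push the effective serial fraction down by exploiting both inter-node and intra-node parallelism.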

Analysis of Traffic and Attack Frequency in the NURION Supercomputing Service Network (누리온 슈퍼컴퓨팅서비스 네트워크에서 트래픽 및 공격 빈도 분석)

  • Lee, Jae-Kook;Kim, Sung-Jun;Hong, Taeyoung
    • KIPS Transactions on Computer and Communication Systems / v.9 no.5 / pp.113-120 / 2020
  • KISTI (Korea Institute of Science and Technology Information) provides HPC (High Performance Computing) services to users in universities, institutes, government agencies, affiliated organizations, companies, and so on. NURION, the supercomputer that launched its official service on January 1, 2019, is the fifth supercomputer established by KISTI and delivers 25.7 petaflops of computational performance. Understanding how supercomputing services are used, and what researchers are using them for, is critical for system operators and managers, and monitoring and analyzing network traffic is central to this. In this paper, we briefly introduce the NURION system and the supercomputing service network with its security configuration, and we describe the monitoring system that checks the status of supercomputing services in real time. We analyze inbound/outbound traffic and abnormal (attack) IP address data collected in the NURION supercomputing service network over 11 months (from January to November 2019) using time-series and correlation analysis methods.
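The kind of correlation analysis the abstract mentions can be sketched with a plain Pearson correlation between daily inbound traffic and attack-IP counts. The daily totals below are invented for illustration, not NURION measurements:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily totals for one week (not real NURION data)
inbound_gb = [210, 340, 180, 400, 390, 220, 310]
attack_ips = [12, 25, 9, 31, 18, 14, 22]
print(round(pearson(inbound_gb, attack_ips), 2))  # ≈ 0.86
```

A strong positive coefficient would suggest attack activity scales with overall traffic volume, which is the sort of relationship the paper's time-series analysis examines over 11 months of data.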

A Study on Scalability of Profiling Method Based on Hardware Performance Counter for Optimal Execution of Supercomputer (슈퍼컴퓨터 최적 실행 지원을 위한 하드웨어 성능 카운터 기반 프로파일링 기법의 확장성 연구)

  • Choi, Jieun;Park, Guenchul;Rho, Seungwoo;Park, Chan-Yeol
    • KIPS Transactions on Computer and Communication Systems / v.9 no.10 / pp.221-230 / 2020
  • A supercomputer, which shares limited resources among multiple users, needs a way to optimize the execution of applications. For this, it is useful for system administrators to obtain prior information and hints about the applications to be executed. In most high-performance computing system operations, system administrators strive to increase system productivity by collecting information about execution duration and resource requirements from users when tasks are submitted. They also use profiling techniques that generate the necessary information from statistics such as system usage in order to increase system utilization. In a previous study, we proposed a scheduling optimization technique based on a hardware performance counter-based profiling technique that enables characterization of applications without any understanding of the source code. In this paper, we constructed a profiling testbed cluster to support optimal execution on the supercomputer and experimented with the scalability of the profiling method for analyzing application characteristics in the built cluster environment. We also showed that the profiling method can be utilized in actual scheduling optimization, with scalability, even if the application class is reduced or the number of nodes used for profiling is minimized. Even though the number of nodes used for profiling was reduced to 1/4, the execution time of the application increased by only 1.08% compared to profiling using all nodes, and the scheduling optimization performance improved by up to 37% compared to sequential execution. In addition, profiling with a reduced problem size cut the cost of collecting profiling data to a quarter and yielded a performance improvement of up to 35%.
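The counter-based co-scheduling idea can be sketched as follows: classify each application from counter-derived metrics (e.g. instructions per cycle and memory bandwidth), then pair complementary jobs on a node so they do not contend for the same resource. The job names, metric values, and the bandwidth threshold are all hypothetical illustrations, not values from the study:

```python
# Hypothetical per-job metrics derived from hardware performance counters
jobs = {
    "app-A": {"ipc": 2.1, "mem_bw_gbs": 3.0},
    "app-B": {"ipc": 0.6, "mem_bw_gbs": 45.0},
    "app-C": {"ipc": 1.9, "mem_bw_gbs": 5.0},
    "app-D": {"ipc": 0.7, "mem_bw_gbs": 50.0},
}

def classify(metrics, bw_threshold=20.0):
    """Label a job by its dominant resource demand (assumed threshold)."""
    return "memory-bound" if metrics["mem_bw_gbs"] >= bw_threshold else "cpu-bound"

def pair_jobs(jobs):
    """Co-schedule one cpu-bound job with one memory-bound job per node."""
    cpu = [name for name, m in jobs.items() if classify(m) == "cpu-bound"]
    mem = [name for name, m in jobs.items() if classify(m) == "memory-bound"]
    return list(zip(cpu, mem))

print(pair_jobs(jobs))  # [('app-A', 'app-B'), ('app-C', 'app-D')]
```

Pairing a compute-heavy job with a bandwidth-heavy one is a common heuristic behind the kind of scheduling gains the paper reports; the real system derives its classes from measured counters rather than fixed thresholds.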

HUGE DIRECT NUMERICAL SIMULATION OF TURBULENT COMBUSTION - TOWARD PERFECT SIMULATION OF IC ENGINE -

  • Tanahashi, Mamoru;Seo, Takehiko;Sato, Makoto;Tsunemi, Akihiko;Miyauchi, Toshio
    • Journal of computational fluids engineering / v.13 no.4 / pp.114-125 / 2008
  • The current state and prospects of DNS of turbulence and turbulent combustion are discussed together with the performance trend of the fastest supercomputers in the world. Based on this perspective of DNS of turbulent combustion, the possibility of a perfect simulation of an IC engine is shown. In 2020, the perfect simulation will be realized with 30 billion grid points on a 1 EXAFlops supercomputer, which requires 4 months of CPU time. The CPU time will be reduced to about 4 days if several developments now under way in fundamental research are achieved. To shorten the CPU time required for DNS of turbulent combustion, two numerical methods are introduced into the fully explicit, fully compressible DNS code. One is a compact finite difference filter that reduces spatial resolution requirements and numerical oscillations at small scales; the other is the well-known point-implicit scheme, which avoids the nanosecond-order time steps required by fully explicit DNS. The availability and accuracy of these numerical methods have been confirmed carefully for auto-ignition, planar laminar flames, and turbulent premixed flames. Toward DNS of an IC engine with a realistic kinetic mechanism, several DNS studies of elementary combustion processes in IC engines have been conducted.

Simulation of Grape Downy Mildew Development Across Geographic Areas Based on Mesoscale Weather Data Using Supercomputer

  • Kim, Kyu-Rang;Seem, Robert C.;Park, Eun-Woo;Zack, John W.;Magarey, Roger D.
    • The Plant Pathology Journal / v.21 no.2 / pp.111-118 / 2005
  • Weather data for disease forecasts are usually derived from automated weather stations (AWS) that may be dispersed across a region in an irregular pattern. We have developed an alternative method to simulate local-scale, high-resolution weather and plant disease in a grid pattern. The system incorporates a simplified mesoscale boundary layer model, LAWSS, for estimating local conditions such as air temperature and relative humidity. It also integrates special models for estimating surface wetness duration and for disease forecasts, such as the grapevine downy mildew forecast model, DMCast. The system can recreate weather forecasts utilizing the NCEP/NCAR reanalysis database, which contains over 57 years of archived and corrected global upper-air conditions. The highest horizontal resolution of 0.150 km was achieved by running 5-step nested child grids inside coarse mother grids. Over the Finger Lakes and Chautauqua Lake regions of New York State, the system simulated three growing seasons to estimate the risk of grape downy mildew at 1 km resolution. Outputs were represented as regional maps or as site-specific graphs. The highest resolutions were achieved over North America, but the system is functional for any global location. The system is expected to be a powerful tool for site selection and for reanalysis of historical plant disease epidemics.

Development of Pre- and Post-processing System for Supercomputing-based Large-scale Structural Analysis (슈퍼컴퓨팅 기반의 대규모 구조해석을 위한 전/후처리 시스템 개발)

  • Kim, Jae-Sung;Lee, Sang-Min;Lee, Jae-Yeol;Jeong, Hee-Seok;Lee, Seung-Min
    • Korean Journal of Computational Design and Engineering / v.17 no.2 / pp.123-131 / 2012
  • The computational resources required to perform structural analysis are increasing rapidly. The analysis problems currently demanded by industry are typically large-scale, with more than millions of degrees of freedom (DOFs). These large-scale problems require high-performance analysis codes as well as hardware systems such as supercomputers or cluster systems. In this paper, a pre- and post-processing system for supercomputing-based large-scale structural analysis is presented. The proposed system has a 3-tier architecture and three main components: a geometry viewer, a pre-/post-processor, and a supercomputing manager. To analyze large-scale problems, the ADVENTURE solid solver was adopted as a general-purpose finite element solver, and the supercomputer named 'tachyon' was adopted as the parallel computational platform. The problem-solving performance and scalability of this structural analysis system are demonstrated by illustrative examples with different numbers of degrees of freedom.