• Title/Summary/Keyword: supercomputer

A Study on the Infra-Capacity Analysis for Optimal Operating Environments of Supercomputer Center (슈퍼컴퓨터센터의 최적 운영환경을 위한 기반시설 용량 산정에 관한 연구)

  • Ryu, Young-Hee;Sung, Jin-Woo;Kim, Duk-Su;Kil, Seong-Ho
    • KIEAE Journal
    • /
    • v.10 no.2
    • /
    • pp.19-24
    • /
    • 2010
  • According to the increasing demand for supercomputing, a dedicated supercomputer building is required to house a supercomputer that promotes high-end R&D and provides public-service infrastructure at the national level. KISTI, the public supercomputer center operating the 4th supercomputer (360 Tflops), is experiencing a shortage of infrastructure capacity caused by the increased system scale, and the situation is expected to become more serious when the 5th and 6th supercomputers are installed. This study projects the performance level of the 5th supercomputer system and derives optimal operating environments by assessing infrastructure capacity, exploring how such environments can be constructed through an infrastructure-capacity analysis of the supercomputer center. The results can be used to review KISTI's conditions as the only supercomputer center in Korea, and they provide reference data, including a feasibility perspective, for planning a new dedicated supercomputer center.
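As a rough illustration of infra-capacity sizing of the kind the study performs, the sketch below converts a performance target into an estimated facility power requirement. The 360 Tflops figure comes from the abstract; the efficiency, PUE, and margin values are assumptions made here for illustration, not numbers from the paper.

```python
# Rough infra-capacity sizing sketch (illustrative assumptions only).
# The 360 Tflops figure is from the abstract; efficiency, PUE, and
# margin values below are hypothetical placeholders.

def infra_capacity_kw(peak_tflops, gflops_per_watt, pue=1.5, margin=1.2):
    """Estimate the facility power (kW) needed to host a system.

    peak_tflops     -- target peak performance in Tflops
    gflops_per_watt -- assumed system energy efficiency
    pue             -- power usage effectiveness (cooling/distribution overhead)
    margin          -- headroom for growth and redundancy
    """
    it_load_kw = peak_tflops / gflops_per_watt   # 1 Tflops / (Gflops/W) = 1 kW
    facility_kw = it_load_kw * pue               # add facility overhead
    return facility_kw * margin                  # add headroom

# 4th system scale from the abstract (360 Tflops); efficiency value is assumed.
print(round(infra_capacity_kw(360, gflops_per_watt=0.5)), "kW")
```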

Technology Development Trends and the Future of Supercomputers (슈퍼컴퓨터의 기술발전추세와 미래)

  • 유여백
    • 전기의세계
    • /
    • v.38 no.7
    • /
    • pp.46-52
    • /
    • 1989
  • We have briefly reviewed the technology development trends of several kinds of supercomputers, including vector supercomputers. Future supercomputers are expected to deliver TeraFLOPS-class performance, vastly more powerful than today's systems, by fully exploiting advances in VLSI technology, chips made from new materials such as GaAs, better packaging based on optical connections, larger memories, and parallel processing. Domain-specific supercomputers will also continue to advance; as their performance rises sharply and their prices fall, they will come into everyday use in many fields, including science and technology.

  • PDF

A Study on the Government's Investment Priorities for Building a Supercomputer Joint Utilization System

  • Hyungwook Shim;Jaegyoon Hahm
    • Asian Journal of Innovation and Policy
    • /
    • v.12 no.2
    • /
    • pp.200-215
    • /
    • 2023
  • The purpose of this paper is to analyze the Korean government's investment priorities for establishing a supercomputer joint-utilization system using the Analytic Hierarchy Process (AHP). The AHP model was designed as a two-layer structure consisting of two areas, specialized infrastructure and a one-stop joint-utilization service, and four evaluation items corresponding to detailed tasks. For the weight of each evaluation item, a cost-efficiency index reflecting the annual budget was developed for the first time and applied in the weight-calculation process. A survey of supercomputer experts was conducted for the AHP analysis, and priorities were derived from 22 responses that passed reliability verification. The analysis shows that the government's investment priority is highest for dividing infrastructure among the Specialized Centers, followed by building resources in stages. The results will be used to select economically sound promotion plans and to prepare strategies for establishing the government's supercomputer joint-utilization system.
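For readers unfamiliar with AHP, a minimal sketch of the weight-derivation step is shown below: priority weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with a consistency ratio. The matrix values and the four items are invented for illustration and are not the paper's survey data or its cost-efficiency index.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four evaluation items
# (Saaty 1-9 scale); values are illustrative, not the paper's survey data.
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   2,   1/2],
    [1/5, 1/2, 1,   1/3],
    [1/2, 2,   3,   1  ],
], dtype=float)

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (CR < 0.1 is the usual acceptance threshold).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90          # 0.90 = random index for n = 4
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```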

An Economic Analysis on the Operation Effect of Public Supercomputer (공용 슈퍼컴퓨터 운영효과에 대한 경제성 분석)

  • Lee, Hyung Jin;Choi, Youn Keun;Park, Jinsoo
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.23 no.4
    • /
    • pp.69-79
    • /
    • 2018
  • We perform a cost-benefit analysis to measure the effect of a shared public supercomputer. The costs of two alternatives, sharing the public supercomputer at a national center versus each organization procuring its own supercomputer as needed, are estimated and compared to support decision making. In the sharing case, the cost can be predicted straightforwardly from the results of the previous public supercomputer. The cost of individual introduction, however, is hard to predict because it varies considerably with the required system performance, location, human factors, and so on. Accordingly, this research proposes an objective and valid method for estimating the cost of individual cases. Finally, we analyze the economic effect of operating a public supercomputer by comparing the sharing cost with that of individual deployments. The analysis confirms that sharing the public supercomputer reduces operating costs by about 10.3 billion won annually compared with individual introduction, so a considerable economic benefit can be expected.
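The core comparison is an annualized-cost calculation; the sketch below illustrates it with placeholder figures (system counts, capital costs, lifetimes, and operating costs are all invented here, so the resulting saving will not reproduce the paper's 10.3 billion won figure).

```python
# Cost comparison sketch with placeholder figures (billion KRW per year).

def annual_cost(capex, lifetime_years, opex_per_year):
    """Annualized cost = straight-line capital recovery + operating cost."""
    return capex / lifetime_years + opex_per_year

# Alternative 1: one shared public supercomputer at the national center.
shared = annual_cost(capex=60.0, lifetime_years=5, opex_per_year=8.0)

# Alternative 2: many organizations each procuring a smaller system.
orgs = 20
individual = orgs * annual_cost(capex=4.0, lifetime_years=5, opex_per_year=1.0)

print(f"shared: {shared:.1f}, individual: {individual:.1f}, "
      f"saving: {individual - shared:.1f} billion KRW/year")
```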

A Study on the Role and B/C Analysis of National Supported Supercomputing Center (국가 주도 슈퍼컴퓨터센터의 역할과 B/C 분석 및 발전방향)

  • 이정희
    • Journal of Korea Technology Innovation Society
    • /
    • v.1 no.3
    • /
    • pp.402-418
    • /
    • 1998
  • This study analyzes the role and B/C (Benefit/Cost) of the nationally supported supercomputing center in the course of Korea's informatization drive. The ETRI Supercomputing Center, the nationally supported supercomputing center, was established in 1967 as a laboratory of KIST (Korea Institute of Science and Technology) and has played a leading role as the national HPCC (High Performance Computing and Communication) facility in Korea. The B/C analysis of the ETRI Supercomputer Center shows that its benefits over the last 30 years amount to roughly twenty times its costs (a brief worked example of the B/C arithmetic follows below). It is suggested that the ETRI Supercomputer Center be developed into a National Supercomputing Center (NSC) as soon as possible.

  • PDF
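A benefit/cost ratio of the kind reported above is simply discounted benefits divided by discounted costs; the sketch below shows the arithmetic with invented 30-year cash flows and an assumed discount rate.

```python
# Benefit/cost ratio sketch with invented yearly cash flows (not the paper's data).

def bc_ratio(benefits, costs, discount_rate=0.05):
    """B/C = sum of discounted benefits / sum of discounted costs."""
    pv = lambda xs: sum(x / (1 + discount_rate) ** t for t, x in enumerate(xs))
    return pv(benefits) / pv(costs)

# 30 hypothetical years of benefits and costs (arbitrary units).
benefits = [20.0] * 30
costs = [1.0] * 30
print(round(bc_ratio(benefits, costs), 1))   # 20.0 with these placeholder values
```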

Preferences for Supercomputer Resources Using the Logit Model

  • Hyungwook Shim;Jaegyoon Hahm
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.4
    • /
    • pp.261-267
    • /
    • 2023
  • Public research that requires large computational resources uses the supercomputers of the National Supercomputing Center in the Republic of Korea, and the average resource utilization rate over the past three years has reached 80%. To ensure the operational stability of this national infrastructure, specialized centers have been established to distribute the computational demand concentrated at the national center. Building resources of an appropriate scale requires an accurate prediction of computational demand, so it is important to estimate the inflow and outflow of demand between the national and specialized centers when sizing the specialized centers. We conducted a logit-model analysis based on probabilistic utility theory to derive individual users' preferences for future supercomputer resources. The analysis shows that the specialized centers' share of computational demand is 59.5%, which exceeds the resource plan of the existing specialized centers.
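The 59.5% share figure is the kind of output a binary logit model produces; the sketch below shows how a choice probability between the national center and a specialized center follows from linear utilities. Every coefficient and attribute value here is invented for illustration rather than taken from the paper's estimation.

```python
import math

# Binary logit sketch: probability a user chooses the specialized center
# over the national center. All coefficients and attributes are invented.

def utility(coeffs, attrs):
    """Linear-in-parameters systematic utility V = sum(beta_k * x_k)."""
    return sum(b * x for b, x in zip(coeffs, attrs))

def logit_share(v_specialized, v_national):
    """P(specialized) = exp(Vs) / (exp(Vs) + exp(Vn))."""
    return math.exp(v_specialized) / (math.exp(v_specialized) + math.exp(v_national))

beta = [-0.8, 1.2]                  # e.g. queue-wait penalty, domain-software-fit bonus
v_spec = utility(beta, [0.5, 1.0])  # hypothetical attribute levels
v_natl = utility(beta, [1.0, 0.4])
print(f"P(specialized) = {logit_share(v_spec, v_natl):.3f}")
```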

Enabling Performance Intelligence for Application Adaptation in the Future Internet

  • Calyam, Prasad;Sridharan, Munkundan;Xu, Yingxiao;Zhu, Kunpeng;Berryman, Alex;Patali, Rohit;Venkataraman, Aishwarya
    • Journal of Communications and Networks
    • /
    • v.13 no.6
    • /
    • pp.591-601
    • /
    • 2011
  • Today's Internet, which provides communication channels with best-effort end-to-end performance, is rapidly evolving into an autonomic global computing platform. Achieving autonomicity in the Future Internet will require a performance architecture that (a) allows users to request and own 'slices' of geographically distributed host and network resources, (b) measures and monitors end-to-end host and network status, (c) enables analysis of the measurements within expert systems, and (d) provides performance intelligence in a timely manner so that applications can adapt to improve performance and scalability. We describe the requirements and design of one such "Future Internet performance architecture" (FIPA) and present our reference implementation, called 'OnTimeMeasure.' OnTimeMeasure comprises several measurement-related services that can interact with each other and with existing measurement frameworks to enable performance intelligence. We also explain our OnTimeMeasure deployment in the Global Environment for Network Innovations (GENI), a collaborative research initiative to build a sliceable Future Internet infrastructure. Further, we present an application-adaptation case study in GENI that uses OnTimeMeasure-enabled performance intelligence for dynamic resource allocation within thin-client based virtual desktop clouds, and show how a virtual desktop cloud provider in the Future Internet can use this intelligence to increase cloud scalability while delivering satisfactory user quality of experience.
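At its core, the architecture is a closed measure-analyze-adapt loop; the sketch below illustrates that loop generically for a virtual desktop cloud. It does not use or imitate OnTimeMeasure's actual API, and all function names, values, and thresholds are hypothetical.

```python
import random
import time

# Generic measure -> analyze -> adapt loop in the spirit of the architecture
# described above. This is NOT OnTimeMeasure's API; every name and threshold
# is a hypothetical placeholder.

def measure_latency_ms():
    """Stand-in for an end-to-end host/network measurement service."""
    return random.uniform(20, 120)

def adapt(allocated_sessions, latency_ms, threshold_ms=80):
    """Scale a virtual-desktop slice down when latency degrades, up otherwise."""
    if latency_ms > threshold_ms:
        return max(1, allocated_sessions - 1)   # shed load to protect quality
    return allocated_sessions + 1               # headroom available, admit more

sessions = 10
for _ in range(5):
    latency = measure_latency_ms()
    sessions = adapt(sessions, latency)
    print(f"latency={latency:5.1f} ms -> sessions={sessions}")
    time.sleep(0.1)
```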

An Interface between Computing, Ecology and Biodiversity: Environmental Informatics

  • Stockwell, David;Arzberger, Peter;Fountain, Tony;Helly, John
    • The Korean Journal of Ecology
    • /
    • v.23 no.2
    • /
    • pp.101-106
    • /
    • 2000
  • The grand challenge for the 21st century is to harness knowledge of the earth's biological and ecological diversity to understand how they shape global environmental systems. This insight benefits both science and society. Biological and ecological data are among the most diverse and complex in the scientific realm, spanning vast temporal and spatial scales, distant localities, and multiple disciplines. Environmental informatics is an emerging discipline that applies information science, ecology, and biodiversity research to the understanding and solution of environmental problems. In this paper we give an overview of the experiences of the San Diego Supercomputer Center (SDSC) with this new multidisciplinary science, discuss the application of computing resources to the study of environmental systems, and outline strategic partnership activities in environmental informatics that are underway. We hope to foster interactions between ecology, biodiversity, and conservation researchers in the East Asia-Pacific Rim and those at SDSC and the Partnership for Biodiversity Informatics.

  • PDF

The Analysis of the Supercomputer Trends in Weather and Climate Research Areas (기상 및 기후 연구 분야의 슈퍼컴퓨터 보유 추이 분석)

  • Joh, Minsu;Park, Hyei-Sun
    • Atmosphere
    • /
    • v.15 no.2
    • /
    • pp.119-127
    • /
    • 2005
  • Predicting future weather and climate conditions is a challenging task. Since ENIAC was developed, weather and climate research has benefited from improvements in computer hardware: high-performance computers allow researchers to build high-quality models that make good predictions of what might happen in the future. Statistics on high-performance computers are therefore of major interest not only to manufacturers but also to users such as weather and climate researchers. For this reason, the Top500 Supercomputer Sites report has been released twice a year since 1993 to provide a reliable basis for tracking and detecting trends in high-performance computing. Using the Top500 reports, this article provides a short review of supercomputer trends in the weather and climate research areas.
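A trend analysis of this kind amounts to counting and aggregating Top500 entries for the weather/climate segment across list editions; the sketch below assumes a locally prepared CSV with hypothetical column names, since the article does not specify its extraction procedure.

```python
import csv
from collections import defaultdict

# Count weather/climate systems per Top500 edition and sum their performance.
# The CSV layout and column names ("edition", "segment", "rmax_tflops") are
# assumptions for this sketch, not the article's or Top500's actual schema.

def weather_climate_trend(path):
    count, perf = defaultdict(int), defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            segment = row["segment"].lower()
            if "weather" in segment or "climate" in segment:
                count[row["edition"]] += 1
                perf[row["edition"]] += float(row["rmax_tflops"])
    return {ed: (count[ed], perf[ed]) for ed in sorted(count)}

# Example usage with a hypothetical local file:
# print(weather_climate_trend("top500_1993_2005.csv"))
```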

The Construction of the UPS system for supercomputer system (슈퍼컴퓨터 시스템을 위한 무정전 전원공급 시스템의 구축)

  • Sung, Jin-Woo;Woo, Joon;Kim, Sung-Jun;Choi, Yun-Keon;Hong, Tae-Yeong;Lee, Young-Joo;Jang, Ji-Hoon;Lee, Sang-Dong
    • Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers Conference
    • /
    • 2009.05a
    • /
    • pp.462-465
    • /
    • 2009
  • This study describes the design and construction of an uninterruptible power supply (UPS) system for a supercomputing system. KISTI's supercomputer operates 24 hours a day, and its power must be supplied within stable voltage and frequency ranges. We analyzed four years of electric power usage, weighed cost against stability, and constructed a UPS system for the supercomputer (a small sizing sketch follows below).

  • PDF
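Since the abstract only outlines the design, the sketch below shows one common way to size an N+1 UPS configuration from a measured peak load; all figures and the method shown here are illustrative assumptions, not KISTI's actual design data.

```python
# UPS sizing sketch with invented figures; not KISTI's actual method or data.

def ups_rating_kva(peak_load_kw, power_factor=0.9, growth=1.15, redundancy_units=2):
    """Return the per-unit kVA rating for an N+1 parallel UPS configuration.

    peak_load_kw     -- highest measured IT load over the observation period
    power_factor     -- assumed load power factor (kW -> kVA conversion)
    growth           -- allowance for load growth over the UPS lifetime
    redundancy_units -- number of units (N) that must carry the full load
    """
    required_kva = peak_load_kw * growth / power_factor
    return required_kva / redundancy_units   # each of the N units carries 1/N

# Hypothetical 4-year peak of 900 kW measured on the supercomputer feeders.
print(round(ups_rating_kva(900)), "kVA per unit (plus one spare unit for N+1)")
```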