• Title/Summary/Keyword: Supercomputers

52 search results

Inelastic vector finite element analysis of RC shells

  • Min, Chang-Shik;Gupta, Ajaya Kumar
    • Structural Engineering and Mechanics
    • /
    • v.4 no.2
    • /
    • pp.139-148
    • /
    • 1996
  • Vector algorithms and the relative importance of the four basic modules (computation of element stiffness matrices, assembly of the global stiffness matrix, solution of the system of linear simultaneous equations, and calculation of stresses and strains) of a finite element computer program for inelastic analysis of reinforced concrete shells are presented. The performance of the vector program is compared with that of a scalar program. For a cooling tower problem on a Cray Y-MP, the speedup factor from the scalar to the vector program is 34 for the element stiffness matrix calculation, 25.3 for the assembly of the global stiffness matrix, 27.5 for the equation solver, and 37.8 for the computation of stresses, strains, and nodal forces. The overall speedup factor is 30.9. When the equation solver alone is vectorized, even though it is computationally the most intensive part of a finite element program, a speedup factor of only 1.9 is achieved. When the rest of the program is also vectorized, a large additional speedup factor of 15.9 is attained. It is therefore very important to vectorize all the modules of a nonlinear program to realize the full potential of supercomputers. The vector finite element program for inelastic analysis of RC shells with layered elements developed in the present study enabled us to perform mesh convergence studies. The program can be used for studying the ultimate behavior of RC shells and as a design tool.
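
The diminishing return from vectorizing the solver alone is Amdahl's law at work. The sketch below illustrates this in Python; the per-module runtime shares are hypothetical (the abstract reports only the resulting speedup factors), while the per-module speedups are taken from the abstract.

```python
# Amdahl's-law sketch of why partial vectorization pays off so little.
# The per-module scalar-runtime shares below are hypothetical; the
# abstract reports only the resulting speedup factors.

def overall_speedup(fractions, speedups):
    """fractions: share of scalar runtime per module (sums to 1);
    speedups: per-module vector/scalar speedup factors."""
    return 1.0 / sum(f / s for f, s in zip(fractions, speedups))

# Hypothetical scalar-runtime shares for: element stiffness matrices,
# global assembly, equation solver, stress/strain recovery.
fractions = [0.20, 0.10, 0.55, 0.15]

# Vectorize only the equation solver (speedup 27.5), leave the rest scalar:
solver_only = overall_speedup(fractions, [1, 1, 27.5, 1])

# Vectorize every module (speedups from the abstract):
all_modules = overall_speedup(fractions, [34, 25.3, 27.5, 37.8])

print(f"solver only: {solver_only:.1f}x, all modules: {all_modules:.1f}x")
```

Even with the solver taking over half the scalar runtime, the unvectorized modules cap the overall gain at roughly 2x, while full vectorization approaches the per-module factors.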

A Study on the Infra-Capacity Analysis for Optimal Operating Environments of Supercomputer Center (슈퍼컴퓨터센터의 최적 운영환경을 위한 기반시설 용량 산정에 관한 연구)

  • Ryu, Young-Hee;Sung, Jin-Woo;Kim, Duk-Su;Kil, Seong-Ho
    • KIEAE Journal
    • /
    • v.10 no.2
    • /
    • pp.19-24
    • /
    • 2010
  • With the increasing demand for supercomputing, a dedicated supercomputer building is needed to house a supercomputer that promotes high-end R&D and serves as public service infrastructure at the national level. KISTI, the public supercomputer center operating the 4th supercomputer (with a capacity of 360 Tflops), is experiencing a shortage of infrastructure capacity caused by the increased system capacity, and the situation is expected to grow more serious when the 5th and 6th supercomputers are installed. This study analyzes the 5th supercomputer system by projecting its performance level and assessing infrastructure capacity, and explores ways to construct an optimal operating environment through an infrastructure-capacity analysis of the supercomputer center. The study can be used to review KISTI's conditions as the only supercomputer center in Korea, and it provides reference data, including an analysis of infrastructure systems, for feasibility planning of a new dedicated supercomputer center.

Time Domain Seismic Waveform Inversion based on Gauss Newton method (시간영역에서 가우스뉴튼법을 이용한 탄성파 파형역산)

  • Sheen, Dong-Hoon;Baag, Chang-Eob
    • 한국지구물리탐사학회:학술대회논문집
    • /
    • 2006.06a
    • /
    • pp.131-135
    • /
    • 2006
  • A seismic waveform inversion for prestack seismic data based on the Gauss-Newton method is presented. The Gauss-Newton method for seismic waveform inversion was proposed in the 1980s but has rarely been studied since, principally because of its extensive computational and memory requirements. To overcome these, we used coarser grids in the inversion stage than in the wave propagation simulation, applied temporal windowing of the simulation, approximated the virtual sources used to calculate partial derivatives, and implemented the algorithm on parallel supercomputers. We show that the Gauss-Newton method has high resolving power and a fast convergence rate, and we demonstrate potential applications to real seismic data.
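
As a rough illustration of the Gauss-Newton update the inversion is built on, here is a generic least-squares sketch. The toy exponential model and its values are illustrative stand-ins, not the authors' wave-equation code.

```python
import numpy as np

# Generic Gauss-Newton least-squares sketch, the same update that
# underlies the waveform inversion: m <- m + (J^T J)^{-1} J^T r.
# The toy model d(t) = a * exp(-b t) stands in for the wave simulation.

def forward(m, t):
    a, b = m
    return a * np.exp(-b * t)

def jacobian(m, t):
    a, b = m
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(-b * t)               # partial derivative w.r.t. a
    J[:, 1] = -a * t * np.exp(-b * t)      # partial derivative w.r.t. b
    return J

t = np.linspace(0.0, 4.0, 50)
m_true = np.array([2.0, 0.7])
d_obs = forward(m_true, t)                 # synthetic "observed" data

m = np.array([1.5, 0.5])                   # initial model guess
for _ in range(20):
    r = d_obs - forward(m, t)              # data residual
    J = jacobian(m, t)
    m = m + np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton update

print(m)  # should approach m_true
```

In the seismic setting the Jacobian columns are themselves wave simulations (one per model parameter), which is exactly where the virtual-source approximation and the parallel implementation come in.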

A framework for parallel processing in multiblock flow computations (다중블록 유동해석에서 병렬처리를 위한 시스템의 구조)

  • Park, Sang-Geun;Lee, Geon-U
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.21 no.8
    • /
    • pp.1024-1033
    • /
    • 1997
  • The past several years have witnessed ever-increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for more general-purpose applications. Furthermore, with the growing need to perform complex flow calculations efficiently, message passing on distributed networks has emerged as an important alternative to expensive supercomputers. This work provides a generic framework for parallelizing CFD-related work using the master-slave model. The framework consists of (1) input geometry, (2) domain decomposition, (3) grid generation, (4) flow computation, (5) flow visualization, and (6) output display as sequential components, but performs steps (2) to (5) in parallel on a workstation cluster. The flow computation is parallelized by running multiple copies of the flow code, each solving the PDE on a different spatial region on a different processor; flow data are exchanged across the region boundaries, and the solution is time-stepped. The Parallel Virtual Machine (PVM) is used for distributed communication in this work.
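
The boundary-exchange-then-time-step pattern can be sketched in a few lines. Here two "processors" each advance one block of a 1D explicit diffusion problem and exchange ghost-cell values each step; both blocks run in one Python process, with the PVM message passing modeled by simply copying the edge values (a sketch of the pattern, not the paper's solver).

```python
# Each block advances its own sub-domain of a 1D diffusion problem and
# exchanges boundary (ghost-cell) values with its neighbour every step.

def step(u, left_ghost, right_ghost, nu=0.25):
    """One explicit diffusion step on a block, given its two ghost values."""
    padded = [left_ghost] + u + [right_ghost]
    return [padded[i] + nu * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Initial field: a spike in the middle of 8 cells, fixed zero boundaries.
full = [0, 0, 0, 1, 1, 0, 0, 0]
a, b = full[:4], full[4:]              # decompose into two blocks

for _ in range(20):
    # "message passing": each block sends its edge cell to its neighbour,
    # then both advance using the old (pre-step) ghost values.
    a_new = step(a, 0.0, b[0])
    b_new = step(b, a[-1], 0.0)
    a, b = a_new, b_new

# reference: the same 20 steps on the undecomposed domain
ref = full[:]
for _ in range(20):
    ref = step(ref, 0.0, 0.0)

assert all(abs(x - y) < 1e-12 for x, y in zip(a + b, ref))
print("decomposed and single-domain results agree")
```

Because both blocks read the previous step's edge values, the decomposed update reproduces the single-domain result exactly, which is the correctness requirement for any multiblock scheme of this kind.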

Prospect of Information Technology and Its Application to Regional Agricultural Meteorology (지역농업기상지원을 위한 정보화기술 전망 및 활용)

  • Lee, Byong-Lyol
    • Proceedings of The Korean Society of Agricultural and Forest Meteorology Conference
    • /
    • 2003.09a
    • /
    • pp.189-201
    • /
    • 2003
  • Grid is a new Information Technology (IT) concept of a "super Internet" for high-performance computing: worldwide collections of high-end resources such as supercomputers, storage, advanced instruments, and immersive environments. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, real-time data sources and instruments, and human collaborators. The term "the Grid" was coined in the mid-1990s to denote a proposed distributed computing infrastructure for advanced science and engineering. The term "computational Grids" refers to infrastructures aimed at allowing users to access and/or aggregate potentially large numbers of powerful and sophisticated resources. More formally, Grids are defined as infrastructure allowing flexible, secure, and coordinated resource sharing among dynamic collections of individuals, institutions, and resources, referred to as virtual organizations. The Grid is an emerging next-generation Internet technology that will fit very well with agrometeorological services in the future. I believe that it would contribute to resource sharing in agrometeorology by providing supercomputing power, virtual storage, and efficient data exchange, especially for developing countries that suffer from a lack of resources for their agrometeorological services at the national level. Thus, the establishment of a CAgM-GRID based on the existing RAMINSII is proposed as a part of the FWIS of the WMO.

The Function of Computer Utilization in Educating and Researching Ocean Engineering Problems

  • Koo, Weon-Cheol;Kim, Moo-Hyun;Ryu, Sam
    • Journal of Ship and Ocean Technology
    • /
    • v.12 no.4
    • /
    • pp.1-6
    • /
    • 2008
  • Nowadays, the computational capability and graphical power of PCs increase very rapidly every year. As a result, complicated engineering and scientific problems that could only be handled by supercomputers a couple of decades ago can now routinely be run on PCs. In addition, PCs can be assembled in parallel to increase their computational capability, theoretically without limitation. Web-based interface and communication tools are also being enhanced rapidly, and real-time distance learning (e-learning) and project cooperation on the web are receiving increasing attention. Using state-of-the-art computational methods, a number of complicated and computationally intensive problems are being solved on PCs, and the results can be demonstrated on screen with graphics and animation tools. Examples include simulations of fully nonlinear waves, their interactions with floating bodies, and global-motion analysis of multi-unit floating production systems including complicated mooring lines and risers; several such examples are presented. Also, web- and Java-applet-based educational tools have been developed at Texas A&M University for a better understanding of waves and wave-body interactions. The background and examples of these web-based educational tools, published in Kim et al. (2003), are briefly introduced here.

HTCaaS(High Throughput Computing as a Service) in Supercomputing Environment (슈퍼컴퓨팅환경에서의 대규모 계산 작업 처리 기술 연구)

  • Kim, Seok-Kyoo;Kim, Jik-Soo;Kim, Sangwan;Rho, Seungwoo;Kim, Seoyoung;Hwang, Soonwook
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.5
    • /
    • pp.8-17
    • /
    • 2014
  • Petascale systems (so-called supercomputers) have mainly been used to support communication-intensive, tightly-coupled parallel computations based on message passing interfaces such as MPI (HPC: High-Performance Computing). On the other hand, computing paradigms such as High-Throughput Computing (HTC) mainly target compute-intensive applications (with relatively low I/O requirements) consisting of many loosely-coupled tasks that need no communication between them. In Korea, recently emerging applications from various scientific fields, such as the pharmaceutical domain, high-energy physics, and nuclear physics, require a very large amount of computing power that cannot be supported by a single type of computing resource. In this paper, we present HTCaaS (High-Throughput Computing as a Service), which can leverage national distributed computing resources in Korea to support these challenging HTC applications, and we describe the details of our system architecture, the job execution scenario, and case studies of various scientific applications.
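
The HTC pattern itself is simple to sketch: many independent tasks dispatched to a worker pool with no communication between them. The prime-counting task below is an illustrative stand-in for a real application, and the local thread pool stands in for the distributed resources a metascheduler like HTCaaS would use.

```python
# Sketch of the high-throughput computing pattern: many independent,
# loosely-coupled tasks dispatched to a pool of workers. Order of
# completion is irrelevant because the tasks share no state.

from concurrent.futures import ThreadPoolExecutor

def count_primes(bounds):
    """One self-contained task: count primes in [lo, hi)."""
    lo, hi = bounds

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    return sum(is_prime(n) for n in range(lo, hi))

# 8 independent tasks covering [0, 8000). A real HTC deployment would
# scatter these across cluster nodes or grids instead of local threads.
tasks = [(i * 1000, (i + 1) * 1000) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(count_primes, tasks))

print(sum(results))  # total primes below 8000
```

Contrast this with MPI-style HPC, where the same worker count would require explicit communication and synchronization between ranks every iteration.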

Grid Middleware Support for e-Science Service Integration Workbench (e-Science 서비스 통합 워크벤치를 위한 그리드 미들웨어 지원)

  • Suh, Young-Kyoon;Kim, Byungsang;Nam, Dukyun;Lee, June Hawk;Hwang, Soonwook
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2007.11a
    • /
    • pp.574-577
    • /
    • 2007
  • The e-Science Service Integration Workbench is the core tool that enables IT-based computational science and engineering researchers to collaborate via data sharing by semi-automatically supporting their research activities. The workbench provides researchers with a scientific workflow by establishing an environment in which they can find, register, compose, and execute the services they need: legacy codes wrapped as grid services or web services. In other words, by designing their scientific workflow through the workbench, researchers can receive its final result or share it with their colleagues, submitting the jobs they describe to computational resources such as supercomputers or grids, or requesting an experiment. In this paper, we propose an implementation architecture for the e-Science Service Integration Workbench to support the grid services of Grid Middleware.

A Study on a large-scale materials simulation using a PC networked cluster (PC Network Cluster를 사용한 대규모 재료 시뮬레이션에 관한 연구)

  • Choi, Deok-Kee;Ryu, Han-Kyu
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.30 no.5
    • /
    • pp.15-23
    • /
    • 2002
  • Because molecular dynamics requires high-performance computers or supercomputers to handle a huge amount of computation, the application of molecular dynamics to materials fracture simulation has only recently drawn attention from researchers. With the advent of high-performance computers, computation-intensive methods have become more tractable than ever; however, carrying out materials simulations on high-performance computers generally costs too much. In this study, a PC cluster consisting of multiple commodity PCs is established, and computer simulations of materials with cracks are carried out on it using the molecular dynamics technique. The effect of the number of nodes, the speedup factors, and the communication time between nodes are measured to verify the performance of the PC cluster. Using the PC cluster, materials fracture simulations with more than 50,000 molecules are carried out successfully.
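
A minimal sketch of the molecular dynamics loop involved, assuming a standard velocity-Verlet integrator and a Lennard-Jones pair potential (the usual textbook choices; the abstract does not state the authors' potential or integrator). Two particles in 1D stand in for the >50,000-molecule systems that motivate the cluster.

```python
# Velocity-Verlet integration of two unit-mass particles interacting
# through a Lennard-Jones potential in 1D. Parameters are illustrative.

def lj_force(r, eps=1.0, sigma=1.0):
    """Pair force along the separation axis; positive means repulsive."""
    s6 = (sigma / r) ** 6
    return 24 * eps * (2 * s6 * s6 - s6) / r

def total_energy(r, v1, v2, eps=1.0, sigma=1.0):
    """Potential plus kinetic energy of the pair."""
    s6 = (sigma / r) ** 6
    return 4 * eps * (s6 * s6 - s6) + 0.5 * (v1 ** 2 + v2 ** 2)

x1, x2 = 0.0, 1.5          # started at rest, stretched past the minimum
v1, v2 = 0.0, 0.0
dt = 0.001
e0 = total_energy(x2 - x1, v1, v2)

f = lj_force(x2 - x1)
for _ in range(5000):
    a1, a2 = -f, f                          # Newton's third law
    x1 += v1 * dt + 0.5 * a1 * dt * dt
    x2 += v2 * dt + 0.5 * a2 * dt * dt
    f = lj_force(x2 - x1)                   # force at the new positions
    v1 += 0.5 * (a1 - f) * dt               # average old/new acceleration
    v2 += 0.5 * (a2 + f) * dt

drift = abs(total_energy(x2 - x1, v1, v2) - e0)
print(f"energy drift after 5000 steps: {drift:.2e}")
```

The force loop is the O(N²) pairwise part that dominates at scale, and it decomposes naturally across cluster nodes, which is why the speedup and communication-time measurements above are the relevant metrics.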

Performance Analysis of Cluster Network Interfaces for Parallel Computing of Computational Fluid Dynamics (전산유체역학 병렬해석을 위한 클러스터 네트웍 장치 성능분석)

  • Lee, Bo Seong;Hong, Jeong U;Lee, Dong Ho;Lee, Sang San
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.31 no.5
    • /
    • pp.37-43
    • /
    • 2003
  • Parallel computing is widely used in computational fluid dynamics for efficient numerical analysis. Nowadays, low-cost Linux clusters with parallel computing schemes substitute for traditional supercomputers. The performance of numerical solvers on a Linux cluster depends not so much on the performance of the processors as on the performance of the network devices in the cluster system. In this paper, we investigate the effects of network devices such as Myrinet2000, gigabit ethernet, and fast ethernet on the performance of the cluster system, using benchmark programs such as NetPIPE, LINPACK, the NAS NPB, and the MPINS2D Navier-Stokes solver. Based on this investigation, we suggest a method for building a high-performance, low-cost Linux cluster system for computational fluid dynamics analysis.
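
The quantity such network benchmarks measure is well captured by the simple latency/bandwidth ("alpha-beta") cost model below. The latency and bandwidth figures are illustrative placeholders, not the paper's measurements.

```python
# Alpha-beta model of message-passing cost: time = latency + size/bandwidth.
# The device numbers below are illustrative, not measured values.

def transfer_time(msg_bytes, latency_s, bandwidth_bps):
    """Time to move one message: startup latency plus payload time."""
    return latency_s + msg_bytes / bandwidth_bps

networks = {                      # (latency [s], bandwidth [bytes/s])
    "fast ethernet":    (70e-6, 100e6 / 8),
    "gigabit ethernet": (30e-6, 1e9 / 8),
    "Myrinet2000":      (7e-6,  2e9 / 8),
}

for size in (1_000, 1_000_000):   # small vs. large halo-exchange message
    for name, (lat, bw) in networks.items():
        t = transfer_time(size, lat, bw)
        print(f"{name:16s} {size:>9,d} B: {t * 1e6:10.1f} us")
```

Small messages are latency-bound, so a low-latency interconnect matters most for fine-grained solvers with frequent boundary exchanges, while large messages are bandwidth-bound; this is why solver performance tracks the network device rather than the processor.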