• Title/Summary/Keyword: parallel library

Search Results: 188

Formulation, solution and CTL software for coupled thermomechanics systems

  • Niekamp, R.;Ibrahimbegovic, A.;Matthies, H.G.
    • Coupled systems mechanics
    • /
    • v.3 no.1
    • /
    • pp.1-25
    • /
    • 2014
  • In this work, we present the theoretical formulation, operator split solution procedure, and partitioned software development for coupled thermomechanical systems. We consider the general case of nonlinear evolution for each sub-system (mechanical or thermal), with a dedicated time integration scheme for each. We provide the condition that guarantees the stability of such an operator split solution procedure for fully nonlinear evolution of the coupled thermomechanical system. We show that the proposed solution procedure can accommodate different evolution time-scales for the different sub-systems and allows a different time step for each corresponding integration scheme. We also show that such an approach is perfectly suitable for parallel computation. Several numerical simulations are presented to illustrate the very satisfactory performance of the proposed solution procedure and to confirm the theoretical speed-up of parallel computations, which follows from an adequate choice of time step for each sub-problem. This work confirms that one can select the most appropriate time step with respect to each characteristic time-scale, carry out separate computations for each sub-system, and then enforce the coupling so as to preserve the stability of the operator split computations. The software development strategy of directly linking the (existing) codes for each sub-system via the Component Template Library (CTL) is shown to be perfectly suitable for the proposed approach.
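The staggered scheme described in the abstract can be sketched on a toy coupled problem: a thermal equation advanced with a coarse step and a mechanical equation sub-cycled with a finer step, with the coupling enforced by freezing the freshly updated temperature. The model ODEs and coefficients below are illustrative, not from the paper.

```python
def operator_split_step(u, T, dt_thermal, n_sub, a=0.5, b=1.0):
    """One macro step of an isothermal operator split:
    1) advance the thermal sub-problem dT/dt = -b*T with its own step,
    2) sub-cycle the mechanical sub-problem du/dt = -u + a*T
       with a smaller step dt_thermal / n_sub, holding T fixed."""
    # Thermal sub-step (backward Euler for stability)
    T_new = T / (1.0 + b * dt_thermal)
    # Mechanical sub-cycling with the updated temperature as a frozen source
    dt_mech = dt_thermal / n_sub
    for _ in range(n_sub):
        u = (u + dt_mech * a * T_new) / (1.0 + dt_mech)
    return u, T_new

# Drive the coupled toy system to t = 1 with a coarse thermal step
u, T = 0.0, 1.0
dt, steps = 0.01, 100
for _ in range(steps):
    u, T = operator_split_step(u, T, dt, n_sub=4)
```

The point of the sketch is the structure, not the physics: each sub-system keeps its own integrator and step size, and only the boundary values (here, `T_new`) cross between them, which is exactly what makes the sub-systems independently parallelizable.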

Sim-Hadoop: Leveraging Hadoop Distributed File System and Parallel I/O for Reliable and Efficient N-body Simulations (Sim-Hadoop: 신뢰성 있고 효율적인 N-body 시뮬레이션을 위한 Hadoop 분산 파일 시스템과 병렬 I/O)

  • Awan, Ammar Ahmad;Lee, Sungyoung;Chung, Tae Choong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.05a
    • /
    • pp.476-477
    • /
    • 2013
  • Gadget-2 is a scientific simulation code that has been used for many different types of simulations, such as colliding galaxies, cluster formation, and the popular Millennium Simulation. The code is parallelized with the Message Passing Interface (MPI) and written in C. There is also a Java adaptation of the original code, called Java Gadget, written using MPJ Express. Java Gadget writes a large amount of checkpoint data, which may or may not use the HDF-5 file format. Since HDF-5 is MPI-IO compliant, we can use our MPJ-IO library to perform parallel reading and writing of the checkpoint files and improve I/O performance. Additionally, to add reliability to the code execution, we propose using the Hadoop Distributed File System (HDFS) for writing the intermediate data (checkpoint files) and final data (output files). The current code writes and reads the input, output and checkpoint files sequentially, which can easily become a bottleneck for large-scale simulations. In this paper, we propose Sim-Hadoop, a framework that leverages HDFS and MPJ-IO to improve the I/O performance of the Java Gadget code.
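The core idea behind MPI-IO-style parallel checkpointing can be imitated in a few lines: each writer owns a disjoint byte range of one shared checkpoint file and writes at its own offset, so no writer has to wait behind another. The file name and slice sizes below are illustrative; the real code would use MPJ-IO/HDF-5 collective operations rather than raw offsets.

```python
import os
import tempfile

def write_checkpoint(path, slices):
    """Each (rank, data) pair writes its bytes at a fixed offset,
    mimicking an MPI-IO file view: rank i owns bytes [i*n, (i+1)*n)."""
    n = len(slices[0])
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        for rank, data in enumerate(slices):
            # pwrite is positional: independent, seek-free writes per rank
            os.pwrite(fd, data, rank * n)
    finally:
        os.close(fd)

# Four "ranks" each checkpoint 8 bytes of particle state
slices = [bytes([r] * 8) for r in range(4)]
path = os.path.join(tempfile.mkdtemp(), "checkpoint.bin")
write_checkpoint(path, slices)
with open(path, "rb") as f:
    restored = f.read()
```

Because each rank's byte range is disjoint, the loop body could run concurrently on separate processes without coordination, which is the property that removes the sequential-I/O bottleneck the abstract describes.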

Parallel Implementation of SPECK, SIMON and SIMECK by Using NVIDIA CUDA PTX (NVIDIA CUDA PTX를 활용한 SPECK, SIMON, SIMECK 병렬 구현)

  • Jang, Kyung-bae;Kim, Hyun-jun;Lim, Se-jin;Seo, Hwa-jeong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.31 no.3
    • /
    • pp.423-431
    • /
    • 2021
  • SPECK and SIMON are lightweight block ciphers developed by the NSA (National Security Agency), and SIMECK is a newer lightweight block cipher that combines the advantages of SPECK and SIMON. In this paper, large-capacity encryption with SPECK, SIMON, and SIMECK is implemented on a GPU with efficient parallel processing. The CUDA library provided by NVIDIA was used, and performance was maximized by using the CUDA assembly language PTX to eliminate unnecessary operations. Comparing a simple CPU implementation with the GPU implementation, large-scale encryption could be performed at much higher speed. In addition, comparing the C-language and PTX versions of the GPU implementation confirmed that performance increased further when using PTX.
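The SPECK round function is simple enough to sketch in full. This is a plain-Python reference for the SPECK64/128 variant (32-bit words, rotations 8 and 3, 27 rounds), not the CUDA PTX version from the paper, and correctness here is checked only by an encrypt/decrypt round trip rather than against published test vectors.

```python
W = 32                      # word size for SPECK64/128
MASK = (1 << W) - 1
ALPHA, BETA = 8, 3          # rotation amounts for SPECK variants other than 32/64
ROUNDS = 27

def rol(x, r): return ((x << r) | (x >> (W - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (W - r))) & MASK

def round_enc(x, y, k):
    """One SPECK round: rotate-add-xor on x, rotate-xor on y."""
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def round_dec(x, y, k):
    """Exact inverse of round_enc."""
    y = ror(y ^ x, BETA)
    x = rol(((x ^ k) - y) & MASK, ALPHA)
    return x, y

def key_schedule(k0, ls):
    """Round keys from one k-word and three l-words, reusing round_enc
    with the round index as the 'key'."""
    ks, l = [k0], list(ls)
    for i in range(ROUNDS - 1):
        nl, nk = round_enc(l[i], ks[i], i)
        l.append(nl)
        ks.append(nk)
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x, y = round_enc(x, y, k)
    return x, y

def decrypt(x, y, ks):
    for k in reversed(ks):
        x, y = round_dec(x, y, k)
    return x, y

ks = key_schedule(0x03020100, [0x0b0a0908, 0x13121110, 0x1b1a1918])
ct = encrypt(0x3b726574, 0x7475432d, ks)
pt = decrypt(*ct, ks)
```

Because each block is enciphered independently in the common parallel modes (e.g. CTR), the GPU implementation in the paper amounts to running this per-block round loop on thousands of threads at once.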

Evaluation of the Quality of Records of the Serials Union Catalog Database (연속간행물 종합목록 데이터베이스의 레코드 품질 평가)

  • Yoon, Cheong-Ok
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.37 no.1
    • /
    • pp.27-42
    • /
    • 2003
  • The purpose of this study is to evaluate the quality of the Serials Union Catalog (UCAT) of KISTI. In examining both the MARC records and bibliographic display records of 172 Japanese serials, AACR2R and the KORMARC format were used as the standard rules. These records were described at a minimal or incomplete level and contained repeated errors in various data fields, including the incorrect choice of main headings, parallel titles and other titles, the improper use of language codes and related data fields, and the provision of incomplete information. To improve the overall quality of the bibliographic database, thorough comprehension and application of the cataloging rules and MARC formats are strongly required.

Improvement and verification of the DeCART code for HTGR core physics analysis

  • Cho, Jin Young;Han, Tae Young;Park, Ho Jin;Hong, Ser Gi;Lee, Hyun Chul
    • Nuclear Engineering and Technology
    • /
    • v.51 no.1
    • /
    • pp.13-30
    • /
    • 2019
  • This paper presents recent improvements in the DeCART code for HTGR analysis. A new 190-group DeCART cross-section library based on ENDF/B-VII.0 was generated using the KAERI library processing system for HTGRs. Two methods for the eigen-mode adjoint flux calculation were implemented. An azimuthal angle discretization method based on Gaussian quadrature was implemented to reduce the error from the azimuthal angle discretization. Two-level parallelization using MPI and OpenMP was adopted for massively parallel computations. A quadratic depletion solver was implemented to reduce the error involved in Gd depletion. A module to generate equivalent group constants was implemented for nodal codes. The geometry-handling capabilities of the DeCART code were improved, including an approximate treatment of a cylindrical outer boundary, an explicit border model, the R-G-B checkerboard model, and a super-cell model for hexagonal geometry. The newly improved and implemented functionalities were verified against various numerical benchmarks, such as the OECD/MHTGR-350 benchmark phase III problems, two-dimensional high temperature gas-cooled reactor benchmark problems derived from the MHTGR-350 reference design, and numerical benchmark problems based on the compact nuclear power source experiment, by comparing the DeCART solutions with Monte-Carlo reference solutions obtained using the McCARD code.
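The Gaussian-quadrature angle discretization mentioned in the abstract rests on standard Gauss-Legendre nodes and weights. A minimal Newton-iteration construction (a textbook algorithm, not DeCART's implementation) looks like this:

```python
import math

def gauss_legendre(n):
    """Nodes and weights of n-point Gauss-Legendre quadrature on [-1, 1],
    found by Newton iteration on the Legendre polynomial P_n."""
    xs, ws = [], []
    for i in range(n):
        x = math.cos(math.pi * (i + 0.75) / (n + 0.5))  # standard initial guess
        for _ in range(100):
            # Evaluate P_{n-1}(x) and P_n(x) by the three-term recurrence
            p0, p1 = 1.0, x
            for k in range(1, n):
                p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
            dp = n * (x * p1 - p0) / (x * x - 1.0)  # P_n'(x)
            dx = p1 / dp
            x -= dx                                  # Newton step
            if abs(dx) < 1e-14:
                break
        xs.append(x)
        ws.append(2.0 / ((1.0 - x * x) * dp * dp))
    return xs, ws
```

An n-point rule integrates polynomials up to degree 2n-1 exactly, which is why replacing a uniform azimuthal grid with Gaussian points reduces the angular discretization error for the same number of directions.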

Topic Analysis of Scholarly Communication Research

  • Ji, Hyun;Cha, Mikyeong
    • Journal of Information Science Theory and Practice
    • /
    • v.9 no.2
    • /
    • pp.47-65
    • /
    • 2021
  • This study aims to identify the specific topics, trends, and structural characteristics of scholarly communication research, based on 1,435 articles published from 1970 to 2018 in the Scopus database, using Latent Dirichlet Allocation topic modeling, time series analysis, and network analysis to analyze specific topics, trends, and structures, respectively. The results are summarized in three sets as follows. First, there were nineteen specific topics in scholarly communication research, including research resource management and research data, and their research proportions were even. Second, the time series analysis identified three upward-trending topics (Topic 6: Open Access Publishing, Topic 7: Green Open Access, and Topic 19: Informal Communication) and two downward-trending topics (Topic 11: Researcher Network and Topic 12: Electronic Journal). Third, the network analysis indicated that topics with high mean profile association were related to institutions, and that topics with high triangle betweenness centrality, such as Topic 14: Research Resource Management, shared the citation context. In addition, cluster analysis using parallel nearest neighbor clustering identified six clusters connected with different concepts.
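The "upward/downward trending topic" classification can be reproduced with nothing more than a least-squares slope over yearly topic proportions; the sample series below are invented for illustration, not the study's data.

```python
def trend_slope(series):
    """Least-squares slope of a yearly topic-proportion series:
    positive => upward-trending topic, negative => downward-trending."""
    n = len(series)
    xbar = (n - 1) / 2.0                  # mean of year indices 0..n-1
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Hypothetical per-year proportions for two topics
open_access = [0.02, 0.03, 0.05, 0.06, 0.08]
electronic_journal = [0.09, 0.08, 0.06, 0.05, 0.03]
```

In practice the series would be the per-year mean of each document's LDA topic weight; the sign of the slope (optionally with a significance test) then labels the topic as rising or declining.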

Web-Based Distributed Visualization System for Large Scale Geographic Data (대용량 지형 데이터를 위한 웹 기반 분산 가시화 시스템)

  • Hwang, Gyu-Hyun;Yun, Seong-Min;Park, Sang-Hun
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.6
    • /
    • pp.835-848
    • /
    • 2011
  • In this paper, we propose a client-server based distributed/parallel system to effectively visualize huge geographic data sets. The system consists of a web-based client GUI program and a distributed/parallel server program that runs on multiple PC clusters. So that the client program can run on mobile devices as well as PCs, the graphical user interface was designed using JOGL, the Java-based OpenGL graphics library; by sending information about the currently available memory space and the maximum display resolution, the client allows the server to minimize the amount of work it performs. The PC clusters playing the role of the server access the requested geographic data from distributed disks, re-sample it appropriately, and send the results back to the client. To minimize the latency incurred by repeatedly accessing the distributed geographic data, cache data structures are maintained on every server node as well as on the client.
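A minimal version of the cache the authors maintain on both the server nodes and the client is an LRU cache keyed by tile coordinates; the keys, values, and capacity below are made up for illustration (the paper does not specify its eviction policy).

```python
from collections import OrderedDict

class TileCache:
    """Fixed-capacity LRU cache for re-sampled terrain tiles,
    evicting the least recently used tile when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key):
        if key not in self._tiles:
            return None
        self._tiles.move_to_end(key)        # mark as most recently used
        return self._tiles[key]

    def put(self, key, tile):
        if key in self._tiles:
            self._tiles.move_to_end(key)
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict the LRU tile

cache = TileCache(capacity=2)
cache.put((0, 0), "tile-00")
cache.put((0, 1), "tile-01")
cache.get((0, 0))                # touch (0, 0) so (0, 1) becomes LRU
cache.put((1, 0), "tile-10")     # evicts (0, 1)
```

Keeping the same structure on client and server means a repeated pan over already-visited terrain is served from memory at both ends instead of re-reading the distributed disks.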

A Design of Pipelined-parallel CABAC Decoder Adaptive to HEVC Syntax Elements (HEVC 구문요소에 적응적인 파이프라인-병렬 CABAC 복호화기 설계)

  • Bae, Bong-Hee;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.155-164
    • /
    • 2015
  • This paper describes the design and implementation of a CABAC decoder that handles HEVC syntax elements in an adaptively pipelined, parallel manner. Even though CABAC offers a high compression rate, its decoding performance is limited by context-based sequential computation, strong data dependencies between context models, and bin-by-bin decoding. To enhance HEVC CABAC decoding, flag-type syntax elements are adaptively pipelined by precomputing consecutive flag-type elements, and multi-bin syntax elements are decoded by processing up to three bins in parallel. Further, to accelerate the binary arithmetic decoder by reducing the critical path delay, the update and renormalization of the context model are precomputed in parallel for both the LPS and MPS cases, and the context model update is then selected according to the preceding decoding result. Simulations show that the new HEVC CABAC architecture achieves a maximum performance of 1.01 bins/cycle, twice as fast as the conventional approach. In an ASIC design with a 65nm library, the CABAC architecture handles 224 Mbins/sec, which can decode QFHD HEVC video data in real time.
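The speculative trick applied to the context model (compute both the MPS and LPS updates, then select one once the bin is known) can be shown with a toy state machine. The transition rules here are stand-ins, not the HEVC state-transition tables; only the MPS flip at state 0 mirrors real CABAC behavior.

```python
def next_state_mps(state):
    return min(state + 1, 62)   # toy transition, not the HEVC table

def next_state_lps(state):
    return max(state - 2, 0)    # toy transition, not the HEVC table

def decode_bin_speculative(state, mps, bin_is_mps):
    """Precompute both possible context updates 'in parallel' (in hardware,
    two combinational paths), then select one once the arithmetic
    decoder yields the bin. Returns (decoded_bin, new_state, new_mps)."""
    spec_mps = next_state_mps(state)   # speculative MPS-path update
    spec_lps = next_state_lps(state)   # speculative LPS-path update
    if bin_is_mps:
        return mps, spec_mps, mps
    # On an LPS at state 0, the MPS value flips (as in CABAC)
    new_mps = 1 - mps if state == 0 else mps
    return 1 - mps, spec_lps, new_mps
```

In hardware, both `spec_*` values are computed concurrently with the range subdivision, so the state update drops out of the critical path; the final multiplexer merely picks the already-finished result.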

Analysis of the Reading Domains of Elderly Men and Women Using Book Lending Data (도서 대출데이터를 활용한 남녀 노령자의 독서 주제 분석)

  • Cho, Jane
    • Journal of Korean Library and Information Science Society
    • /
    • v.50 no.1
    • /
    • pp.23-41
    • /
    • 2019
  • This study examines the subject domains of the books read by elderly men and women by analyzing Pathfinder networks (PFNETs) derived from library big data, and identifies how they differ from adults aged 30-40. Co-occurrence matrices of book loans from the popular book list were extracted from library big data for four groups: elderly men, elderly women, adult men, and adult women. With these matrices, Pathfinder network analysis was performed. Pearson correlation analysis based on the triangle betweenness centrality of the loaned books was then carried out to understand the correlations among the clusters created for the four groups by the PNNC algorithm. As a result, a reading trend concentrated on modern Korean novels was revealed among the elderly regardless of gender; among them, elderly men showed an extreme tendency toward modern Korean long series novels. In the correlation analysis, elderly men showed a weak negative correlation with adult men (r = -0.222), and the negative correlations with all the other groups showed that the loan-book tendency of elderly men was the opposite of theirs.
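The correlation step in the abstract is plain Pearson's r computed over per-cluster centrality values; a stdlib-only version, with invented centrality numbers standing in for the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical triangle-betweenness centralities for matching clusters
elderly_men = [0.9, 0.7, 0.4, 0.2]
adult_men   = [0.1, 0.3, 0.6, 0.8]
```

A negative r, as between the two hypothetical series here, is what the study reads as "opposite" lending tendencies between groups.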

Computation of Compressor Flows Using Parallel Implementation of Preconditioning Method (예조건화 기법의 병렬화를 이용한 압축기 유동해석)

  • Lee Gee-Soo;Choi Jeong-Yeol;Kim Kui-Soon
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference
    • /
    • 2000.10a
    • /
    • pp.155-162
    • /
    • 2000
  • In this paper, a preconditioning method is parallelized on a fast-Ethernet PC cluster. The algorithm is based on scaling the pressure terms in the momentum equations and preconditioning the conservation equations to circumvent numerical difficulties at low Mach numbers. Parallelization is performed using a domain decomposition technique (DDT), and message passing between sub-domains is handled with the MPI library. The results show good convergence properties at all Mach numbers for the circular-arc bump and reasonably predict the two-dimensional turbulent flow in a DCA compressor cascade.
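The domain-decomposition idea can be illustrated serially: split the grid into sub-domains, exchange one-cell halos the way MPI ranks would, and verify that the decomposed sweep matches the undecomposed one. The MPI sends/receives are replaced by list copies here, and a Jacobi smoothing sweep stands in for the flow solver's stencil update.

```python
def jacobi_sweep(u):
    """One Jacobi smoothing sweep with fixed boundary values."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2.0
                     for i in range(1, len(u) - 1)] + [u[-1]]

def decomposed_sweep(u):
    """Same sweep, computed on two sub-domains with a one-cell halo
    exchange (in real code the halo copies are MPI messages)."""
    cut = len(u) // 2
    left, right = u[:cut], u[cut:]
    # Halo exchange: each side receives its neighbour's boundary cell
    left_ext = left + [right[0]]
    right_ext = [left[-1]] + right
    new_left = jacobi_sweep(left_ext)[:-1]   # drop the halo cell
    new_right = jacobi_sweep(right_ext)[1:]
    return new_left + new_right

u = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
```

Because only the single halo cell crosses the sub-domain boundary per sweep, communication volume stays small relative to computation, which is what makes the decomposition pay off on a commodity fast-Ethernet cluster.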
