• Title/Summary/Keyword: HPC Utilization

Manufacturing Innovation and HPC (High Performance Computing) Utilization (제조업 혁신과 HPC(High Performance Computing) 활용)

  • Kim, Yong-yul
    • Journal of Korea Technology Innovation Society, v.19 no.2, pp.231-253, 2016
  • The purpose of this study is twofold. First, we explore the meaning, spillover effects, and key considerations of manufacturing innovation from a theoretical perspective. Second, we examine the status of high performance computing (HPC) utilization policy and compare the situations in the US and Korea. Manufacturing innovation policies across countries share the common objective of dramatically enhancing productivity; nevertheless, they are better characterized as innovation-oriented policies than as simple attempts at productivity improvement. For long-term growth and employment, the need for reindustrialization rather than deindustrialization should be recognized. Employment may decrease temporarily and partially due to manufacturing innovation, but the net effect on employment will be positive because of indirect employment. HPC utilization policy matters as a distinct movement, not merely as a subset of manufacturing innovation. The US government is trying to eliminate bottlenecks in the adoption of HPC-based modeling and simulation (M&S) and to promote problem solving through public-private partnerships, despite the currently low level of HPC-based M&S. In Korea, an ecosystem for HPC-based M&S activity is needed; expanding M&S utilization in manufacturing companies and fostering M&S support institutions will be important for this task.

A Study on the Implementation Method for the Achievement of the Korea High-Performance Computing Innovation Strategy

  • Choi, Youn Keun;Koh, Myoungju;Jung, Youg Hwan;Hur, YoungJu;Lee, Yeonjae;On, Noori;Hahm, Jaegyoon
    • Journal of Information Science Theory and Practice, v.10 no.spc, pp.76-85, 2022
  • At the 8th National High-Performance Computing (HPC) Committee convened in 2021, the "National High-Performance Computing Innovation Strategy (draft) for the 4th Industrial Revolution Era" was deliberated and the draft was approved. In this proposal, Korea's Ministry of Science and ICT announced three major plans and nine detailed projects under the vision of "Realizing the 4th industrial revolution quantum jumping by leaping into a high-performance computing powerhouse." This established the most important national mid- to long-term HPC development policy, called the HPC innovation strategy (hereinafter "the innovation strategy"). The three plans of the innovation strategy proposed by the government are: strategic HPC infrastructure expansion; securing source technologies; and activating innovative HPC utilization. Each of the detailed projects must be executed nationally and strategically. In this paper, we propose an implementation strategy for two of these items: "strategic HPC infrastructure expansion" and "activate innovative HPC utilization."

Preparation of Cellulose Nanofibrils and Their Applications: High Strength Nanopapers and Polymer Composite Films (셀룰로오스 나노섬유의 제조 및 응용: 고강도 나노종이와 고분자복합필름)

  • Lee, Sun-Young;Chun, Sang-Jin;Doh, Geum-Hyun;Lee, Soo;Kim, Byung-Hoon;Min, Kyung-Seon;Kim, Seung-Chan;Huh, Yoon-Seok
    • Journal of the Korean Wood Science and Technology, v.39 no.3, pp.197-205, 2011
  • Cellulose nanofibrils (CNF) with diameters of 50~100 nm were manufactured from micro-sized cellulose using a high-pressure homogenizer at 1,400 bar. High-strength nanopapers were prepared on a filter paper by vacuum filtration of the CNF suspension. After reinforcing and dispersing the CNF suspension, hydroxypropyl cellulose (HPC)- and polyvinyl alcohol (PVA)-based composites were prepared by solvent-casting and film-casting methods, respectively. After 2, 4, 6 and 8 passes through the high-pressure homogenizer, the tensile strength of the nanopapers was extremely high and increased linearly with the number of passes. Chemical modification of the nanopapers with 1H,1H,2H,2H-perfluorodecyltriethoxysilane (PFDTES) significantly increased their mechanical strength and water repellency. Reinforcing the HPC and PVA resins with 1, 3, and 5 wt% CNF also improved the mechanical properties of both composites.

Evaluation of Alignment Methods for Genomic Analysis in HPC Environment (HPC 환경의 대용량 유전체 분석을 위한 염기서열정렬 성능평가)

  • Lim, Myungeun;Jung, Ho-Youl;Kim, Minho;Choi, Jae-Hun;Park, Soojun;Choi, Wan;Lee, Kyu-Chul
    • KIPS Transactions on Software and Data Engineering, v.2 no.2, pp.107-112, 2013
  • With the progress of next-generation sequencing (NGS) technologies, the volume of genome data has grown explosively, and HPC techniques are necessary to analyze such data effectively. In this paper, we organized a genome analysis pipeline to call SNPs from NGS data. To run the pipeline efficiently in an HPC environment, we analyzed the CPU utilization pattern of each pipeline step and found that sequence alignment is compute-intensive and well suited to parallelization. We also evaluated the performance of parallel open-source alignment tools and found that alignment methods utilizing many-core processors can improve the performance of the genome analysis pipeline.
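
The abstract does not include the pipeline's code; the following is only a minimal sketch of the kind of parallelization it describes, splitting read alignment across CPU cores. The toy scoring function, the chunking scheme, and the data are invented for illustration and do not come from the paper or any real aligner.

```python
# Illustrative sketch: parallelizing a compute-intensive alignment step across
# CPU cores. The naive scorer below is a stand-in for a real alignment tool.
from multiprocessing import Pool

REFERENCE = "ACGTACGTGGTCAGTACGTTACGATCGATCGTAGCTAGCTA"

def align_read(read):
    """Naive best-offset alignment: count matches at every offset (toy stand-in)."""
    best_offset, best_score = 0, -1
    for offset in range(len(REFERENCE) - len(read) + 1):
        score = sum(r == c for r, c in zip(read, REFERENCE[offset:offset + len(read)]))
        if score > best_score:
            best_offset, best_score = offset, score
    return read, best_offset, best_score

def align_chunk(reads):
    """Align one chunk of reads; each worker process handles one chunk."""
    return [align_read(r) for r in reads]

if __name__ == "__main__":
    reads = ["ACGTACGT", "TACGATCG", "GGTCAGTA", "TAGCTAGC"] * 1000  # fake NGS reads
    chunks = [reads[i::4] for i in range(4)]                         # 4 chunks, 4 workers
    with Pool(processes=4) as pool:
        results = [hit for part in pool.map(align_chunk, chunks) for hit in part]
    print(f"aligned {len(results)} reads; first hit: {results[0]}")
```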

Design and Implementation of National Supercomputing Service Framework (국가 슈퍼컴퓨팅 서비스 프레임워크의 설계 및 구현)

  • Yu, Jung-Lok;Byun, Hee-Jung;Kim, Han-Gi
    • KIISE Transactions on Computing Practices, v.22 no.12, pp.663-674, 2016
  • Traditional supercomputing services suffer from limited accessibility and low utilization because users (researchers) can run computations only through terminal-based command-line interfaces. To address this problem, this paper presents the design and implementation of a national supercomputing service framework. The proposed framework supports fundamental primitives such as user management/authentication, heterogeneous computing resource management, and HPC (High Performance Computing) job management, so that various third-party applications can be built on top of it. The framework also provides Web-based RESTful OpenAPIs and job-scheduler abstraction interfaces (with bundled scheduler plug-ins, for example LoadLeveler, Open Grid Scheduler, and TORQUE) to ease integration with a broad spectrum of heterogeneous computing clusters. To validate the effectiveness of the proposed framework, we describe a best-practice scenario using high-energy-physics Lattice-QCD as an example application.
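
The paper's actual OpenAPI is not reproduced here; the sketch below only illustrates what a job-scheduler abstraction layer with pluggable back-ends might look like. The class and method names (`JobScheduler`, `submit`, `status`, `cancel`) and the in-memory back-end are hypothetical.

```python
# Hypothetical sketch of a scheduler abstraction with pluggable back-ends,
# loosely inspired by the framework described above (not its actual API).
from abc import ABC, abstractmethod
import itertools

class JobScheduler(ABC):
    """Common interface that each scheduler plug-in (e.g., a TORQUE or
    Open Grid Scheduler adapter) would implement."""

    @abstractmethod
    def submit(self, script: str) -> str: ...
    @abstractmethod
    def status(self, job_id: str) -> str: ...
    @abstractmethod
    def cancel(self, job_id: str) -> None: ...

class InMemoryScheduler(JobScheduler):
    """Toy back-end so the example runs without a real cluster."""
    _ids = itertools.count(1)

    def __init__(self):
        self.jobs = {}

    def submit(self, script: str) -> str:
        job_id = f"job-{next(self._ids)}"
        self.jobs[job_id] = "QUEUED"
        return job_id

    def status(self, job_id: str) -> str:
        return self.jobs.get(job_id, "UNKNOWN")

    def cancel(self, job_id: str) -> None:
        self.jobs[job_id] = "CANCELLED"

if __name__ == "__main__":
    scheduler: JobScheduler = InMemoryScheduler()
    jid = scheduler.submit("#!/bin/bash\nmpirun ./lattice_qcd")
    print(jid, scheduler.status(jid))
```

A thin REST layer could then expose `submit`/`status`/`cancel` as Web-facing endpoints, which is the role the RESTful OpenAPIs play in the framework described above.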

EDNN based prediction of strength and durability properties of HPC using fibres & copper slag

  • Gupta, Mohit;Raj, Ritu;Sahu, Anil Kumar
    • Advances in concrete construction, v.14 no.3, pp.185-194, 2022
  • The construction field has been encouraged to use industrial waste and secondary materials for producing cement and concrete because this reduces the consumption of natural resources. At the same time, ensuring quality requires analyzing the strength and durability properties of such cement and concrete. Existing research has focused on predicting the strength and other properties of High-Performance Concrete (HPC) with optimization and machine learning algorithms, but these approaches suffer from error and accuracy issues. This study therefore uses an Enhanced Deep Neural Network (EDNN) to predict the strength and durability of HPC. First, the data are gathered and pre-processed by removing missing values and normalizing; features are then extracted from the pre-processed data and fed into the EDNN, which predicts the strength and durability properties for the given mix designs. The weight values of the EDNN are initialized using the Switched Multi-Objective Jellyfish Optimization (SMOJO) algorithm, and a Gaussian radial function is used as the activation function. In the experimental analysis, the proposed EDNN is compared with existing DNN, CNN, ANN, and SVM methods using the RMSE, MAE, MAPE, and R2 metrics, and it performs better. Its effectiveness is also examined in terms of accuracy, precision, recall, and F-measure. The fitness of the proposed SMOJO algorithm is compared with existing algorithms, i.e., JO, GWO, PSO, and GA, and SMOJO achieves a higher fitness value.
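
The paper's EDNN (Gaussian radial activation, SMOJO weight initialization) is not reproduced here; the sketch below only illustrates the generic preprocess, train, and evaluate workflow the abstract describes, using standard scikit-learn components and synthetic data. Feature meanings and model settings are assumptions.

```python
# Generic sketch of the preprocess -> train -> evaluate workflow described above.
# It does NOT reproduce the paper's EDNN or SMOJO; data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
# Synthetic stand-in for mix-design features (e.g., cement, copper slag, fibre dosage)
X = rng.uniform(0, 1, size=(500, 6))
y = 40 + 25 * X[:, 0] - 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 2, 500)  # "strength"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)                  # normalization step
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("MAE :", mean_absolute_error(y_test, pred))
print("R2  :", r2_score(y_test, pred))
```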

Dynamic Memory Allocation for Scientific Workflows in Containers (컨테이너 환경에서의 과학 워크플로우를 위한 동적 메모리 할당)

  • Adufu, Theodora;Choi, Jieun;Kim, Yoonhee
    • Journal of KIISE, v.44 no.5, pp.439-448, 2017
  • The workloads of large high-performance computing (HPC) scientific applications are steadily becoming "bursty" due to variable resource demands throughout their execution life-cycles. However, over-provisioning of virtual resources for optimal performance during execution remains a key challenge in scheduling scientific HPC applications. While over-provisioning guarantees peak performance of a scientific application in virtualized environments, it leaves larger amounts of idle resources unavailable to other applications. Herein, we propose a memory resource reconfiguration approach that quickly releases idle memory for new applications in OS-level virtualized systems, based on each application's resource-usage profile data. We deployed a scientific workflow application in Docker, a lightweight OS-level virtualization system. In the proposed approach, the memory allocated to containers is fine-tuned at each stage of the workflow's execution life-cycle, improving overall memory resource utilization.
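
As a rough illustration of per-stage memory reconfiguration (not the paper's implementation or profiling policy), the sketch below adjusts a container's memory limit between workflow stages with the Docker SDK for Python. The image name, stage names, and per-stage limits are made up; a running Docker daemon and the `docker` package are assumed.

```python
# Hedged sketch: adjusting a container's memory limit at each workflow stage
# with the Docker SDK for Python. Stage budgets below are invented values that
# a profiling step might have produced.
import docker

STAGE_MEM_LIMITS = {"preprocess": "512m", "simulate": "4g", "postprocess": "1g"}

client = docker.from_env()
container = client.containers.run(
    "python:3.11-slim", command="sleep 600", detach=True, mem_limit="512m"
)

try:
    for stage, limit in STAGE_MEM_LIMITS.items():
        # Reconfigure the memory cap before the stage starts; shrinking the cap
        # releases the surplus back to the host for other containers.
        container.update(mem_limit=limit, memswap_limit=limit)
        print(f"stage {stage}: memory limit set to {limit}")
        # ... run the stage's work inside the container here ...
finally:
    container.remove(force=True)
```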

A Study on the Revitalization of High Performance Computing in Korea

  • Choi, Younkeun;Lee, Hyungjin;Jeong, Hyonam;Cho, Jaehyuk
    • Journal of Internet Computing and Services, v.17 no.3, pp.129-136, 2016
  • Successfully re-establishing a contemporary and sustainable supercomputing community in South Korea will require dedicated effort and support from key government and R&D organizations. We suggest several ways to supplement the roles and support mechanisms of the statutory plan, including the committee and plans that often lack the support systems needed for competent ministries to plan properly according to the missions of the research center. This study suggests that adjusting to HPC trends will depend on exposing and correcting problems in the relevant law, as well as on improving the law overall. Development of the supercomputing market as a whole is also necessary. Following these guidelines would spread demand for supercomputing for national IT resource sharing and foster the development of world-class supercomputing specialists. Other expected outcomes include significant increases in research productivity and faster product development.

Enhancing the Performance of Multiple Parallel Applications using Heterogeneous Memory on the Intel's Next-Generation Many-core Processor (인텔 차세대 매니코어 프로세서에서의 다중 병렬 프로그램 성능 향상기법 연구)

  • Rho, Seungwoo;Kim, Seoyoung;Nam, Dukyun;Park, Geunchul;Kim, Jik-Soo
    • Journal of KIISE, v.44 no.9, pp.878-886, 2017
  • This paper discusses performance bottlenecks that may occur when executing high-performance computing MPI applications on Intel's next-generation many-core processor, Knights Landing (KNL), as well as effective resource allocation techniques to address them. KNL adds a self-booting host processor to the existing many-core accelerator design, and it was released with a new type of on-package memory offering improved bandwidth on top of existing DDR4-based memory. By studying a resource allocation method optimized for such new many-core processor architectures, we empirically verified improvements in both the execution performance of multiple MPI applications and the overall system utilization ratio.
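
The paper's specific allocation method is not reproduced here. As one commonly cited way to steer allocations toward high-bandwidth on-package memory on flat-mode KNL systems, the sketch below launches an MPI job under numactl with a preferred NUMA node. The NUMA node number, rank count, and binary name are assumptions that depend on the system configuration.

```python
# Hedged sketch: prefer the on-package memory NUMA node when launching an MPI
# job on a flat-mode KNL system. Node numbering varies by configuration.
import shlex
import subprocess

HBM_NUMA_NODE = 1          # on many flat-mode KNL systems the on-package memory
                           # appears as a separate NUMA node (often node 1)
RANKS = 64
APP = "./my_mpi_app"       # hypothetical MPI binary

cmd = f"mpirun -np {RANKS} numactl --preferred={HBM_NUMA_NODE} {APP}"
print("launching:", cmd)
subprocess.run(shlex.split(cmd), check=True)
```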

Study of Scheduling Optimization through the Batch Job Logs Analysis (배치 작업 로그 분석을 통한 스케줄링 최적화 연구)

  • Yoon, JunWeon;Song, Ui-Sung
    • Journal of Digital Contents Society, v.18 no.7, pp.1411-1418, 2017
  • A batch job scheduler recognizes the computational resources configured in a cluster environment and arranges jobs efficiently in order. To use a cluster's limited resources efficiently, it is important to analyze and characterize user jobs, and to identify suitable scheduling algorithms and apply them to the system environment. Most scheduler software records the user's work environment from job submission to termination, together with the inventory and status of all managed resources. It also stores various information related to job execution, such as job scripts, environment variables, libraries, job wait times, and start and end times. In this paper, we analyze the scheduler's execution logs, including users' job success rates, execution times, and resource sizes, using the information on job execution collected by the batch scheduler. This analysis can serve as a basis for optimizing the system and increasing resource utilization.
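
The paper's log format and tooling are not given in the abstract; the sketch below only illustrates the kind of aggregation it describes (per-user success rate, execution time, and resource size) using pandas on a synthetic accounting table. The column names and records are invented, not a real scheduler's log schema.

```python
# Illustrative sketch of batch-job log aggregation on a synthetic accounting table.
import pandas as pd

records = pd.DataFrame(
    {
        "user":       ["alice", "alice", "bob", "bob", "carol"],
        "cores":      [16, 32, 128, 128, 64],
        "wait_sec":   [30, 120, 3600, 10, 600],
        "run_sec":    [900, 1800, 86400, 50, 7200],
        "exit_state": ["COMPLETED", "FAILED", "COMPLETED", "COMPLETED", "FAILED"],
    }
)

summary = records.groupby("user").agg(
    jobs=("exit_state", "size"),
    success_rate=("exit_state", lambda s: (s == "COMPLETED").mean()),
    avg_run_sec=("run_sec", "mean"),
    avg_wait_sec=("wait_sec", "mean"),
    avg_cores=("cores", "mean"),
)
print(summary)
```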