• Title/Summary/Keyword: distributed parallel computing

A Study on Knowledge Unit for High-Performance Computing in Computational Science (계산과학분야의 고성능컴퓨팅에 관한 지식단위 연구)

  • Yoon, Heejun;Ahn, Seongjin
    • Journal of Digital Contents Society / v.19 no.5 / pp.1021-1026 / 2018
  • Computational science is still at an early stage and not yet fully active, and the high-performance computing required in the field is currently treated as a specialized subject of parallel and distributed computing within computer science. In addition, there are too few courses that teach high-performance computing from the basic to the advanced level. In this study, we derive the knowledge units needed to learn high-performance computing, an important research tool in computational science. Using the ACM Computer Science Curricula 2013 (CS2013), we examine the validity and reliability of 89 knowledge units, identify eleven with high validity and reliability, and from these propose nine core knowledge units and two optional knowledge units. The eleven proposed knowledge units are expected to contribute to the development of the high-performance computing curriculum needed to teach computational science.

Scalable RDFS Reasoning Using the Graph Structure of In-Memory based Parallel Computing (인메모리 기반 병렬 컴퓨팅 그래프 구조를 이용한 대용량 RDFS 추론)

  • Jeon, MyungJoong;So, ChiSeoung;Jagvaral, Batselem;Kim, KangPil;Kim, Jin;Hong, JinYoung;Park, YoungTack
    • Journal of KIISE / v.42 no.8 / pp.998-1009 / 2015
  • In recent years there has been growing interest in RDFS inference for building rich knowledge bases. However, it is difficult to improve inference performance on large data using a single machine, so researchers have been developing RDFS inference engines for distributed computing environments. The existing engines, however, cannot process data in real time, are difficult to implement, and handle repetitive tasks poorly. To overcome these problems, we propose a method for constructing an in-memory distributed inference engine that uses a parallel graph structure. An ontology based on the triple structure naturally forms a graph, so a graph-structure-based inference engine is an intuitive design: the RDFS inference rules can be implemented with the operators of the graph structure, and the engine can be designed around the graph structure rather than the structure of a data table. We evaluate the proposed engine on the LUBM1000 and LUBM3000 data sets to test inference speed. The results indicate that the proposed in-memory distributed inference engine is about 10 times faster than a storage-based inference engine.
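As a rough illustration of the kind of rule application the abstract describes, here is a minimal, single-machine Python sketch of two RDFS entailment rules (rdfs11, subclass transitivity, and rdfs9, type propagation) expressed as joins over a triple set. The paper performs such rule application with parallel graph operators on an in-memory cluster; none of the names below come from the authors' implementation.

```python
# Minimal, single-machine sketch of two RDFS entailment rules expressed as
# joins over a triple set. The paper applies such rules with parallel graph
# operators on an in-memory cluster; nothing here is the authors' API.

RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(triples):
    """Apply rdfs11 (subClassOf transitivity) and rdfs9 (type propagation)
    repeatedly until no new triples are derived."""
    closure = set(triples)
    while True:
        sub = {(s, o) for s, p, o in closure if p == SUBCLASS}
        typ = {(s, o) for s, p, o in closure if p == RDF_TYPE}
        derived = set()
        # rdfs11: (A subClassOf B), (B subClassOf C) => (A subClassOf C)
        derived |= {(a, SUBCLASS, c) for a, b in sub for b2, c in sub if b == b2}
        # rdfs9: (x type A), (A subClassOf B) => (x type B)
        derived |= {(x, RDF_TYPE, b) for x, a in typ for a2, b in sub if a == a2}
        if derived <= closure:
            return closure
        closure |= derived

triples = {
    ("GraduateStudent", SUBCLASS, "Student"),
    ("Student", SUBCLASS, "Person"),
    ("alice", RDF_TYPE, "GraduateStudent"),
}
print(rdfs_closure(triples))  # also derives e.g. (alice, rdf:type, Person)
```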

An Efficient List Scheduling Algorithm in Distributed Heterogeneous Computing System (분산 이기종 컴퓨팅 시스템에서 효율적인 리스트 스케줄링 알고리즘)

  • Yoon, Wan-Oh;Yoon, Jung-Hee;Lee, Chang-Ho;Gim, Hyo-Gi;Choi, Sang-Bang
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.3 / pp.86-95 / 2009
  • Efficient DAG scheduling is critical for achieving high performance in heterogeneous computing environments. Finding an optimal schedule for an application modeled by a directed acyclic graph (DAG) on a set of heterogeneous machines is known to be an NP-complete problem. In this paper we propose a new list scheduling algorithm, called the Heterogeneous Rank-Path Scheduling (HRPS) algorithm, to exploit all of a program's available parallelism in a distributed heterogeneous computing system. The primary goal of HRPS is to minimize the schedule length of applications. The performance of the algorithm is evaluated on a number of practical DAGs and compared with existing scheduling algorithms such as CPOP, HCPT, and FLB in terms of schedule length. The comparison shows that HRPS significantly outperforms CPOP, HCPT, and FLB.
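HRPS belongs to the family of list scheduling algorithms: rank the tasks of the DAG, then greedily place each task on the processor that finishes it earliest. The Python sketch below shows that generic pattern (upward rank plus earliest-finish-time placement, as in HEFT-like schedulers); it is not the HRPS algorithm itself, and the data structures are assumptions made for illustration.

```python
# Generic list-scheduling sketch for a DAG on heterogeneous processors:
# rank tasks by upward rank, then place each on the processor with the
# earliest finish time (HEFT-style). Illustrative only; not HRPS itself.
# Assumes positive task costs so that decreasing rank respects precedence.

def list_schedule(tasks, succ, cost, comm):
    """tasks: task ids; succ[t]: successors of t;
    cost[t]: execution time of t on each processor (one entry per processor);
    comm[(t, s)]: transfer time from t to s if placed on different processors."""
    n_proc = len(cost[tasks[0]])
    rank = {}

    def upward(t):  # critical-path length from t to an exit task
        if t not in rank:
            rank[t] = sum(cost[t]) / n_proc + max(
                (comm.get((t, s), 0) + upward(s) for s in succ.get(t, [])),
                default=0.0)
        return rank[t]

    for t in tasks:
        upward(t)

    proc_free = [0.0] * n_proc                       # when each processor goes idle
    finish, placed_on = {}, {}
    for t in sorted(tasks, key=lambda x: -rank[x]):  # decreasing upward rank
        preds = [p for p in tasks if t in succ.get(p, [])]
        best = None
        for p in range(n_proc):
            ready = max((finish[q] + (0 if placed_on[q] == p else comm.get((q, t), 0))
                         for q in preds), default=0.0)
            eft = max(ready, proc_free[p]) + cost[t][p]
            if best is None or eft < best[0]:
                best = (eft, p)
        finish[t], placed_on[t] = best
        proc_free[best[1]] = best[0]
    return placed_on, finish

tasks = ["A", "B", "C"]
succ = {"A": ["B", "C"]}
cost = {"A": [2, 3], "B": [4, 2], "C": [3, 3]}       # two processors
comm = {("A", "B"): 1, ("A", "C"): 1}
print(list_schedule(tasks, succ, cost, comm))        # makespan 5 in this toy case
```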

Design and Implementation of Distributed In-Memory DBMS-based Parallel K-Means as In-database Analytics Function (분산 인 메모리 DBMS 기반 병렬 K-Means의 In-database 분석 함수로의 설계와 구현)

  • Kou, Heymo;Nam, Changmin;Lee, Woohyun;Lee, Yongjae;Kim, HyoungJoo
    • KIISE Transactions on Computing Practices / v.24 no.3 / pp.105-112 / 2018
  • As data sizes increase, a single database is no longer sufficient to serve the current volume of tasks. Since the data is partitioned and stored across multiple databases, analysis should also be performed in parallel in order to be efficient. Traditional analysis, however, requires transferring the data out of the database to the nodes where the analytics service runs, and the user must be familiar with both the database and the analytics framework. In this paper, we propose an efficient way to perform the K-means clustering algorithm inside distributed column-based and relational databases, and we also suggest an efficient way to optimize the K-means algorithm within a relational database.
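One K-means pass decomposes naturally into an assignment step (map each point to its nearest centroid) followed by a group-by aggregation (recompute each centroid as the mean of its points), which is the shape of computation that can be pushed down into a distributed DBMS as an in-database function. The sketch below shows that decomposition in plain Python; it is not the authors' in-database implementation.

```python
# One K-means pass as map (assign each point to the nearest centroid) plus
# group-by aggregation (recompute centroids as per-cluster means).
# Plain-Python sketch, not the authors' in-database function.
from collections import defaultdict
import math

def kmeans_step(points, centroids):
    """points, centroids: lists of (x, y) tuples. Returns updated centroids."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])   # cluster id -> [sum_x, sum_y, count]
    for x, y in points:                         # "map": nearest-centroid assignment
        cid = min(range(len(centroids)),
                  key=lambda i: math.dist((x, y), centroids[i]))
        s = sums[cid]
        s[0] += x; s[1] += y; s[2] += 1
    new_centroids = []                          # "group by": per-cluster mean
    for i in range(len(centroids)):
        sx, sy, n = sums[i]
        new_centroids.append((sx / n, sy / n) if n else centroids[i])
    return new_centroids

points = [(1.0, 1.0), (1.2, 0.9), (8.0, 8.0), (8.1, 7.9)]
centroids = [(0.0, 0.0), (10.0, 10.0)]
for _ in range(5):
    centroids = kmeans_step(points, centroids)
print(centroids)            # converges near (1.1, 0.95) and (8.05, 7.95)
```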

A Genetic-Based Optimization Model for Clustered Node Allocation System in a Distributed Environment (분산 환경에서 클러스터 노드 할당 시스템을 위한 유전자 기반 최적화 모델)

  • Park, Kyeong-mo
    • The KIPS Transactions:PartA / v.10A no.1 / pp.15-24 / 2003
  • In this paper, an optimization model for clustered node allocation systems in a distributed computing environment is presented. In the presented model, built on a distributed file system framework, the dynamics of system behavior over time across the nodes is taken into account, and the cluster monitor node is given the ability to check the feasibility of the current set of clustered node allocations. The cluster monitor node, which distributes the parallel modules to the clustered nodes, produces good allocation solutions using Genetic Algorithms (GA). As part of the experimental studies, we examine how GA parameters, such as the encoding scheme, the genetic operators (crossover and mutation), the population size, and the number of node modules, affect solution quality and computation time, and present comparative findings.
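As a rough illustration of a GA-based allocation search, the sketch below encodes a chromosome as a mapping from each parallel module to a cluster node and uses tournament selection, one-point crossover, and point mutation. The fitness function (balance the heaviest node's load) and all parameters are placeholders, not the paper's actual model.

```python
# Minimal GA sketch for allocating parallel modules to cluster nodes. A
# chromosome maps each module to a node; the fitness below just balances the
# heaviest node's load (a stand-in objective; the paper's encoding, fitness
# model, and operators differ).
import random

def run_ga(module_costs, n_nodes, pop_size=30, generations=200,
           crossover_rate=0.8, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    n = len(module_costs)

    def fitness(chrom):                          # lower max node load is better
        loads = [0.0] * n_nodes
        for module, node in enumerate(chrom):
            loads[node] += module_costs[module]
        return -max(loads)

    def pick():                                  # tournament selection
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    pop = [[rng.randrange(n_nodes) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < crossover_rate:    # one-point crossover
                cut = rng.randrange(1, n)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n):               # point mutation
                    if rng.random() < mutation_rate:
                        child[i] = rng.randrange(n_nodes)
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

print(run_ga([3, 1, 4, 1, 5, 9, 2, 6], n_nodes=3))   # a near-balanced allocation
```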

The Distributed Encryption Processing System for Large Capacity Personal Information based on MapReduce (맵리듀스 기반 대용량 개인정보 분산 암호화 처리 시스템)

  • Kim, Hyun-Wook;Park, Sung-Eun;Euh, Seong-Yul
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.3 / pp.576-585 / 2014
  • Collecting and utilizing huge amounts of personal data has caused severe security problems such as the leakage of personal information, and encryption of the collected personal information has been widely adopted to prevent such problems. In this paper, a novel MapReduce-based algorithm is proposed for encrypting such private information, and a test environment is built to verify the performance of the distributed encryption processing method. In the tests, average time efficiency improved by 15.3% compared to encryption processing on a token server and by 3.13% compared to parallel processing.
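The core of a MapReduce-style encryption job is that each mapper encrypts the sensitive fields of its own input split independently, so the work parallelizes trivially. The sketch below imitates that map step with a process pool and the third-party cryptography package (Fernet) as a stand-in cipher; the record layout, key handling, and cipher choice are assumptions, not the paper's design.

```python
# Map-style sketch of distributed record encryption: each worker encrypts the
# sensitive field of its own input split independently, which is what makes
# the job embarrassingly parallel under MapReduce. Uses the third-party
# `cryptography` package (Fernet) as a stand-in cipher.
from multiprocessing import Pool
from cryptography.fernet import Fernet

def encrypt_split(args):
    """Mapper: encrypt the personal-information field of one input split."""
    key, records = args
    f = Fernet(key)
    return [(rid, f.encrypt(pii.encode()).decode()) for rid, pii in records]

if __name__ == "__main__":
    key = Fernet.generate_key()                               # in practice from a key server
    records = [(i, f"900101-{i:07d}") for i in range(1000)]   # fake resident IDs
    splits = [(key, records[i::4]) for i in range(4)]         # four "mappers"
    with Pool(4) as pool:
        encrypted = [row for part in pool.map(encrypt_split, splits) for row in part]
    print(len(encrypted), encrypted[0])
```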

A Survey on 5G Enabled Multi-Access Edge Computing for Smart Cities: Issues and Future Prospects

  • Tufail, Ali;Namoun, Abdallah;Alrehaili, Ahmed;Ali, Arshad
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.107-118 / 2021
  • The deployment of 5G is in full swing, with yearly growth in data traffic expected to reach 26% and data consumption to reach 122 EB per month by 2022 [10]. In parallel, the idea of smart cities has been implemented by various governments and private organizations, and one of the main objectives of 5G deployment is to help develop and realize smart cities. 5G can support the enhanced data delivery and mass connection requirements of a smart city environment; however, for high-demand applications such as the tactile Internet, transportation, and augmented reality, a cloud-based 5G infrastructure cannot deliver the required quality of service, because the dependency on a central server for computation and storage adds cost in the form of higher latency. We suggest using multi-access edge computing (MEC) technology in smart city environments to provide the necessary support, and present a few scenarios demonstrating how MEC, with its distributed architecture and closer proximity to the end nodes, can significantly improve the quality of service by reducing latency. This paper surveys the existing work on MEC for 5G, highlights various challenges and opportunities, and proposes a framework based on the use of MEC for 5G in a smart city environment. The framework works at multiple levels, where each level has its own defined functionalities, and introduces edge sub-levels to keep the computing infrastructure much closer to the end nodes.

DOVE : A Distributed Object System for Virtual Computing Environment (DOVE : 가상 계산 환경을 위한 분산 객체 시스템)

  • Kim, Hyeong-Do;Woo, Young-Je;Ryu, So-Hyun;Jeong, Chang-Sung
    • Journal of KIISE:Computing Practices and Letters / v.6 no.2 / pp.120-134 / 2000
  • In this paper we present DOVE, a Distributed Object oriented Virtual computing Environment, which consists of autonomous distributed objects interacting with one another via method invocations based on a distributed object model. DOVE logically appears to the user as a single virtual computer spanning a set of heterogeneous hosts connected by a network, as if objects at remote sites resided in one virtual computer. By supporting efficient parallelism, heterogeneity, group communication, a single global name service, and fault tolerance, it provides a transparent and easy-to-use programming environment for parallel applications. Efficient parallelism is supported by diverse remote method invocations, multiple method invocation for object groups, a multi-threaded architecture, and synchronization schemes. Heterogeneity is achieved by automatic data marshalling and unmarshalling, and an easy-to-use, transparent programming environment is provided by the stub and skeleton objects generated by the DOVE IDL compiler together with the object life-cycle control and naming service of the object manager. The autonomy of distributed objects, the multi-layered architecture, and decentralized approaches to hierarchical naming and object management make DOVE extensible and scalable. Fault tolerance is provided by fault detection in objects using a timeout mechanism and fault notification using asynchronous exception handling methods.
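The stub/skeleton mechanism the abstract mentions can be illustrated with a tiny remote-method-invocation sketch: a client-side stub marshals a method call and sends it over a connection, and a server-side skeleton dispatches it to the real object and returns the result. The transport, class names, and single-call protocol below are illustrative assumptions, not DOVE's IDL-generated code.

```python
# Tiny stub/skeleton sketch of the remote-method-invocation pattern that a
# distributed object system like DOVE automates: the stub marshals a call on
# the client side, the skeleton dispatches it to the real object on the
# server side. Not DOVE's actual code.
import threading
from multiprocessing.connection import Listener, Client

class Adder:                                        # the "real" remote object
    def add(self, a, b):
        return a + b

def skeleton(listener, obj):
    """Server side: receive (method, args), invoke on obj, send back the result."""
    with listener.accept() as conn:
        method, args = conn.recv()                  # unmarshal the request
        conn.send(getattr(obj, method)(*args))      # invoke and marshal the reply

class Stub:
    """Client side: turns local method calls into messages to the skeleton."""
    def __init__(self, address, authkey=b"dove"):
        self._address, self._authkey = address, authkey
    def __getattr__(self, method):
        def call(*args):
            with Client(self._address, authkey=self._authkey) as conn:
                conn.send((method, args))
                return conn.recv()
        return call

if __name__ == "__main__":
    addr = ("localhost", 6000)
    listener = Listener(addr, authkey=b"dove")      # bind before the client connects
    threading.Thread(target=skeleton, args=(listener, Adder()), daemon=True).start()
    print(Stub(addr).add(2, 3))                     # prints 5, computed "remotely"
    listener.close()
```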

Efficient distributed consensus optimization based on patterns and groups for federated learning (연합학습을 위한 패턴 및 그룹 기반 효율적인 분산 합의 최적화)

  • Kang, Seung Ju;Chun, Ji Young;Noh, Geontae;Jeong, Ik Rae
    • Journal of Internet Computing and Services / v.23 no.4 / pp.73-85 / 2022
  • In the era of the fourth industrial revolution, where automation and connectivity are maximized with artificial intelligence, the importance of collecting and utilizing data for model updates is increasing. Creating a model with artificial intelligence usually requires gathering the data in one place so that the model can be updated, but this can infringe on users' privacy. In this paper, we introduce federated learning, a distributed machine learning method that can update models cooperatively without directly sharing the data stored at each participant, and review work on optimizing distributed consensus among participants without a central server. In addition, we propose a pattern- and group-based distributed consensus optimization algorithm that generates patterns and groups based on the Kirkman Triple System and performs updates and communication in parallel. This algorithm guarantees more privacy than existing distributed consensus optimization algorithms and reduces the communication time required for the model to converge.
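The proposed algorithm repeatedly regroups participants and lets each group update in parallel. The sketch below shows the underlying idea of group-based decentralized averaging: in every round the clients are split into small groups and each group averages its members' parameters, so all models drift toward a common consensus. The seeded-shuffle regrouping is only a placeholder for the paper's Kirkman-Triple-System-based pattern and group construction.

```python
# Sketch of group-based decentralized averaging: each round the clients are
# split into small groups and every group averages its members' parameters in
# parallel, so all models drift toward a common consensus. The regrouping
# below is a placeholder, not a Kirkman Triple System construction.
import numpy as np

def regroup(n_clients, group_size, round_idx):
    """Placeholder grouping: a per-round seeded shuffle so membership changes."""
    order = np.random.default_rng(round_idx).permutation(n_clients)
    return [order[i:i + group_size] for i in range(0, n_clients, group_size)]

def consensus_rounds(models, group_size=3, rounds=10):
    """models: array of shape (n_clients, n_params); returns the updated models."""
    models = models.copy()
    for r in range(rounds):
        for group in regroup(len(models), group_size, r):
            models[group] = models[group].mean(axis=0)    # in-group average
    return models

models = np.random.default_rng(0).normal(size=(9, 4))     # 9 clients, 4 parameters
print(consensus_rounds(models).std(axis=0))               # spread shrinks toward 0
```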

Parallel Flood Inundation Analysis using MPI Technique (MPI 기법을 이용한 병렬 홍수침수해석)

  • Park, Jae Hong
    • Journal of Korea Water Resources Association / v.47 no.11 / pp.1051-1060 / 2014
  • This study attempts to improve computational performance by combining the MPI (Message Passing Interface) technique, a standard model for parallel programming in distributed memory environments, with the DHM (Diffusion Hydrodynamic Model), an inundation analysis model. The parallelized inundation model was compared with the existing calculation method on complicated problems that require long computing times. In addition, the study evaluates the model's ability to estimate inundation extent and depth while reducing computing time for flooding in protected lowlands, and validates the applicability of the parallel model to actual flood analysis by simulating various inundation scenarios. To verify the model, it was applied to a hypothetical two-dimensional protected lowland and a real flooding case. The results show that, for the same accuracy, calculation time improves by up to about 41% to 48% when using multiple cores rather than a single core. The parallel flood analysis model developed in this study can compute flood depth, flooded areas, and the propagation speed of flood waves with shorter runtimes on multiple cores, and is expected to be used for prompt real-time flood forecasting and for drawing flood risk maps.
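The typical MPI pattern behind such a parallel inundation model is domain decomposition with halo exchange: each rank owns a strip of the 2-D grid, updates it with an explicit diffusion-type step, and swaps boundary rows with its neighbours every time step. The mpi4py sketch below shows only that communication pattern with a generic diffusion stencil; it is not the DHM, and the grid size, coefficient, and boundary handling are assumptions.

```python
# mpi4py sketch of the decomposition behind a parallel diffusion-type
# inundation step: each rank owns a strip of rows, updates it explicitly, and
# exchanges one halo row with each neighbour per time step. Communication
# pattern only; not the DHM. Run with e.g.:  mpiexec -n 4 python flood_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NX, NY, STEPS, ALPHA = 400, 400, 100, 0.2
rows = NX // size                                   # assumes NX divisible by size
h = np.zeros((rows + 2, NY))                        # local strip plus 2 halo rows
if rank == 0:
    h[1, NY // 2] = 10.0                            # a point source of water depth

up, down = rank - 1, rank + 1
for _ in range(STEPS):
    # exchange halo rows with neighbours (skipped at the domain boundaries)
    if up >= 0:
        comm.Sendrecv(h[1], dest=up, recvbuf=h[0], source=up)
    if down < size:
        comm.Sendrecv(h[rows], dest=down, recvbuf=h[rows + 1], source=down)
    interior = h[1:-1, 1:-1]
    h[1:-1, 1:-1] = interior + ALPHA * (
        h[:-2, 1:-1] + h[2:, 1:-1] + h[1:-1, :-2] + h[1:-1, 2:] - 4 * interior)

total = comm.reduce(h[1:rows + 1].sum(), op=MPI.SUM, root=0)
if rank == 0:
    print(f"total water volume after {STEPS} steps: {total:.4f}")
```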