• Title/Summary/Keyword: Distributed Parallel Programming

Distributed Parallel Computing Environment for Java (자바를 위한 분산된 병렬 컴퓨팅 환경)

  • 이상윤;김승호
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.6, pp.23-37, 2004
  • Since a Java thread is an object that behaves as an independent process within a single execution space in a multiprocessing environment, it can serve as the independent unit of work in parallel processing. Java's thread and synchronization mechanisms make it easy to write parallel application programs, and many existing results apply these parallel-processing features of Java to distributed computing environments. In this paper, we introduce an environment that supports parallel execution of the threads contained in legacy Java programs. The system, named TORB (Transparent Object Request Broker), enables parallel execution of legacy Java programs after a simple conversion process, because it supports programming transparency. TORB is an extended version of a distributed programming tool previously published by our research team, which provided only the typical distributed-processing capability of executing a specified function on a specified computer.
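
The abstract above relies on Java's built-in thread and synchronization mechanisms. As a point of reference, the following minimal sketch (generic Java, not the TORB API, which the abstract does not show) parallelizes independent work with plain threads and a synchronized accumulator:

```java
// Minimal illustration of Java threads plus synchronization for parallel work.
// This is a generic sketch, not the TORB system described in the paper.
public class ParallelSum {
    private long total = 0;                      // shared state guarded by a lock

    private synchronized void add(long value) {  // synchronization mechanism
        total += value;
    }

    public long sum(long[] data, int nThreads) throws InterruptedException {
        Thread[] workers = new Thread[nThreads];
        int chunk = (data.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int lo = t * chunk;
            final int hi = Math.min(data.length, lo + chunk);
            workers[t] = new Thread(() -> {      // each thread is an independent unit of work
                long local = 0;
                for (int i = lo; i < hi; i++) local += data[i];
                add(local);                      // single synchronized update per thread
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();       // wait for all parallel work to finish
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        System.out.println(new ParallelSum().sum(data, 4)); // prints 1000000
    }
}
```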

Parallel Computing Environment for R with on Supercomputer Systems (빅데이터 분석을 위한 슈퍼컴퓨터 환경에서 R의 병렬처리)

  • Lee, Sang Yeol;Won, Joong Ho
    • Journal of the Korean Operations Research and Management Science Society, v.39 no.4, pp.19-31, 2014
  • We study parallel processing techniques for the R programming language using high-performance computing technology. In this study, we used a massively parallel computing system with 25,408 CPU cores. We conducted a performance evaluation of a distributed-memory approach using MPI and a shared-memory approach using OpenMP. Our findings are summarized as follows. First, for some particular algorithms, parallel processing in R is about 150 times faster than serial processing. Second, the distributed-memory approach gets faster as the number of nodes increases, while the shared-memory approach is limited in its performance improvement by the number of CPUs available in a single system.
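
The paper's experiments use R with MPI (distributed memory) and OpenMP (shared memory); those toolchains are not shown here. As a rough shared-memory analog in Java, the sketch below times a serial versus a parallel run of the same CPU-bound computation on one multicore machine, the kind of comparison the abstract summarizes. The workload function is artificial and only stands in for a real analysis kernel.

```java
import java.util.stream.IntStream;

// Rough Java analog of a shared-memory serial-vs-parallel speedup measurement.
// The paper itself uses R with MPI/OpenMP on a supercomputer; this only
// illustrates the comparison on a single multicore node.
public class SpeedupDemo {
    static double work(int i) {               // artificial CPU-bound kernel
        double x = i;
        for (int k = 0; k < 2_000; k++) x = Math.sin(x) + Math.cos(x);
        return x;
    }

    public static void main(String[] args) {
        int n = 50_000;

        long t0 = System.nanoTime();
        double serial = IntStream.range(0, n).mapToDouble(SpeedupDemo::work).sum();
        long t1 = System.nanoTime();
        double parallel = IntStream.range(0, n).parallel()
                                   .mapToDouble(SpeedupDemo::work).sum();
        long t2 = System.nanoTime();

        System.out.printf("serial   %.2f s (sum=%.3f)%n", (t1 - t0) / 1e9, serial);
        System.out.printf("parallel %.2f s (sum=%.3f)%n", (t2 - t1) / 1e9, parallel);
        // Speedup is bounded by the number of cores in the single machine,
        // mirroring the shared-memory limitation noted in the abstract.
    }
}
```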

A Performance Comparison of Parallel Programming Models on Edge Devices (엣지 디바이스에서의 병렬 프로그래밍 모델 성능 비교 연구)

  • Dukyun Nam
    • IEMEK Journal of Embedded Systems and Applications, v.18 no.4, pp.165-172, 2023
  • Heterogeneous computing is a technology that utilizes different types of processors to perform parallel processing. It maximizes task processing and energy efficiency by leveraging various computing resources such as CPUs, GPUs, and FPGAs. Edge computing, on the other hand, has developed alongside IoT and 5G technologies. It is a form of distributed computing that utilizes computing resources close to clients, thereby offloading the central server, and it has evolved into intelligent edge computing combined with artificial intelligence. Intelligent edge computing enables data processing such as context awareness, prediction, control, and simple processing of the data collected at the edge. If heterogeneous computing can be successfully applied at the edge, it is expected to maximize job processing efficiency while minimizing dependence on the central server. In this paper, experiments were conducted to verify the feasibility of various parallel programming models on high-end and low-end edge devices using benchmark applications. We analyzed the performance of five parallel programming models on the Raspberry Pi 4 and the Jetson Orin Nano as low-end and high-end devices, respectively. In the experiments, OpenACC showed the best performance on the low-end edge device and OpenSYCL on the high-end device, owing to the stability and optimization of their system libraries.

Pattern mining for large distributed dataset: A parallel approach (PMLDD)

  • Pal, Amrit;Kumar, Manish
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.11, pp.5287-5303, 2018
  • Handling the vast amounts of data found in large transactional datasets is an obvious challenge for conventional data mining algorithms. Addressing this challenge, this paper proposes a parallel approach that properly decomposes the mining problem into sub-problems in order to find frequent patterns in such datasets. The proposed Pattern Mining for Large Distributed Dataset (PMLDD) approach ensures minimal dependencies and minimal communication among sub-problems. It establishes a linear aggregation of the intermediate results so that it can be adapted to large-scale programming models such as MapReduce, and an algorithmic structure for the MapReduce programming model is presented in this context. PMLDD guarantees efficient load balancing among the sub-problems through a specific selection criterion, and it reduces the number of iterations over the dataset required to mine frequent patterns compared with existing approaches. Finally, the performance evaluation shows that the approach scales to larger datasets, and the result analysis supports these claims.
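
PMLDD's decomposition, selection criterion, and iteration strategy are specific to the paper, but the abstract ties the aggregation of intermediate results to the MapReduce model. The sketch below shows only the generic shape of a Hadoop MapReduce job that counts item occurrences in a transactional dataset (assuming one transaction per line, items separated by spaces); it is not the PMLDD algorithm itself.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Baseline frequent-item counting over a transactional dataset with MapReduce.
// This is only the generic counting step, not PMLDD's decomposition scheme.
public class ItemCount {

    public static class ItemMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text item = new Text();

        @Override
        protected void map(LongWritable key, Text transaction, Context ctx)
                throws IOException, InterruptedException {
            for (String token : transaction.toString().trim().split("\\s+")) {
                if (!token.isEmpty()) {
                    item.set(token);
                    ctx.write(item, ONE);                     // emit (item, 1) per occurrence
                }
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text item, Iterable<IntWritable> counts, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) sum += c.get();      // linear aggregation of partial counts
            ctx.write(item, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "item count");
        job.setJarByClass(ItemCount.class);
        job.setMapperClass(ItemMapper.class);
        job.setCombinerClass(SumReducer.class);               // local pre-aggregation per mapper
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```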

The Design and Implementation of the ParaC Language (ParaC 언어의 설계 및 구현)

  • Lee, Kyoung-Seok;Woo, Young-Choon;Kim, Jin-Mee;Chi, Dong-Hae
    • The Transactions of the Korea Information Processing Society, v.4 no.11, pp.2903-2913, 1997
  • This paper describes the design and implementation of the ParaC language, which supports parallel programming on shared-memory and distributed-memory parallel machines. The ParaC language is designed for effective use of the system resources of scalable parallel systems. This goal is achieved by adding parallel and synchronization constructs for shared address spaces and remote task constructs for distributed address spaces. The paper also presents the translation method, and we implement the translator and the run-time library for parallel execution of the extended constructs.

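ParaC extends C with parallel/synchronization constructs for shared address spaces and remote-task constructs for distributed address spaces; the abstract does not show its syntax. As a loose Java analog of the shared-address-space side only, the sketch below expresses a parallel region with a synchronized update using an ExecutorService; the names are illustrative, not ParaC constructs.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Loose Java analog of a shared-address-space "parallel region": several
// workers update a shared counter under synchronization. ParaC's actual
// constructs (and its remote-task constructs) are not reproduced here.
public class ParallelRegionDemo {
    public static void main(String[] args) throws InterruptedException {
        final int workers = 4;
        final AtomicLong shared = new AtomicLong();       // shared variable
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int w = 0; w < workers; w++) {
            final int id = w;
            pool.execute(() -> {                          // body of the parallel region
                long local = 0;
                for (int i = id; i < 1_000_000; i += workers) local += i;
                shared.addAndGet(local);                  // synchronized accumulation
            });
        }

        pool.shutdown();                                  // acts like a barrier at region end
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("sum = " + shared.get());      // 499999500000
    }
}
```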

A Global Framework for Parallel and Distributed Application with Mobile Objects (이동 객체 기반 병렬 및 분산 응용 수행을 위한 전역 프레임워크)

  • Han, Youn-Hee;Park, Chan-Yeol;Hwang, Chong-Sun;Jeong, Young-Sik
    • Journal of KIISE:Computing Practices and Letters, v.6 no.6, pp.555-568, 2000
  • The World Wide Web has become the largest virtual system, almost universal in scope. Recent research has shown that it is effective to utilize idle hosts on the World Wide Web for running applications that require a substantial amount of computation. This novel computing paradigm has been referred to as global computing. In this paper, we propose and implement a mobile object-based global computing framework called Tiger, whose primary goal is to provide object-oriented programming libraries that support distribution, dispatching, and migration of objects, as well as concurrency among computational activities. The programming libraries give programmers access, location, and migration transparency for distributed and mobile objects. Tiger's second goal is to provide a system supporting the requisites of a global computing environment: scalability and resource and location management. The Tiger system and its programming libraries allow a programmer to easily develop object-oriented parallel and distributed applications using globally extended computing resources. We also present the performance improvements obtained in experiments with computation-intensive workloads such as parallel fractal image processing and genetic-neuro-fuzzy algorithms.

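Tiger's libraries for object distribution, dispatch, and migration are not detailed in the abstract. The sketch below only illustrates the underlying idea of a mobile object in Java: a serializable unit of work that could be shipped to another host and executed there. The `Dispatcher` interface and `LocalDispatcher` class are hypothetical stand-ins, not Tiger's API.

```java
import java.io.Serializable;
import java.util.concurrent.Callable;

// Conceptual sketch of a "mobile object": a serializable task that could be
// shipped to a remote host and run there. Dispatcher/LocalDispatcher are
// hypothetical names; Tiger's actual libraries are not reproduced here.
interface Dispatcher {
    <T> T dispatch(String host, Callable<T> mobileTask) throws Exception;
}

// Stand-in that runs the task locally; a real implementation would serialize
// the object, move it to `host`, and execute it in a remote runtime.
class LocalDispatcher implements Dispatcher {
    @Override
    public <T> T dispatch(String host, Callable<T> mobileTask) throws Exception {
        System.out.println("(pretending to migrate task to " + host + ")");
        return mobileTask.call();
    }
}

public class MobileObjectDemo {
    // The task carries its own data and code; Serializable makes it movable.
    static class FractalTile implements Callable<long[]>, Serializable {
        private final int rows;
        FractalTile(int rows) { this.rows = rows; }
        @Override
        public long[] call() {
            long[] result = new long[rows];
            for (int r = 0; r < rows; r++) result[r] = (long) r * r;  // placeholder computation
            return result;
        }
    }

    public static void main(String[] args) throws Exception {
        Dispatcher d = new LocalDispatcher();
        long[] tile = d.dispatch("idle-host.example.org", new FractalTile(8));
        System.out.println("last row value = " + tile[tile.length - 1]);   // 49
    }
}
```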

New execution model for CAPE using multiple threads on multicore clusters

  • Do, Xuan Huyen;Ha, Viet Hai;Tran, Van Long;Renault, Eric
    • ETRI Journal, v.43 no.5, pp.825-834, 2021
  • Based on its simplicity and user-friendly characteristics, OpenMP has become the standard model for programming on shared-memory architectures. Checkpointing-aided parallel execution (CAPE) is an approach that utilizes the discontinuous incremental checkpointing technique (DICKPT) to translate and execute OpenMP programs on distributed-memory architectures automatically. Currently, CAPE implements the OpenMP execution model by utilizing the DICKPT to distribute parallel jobs and their data to slave machines, and then collects the results after executing these distributed jobs. Although this model has been proven to be effective in terms of performance and compatibility with OpenMP on distributed-memory systems, it cannot fully exploit the capabilities of multicore processors. This paper presents a novel execution model for CAPE that utilizes two levels of parallelism. In the proposed model, we add another level of parallelism in the form of multithreaded processes on slave machines with the goal of better exploiting their multicore CPUs. Initial experimental results presented near the end of this paper demonstrate that this model provides significantly enhanced CAPE performance.
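
CAPE's checkpoint-based translation of OpenMP to distributed memory (DICKPT) cannot be reproduced in a short listing; the sketch below only illustrates, in Java terms, the two-level decomposition the abstract describes: an outer split of the iteration space across slave nodes (simulated here as objects in one JVM) and an inner thread pool on each node exploiting its cores.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Two-level parallelism sketch: the outer loop splits work across "nodes"
// (simulated in one JVM), and each node uses a thread pool for its share.
// CAPE does the outer level with checkpointing over a real distributed-memory
// cluster; that machinery is not shown.
public class TwoLevelDemo {
    static class Node {
        private final ExecutorService cores = Executors.newFixedThreadPool(4);

        long process(int lo, int hi) throws Exception {        // inner, multithreaded level
            List<Future<Long>> parts = new ArrayList<>();
            int step = Math.max(1, (hi - lo) / 4);
            for (int s = lo; s < hi; s += step) {
                final int a = s, b = Math.min(hi, s + step);
                parts.add(cores.submit(() -> {
                    long sum = 0;
                    for (int i = a; i < b; i++) sum += i;
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        }

        void shutdown() { cores.shutdown(); }
    }

    public static void main(String[] args) throws Exception {
        int n = 1_000_000, nodes = 3;
        long grand = 0;
        for (int k = 0; k < nodes; k++) {                      // outer, "distributed" level
            Node node = new Node();
            int lo = k * n / nodes, hi = (k + 1) * n / nodes;
            grand += node.process(lo, hi);
            node.shutdown();
        }
        System.out.println("sum = " + grand);                  // 499999500000
    }
}
```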

Advanced controller design for AUV based on adaptive dynamic programming

  • Chen, Tim;Khurram, Safiullahand;Zoungrana, Joelli;Pandey, Lallit;Chen, J.C.Y.
    • Advances in Computational Design, v.5 no.3, pp.233-260, 2020
  • The main purpose of introducing a model-based controller into the proposed control technique is to provide better and faster learning of the floating dynamics by means of a fuzzy logic controller, and to cancel the effect of the system's nonlinear terms. An iterative adaptive dynamic programming algorithm is proposed to deal with the optimal trajectory-tracking control problem for an autonomous underwater vehicle (AUV). The optimal tracking control problem is converted into an optimal regulation problem by a system transformation, and the regulation problem is then solved by the policy iteration adaptive dynamic programming algorithm. Finally, a simulation example is given to show the performance of the iterative adaptive dynamic programming algorithm.
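
The paper's adaptive dynamic programming operates on continuous AUV dynamics with function approximation, which is not reproduced here. The sketch below only illustrates the core idea it builds on, policy iteration (alternating policy evaluation and policy improvement), on a tiny hand-made discrete problem; it is a generic textbook example, not the paper's controller.

```java
import java.util.Arrays;

// Generic policy iteration on a toy 3-state, 2-action problem, illustrating
// the evaluate/improve loop underlying adaptive dynamic programming.
// This is not the paper's AUV tracking controller.
public class PolicyIterationDemo {
    static final int S = 3, A = 2;
    static final double GAMMA = 0.9;
    // next[s][a] = deterministic next state, cost[s][a] = stage cost (made up for illustration)
    static final int[][] next = {{1, 2}, {0, 2}, {2, 0}};
    static final double[][] cost = {{2, 1}, {1, 3}, {0, 4}};

    public static void main(String[] args) {
        int[] policy = new int[S];                       // start with action 0 everywhere
        double[] value = new double[S];

        for (int iter = 0; iter < 100; iter++) {
            // Policy evaluation: iterate V(s) = cost(s, pi(s)) + gamma * V(next state)
            for (int sweep = 0; sweep < 200; sweep++)
                for (int s = 0; s < S; s++)
                    value[s] = cost[s][policy[s]] + GAMMA * value[next[s][policy[s]]];

            // Policy improvement: pick the action minimizing the one-step lookahead
            boolean changed = false;
            for (int s = 0; s < S; s++) {
                int best = policy[s];
                for (int a = 0; a < A; a++)
                    if (cost[s][a] + GAMMA * value[next[s][a]]
                            < cost[s][best] + GAMMA * value[next[s][best]] - 1e-12)
                        best = a;
                if (best != policy[s]) { policy[s] = best; changed = true; }
            }
            if (!changed) break;                         // converged to the optimal policy
        }
        System.out.println("policy = " + Arrays.toString(policy));
        System.out.println("values = " + Arrays.toString(value));
    }
}
```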

High-Performance Korean Morphological Analyzer Using the MapReduce Framework on the GPU

  • Cho, Shi-Won;Lee, Dong-Wook
    • Journal of Electrical Engineering and Technology, v.6 no.4, pp.573-579, 2011
  • To meet the scalability and performance requirements of data analyses, which often involve voluminous data, efficient parallel or concurrent algorithms and frameworks are essential. We present a high-performance Korean morphological analyzer that employs the MapReduce framework on the graphics processing unit (GPU). MapReduce is a programming framework introduced by Google to aid the development of web search applications on large numbers of central processing units (CPUs). GPUs are designed as special-purpose co-processors, and their programming interfaces are typically formulated for graphics applications. Compared to CPUs, GPUs have greater computational power and memory bandwidth; however, they are more difficult to program because of their architectural design. The performance of the Korean morphological analyzer using the MapReduce framework on the GPU is evaluated against a CPU-based model. The proposed Korean morphological analyzer shows promising, scalable performance for distributed computing with the GPU.
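
The GPU implementation and the morphological rules are not described in enough detail in the abstract to reproduce; the short sketch below only restates the MapReduce programming model the analyzer relies on, using Java streams: a map phase that emits tokens and a reduce phase that aggregates counts per token. Tokenization here is a plain whitespace split, purely for illustration, and no GPU is involved.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Map/shuffle/reduce expressed with Java streams: map each line to tokens,
// group by token (shuffle), and count per group (reduce). The paper runs this
// style of computation on a GPU with real morphological analysis; here the
// "analysis" is a whitespace split, purely for illustration.
public class MapReduceShape {
    public static void main(String[] args) {
        String[] lines = {
            "distributed parallel programming",
            "parallel programming on the gpu",
        };

        Map<String, Long> counts = Arrays.stream(lines).parallel()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))              // map phase
                .collect(Collectors.groupingBy(t -> t, Collectors.counting()));  // shuffle + reduce

        counts.forEach((token, n) -> System.out.println(token + "\t" + n));
    }
}
```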

PDFindexer: Distributed PDF Indexing system using MapReduce

  • Murtazaev, JAziz;Kihm, Jang-Su;Oh, Sangyoon
    • International Journal of Internet, Broadcasting and Communication, v.4 no.1, pp.13-17, 2012
  • Indexing converts a raw document collection into an easily searchable representation. Web search by Google or Yahoo provides sub-second response times, made possible by efficient indexing of web pages across the entire Web. The indexing process becomes challenging as the scale grows, and parallel techniques such as the MapReduce framework can assist in efficient large-scale indexing. In this paper we propose PDFindexer, a system for indexing scientific papers in PDF using the MapReduce programming model. Unlike Web search engines, our target domain is scientific papers, which have a pre-defined structure: title, abstract, sections, and references. Our proposed system parses scientific papers in PDF, recreates their structure, and performs efficient distributed indexing with the MapReduce framework on a cluster of nodes. We provide an overview of the system, its components, and the interactions among them, and we discuss issues related to the design of the system and the use of MapReduce for parsing and indexing a large document collection.
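
PDFindexer's PDF parsing and structure recovery are specific to the paper; what the abstract ties to MapReduce is the distributed construction of an index. The sketch below shows the standard inverted-index job shape in Hadoop (each term mapped to the list of documents containing it), assuming plain-text input where each line is "docId<TAB>text"; the PDF parsing stage is not shown.

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Standard inverted-index MapReduce job: map emits (term, docId), reduce emits
// (term, list of docIds). Input is assumed to be "docId<TAB>text" per line;
// PDFindexer's PDF parsing and structure recovery are not reproduced here.
public class InvertedIndex {

    public static class TermMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Text term = new Text();
        private final Text docId = new Text();

        @Override
        protected void map(LongWritable key, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] parts = line.toString().split("\t", 2);
            if (parts.length < 2) return;
            docId.set(parts[0]);
            for (String token : parts[1].toLowerCase().split("\\W+")) {
                if (!token.isEmpty()) {
                    term.set(token);
                    ctx.write(term, docId);                  // (term, docId) pair
                }
            }
        }
    }

    public static class PostingReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text term, Iterable<Text> docs, Context ctx)
                throws IOException, InterruptedException {
            Set<String> posting = new HashSet<>();
            for (Text d : docs) posting.add(d.toString());   // deduplicate document ids
            ctx.write(term, new Text(String.join(",", posting)));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "inverted index");
        job.setJarByClass(InvertedIndex.class);
        job.setMapperClass(TermMapper.class);
        job.setReducerClass(PostingReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```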