• Title/Summary/Keyword: Distributed and Parallel Computing


Realtime Monitoring and Visualization for PDP System (PDP 시스템의 실시간 모니터링 및 시각화)

  • 김수자;송은하;박복자;정영식
    • Journal of Korea Multimedia Society / v.7 no.5 / pp.755-765 / 2004
  • Recently, Internet-based distributed/parallel computing that uses many idle hosts has demonstrated its usefulness for processing large-scale tasks, while also raising several important issues. While executing a large-scale task, realtime monitoring is required so that the execution strategy can adapt to changes in host performance and state. This paper provides realtime monitoring and visualization on a global computing infrastructure called PDP (Parallel Distributed Processing), a parallel computing framework implemented in Java for parallel computing on the Internet. (A generic monitoring sketch follows this entry.)

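As background for the kind of realtime monitoring described above, the following is a minimal Java sketch of periodic host-state sampling. The class, field, and host names are illustrative assumptions and are not part of the PDP framework's actual API.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: sample the local host's state once per second so it can be
// reported for realtime visualization. Not the PDP framework's API.
public class HostMonitorSketch {
    static class HostState {
        final String hostId;
        final int activeThreads;
        final double loadAverage;
        HostState(String hostId, int activeThreads, double loadAverage) {
            this.hostId = hostId;
            this.activeThreads = activeThreads;
            this.loadAverage = loadAverage;
        }
        @Override public String toString() {
            return hostId + " threads=" + activeThreads + " load=" + loadAverage;
        }
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Every second, sample the host state; here we just print it, whereas a real
        // monitor would push the sample to a central collector for visualization.
        scheduler.scheduleAtFixedRate(() -> {
            double load = java.lang.management.ManagementFactory
                    .getOperatingSystemMXBean().getSystemLoadAverage();
            System.out.println(new HostState("host-01", Thread.activeCount(), load));
        }, 0, 1, TimeUnit.SECONDS);
        // Runs until killed; a real monitor would shut the scheduler down on exit.
    }
}
```

In the web-scale setting the abstract describes, each participating host would push such samples to a central collector, which the visualization front end then renders.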

Debugging of Parallel Programs using Distributed Cooperating Components

  • Mrayyan, Reema Mohammad;Al Rababah, Ahmad AbdulQadir
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.570-578 / 2021
  • Recently, in the fields of engineering and scientific-technical calculation, mathematical modeling, and real-time problems, there has been a tendency to move away from sequential solutions for single-processor computers. Almost all modern application packages created in the above areas are oriented toward a parallel or distributed computing environment. This is primarily due to the ever-increasing requirements for the reliability of the results obtained and the accuracy of calculations, and hence the sharply increasing volumes of processed data [2,17,41]. In addition, new methods and algorithms for solving problems appear whose implementation on single-processor systems would simply be impossible because of the increased performance requirements placed on the computing system. The ubiquity of various types of parallel systems also plays a positive role in this process. Together with the growing demand for parallel programs and the proliferation of multiprocessor, multicore, and cluster technologies, the development of parallel programs is becoming more and more urgent, since users want to make the most of the capabilities of their modern computing equipment [14,39]. The high complexity of developing parallel programs, which often prevents the efficient use of the capabilities of high-performance computers, is a generally accepted fact [23,31].

Adaptive and optimized agent placement scheme for parallel agent-based simulation

  • Jin, Ki-Sung;Lee, Sang-Min;Kim, Young-Chul
    • ETRI Journal / v.44 no.2 / pp.313-326 / 2022
  • This study presents a novel scheme for distributed and parallel simulations with optimized agent placement across simulation instances. Traditional parallel simulation has the limitation that it does not provide sufficient performance even when using multiple resources. The main reason for this discrepancy is that supporting parallelism inevitably incurs additional costs on top of the base simulation cost. We present a comprehensive study of parallel simulation architectures, execution flows, and characteristics. Then, we identify critical challenges for optimizing large simulations for parallel instances. Based on our cost-benefit analysis, we propose a novel approach to overcome the performance constraints of agent-based parallel simulations, along with a solution for eliminating the synchronization cost among local instances. Our method ensures balanced performance through optimal deployment of agents to local instances and an adaptive agent placement scheme driven by the simulation load (a simple load-balancing placement is sketched below). Additionally, our empirical evaluation reveals that the proposed model achieves better performance than conventional methods under several conditions.
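To make the placement idea concrete, here is a hedged Java sketch of one simple load-aware policy: assign each agent to the currently least-loaded instance. It is not the paper's adaptive scheme; the class names and agent weights are illustrative assumptions.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Greedy balanced placement: each agent goes to the instance with the smallest
// accumulated load so far. Illustrative only, not the paper's algorithm.
public class AgentPlacementSketch {
    static class Instance {
        final int id;
        double load; // accumulated agent weight assigned to this instance
        Instance(int id) { this.id = id; }
    }

    // agentWeights[i] is the estimated computational weight of agent i.
    static int[] place(double[] agentWeights, int instanceCount) {
        PriorityQueue<Instance> byLoad =
                new PriorityQueue<>(Comparator.comparingDouble((Instance inst) -> inst.load));
        for (int i = 0; i < instanceCount; i++) byLoad.add(new Instance(i));

        int[] assignment = new int[agentWeights.length];
        for (int a = 0; a < agentWeights.length; a++) {
            Instance least = byLoad.poll();   // instance with the smallest load so far
            assignment[a] = least.id;
            least.load += agentWeights[a];    // charge this agent's weight to it
            byLoad.add(least);
        }
        return assignment;
    }

    public static void main(String[] args) {
        double[] weights = {1.0, 0.5, 2.0, 1.5, 0.7, 0.3};
        int[] assignment = place(weights, 3);
        for (int a = 0; a < weights.length; a++)
            System.out.println("agent " + a + " -> instance " + assignment[a]);
    }
}
```

An adaptive variant would periodically re-measure per-instance load during the simulation and migrate agents when the imbalance exceeds a threshold, which is the kind of runtime adjustment the abstract refers to.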

Development of the Dynamic Host Management Scheme for Parallel/Distributed Processing on the Web (웹 환경에서의 병렬/분산 처리를 위한 동적 호스트 관리 기법의 개발)

  • Song, Eun-Ha;Jeong, Young-Sik
    • Journal of KIISE: Computing Practices and Letters / v.8 no.3 / pp.251-260 / 2002
  • Parallel/distributed processing using the many idle hosts on the web offers a high cost-performance ratio for large-scale applications. Such processing must provide solutions for unpredictable conditions such as the heterogeneity, variability, and autonomy of hosts, the need to sustain performance continuously, and changes in the number of hosts participating in the computation. In this paper, we propose a strategy of adaptive task reallocation based on the job-processing performance of geographically dispersed hosts, together with a scheme of dynamic host management for the dynamic environment created by the many hosts that join and leave the web during parallel processing of large-scale applications. This paper implements the PDSWeb (Parallel/Distributed Scheme on Web) system, evaluates it, and applies it to the generation of rendering images requiring highly intensive computation. The results show that adaptive task reallocation under host variation improved performance by up to 90%, and that performance improved as hosts were added or deleted. (A toy reallocation sketch follows below.)
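The following is a minimal Java sketch, under assumed names and a toy host model, of the general reallocation idea: when a host leaves (or its measured throughput collapses), its pending tasks are moved to the currently fastest host. It is illustrative only and not PDSWeb's actual design.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model: each host has a measured job-processing rate and a queue of tasks.
// Names, rates, and the reassignment policy are illustrative assumptions.
public class TaskReallocationSketch {
    static class Host {
        final String name;
        final double tasksPerSecond;               // measured job-processing performance
        final Deque<String> queue = new ArrayDeque<>();
        Host(String name, double tasksPerSecond) {
            this.name = name;
            this.tasksPerSecond = tasksPerSecond;
        }
    }

    final Map<String, Host> hosts = new HashMap<>();

    void addHost(Host h) { hosts.put(h.name, h); }

    // A host disappeared: push its unfinished tasks onto the fastest remaining host.
    void hostLeft(String name) {
        Host gone = hosts.remove(name);
        if (gone == null || hosts.isEmpty()) return;
        Host fastest = hosts.values().stream()
                .max((a, b) -> Double.compare(a.tasksPerSecond, b.tasksPerSecond))
                .orElseThrow();
        fastest.queue.addAll(gone.queue);
        System.out.println(gone.queue.size() + " tasks moved from "
                + name + " to " + fastest.name);
    }

    public static void main(String[] args) {
        TaskReallocationSketch s = new TaskReallocationSketch();
        Host slow = new Host("host-A", 2.0);
        Host fast = new Host("host-B", 8.0);
        slow.queue.add("render-tile-17");
        slow.queue.add("render-tile-18");
        s.addHost(slow);
        s.addHost(fast);
        s.hostLeft("host-A");   // host-A leaves; its rendering tiles go to host-B
    }
}
```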

Parallel Computing Environment for R on Supercomputer Systems (빅데이터 분석을 위한 슈퍼컴퓨터 환경에서 R의 병렬처리)

  • Lee, Sang Yeol;Won, Joong Ho
    • Journal of the Korean Operations Research and Management Science Society / v.39 no.4 / pp.19-31 / 2014
  • We study parallel processing techniques for the R programming language using high-performance computing technology. In this study, we used a massively parallel computing system with 25,408 CPU cores. We conducted a performance evaluation of a distributed-memory system using MPI and of a shared-memory system using OpenMP. Our findings are summarized as follows. First, for some algorithms, parallel processing is about 150 times faster than serial processing in R. Second, the distributed-memory system gets faster as the number of nodes increases, while the shared-memory system's performance gain is limited by the number of CPUs available in a single machine.

An Efficient Solution Method to MDO Problems in Sequential and Parallel Computing Environments (순차 및 병렬처리 환경에서 효율적인 다분야통합최적설계 문제해결 방법)

  • Lee, Se-Jung
    • Korean Journal of Computational Design and Engineering / v.16 no.3 / pp.236-245 / 2011
  • Many researchers have recently studied multi-level formulation strategies for solving MDO problems; these basically distribute the coupling compatibility constraints across the disciplines, whereas single-level formulations concentrate all control at the system level. In addition, approximation techniques have become remedies for computationally expensive analyses and simulations. This paper compares MDO methods with respect to computing performance in both conventional sequential and modern distributed/parallel processing environments. The comparisons show that the Individual Disciplinary Feasible (IDF) formulation is the most efficient for sequential processing and that IDF with approximation (IDFa) is the most efficient for parallel processing; results on popular design examples support this finding. The author suggests that design engineers should first choose the IDF formulation for MDO problems because of its simple implementation and reasonable performance (a generic statement of IDF is given below). The single drawback of IDF is that it requires more memory for local design variables and coupling variables. Adding inexpensive memory can save engineers the considerable time and effort demanded by complicated multi-level formulations and free them from the no-solution headaches of the Multi-Disciplinary Analysis (MDA) inside the Multi-Disciplinary Feasible (MDF) formulation.
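For reference, a generic statement of the IDF formulation in standard textbook notation (not necessarily the notation used in the paper): the optimizer controls the design variables $x$ and the coupling targets $\hat{y}$, each discipline $i$ is analyzed independently against those targets, and consistency constraints close the loop.

```latex
\begin{aligned}
\min_{x,\; \hat{y}} \quad & f\bigl(x,\, y(x, \hat{y})\bigr) \\
\text{s.t.} \quad & g\bigl(x,\, y(x, \hat{y})\bigr) \le 0, \\
& \hat{y}_i - y_i\bigl(x,\, \hat{y}_{j \ne i}\bigr) = 0, \qquad i = 1, \dots, N .
\end{aligned}
```

Because each discipline analysis $y_i$ depends only on $x$ and the fixed targets $\hat{y}$, the $N$ analyses can run concurrently, which is consistent with the paper's finding that IDF (and IDFa) suits parallel processing.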

Fast Circuit Simulation Based on Parallel-Distributed LIM using Cloud Computing System

  • Inoue, Yuta;Sekine, Tadatoshi;Hasegawa, Takahiro;Asai, Hideki
    • JSTS: Journal of Semiconductor Technology and Science / v.10 no.1 / pp.49-54 / 2010
  • This paper describes a fast circuit simulation technique using the latency insertion method (LIM) with a parallel and distributed leapfrog algorithm. Numerical simulation results on a PC cluster system built on a cloud computing system are shown. The results confirm that the method is useful and practical. (The generic leapfrog update behind LIM is sketched below.)
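For context, LIM associates a capacitance with every node and an inductance with every branch so that node voltages and branch currents can be updated in an alternating (leapfrog) fashion. A simplified lossless form of the update, in generic notation that is not necessarily the paper's, is:

```latex
\begin{aligned}
v_i^{\,n+1/2} &= v_i^{\,n-1/2} + \frac{\Delta t}{C_i}\Bigl(h_i^{\,n} - \sum_{k} i_{ik}^{\,n}\Bigr), \\
i_{ij}^{\,n+1} &= i_{ij}^{\,n} + \frac{\Delta t}{L_{ij}}\Bigl(v_i^{\,n+1/2} - v_j^{\,n+1/2}\Bigr),
\end{aligned}
```

where $h_i$ is the current injected at node $i$ and $i_{ik}$ are the branch currents leaving node $i$. Each update touches only a node or branch and its immediate neighbors at the previous half-step, so the network can be partitioned across machines, which is what makes a parallel and distributed leapfrog scheme natural.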

Distributed Parallel Computing Environment for Java (자바를 위한 분산된 병렬 컴퓨팅 환경)

  • 이상윤;김승호
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.6 / pp.23-37 / 2004
  • Since a Java thread is an object treated as an independent flow of control within one execution space, it can serve as the unit of work for parallel processing. Java's thread and synchronization mechanisms make it easy to write parallel application programs, and consequently many results exist that apply Java's support for parallel processing to distributed computing environments. In this paper, we introduce an environment that supports parallel execution of the threads contained in legacy Java programs. The system, named TORB (Transparent Object Request Broker), enables parallel execution of a legacy Java program after a simple conversion process, because it supports programming transparency. TORB is an extended version of a distributed programming tool previously published by our research team, which offered only a typical distributed processing feature: executing a specified function on a specified computer. (A generic thread-pool sketch, not TORB's API, follows below.)
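The following is a minimal, self-contained sketch of the kind of Java thread-level parallelism such systems build on: independent workers process slices of a task and their partial results are combined. The class and task names are illustrative; this is not TORB's API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Each worker thread sums its own slice of the array; the main thread joins the
// futures and combines the partial sums. Illustrative example only.
public class ParallelSumSketch {
    public static void main(String[] args) throws Exception {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i % 7;

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Long>> parts = new ArrayList<>();

        int chunk = (data.length + workers - 1) / workers;
        for (int w = 0; w < workers; w++) {
            final int lo = w * chunk;
            final int hi = Math.min(data.length, lo + chunk);
            // Each worker touches only its own slice, so no locking is needed.
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                return s;
            }));
        }

        long total = 0;
        for (Future<Long> f : parts) total += f.get();  // join and combine partial results
        pool.shutdown();
        System.out.println("total = " + total);
    }
}
```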

Applying Distributed Agents to Parallel Genetic Algorithm on Dynamic Network Environments (동적 네트워크 환경하의 분산 에이전트를 활용한 병렬 유전자 알고리즘 기법)

  • Baek Jin-Wook;Bang Jeon-Won
    • Journal of the Korea Society of Computer and Information / v.11 no.4 s.42 / pp.119-125 / 2006
  • Distributed systems can be defined as sets of computing resources connected by a computer network. One of the most significant techniques in optimization problem domains is the parallel genetic algorithm, which is based on distributed systems. Since the status of dynamic network environments such as the Internet and mobile computing can change continually, solving an optimization problem in such environments with previous parallel genetic algorithms as-is cannot be efficient. In this paper, we propose an effective technique by which the parallel genetic algorithm can be used efficiently in dynamic network environments. (A generic island-model parallel GA is sketched after this entry.)

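For background on what a parallel genetic algorithm looks like, the following is a hedged Java sketch of a classic island model: each worker thread evolves its own sub-population and the best individual migrates between rounds. It is not the paper's agent-based technique; the fitness function (count of 1-bits) and all parameters are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;

// Toy island-model parallel GA on the "OneMax" problem (maximize the number of
// true bits). Illustrative only; not the paper's agent-based scheme.
public class IslandGaSketch {
    static final int GENES = 32, POP = 40, GENERATIONS = 200, ISLANDS = 4;

    static int fitness(boolean[] g) {
        int f = 0;
        for (boolean b : g) if (b) f++;
        return f;
    }

    // Evolve one island; an optional migrant from another island seeds the population.
    static boolean[] evolveIsland(boolean[] migrant) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        boolean[][] pop = new boolean[POP][GENES];
        for (boolean[] ind : pop)
            for (int i = 0; i < GENES; i++) ind[i] = rnd.nextBoolean();
        if (migrant != null) pop[0] = migrant.clone();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            // binary tournament selection plus one-bit mutation (no crossover, for brevity)
            boolean[] a = pop[rnd.nextInt(POP)], b = pop[rnd.nextInt(POP)];
            boolean[] child = (fitness(a) >= fitness(b) ? a : b).clone();
            child[rnd.nextInt(GENES)] ^= true;
            int victim = rnd.nextInt(POP);
            if (fitness(child) > fitness(pop[victim])) pop[victim] = child;
        }
        return Arrays.stream(pop).max((x, y) -> fitness(x) - fitness(y)).orElseThrow();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(ISLANDS);
        boolean[] best = null;
        for (int round = 0; round < 3; round++) {      // migration rounds
            final boolean[] migrant = best;            // best-so-far migrates to every island
            List<Future<boolean[]>> islands = new ArrayList<>();
            for (int i = 0; i < ISLANDS; i++) {
                Callable<boolean[]> task = () -> evolveIsland(migrant);
                islands.add(pool.submit(task));
            }
            for (Future<boolean[]> f : islands) {
                boolean[] candidate = f.get();
                if (best == null || fitness(candidate) > fitness(best)) best = candidate;
            }
        }
        pool.shutdown();
        System.out.println("best fitness = " + fitness(best) + " / " + GENES);
    }
}
```

On a dynamic network, as the abstract notes, islands (hosts) may appear and disappear during the run, which is exactly the situation the proposed technique is meant to handle.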

DMRUT-MCDS: Discovery Relationships in the Cyber-Physical Integrated Network

  • Lu, Hongliang;Cao, Jiannong;Zhu, Weiping;Jiao, Xianlong;Lv, Shaohe;Wang, Xiaodong
    • Journal of Communications and Networks / v.17 no.6 / pp.558-567 / 2015
  • In recent years, we have seen a proliferation of mobile-network-enabled smart objects, such as smart-phones and smart-watches, that form a cyber-physical integrated network to connect the cyber and physical worlds through the capabilities of sensing, communicating, and computing. Discovery of the relationship between smart objects is a critical and nontrivial task in cyber-physical integrated network applications. Aiming to find the most stable relationship in the heterogeneous and dynamic cyber-physical network, we propose a distributed and efficient relationship-discovery algorithm, called dynamically maximizing remaining unchanged time with minimum connected dominant set (DMRUT-MCDS) for constructing a backbone with the smallest scale infrastructure. In our proposed algorithm, the impact of the duration of the relationship is considered in order to balance the size and sustain time of the infrastructure. The performance of our algorithm is studied through extensive simulations and the results show that DMRUT-MCDS performs well in different distribution networks.
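For context on the connected-dominating-set "backbone" idea, here is a plain greedy CDS heuristic in Java. It is NOT the DMRUT-MCDS algorithm (which additionally weighs how long each relationship is expected to remain unchanged); it only illustrates the underlying notion of a small connected set that dominates every node. The example graph is illustrative.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Greedy connected dominating set: start at the highest-degree node, then repeatedly
// add the neighbor of the current set that newly dominates the most nodes. The graph
// is assumed connected. Illustrative only, not the paper's algorithm.
public class GreedyCdsSketch {
    static Set<Integer> greedyCds(Map<Integer, Set<Integer>> adj) {
        int start = adj.keySet().stream()
                .max(Comparator.comparingInt(v -> adj.get(v).size())).orElseThrow();
        Set<Integer> cds = new LinkedHashSet<>(List.of(start));
        Set<Integer> dominated = new HashSet<>(adj.get(start));
        dominated.add(start);

        while (dominated.size() < adj.size()) {
            int bestNode = -1, bestGain = -1;
            // candidates are nodes adjacent to the current CDS, which keeps it connected
            for (int c : cds) {
                for (int u : adj.get(c)) {
                    if (cds.contains(u)) continue;
                    int gain = 0;
                    for (int w : adj.get(u)) if (!dominated.contains(w)) gain++;
                    if (gain > bestGain) { bestGain = gain; bestNode = u; }
                }
            }
            cds.add(bestNode);
            dominated.add(bestNode);
            dominated.addAll(adj.get(bestNode));
        }
        return cds;
    }

    public static void main(String[] args) {
        Map<Integer, Set<Integer>> adj = new HashMap<>();
        int[][] edges = {{0,1},{1,2},{2,3},{3,4},{1,5},{5,6},{2,6}};
        for (int[] e : edges) {
            adj.computeIfAbsent(e[0], k -> new HashSet<>()).add(e[1]);
            adj.computeIfAbsent(e[1], k -> new HashSet<>()).add(e[0]);
        }
        System.out.println("backbone: " + greedyCds(adj));
    }
}
```

DMRUT-MCDS extends this basic notion by preferring backbone members whose relationships are expected to stay stable the longest, trading backbone size against how long the backbone survives in a dynamic cyber-physical network.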