• Title/Abstract/Keywords: Distributed and Parallel Algorithms

Search results: 77 items

분산 및 병렬 알고리즘 시뮬레이터 (Distributed/parallel Algorithm Simulator)

  • 서영진
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 1999년도 가을 학술발표논문집 Vol.26 No.2 (3)
    • /
    • pp.777-779
    • /
    • 1999
  • A new distributed/parallel algorithm simulator, DASim (Distributed Algorithm Simulator), is proposed in this paper. The idea is to ease the design, analysis, and implementation of distributed algorithms. A small high-level language is proposed for this purpose. Through this high-level language, which is not tied to any particular programming language, users are spared the tedious details of programming distributed or parallel algorithms. Furthermore, visualization of these algorithms is helpful for understanding their behavior.

  • PDF

분산 유전알고리즘의 TSP 적용 (Distributed Genetic Algorithms for the TSP)

  • 박유석
    • 대한안전경영과학회지
    • /
    • Vol. 3, No. 3
    • /
    • pp.191-200
    • /
    • 2001
  • Parallel genetic algorithms partition the whole population into several sub-populations and search for the optimal solution by periodically exchanging information among them. The distributed genetic algorithm, one kind of parallel genetic algorithm, divides a large population into several sub-populations and executes a traditional genetic algorithm on each sub-population independently; promising individuals selected from each sub-population are periodically migrated to other sub-populations according to the migration interval and migration rate. In this paper, for the traveling salesman problem, we analyze and compare distributed genetic algorithms that apply different genetic algorithms to the separate sub-populations with those that apply the same genetic algorithm to all of them. The simulation results show that using different genetic algorithms yields better results than using the same genetic algorithm everywhere. This appears to be because the rapid search toward approximate optima and the preservation of solution diversity interact when different genetic algorithms are combined. (An illustrative island-model sketch follows this entry.)

  • PDF
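
The island-model scheme described in the abstract can be illustrated with a short sketch. The following Python code is illustrative only: the random city set, population size, swap-mutation operator, and ring migration topology are assumptions, not the paper's configuration. Each island evolves its sub-population independently and periodically sends its best tour to the next island.

```python
# Minimal island-model (distributed) GA sketch for the TSP.
import random

CITIES = [(random.random(), random.random()) for _ in range(20)]  # illustrative instance

def tour_length(tour):
    return sum(
        ((CITIES[a][0] - CITIES[b][0]) ** 2 + (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
        for a, b in zip(tour, tour[1:] + tour[:1])
    )

def mutate(tour):
    i, j = random.sample(range(len(tour)), 2)   # swap two cities
    child = tour[:]
    child[i], child[j] = child[j], child[i]
    return child

def evolve(pop, generations):
    for _ in range(generations):
        parent = min(random.sample(pop, 3), key=tour_length)   # tournament selection
        child = mutate(parent)
        worst = max(range(len(pop)), key=lambda k: tour_length(pop[k]))
        if tour_length(child) < tour_length(pop[worst]):
            pop[worst] = child                                  # steady-state replacement
    return pop

def distributed_ga(n_islands=4, pop_size=30, epochs=10, migration_interval=20):
    islands = [[random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(epochs):
        for pop in islands:
            evolve(pop, migration_interval)                     # independent evolution per island
        # ring migration: each island receives the best tour of its predecessor
        bests = [min(pop, key=tour_length) for pop in islands]
        for i, pop in enumerate(islands):
            pop[random.randrange(pop_size)] = bests[(i - 1) % n_islands][:]
    return min((min(pop, key=tour_length) for pop in islands), key=tour_length)

print(round(tour_length(distributed_ga()), 3))
```

In the paper's setting, "using different genetic algorithms" would correspond to giving each island a different `evolve` variant (different operators or parameters) while keeping the same migration scheme.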

전력 조류 계산의 분산 병렬처리기법에 관한 연구 (A Development of Distributed Parallel Processing algorithm for Power Flow analysis)

  • 이춘모;이해기
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2001년도 학술대회 논문집 전문대학교육위원
    • /
    • pp.134-140
    • /
    • 2001
  • Parallel processing has the potential to be used cost-effectively on computationally intensive power system problems, but the technology is not yet readily available, in terms of both parallel computers and parallel processing schemes. Testing these algorithms to ensure accuracy and evaluating their performance are also issues. Although a significant number of parallel algorithms for power system problems have been developed in the last decade, actual testing on parallel processor architectures is still in its early stages. This paper presents a parallel processing algorithm that provides a basis for solving the power flow problem by Newton's method on a distributed-memory parallel computer. The method assigns torn blocks of the sparse matrix to the parallel processors and computes them there. Testing to ensure the accuracy of the developed method was done on a serial computer by simulating a parallel environment. (A rough sketch of a block-partitioned solve follows this entry.)

  • PDF
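
As a rough illustration of the matrix-tearing idea, the sketch below partitions the linearized correction step of a Newton iteration, J dx = -f, into diagonal blocks, assigns each block to one serially simulated "processor", and handles the off-block coupling iteratively with a block-Jacobi scheme. The 4x4 system, block size, and iteration count are illustrative assumptions; this is not the authors' exact formulation.

```python
# Block-Jacobi sketch of solving a torn sparse linear system block by block.
import numpy as np

def block_jacobi_solve(J, rhs, block_size, iters=200):
    n = J.shape[0]
    x = np.zeros(n)
    blocks = [slice(i, i + block_size) for i in range(0, n, block_size)]
    for _ in range(iters):
        x_new = np.empty_like(x)
        for blk in blocks:                       # each block = one processor's task
            coupling = J[blk, :] @ x - J[blk, blk] @ x[blk]   # off-block contribution
            x_new[blk] = np.linalg.solve(J[blk, blk], rhs[blk] - coupling)
        x = x_new
    return x

J = np.array([[10.0, 1.0, 0.0, 0.0],
              [ 1.0, 12.0, 0.5, 0.0],
              [ 0.0, 0.5, 9.0, 1.0],
              [ 0.0, 0.0, 1.0, 11.0]])          # illustrative diagonally dominant matrix
rhs = np.array([1.0, 2.0, 0.5, 1.5])
print(np.allclose(block_jacobi_solve(J, rhs, 2), np.linalg.solve(J, rhs), atol=1e-6))
```

The serial loop over `blocks` is exactly the part that a distributed-memory implementation would execute concurrently, exchanging only the boundary (coupling) values between processors.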

Proposition and Evaluation of Parallelism-Independent Scheduling Algorithms for DAGs of Tasks with Non-Uniform Execution Time

  • Kirilka Nikolova;Atusi Maeda;Sowa, Masa-Hiro
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000년도 ITC-CSCC -1
    • /
    • pp.289-293
    • /
    • 2000
  • We propose two new algorithms for parallelism-independent scheduling. The machine code generated by a compiler that uses these algorithms in its scheduling phase is parallelism-independent code, executable in minimum time regardless of the number of processors in the parallel computer. Our new algorithms have the following phases: finding the minimum number of processors on which the program can be executed in minimal time, scheduling with a heuristic algorithm for this predefined number of processors, and serializing the parallel schedule according to the earliest start times of the tasks. At run time, tasks are taken from the serialized schedule and assigned to the processor that allows the earliest start time for the task. The order of the tasks decided at compile time is not changed at run time regardless of the number of available processors, which means there is no out-of-order issue and execution. Scheduling is done predominantly at compile time, and dynamic scheduling is reduced to the allocation of tasks to processors. We evaluate the proposed algorithms by comparing them, in terms of schedule length, with the CP/MISF algorithm. For performance evaluation we use both randomly generated DAGs (directed acyclic graphs) and DAGs representing real applications. From a practical point of view, the proposed algorithms can be used successfully for scheduling programs for in-order superscalar processors and shared-memory multiprocessor systems. Superscalar processors with any number of functional units can execute the parallelism-independent code in minimum time without dynamic scheduling or out-of-order issue hardware. This means that using our algorithms can reduce the complexity of the processor hardware and the run-time overhead related to dynamic scheduling. (A minimal list-scheduling sketch follows this entry.)

  • PDF
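
The compile-time/run-time split described above can be sketched as follows: tasks of a DAG are serialized once by a critical-path priority (an assumed heuristic standing in for the paper's algorithms), and at "run time" each task in that fixed order is simply placed on whichever processor lets it start earliest. The toy DAG and task weights are illustrative.

```python
# Parallelism-independent list-scheduling sketch.
# task -> (execution time, list of predecessors); an illustrative DAG
dag = {"A": (2, []), "B": (3, ["A"]), "C": (1, ["A"]), "D": (2, ["B", "C"])}

def critical_path_order(dag):
    """Serialize tasks by critical-path length (longest path to a sink)."""
    memo = {}
    def cp(t):
        if t not in memo:
            succs = [s for s, (_, preds) in dag.items() if t in preds]
            memo[t] = dag[t][0] + max((cp(s) for s in succs), default=0)
        return memo[t]
    # with positive task weights this order is always a valid topological order
    return sorted(dag, key=cp, reverse=True)

def run(serialized, dag, n_procs):
    """Assign tasks, in serialized order, to the processor giving the earliest
    start time; return the schedule length (makespan)."""
    proc_free = [0.0] * n_procs            # time at which each processor becomes idle
    finish = {}                            # finish time of every scheduled task
    for t in serialized:
        ready = max((finish[p] for p in dag[t][1]), default=0.0)
        k = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[k], ready)
        finish[t] = proc_free[k] = start + dag[t][0]
    return max(finish.values())

order = critical_path_order(dag)
print(order, run(order, dag, 2), run(order, dag, 4))   # same makespan on 2 and 4 processors
```

For this DAG the makespan equals the critical-path length on both 2 and 4 processors, which is the parallelism-independence property the paper targets.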

Performance Optimization of Parallel Algorithms

  • Hudik, Martin;Hodon, Michal
    • Journal of Communications and Networks
    • /
    • Vol. 16, No. 4
    • /
    • pp.436-446
    • /
    • 2014
  • The high intensity of research and modeling in mathematics, physics, biology, and chemistry requires new computing resources. Because of the large computational complexity of such tasks, computing time is long and costly, and the most effective way to increase efficiency is to adopt parallel principles. The purpose of this paper is to present the issues of parallel computing, with emphasis on the analysis of parallel systems and the impact of communication delays on their efficiency and on overall execution time. The paper focuses on finite algorithms for solving systems of linear equations, namely matrix manipulation by the Gauss elimination method (GEM). Algorithms are designed for shared-memory architectures (open multiprocessing, OpenMP), distributed-memory architectures (message passing interface, MPI), and their combination (MPI + OpenMP). The properties of the algorithms were determined analytically and verified experimentally, and conclusions are drawn for both theory and practice. (A serial sketch of the parallelizable GEM loop follows below.)
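
The structure being parallelized can be seen in a plain serial GEM sketch: for each pivot, the updates of the rows below it are mutually independent, so that loop is the natural candidate for an OpenMP parallel-for or for distributing row blocks across MPI ranks. The NumPy code below is an illustrative serial version without pivoting, not the paper's implementation.

```python
# Serial Gauss elimination (GEM) sketch highlighting the parallelizable loop.
import numpy as np

def gem_solve(A, b):
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                  # pivot column (sequential outer loop)
        # parallelizable region: rows k+1..n-1 can be updated independently
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]           # no pivoting, for brevity
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in reversed(range(n)):            # back substitution (sequential)
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])   # illustrative system
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(gem_solve(A, b), np.linalg.solve(A, b)))
```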

동적 네트워크 환경하의 분산 에이전트를 활용한 병렬 유전자 알고리즘 기법 (Applying Distributed Agents to Parallel Genetic Algorithm on Dynamic Network Environments)

  • 백진욱;방정원
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 11, No. 4
    • /
    • pp.119-125
    • /
    • 2006
  • A distributed system can be defined as a collection of computing resources connected to one another through a network. The parallel genetic algorithm, one of the most important techniques in the optimization problem domain, is based on such distributed systems. In dynamic network environments such as the Internet and mobile computing, the state of the network can change continuously, so it is inefficient to use existing parallel genetic algorithms unchanged to solve optimization problems on these distributed systems. This paper presents a technique for using parallel genetic algorithms efficiently in dynamic network environments by employing distributed agents.

  • PDF

병렬 DEVS 시뮬레이션 환경(P-DEVSIM ++) 성능 평가 (Performance Evaluation of a Parallel DEVS Simulation Environment of P-DEVSIM ++)

  • 성영락
    • 한국시뮬레이션학회논문지
    • /
    • Vol. 2, No. 1
    • /
    • pp.31-44
    • /
    • 1993
  • Zeigler's DEVS (Discrete Event Systems Specification) formalism supports the formal specification of discrete event systems in a hierarchical, modular manner. Associated with it are hierarchical, distributed simulation algorithms, called abstract simulators, which interpret the dynamics of DEVS models. This paper deals with the performance evaluation of P-DEVSIM++, a parallel simulation environment that implements the DEVS formalism and its associated simulation algorithms on a parallel machine. A performance simulator has been developed and used to experiment with models of parallel simulation execution under different conditions. The experimental results show that simulation time depends both on the number of processors in the parallel system and on the communication overhead among those processors. (A minimal atomic-DEVS sketch follows this entry.)

  • PDF
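
As a reminder of what the formalism specifies, the sketch below shows a minimal atomic DEVS model in Python: a state, a time-advance function, external and internal transition functions, and an output function, which is what the abstract simulators interpret. The "busy processor" model and the tiny driver are illustrative assumptions; this is not P-DEVSIM++ code.

```python
# Minimal atomic-DEVS sketch: the four characteristic functions of a model.
class BusyProcessor:
    """Accepts a job, stays busy for service_time, then emits 'done'."""
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.busy = False

    def time_advance(self):                     # ta(s)
        return self.service_time if self.busy else float("inf")

    def ext_transition(self, elapsed, event):   # delta_ext(s, e, x)
        if event == "job" and not self.busy:
            self.busy = True

    def int_transition(self):                   # delta_int(s)
        self.busy = False

    def output(self):                           # lambda(s)
        return "done"

# A toy root coordinator driving one atomic model with a single input event.
m, t = BusyProcessor(), 0.0
m.ext_transition(0.0, "job")    # external event arrives at time 0
t += m.time_advance()           # advance to the next internal event
print(t, m.output())            # 2.0 done
m.int_transition()
```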

대형구조물의 분산구조해석을 위한 PCG 알고리즘 (Distributed Structural Analysis Algorithms for Large-Scale Structures based on PCG Algorithms)

  • 권윤한;박효선
    • 한국전산구조공학회논문집
    • /
    • Vol. 12, No. 3
    • /
    • pp.385-396
    • /
    • 1999
  • The scale of the problems dealt with in engineering has been growing, and the structural design of such large-scale structures requires a large number of structural analyses for member strength design and nodal displacement control. Structural analysis of a large-scale structure on a single personal computer requires large memory and long computation time, so it is difficult to use efficiently for the design of large structures that require repeated analyses. As an alternative, this paper constructs a high-performance parallel computing system by connecting multiple personal computers over a network and develops two types of distributed solution methods for the structural equations, based on the iterative PCG (preconditioned conjugate gradient) algorithm, suited to this system. The distributed structural analysis methods for large structures are developed so as to minimize the number and volume of communications among the computers during the analysis. The performance of the distributed analysis methods is evaluated by applying them to the structural analysis of a large three-dimensional truss structure and a 144-story braced-tube structure. (A serial PCG sketch follows this entry.)

  • PDF
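
A serial sketch of the PCG iteration underlying such distributed solvers is given below; a Jacobi (diagonal) preconditioner and a small SPD matrix are assumed for illustration. In a distributed implementation, the matrix-vector product and the dot products are the operations partitioned across the networked PCs, and therefore the source of the communication the paper seeks to minimize.

```python
# Preconditioned conjugate gradient (PCG) sketch with a Jacobi preconditioner.
import numpy as np

def pcg(K, f, tol=1e-8, max_iter=1000):
    n = len(f)
    M_inv = 1.0 / np.diag(K)            # Jacobi preconditioner (assumed)
    u = np.zeros(n)
    r = f - K @ u
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p                      # distributed: each PC forms its row block
        alpha = rz / (p @ Kp)           # distributed: dot products need a reduction
        u += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u

K = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])   # illustrative SPD "stiffness"
f = np.array([1.0, 2.0, 3.0])
print(np.allclose(pcg(K, f), np.linalg.solve(K, f)))
```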

A Study on Sorting in A Computer Using The Binary Multi-level Multi-access Protocol

  • Jung Chang-Duk
    • 한국지능정보시스템학회:학술대회논문집
    • /
    • 한국지능정보시스템학회 2006년도 춘계학술대회
    • /
    • pp.303-310
    • /
    • 2006
  • Sorting algorithms have been developed to take advantage of distributed computers, but the speedup of parallel sorting algorithms decreases rapidly as the number of processors increases, due to parallel processing overhead such as context-switching time and inter-processor communication cost. In this paper, we propose a parallel sorting method that provides linear speedup over an optimal serial algorithm for a system with a large number of processors; the algorithm may even provide superlinear speedup on a practical system. The algorithm takes advantage of the properties of the interconnection network and its protocol. (A generic parallel-sort sketch follows this entry.)

  • PDF
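
For orientation only, the sketch below shows a generic parallel sort (local sort per worker, then a serial k-way merge), not the paper's protocol-based method; it merely marks where the scatter and merge communication overhead discussed in the abstract arises.

```python
# Generic parallel sort sketch: scatter, sort locally, merge sorted runs.
import heapq
import random
from multiprocessing import Pool

def parallel_sort(data, n_workers=4):
    chunk = (len(data) + n_workers - 1) // n_workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]  # scatter (communication)
    with Pool(n_workers) as pool:
        runs = pool.map(sorted, parts)        # each worker sorts its own chunk
    return list(heapq.merge(*runs))           # gather + serial k-way merge (communication)

if __name__ == "__main__":
    data = [random.randint(0, 10**6) for _ in range(100_000)]
    assert parallel_sort(data) == sorted(data)
    print("ok")
```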

Fully Distributed Economic Dispatching Methods Based on Alternating Direction Multiplier Method

  • Yang, Linfeng;Zhang, Tingting;Chen, Guo;Zhang, Zhenrong;Luo, Jiangyao;Pan, Shanshan
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 13, No. 5
    • /
    • pp.1778-1790
    • /
    • 2018
  • Based on the requirements and characteristics of multi-zone autonomous decision-making in modern power systems, fully distributed computing methods are needed to optimize the coordination of the economic dispatch (ED) problem of multi-regional power systems by constructing a decomposition and interaction mechanism. In this paper, four fully distributed methods based on the alternating direction method of multipliers (ADMM) are used to solve the ED problem in a distributed manner. By duplicating variables, the classical 2-block ADMM can be used directly to solve the ED problem in a fully distributed way. The second method employs ADMM to solve the dual problem of ED in a fully distributed manner. N-block methods based on ADMM, namely the alternating direction method with Gaussian back substitution (ADM_G) and exchange ADMM (E_ADMM), are also employed; both can solve the ED problem in a distributed manner, but the former cannot be carried out in parallel. The four fully distributed methods thus solve the ED problem in a distributed, collaborative manner, and we also discuss the differences among the four algorithms in terms of convergence, computation speed, and parameter changes. Simulation results are reported to test the performance of these distributed algorithms in serial and parallel execution. (A small exchange-ADMM sketch follows this entry.)
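
A small exchange-ADMM (E_ADMM) sketch for a toy quadratic economic dispatch problem is given below: minimize the sum of a_i*p_i^2 + b_i*p_i subject to the power balance sum(p_i) = demand. The cost coefficients, demand, penalty parameter rho, and iteration count are illustrative assumptions; generator limits and the paper's other three ADMM variants are omitted.

```python
# Exchange-ADMM sketch for a toy quadratic economic dispatch problem.
a = [0.10, 0.08, 0.12]        # quadratic cost coefficients of 3 generators (assumed)
b = [2.0, 2.5, 1.8]           # linear cost coefficients (assumed)
demand = 150.0
rho, N = 1.0, len(a)

p = [0.0] * N                 # local generation decisions
u = 0.0                       # scaled dual variable (shared price signal)
for _ in range(5000):
    r = (sum(p) - demand) / N                 # average power-balance residual
    # each area solves its local quadratic subproblem in closed form (in parallel)
    p = [(rho * (p[i] - r - u) - b[i]) / (2 * a[i] + rho) for i in range(N)]
    u += (sum(p) - demand) / N                # dual update after an all-reduce

# compare with the centralized KKT solution: 2*a_i*p_i + b_i = lambda for all i
lam = (demand + sum(b[i] / (2 * a[i]) for i in range(N))) / sum(1 / (2 * a[i]) for i in range(N))
exact = [(lam - b[i]) / (2 * a[i]) for i in range(N)]
print([round(x, 2) for x in p], [round(x, 2) for x in exact])
```

In a multi-region setting, each list element would live on a different area's controller, and the only shared quantities per iteration are the aggregate residual and the dual variable, which is what makes the method fully distributed.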