• Title/Summary/Keyword: message-passing programs

Efficient Executions of MPI Parallel Programs in Memory-Centric Computer Architecture (메모리 중심 컴퓨터 구조에서 MPI 병렬 프로그램의 효율적인 수행)

  • Lee, Je-Man;Lee, Seung-Chul;Shin, Dong-Ha
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.257-258 / 2019
  • This paper proposes a technique for running MPI parallel programs that were developed for a "processor-centric computer architecture" more efficiently on a "memory-centric computer architecture" without modifying them. The proposed technique exploits the fast, large-capacity shared memory of the memory-centric architecture: the slow data transfers that the standard MPI library performs over network communication are replaced with fast data transfers through shared memory. The technique was implemented as the MC-MPI-LIB library and the MC-MPI-SIM simulator in a distributed-system environment built on Docker virtualization, and test runs with a number of MPI parallel programs showed that it is effective. (An illustrative MPI sketch follows this entry.)

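A minimal sketch of the point-to-point pattern the paper targets: standard MPI code like the following normally moves its data over the network, and MC-MPI-LIB reroutes such transfers through the machine's shared memory without source changes. The sketch uses mpi4py for brevity; the payload size and variable names are illustrative assumptions, not taken from the paper.

```python
# Illustrative only: a standard MPI point-to-point exchange whose network
# transfer the paper proposes to replace with a shared-memory copy.
# Run with an MPI launcher, e.g. `mpiexec -n 2 python demo.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

N = 1_000_000                                            # illustrative payload size
if rank == 0:
    payload = np.ones(N, dtype=np.float64)
    comm.Send([payload, MPI.DOUBLE], dest=1, tag=0)      # ordinarily a network send
elif rank == 1:
    payload = np.empty(N, dtype=np.float64)
    comm.Recv([payload, MPI.DOUBLE], source=0, tag=0)    # would become a shared-memory read
    print("rank 1 received", payload[:3], "...")
```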

A Labeling Scheme for Efficient On-the-fly Detection of Race Conditions in Parallel Programs (병렬프로그램의 경합조건을 수행 중에 효율적으로 탐지하기 위한 레이블링 기법)

  • Park, So-Hee;Woo, Jong-Jung;Bae, Jong-Min;Jun, Yong-Kee
    • The KIPS Transactions:PartA / v.9A no.4 / pp.525-534 / 2002
  • Race conditions (races, for short) must be detected when debugging parallel programs because they result in unintended non-deterministic executions. To detect races in an execution of a program, previous techniques either use a centralized data structure, which may become a serious bottleneck when generating concurrency information, or show inefficient time complexity that depends on the degree of nested parallelism when comparing any two pieces of that information. In this paper we propose a new labeling scheme that is scalable in generating concurrency information, avoiding the bottleneck by using a private data structure, and that reduces the time complexity of checking concurrency to a constant. This scalability and time efficiency make on-the-fly race detection efficient not only for programs using either shared memory or message passing, but also for programs using a mixed model of the two. (A generic, illustrative sketch of label-based concurrency checking follows this entry.)
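The paper's own labeling scheme is not reproduced in the abstract, so the sketch below illustrates only the general idea (attach a label to every event and decide concurrency by comparing labels) using ordinary vector clocks. Note that this generic approach compares labels in time linear in the number of threads, whereas the proposed scheme achieves constant-time comparison.

```python
# Generic illustration of label-based concurrency checking with vector clocks.
# This is NOT the paper's scheme: vector-clock comparison is linear in the
# number of threads, while the proposed labels compare in constant time.

def happened_before(a, b):
    """True if the event labelled `a` happened before the event labelled `b`."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    """Two events are concurrent (a potential race on a shared access)
    iff neither happened before the other."""
    return not happened_before(a, b) and not happened_before(b, a)

# Two threads; a label is (events seen from thread 0, events seen from thread 1).
write_by_t0 = (2, 0)   # thread 0's 2nd event, unaware of thread 1
write_by_t1 = (0, 3)   # thread 1's 3rd event, unaware of thread 0
print(concurrent(write_by_t0, write_by_t1))   # True -> report a race
```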

Design and Implementation of a Parallel Computer "KAPAC" (병렬 컴퓨터 “KAPAC”의 설계 및 구현)

  • 성동수;강휘삼;최승욱;박규호
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.4 / pp.1-11 / 1992
  • A parallel computer, "KAPAC" (KAIST Parallel Computer), based on the Transputer is designed and implemented. Its purpose is to support real-time processing and high-performance computing by parallelizing complex, heavy computation loads. KAPAC uses a UNIX machine as its host computer and is implemented on a VME bus as a back-end machine. KAPAC is a message-passing, loosely coupled multiprocessor with thirty-two processing elements, and the network topology between processing elements can be easily configured with crossbar switches using a control program. Various topologies are introduced, and application programs are executed on KAPAC with different interconnection topologies to demonstrate its reconfigurability.


A Study on Distributed System Construction and Numerical Calculation Using Raspberry Pi

  • Ko, Young-ho;Heo, Gyu-Seong;Lee, Sang-Hyun
    • International journal of advanced smart convergence / v.8 no.4 / pp.194-199 / 2019
  • As system performance increases, data is increasingly processed in parallel rather than one item at a time. Today's CPUs are built around multiple cores, and data-processing methods are accordingly being developed to exploit parallelism. In recent years desktop CPUs have gained more cores, data volumes have grown exponentially, and the development of artificial intelligence has further increased the need for data processing. Neural networks are built from matrix operations, which makes them well suited to parallel processing. Against this backdrop, this paper aims to speed up processing by building a Raspberry Pi cluster and implementing a parallel processing system on it. The Raspberry Pi is a credit-card-sized single-board computer made by the Raspberry Pi Foundation in the UK, developed for education in schools and developing countries; it is inexpensive, and its large user base makes needed information easy to find. A distributed processing system must be supported by software that connects multiple computers and operates them as a single system. The Raspberry Pi boards are connected to a switching hub, communicate with each other over the internal network, and implement parallel processing using the Message Passing Interface (MPI). Parallel programs can be written in Python, and C or Fortran can also be used. The system was tested by multiplying a two-dimensional array of size 10000 by 0.1 in parallel. The tests showed a reduction in computation time, with parallelism scalable up to the maximum number of cores in the system. The system in this paper was built from Linux-based single-board computers, and testing on systems in different environments is considered necessary. (An illustrative mpi4py sketch of this test follows the entry.)
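A minimal mpi4py version of the test described above: each rank scales its block of rows by 0.1 and the results are gathered back on rank 0. The 10000 × 1000 shape and the scatter/gather decomposition are illustrative assumptions; the paper does not give its exact program.

```python
# Illustrative mpi4py version of the abstract's test: multiply a large 2-D
# array by 0.1, with row blocks split across the cluster's MPI ranks.
# Run on the Pi cluster with e.g. `mpiexec -n 4 -hostfile hosts python scale.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N, M = 10_000, 1_000                           # assumed shape; the abstract only says "10000 size"
rows = N // size                               # assumes N divides evenly, for brevity

matrix = np.ones((N, M)) if rank == 0 else None
local = np.empty((rows, M))
comm.Scatter(matrix, local, root=0)            # hand each rank a block of rows

local *= 0.1                                   # the actual (embarrassingly parallel) work

result = np.empty((N, M)) if rank == 0 else None
comm.Gather(local, result, root=0)             # collect the scaled blocks
if rank == 0:
    print("done, first element:", result[0, 0])   # 0.1
```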

A Dynamic Work Manager for Heterogeneous Cluster Systems (DWM: 이기종 클러스터 시스템의 동적 자원 관리자)

  • Park, Jong-Hyun;Kim, Jun-Seong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.6 / pp.56-62 / 2009
  • Inexpensive high-performance computer systems, combined with high-speed networks and machine-independent communication libraries, have made cluster computing a viable option for parallel applications. In a heterogeneous cluster environment, efficient resource management is critically important, since the computing power of each individual computer system is a significant performance factor when executing applications in parallel. This paper presents a dynamic task manager called DWM (dynamic work manager), which lets a heterogeneous cluster system fully utilize the differing computing power of its individual machines. We measure the performance of DWM in a heterogeneous cluster environment with several kernel-level benchmark programs and quantitatively assess their programming complexity. The experiments show that DWM provides competitive performance with a notable reduction in programming effort. (A generic master/worker sketch of dynamic work distribution follows this entry.)
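The abstract does not spell out DWM's interface, but the underlying idea, letting faster nodes pull more work instead of splitting it statically, is the classic master/worker pattern. A minimal, hypothetical mpi4py sketch:

```python
# Hypothetical master/worker sketch of dynamic work distribution: faster
# nodes finish sooner, ask again sooner, and therefore receive more tasks.
# Illustrates the general idea only, not DWM's actual interface.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TAG_WORK, TAG_STOP = 1, 2

if rank == 0:                                      # master: hand out tasks on demand
    tasks = list(range(100))                       # illustrative task list
    status = MPI.Status()
    active_workers = size - 1
    while active_workers > 0:
        comm.recv(source=MPI.ANY_SOURCE, status=status)   # a worker asks for work
        worker = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TAG_WORK)
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)
            active_workers -= 1
else:                                              # worker: pull tasks until told to stop
    status = MPI.Status()
    while True:
        comm.send(rank, dest=0)                    # request a task
        task = comm.recv(source=0, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        _ = task * task                            # placeholder for the real computation
```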

Design and Implementation of Distributed Active Object System(DAOS) for Manufacturing Control Applications (공정 제어 응용을 위한 분산 능동 객체 시스템(DAOS)의 설계 및 구현)

  • Eum, Doo-Hun;Yoo, Eun-Ja
    • Journal of KIISE:Computing Practices and Letters / v.7 no.2 / pp.141-150 / 2001
  • Manufacturing control applications consist of concurrent active components such as robots, AGVs (Automatic Guided Vehicles), and conveyors, and running a manufacturing control program amounts to interactions among those components. We can enhance the productivity and extendability of manufacturing control applications by using object-oriented technology that models those components as reusable objects. However, the objects of current object-oriented technology, which encapsulate state and behavior information, are passive in the sense that they respond only when messages are sent to them. In this paper, we introduce the Distributed Active Object System (DAOS) approach, which supports active objects. Since active objects encapsulate control information in addition to state and behavior information in a CORBA/Java-based distributed environment, they can represent manufacturing control components better than the objects of ordinary object-oriented technology. The control information gives an object the ability to monitor its own status as well as the status of other objects connected through interface variables, and active objects can initiate a behavior when those statuses change. Therefore, we can structurally assemble self-initiating active objects using interface variables to construct a system, without describing how to control distributed objects through message passing. Because the DAOS approach supports object composability, it improves the productivity and extendability of distributed manufacturing control applications beyond what the ordinary object-oriented approach offers. The DAOS approach also provides better component reusability through active objects that encapsulate control information. (A hypothetical sketch of the active-object idea follows this entry.)

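The original DAOS implementation is CORBA/Java-based and is not reproduced here; the sketch below is a hypothetical, simplified rendering of the abstract's central idea: an object watches an interface variable bound to another object's status and initiates its own behavior when that status changes, with no explicit control messages. The class and variable names are invented for illustration and are not DAOS APIs.

```python
# Hypothetical sketch of the active-object idea from the abstract.
import threading
import time

class InterfaceVariable:
    """Shared status slot that notifies registered observers on every change."""
    def __init__(self, value=None):
        self._value, self._observers = value, []
    def bind(self, callback):
        self._observers.append(callback)
    def set(self, value):
        self._value = value
        for callback in self._observers:
            callback(value)

class ConveyorActiveObject:
    """Starts its own behavior when the robot it watches reports 'part_placed'."""
    def __init__(self, robot_status: InterfaceVariable):
        robot_status.bind(self.on_robot_status)
    def on_robot_status(self, status):
        if status == "part_placed":
            threading.Thread(target=self.run).start()   # self-initiated, no control message
    def run(self):
        print("conveyor: moving part downstream")

robot_status = InterfaceVariable()
ConveyorActiveObject(robot_status)
robot_status.set("part_placed")   # the status change alone triggers the conveyor
time.sleep(0.1)                   # give the worker thread time to print
```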

Development of Network based Gravity and Magnetic data Processing System (네트워크에 기반한 중력.자력 자료의 처리기술 개발 연구)

  • Kwon, Byung-Doo;Lee, Heui-Soon;Oh, Seok-Hoon;Chung, Ho-Joon;Rim, Hyoung-Rae
    • Journal of the Korean Geophysical Society / v.3 no.4 / pp.235-244 / 2000
  • We study the basic ideas of a network-based gravity/magnetic data processing server/client system that provides data processing, forward modeling, inversion, and database-backed data management. Java technology, socket communication, and JDBC (Java Database Connectivity) are used to produce an effective and practical client application. The server computers are linked over the network to perform MPI-parallelized computing, which provides a useful facility for geophysical processing and modeling tasks that usually demand massive computing power and time. Since the system can be accessed by many users, it can deliver consistent and reliable results through verified processing programs. The system also makes it possible to obtain results and outputs over the Internet whenever users' local machines are connected to the network, which can help users who want to avoid system administration work and to process data during field work. (A hypothetical sketch of the client-side request flow follows this entry.)

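The paper's system is Java-based and its wire protocol is not given in the abstract, so the sketch below is a purely hypothetical illustration of the described flow: a thin client submits a processing request over a socket to the server, where the MPI-parallelized modeling would run. The host name, port, JSON format, and job fields are all invented.

```python
# Hypothetical client-side sketch of the described flow: send a job request
# over a socket and read back the server's reply. All names, the port, and
# the JSON format are illustrative assumptions, not the paper's protocol.
import json
import socket

def request_processing(host: str, port: int, job: dict) -> dict:
    """Send a JSON job description and return the server's JSON reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(job).encode() + b"\n")
        reply = sock.makefile().readline()
    return json.loads(reply)

# Example usage (assumes a compatible server is listening):
# result = request_processing("gravity-server.example", 9000,
#                             {"task": "inversion", "dataset": "survey_01"})
# print(result["status"])
```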