Title/Summary/Keyword: Shared-Memory Multiprocessor

A study on the parallel processing of the avionic system computer using multi RISC processors (다중 RISC 프로세서를 이용한 항공전자시스템컴퓨터 병렬처리기법 연구)

  • Lee, Jae-Uk; Lee, Sung-Soo; Kim, Young-Taek; Yang, Seung-Yul; Kim, Bong-Gyu; Hwang, Sang-Hyun; Park, Deok-Bae
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.30 no.7, pp.144-149, 2002
  • This paper presents a real-time multiprocessor parallel processing technique used to develop an avionic system computer (ASC) that integrates avionics control, navigation and fire control, and cursive and raster graphic symbol generation into one line-replaceable unit. The proposed method achieves optimal performance by adopting a logically asymmetric structure among four 32-bit RISC processors based on master-slave multiprocessing, a tightly coupled interaction level with a time-shared common bus and global memory, and an efficient bus arbitration algorithm. The ASC has been verified through a series of flight tests, and the prototype has also undergone rigorous electrical, environmental, and electromagnetic interference testing.
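The combination this abstract describes, master-slave multiprocessing over a shared global memory with arbitrated access, can be illustrated with a minimal POSIX threads sketch. The mutex below stands in for the bus arbiter, and all names, sizes, and the queue structure are illustrative assumptions, not the ASC's actual design.

```c
/* Minimal master-slave sketch over shared memory using POSIX threads.
 * The mutex approximates bus arbitration: only one "processor" at a
 * time accesses the shared (global-memory) work queue. Illustrative
 * only; not the ASC's actual architecture. */
#include <pthread.h>
#include <stdio.h>

#define NUM_SLAVES 3                       /* hypothetical slave count */
#define NUM_TASKS  12

static int task_queue[NUM_TASKS];          /* "global memory" work queue */
static int next_task = 0;                  /* index of next unclaimed task */
static pthread_mutex_t bus = PTHREAD_MUTEX_INITIALIZER;  /* arbitration */

static void *slave(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&bus);          /* win arbitration for the bus */
        int t = (next_task < NUM_TASKS) ? task_queue[next_task++] : -1;
        pthread_mutex_unlock(&bus);
        if (t < 0)
            break;                         /* queue drained */
        printf("slave %ld processes task %d\n", id, t);
    }
    return NULL;
}

int main(void)
{
    pthread_t slaves[NUM_SLAVES];

    for (int i = 0; i < NUM_TASKS; i++)    /* master fills the queue */
        task_queue[i] = i;
    for (long i = 0; i < NUM_SLAVES; i++)
        pthread_create(&slaves[i], NULL, slave, (void *)i);
    for (int i = 0; i < NUM_SLAVES; i++)
        pthread_join(slaves[i], NULL);
    return 0;
}
```

In the real ASC the arbitration is presumably done in hardware on the time-shared bus; a software lock captures only its mutual-exclusion property, not its timing behavior.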

Adaptive Execution Techniques for Parallel Programs (병렬 프로그램의 적응형 실행 기법)

Lee, Jaejin (이재진)
    • Journal of KIISE: Computer Systems and Theory, v.31 no.8, pp.421-431, 2004
  • This paper presents adaptive execution techniques that determine whether parallelized loops are executed in parallel or sequentially in order to maximize performance. The adaptation and performance-estimation algorithms are implemented in a compiler preprocessor, which inserts code that decides, at compile time or at run time, how each parallelized loop is executed. Running a set of standard numerical applications written in Fortran 77 with these techniques on a distributed shared-memory multiprocessor (an SGI Origin 2000), the resulting programs are, on average, 26%, 20%, 16%, and 10% faster than the original parallel programs on 32, 16, 8, and 4 processors, respectively. One application runs more than twice as fast as its original parallel version on 32 processors.
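The run-time half of this adaptive scheme can be illustrated with OpenMP's if() clause, a standard way to guard a parallel loop with a run-time test. The sketch below is in C rather than the paper's Fortran 77, and the threshold and the trip-count cost proxy are illustrative assumptions, not the paper's actual performance estimator.

```c
/* Sketch of run-time adaptive execution for a parallelized loop:
 * the guard runs the loop in parallel only when the estimated work
 * exceeds a threshold; small loops stay sequential to avoid parallel
 * overhead. Threshold and cost model are hypothetical. */
#include <omp.h>
#include <stdio.h>

#define PAR_THRESHOLD 10000   /* assumed break-even point (hypothetical) */

void daxpy(int n, double a, const double *x, double *y)
{
    /* The if() clause plays the role of the preprocessor-inserted
     * test: parallelize only when n (a proxy for loop cost) is
     * large enough to amortize thread startup. */
    #pragma omp parallel for if(n >= PAR_THRESHOLD)
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}

int main(void)
{
    enum { N = 20000 };
    static double x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    daxpy(N, 0.5, x, y);      /* runs in parallel: N >= threshold   */
    daxpy(100, 0.5, x, y);    /* runs sequentially: below threshold */
    printf("y[0] = %f\n", y[0]);
    return 0;
}
```

Note that this only captures the run-time case; per the abstract, the paper's preprocessor can also fix the execution mode at compile time when the loop cost is statically known.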