• Title/Summary/Keyword: ILP compiler

Search results: 7

A Predicate-Sensitive Scheduling Algorithm in Instruction-Level Parallelism Processors (ILP 프로세서를 위한 조건실행 지원 스케쥴링 알고리즘)

  • Yoo, Byung-Kang;Lee, Sang-Jeong
    • The Transactions of the Korea Information Processing Society, v.5 no.1, pp.202-214, 1998
  • Exploitation of instruction-level parallelism (ILP) is an effective mechanism for improving the performance of modern superscalar and VLIW processors. Various software techniques can be applied to increase ILP. Among these techniques, predicated execution increases the degree of ILP by removing branch instructions, allowing instructions from different basic blocks to be merged into a single basic block. In this paper, a global predicate-sensitive scheduling algorithm is proposed to improve performance for ILP processors that support predicated execution. In order to examine the performance of the proposed algorithm, a C compiler and a simulator are developed. By simulating various benchmark programs with the compiler and simulator, the performance of the algorithm is measured and its effectiveness is verified. Measurements with 1-, 2-, and 4-issue execution confirm an average performance improvement of 20% or more. (A sketch of predicate-sensitive dependence checking follows this entry.)

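The core mechanism described above, treating instructions whose guarding predicates can never both be true as independent after if-conversion, can be made concrete with a small sketch. The instruction tuples, predicate names, and disjointness table below are invented for illustration and this is not the paper's algorithm or intermediate representation; it is only a minimal Python model of predicate-sensitive dependence checking.

```python
# Minimal sketch of predicate-sensitive dependence checking after if-conversion.
# An instruction is (name, defs, uses, predicate); predicate None means "always executes".
# DISJOINT lists guard pairs that can never both be true (e.g. p1 = cond, p2 = !cond).
DISJOINT = {("p1", "p2"), ("p2", "p1")}

def disjoint(a, b):
    return (a, b) in DISJOINT

def depends(later, earlier):
    """RAW/WAW/WAR check, relaxed when the two guards are mutually exclusive."""
    _, defs1, uses1, pr1 = later
    _, defs2, uses2, pr2 = earlier
    if pr1 and pr2 and disjoint(pr1, pr2):
        return False                      # at most one of the two will commit
    reads1 = set(uses1) | ({pr1} if pr1 else set())
    reads2 = set(uses2) | ({pr2} if pr2 else set())
    return bool(set(defs2) & reads1) or bool(set(defs1) & set(defs2)) \
        or bool(set(defs1) & reads2)

# One if-converted block: both arms of an if/else, guarded by p1 / p2.
block = [
    ("cmp", {"p1", "p2"}, ["a", "b"], None),  # p1 = (a < b); p2 = !p1
    ("add", {"x"}, ["a", "c"], "p1"),         # then-arm
    ("sub", {"x"}, ["a", "c"], "p2"),         # else-arm: false WAW with add on x
    ("mul", {"y"}, ["x", "d"], None),         # consumes whichever x was written
]

def schedule(block):
    """Place each instruction in the earliest cycle its dependences allow."""
    cycle = []
    for i, ins in enumerate(block):
        earliest = 0
        for j in range(i):
            if depends(ins, block[j]):
                earliest = max(earliest, cycle[j] + 1)
        cycle.append(earliest)
    return cycle

print(schedule(block))   # [0, 1, 1, 2]: add and sub share a cycle despite both writing x
```

A predicate-blind scheduler would serialize the add and sub on the false output dependence over x; the disjointness check is what recovers that parallelism.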

NHS: A Novel Hybrid Scheduling for ILP

  • You, Song-Pei; Masahiro Sowa
    • Proceedings of the IEEK Conference, 2000.07a, pp.310-313, 2000
  • This paper presents a new scheduling method for ILP processing called NHS (Novel Hybrid Scheduling). It is concerned not only with exploiting as much ILP as possible, like other state-of-the-art scheduling schemes, but also with choosing the most important instructions among the many ready-to-execute instructions issued to the processor, in order to reduce execution time under limited hardware resources. At the heart of NHS is a concept called CCP (Complex Critical Path), an extension of CP (Critical Path). Using CCP, the compiler can obtain global information about the whole program to extract ILP, and can also collect data dependence and control flow information. The paper also presents the simulation results, to date, of our attempts to study the NHS scheduling method; the results indicate good potential for this approach. (A critical-path-priority sketch follows this entry.)

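The abstract does not define CCP beyond calling it an extension of the critical path, but the baseline it extends, ranking ready instructions by their remaining critical-path length and issuing only the most urgent ones when issue slots are scarce, can be sketched as follows. The dependence graph, latencies, and issue width here are invented for illustration.

```python
from functools import lru_cache

# Hypothetical dependence DAG: instruction -> instructions that depend on its result.
SUCCS = {
    "load_a": ["mul1"], "load_b": ["mul1"], "mul1": ["add1"],
    "load_c": ["add1"], "add1": ["store"], "store": [],
    "inc_i": ["cmp_i"], "cmp_i": [],
}
LATENCY = {"load_a": 3, "load_b": 3, "load_c": 3,
           "mul1": 2, "add1": 1, "store": 1, "inc_i": 1, "cmp_i": 1}

@lru_cache(maxsize=None)
def cp_length(node):
    """Longest latency-weighted path from this node to any leaf: its CP priority."""
    return LATENCY[node] + max((cp_length(s) for s in SUCCS[node]), default=0)

def pick(ready, issue_width):
    """Under a limited issue width, choose the ready instructions on the longest paths."""
    return sorted(ready, key=cp_length, reverse=True)[:issue_width]

ready = ["load_a", "load_b", "load_c", "inc_i"]
print(pick(ready, issue_width=2))   # the loads feeding the long mul/add/store chain win
```

CCP, as described, augments this priority with additional data-dependence and control-flow information gathered globally; the sketch only shows the plain critical-path starting point.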

OpenGL ES 2.0 based Shader Compilation Method for the Instruction-Level Parallelism (OpenGL ES 2.0 기반 셰이더 명령어 병렬 처리를 위한 컴파일 기법)

  • Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society, v.8 no.2, pp.69-76, 2008
  • In this paper, we present the architecture of a graphics processor and its instruction format for mobile devices. In addition, we introduce the shader data structure for on/off-line compilation based on OpenGL ES 2.0 and a new optimization method based on ILP (Instruction-Level Parallelism). This paper shows that, on a processor with the same core clock, the shader instructions produced by the proposed compilation structure and method are approximately 1.5 to 2 times faster than code based on single (scalar) instructions. (A sketch of packing independent scalar shader operations follows this entry.)

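The ILP optimization the abstract refers to amounts to letting the compiler emit shader instructions that execute several independent scalar operations at once instead of one operation per instruction. The packing pass below is a hypothetical illustration, not the paper's compiler: the slot count, the (dest, op, sources) encoding, and the vec4 multiply-add example are all assumptions.

```python
# Hypothetical packer: group consecutive independent scalar shader ops
# (dest, op, sources) into packed instructions of up to SLOTS operations.
SLOTS = 4

def independent(op, packet):
    """An op may join a packet if it neither reads nor overwrites packet results."""
    dests = {d for d, _, _ in packet}
    return not (set(op[2]) & dests) and op[0] not in dests

def pack(ops):
    packets, current = [], []
    for op in ops:
        if len(current) < SLOTS and independent(op, current):
            current.append(op)
        else:
            packets.append(current)
            current = [op]
    if current:
        packets.append(current)
    return packets

# Scalar expansion of a vec4 multiply-add, r = a * b + c (components are independent).
scalar_ops = [(f"r{i}", "mul", [f"a{i}", f"b{i}"]) for i in range(4)] + \
             [(f"o{i}", "add", [f"r{i}", f"c{i}"]) for i in range(4)]
for packet in pack(scalar_ops):
    print([name for name, _, _ in packet])
# -> two packed instructions of four ops instead of eight single-issue instructions
```

Roughly halving the instruction count in this way is what makes a 1.5 to 2 times speedup at the same core clock plausible for shader code dominated by component-wise vector arithmetic.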

Exploiting Thread-Level Parallelism in Lockstep Execution by Partially Duplicating a Single Pipeline

  • Oh, Jaeg-Eun;Hwang, Seok-Joong;Nguyen, Huong Giang;Kim, A-Reum;Kim, Seon-Wook;Kim, Chul-Woo;Kim, Jong-Kook
    • ETRI Journal, v.30 no.4, pp.576-586, 2008
  • In most parallel loops of embedded applications, every iteration executes the exact same sequence of instructions while manipulating different data. This fact motivates a new compiler-hardware orchestrated execution framework in which all parallel threads share one fetch unit and one decode unit but have their own execution, memory, and write-back units. This resource sharing enables parallel threads to execute in lockstep with minimal hardware extension and compiler support. Our proposed architecture, called the multithreaded lockstep execution processor (MLEP), is a compromise between the single-instruction multiple-data (SIMD) and symmetric multithreading/chip multiprocessor (SMT/CMP) solutions. The proposed approach is more favorable than typical SIMD execution in terms of degree of parallelism, range of applicability, and code generation, and can save more power and chip area than the SMT/CMP approach without significant performance degradation. For architecture verification, we extend the commercial 32-bit embedded core AE32000C and synthesize it on a Xilinx FPGA. Compared to the original architecture, our approach is 13.5% faster with a 2-way MLEP and 33.7% faster with a 4-way MLEP on EEMBC benchmarks that are automatically parallelized by the Intel compiler. (A toy lockstep-execution model follows this entry.)

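The shared-fetch idea can be visualized with a toy model: one instruction stream is fetched and decoded once, and every thread context executes it on its own registers and data, as in a parallel loop where each thread handles one iteration. This is only a behavioral illustration; the program, register names, and thread count are invented and bear no relation to the AE32000C ISA or the MLEP hardware.

```python
# Toy lockstep model: one fetched/decoded instruction stream, N thread contexts
# that each execute it on their own registers and data (their own "lane").
N_THREADS = 4
program = [                       # body of a parallel loop: out[i] = 2 * in[i] + 1
    ("load",  "r1", "in"),
    ("muli",  "r1", 2),
    ("addi",  "r1", 1),
    ("store", "r1", "out"),
]
inputs  = [10, 20, 30, 40]
outputs = [None] * N_THREADS
regs    = [{} for _ in range(N_THREADS)]

for instr in program:             # single fetch/decode shared by all threads
    op = instr[0]
    for t in range(N_THREADS):    # per-thread execute/memory/write-back
        r = regs[t]
        if op == "load":
            r[instr[1]] = inputs[t]
        elif op == "muli":
            r[instr[1]] *= instr[2]
        elif op == "addi":
            r[instr[1]] += instr[2]
        elif op == "store":
            outputs[t] = r[instr[1]]

print(outputs)                    # [21, 41, 61, 81]
```

The inner loop is the part MLEP duplicates in hardware (execution, memory, write-back units), while the outer loop's fetch and decode work is done exactly once per instruction regardless of the number of threads.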

Compiler Processor Trade-offs for Dynamic Scheduling of VLIW Instructions (VLIW명령어의 동적 스케줄링을 위한 컴파일러와 프로세서간 상호보완)

  • Sunghyun Jee
    • Journal of KIISE: Computer Systems and Theory, v.31 no.5_6, pp.279-287, 2004
  • This paper describes a processor architecture named Dynamically Instruction Scheduled VLIW (DISVLIW). The DISVLIW processor architecture is designed to dynamically schedule VLIW instructions using dependency information. The DISVLIW instruction format is augmented to allow dependency bit vectors to be placed in the same VLIW word. The DISVLIW processor dynamically schedules each instruction within a long instruction using functional unit and dynamic scheduler pairs. Features such as explicit parallelism, balanced scheduling effort, and dynamic scheduling of VLIW instructions can be used to provide a sound infrastructure for supercomputing. We simulate the DISVLIW processor architecture and show that the DISVLIW processor performs significantly better than the VLIW processor for a wide range of cache sizes and across numerical benchmark applications. (A dependency-bit-vector sketch follows this entry.)
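
The dependency-bit-vector idea can be illustrated with a toy model in which each operation in a long instruction carries one bit per slot marking which operations of the same word must finish first; a per-slot scheduler then starts an operation only once those producers have completed. The Slot encoding, bit layout, and latencies below are invented and are not the actual DISVLIW instruction format.

```python
# Toy model of a long instruction whose slots carry dependency bit vectors.
# dep_bits bit i set => this slot must wait for slot i of the same word.
from dataclasses import dataclass

@dataclass
class Slot:
    name: str
    latency: int
    dep_bits: int          # bitmask over slot indices of this VLIW word

word = [
    Slot("load", 3, 0b000),   # slot 0: no intra-word dependences
    Slot("add",  1, 0b001),   # slot 1: waits for slot 0 (the load)
    Slot("mul",  2, 0b000),   # slot 2: independent, can start immediately
]

def simulate(word):
    """Each functional-unit/scheduler pair starts its slot as soon as its producers finish."""
    start, finish = [None] * len(word), [None] * len(word)
    for i, slot in enumerate(word):
        deps = [j for j in range(len(word)) if slot.dep_bits >> j & 1]
        start[i] = max((finish[j] for j in deps), default=0)
        finish[i] = start[i] + slot.latency
    return list(zip((s.name for s in word), start, finish))

print(simulate(word))
# [('load', 0, 3), ('add', 3, 4), ('mul', 0, 2)]
# A lockstep VLIW would stall the whole word on the load; here only 'add' waits.
```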

Performance Improvement Through Aggressive Instruction Packing (적극적인 명령어 압축을 통한 성능향상)

  • Ji, Seung-Hyeon;Kim, Seok-Il
    • The KIPS Transactions: Part A, v.9A no.2, pp.231-240, 2002
  • This paper proposes balancing scheduling effort more evenly between the compiler and the processor by introducing independently scheduled VLIW instructions. The Aggressively Packed VLIW (APVLIW) processor is aimed specifically at independently scheduling Very Long Instruction Word (VLIW) instructions annotated with dependency information. The APVLIW processor independently schedules each instruction within a long instruction using functional unit and dynamic scheduler pairs. Every dynamic scheduler dynamically checks for data dependencies and resource collisions while scheduling each instruction. This scheduling is especially effective in applications containing loops. We simulate the architecture and show that the APVLIW processor performs significantly better than the VLIW processor for a wide range of cache sizes and across various numerical benchmark applications. (An aggressive-packing sketch follows this entry.)
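
What "aggressive packing" buys can be seen by contrasting a conservative packer, which may place only mutually independent operations in one word and so pads with NOPs, against a packer that fills every slot in program order and leaves dependence resolution to the per-slot hardware schedulers. The operation list, issue width, and both packer functions below are invented illustrations, not the paper's compiler.

```python
# Conservative VLIW packing (independent ops only, NOP padding) versus
# aggressive packing (fill every slot; hardware resolves dependences at run time).
WIDTH = 4

ops = [  # (dest, sources) for a small dependent chain plus independent work
    ("a", []), ("b", ["a"]), ("c", ["b"]), ("d", []),
    ("e", ["d"]), ("f", []), ("g", ["c", "e"]), ("h", []),
]

def conservative_pack(ops):
    """Start a new word whenever an op depends on something already in the word."""
    words, word = [], []
    for dest, srcs in ops:
        if len(word) == WIDTH or any(s in {d for d, _ in word} for s in srcs):
            words.append(word)
            word = []
        word.append((dest, srcs))
    if word:
        words.append(word)
    return words

def aggressive_pack(ops):
    """Fill every slot regardless of dependences; hardware schedules at run time."""
    return [ops[i:i + WIDTH] for i in range(0, len(ops), WIDTH)]

for name, packer in [("conservative", conservative_pack), ("aggressive", aggressive_pack)]:
    words = packer(ops)
    nops = sum(WIDTH - len(w) for w in words)
    print(f"{name:12s}: {len(words)} words, {nops} NOP slots")
# conservative: 5 words, 12 NOP slots / aggressive: 2 words, 0 NOP slots
```

The shorter, denser code stream is why this style helps most in loops, where the same tightly dependent body is fetched over and over.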

A Hybrid Value Predictor using Speculative Update of the Predictor Table and Static Classification for the Pattern of Executed Instructions in Superscalar Processors (슈퍼스칼라 프로세서에서 예상 테이블의 모험적 갱신과 명령어 실행 유형의 정적 분류를 이용한 혼합형 결과값 예측기)

  • Park, Hong-Jun;Jo, Young-Il
    • Journal of KIISE: Computing Practices and Letters, v.8 no.1, pp.107-115, 2002
  • We propose a new hybrid value predictor which achieves high performance by combining several predictors. Because the proposed hybrid value predictor can update the prediction table speculatively, it effectively reduces the number of mispredicted instructions caused by stale data. The proposed predictor also enhances prediction accuracy and decreases the hardware cost of the predictor, because it allocates each instruction to the best-suited component predictor during the instruction fetch stage, using static classification information obtained from profile-based compiler support. For a 16-issue superscalar processor, simulation results based on the SimpleScalar/PISA tool set show an average prediction rate of 73% using speculative update alone, and 88% when static classification is added, for the SPECint95 benchmark programs. (A hybrid-predictor sketch with speculative update follows this entry.)
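
The abstract does not spell out which component predictors are combined or how the table is organized, so the sketch below uses a common pairing from the value-prediction literature, a last-value component and a stride component, only to make the terms "speculative update" and "static classification" concrete. The class names and the HybridValuePredictor interface are assumptions, not the paper's design.

```python
# Minimal hybrid value predictor: a stride component and a last-value component,
# selected per static instruction class (standing in for the profile-based
# classification done at compile time). The table's speculative field is updated
# with the predicted value so later in-flight predictions do not read stale data;
# the committed state is repaired at commit time if the speculation was wrong.
class HybridValuePredictor:
    def __init__(self):
        self.table = {}   # pc -> {"spec_last": ..., "commit_last": ..., "stride": ...}

    def predict(self, pc, klass):
        e = self.table.setdefault(pc, {"spec_last": 0, "commit_last": 0, "stride": 0})
        pred = e["spec_last"] + e["stride"] if klass == "stride" else e["spec_last"]
        e["spec_last"] = pred          # speculative update for in-flight consumers
        return pred

    def commit(self, pc, actual):
        e = self.table[pc]
        e["stride"] = actual - e["commit_last"]
        e["commit_last"] = actual
        if e["spec_last"] != actual:   # misprediction: repair speculative state
            e["spec_last"] = actual

pred = HybridValuePredictor()
for actual in [4, 8, 12, 16, 20]:      # a stride-4 value sequence, e.g. an address
    p = pred.predict(pc=0x100, klass="stride")
    print(p == actual, p, actual)
    pred.commit(pc=0x100, actual=actual)
# after one training miss, every subsequent value is predicted correctly
```

Steering each static instruction to the component that suits its value pattern at fetch time is what lets the combined predictor reach higher accuracy without paying for a large dynamic chooser.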