• Title/Summary/Keyword: multicore

Search Results: 143

Study on LLVM application in Parallel Computing System (병렬 컴퓨팅 시스템에서 LLVM 응용 연구)

  • Cho, Jungseok;Cho, Doosan;Kim, Yongyeon
    • The Journal of the Convergence on Culture Technology / v.5 no.1 / pp.395-399 / 2019
  • In order to support various parallel computing systems, LLVM IR needs to be extended to support vector/matrix operations more efficiently, and a new algorithm is needed for translating LLVM IR into machine code. As the IR example shows, RISC instruction generation is straightforward because LLVM IR is essentially composed of RISC-like instructions, but vector instructions are not supported. New IR structures, instruction generation algorithms, and related extensions are therefore needed to support vector/matrix operations more robustly. To do this, each LLVM IR instruction must be mapped to the appropriate instruction of the target (vector/matrix) architecture, i.e., an instruction selection algorithm. This requires understanding the semantics of each LLVM IR instruction, comparing it with the semantics and syntax of each target-architecture instruction, and selecting the instruction that matches the pattern so that the mapping is efficient.
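
The instruction-selection step described in this abstract amounts to pattern matching: each IR operation is compared against the patterns of the target instruction set and the matching target instruction is emitted. The following is a minimal sketch of that idea in plain Python, not the LLVM C++ API; the IR opcodes, pattern tables, and target mnemonics (ADD_R, VADD_V, etc.) are hypothetical placeholders used only to illustrate how scalar RISC-like operations and a vector operation would each find their pattern.

```python
# Minimal sketch of pattern-based instruction selection.
# The IR opcodes and target mnemonics below are hypothetical placeholders,
# not LLVM's actual IR or any real ISA.

from dataclasses import dataclass

@dataclass
class IRInst:
    op: str           # IR opcode, e.g. "add", "mul", "vadd"
    operands: tuple   # operand value names
    result: str       # result value name

# One pattern table per instruction class; a real selector would match whole
# DAG patterns and operand types, not just opcodes.
SCALAR_PATTERNS = {"add": "ADD_R", "mul": "MUL_R"}
VECTOR_PATTERNS = {"vadd": "VADD_V"}   # assumed vector extension of the IR

def select(inst: IRInst) -> str:
    """Return the target instruction whose pattern matches this IR instruction."""
    for table in (VECTOR_PATTERNS, SCALAR_PATTERNS):
        if inst.op in table:
            return f"{table[inst.op]} {inst.result}, {', '.join(inst.operands)}"
    raise NotImplementedError(f"no pattern for IR op '{inst.op}'")

if __name__ == "__main__":
    for inst in (IRInst("add", ("a", "b"), "t0"),
                 IRInst("vadd", ("va", "vb"), "vt0")):
        print(select(inst))
```

Without the vector pattern table, the "vadd" case would have to be lowered to a loop of scalar instructions, which is the inefficiency the abstract argues a vector/matrix-aware IR and selection algorithm should remove.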

Analysis on the Cooling Efficiency of High-Performance Multicore Processors according to Cooling Methods (기계식 쿨링 기법에 따른 고성능 멀티코어 프로세서의 냉각 효율성 분석)

  • Kang, Seung-Gu;Choi, Hong-Jun;Ahn, Jin-Woo;Park, Jae-Hyung;Kim, Jong-Myon;Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information / v.16 no.7 / pp.1-11 / 2011
  • Many researchers have studied methods to improve processor performance. However, the highly integrated semiconductor technology used to improve processor performance causes many problems, such as reduced battery life, high power density, and hotspots. In particular, since hotspots have a critical impact on chip reliability, thermal problems should be considered together with performance and power consumption when designing high-performance processors. Various techniques have been studied to alleviate the thermal problems of processors. In the past, mechanical cooling methods were used to control processor temperature, but up-to-date microprocessors cause severe thermal problems, resulting in increased cooling cost. Therefore, recent studies have focused on architecture-level thermal-aware design techniques rather than mechanical cooling methods. Even though architecture-level thermal-aware design techniques are efficient at reducing processor temperature, they inevitably cause performance degradation. Therefore, if mechanical cooling methods can manage the thermal problems of processors efficiently, performance can be improved by avoiding the degradation caused by architecture-level thermal-aware techniques such as dynamic thermal management. In this paper, we analyze the cooling efficiency of high-performance multicore processors according to mechanical cooling methods. According to our experiments using an air cooler and a liquid cooler, the liquid cooler consumes more power than the air cooler but reduces the temperature more effectively. In particular, the cost of reducing the temperature by 1°C varies with the environment. Therefore, if mechanical cooling methods are used appropriately, the temperature of high-performance processors can be managed more efficiently.
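
The "cost of reducing the temperature by 1°C" comparison can be read as a simple ratio of extra cooling power to extra temperature reduction. The sketch below illustrates that metric only; the wattage and temperature figures are hypothetical placeholders, not measurements from the paper.

```python
# Illustration of a cooling-efficiency metric: extra power spent per extra
# degree of temperature reduction. All numbers are hypothetical placeholders.

def cost_per_degree(extra_power_w: float, extra_temp_drop_c: float) -> float:
    """Watts of additional cooling power per additional 1 degree C removed."""
    return extra_power_w / extra_temp_drop_c

# Hypothetical scenario: relative to an air cooler, a liquid cooler draws
# 10 W more and lowers the processor temperature by 8 degrees C more.
print(f"{cost_per_degree(10.0, 8.0):.2f} W per degree C")
```

Whether that ratio favors the liquid cooler depends on the workload and environment, which is the abstract's point about the cost varying by environment.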

Improving Haskell GC-Tuning Time Using Divide-and-Conquer (분할 정복법을 이용한 Haskell GC 조정 시간 개선)

  • An, Hyungjun;Kim, Hwamok;Liu, Xiao;Kim, Yeoneo;Byun, Sugwoo;Woo, Gyun
    • KIPS Transactions on Computer and Communication Systems / v.6 no.9 / pp.377-384 / 2017
  • The performance improvement of single-core processors has reached its limit, since circuit density can no longer be increased due to overheating. Therefore, multicore and manycore architectures have emerged as viable approaches, and parallel programming has become more important. Haskell, a purely functional language, is gaining popularity in this situation since it naturally supports parallel programming through beneficial features such as implicit parallelism in evaluating expressions and monadic tools supporting parallel constructs. However, the performance of Haskell parallel programs is strongly influenced by the performance of the run-time system, including the garbage collector. Although a memory profiling tool called GC-tune has been proposed, a more systematic way to use it is needed. Since GC-tune finds the optimal memory size by executing the target program with all possible combinations of GC options, the tuning time is too long. This paper proposes a basic divide-and-conquer method that reduces the number of GC-tune executions by shrinking the search area to one quarter at every search step, as sketched below. Applied to two parallel programs, a maximal independent set program and a K-means program, this method reduced the memory tuning time by a factor of 7.78 on average with 98% accuracy.
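
A minimal sketch of the divide-and-conquer idea, assuming the two tuned quantities are the GHC RTS allocation-area size and suggested heap size (the parameters GC-tune is commonly described as varying): at each step the 2D search rectangle is split into four quadrants, the quadrant whose centre point measures best is kept, and the rest are discarded, so the area shrinks to a quarter per step. The measure() function below is a stand-in for timing one run of the target program; this is an illustration in Python, not the authors' implementation.

```python
# Divide-and-conquer search over a 2D grid of GC settings.
# measure() stands in for "run the Haskell program with these RTS options
# and return its run time"; here it is a synthetic cost surface.

import random

def measure(alloc_kb: int, heap_kb: int) -> float:
    """Placeholder cost function standing in for one timed program run."""
    return (alloc_kb - 8192) ** 2 + (heap_kb - 65536) ** 2 + random.random()

def dc_search(a_lo, a_hi, h_lo, h_hi, min_span=1024):
    """Keep the quadrant whose centre measures best, shrinking the
    search area to one quarter at each step."""
    while a_hi - a_lo > min_span or h_hi - h_lo > min_span:
        a_mid, h_mid = (a_lo + a_hi) // 2, (h_lo + h_hi) // 2
        quadrants = [
            (a_lo, a_mid, h_lo, h_mid), (a_mid, a_hi, h_lo, h_mid),
            (a_lo, a_mid, h_mid, h_hi), (a_mid, a_hi, h_mid, h_hi),
        ]
        # Four timed runs per step instead of an exhaustive sweep.
        a_lo, a_hi, h_lo, h_hi = min(
            quadrants,
            key=lambda q: measure((q[0] + q[1]) // 2, (q[2] + q[3]) // 2),
        )
    return (a_lo + a_hi) // 2, (h_lo + h_hi) // 2

if __name__ == "__main__":
    alloc, heap = dc_search(1024, 262144, 1024, 1048576)
    print(f"candidate settings: allocation area ~{alloc} KB, heap ~{heap} KB")
```

Because each step costs only a few timed runs while discarding three quarters of the remaining search space, the total number of executions in this sketch grows logarithmically rather than linearly in the grid size, which is where the reduction in tuning time comes from; the trade-off is accuracy, since a non-smooth run-time surface can steer the search into the wrong quadrant.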