• Title/Summary/Keyword: Memory I/O

An Improved Estimation Model of Server Power Consumption for Saving Energy in a Server Cluster Environment (서버 클러스터 환경에서 에너지 절약을 위한 향상된 서버 전력 소비 추정 모델)

  • Kim, Dong-Jun;Kwak, Hu-Keun;Kwon, Hui-Ung;Kim, Young-Jong;Chung, Kyu-Sik
    • The KIPS Transactions:PartA / v.19A no.3 / pp.139-146 / 2012
  • In a server cluster environment, one way to save energy is to control server power according to traffic conditions, that is, to determine the ON/OFF state of each server according to the energy usage of the data center and of the individual servers. To do this, we need a way to estimate each server's energy consumption. In this paper, we use a software-based power consumption estimation model because it is more efficient, in terms of energy and cost, than a hardware approach using a power meter. The traditional software-based power consumption estimation model has a drawback: because it uses only the CPU idle status field, it cannot capture the computing status of a server well and therefore does not estimate power consumption effectively. In this paper, we present a CPU-field-based power consumption estimation model that estimates power more accurately than the two traditional models (the CPU/disk/memory-utilization-based model and the CPU-idle-utilization-based model) by using the various CPU status fields to capture both the CPU status of each server and the overall status of the system. We performed experiments using two PCs and compared the power consumption estimated by the software model with that measured by a power meter. The experimental results show that the traditional models have an average error rate of about 8-15%, while the proposed model has an average error rate of about 2%.
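The following is a minimal, hypothetical sketch of the idea behind a CPU-field-based software estimation model: sampling several Linux /proc/stat CPU fields (user, nice, system, iowait, irq, softirq) rather than only the idle field, and mapping a weighted busy share onto a linear power estimate. The baseline power, dynamic range, and per-field weights are illustrative placeholders, not the paper's calibrated parameters.

```python
# Illustrative sketch (not the paper's exact model): estimate server power from
# several /proc/stat CPU fields instead of only the idle field.
# All constants below are hypothetical and would need calibration against a power meter.

P_IDLE = 60.0          # assumed baseline power of the server at idle (W)
P_RANGE = 40.0         # assumed dynamic power range between idle and full load (W)

# Hypothetical weights for each CPU status field's contribution to dynamic power.
FIELD_WEIGHTS = {"user": 1.0, "nice": 1.0, "system": 1.0,
                 "iowait": 0.3, "irq": 0.8, "softirq": 0.8}

def read_cpu_fields(path="/proc/stat"):
    """Return the aggregate CPU counters as a dict of field name -> jiffies (Linux only)."""
    with open(path) as f:
        parts = f.readline().split()   # first line: "cpu  user nice system idle iowait irq softirq ..."
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    return dict(zip(names, map(int, parts[1:1 + len(names)])))

def estimate_power(prev, curr):
    """Estimate power (W) over the interval between two /proc/stat samples."""
    delta = {k: curr[k] - prev[k] for k in curr}
    total = sum(delta.values()) or 1
    # Weighted share of non-idle time, instead of simply 1 - idle/total.
    busy = sum(FIELD_WEIGHTS.get(k, 0.0) * v for k, v in delta.items()) / total
    return P_IDLE + P_RANGE * busy

if __name__ == "__main__":
    import time
    a = read_cpu_fields()
    time.sleep(1)
    b = read_cpu_fields()
    print(f"estimated power: {estimate_power(a, b):.1f} W")
```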

PMS : Prefetching Strategy for Multi-level Storage System (PMS : 다단계 저장장치를 고려한 효율적인 선반입 정책)

  • Lee, Kyu-Hyung;Lee, Hyo-Jeong;Noh, Sam-Hyuk
    • Journal of KIISE:Computer Systems and Theory / v.36 no.1 / pp.26-32 / 2009
  • The multi-level storage architecture has been widely adopted in servers and data centers. However, while prefetching has been shown to be a crucial technique for exploiting the sequentiality of accesses common in such systems and for hiding the increasing relative cost of disk I/O, existing multi-level storage studies have focused mostly on cache replacement strategies. In this paper, we show that prefetching algorithms designed for single-level systems may have their limitations magnified when applied to multi-level systems: overly conservative prefetching cannot effectively use the lower-level cache space, while overly aggressive prefetching is compounded across levels and generates a large amount of wasted prefetching. We design and implement a hierarchy-aware lower-level prefetching strategy called PMS (Prefetching strategy for Multi-level Storage system) that is applicable to any upper-level prefetching algorithm. PMS does not require application hints, a priori knowledge from the application, or modification to the interface. Instead, it monitors the upper-level access patterns as well as the lower-level cache status, and dynamically adjusts the aggressiveness of the lower-level prefetching activities. We evaluated PMS through extensive simulation studies using a verified multi-level storage simulator, an accurate disk simulator, and access traces with different access patterns. Our results indicate that PMS dynamically controls the aggressiveness of lower-level prefetching in reaction to multiple system and workload parameters, improving overall system performance in all 32 test cases. Working with four well-known prefetching algorithms adopted in real systems, PMS achieves an improvement of up to 35% in average request response time, with an average improvement of 16.56% over all cases.
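As an illustration of the abstract's idea, the sketch below shows a hypothetical lower-level prefetch controller that monitors the sequentiality of requests arriving from the upper level and the fraction of prefetched blocks evicted unused from the lower-level cache, then doubles or halves its prefetch degree accordingly. The thresholds and adjustment rules are assumptions made for illustration, not the actual PMS algorithm.

```python
# Minimal sketch of a hierarchy-aware lower-level prefetch controller.
# Thresholds and the adjustment policy are hypothetical, not PMS's published algorithm.

class LowerLevelPrefetcher:
    def __init__(self, min_degree=0, max_degree=32):
        self.degree = 4                 # current prefetch aggressiveness (blocks ahead)
        self.min_degree = min_degree
        self.max_degree = max_degree
        self.last_block = None
        self.sequential_hits = 0
        self.requests = 0
        self.prefetched_used = 0
        self.prefetched_wasted = 0      # prefetched blocks evicted before any use

    def on_request(self, block_id):
        """Observe one request arriving from the upper level; return blocks to prefetch."""
        self.requests += 1
        if self.last_block is not None and block_id == self.last_block + 1:
            self.sequential_hits += 1
        self.last_block = block_id
        # Issue prefetches for the next `degree` blocks (actual I/O calls omitted here).
        return [block_id + i for i in range(1, self.degree + 1)]

    def on_prefetch_eviction(self, was_used):
        """Feedback from the lower-level cache when a prefetched block is evicted."""
        if was_used:
            self.prefetched_used += 1
        else:
            self.prefetched_wasted += 1

    def adjust(self):
        """Periodically re-tune aggressiveness from the observed stream and cache status."""
        if self.requests == 0:
            return
        sequentiality = self.sequential_hits / self.requests
        issued = self.prefetched_used + self.prefetched_wasted
        waste_ratio = self.prefetched_wasted / issued if issued else 0.0
        if sequentiality > 0.7 and waste_ratio < 0.2:
            self.degree = min(self.max_degree, self.degree * 2)   # be more aggressive
        elif waste_ratio > 0.5:
            self.degree = max(self.min_degree, self.degree // 2)  # back off
        # Reset window counters for the next adjustment period.
        self.sequential_hits = self.requests = 0
        self.prefetched_used = self.prefetched_wasted = 0
```

In this sketch, `adjust()` would be called at a fixed interval so that aggressiveness tracks changes in both the workload and the lower-level cache behavior, mirroring the dynamic control described in the abstract.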

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA / v.11A no.4 / pp.243-250 / 2004
  • Monitoring is used to check whether a real-time system provides its services on time. Generally, real-time monitoring focuses on investigating the current status of a real-time system. However, to support stable performance of a real-time system, monitoring should provide not only the current status of real-time processes but also a prediction of their executions. The legacy prediction model has several limitations when applied to real-time monitoring. First, it performs a static prediction only after a real-time process has finished. Second, it requires a statistical pre-analysis before a prediction. Third, its transition probabilities and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. This model removes unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction through trend analysis of past execution data. Above all, the model is designed to support dynamic prediction, performed while a real-time process is executing. Experimental results show that the judgment accuracy is greater than 80% when the training-set size is set to over 10, and, in the case of multi-level prediction, that the prediction error is minimized when the number of executions is larger than the training-set size. The proposed model has limitations in that it uses the simplest learning algorithm and does not consider a multi-regional space model managing CPU, memory, and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be applied in areas related to real-time monitoring and control.
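As a rough illustration of the approach described in the abstract, the sketch below keeps a sliding window of recent execution times (the "training set"), predicts the next execution time from a simple trend analysis, and judges whether the process is expected to meet its deadline, updating dynamically as executions complete. The window size of 10 echoes the abstract's observation about accuracy above 80%; the trend-based predictor and the deadline judgment are hypothetical stand-ins for the paper's learning algorithm.

```python
# Illustrative sketch (not the paper's exact model): dynamic, training-set-based
# prediction of a real-time process's next execution time.

from collections import deque

class ExecutionPredictor:
    def __init__(self, window_size=10):
        # Sliding window of the most recent measured execution times ("training set").
        self.history = deque(maxlen=window_size)

    def observe(self, exec_time_ms):
        """Record a completed execution's measured time (updated per execution)."""
        self.history.append(exec_time_ms)

    def predict(self):
        """Predict the next execution time from the recent trend."""
        if not self.history:
            return None
        if len(self.history) == 1:
            return self.history[0]
        # Simple trend analysis: recent average plus the average step change.
        values = list(self.history)
        avg = sum(values) / len(values)
        steps = [b - a for a, b in zip(values, values[1:])]
        trend = sum(steps) / len(steps)
        return avg + trend

    def judge(self, deadline_ms):
        """Judge whether the next execution is expected to finish within the deadline."""
        predicted = self.predict()
        return predicted is not None and predicted <= deadline_ms

# Usage: feed measured execution times as the process runs, then query the judgment.
predictor = ExecutionPredictor(window_size=10)
for t in [12.1, 12.4, 12.2, 12.8, 13.0, 12.9, 13.2, 13.1, 13.4, 13.5]:
    predictor.observe(t)
print(predictor.predict())                 # predicted next execution time (ms)
print(predictor.judge(deadline_ms=15.0))   # True if expected to meet the deadline
```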