• Title/Abstract/Keyword: Computing Power


클라우드 컴퓨팅 환경에서 자원의 사용률을 이용한 소비전력 예측 방안 (Prediction Method about Power Consumption by Using Utilization Rate of Resources in Cloud Computing Environment)

  • 박상면;문영성
    • 인터넷정보학회논문지 / Vol. 17, No. 1 / pp.7-14 / 2016
  • Recently, as cloud computing technology has advanced, users can access their work anytime and anywhere from a smartphone or computer. Cloud computing has also grown because it is regarded as a suitable way to reduce the initial investment in IT infrastructure and the burden of maintenance. As demand for cloud computing has increased rapidly, the power consumed to maintain data center environments has become a problem. To solve this problem, power consumption must first be measurable. Although measuring power consumption with a power meter gives accurate values, it incurs additional cost. This paper therefore proposes a method for predicting power consumption without relying on a power meter. To demonstrate the accuracy of the proposed method, CPU and hard disk tests were conducted in a cloud computing environment. During the tests, predicted values from the proposed method and actual values from a power meter were collected, and the error rates were calculated. As a result, the difference between the predicted and actual values was about 4.22% in the CPU test and about 8.51% in the hard disk test.
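The abstract does not state the prediction model itself; as a rough, purely illustrative sketch of the general idea (estimating power from CPU and disk utilization counters instead of a power meter), the following assumes a simple linear model with hypothetical calibration coefficients.

```python
# Illustrative only: a simple linear utilization-based power model.
# The coefficients below are hypothetical calibration values, not taken
# from the paper; the paper's actual model may differ.

def estimate_power(cpu_util, disk_util,
                   p_idle=50.0,      # watts drawn when the node is idle (assumed)
                   cpu_range=65.0,   # extra watts at 100% CPU utilization (assumed)
                   disk_range=10.0): # extra watts at 100% disk utilization (assumed)
    """Estimate server power (W) from CPU and disk utilization in [0, 1]."""
    return p_idle + cpu_range * cpu_util + disk_range * disk_util

def error_rate(predicted, measured):
    """Relative error (%) between a predicted and a metered value."""
    return abs(predicted - measured) / measured * 100.0

if __name__ == "__main__":
    pred = estimate_power(cpu_util=0.8, disk_util=0.2)
    print(f"predicted: {pred:.1f} W")
    print(f"error vs. a metered 105 W reading: {error_rate(pred, 105.0):.2f}%")
```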

Algorithm for Improving the Computing Power of Next Generation Wireless Receivers

  • Rizvi, Syed S.
    • Journal of Computing Science and Engineering / Vol. 6, No. 4 / pp.310-319 / 2012
  • Next generation wireless receivers demand algorithms of low computational complexity and high computing power in order to perform fast signal detection and error estimation. Several signal detection and estimation algorithms have been proposed for next generation wireless receivers, primarily designed to provide reasonable performance in terms of signal-to-noise ratio (SNR) and bit error rate (BER). However, none of them has been chosen for direct implementation, as they impose high computational complexity while offering relatively low computing power. This paper presents a low-complexity, power-efficient algorithm that improves the computing power and provides relatively faster signal detection for next generation wireless multiuser receivers. Measurement results for the proposed algorithm are provided, and overall system performance is reported in terms of BER and computational complexity. Finally, to verify the low complexity of the proposed algorithm, a formal mathematical proof is also presented.
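The abstract reports performance in terms of SNR, BER, and computational complexity but does not describe the detector itself; the sketch below is a generic BER-versus-SNR Monte Carlo for a plain BPSK hard-decision detector, shown only to illustrate that kind of evaluation and not the paper's algorithm.

```python
# Generic BER-vs-SNR Monte Carlo for a BPSK hard-decision detector.
# This is NOT the paper's algorithm; it only illustrates the kind of
# BER measurement used to compare receiver algorithms.
import math
import random

def ber_for_snr(snr_db, n_bits=100_000):
    """Empirical bit error rate of BPSK over AWGN at the given SNR (dB)."""
    snr = 10 ** (snr_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * snr))  # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = random.choice((0, 1))
        tx = 1.0 if bit else -1.0
        rx = tx + random.gauss(0.0, sigma)
        if (rx > 0.0) != bool(bit):       # hard decision
            errors += 1
    return errors / n_bits

if __name__ == "__main__":
    for snr_db in (0, 2, 4, 6, 8):
        print(f"SNR {snr_db:2d} dB -> BER {ber_for_snr(snr_db):.4f}")
```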

An Offloading Scheduling Strategy with Minimized Power Overhead for Internet of Vehicles Based on Mobile Edge Computing

  • He, Bo;Li, Tianzhang
    • Journal of Information Processing Systems / Vol. 17, No. 3 / pp.489-504 / 2021
  • By distributing computing tasks among devices at the edge of the network, edge computing uses virtualization, distributed computing, and parallel computing technologies to let users dynamically obtain computing power, storage space, and other services as needed. Applying edge computing architectures to the Internet of Vehicles can effectively ease the tension between computation-heavy, delay-sensitive vehicle applications and the limited, unevenly distributed resources of vehicles. In this paper, a predictive offloading strategy based on the MEC load state is proposed, which considers both reducing the delay of returning results over the RSU multi-hop backhaul and reducing the queuing time of tasks at MEC servers. First, a delay factor and an energy consumption factor are introduced according to the characteristics of tasks, and the costs of local execution and of offloading to MEC servers are defined. Then, from the perspective of a vehicle, a delay preference factor and an energy consumption preference factor are introduced to define the two costs for each computing task, and whether a task is offloaded to an MEC server is decided by comparing them. Furthermore, a mathematical optimization model that minimizes the power overhead is constructed under delay and power consumption constraints, and the simulated annealing algorithm is used to solve it. Simulation results show that this strategy effectively reduces system power consumption by shortening task execution delay while meeting the delay and energy consumption requirements at the lowest cost.
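The abstract names simulated annealing over a delay/energy cost model but gives no formulas; the sketch below illustrates that general pattern with entirely hypothetical cost terms and weights rather than the paper's actual model.

```python
# Minimal simulated-annealing sketch for binary offloading decisions.
# The cost model (delay + weighted energy, made-up constants) stands in
# for the paper's cost functions, which are not given in the abstract.
import math
import random

def task_cost(offload, size_mb, w_delay=1.0, w_energy=0.5):
    """Cost of one task: weighted sum of (hypothetical) delay and energy terms."""
    if offload:
        delay = size_mb / 10.0 + 0.05      # transmit + remote execution (assumed)
        energy = size_mb * 0.02            # radio transmission energy (assumed)
    else:
        delay = size_mb / 2.0              # slower local CPU (assumed)
        energy = size_mb * 0.10            # local CPU energy (assumed)
    return w_delay * delay + w_energy * energy

def total_cost(plan, tasks):
    return sum(task_cost(off, size) for off, size in zip(plan, tasks))

def anneal(tasks, steps=5000, t0=1.0, cooling=0.999):
    plan = [random.random() < 0.5 for _ in tasks]
    best, best_cost, temp = plan[:], total_cost(plan, tasks), t0
    for _ in range(steps):
        cand = plan[:]
        cand[random.randrange(len(tasks))] ^= True   # flip one offloading decision
        delta = total_cost(cand, tasks) - total_cost(plan, tasks)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            plan = cand                              # accept better (or sometimes worse) moves
        if total_cost(plan, tasks) < best_cost:
            best, best_cost = plan[:], total_cost(plan, tasks)
        temp *= cooling
    return best, best_cost

if __name__ == "__main__":
    tasks = [random.uniform(1, 20) for _ in range(12)]  # task sizes in MB
    plan, cost = anneal(tasks)
    print("offload decisions:", plan)
    print(f"total cost: {cost:.2f}")
```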

Five Forces Model of Computational Power: A Comprehensive Measure Method

  • Wu, Meixi;Guo, Liang;Yang, Xiaotong;Xie, Lina;Wang, Shaopeng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 7 / pp.2239-2256 / 2022
  • In this paper, a model is proposed to comprehensively evaluate computational power. The five forces model of computational power addresses the problem that the measurement units of different indexes are not unified during computational power evaluation. It combines the bidirectional projection method with the TOPSIS method, making the evaluation of the overall state of computational power more scientific and effective. Lastly, an example shows the validity and practicability of the model.
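TOPSIS is a standard multi-criteria decision method; the sketch below shows plain TOPSIS ranking over made-up computing-power indicators, without the paper's bidirectional projection step or its actual five-forces index system.

```python
# Plain TOPSIS over hypothetical computing-power indicators.
# The indicator values and weights are invented for illustration; the paper's
# five-forces index system and bidirectional-projection step are not reproduced.
import math

def topsis(matrix, weights):
    """Rank alternatives (rows) over benefit criteria (columns) by closeness."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # 1. Vector-normalize each column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    # 2. Ideal and anti-ideal solutions (all criteria treated as benefits here).
    ideal = [max(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    worst = [min(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    # 3. Closeness coefficient of each alternative to the ideal solution.
    scores = []
    for i in range(n_alt):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n_crit)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n_crit)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

if __name__ == "__main__":
    # Rows: three data centers; columns: e.g. FLOPS, memory BW, storage, network, efficiency.
    data = [[90, 70, 60, 80, 75],
            [60, 85, 90, 70, 65],
            [80, 80, 70, 60, 90]]
    weights = [0.3, 0.2, 0.15, 0.15, 0.2]
    print([round(s, 3) for s in topsis(data, weights)])
```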

가상 컴퓨팅 랩 환경에서 노드 전원관리 스케줄러 설계 및 구현 (Design and Implementation of a Node Power Scheduler in Virtual Computing Lab Environment)

  • 서경석;이봉환
    • 한국정보통신학회논문지 / Vol. 17, No. 8 / pp.1827-1834 / 2013
  • The traditional PC-based desktop environment is shifting to a server-based virtual desktop environment because of advantages such as security, mobility, and reduced upgrade costs. In this paper, a virtual computing lab service system applicable to computer laboratories was designed and implemented using an open-source cloud computing platform and hypervisor. In addition, a power scheduler is proposed to reduce the power consumption of the nodes in the server farm, and experimental results show that deploying the scheduler reduces power consumption substantially compared with the existing system.
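The abstract does not detail the scheduling policy; a minimal sketch of the general idea (power off server-farm nodes that have been idle and wake one when capacity is needed) follows, with a hypothetical idle threshold and stubbed power-control calls.

```python
# Minimal node power-scheduler sketch: suspend nodes with no running VMs,
# resume one when a new VM cannot be placed. The threshold and the
# power-control call are placeholders, not the paper's implementation.
import time

IDLE_LIMIT_S = 600  # power a node off after 10 idle minutes (assumed threshold)

class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.vms = 0
        self.powered_on = True
        self.idle_since = time.time()

    def set_power(self, on):
        # Placeholder: a real scheduler would issue IPMI / Wake-on-LAN here.
        self.powered_on = on
        print(f"{self.name}: {'power on' if on else 'power off'}")

def schedule_tick(nodes):
    """One scheduler pass: shut down nodes that have been idle too long."""
    now = time.time()
    for n in nodes:
        if n.powered_on and n.vms == 0 and now - n.idle_since > IDLE_LIMIT_S:
            n.set_power(False)

def place_vm(nodes):
    """Place a VM on a powered-on node, waking a sleeping node if necessary."""
    for n in nodes:
        if n.powered_on and n.vms < n.capacity:
            n.vms += 1
            return n
    for n in nodes:
        if not n.powered_on:
            n.set_power(True)
            n.vms, n.idle_since = 1, time.time()
            return n
    return None  # the farm is full
```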

Implementation of an Intelligent Grid Computing Architecture for Transient Stability Constrained TTC Evaluation

  • Shi, Libao;Shen, Li;Ni, Yixin;Bazargan, Masound
    • Journal of Electrical Engineering and Technology / Vol. 8, No. 1 / pp.20-30 / 2013
  • An intelligent grid computing architecture is proposed and developed for transient stability constrained total transfer capability (TTC) evaluation of the future smart grid. In the proposed architecture, a model of generalized compute nodes with an 'able person should do more work' feature is presented and implemented to make full use of each node. A timeout handling strategy called conditional resource preemption is designed to further improve overall computing performance. The architecture can intelligently and effectively integrate heterogeneous distributed computing resources across an Intranet/Internet and implements dynamic load balancing. Furthermore, the robustness of the architecture is also analyzed and developed. Case studies have been carried out on the IEEE New England 39-bus system and a real-sized Chinese power system, and the results demonstrate the practicability and effectiveness of the intelligent grid computing architecture.
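As a rough illustration of the 'able person should do more work' idea plus a timeout fallback (not the paper's implementation), the sketch below assigns tasks in proportion to invented node speeds and reassigns a timed-out node's unfinished tasks.

```python
# Sketch of capability-proportional work assignment ("able person should do
# more work"): faster nodes receive more tasks, and a timed-out node's pending
# tasks are handed to the fastest remaining node. Node speeds are invented.

def split_tasks(n_tasks, node_speeds):
    """Divide n_tasks among nodes in proportion to their relative speed."""
    total = sum(node_speeds.values())
    shares = {name: int(n_tasks * s / total) for name, s in node_speeds.items()}
    # Give any remainder from integer rounding to the fastest node.
    fastest = max(node_speeds, key=node_speeds.get)
    shares[fastest] += n_tasks - sum(shares.values())
    return shares

def reassign_on_timeout(assignments, timed_out, finished, node_speeds):
    """Move the timed-out node's unfinished tasks to the fastest other node."""
    pending = [t for t in assignments[timed_out] if t not in finished]
    if pending:
        target = max((n for n in node_speeds if n != timed_out),
                     key=node_speeds.get)
        assignments[target] = assignments[target] + pending
        assignments[timed_out] = [t for t in assignments[timed_out] if t in finished]
    return assignments

if __name__ == "__main__":
    speeds = {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}  # relative benchmark scores
    print(split_tasks(70, speeds))  # the fastest node receives the largest share
```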

Power Modeling Approach for GPU Source Program

  • Li, Junke;Guo, Bing;Shen, Yan;Li, Deguang;Huang, Yanhui
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 1 / pp.181-191 / 2018
  • The rapid development of information technology is making our environment smarter, and massive high performance computers provide the powerful computing behind it. The Graphics Processing Unit (GPU), a typical high performance component, is being widely used for both graphics and general-purpose applications. Although it can greatly improve computing power, it also brings significant power consumption and needs a sufficient power supply. To make high performance computing more sustainable, an important step is to measure its power. Current power technologies for GPUs have drawbacks; for example, they are not applicable to power estimation at an early development stage. In this article, we present a novel power-modeling technique that correlates power consumption with program characteristics from the programmer's perspective and then estimates the power consumption of a source program without pre-running it. We conduct experiments on Nvidia's GT740 platform; the results show that our power model is more accurate than a regression model, with an average error of 2.34% and a maximum error of 9.65%.
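The abstract does not give the model's feature set or coefficients; the sketch below illustrates the general approach (a linear estimate of kernel power from source-level operation counts) using hypothetical features and weights, not the paper's fitted model.

```python
# Illustrative static power model: estimate a GPU kernel's average power from
# source-level operation counts with a linear model. The feature set and
# coefficients are hypothetical, not the paper's fitted model.

# Assumed per-feature power weights (watts contributed per normalized count).
COEFFS = {
    "flops": 0.8,           # arithmetic instructions
    "global_mem_ops": 2.5,  # global memory accesses
    "shared_mem_ops": 0.6,  # shared memory accesses
    "branches": 0.3,        # divergent control flow
}
BASE_POWER_W = 25.0         # assumed GPU idle/static power

def estimate_kernel_power(features):
    """Predict average power (W) from normalized source-program features."""
    return BASE_POWER_W + sum(COEFFS[k] * features.get(k, 0.0) for k in COEFFS)

if __name__ == "__main__":
    # Counts per thread, normalized against some reference workload (made up).
    matmul_like = {"flops": 40, "global_mem_ops": 6, "shared_mem_ops": 20, "branches": 1}
    print(f"estimated power: {estimate_kernel_power(matmul_like):.1f} W")
```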

A new model and testing verification for evaluating the carbon efficiency of server

  • Liang Guo;Yue Wang;Yixing Zhang;Caihong Zhou;Kexin Xu;Shaopeng Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 10 / pp.2682-2700 / 2023
  • To cope with the risks of climate change and promote the realization of carbon peaking and carbon neutrality, this paper first comprehensively considers the policy background, technical trends, and carbon-reduction paths of energy conservation and emission reduction in the data center server industry. Second, we propose a computing-power carbon-efficiency metric for data center servers and construct the carbon emission per performance of server (CEPS) model. Based on this model, mainstream data center servers were selected for testing. The results show that as server performance improves, total carbon emissions rise; however, performance improves faster than carbon emissions grow, so the relative carbon emission per unit of computing power shows a continuously decreasing trend. Moreover, there are some differences between products, and the carbon emission per unit of performance is calculated to be 20-60 kg when the service life of the server is five years.
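The CEPS formula is not given in the abstract; as a rough illustration of carbon-per-performance accounting, the sketch below divides assumed lifetime emissions (embodied plus operational) by an assumed benchmark score. All inputs are hypothetical.

```python
# Rough carbon-per-performance illustration (not the paper's CEPS formula).
# All inputs below are hypothetical: embodied carbon, average power, grid
# carbon intensity, service life, and the benchmark score are assumptions.

def lifetime_emissions_kg(embodied_kg, avg_power_w, grid_kgco2_per_kwh, years):
    """Embodied plus operational CO2 over the server's service life (kg)."""
    hours = years * 365 * 24
    operational = avg_power_w / 1000.0 * hours * grid_kgco2_per_kwh
    return embodied_kg + operational

def carbon_per_performance(emissions_kg, benchmark_score):
    """Carbon emitted per unit of benchmark performance (kg per score point)."""
    return emissions_kg / benchmark_score

if __name__ == "__main__":
    total = lifetime_emissions_kg(embodied_kg=1300,         # manufacturing footprint
                                  avg_power_w=350,          # average draw under load
                                  grid_kgco2_per_kwh=0.45,  # grid carbon intensity
                                  years=5)
    print(f"lifetime emissions: {total:.0f} kg CO2")
    print(f"per unit performance: {carbon_per_performance(total, 400):.1f} kg/score")
```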

병렬처리를 이용한 화력발전소의 실시간 시뮬레이션 (Real time simulation using multiple DSPs for fossil power plants)

  • 박희준;김병국
    • 제어로봇시스템학회 학술대회논문집 / 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집 (Proceedings of the 1997 Korea Automatic Control Conference); 한국전력공사 서울연수원; 17-18 Oct. 1997 / pp.480-483 / 1997
  • A fossil power plant can be modeled by a large number of algebraic and differential equations. When a large, complicated fossil power plant is simulated on a computer such as a workstation or PC, it takes a long time to calculate the full set of equations, so new processing systems with high computing speed are ultimately needed to develop real-time simulators. The vital requirements for a real-time simulator are accuracy, computing speed, and meeting deadlines. In this paper, we present an enhanced strategy that provides powerful computing capability through parallel processing on DSP processors connected by communication links. We designed general-purpose DSP modules and a VME interface module. Because the DSP modules are designed for general use, the parallel system can easily be expanded by simply connecting new DSP modules to it. Additionally, we propose methods for downloading programs and initial data to each DSP module via the VME bus and DPRAM, and processing sequences for computing and updating values between the DSP modules and the CPU30 board while the simulator is running.
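The paper's DSP/VME implementation is hardware-specific; purely to illustrate the underlying idea of partitioning a plant model's equations across processors that exchange coupling values every step, here is a toy two-subsystem sketch (a made-up model, not a power plant simulation).

```python
# Toy illustration of partitioned real-time simulation: the plant model's state
# equations are split between two "processors" that each integrate their own
# subsystem and exchange coupling values every step. The dynamics below are a
# made-up two-state system, not a fossil power plant model.

DT = 0.01  # integration step (s)

def subsystem_a(x_a, x_b_coupling):
    """dx_a/dt for the first partition (toy dynamics)."""
    return -0.5 * x_a + 0.1 * x_b_coupling

def subsystem_b(x_b, x_a_coupling):
    """dx_b/dt for the second partition (toy dynamics)."""
    return -0.2 * x_b + 0.3 * x_a_coupling

def simulate(steps=500):
    x_a, x_b = 1.0, 0.0
    for _ in range(steps):
        # Exchange coupling values (on the real system: over DPRAM / the VME bus).
        a_in, b_in = x_b, x_a
        # Each partition advances one explicit-Euler step in parallel.
        x_a += DT * subsystem_a(x_a, a_in)
        x_b += DT * subsystem_b(x_b, b_in)
    return x_a, x_b

if __name__ == "__main__":
    print(simulate())
```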


CUDA 기반 GPU에서 효율적인 Power Method의 구현 (Implementation of Efficient Power Method on CUDA GPU)

  • 김정환;김진수
    • 한국컴퓨터정보학회논문지 / Vol. 16, No. 2 / pp.9-16 / 2011
  • GPUs are increasingly used in many high-performance computing applications because they can exploit large-scale data parallelism easily and at low cost. The power method, which computes the dominant eigenvector of a matrix, is used in various applications such as the PageRank algorithm for ranking web pages. In this study, the power method was parallelized on a GPU, and optimizations for improving its performance are presented. The power method repeatedly performs matrix-vector multiplication, which is easily parallelized on a GPU. However, it also requires additional work, such as checking whether the eigenvector has converged and rescaling the vector for the next multiplication; this work causes the GPU kernel code to be launched many times and triggers unnecessary data movement. In this study, the performance of the power method was improved by reducing the number of kernel launches, optimizing the thread layout, and optimizing the convergence-check computation.
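For reference, the algorithm being accelerated looks as follows; this is a CPU-side NumPy sketch of the standard power method, not the paper's CUDA implementation.

```python
# Plain NumPy power method: the algorithm the paper parallelizes with CUDA.
# Each iteration is one matrix-vector product, a normalization, and a
# convergence test; on the GPU these steps map to separate kernel launches.
import numpy as np

def power_method(A, tol=1e-8, max_iter=1000):
    """Return the dominant eigenvalue and eigenvector estimate for square A."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)        # initial unit vector
    eigenvalue = 0.0
    for _ in range(max_iter):
        y = A @ x                      # matrix-vector multiplication
        new_eigenvalue = np.linalg.norm(y)
        y /= new_eigenvalue            # rescale for the next iteration
        if abs(new_eigenvalue - eigenvalue) < tol:  # convergence test
            return new_eigenvalue, y
        x, eigenvalue = y, new_eigenvalue
    return eigenvalue, x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((100, 100))
    A = (A + A.T) / 2                  # symmetric test matrix
    lam, vec = power_method(A)
    print("dominant eigenvalue:", round(lam, 4))
```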