• Title/Summary/Keyword: CPU availability

A Predictive Virtual Machine Placement in Decentralized Cloud using Blockchain

  • Suresh B. Rathod / International Journal of Computer Science & Network Security / v.24 no.4 / pp.60-66 / 2024
  • Data tampering during transmission results in the loss of a host's sensitive information, including the number of VMs, storage availability, and other details. In a distributed cloud environment, each server (computing server, CS) is configured with a Local Resource Monitor (LRM), which runs independently and performs Virtual Machine (VM) migrations to nearby servers. Approaches such as predictive VM migration [21] [22], in which each server considers nearby servers' CPU usage, and rotative decision-making [21] among the servers in the distributed cloud environment have been proposed. These approaches use the underlying server's computing power to predict its own future resource utilization and to compute nearby servers' resource usage, which leaves running VMs and their applications waiting for computing power. To reduce this, a decentralized decision-making hybrid model for VM migration is needed, in which servers in the decentralized cloud receive future resource-usage predictions from an analytical computing system and decide whether to migrate VMs to neighbouring servers. Hosts in the decentralized cloud share their details with peer servers at fixed intervals, which creates an opportunity to tamper with the messages exchanged between the HC and CH. At the same time, it reduces the chance of over-utilization of peer servers caused by a compromised host. This paper discusses a rotative decisive (RD) approach for VM migration among peer computing servers (CS) in a decentralized cloud environment, preserving the confidentiality and integrity of the host's data. Experimental results show that the proposed predictive VM migration approach reduces the extra VM migrations caused by over-utilization of the identified servers, reduces the number of active servers to a great extent, and ensures the confidentiality and integrity of peer hosts' data.
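
  A minimal sketch of the kind of predictive migration decision this abstract describes. The moving-average predictor, the 80% CPU threshold, the peer-report format, and the HMAC over status messages are all illustrative assumptions; the paper itself relies on a blockchain for integrity, which is not reproduced here.

    import hmac, hashlib, json
    from statistics import mean

    SHARED_KEY = b"demo-key"  # hypothetical pre-shared key; the paper uses a blockchain instead

    def sign(status: dict) -> str:
        # Tag a status message so a peer can detect tampering in transit
        payload = json.dumps(status, sort_keys=True).encode()
        return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

    def verify(status: dict, tag: str) -> bool:
        return hmac.compare_digest(sign(status), tag)

    def predict_cpu(history: list) -> float:
        # Naive predictor: recent average plus the latest trend
        if len(history) < 2:
            return history[-1]
        return mean(history[-3:]) + (history[-1] - history[-2])

    def choose_target(peer_cpu: dict, threshold: float = 80.0):
        # Least-loaded peer that stays under the utilization threshold
        candidates = {h: u for h, u in peer_cpu.items() if u < threshold}
        return min(candidates, key=candidates.get) if candidates else None

    # A computing server (CS) predicts it will be overloaded and picks a
    # migration target from authenticated peer reports.
    my_cpu_history = [62.0, 70.0, 78.0, 85.0]
    reports = {"cs-2": {"cpu": 40.0}, "cs-3": {"cpu": 55.0}}
    signed = {h: (r, sign(r)) for h, r in reports.items()}

    if predict_cpu(my_cpu_history) > 80.0:
        peer_cpu = {h: r["cpu"] for h, (r, tag) in signed.items() if verify(r, tag)}
        target = choose_target(peer_cpu)
        print("migrate a VM to", target)  # -> migrate a VM to cs-2

  Rejecting unverifiable reports before the placement decision is what prevents a compromised host from steering migrations toward, and over-utilizing, a chosen peer.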

Acceleration of computation speed for elastic wave simulation using a Graphic Processing Unit

  • Nakata, Norimitsu; Tsuji, Takeshi; Matsuoka, Toshifumi / Geophysics and Geophysical Exploration / v.14 no.1 / pp.98-104 / 2011
  • Numerical simulation in exploration geophysics provides important insights into subsurface wave propagation phenomena. Although elastic wave simulations take longer to compute than acoustic simulations, an elastic simulator can construct more realistic wavefields, including shear components, and is therefore suitable for exploring the responses of elastic bodies. To overcome the long duration of the calculations, we use a Graphic Processing Unit (GPU) to accelerate the elastic wave simulation. Because a GPU has many processors and a wide memory bandwidth, we can use it as a parallel computing architecture. The GPU board used in this study is an NVIDIA Tesla C1060, which has 240 processors and a 102 GB/s memory bandwidth. Although NVIDIA provides a parallel computing architecture (CUDA), we must still optimise the usage of the different types of memory on the GPU device, and the sequence of calculations, to obtain a significant speedup of the computation. In this study, we simulate two-dimensional (2D) and three-dimensional (3D) elastic wave propagation using the Finite-Difference Time-Domain (FDTD) method on GPUs. In the wave propagation simulation, we adopt the staggered-grid method, one of the conventional FD schemes, since it achieves sufficient accuracy for numerical modelling in geophysics. Our simulator optimises the usage of memory on the GPU device to reduce data access times, using the faster types of memory as much as possible; this is a key factor in GPU computing. By using one GPU device and optimising its memory usage, we reduced the computation time by a factor of more than 14 in the 2D simulation, and more than six in the 3D simulation, compared with one CPU. Furthermore, by using three GPUs, we accelerated the 3D simulation by a factor of 10.
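
  A minimal sketch of the staggered-grid velocity-stress FDTD scheme this abstract refers to, in 2D. The grid size, material parameters, time step, and source wavelet are illustrative assumptions, and the paper's CUDA kernels and GPU memory optimisations are not reproduced; this NumPy version shows only the update scheme the simulator is built on.

    import numpy as np

    # Illustrative 2D P-SV staggered-grid FDTD (velocity-stress formulation).
    # All parameter values below are assumptions for demonstration only.
    nx, nz = 200, 200
    dx = dz = 5.0                           # grid spacing [m]
    dt = 5e-4                               # time step [s]; within the CFL limit dx / (vp * sqrt(2))
    vp, vs, rho = 3000.0, 1800.0, 2200.0    # homogeneous P velocity, S velocity, density
    lam = rho * (vp ** 2 - 2 * vs ** 2)     # Lame parameter lambda
    mu = rho * vs ** 2                      # shear modulus

    # Particle velocities and stresses on the staggered grid
    vx = np.zeros((nz, nx)); vz = np.zeros((nz, nx))
    txx = np.zeros((nz, nx)); tzz = np.zeros((nz, nx)); txz = np.zeros((nz, nx))

    for it in range(500):
        # Velocity updates from stress gradients (2nd-order central differences)
        vx[1:-1, 1:-1] += dt / rho * ((txx[1:-1, 2:] - txx[1:-1, 1:-1]) / dx
                                      + (txz[1:-1, 1:-1] - txz[:-2, 1:-1]) / dz)
        vz[1:-1, 1:-1] += dt / rho * ((txz[1:-1, 1:-1] - txz[1:-1, :-2]) / dx
                                      + (tzz[2:, 1:-1] - tzz[1:-1, 1:-1]) / dz)
        # Stress updates from velocity gradients
        dvx_dx = (vx[1:-1, 1:-1] - vx[1:-1, :-2]) / dx
        dvz_dz = (vz[1:-1, 1:-1] - vz[:-2, 1:-1]) / dz
        txx[1:-1, 1:-1] += dt * ((lam + 2 * mu) * dvx_dx + lam * dvz_dz)
        tzz[1:-1, 1:-1] += dt * (lam * dvx_dx + (lam + 2 * mu) * dvz_dz)
        txz[1:-1, 1:-1] += dt * mu * ((vx[2:, 1:-1] - vx[1:-1, 1:-1]) / dz
                                      + (vz[1:-1, 2:] - vz[1:-1, 1:-1]) / dx)
        # Explosive point source: a Gaussian wavelet added to the normal stresses
        src = np.exp(-((it * dt - 0.05) / 0.01) ** 2)
        txx[nz // 2, nx // 2] += src
        tzz[nz // 2, nx // 2] += src

    print("peak |vx| after 500 steps:", np.abs(vx).max())

  On a GPU, each of these array updates becomes a kernel over the grid, and, as the abstract notes, the speedup comes largely from staging the field arrays in the faster on-chip memory rather than from the arithmetic itself.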