• Title/Summary/Keyword: Computing Costs

A Quantitative Approach to Minimize Energy Consumption in Cloud Data Centres using VM Consolidation Algorithm

  • M. Hema;S. KanagaSubaRaja
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.312-334
    • /
    • 2023
  • In large-scale computing, cloud computing plays an important role by sharing globally distributed resources. The evolution of the cloud has driven the development of data centers and numerous servers across the globe. But cloud data centers incur huge operational costs, consume large amounts of electricity, and emit tons of carbon dioxide. Cloud providers can leverage their resources and decrease energy consumption through methods such as dynamic consolidation of Virtual Machines (VMs), keeping idle nodes in sleep mode, and using live migration. However, aggressive consolidation of VMs may degrade performance, so an energy-performance trade-off that reduces power consumption without compromising quality of service is desirable. This research article details a number of novel algorithms that dynamically consolidate VMs in cloud data centers. The primary objective of the study is to utilize the computing resources fully and reduce energy consumption while respecting the Service Level Agreement (SLA) constraints on CPU load, RAM capacity, and bandwidth. The proposed VM Consolidation Algorithm (PVMCA) comprises four algorithms: an overloaded-host detection algorithm, a VM selection algorithm, a VM placement algorithm, and an underloaded-host detection algorithm. PVMCA is dynamic because it uses dynamic thresholds instead of static threshold values, which makes it suitable for the real, unpredictable workloads common in cloud data centers. The algorithms are also adaptive because they adjust their behavior based on historical host resource utilization data for applications with diverse workload patterns. Finally, the proposed algorithm is online because the algorithms execute at run time and act in response to each request.
The proposed algorithms' efficiency was validated through extensive simulations. The analysis shows that the proposed algorithms reduced energy consumption to a considerable degree while ensuring proper SLA compliance. With the proposed algorithms, energy consumption was reduced by 22% and SLA adherence improved by up to 80% compared to other benchmark algorithms.
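The dynamic-threshold idea behind overloaded-host detection can be sketched as follows. The MAD-based rule and the `safety` parameter are illustrative assumptions, not the exact PVMCA formulation:

```python
import statistics

def is_overloaded(cpu_history, safety=2.5):
    """Flag a host as overloaded using a dynamic threshold derived from
    the median absolute deviation (MAD) of its CPU utilization history.
    Illustrative sketch only -- not the paper's exact detection rule."""
    median = statistics.median(cpu_history)
    mad = statistics.median(abs(u - median) for u in cpu_history)
    # Volatile hosts get a lower threshold, triggering migration earlier.
    threshold = min(1.0, 1.0 - safety * mad)
    return cpu_history[-1] >= threshold

# A volatile host trips the adaptive threshold; a steady one does not.
print(is_overloaded([0.5, 0.9, 0.4, 0.95, 0.92]))   # True
print(is_overloaded([0.8, 0.81, 0.79, 0.8, 0.82]))  # False
```

Because the threshold is recomputed from each host's own recent history, no per-workload tuning of a static cutoff is needed.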

ECPS: Efficient Cloud Processing Scheme for Massive Contents (클라우드 환경에서 대규모 콘텐츠를 위한 효율적인 자원처리 기법)

  • Na, Moon-Sung;Kim, Seung-Hoon;Lee, Jae-Dong
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.15 no.4
    • /
    • pp.17-27
    • /
    • 2010
  • Major IT vendors expect that cloud computing technology will make it possible to shorten the content service cycle, speed up application deployment, skip the installation process, reduce operational costs, and enable proactive management. However, a cloud computing environment for massive content service solutions requires high-performance data processing to reduce the time of data processing and analysis. In this study, the Efficient Cloud Processing Scheme (ECPS) is proposed for allocating resources for massive content services. For high-performance services, an optimized resource-allocation plan is presented using MapReduce programming techniques and association rules, which are used to detect hidden patterns in data mining, on top of the Hadoop platform (Infrastructure as a Service). The proposed ECPS achieved more than 20% improvement in performance and speed compared to traditional methods.
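As a rough illustration of the MapReduce style that ECPS builds on, the following is a minimal in-process sketch; the `mapreduce` helper and the content-access log are invented for illustration and are not part of ECPS:

```python
from collections import defaultdict

def mapreduce(records, mapper, reducer):
    """Minimal in-process MapReduce: map each record to (key, value)
    pairs, shuffle by key, then reduce each key's value list."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):   # map phase
            groups[key].append(value)       # shuffle/group by key
    return {key: reducer(key, values) for key, values in groups.items()}

# Hypothetical content-access log: (content id, seconds served).
logs = [("video.mp4", 120), ("doc.pdf", 30), ("video.mp4", 200)]
totals = mapreduce(logs,
                   mapper=lambda r: [(r[0], r[1])],
                   reducer=lambda k, vs: sum(vs))
print(totals)  # {'video.mp4': 320, 'doc.pdf': 30}
```

On a real Hadoop cluster the map and reduce phases run on distributed workers, but the key grouping semantics are the same.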

Securing Sensitive Data in Cloud Storage (클라우드 스토리지에서의 중요데이터 보호)

  • Lee, Shir-Ly;Lee, Hoon-Jae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.04a
    • /
    • pp.871-874
    • /
    • 2011
  • The fast emergence of network technology and the high demand for computing resources have prompted many organizations to outsource their storage and computing needs. Cloud-based storage services such as Microsoft's Azure and Amazon's S3 allow customers to store and retrieve any amount of data, at any time, from anywhere via the Internet. The scalability and dynamic nature of cloud storage services help customers reduce IT administration and maintenance costs. Cloud-based storage undoubtedly brings many benefits to its customers by significantly reducing cost through optimization and increased operating and economic efficiency. However, without an appropriate security and privacy solution in place, it can become a major issue for the organization. As data is produced, transferred, and stored in off-premises, multi-tenant cloud storage, it becomes vulnerable to unauthorized disclosure and unauthorized modification. An attacker may change or modify data while it is in flight or stored on disk, so it is very important to secure data during its entire life cycle. Traditional cryptographic primitives for data protection cannot be adopted directly, for three reasons: users lose control of data held on off-premises cloud servers; cloud storage is not just a third-party data warehouse, since the stored data is frequently updated by users; and cloud computing runs in a simultaneous, cooperative, and distributed manner. The proposed mechanism protects the integrity, authentication, and confidentiality of cloud-based data with an encrypt-then-upload approach, applying a modified proxy re-encryption protocol.
The whole process never reveals the clear data to any third party, including the cloud provider, at any stage. This ensures that only authorized users who own the corresponding token can access the data, and prevents data from being shared without permission from the data owner. Besides preventing cloud storage providers from gaining unauthorized access or granting illegal authorization to access the data, the scheme also protects data integrity using a hash function.
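The encrypt-then-upload idea can be sketched as follows. This is a toy illustration only: the hash-derived XOR keystream stands in for a real cipher (the paper's proxy re-encryption protocol is not reproduced here), and the HMAC tag plays the role of the integrity hash:

```python
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    """Toy keystream from iterated SHA-256 -- NOT a secure cipher;
    a real deployment would use an AEAD such as AES-GCM."""
    stream = hashlib.sha256(key + nonce).digest()
    while len(stream) < length:
        stream += hashlib.sha256(stream).digest()
    return stream[:length]

def encrypt_then_upload(data, key):
    """Encrypt locally, then upload only ciphertext + integrity tag.
    The key and the plaintext never leave the data owner."""
    nonce = os.urandom(16)
    ciphertext = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).hexdigest()
    return nonce, ciphertext, tag

def verify_and_decrypt(nonce, ciphertext, tag, key):
    """Reject tampered ciphertext before decrypting."""
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, nonce, len(ciphertext))))

key = os.urandom(32)
n, c, t = encrypt_then_upload(b"patient record 42", key)
assert verify_and_decrypt(n, c, t, key) == b"patient record 42"
```

The cloud provider sees only `(nonce, ciphertext, tag)`, so confidentiality and integrity both rest on the owner-held key.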

The Study on the Sensorless PMSM Control using the Superposition Theory (중첩의 정리를 이용한 PMSM의 센서리스 제어에 관한 연구)

  • Hong, Jeng-Pyo;Kwon, Soon-Jae;Kim, Gyu-Seob;Sohn, Mu-Heon;Kim, Jong-Dal
    • Proceedings of the KIEE Conference
    • /
    • 2002.07b
    • /
    • pp.756-760
    • /
    • 2002
  • This study presents a solution for controlling a Permanent Magnet Synchronous Motor without sensors. The control method presented is based on the superposition principle. This sensorless method makes computing the estimated angle very simple, so the computing time to estimate the angle is shorter than in other sensorless methods. The use of this system yields enhanced operation, fewer system components, lower system cost, energy-efficient control system design, and increased efficiency. A practical solution is described and results are given in this study. The sensorless architecture allows an intelligent approach to reducing the overall system cost of digital motion control applications by using cheaper electrical motors without sensors. This paper gives an overview of sensorless solutions in PMSM control applications, with the focus on the new sensorless controller and its applications.


Fault-Tolerant Analysis of Redundancy Techniques in VLSI Design Environment

  • Cho Jai-Rip
    • Proceedings of the Korean Society for Quality Management Conference
    • /
    • 1998.11a
    • /
    • pp.393-403
    • /
    • 1998
  • The advent of very large scale integration (VLSI) has had a tremendous impact on the design of fault-tolerant circuits and systems. The increasing density, decreasing power consumption, and decreasing costs of integrated circuits, due in part to VLSI, have made it possible and practical to implement the redundancy approaches used in fault-tolerant computing. The purpose of this paper is to study the many aspects of designing fault-tolerant systems in a VLSI environment. First, we expound upon the opportunities and problems presented by VLSI technology. Second, we consider in detail the importance of design mistakes, common-mode failures, and transient faults in VLSI. Finally, we examine the techniques available to implement redundancy using VLSI and the problems associated with these techniques.
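One classic redundancy technique from fault-tolerant computing, triple modular redundancy (TMR), reduces to a simple majority voter. The sketch below is a generic illustration, not taken from the paper:

```python
def tmr_vote(a, b, c):
    """Majority voter for triple modular redundancy (TMR): three
    replicated modules compute the same result, and the voter output
    is correct as long as at most one replica is faulty. The bitwise
    form works per bit, so it also handles word-wide signals."""
    return (a & b) | (b & c) | (a & c)

# One faulty replica (0b1011, a single flipped bit) is outvoted:
print(bin(tmr_vote(0b1010, 0b1010, 0b1011)))  # 0b1010
```

TMR masks single transient faults at the cost of three times the area, which is exactly the kind of trade-off that falling VLSI costs made practical.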


Exploring Smartphone-Based Indoor Navigation: A QR Code Assistance-Based Approach

  • Chirakkal, Vinjohn V;Park, Myungchul;Han, Dong Seog
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.3
    • /
    • pp.173-182
    • /
    • 2015
  • Real-time indoor navigation systems utilize ultra-wideband (UWB), radio-frequency identification (RFID), and received signal strength (RSS) techniques encompassing WiFi, FM, mobile communications, and similar technologies. These systems typically require additional infrastructure for their implementation, which significantly increases cost and complexity. Therefore, as a solution to reduce cost and complexity, an inertial measurement unit (IMU) and quick response (QR) codes are utilized in this paper to facilitate navigation with the assistance of a smartphone. The QR codes compensate for the errors accumulated by the pedestrian dead reckoning (PDR) algorithm, thereby providing more accurate localization. The proposed algorithm, combining the IMU with QR codes, shows an accuracy of 0.64 m, which is better than existing indoor navigation techniques.
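The PDR-plus-QR idea can be sketched as follows; the step length, heading, and QR position below are invented example values, not the paper's experimental data:

```python
import math

def pdr_step(pos, heading, step_len):
    """One pedestrian-dead-reckoning update: advance the position by
    one detected step along the current heading (radians). Errors in
    step length and heading accumulate over time (drift)."""
    x, y = pos
    return (x + step_len * math.cos(heading),
            y + step_len * math.sin(heading))

def qr_fix(pos, qr_pos):
    """A scanned QR code carries its surveyed floor position; snapping
    to it resets the drift accumulated by the IMU-driven PDR steps."""
    return qr_pos

pos = (0.0, 0.0)
for _ in range(10):                # 10 detected steps of 0.7 m heading east
    pos = pdr_step(pos, 0.0, 0.7)
pos = qr_fix(pos, (6.8, 0.1))      # QR at the corridor door corrects drift
print(pos)                         # (6.8, 0.1)
```

Between QR scans the position error grows with the step count; each scan bounds that error again, which is why the hybrid outperforms PDR alone.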

A Study of Location Management System using SIP in Medical Environment (의료환경에서 SIP를 이용한 위치관리시스템에 관한 연구)

  • Kim, Kkyong-Mok;Park, Yong-Min
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.430-433
    • /
    • 2006
  • RFID, a technology that processes information from tags attached to physical objects via radio frequency, is noted as a key enabling technology for ubiquitous computing. One of its most important functions is location tracking of objects in an RFID system. With this in mind, the proposed EPCglobal network provides location tracking and location services. However, RFID systems have drawbacks: special construction and components are strictly necessary, the service is difficult to integrate with other services, and it increases costs considerably. This thesis presents a location service for hospitals using RFID based on SIP.


Auto Classification of Ship Surface Plates By Neural-Networks (신경망을 이용한 선박의 곡가공 외판 분류 자동화)

  • Kim, Soo-Young;Shin, Sung-Chul;Gim, Tae-Gun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.12 no.2
    • /
    • pp.103-108
    • /
    • 2002
  • Manufacturing the complex curved surface plates in the stern and stem is a major factor in computing the processing cost of a ship. If these parts are effectively classified, it helps to compute the processing cost and find ways to cut down on processing costs. This study is intended to classify surface plates effectively. To solve this problem, we apply neural-network pattern classification.

A SURVEY OF QUALITY OF SERVICE IN MULTI-TIER WEB APPLICATIONS

  • Ghetas, Mohamed;Yong, Chan Huah;Sumari, Putra
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.238-256
    • /
    • 2016
  • Modern web services are broadly deployed on the Internet. Most of these services use a multi-tier architecture for flexible scaling and software reusability. However, managing the performance of multi-tier web services under dynamic and unpredictable workloads, with different resource demands in each tier, is a critical problem for a service provider. To offer quality-of-service assurance at the lowest resource usage cost, web service providers should adopt self-adaptive resource provisioning in each tier. Recently, a number of rule- and model-based approaches have been designed for dynamic resource management in virtualized data centers. This survey investigates the challenges of resource provisioning and provides a comparative assessment of the existing approaches. After evaluating their benefits and drawbacks, new research directions to improve the efficiency of resource management are introduced, along with recommendations.

Load bearing capacity reduction of concrete structures due to reinforcement corrosion

  • Chen, Hua-Peng;Nepal, Jaya
    • Structural Engineering and Mechanics
    • /
    • v.75 no.4
    • /
    • pp.455-464
    • /
    • 2020
  • Reinforcement corrosion is one of the major problems in the durability of reinforced concrete structures exposed to aggressive environments. Deterioration caused by reinforcement corrosion reduces the durability and the safety margin of concrete structures, causing excessive costs in managing these structures safely. This paper aims to investigate the effects of reinforcement corrosion on the load bearing capacity deterioration of the corroded reinforced concrete structures. A new analytical method is proposed to predict the crack growth of cover concrete and evaluate the residual strength of concrete structures with corroded reinforcement failing in bond. The structural performance indicators, such as concrete crack growth and flexural strength deterioration rate, are assumed to be a stochastic process for lifetime distribution modelling of structural performance deterioration over time during the life cycle. The Weibull life evolution model is employed for analysing lifetime reliability and estimating remaining useful life of the corroded concrete structures. The results for the worked example show that the proposed approach can provide a reliable method for lifetime performance assessment of the corroded reinforced concrete structures.
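The Weibull lifetime model used for the reliability evaluation reduces to the survival function R(t) = exp(-(t/η)^β); the sketch below uses illustrative shape and scale parameters, not values fitted in the paper:

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull survival function R(t) = exp(-(t/eta)**beta): the
    probability that the corroded member still meets its strength
    demand at age t (years). beta (shape) and eta (scale) here are
    illustrative assumptions, not the paper's fitted values."""
    return math.exp(-((t / eta) ** beta))

# Reliability declines with age; beta > 1 models wear-out behaviour.
for t in (10, 30, 50):
    print(t, round(weibull_reliability(t, beta=2.0, eta=40.0), 3))
```

The remaining useful life follows by solving R(t) for the age at which reliability drops below a target value.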