• Title/Summary/Keyword: virtualization system

Loan/Redemption Scheme for I/O performance improvement of Virtual Machine Scheduler (가상머신 스케줄러의 I/O 성능 향상을 위한 대출/상환 기법)

  • Kim, Kisu; Jang, Joonhyouk; Hong, Jiman
    • Smart Media Journal, v.5 no.4, pp.18-25, 2016
  • Virtualized hardware resources provide efficiency of use and ease of management. Based on these benefits, virtualization techniques are used to build large server clusters and cloud systems. The performance of a virtualized system is significantly affected by the virtual machine scheduler. However, existing virtual machine schedulers have a problem in that I/O responsiveness degrades as the scheduling delay grows longer. In this paper, we introduce a Loan/Redemption mechanism for a virtual machine scheduler in order to improve responsiveness to I/O events. The proposed scheme gives additional credits to virtual machines and classifies the task characteristics of each virtual machine by analyzing its credit consumption pattern. When an I/O event arrives, the scheduling priority of a virtual machine is temporarily increased based on this analysis. An evaluation of our implementation shows that the proposed scheme improves the I/O response of virtual machines by 60% and their bandwidth by 62% compared to the existing virtual machine scheduler.
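
A minimal sketch of the loan/redemption idea as it might sit on top of a Xen-style credit scheduler: an I/O-bound vCPU, identified by its credit-consumption pattern, borrows extra credits when an I/O event arrives so it jumps ahead in the run queue, and the loan is deducted at the next replenishment. All names, thresholds, and the classification rule below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a loan/redemption extension to a credit scheduler.
from dataclasses import dataclass

@dataclass
class VCpu:
    name: str
    credits: int = 300          # periodically replenished, Xen-style
    loan: int = 0               # credits borrowed against future allocations
    io_events: int = 0          # observed I/O activity in the last period
    consumed: int = 0           # credits consumed while running (updated by the scheduler)

    def is_io_bound(self) -> bool:
        # I/O-bound VMs block often, so they see many I/O events but
        # consume few credits; the cutoff 100 is an assumed value.
        return self.io_events > 0 and self.consumed < 100

LOAN_LIMIT = 150   # maximum credits a vCPU may borrow (assumed value)

def on_io_event(vcpu: VCpu) -> None:
    """Lend credits so the vCPU is temporarily prioritized for its I/O."""
    vcpu.io_events += 1
    if vcpu.is_io_bound() and vcpu.loan < LOAN_LIMIT:
        grant = min(50, LOAN_LIMIT - vcpu.loan)
        vcpu.credits += grant
        vcpu.loan += grant      # must be redeemed at the next replenishment

def replenish(vcpus: list[VCpu], allocation: int = 300) -> None:
    """Periodic credit refill; borrowed credits are redeemed (deducted)."""
    for v in vcpus:
        v.credits = allocation - v.loan
        v.loan = 0
        v.io_events = v.consumed = 0

def pick_next(vcpus: list[VCpu]) -> VCpu:
    # Higher remaining credit runs first, so a freshly "loaned" vCPU
    # handles its pending I/O promptly.
    return max(vcpus, key=lambda v: v.credits)
```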

Performance and Energy Oriented Resource Provisioning in Cloud Systems Based on Dynamic Thresholds and Host Reputation (클라우드 시스템에서 동적 임계치와 호스트 평판도를 기반으로 한 성능 및 에너지 중심 자원 프로비저닝)

  • Elijorde, Frank I.; Lee, Jaewan
    • Journal of Internet Computing and Services, v.14 no.5, pp.39-48, 2013
  • A cloud system has to deal with highly variable workloads resulting from dynamic usage patterns in order to keep the QoS within the predefined SLA. Aside from service-level concerns, another emerging concern is keeping energy consumption to a minimum. This requires cloud providers to consider the energy-performance trade-off when allocating virtualized resources in cloud data centers. In this paper, we propose a resource provisioning approach based on dynamic thresholds to detect the workload level of the host machines. The VM selection policy uses utilization data to choose a VM for migration, while the VM allocation policy assigns VMs to a host based on its service reputation. We evaluated our work through simulations, and the results show that it outperforms non-power-aware methods that do not support migration, as well as those based on static thresholds and a random selection policy.
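
The two policies can be pictured with a short sketch: dynamic thresholds derived from recent host utilization trigger migration, a utilization-based selection policy picks the victim VM, and a reputation-based allocation policy picks the destination host. The threshold formula, field names, and numbers below are assumptions for illustration, not the authors' simulator code.

```python
# Illustrative sketch of dynamic-threshold detection, VM selection, and
# reputation-based VM allocation.

def dynamic_thresholds(cpu_history, k=1.0):
    """Derive lower/upper utilization thresholds from recent host load."""
    mean = sum(cpu_history) / len(cpu_history)
    var = sum((u - mean) ** 2 for u in cpu_history) / len(cpu_history)
    std = var ** 0.5
    return max(0.0, mean - k * std), min(1.0, mean + k * std)

def select_vm_for_migration(host_vms):
    """Pick the VM whose utilization contributes most to the overload."""
    return max(host_vms, key=lambda vm: vm["cpu_util"])

def allocate_vm(vm, hosts):
    """Place the VM on the highest-reputation host that can still fit it."""
    candidates = [h for h in hosts if h["free_cpu"] >= vm["cpu_demand"]]
    return max(candidates, key=lambda h: h["reputation"], default=None)

# Example control loop for one host
host = {"cpu_history": [0.62, 0.71, 0.93, 0.95], "vms": [
    {"id": "vm1", "cpu_util": 0.40, "cpu_demand": 0.4},
    {"id": "vm2", "cpu_util": 0.55, "cpu_demand": 0.5},
]}
lower, upper = dynamic_thresholds(host["cpu_history"])
if host["cpu_history"][-1] > upper:
    victim = select_vm_for_migration(host["vms"])
    # target = allocate_vm(victim, other_hosts)  # migrate to a reputable host
```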

The Vulnerability Analysis for Virtualization Environment Risk Model Management Systematization (가상화 환경 위험도 관리체계화를 위한 취약점 분석)

  • Park, Mi-Young; Seung, Hyen-Woo; Lim, Yang-Mi
    • Journal of Internet Computing and Services, v.14 no.3, pp.23-33, 2013
  • Cloud computing technology has recently been deployed rapidly across society because of its flexibility, efficiency, and cost savings. However, cloud computing systems still suffer from significant security vulnerabilities. To address this, this study identifies the types of impact that vulnerabilities have on virtual machines and prioritizes them according to a risk evaluation of each virtual machine vulnerability. For the analysis, risk measurement criteria were defined based on CVSS 2.0, an open framework, and risk measurement was systematized by scoring the relevant vulnerabilities. The proposed criteria are intended to capture the fundamental characteristics of each vulnerability and to indicate its degree of risk, so that they can serve as a technical guide for minimizing vulnerabilities. The suggested risk criteria are also meaningful in themselves and could be used in future technology policy projects.
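
For reference, the public CVSS v2.0 base-score equation that such risk criteria build on can be sketched as follows; the paper's own weighting and virtualization-specific extensions are not reproduced here.

```python
# CVSS v2.0 base score: standard metric values and equation.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}      # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}     # Conf./Integ./Avail. impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# A remotely exploitable, low-complexity, unauthenticated flaw with
# complete C/I/A impact scores the maximum:
print(cvss2_base("N", "L", "N", "C", "C", "C"))   # -> 10.0
```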

Software Architecture of the Grid for implementing the Cloud Computing of the High Availability (고가용성 클라우드 컴퓨팅 구축을 위한 그리드 소프트웨어 아키텍처)

  • Lee, Byoung-Yup; Park, Jun-Ho; Yoo, Jae-Soo
    • The Journal of the Korea Contents Association, v.12 no.2, pp.19-29, 2012
  • Cloud computing is currently offered in various service forms and is becoming a groundbreaking service that provides storage, data, and software without requiring the user to deal with technical details such as the physical location of the service or the system environment. Its advantage is that users can freely consume as many IT resources as they need, regardless of the hardware required by individual systems or the service level required by the infrastructure. Because resource usage can also be tailored to the business model through various Internet-based technologies, provisioning and virtualization are attracting attention as the core enabling technologies; they are key elements that allow web-based users to access and deploy resources freely according to their environment. This paper therefore analyzes cloud computing technology trends and introduces grid-oriented software technologies and architectures for building a highly available cloud computing environment.

A Study on Sharing the Remote Devices through USB over IP (USB/IP를 이용한 원격장치공유에 대한 연구)

  • Yoo, Jin-Ho
    • Journal of the Korea Academia-Industrial cooperation Society, v.11 no.11, pp.4592-4596, 2010
  • This paper presents a method for sharing remote devices through USB/IP in a connected system environment. Sharing remote devices relies on USB, which is one of the access methods in a virtualized server environment. Users who are allocated a server computing unit may want to connect their local devices to the remotely allocated server, and this access problem can be solved through USB device emulation. The paper discusses an implementation of USB emulation for personalized services in a virtualized server environment, so that devices are shared at the USB device level; because the device level itself is virtualized, the system can write to the device directly.
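
For context, a rough sketch of how a device is exported and attached with the standard Linux usbip tooling is shown below; the paper's own personalized emulation layer is not reproduced, and the bus ID, host address, and the assumption that the stock usbip userspace tools and kernel modules are available are placeholders.

```python
# Sketch: share a USB device over IP with the standard Linux usbip tools.
import subprocess

def export_device(busid: str) -> None:
    """Server side: bind a local USB device to the usbip host driver."""
    subprocess.run(["modprobe", "usbip-host"], check=True)
    subprocess.run(["usbipd", "-D"], check=True)          # start the usbip daemon
    subprocess.run(["usbip", "bind", "-b", busid], check=True)

def attach_device(server: str, busid: str) -> None:
    """Client side: import the remote device as if it were locally plugged in."""
    subprocess.run(["modprobe", "vhci-hcd"], check=True)  # virtual host controller
    subprocess.run(["usbip", "attach", "-r", server, "-b", busid], check=True)

# export_device("1-1.2")              # on the machine that owns the device
# attach_device("10.0.0.5", "1-1.2")  # on the virtualized server / client
```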

Green Information Systems Research: A Decade in Review and Future Agenda (그린 정보시스템 연구: 과거 10년간 연구 동향 분석 및 향후 과제)

  • Lee, Ha-Bin
    • Informatization Policy, v.27 no.4, pp.3-23, 2020
  • It has been two decades since Green Information Systems first attracted scholars in information systems research. The worldwide surge of sustainability issues has naturally led Information Systems scholars to turn their attention to understanding how the use of Information Systems affects our society and environment. This paper reviews studies on Green Information Systems (Green ISs) to evaluate the efforts made in the last decade. Based on a systematic approach, 64 articles published in peer-reviewed international journals in the Information Systems and Business & Management disciplines are analyzed to identify research gaps and propose a future research agenda for Green ISs, including the application of psychological theory to the design of Green ISs, energy-efficient IT/IS in response to accelerated virtualization, and the contribution of Green ISs to biodiversity.

An Offloading Scheduling Strategy with Minimized Power Overhead for Internet of Vehicles Based on Mobile Edge Computing

  • He, Bo; Li, Tianzhang
    • Journal of Information Processing Systems, v.17 no.3, pp.489-504, 2021
  • By distributing computing tasks among devices at the edge of networks, edge computing uses virtualization, distributed computing, and parallel computing technologies to let users dynamically obtain computing power, storage space, and other services as needed. Applying edge computing architectures to the Internet of Vehicles can effectively ease the tension between computation-heavy, delay-sensitive vehicle applications and the limited, unevenly distributed resources of vehicles. In this paper, a predictive offloading strategy based on the MEC load state is proposed, which considers not only reducing the delay of returning calculation results over the RSU multi-hop backhaul but also reducing the queuing time of tasks at MEC servers. First, a delay factor and an energy consumption factor are introduced according to the characteristics of tasks, and the costs of local execution and of offloading to MEC servers are defined. Then, from the perspective of vehicles, a delay preference factor and an energy consumption preference factor are introduced to define the overall cost of executing a computing task, so that a task is offloaded to an MEC server or executed locally according to whichever cost is smaller. Furthermore, a mathematical optimization model for minimizing the power overhead is constructed under delay and power consumption constraints, and the simulated annealing algorithm is used to solve it. The simulation results show that this strategy effectively reduces system power consumption by shortening the task execution delay, meeting the delay and energy consumption requirements at the lowest cost.
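
A toy version of the cost model and the simulated-annealing search might look like the sketch below; the delay/energy preference factors, link rates, and task parameters are illustrative assumptions, not the paper's values.

```python
# Sketch: local-vs-offload cost with delay/energy preference factors, plus a
# simple simulated-annealing search over the binary offloading vector.
import math, random

def local_cost(task, alpha, beta):
    t = task["cycles"] / task["local_cps"]               # local execution delay
    e = task["local_power"] * t                          # local energy consumed
    return alpha * t + beta * e

def offload_cost(task, alpha, beta, uplink_bps, mec_cps, queue_delay):
    t_tx = task["bits"] / uplink_bps                     # transmission delay
    t = t_tx + task["cycles"] / mec_cps + queue_delay    # + MEC exec + queuing
    e = task["tx_power"] * t_tx                          # radio energy only
    return alpha * t + beta * e

def total_cost(x, tasks, alpha, beta, uplink, mec_cps, q_delay):
    return sum(offload_cost(t, alpha, beta, uplink, mec_cps, q_delay)
               if d else local_cost(t, alpha, beta)
               for d, t in zip(x, tasks))

def anneal(tasks, alpha=0.5, beta=0.5, uplink=5e6, mec_cps=10e9, q_delay=0.02):
    """Flip one offloading decision per step; accept worse moves with prob e^(-d/T)."""
    x = [random.random() < 0.5 for _ in tasks]
    best, best_cost = x[:], total_cost(x, tasks, alpha, beta, uplink, mec_cps, q_delay)
    temp = 1.0
    while temp > 1e-3:
        y = x[:]
        y[random.randrange(len(y))] ^= True
        c_x = total_cost(x, tasks, alpha, beta, uplink, mec_cps, q_delay)
        c_y = total_cost(y, tasks, alpha, beta, uplink, mec_cps, q_delay)
        if c_y < c_x or random.random() < math.exp((c_x - c_y) / temp):
            x = y
            if c_y < best_cost:
                best, best_cost = y[:], c_y
        temp *= 0.95
    return best, best_cost

# e.g. anneal([{"cycles": 4e8, "local_cps": 1e9, "local_power": 0.8,
#               "bits": 2e6, "tx_power": 0.5} for _ in range(10)])
```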

Integrating Resilient Tier N+1 Networks with Distributed Non-Recursive Cloud Model for Cyber-Physical Applications

  • Okafor, Kennedy Chinedu; Longe, Omowunmi Mary
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.7, pp.2257-2285, 2022
  • Cyber-physical systems (CPS) have been growing exponentially due to improved cloud-datacenter infrastructure-as-a-service (CDIaaS). Incremental expandability (scalability), Quality of Service (QoS) performance, and reliability are currently the automation focus for healthy Tier 4 CDIaaS. However, stable QoS is yet to be fully addressed in cyber-physical data centers (CP-DCS), and balanced agility and flexibility for application workloads need urgent attention. There is a need for a resilient, fault-tolerant scheme for CPS routing services, including Pod cluster reliability analytics that meets QoS requirements. Motivated by these concerns, our contributions are fourfold. First, a Distributed Non-Recursive Cloud Model (DNRCM) is proposed to support cyber-physical workloads for remote lab activities. Second, an efficient QoS stability model based on the Routh-Hurwitz criterion is established. Third, the CDIaaS DCN topology is validated for handling large-scale traffic workloads. Network Function Virtualization (NFV) with Floodlight SDN controllers was adopted for the implementation of DNRCM, with an embedded rule base in Open vSwitch engines. Fourth, QoS is evaluated experimentally. With logical SDN isolation, a lower non-recursive queuing delay (19.65%) is observed; without logical isolation, the average queuing delay is 80.34%. Without logical resource isolation, fault tolerance yields 33.55%, while with logical isolation it yields 66.44%. In terms of throughput, DNRCM, recursive BCube, and DCell offered 38.30%, 36.37%, and 25.53%, respectively. Similarly, DNRCM had an improved incremental scalability profile of 40.00%, while BCube and recursive DCell had 33.33% and 26.67%, respectively. In terms of service availability, DNRCM offered 52.10%, compared with recursive BCube and DCell, which yielded 34.72% and 13.18%, respectively. The average delays obtained for DNRCM, recursive BCube, and DCell are 32.81%, 33.44%, and 33.75%, respectively. Finally, workload utilization for DNRCM, recursive BCube, and DCell yielded 50.28%, 27.93%, and 21.79%, respectively.
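
The Routh-Hurwitz test behind such a QoS stability model can be sketched as follows: the coefficients of the characteristic polynomial are arranged into a Routh array, and the system is stable when the first column does not change sign. This simplified version ignores the zero-row special cases and is only an illustration of the criterion, not the authors' model.

```python
# Sketch: Routh array construction and a basic stability check.
def routh_array(coeffs):
    """coeffs: characteristic-polynomial coefficients, highest degree first."""
    n = len(coeffs)
    rows = [coeffs[0::2],
            coeffs[1::2] + [0.0] * (len(coeffs[0::2]) - len(coeffs[1::2]))]
    for _ in range(n - 2):
        prev, cur = rows[-2], rows[-1]
        pivot = cur[0] if cur[0] != 0 else 1e-9          # crude epsilon trick
        row = [(pivot * prev[j + 1] - prev[0] * cur[j + 1]) / pivot
               for j in range(len(cur) - 1)] + [0.0]
        rows.append(row)
    return rows

def is_stable(coeffs):
    first_col = [r[0] for r in routh_array(coeffs) if any(r)]
    return all(c > 0 for c in first_col) or all(c < 0 for c in first_col)

# s^3 + 3s^2 + 3s + 1 = (s + 1)^3 has all roots in the left half-plane:
print(is_stable([1.0, 3.0, 3.0, 1.0]))   # -> True
```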

A Case Analysis on the Effects of Cloud Adoption on Service Continuity - Focusing on Failures (클라우드 도입이 서비스 연속성에 미치는 영향에 관한 사례 분석 - 장애 중심으로)

  • Ji-Yong Huh; Joon-Hee Yoon; Eun-Kyong Han
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.23 no.4, pp.121-126, 2023
  • As the use of IT services such as artificial intelligence, big data, and IoT has grown, cloud computing has been introduced to efficiently manage the vast amounts of data and the IT infrastructure resources that process them, providing stable and reliable information services while streamlining infrastructure costs, and such efforts are ongoing. This thesis compares and analyzes operation results before and after cloud adoption, focusing on system failures, for 426 systems at 360 branches nationwide within a company operating a total of 1,750 cloud systems. The analysis of the number of failures, failure types, and service downtime shows that cloud adoption yielded significant benefits in securing service continuity. These results are expected to provide meaningful implications for companies that intend to secure service continuity by adopting the cloud.

Autoscaling Mechanism based on Execution-times for VNFM in NFV Platforms (NFV 플랫폼에서 VNFM의 실행 시간에 기반한 자동 자원 조정 메커니즘)

  • Mehmood, Asif; Diaz Rivera, Javier; Khan, Talha Ahmed; Song, Wang-Cheol
    • KNOM Review, v.22 no.1, pp.1-10, 2019
  • The process of determining the required number of resources depends on the factors being considered, and autoscaling is one such mechanism that uses a wide range of factors to decide; it is a critical process in NFV. As networks are shifted onto the cloud following the advent of SDN, better resource managers will be required in the future. To solve this problem, we propose a solution that allows VNFMs to autoscale system resources depending on factors such as hyperthreading overhead, the number of requests, and the execution times of the virtual network functions. It is well known that hyperthreaded virtual cores cannot fully match the performance of physical cores. Also, because different core types run at different frequencies, the required number of cores needs to be calculated accurately and precisely. Platform independence is achieved by a second component, a monitoring microservice that communicates through APIs. Hence, through our autoscaling application and the monitoring microservice, we enhance the resource provisioning process to meet the criteria of future networks.
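
A rough sketch of the kind of core-count estimate such a VNFM autoscaler might perform is shown below: the measured execution time is rescaled by relative core frequency, hyperthreaded vCPUs are discounted by an efficiency factor, and the result is rounded up. The 0.6 hyperthreading factor, the target utilization, and all other numbers are illustrative assumptions, not the paper's parameters.

```python
# Sketch: estimate required vCPUs from request rate, execution time,
# core frequency, and a hyperthreading efficiency factor.
import math

def required_vcpus(req_per_sec, exec_time_s, core_freq_ghz,
                   ref_freq_ghz=2.4, ht_efficiency=0.6, target_util=0.7):
    # Execution time measured on a reference core, rescaled to the target core.
    scaled_exec = exec_time_s * (ref_freq_ghz / core_freq_ghz)
    # Offered load in "busy seconds per second" the VNF must absorb.
    load = req_per_sec * scaled_exec
    # Each hyperthreaded vCPU contributes less than a full physical core.
    effective_capacity = ht_efficiency * target_util
    return math.ceil(load / effective_capacity)

def scale_decision(current_vcpus, req_per_sec, exec_time_s, core_freq_ghz):
    needed = required_vcpus(req_per_sec, exec_time_s, core_freq_ghz)
    if needed > current_vcpus:
        return ("scale-out", needed)
    if needed < current_vcpus:
        return ("scale-in", needed)
    return ("hold", current_vcpus)

# e.g. 800 req/s, 2 ms per request on a 3.0 GHz hyperthreaded core:
print(scale_decision(2, 800, 0.002, 3.0))   # -> ('scale-out', 4)
```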