• Title/Summary/Keyword: virtualization service


Value Ecosystems of Web Services : Benefits and Costs of Web as a Prosuming Service Platform (Web1.0과 프로슈밍기반 Web2.0 서비스 가치생태계 비교)

  • Kim, Do-Hoon
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • Vol. 36, No. 4
    • /
    • pp.43-61
    • /
    • 2011
  • We first develop a value ecosystem framework to model the SDP (Service Delivery Process) of web services. Since web services have been evolving from the basic web architecture (e.g., the traditional World Wide Web) into a prosuming platform based on virtualization technologies, the proposed value ecosystem framework focuses on capturing the key characteristics of the SDP in each type of web service. Even though they share basic elements such as the PP (Platform Provider), CA (Customization Agency), and user group, the SDP in traditional web services (so-called Web1.0 in this paper) is quite different from the most recent one (so-called Web2.0). In our value ecosystem, users are uniformly distributed over (0, ${\Delta}$), where ${\Delta}$ represents the variety of users' preferences for the web service level. The PP and CA provide a standard level of web service(s) and a prosuming service package, respectively. The CA in Web1.0 offers a standard customization package ($s_a$) at a flat rate $c$, whereas in Web2.0 the PP and CA collaborate and provide the customization service under a usage-based scheme. We employ a multi-stage game model to analyze and compare the SDPs in Web1.0 and Web2.0. Our findings through analysis and numerical simulations are as follows. First, the user group is consecutively segmented, and the pattern of segmentation varies between Web1.0 and Web2.0. The standardized service level $s$ (from the PP) is higher in Web1.0, whereas the amount of information created in the value ecosystem is larger in Web2.0. This indicates that the role of the CA becomes increasingly critical in Web2.0, in particular for fulfilling the needs of prosuming and service customization (a simple numerical sketch of this segmentation appears after this entry).
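The segmentation result summarized above can be illustrated with a toy simulation. The sketch below is a hypothetical construction, not the paper's model: it draws user preferences uniformly from (0, ${\Delta}$) and compares how a flat-rate customization package (Web1.0) and usage-based customization (Web2.0) split the user group. The utility forms, parameter values, and names (`u_std`, `u_flat`, `u_usage`) are all assumptions.

```python
# A minimal numerical sketch (not the paper's exact model) of how users with
# heterogeneous preferences split into segments under flat-rate vs. usage-based
# customization pricing. All utility forms and parameter values are assumptions.
import numpy as np

DELTA = 1.0                   # upper bound of the preference range (0, DELTA)
s, s_a, c = 0.4, 0.8, 0.25    # standard service level, flat-rate package level, flat fee
p = 0.5                       # per-unit price of usage-based customization (Web2.0)

users = np.random.default_rng(0).uniform(0.0, DELTA, 10_000)  # preferences theta

# Web1.0: choose the standard service s or the flat-rate package s_a at fee c.
u_std  = -np.abs(users - s)              # disutility of mismatch with s
u_flat = -np.abs(users - s_a) - c        # mismatch with s_a plus the flat fee
web1_segment = np.where(u_flat > u_std, "customized", "standard")

# Web2.0: usage-based customization delivers exactly theta at cost p * theta.
u_usage = -p * users
web2_segment = np.where(u_usage > u_std, "customized", "standard")

for name, seg in [("Web1.0", web1_segment), ("Web2.0", web2_segment)]:
    print(f"{name}: {np.mean(seg == 'customized'):.1%} of users choose customization")
```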

A Study of Application Development Method for Improving Productivity on Cloud Native Environment (Cloud Native환경에서의 생산성 향상을 위한 어플리케이션 개발 방법 연구)

  • Kim, Jung-Bo;Kim, Jung-In
    • Journal of Korea Multimedia Society
    • /
    • Vol. 23, No. 2
    • /
    • pp.328-342
    • /
    • 2020
  • As cloud-based ICT (Information & Communication Technology) infrastructure matures, the existing monolithic software development method is evolving into a micro-service structure based on cloud native computing. To develop and operate services efficiently in a cloud native environment, a DevOps-based application development plan built on MSA (Micro Service Architecture) design is essential. A cloud native environment is an approach to developing and running applications that takes advantage of cloud computing models such as automated source distribution, container-based virtualization, application scalability, resource efficiency, and flexible maintenance through object independence. To implement this approach, key elements such as DevOps, continuous delivery, micro services, and containers must be utilized, but there are few previous studies that analyze cases of these key elements or describe how to apply them. Therefore, in this paper, we analyze cases of application development in a cloud native environment and propose an optimized application development process and development method for small and medium-sized SI projects (a minimal microservice sketch follows this entry).
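As a concrete anchor for the micro-service and container elements mentioned above, the sketch below shows one minimal self-contained service of the kind an MSA decomposition produces, with a health-check endpoint for its orchestrator. The service name, port, and endpoints are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a single microservice with a health-check endpoint, the kind
# of unit an MSA/container-based design decomposes an application into. The service
# name, port, and endpoints are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class OrderServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":          # liveness probe used by the orchestrator
            body = json.dumps({"status": "ok"}).encode()
        elif self.path == "/orders":        # a tiny illustrative business endpoint
            body = json.dumps({"orders": []}).encode()
        else:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice runs in its own container and exposes a single port.
    HTTPServer(("0.0.0.0", 8080), OrderServiceHandler).serve_forever()
```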

PCIA Cloud Service Modeling and Performance Analysis of Physical & Logical Resource Provisioning (PCIA 클라우드 서비스 모델링 및 자원 구성에 따른 성능 영향도 분석)

  • Yin, Binfeng;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 19, No. 2
    • /
    • pp.1-10
    • /
    • 2014
  • Cloud computing provides a flexible and efficient platform for mass data analysis. In this paper, we define a new resource provisioning architecture in the public cloud, named PCIA. In addition, we provide a service model of PCIA and its new naming scheme. Our model selects the proper number of physical or virtual resources based on the requirements of clients. By analyzing performance variation in the PCIA, we evaluate the relationship between performance variation and resource provisioning, and we present key standards for cloud system construction, which can serve as important resource provisioning criteria for both cloud service providers and clients (a simple provisioning sketch is given below).
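The sketch below illustrates, under stated assumptions, the kind of requirement-driven selection the PCIA model performs: given a client's throughput requirement and per-node capacities, choose how many physical or virtual nodes to provision. The capacity numbers, the virtualization overhead factor, and the `provision` function are assumptions, not the paper's model.

```python
# A hedged sketch of requirement-driven resource selection: pick a physical or
# virtual node count that covers a client's throughput requirement. The capacity
# numbers and the overhead factor are assumptions.
import math

PHYSICAL_NODE_CAPACITY = 1000   # requests/s one physical node can sustain (assumed)
VIRTUAL_NODE_CAPACITY = 400     # requests/s one VM can sustain (assumed)
VIRTUALIZATION_OVERHEAD = 0.10  # fraction of VM capacity lost to the hypervisor (assumed)

def provision(required_rps: int, prefer_virtual: bool = True) -> dict:
    """Return a physical/virtual node count that covers the required throughput."""
    if prefer_virtual:
        effective = VIRTUAL_NODE_CAPACITY * (1 - VIRTUALIZATION_OVERHEAD)
        return {"virtual_nodes": math.ceil(required_rps / effective), "physical_nodes": 0}
    return {"virtual_nodes": 0, "physical_nodes": math.ceil(required_rps / PHYSICAL_NODE_CAPACITY)}

if __name__ == "__main__":
    print(provision(3500, prefer_virtual=True))    # 10 virtual nodes
    print(provision(3500, prefer_virtual=False))   # 4 physical nodes
```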

Optimal Flow Distribution Algorithm for Efficient Service Function Chaining (효율적인 서비스 기능 체이닝을 위한 최적의 플로우 분배 알고리즘)

  • Kim, Myeongsu;Lee, Giwon;Choo, Sukjin;Pack, Sangheon;Kim, Younghwa
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 40, No. 6
    • /
    • pp.1032-1039
    • /
    • 2015
  • Service function chaining (SFC) defines the creation of network services that consist of an ordered set of service functions. Multiple service function instances should be deployed across networks for scalable and fault-tolerant SFC services, and incoming flows should therefore be distributed appropriately over those instances. In this paper, we formulate the flow distribution problem in SFC, aiming at minimizing the end-to-end flow latency under resource constraints. We then evaluate its optimal solution in a realistic network topology generated by the GT-ITM topology generator. Simulation results reveal that the optimal solution can reduce the total flow latency significantly (a simplified distribution sketch follows this entry).
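The flow distribution problem above can be illustrated with a much simpler stand-in. The sketch below assumes linear per-unit latencies and per-instance capacities, in which case a greedy fill of the lowest-latency instances is optimal; the paper's actual formulation (end-to-end latency across a chain, solved to optimality) is richer, and the instance data here is invented.

```python
# A simplified sketch of distributing flow demand over service function instances so
# that total latency is minimized under capacity constraints. With linear per-unit
# latencies this greedy fill is optimal; the instance data below is invented.
def distribute_flow(total_demand: float, instances: list[dict]) -> dict:
    """Greedily assign flow to the lowest-latency instances first, up to capacity."""
    assignment = {}
    remaining = total_demand
    for inst in sorted(instances, key=lambda i: i["latency_ms"]):
        share = min(remaining, inst["capacity"])
        if share > 0:
            assignment[inst["name"]] = share
            remaining -= share
    if remaining > 0:
        raise ValueError("demand exceeds the aggregate instance capacity")
    return assignment

if __name__ == "__main__":
    sf_instances = [
        {"name": "fw-1", "capacity": 40, "latency_ms": 2.0},
        {"name": "fw-2", "capacity": 60, "latency_ms": 3.5},
        {"name": "fw-3", "capacity": 80, "latency_ms": 5.0},
    ]
    print(distribute_flow(110, sf_instances))  # {'fw-1': 40, 'fw-2': 60, 'fw-3': 10}
```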

A Study on a 4-Stage Phased Defense Method to Defend Cloud Computing Service Intrusion (Cloud Computing 서비스 침해방어를 위한 단계별 4-Stage 방어기법에 관한 연구)

  • Seo, Woo-Seok;Park, Dea-Woo;Jun, Moon-Seog
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • Vol. 7, No. 5
    • /
    • pp.1041-1051
    • /
    • 2012
  • Attacks on Cloud Computing, an intensive service solution built on recently deployed network infrastructure, cause service breakdowns or intrusion incidents that incapacitate development platforms, web-based software, or resource services. Research is therefore needed on security for the operational information of the three kinds of services (IaaS, PaaS, SaaS) supported by the Cloud Computing system, as well as on the data generated by illegal service-blocking attacks. This paper aims to build a system providing optimal services through a 4-stage defensive method, based on tests of attacks on and defenses of Cloud Computing services. The defense policy conducts orderly, phased access control in four stages: controlling initial access to the network, controlling virtualization services, classifying services for support, and selecting multiple routes. By dispersing attacks and by monitoring and analyzing access control at each stage, this study realizes and analyzes the defense policy and tests the defenses by attack type (a staged-filter sketch appears below). The research findings will be provided as practical foundational data for realizing a Cloud Computing service-based defense policy.
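To make the phased structure concrete, the sketch below chains four admission checks in the order listed in the abstract. The stage predicates, request fields, and thresholds are illustrative assumptions, not the paper's actual rules.

```python
# A hedged sketch of a four-stage, phased access-control pipeline: a request is
# admitted only if every stage accepts it. All predicates and fields are assumptions.
from typing import Callable

Request = dict  # e.g. {"src_ip": ..., "vm_id": ..., "service": ..., "rate": ...}

def stage1_network_access(req: Request) -> bool:
    return req["src_ip"] not in {"203.0.113.7"}            # assumed blocklist

def stage2_virtualization_control(req: Request) -> bool:
    return req.get("vm_id") in {"vm-01", "vm-02"}           # only registered VMs

def stage3_service_classification(req: Request) -> bool:
    return req["service"] in {"IaaS", "PaaS", "SaaS"}       # supported service classes

def stage4_route_selection(req: Request) -> bool:
    return req.get("rate", 0) < 1000                        # crude per-route rate limit

STAGES: list[Callable[[Request], bool]] = [
    stage1_network_access, stage2_virtualization_control,
    stage3_service_classification, stage4_route_selection,
]

def admit(req: Request) -> bool:
    """Admit a request only if every stage of the phased defense accepts it."""
    return all(stage(req) for stage in STAGES)

if __name__ == "__main__":
    print(admit({"src_ip": "198.51.100.5", "vm_id": "vm-01", "service": "SaaS", "rate": 120}))
```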

A Case Analysis on the Effects of Cloud Adoption on Service Continuity - Focusing on Failures (클라우드 도입이 서비스 연속성에 미치는 영향에 관한 사례 분석 - 장애 중심으로)

  • Ji-Yong Huh;Joon-Hee Yoon;Eun-Kyong Han
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 23, No. 4
    • /
    • pp.121-126
    • /
    • 2023
  • As the use of IT technologies such as artificial intelligence, big data, and IoT has recently increased, cloud computing has been introduced to efficiently manage the vast amounts of data, and the IT infrastructure resources that process them, in order to provide stable and reliable information services while streamlining infrastructure costs, and such efforts are ongoing. This thesis compares and analyzes operation results before and after cloud adoption, focusing on system failures, for 426 systems at 360 branches nationwide within companies operating a total of 1,750 cloud systems. The analysis of the number of failures, failure types, and service downtime shows that cloud adoption produced significant gains in securing service continuity. These results are expected to provide meaningful implications for companies that intend to secure service continuity by adopting the cloud.

VTF: A Timer Hypercall to Support Real-time of Guest Operating Systems (VIT: 게스트 운영체제의 실시간성 지원을 위한 타이머 하이퍼콜)

  • Park, Mi-Ri;Hong, Cheol-Ho;Yoo, See-Hwan;Yoo, Chuck
    • Journal of KIISE:Computer Systems and Theory
    • /
    • Vol. 37, No. 1
    • /
    • pp.35-42
    • /
    • 2010
  • Guest operating systems running on virtual machines share a variety of resources. Since the CPU is allocated in a time-division manner, guest operating systems do not know the physical time. This is not regarded as a serious problem in server virtualization, but it becomes critical in embedded systems because it prevents a guest OS from executing real-time tasks while it does not occupy the CPU. In this paper we propose a hypercall that registers a timer service to deliver real-time-related timer requests. It enables the hypervisor to schedule a virtual machine that has real-time tasks to execute, and allows the guest OS to take the CPU on time to support real-time behavior. The experiments show our implementation on Xen-ARM and para-virtualized Linux. We also analyze real-time performance using the response time of a test application and the frames per second of MPlayer (a toy deadline-aware scheduling sketch follows this entry).
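The sketch below illustrates the idea behind a timer hypercall with an earliest-deadline-first toy: guests register timer deadlines with the hypervisor, and the scheduler then picks the runnable VM with the earliest pending deadline so its real-time tasks get the CPU on time. This is an illustration only, not the paper's Xen-ARM implementation, and the class and method names are invented.

```python
# A toy sketch of a real-time timer hypercall: guests register deadlines, and the
# hypervisor schedules the VM with the earliest pending deadline. Names are invented.
import heapq

class ToyHypervisor:
    def __init__(self):
        self._timers = []  # min-heap of (deadline, vm_name)

    def timer_hypercall(self, vm_name: str, deadline: float) -> None:
        """Guest registers a real-time timer request via a (simulated) hypercall."""
        heapq.heappush(self._timers, (deadline, vm_name))

    def pick_next_vm(self, now: float, default_vm: str) -> str:
        """Run the VM with the earliest registered deadline; otherwise the default VM."""
        while self._timers and self._timers[0][0] < now:
            heapq.heappop(self._timers)           # drop deadlines already in the past
        return self._timers[0][1] if self._timers else default_vm

if __name__ == "__main__":
    hv = ToyHypervisor()
    hv.timer_hypercall("media-vm", deadline=10.0)   # e.g. a frame-decoding deadline
    hv.timer_hypercall("batch-vm", deadline=50.0)
    print(hv.pick_next_vm(now=5.0, default_vm="idle-vm"))  # -> media-vm
```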

A Method of Selecting Layered File System Based on Learning Block I/O History for Service-Customized Container (서비스 맞춤형 컨테이너를 위한 블록 입출력 히스토리 학습 기반 컨테이너 레이어 파일 시스템 선정 기법)

  • Yong, Chanho;Na, Sang-Ho;Lee, Pill-Woo;Huh, Eui-Nam
    • KIPS Transactions on Computer and Communication Systems
    • /
    • Vol. 6, No. 10
    • /
    • pp.415-420
    • /
    • 2017
  • OS-level virtualization is a new paradigm for deploying applications and is attracting attention as a technology to replace the traditional virtualization technique, the VM (Virtual Machine). In particular, Docker containers can distribute application images faster and more efficiently than before by adding layered image structures and a union mount point to the existing Linux container. These characteristics of containers can only be used with layered file systems that support a snapshot function, so an appropriate layered file system must be selected according to the characteristics of the containerized application. We examine the characteristics of representative layered file systems and evaluate the write performance of each according to the two operating principles of layered file systems, Allocate-on-Demand and Copy-up. We also suggest a method of determining the appropriate layered file system principle for an unknown containerized application by training an artificial neural network on the block I/O usage history of each principle (a small classifier sketch follows this entry). Finally, we validate the effectiveness of the artificial neural network trained on the block I/O history of each layered file system principle.
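The learning step described above can be sketched as a small classifier over block I/O features that predicts which layered file system principle (Allocate-on-Demand vs. Copy-up) suits an unknown containerized application. The feature set and the synthetic training samples below are assumptions made for illustration, not the paper's dataset or network.

```python
# A small sketch of training a neural network on block I/O history features and
# predicting a layered file system principle. Features and samples are invented.
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Features per workload (assumed): [write_ratio, avg_request_kb, small_write_fraction]
X_train = [
    [0.9, 4,   0.8], [0.8, 8,   0.7],  [0.7, 4,   0.9],   # many small writes
    [0.2, 256, 0.1], [0.3, 512, 0.05], [0.1, 128, 0.2],   # few, large writes
]
y_train = ["allocate_on_demand", "allocate_on_demand", "allocate_on_demand",
           "copy_up", "copy_up", "copy_up"]

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

# Classify the block I/O profile of an unseen containerized application.
print(model.predict([[0.85, 16, 0.75]]))   # expected to lean toward allocate_on_demand
```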

A Study on Seamless Handover Mechanism with Network Virtualization for Wireless Network (WLAN 환경에서 네트워크 가상화를 통한 끊김 없는 핸드오버 매커니즘 연구)

  • Ku, Gi-Jun;Jeong, Ho-Gyoun
    • Journal of Advanced Navigation Technology
    • /
    • Vol. 18, No. 6
    • /
    • pp.594-599
    • /
    • 2014
  • Everyday wireless devices such as smartphones have expanded the use of the IEEE 802.11 family of standards. In such wireless environments, using the network effectively while providing user-oriented seamless services makes handover one of the most important issues. In data centers, software defined networking (SDN) provides flow routing to reduce costs and complexity; flow routing gives the network administrator direct control and reduces delay for users. Where network facilities are scarce, SDN also provides network virtualization and keeps traffic isolated. This paper presents a handover mechanism that ensures seamless services in a high-density network infrastructure by using SDN to make the network service re-configurable (a controller-side sketch follows this entry).
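The controller-side idea behind SDN-assisted seamless handover can be sketched as follows: when a station's signal to its current AP weakens, the controller pre-installs that station's flow rules on the stronger candidate AP so traffic continues without interruption. The data structures, threshold, and rule format below are illustrative assumptions, not the mechanism specified in the paper.

```python
# A hedged sketch of an SDN controller pre-installing a station's flow rules on the
# candidate AP before handover. Threshold, fields, and rule format are assumptions.
HANDOVER_RSSI_THRESHOLD = -70  # dBm, assumed trigger level

class HandoverController:
    def __init__(self):
        self.flow_tables = {}  # ap_name -> list of flow rules

    def install_flow(self, ap: str, rule: dict) -> None:
        self.flow_tables.setdefault(ap, []).append(rule)

    def on_rssi_report(self, station: str, current_ap: str, candidate_ap: str,
                       rssi_current: int, rssi_candidate: int) -> None:
        """Pre-install the station's flows on the stronger candidate AP before handover."""
        if rssi_current < HANDOVER_RSSI_THRESHOLD and rssi_candidate > rssi_current:
            for rule in [r for r in self.flow_tables.get(current_ap, [])
                         if r["station"] == station]:
                self.install_flow(candidate_ap, rule)   # duplicate rules ahead of the move

if __name__ == "__main__":
    ctrl = HandoverController()
    ctrl.install_flow("ap-1", {"station": "sta-42", "match": "dst=sta-42", "action": "port 3"})
    ctrl.on_rssi_report("sta-42", "ap-1", "ap-2", rssi_current=-78, rssi_candidate=-55)
    print(ctrl.flow_tables["ap-2"])   # the rule pre-installed for sta-42
```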

Design and Implementation of an Automated Inter-connection Tool for Multi-Point OpenFlow Sites (다지점 오픈플로우 사이트들을 위한 자동화된 연동 도구의 설계 및 구현)

  • Na, TaeHeum;Kim, JongWon
    • KIISE Transactions on Computing Practices
    • /
    • Vol. 21, No. 1
    • /
    • pp.1-12
    • /
    • 2015
  • To realize futuristic services with agility, the role of experimental facilities (i.e., testbeds) based on integrated resources has become important, so that developers can flexibly utilize the dynamic provisioning power of software-defined networking and cloud computing. Following this trend, an OpenFlow-based SDN testbed environment, denoted as OF@TEIN, connects multiple sites with unique SmartX Racks (i.e., virtualization-enabled converged resources). In this paper, in order to automate the multi-point L2 (i.e., Ethernet) inter-connection of OpenFlow islands, we introduce an automated tool that configures the required Network Virtualization using Generic Routing Encapsulation (NVGRE) tunneling. With the proposed automation tool, operators can efficiently and quickly manage network inter-connections among multiple OpenFlow sites, while letting developers control their own traffic flows for service realization experiments (a tunnel-mesh configuration sketch follows this entry).
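The kind of automation such a tool performs can be sketched as a generator that, given a list of sites, emits the per-site commands for a full L2 tunnel mesh. The paper's tool sets up NVGRE tunnels; for illustration the sketch below prints plain GRE commands in `ovs-vsctl` syntax (the Open vSwitch form I can state with confidence), and the site names, bridge name, and IP addresses are invented.

```python
# A hedged sketch of generating a full mesh of point-to-point tunnels between sites.
# Plain GRE via ovs-vsctl is used for illustration; NVGRE specifics would differ.
from itertools import combinations

SITES = {
    "site-kr": "192.0.2.10",
    "site-my": "192.0.2.20",
    "site-th": "192.0.2.30",
}
BRIDGE = "br-cap"   # assumed OVS bridge name on every SmartX Rack

def mesh_commands(sites: dict[str, str], bridge: str) -> list[str]:
    """Return the per-site commands that create a full mesh of point-to-point tunnels."""
    cmds = []
    for (a, ip_a), (b, ip_b) in combinations(sites.items(), 2):
        cmds.append(f"[{a}] ovs-vsctl add-port {bridge} gre-{b} "
                    f"-- set interface gre-{b} type=gre options:remote_ip={ip_b}")
        cmds.append(f"[{b}] ovs-vsctl add-port {bridge} gre-{a} "
                    f"-- set interface gre-{a} type=gre options:remote_ip={ip_a}")
    return cmds

if __name__ == "__main__":
    print("\n".join(mesh_commands(SITES, BRIDGE)))
```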