• Title/Summary/Keyword: virtualization system

Search Results: 259

Comparative Analysis on Cloud and On-Premises Environments for High-Resolution Agricultural Climate Data Processing (고해상도 농업 기후 자료 처리를 위한 클라우드와 온프레미스 비교 분석)

  • Park, Joo Hyeon;Ahn, Mun Il;Kang, Wee Soo;Shim, Kyo-Moon;Park, Eun Woo
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.4 / pp.347-357 / 2019
  • The usefulness of processing and analysis systems for GIS-based agricultural climate data is affected by the reliability and availability of computing infrastructures such as cloud, on-premises, and hybrid environments. Cloud technology has grown in popularity; however, reference cases accumulated over years of operational experience point out important features that make on-premises technology complementary to cloud technology. Both cloud and on-premises technologies have advantages and disadvantages in terms of operational time and cost, reliability, and security, depending on the application. In this study, we describe the characteristics of four general computing platforms, namely cloud, on-premises with hardware-level virtualization, on-premises with operating-system-level virtualization, and hybrid environments, and compare their advantages and disadvantages for storing and processing a huge amount of GIS-based agricultural climate data to provide public agro-meteorological and climate information services at high spatial and temporal resolutions. We found that migrating high-resolution agricultural climate data to a public cloud would not be reasonable because of the high cost of storing a large amount of data that may never be used. We therefore recommend a hybrid system in which the on-premises environment handles data storage and backup, which account for the major cost, while the cloud environment handles data analysis, processing, and presentation, which require operational flexibility.
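
The hybrid arrangement recommended above (on-premises for bulk storage and backup, cloud for elastic analysis and presentation) can be illustrated with a minimal placement-policy sketch. The tier names and purpose categories below are assumptions made for illustration, not parameters taken from the paper.

```python
# Illustrative sketch of a hybrid placement policy: bulk storage and backups
# stay on-premises, while analysis and presentation workloads use the cloud.
# Tier names and purpose categories are assumptions, not the paper's design.
from dataclasses import dataclass

ON_PREMISES = "on-premises"
PUBLIC_CLOUD = "public-cloud"

@dataclass
class Workload:
    name: str
    purpose: str  # "archive", "backup", "analysis", or "presentation"

def place(workload: Workload) -> str:
    """Return the tier where a workload should run."""
    # Long-term storage and backups dominate cost -> keep them on-premises.
    if workload.purpose in ("archive", "backup"):
        return ON_PREMISES
    # Analysis, processing, and presentation need elasticity -> use the cloud.
    return PUBLIC_CLOUD

if __name__ == "__main__":
    for w in (Workload("daily-1km-temperature-grids", "archive"),
              Workload("web-map-tile-rendering", "presentation")):
        print(w.name, "->", place(w))
```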

A Comparative Analysis of Domestic and Foreign Docker Container-Based Research Trends (국내·외 도커 컨테이너 기반 연구 동향 비교 분석)

  • Bae, Sun-Young
    • The Journal of the Korea Contents Association / v.22 no.10 / pp.742-753 / 2022
  • Cloud computing, which is rapidly growing as one of the core technologies of the 4th industrial revolution, has become central to changes in global IT trends, and Docker, a container-based open-source platform, is the mainstream virtualization technology for cloud computing. In this paper, Docker container-based research trends were therefore compared and analyzed, focusing on studies published from March 2013 to July 2022. The results are as follows. First, the number of papers published per year increased steadily in both domestic and foreign research. Second, among study keywords, domestic research was led by Docker, Docker Containers, and Containers, in that order, while foreign research was led by Cloud Computing, Containers, and Edge Computing. Third, regarding the frequency of publishing venues used to estimate research trends, the highest utilization in domestic research appeared in two papers from the Korean Next Generation Computer Society and the Korean Computer Accounting Society, while overseas the leading venues were IEEE Communications Surveys & Tutorials, IEEE Access, and Computer, in that order. Fourth, regarding research methods, domestic research comprised 78 experiments (26.3%) and 32 surveys (10.8%), whereas foreign research comprised 128 experiments (43.1%) and 59 surveys (19.9%). Among implementation-oriented experiments, domestic research covered systems 25 (8.4%), algorithms 24 (8.1%), and performance measurement and improvement 16 (5.4%), in that order, while foreign research covered algorithms 37 (12.5%), performance measurement and improvement 17 (9.1%), followed by frameworks 26 (8.8%). These findings are expected to serve as basic data for guiding the direction of Docker container-based cloud computing research in terms of research methods, topics, fields, and technology development.

Symbiotic Dynamic Memory Balancing for Virtual Machines in Smart TV Systems

  • Kim, Junghoon;Kim, Taehun;Min, Changwoo;Jun, Hyung Kook;Lee, Soo Hyung;Kim, Won-Tae;Eom, Young Ik
    • ETRI Journal / v.36 no.5 / pp.741-751 / 2014
  • Smart TV is expected to bring cloud services based on virtualization technologies to the home environment with hardware and software support. Although most physical resources can be shared among virtual machines (VMs) using a time sharing approach, allocating the proper amount of memory to VMs is still challenging. In this paper, we propose a novel mechanism to dynamically balance the memory allocation among VMs in virtualized Smart TV systems. In contrast to previous studies, where a virtual machine monitor (VMM) is solely responsible for estimating the working set size, our mechanism is symbiotic. Each VM periodically reports its memory usage pattern to the VMM. The VMM then predicts the future memory demand of each VM and rebalances the memory allocation among the VMs when necessary. Experimental results show that our mechanism improves performance by up to 18.28 times and reduces expensive memory swapping by up to 99.73% with negligible overheads (0.05% on average).
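
The symbiotic idea described above can be sketched in a few lines: each guest periodically reports its memory usage, and the monitor predicts near-future demand and redistributes a fixed memory pool. The moving-average predictor, pool size, and reserve ratio below are illustrative assumptions, not the paper's actual estimation algorithm.

```python
# Sketch of symbiotic memory balancing: guests report usage, the VMM predicts
# demand and redistributes a fixed memory pool. The moving-average predictor
# and the 10% reserve are illustrative assumptions, not the paper's algorithm.
from collections import deque

TOTAL_MEMORY_MB = 8192
RESERVE_RATIO = 0.10  # headroom kept by the VMM (assumed)

class VM:
    def __init__(self, name):
        self.name = name
        self.history = deque(maxlen=5)  # recent usage reports (MB)

    def report_usage(self, used_mb):
        """Periodic report from the guest to the VMM."""
        self.history.append(used_mb)

    def predicted_demand(self):
        """Very simple predictor: mean of recent reports."""
        return sum(self.history) / len(self.history) if self.history else 0.0

def rebalance(vms):
    """Distribute the pool in proportion to each VM's predicted demand."""
    demands = {vm.name: vm.predicted_demand() for vm in vms}
    total = sum(demands.values()) or 1.0
    budget = TOTAL_MEMORY_MB * (1 - RESERVE_RATIO)
    return {name: round(budget * d / total) for name, d in demands.items()}

if __name__ == "__main__":
    web, dvr = VM("web-app"), VM("dvr")
    for used in (900, 1100, 1300):
        web.report_usage(used)
    for used in (400, 380, 420):
        dvr.report_usage(used)
    print(rebalance([web, dvr]))
```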

A Development of Novel Attack Detection Methods using Virtual Honeynet (Virtual Honeynet을 이용한 신종공격 탐지기술 개발)

  • Kang, Dae-Kwon;Euom, Ieck-Chae;Kim, Chun-Suk
    • The Journal of the Korea institute of electronic communication sciences / v.5 no.4 / pp.406-411 / 2010
  • A honeynet is a closely monitored computing resource that we want to be probed, attacked, or compromised. More precisely, a honeypot is "an information system resource whose value lies in unauthorized or illicit use of that resource." The value of a honeynet is weighed by the information that can be obtained from it, but it is very difficult to deploy a honeynet in the real world, so this study focuses on virtual honeynets. The strengths of a virtual honeynet are scalability and ease of maintenance; it is inexpensive to deploy and accessible to almost everyone. Compared with physical honeypots, this approach is more lightweight: instead of deploying a physical computer system that acts as a honeypot, we can deploy one physical computer that hosts several virtual machines acting as honeypots.
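
As a rough illustration of what one guest in such a virtual honeynet does, the sketch below is a minimal low-interaction honeypot that simply logs connection attempts. The port, banner, and log format are arbitrary choices for illustration; a real virtual honeynet would run several such guests behind monitored data-capture and data-control layers.

```python
# Minimal low-interaction honeypot sketch: listen on an unused port and log
# every connection attempt. Port, banner, and log format are illustrative.
import socket
import datetime

PORT = 2323  # assumed decoy port (e.g., a fake telnet service)

def run_honeypot(port: int = PORT) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (ip, src_port) = srv.accept()
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        print(f"{stamp} connection from {ip}:{src_port}")  # would go to a log/DB
        conn.sendall(b"login: ")  # fake banner to keep the attacker engaged
        conn.close()

if __name__ == "__main__":
    run_honeypot()
```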

SDN-Based Enterprise and Campus Networks: A Case of VLAN Management

  • Nguyen, Van-Giang;Kim, Young-Han
    • Journal of Information Processing Systems / v.12 no.3 / pp.511-524 / 2016
  • The Virtual Local Area Network (VLAN) has long been used in campus and enterprise networks as the most popular network virtualization solution. Due to the benefits and advantages achieved by using VLANs, network operators and administrators continue to use them for constructing their networks and have even extended them to manage networking in cloud computing systems. However, VLAN configuration is a complex, tedious, time-consuming, and error-prone process. Since Software Defined Networking (SDN) features centralized network management and network programmability, it is a promising solution for handling the aforementioned challenges in VLAN management. In this paper, we first introduce a new architecture for campus and enterprise networks by leveraging SDN and OpenFlow. Next, we design and implement an application for easily managing and flexibly troubleshooting the VLANs in this architecture. This application supports both static and dynamic VLAN configurations. In addition, we discuss hybrid-mode operation, in which packet processing involves both the OpenFlow control plane and the traditional control plane. By deploying a real test-bed prototype, we illustrate how our system works and then evaluate the network latency of dynamic VLAN operation.
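
The dynamic-VLAN idea can be sketched as a controller-side lookup: a host's MAC address is matched against a policy table and an abstract flow rule is produced that tags the host's traffic with its VLAN. The policy table and the rule representation below are hypothetical; the paper's application is implemented on an actual OpenFlow controller, which is not reproduced here.

```python
# Sketch of dynamic VLAN assignment at an SDN controller: map a host's MAC
# address to a VLAN and emit an abstract flow rule. The policy table and the
# rule fields are hypothetical stand-ins for real OpenFlow messages.
MAC_TO_VLAN = {                      # assumed per-host VLAN policy
    "00:11:22:33:44:55": 10,         # e.g., staff VLAN
    "66:77:88:99:aa:bb": 20,         # e.g., student VLAN
}
DEFAULT_VLAN = 99                    # assumed guest/quarantine VLAN

def build_flow_rule(in_port: int, src_mac: str) -> dict:
    """Build a flow entry that tags traffic from src_mac with its VLAN."""
    vlan_id = MAC_TO_VLAN.get(src_mac.lower(), DEFAULT_VLAN)
    return {
        "match":   {"in_port": in_port, "eth_src": src_mac.lower()},
        "actions": [{"push_vlan": 0x8100}, {"set_vlan_vid": vlan_id},
                    {"output": "NORMAL"}],
        "priority": 100,
    }

if __name__ == "__main__":
    print(build_flow_rule(3, "00:11:22:33:44:55"))
```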

A Study on Implementation and Operation Management of Virtual Programming Lab based on Cloud Computing (클라우드 컴퓨팅 기반의 가상 프로그래밍 실습 환경 구현 및 운영 관리 방안 연구)

  • Park, Jungho;Choi, Eunyoung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.578-580 / 2013
  • To provide a virtual desktop service for computer programming training, a user group for the service should be created for each subject, and a management program is required to manage development tools, disk images, user account information, and log data. In this study, we implemented a web-based operation management system for managing the virtual desktop service for computer programming training. The implemented system enables rapid provisioning of the virtual desktop service.
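
The per-subject provisioning described above can be sketched as follows: each subject maps to a user group and a subject-specific disk image, and one virtual desktop is cloned per student. The image names and the provision() stub are hypothetical placeholders, not APIs of the paper's management system.

```python
# Sketch of per-subject virtual-desktop provisioning for programming courses.
# Image names and the provision() stub are hypothetical placeholders for the
# web-based management system described in the paper.
SUBJECT_IMAGES = {
    "c-programming":    "image-c-dev",       # assumed disk image per subject
    "java-programming": "image-java-dev",
}

def provision(user: str, image: str) -> str:
    """Stand-in for the actual virtual desktop provisioning call."""
    return f"vm-{user}-{image}"

def enroll(subject: str, users: list[str]) -> dict[str, str]:
    """Create the subject's user group and one desktop per student."""
    image = SUBJECT_IMAGES[subject]
    return {user: provision(user, image) for user in users}

if __name__ == "__main__":
    print(enroll("c-programming", ["alice", "bob"]))
```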

PCIA Cloud Service Modeling and Performance Analysis of Physical & Logical Resource Provisioning (PCIA 클라우드 서비스 모델링 및 자원 구성에 따른 성능 영향도 분석)

  • Yin, Binfeng;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.19 no.2 / pp.1-10 / 2014
  • Cloud computing provides a flexible and efficient platform for mass data analysis. In this paper, we define a new resource provisioning architecture in the public cloud, named PCIA. In addition, we provide a service model of PCIA and its new naming scheme. Our model selects the proper number of physical or virtual resources based on the requirements of clients. By analyzing performance variation in the PCIA, we evaluate the relationship between performance variation and resource provisioning, and we present key standards for cloud system construction, which can serve as important resource provisioning criteria for both cloud service providers and clients.
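
The idea of choosing the number of resources from client requirements can be sketched as a simple sizing rule. The per-VM capacity figures below are assumptions for illustration, not parameters of the PCIA model.

```python
# Sketch of requirement-driven resource sizing: choose the number of virtual
# machines that covers the client's CPU and memory demand. The per-VM capacity
# values are illustrative assumptions, not parameters of the PCIA model.
import math

VM_VCPUS = 4        # assumed capacity of one VM
VM_MEMORY_GB = 16

def vms_needed(required_vcpus: int, required_memory_gb: int) -> int:
    """Smallest VM count satisfying both CPU and memory requirements."""
    by_cpu = math.ceil(required_vcpus / VM_VCPUS)
    by_mem = math.ceil(required_memory_gb / VM_MEMORY_GB)
    return max(by_cpu, by_mem, 1)

if __name__ == "__main__":
    print(vms_needed(required_vcpus=20, required_memory_gb=96))  # -> 6
```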

A Workflow Execution System for Analyzing Large-scale Astronomy Data on Virtualized Computing Environments

  • Yu, Jung-Lok;Jin, Du-Seok;Yeo, Il-Yeon;Yoon, Hee-Jun
    • International Journal of Contents / v.16 no.4 / pp.16-25 / 2020
  • The size of observation data in astronomy has been increasing exponentially with the advent of wide-field optical telescopes, which calls for changes in the way large-scale astronomy data are analyzed. The complexity of analysis tools and the lack of extensibility of computing environments, however, make dealing with the huge observation data difficult and inefficient. To address this problem, this paper proposes a workflow execution system for analyzing large-scale astronomy data efficiently. The proposed system is composed of two parts: 1) a workflow execution manager and its RESTful endpoints that can automate and control data analysis tasks based on workflow templates and 2) an elastic resource manager as an underlying mechanism that can dynamically add/remove virtualized computing resources (i.e., virtual machines) according to the analysis requests. To realize our workflow execution system, we implement it on a testbed using the OpenStack IaaS (Infrastructure as a Service) toolkit and the HTCondor workload manager. We also perform a broad range of experiments with different resource allocation patterns, system loads, etc. to show the effectiveness of the proposed system. The results show that the resource allocation mechanism works properly according to the number of queued and running tasks, improving resource utilization, and that the workflow execution manager can handle more than 1,000 concurrent requests within a second with reasonable average response times. We finally describe a case study of a data reduction system as an example application of our workflow execution system.
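
A minimal sketch of the elastic-resource idea follows: compare queued and running tasks against the current VM count and decide how many VMs to add or remove. The slot count per VM and the scaling bounds are assumptions for illustration; the actual system drives OpenStack and HTCondor rather than this stand-alone function.

```python
# Sketch of an elastic scaling decision: add VMs when the queue outgrows
# capacity, remove idle ones when it shrinks. Slot counts and bounds are
# assumptions; the real system drives OpenStack and HTCondor.
import math

SLOTS_PER_VM = 4     # assumed concurrent tasks per virtual machine
MIN_VMS, MAX_VMS = 1, 32

def scaling_decision(queued: int, running: int, current_vms: int) -> int:
    """Return how many VMs to add (positive) or remove (negative)."""
    needed = math.ceil((queued + running) / SLOTS_PER_VM)
    target = max(MIN_VMS, min(MAX_VMS, needed))
    return target - current_vms

if __name__ == "__main__":
    print(scaling_decision(queued=30, running=10, current_vms=5))  # +5: scale out
    print(scaling_decision(queued=0,  running=2,  current_vms=5))  # -4: scale in
```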

Design and Implementation of an Automated Privacy Protection System over TPM and File Virtualization (TPS: TPM 및 파일 가상화를 통한 개인정보보호 자동화 시스템 디자인 및 구현)

  • Jeong, Hye-Lim;Ahn, Sung-Kyu;Kim, Mun Sung;Park, Ki-Woong
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.2 / pp.7-17 / 2017
  • In this paper, we propose TPS (TPM-enhanced Privacy Protection System), an automated privacy protection system enhanced with a TPM (Trusted Platform Module). TPS detects documents containing personal information by periodically scanning client disks and encrypts them, and the system then manages the encrypted documents on the server. In particular, the security of TPS is greatly enhanced by restricting access to documents containing personal information when the client is in an abnormal state, using a TPM-based platform verification mechanism on the client system. In addition, we propose and implement a VTF (Virtual Trusted File) interface that gives users almost the same experience as ordinary document access, even though documents containing personal information are encrypted and stored on the remote server. Consequently, TPS automates compliance with personal information protection acts without additional user intervention.
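
The scan-and-encrypt step can be sketched as below: files matching simple personal-information patterns are encrypted with a symmetric key. The regular expressions, the scanned path, and the use of the cryptography package's Fernet cipher are illustrative choices only; the paper's system additionally relies on TPM-protected keys, platform verification, and server-side storage, none of which are shown here.

```python
# Sketch of periodic scan-and-encrypt for personal information. The regex
# patterns and the Fernet symmetric cipher are illustrative choices; TPS also
# uses TPM-based platform verification and server-side storage (not shown).
import re
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

# Assumed patterns: Korean resident registration number and mobile phone number.
PATTERNS = [re.compile(r"\d{6}-\d{7}"), re.compile(r"01[016789]-\d{3,4}-\d{4}")]
KEY = Fernet.generate_key()  # in TPS the key material would be TPM-protected
cipher = Fernet(KEY)

def contains_personal_info(text: str) -> bool:
    return any(p.search(text) for p in PATTERNS)

def scan_and_encrypt(root: Path) -> list[Path]:
    """Encrypt every text file under root that matches a pattern."""
    protected = []
    for path in root.rglob("*.txt"):
        data = path.read_text(encoding="utf-8", errors="ignore")
        if contains_personal_info(data):
            path.with_suffix(".enc").write_bytes(cipher.encrypt(data.encode()))
            protected.append(path)
    return protected

if __name__ == "__main__":
    print(scan_and_encrypt(Path("/srv/client-docs")))  # assumed scan root
```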

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as on-demand services. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, reducing the fixed cost of IT resources and improving flexibility and scalability. As IT services, cloud services have evolved from early similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and changes in the citation network among papers and the co-occurrence network of keywords by utilizing social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes positions of research topics in two-dimensional space. Frequencies of keywords (X-axis) and the rates of increase in the degree centrality of keywords (Y-axis) are used as the two dimensions of the research trend map. Based on the values of the two dimensions, the two-dimensional space of the research trend map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase of degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase in degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of an analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing increased gradually. From annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area.
    The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research. The early stage of cloud computing was a period focused on understanding and investigating cloud computing as an emergent technology, linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study revealed that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
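
The quadrant rule of the research trend map can be written down directly: keyword frequency on one axis, the rate of increase in degree centrality on the other, with thresholds splitting the plane into maturation, growth, promising, and decline areas. The abstract does not state the cut-off values, so the medians used below are an assumption made purely for illustration.

```python
# Sketch of the research trend map's quadrant rule: X = keyword frequency,
# Y = rate of increase in degree centrality. Using medians as the cut-offs is
# an assumption; the paper's abstract does not specify the thresholds.
from statistics import median

def trend_area(freq, growth, freq_cut, growth_cut):
    if freq >= freq_cut and growth < growth_cut:
        return "maturation"
    if freq >= freq_cut and growth >= growth_cut:
        return "growth"
    if freq < freq_cut and growth >= growth_cut:
        return "promising"
    return "decline"

def build_trend_map(keywords):
    """keywords: {name: (frequency, centrality_growth_rate)}"""
    freq_cut = median(f for f, _ in keywords.values())
    growth_cut = median(g for _, g in keywords.values())
    return {k: trend_area(f, g, freq_cut, growth_cut)
            for k, (f, g) in keywords.items()}

if __name__ == "__main__":
    # Hypothetical frequencies and growth rates, chosen only to exercise the rule.
    sample = {"security": (12, 0.9), "virtualization": (30, 0.7),
              "grid computing": (8, -0.2), "SLA": (25, 0.1)}
    print(build_trend_map(sample))
```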