• Title/Summary/Keyword: virtualization system

Search Results: 259

BPFast: An eBPF/XDP-Based High-Performance Packet Payload Inspection System for Cloud Environments (BPFast: 클라우드 환경을 위한 eBPF/XDP 기반 고속 네트워크 패킷 페이로드 검사 시스템)

  • You, Myoung-sung;Kim, Jin-woo;Shin, Seung-won;Park, Tae-june
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.2, pp.213-225, 2022
  • Containerization, a lightweight virtualization technology, enables agile deployment of enterprise-scale microservices in modern cloud environments. However, containerization also opens a new window for adversaries who aim to disrupt cloud environments. Since microservices are composed of multiple containers connected through a virtual network, a single compromised container can carry out network-level attacks to hijack its neighboring containers. While existing solutions protect containers against such attacks by using network access controls, they still have severe limitations in terms of performance. More specifically, they significantly degrade network performance when processing packet payloads for L7 access controls (e.g., HTTP). To address this problem, we present BPFast, an eBPF/XDP-based payload inspection system for containers. BPFast inspects the headers and payloads of packets at the kernel level without any user-level components. We evaluate a prototype of BPFast in a Kubernetes environment. Our results show that BPFast outperforms state-of-the-art solutions by up to 7x in network latency and throughput.
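
The L7 access-control idea in the abstract can be illustrated with a small sketch. BPFast itself is implemented as eBPF/XDP programs that run inside the kernel; the pure-Python model below only shows the kind of allow-list decision applied to an HTTP payload. The rule format and the function name `inspect_http_payload` are illustrative assumptions, not the paper's actual interface.

```python
# Illustrative model of L7 (HTTP) access-control matching. Real BPFast runs as
# eBPF/XDP bytecode in the kernel; this sketch only shows the decision logic.
# The (method, path-prefix) rule format is hypothetical.

ALLOWED = {("GET", "/api/"), ("POST", "/api/")}

def inspect_http_payload(payload: bytes) -> str:
    """Return 'PASS' or 'DROP' for a TCP payload carrying an HTTP request."""
    try:
        request_line = payload.split(b"\r\n", 1)[0].decode("ascii")
        method, path, _version = request_line.split(" ")
    except (UnicodeDecodeError, ValueError):
        return "DROP"  # not parseable as an HTTP request line -> reject
    for allowed_method, prefix in ALLOWED:
        if method == allowed_method and path.startswith(prefix):
            return "PASS"
    return "DROP"
```

An XDP program would make the same decision per packet and return `XDP_PASS` or `XDP_DROP` instead of a string.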

Research Trends of Mixed-Criticality System (중요도 혼재 시스템의 연구 동향 분석)

  • Yoon, Moonhyung;Park, Junho;Kim, Yongho;Yi, JeongHoon;Koo, BongJoo
    • The Journal of the Korea Contents Association, v.18 no.9, pp.125-140, 2018
  • Due to the rapid development of semiconductor technology, embedded systems have evolved from single-function systems into multi-function systems. A system composed of software components with different criticality levels is called a Mixed-Criticality System (MCS). Currently, projects related to the MCS are accelerating efforts to set development directions and take technical initiatives, led by the EU and the USA, where the related industries are well developed; in Korea, however, such activity is still insignificant. Therefore, it is urgent to carry out research and projects on various basic technologies to secure the initiative in the related technology and market. In this paper, we analyze the trends of major research and development projects related to the MCS. First, after giving the definition of the MCS and its system model, we analyze the underlying technologies constituting the MCS. In addition, we analyze the project trends of each country researching the MCS and discuss future research areas. This study makes it possible to grasp worldwide research trends, establish a research direction for the MCS, and lay the foundation for its integration into military systems.
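
One of the basic mechanisms that MCS research builds on is the criticality mode switch: when a high-criticality job exceeds its optimistic (LO-mode) execution budget, the system switches to HI mode and sheds low-criticality tasks. The sketch below is a minimal model of that idea; the two-level HI/LO model, the task set, and the budgets are illustrative assumptions, not taken from the surveyed projects.

```python
# Minimal sketch of the criticality mode switch studied in MCS research:
# if a HI-criticality task overruns its LO-mode budget, switch to HI mode
# and admit only HI-criticality tasks. Task names and budgets are hypothetical.

def run_mode_switch(tasks, observed):
    """tasks: {name: (criticality, lo_budget)}; observed: {name: exec_time}.
    Returns (mode, set of task names still admitted)."""
    mode = "LO"
    for name, (crit, lo_budget) in tasks.items():
        if crit == "HI" and observed.get(name, 0) > lo_budget:
            mode = "HI"  # a HI task exceeded its optimistic budget
    if mode == "HI":
        admitted = {n for n, (c, _) in tasks.items() if c == "HI"}
    else:
        admitted = set(tasks)
    return mode, admitted
```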

Massive Electronic Record Management System using iRODS (iRODS를 이용한 대용량 전자기록물 관리 시스템)

  • Han, Yong-Koo;Kim, Jin-Seung;Lee, Seung-Hyun;Lee, Young-Koo
    • Journal of KIISE: Computing Practices and Letters, v.16 no.8, pp.825-836, 2010
  • The advancement of electronic records has brought great changes to records management systems. One of the biggest changes is the transition from passive to automated management systems, which manage massive records more efficiently. The integrated Rule-Oriented Data System (iRODS) is rule-oriented data-grid software that provides an infrastructure for building massive archives through virtualization. It also allows users to define rules for data distribution and back-up. Therefore, iRODS is an ideal tool for building an electronic record management system that manages electronic records automatically. In this paper, we describe the issues related to the design and implementation of an electronic record management system using iRODS. We also propose a system that automatically distributes and backs up records according to their types by defining iRODS rules. It also provides functions to store and retrieve metadata using the iRODS Catalog (iCAT) database.
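
Actual iRODS policies are written in its own rule language; the Python sketch below only simulates the kind of type-based distribution and back-up rule described, mapping a record type to a primary resource plus a number of replicas. The resource names, record types, and replica counts are hypothetical.

```python
# Simulation of type-based distribution/back-up rules like those the paper
# defines in iRODS. Resource names and record types are hypothetical.

RULES = {
    "document": {"primary": "archiveResc1", "replicas": 1},
    "video":    {"primary": "archiveResc2", "replicas": 2},
}
DEFAULT_RULE = {"primary": "archiveResc1", "replicas": 1}

def place_record(record_type):
    """Return the ordered list of storage resources a record is written to."""
    rule = RULES.get(record_type, DEFAULT_RULE)
    targets = [rule["primary"]]
    targets += [f"backupResc{i + 1}" for i in range(rule["replicas"])]
    return targets
```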

A Study on the Next VWorld System Architecture: New Technology Analysis for the Optimal Architecture Design (차세대 브이월드 시스템 아키텍처 구성에 관한 연구: 최적의 아키텍처 설계를 위한 신기술 분석)

  • Go, Jun Hee;Lim, Yong Hwa;Kim, Min Soo;Jang, In Sung
    • Spatial Information Research, v.23 no.4, pp.13-22, 2015
  • There has been much interest in the VWorld open platform since 2012, with the addition of a variety of contents and services such as 2D maps, 3D terrain, 3D buildings, and thematic maps. However, the VWorld system architecture was not robust against system overload. For example, the system went down due to rapidly increasing user accesses when the 3D terrain services of North Korea and Mt. Baekdu were launched in September 2012 and September 2013, respectively. This was because the system architects had simply extended the server systems and network bandwidth whenever user accesses increased rapidly or a new service started. Therefore, this study proposes a new VWorld system architecture that can reliably serve huge volumes of National Spatial Data by applying new technologies such as CDN, visualization, and clustering. It is expected that the results of this study can be used as a basis for the next VWorld system architecture, capable of handling huge volumes of spatial data and users.

A Novel Reference Model for Cloud Manufacturing CPS Platform Based on oneM2M Standard (제조 클라우드 CPS를 위한 oneM2M 기반의 플랫폼 참조 모델)

  • Yun, Seongjin;Kim, Hanjin;Shin, Hyeonyeop;Chin, Hoe Seung;Kim, Won-Tae
    • KIPS Transactions on Computer and Communication Systems, v.8 no.2, pp.41-56, 2019
  • Cloud manufacturing is a new concept of manufacturing process in which multiple connected factories work like a single factory. A cloud manufacturing system is a kind of large-scale CPS that produces products through the collaboration of distributed manufacturing facilities, based on technologies such as cloud computing, IoT, and virtualization. It utilizes diverse and distributed facilities based on centralized information systems, which allows flexible composition of user-centric, service-oriented large-scale systems. However, the cloud manufacturing system is composed of a large number of highly heterogeneous subsystems, which causes difficulties in interconnection, data exchange, information processing, and system verification during system construction. In this paper, we derive the user requirements of various aspects of the cloud manufacturing system, such as functional, human, trustworthiness, timing, data, and composition aspects, based on the CPS Framework, an analysis methodology for CPS. Next, by analyzing the user requirements, we define the system requirements, including scalability, composability, interactivity, dependability, timing, interoperability, and intelligence. We map the defined CPS system requirements to the requirements of oneM2M, the platform standard for IoT, so that support for the system requirements at the IoT-platform level can be verified through Mobius, an implementation of the oneM2M standard. Analyzing the verification results, we finally propose a large-scale, oneM2M-based cloud manufacturing platform that meets the cloud manufacturing requirements and dependably supports the overall features of the cloud manufacturing CPS.

An elastic distributed parallel Hadoop system for bigdata platform and distributed inference engines (동적 분산병렬 하둡시스템 및 분산추론기에 응용한 서버가상화 빅데이터 플랫폼)

  • Song, Dong Ho;Shin, Ji Ae;In, Yean Jin;Lee, Wan Gon;Lee, Kang Se
    • Journal of the Korean Data and Information Science Society, v.26 no.5, pp.1129-1139, 2015
  • An inference process generates additional triples from knowledge represented as RDF triples in semantic web technology. Tens of millions of triples, as initial big data, together with the additionally inferred triples, become a knowledge base for applications such as QA (question-and-answer) systems. The inference engine requires more computing resources to process the triples generated during inference. Additional computing resources supplied by the underlying resource pool in cloud computing can shorten the execution time. This paper addresses an algorithm to allocate the number of computing nodes "elastically" at runtime on Hadoop, depending on the size of the knowledge data fed in. The model proposed in this paper has a layered architecture: the top layer for applications, the middle layer for the distributed parallel inference engine that processes the triples, and the lower layer for elastic Hadoop and server virtualization. System algorithms and test data are analyzed and discussed in this paper. The model has the benefit that rich legacy Hadoop applications can run faster on this system without any modification.
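
The elastic-allocation idea can be sketched as a function from workload size to node count. The per-node capacity, pool limits, and linear scaling rule below are illustrative assumptions; the paper's actual allocation algorithm may differ.

```python
# Sketch of elastic node allocation: choose the number of Hadoop task nodes
# from the size of the triple set fed to the inference engine. The capacity
# figure and pool limits are hypothetical.

def nodes_for_workload(num_triples, node_capacity=10_000_000,
                       min_nodes=2, max_nodes=32):
    """One node per `node_capacity` triples, clamped to the pool limits."""
    needed = -(-num_triples // node_capacity)  # ceiling division
    return max(min_nodes, min(max_nodes, needed))
```

At runtime such a function would be re-evaluated as inference generates new triples, growing the Hadoop cluster from the virtualized server pool.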

Dynamic Memory Allocation for Scientific Workflows in Containers (컨테이너 환경에서의 과학 워크플로우를 위한 동적 메모리 할당)

  • Adufu, Theodora;Choi, Jieun;Kim, Yoonhee
    • Journal of KIISE, v.44 no.5, pp.439-448, 2017
  • The workloads of large high-performance computing (HPC) scientific applications are steadily becoming "bursty" due to variable resource demands throughout their execution life-cycles. However, the over-provisioning of virtual resources for optimal performance during execution remains a key challenge in the scheduling of scientific HPC applications. While over-provisioning of virtual resources guarantees peak performance of scientific applications in virtualized environments, it results in increased amounts of idle resources that are unavailable for use by other applications. Herein, we propose a memory resource reconfiguration approach that allows the quick release of idle memory resources for new applications in OS-level virtualized systems, based on the applications' resource-usage profile data. We deployed a scientific workflow application in Docker, a lightweight OS-level virtualization system. In the proposed approach, memory allocation is fine-tuned for containers at each stage of the workflow's execution life-cycle. Thus, overall memory resource utilization is improved.
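
The reconfiguration idea can be sketched as deriving a per-stage memory limit from each stage's profiled peak plus headroom, rather than provisioning the whole-workflow peak up front. The profile numbers and the 20% headroom below are hypothetical; in a Docker deployment the computed limit could be applied with `docker update --memory`.

```python
# Sketch of per-stage memory reconfiguration from profile data. Stage names,
# profiled peaks (MB), and the headroom percentage are hypothetical.

PROFILE_MB = {"preprocess": 800, "simulate": 3200, "postprocess": 600}

def stage_limit_mb(stage, headroom_pct=20):
    """Memory limit for a workflow stage: profiled peak plus headroom."""
    return PROFILE_MB[stage] * (100 + headroom_pct) // 100

def saved_vs_peak_mb(stage, headroom_pct=20):
    """Idle memory released compared with provisioning the workflow-wide peak."""
    peak_limit = max(PROFILE_MB.values()) * (100 + headroom_pct) // 100
    return peak_limit - stage_limit_mb(stage, headroom_pct)
```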

A Virtualized Kernel for Effective Memory Test (효과적인 메모리 테스트를 위한 가상화 저널)

  • Park, Hee-Kwon;Youn, Dea-Seok;Choi, Jong-Moo
    • Journal of KIISE: Computer Systems and Theory, v.34 no.12, pp.618-629, 2007
  • In this paper, we propose an effective memory test environment, called a virtualized kernel, for 64-bit multi-core computing environments. Here, effectiveness means that we can test all of the physical memory space, even the memory space occupied by the kernel itself, without rebooting. To obtain this capability, our virtualized kernel provides four mechanisms. The first is direct access to physical memory in both kernel and user mode, which allows applying various test patterns to any region of physical memory. The second is kernel virtualization, so that two or more kernel images can run at different locations in physical memory. The third is isolating the memory spaces used by different instances of the virtualized kernel. The final one is kernel hibernation, which enables context switches between kernels. We have implemented the proposed virtualized kernel by modifying Linux kernel 2.6.18 running on an Intel Xeon system that has two 64-bit dual-core CPUs with hyper-threading technology and 2 GB of main memory. Experimental results have shown that two instances of the virtualized kernel run at different locations in physical memory and that kernel hibernation works as designed. As a result, every region of physical memory can be tested without rebooting.

Building up the foundation for the elderly apparel industry through the development on shirt sloper of elderly obese males - Applying CLO 3D program - (노년 비만남성의 셔츠원형 개발을 통한 실버 의류산업 활성화 기반 구축 - CLO 3D 가상착의 시스템 활용 -)

  • Seong, Ok jin;Kim, Sook jin
    • The Research Journal of the Costume Culture, v.28 no.3, pp.299-312, 2020
  • The purpose of this study is to create a shirt sloper suitable for elderly male body shapes by producing virtual models using a 3D virtualization program, making a torso prototype using the Yuka CAD system, and employing 3D simulation to virtualize and calibrate the model. First, the following three types of obese dummies are implemented through the CLO 3D program: Type 1 exhibits body fat in the lower body; Type 2 exhibits an obese abdomen; and Type 3 displays a balanced form of obesity. Second, for the design of the shirt pattern, the following are calculated: waist back length (measured value+1), back armhole depth (C/10+12+3+0.5~1.5), front armhole depth (back armhole depth -0~1), front interscye (2C/10-1+0.5-0.5), armscye depth (C/10+2+3.5+0.5), back interscye (2C/10-1+1), front chest C (C/4+2.5+1), back chest C (C/4+2.5-1), front hem C (C/4+2.5+1(+2)), back hem C (C/4+2.5-1(+2)), cap height (AH/3-5), and biceps width (front AH-1, back AH-1). Third, fit problems found in the virtual try-on of the shirt pattern are resolved by increasing the front and back armhole depths, and the front and rear wrinkles are improved by adding a back armhole dart. The front hem lift and lateral pull caused by the protrusion of the abdomen are corrected by increasing the ease of the chest, waist C, and hip C, with the appearance improved by balanced ease distribution across the front, back, and side panels. The improved retail pattern, with an increase in the front armhole C, was balanced on the torso plate.
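
The drafting formulas above translate directly into a calculation from chest circumference C (in cm). Where the paper gives a range (e.g. +0.5~1.5), a single illustrative value is chosen below, and the example input C = 100 is hypothetical; only the formulas themselves come from the abstract.

```python
# Direct transcription of the abstract's drafting formulas, with one value
# chosen from each stated range. C is chest circumference in cm.

def shirt_measurements(C, ease=1.0):
    back_armhole_depth = C / 10 + 12 + 3 + ease       # C/10+12+3+0.5~1.5
    return {
        "back_armhole_depth": back_armhole_depth,
        "front_armhole_depth": back_armhole_depth,    # back depth -0~1 (0 chosen)
        "front_interscye": 2 * C / 10 - 1 + 0.5 - 0.5,
        "back_interscye": 2 * C / 10 - 1 + 1,
        "armscye_depth": C / 10 + 2 + 3.5 + 0.5,
        "front_chest": C / 4 + 2.5 + 1,
        "back_chest": C / 4 + 2.5 - 1,
    }
```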

A scheme of Docker-based Version Control for Open Source Project (오픈 소스 프로젝트를 위한 도커 기반 버전 관리 기법)

  • Lee, Yong-Jeon;Rim, Seong-Rak
    • Journal of the Korea Academia-Industrial cooperation Society, v.17 no.2, pp.8-14, 2016
  • When open source projects are developed by multiple developers, version control systems (VCSs), which manage the different versions of the same files in use, are very useful tools. However, because most conventional VCSs (SVN, Git, etc.) mainly manage the modification history of source code or documents, each developer must rebuild the development environment whenever it is modified, which is inconvenient. To overcome this inconvenience, this paper suggests a scheme of version control (VC) for open source projects (OSP). The basic concept of the suggested scheme is that an image including the development environment is created and managed as a new version using Docker, a container-based virtualization tool. To review the functional appropriateness of the suggested scheme, after installing Docker on hosts running different OSs (Ubuntu 12.04, CentOS 7), this study tested version control covering the modification history of the development environment and evaluated it by comparison with conventional VCSs. The results show that the suggested scheme is a convenient VC scheme for OSP.
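
The basic concept can be modeled as a version store whose entries snapshot both the sources and the development environment, so checking out a version restores both at once. The class, tag naming, and in-memory registry below are hypothetical; in practice each version would be a real Docker image produced with `docker build` or `docker commit`.

```python
# Minimal model of image-based version control: each version captures the
# development environment together with the sources. Names are hypothetical;
# real versions would be Docker images identified by these tags.

class ImageVersionStore:
    def __init__(self, project):
        self.project = project
        self.versions = []  # list of (tag, environment, sources)

    def commit(self, environment, sources):
        """Record a new version as an image tag; return the tag."""
        tag = f"{self.project}:v{len(self.versions) + 1}"
        self.versions.append((tag, dict(environment), dict(sources)))
        return tag

    def checkout(self, tag):
        """Restore both the environment and the sources of a version."""
        for t, env, src in self.versions:
            if t == tag:
                return env, src
        raise KeyError(tag)
```

This is what distinguishes the scheme from SVN or Git: rolling back to `v1` also rolls back the toolchain recorded in `environment`, not just the files.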