• Title/Summary/Keyword: Cloud Computing Services


Performance Test of Asynchronous Process of OGC WPS 2.0: A Case Study for Geo-based Image Processing

  • Yoon, Gooseon;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.33 no.4 / pp.391-400 / 2017
  • Geo-based application services linked with the Open Geospatial Consortium (OGC) Web Processing Service (WPS) protocol have been regarded as an important standardized framework for building a digital earth in web environments. The WPS protocol provides interface standards for the analysis functionalities of geo-spatial processing in web-based service systems. Despite its significance, there have been few performance tests of WPS applications. The main motivation of this study is to perform a comparative performance test on WPS standards. A test system, composed of WPS servers, a WPS framework, a data management module, a geo-based data processing module, and a client-side system, was implemented with a fully open-source stack. In this system, two geo-based image processing functions, cloud detection and gradient magnitude computation, were applied. The performance of three server environments, non-WPS, synchronous WPS 1.0, and asynchronous WPS 2.0, was tested using 100 and 400 threads corresponding to client users of a web-based application service. At 100 threads, the average response time to complete the processing of each thread was within an adjacent range across the three environments. At 400 threads, the WPS 2.0 case showed distinctly higher performance in response time than the smaller-thread cases. It is thought that WPS 2.0 helps avoid performance problems such as time delay or thread accumulation.
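To make the asynchronous interaction pattern concrete, the sketch below submits a WPS 2.0 Execute request in asynchronous mode over HTTP and polls GetStatus until the job finishes. The endpoint URL, process identifier, and input names are hypothetical placeholders, not the open-source test system used in the paper.

```python
# Minimal sketch of a WPS 2.0 asynchronous Execute/GetStatus cycle.
# The service URL, process identifier, and input names are illustrative placeholders.
import time
import requests
import xml.etree.ElementTree as ET

WPS_URL = "http://example.org/wps"  # hypothetical WPS 2.0 endpoint

EXECUTE_REQUEST = """<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute service="WPS" version="2.0.0" mode="async" response="document"
    xmlns:wps="http://www.opengis.net/wps/2.0"
    xmlns:ows="http://www.opengis.net/ows/2.0">
  <ows:Identifier>geo:gradientMagnitude</ows:Identifier>
  <wps:Input id="inputImage">
    <wps:Reference xlink:href="http://example.org/data/scene.tif"
        xmlns:xlink="http://www.w3.org/1999/xlink"/>
  </wps:Input>
  <wps:Output id="result" transmission="reference"/>
</wps:Execute>"""

NS = {"wps": "http://www.opengis.net/wps/2.0"}

def submit_async_job():
    """Submit the Execute request and return the JobID from the StatusInfo document."""
    resp = requests.post(WPS_URL, data=EXECUTE_REQUEST,
                         headers={"Content-Type": "text/xml"})
    resp.raise_for_status()
    status_doc = ET.fromstring(resp.content)
    return status_doc.findtext("wps:JobID", namespaces=NS)

def poll_until_done(job_id, interval=2.0):
    """Poll GetStatus until the job leaves the Accepted/Running states."""
    while True:
        resp = requests.get(WPS_URL, params={
            "service": "WPS", "version": "2.0.0",
            "request": "GetStatus", "jobID": job_id})
        resp.raise_for_status()
        state = ET.fromstring(resp.content).findtext("wps:Status", namespaces=NS)
        if state not in ("Accepted", "Running"):
            return state
        time.sleep(interval)

if __name__ == "__main__":
    job = submit_async_job()
    print("final status:", poll_until_done(job))
```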

Extracting Graphics Information for Better Video Compression

  • Hong, Kang Woon;Ryu, Won;Choi, Jun Kyun;Lim, Choong-Gyoo
    • ETRI Journal / v.37 no.4 / pp.743-751 / 2015
  • Cloud gaming services are heavily dependent on the efficiency of real-time video streaming technology owing to the limited bandwidths of the wired or wireless networks through which consecutive frame images are delivered to gamers. Video compression algorithms typically take advantage of similarities among video frame images or within a single video frame image. This paper presents a method for computing and extracting both graphics information and an object's boundary from consecutive frame images of a game application. The method allows video compression algorithms to determine the positions and sizes of similar image blocks, which, in turn, helps achieve better video compression ratios. The proposed method can be easily implemented using function call interception, a programmable graphics pipeline, and off-screen rendering. It is implemented using the widely used Direct3D API and applied to a well-known sample application to verify its feasibility and analyze its performance. The proposed method computes various kinds of graphics information with minimal overhead.
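The paper hooks the Direct3D API in native code; purely as an illustration of the interception idea, the Python sketch below wraps a stand-in renderer's draw call so that each object's screen-space bounding box is recorded as a hint a block-matching encoder could reuse. All class and function names here are invented for the example.

```python
# Illustrative function-call interception: wrap a renderer's draw call so that
# per-object bounding boxes are recorded each frame. Names are hypothetical;
# the paper's implementation hooks Direct3D in native code.
import functools

class SceneObject:
    def __init__(self, bounding_box):
        self.bounding_box = bounding_box  # (x, y, width, height) in screen space

class Renderer:
    """Stand-in for a graphics API; draw() renders one scene object."""
    def draw(self, obj):
        pass  # real rendering would happen here

captured = []  # (frame number, object id, bounding box) records for the encoder

def intercept_draw(renderer, current_frame):
    """Replace renderer.draw with a wrapper that logs graphics info, then forwards the call."""
    original = renderer.draw

    @functools.wraps(original)
    def wrapped(obj):
        captured.append((current_frame(), id(obj), obj.bounding_box))
        return original(obj)

    renderer.draw = wrapped

if __name__ == "__main__":
    frame = 0
    renderer = Renderer()
    intercept_draw(renderer, lambda: frame)
    renderer.draw(SceneObject((10, 20, 64, 64)))
    print(captured)  # block positions/sizes a video encoder could reuse as hints
```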

A Study on the Enhancement Process of the Telecommunication Network Management using Big Data Analysis (Big Data 분석을 활용한 통신망 관리 시스템의 개선방안에 관한 연구)

  • Koo, Sung-Hwan;Shin, Min-Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.12 / pp.6060-6070 / 2012
  • A key requirement of the Real-Time Enterprise (RTE) is the ability to respond and adapt quickly to changes in a firm's internal and external situations, including changes in the market and in customers' needs. Recently, big data processing technology that supports such rapid change has drawn attention. As wired and wireless communication networks evolve at an accelerating rate, it is especially critical to provide strong security monitoring and stable services through real-time processing of massive communication traffic. By applying big data processing technology based on a cloud computing architecture, this paper addresses the managerial problems of telecommunication service providers and discusses how to operate the network management system effectively.
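As one hedged illustration of the kind of real-time traffic processing discussed above (not the system proposed in the paper), the sketch below aggregates traffic records over a sliding time window and flags hosts whose volume deviates sharply from the window average; the record fields and threshold are invented.

```python
# Illustrative sliding-window aggregation over traffic records (fields invented).
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class TrafficRecord:
    timestamp: float   # seconds
    host: str
    bytes_sent: int

class WindowMonitor:
    def __init__(self, window_seconds=60.0, ratio_threshold=5.0):
        self.window = window_seconds
        self.threshold = ratio_threshold
        self.records = deque()

    def add(self, rec):
        """Add one record, expire old ones, and return hosts that look anomalous."""
        self.records.append(rec)
        while self.records and rec.timestamp - self.records[0].timestamp > self.window:
            self.records.popleft()
        return self._flag()

    def _flag(self):
        totals = defaultdict(int)
        for r in self.records:
            totals[r.host] += r.bytes_sent
        if not totals:
            return []
        mean = sum(totals.values()) / len(totals)
        return [h for h, b in totals.items() if b > self.threshold * mean]

if __name__ == "__main__":
    mon = WindowMonitor()
    mon.add(TrafficRecord(0.0, "hostA", 1_000))
    print(mon.add(TrafficRecord(1.0, "hostB", 50_000)))  # hostB flagged
```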

A method of Securing Mass Storage for SQL Server by Sharing Network Disks - on the Amazon EC2 Windows Environments - (네트워크 디스크를 공유하여 SQL 서버의 대용량 스토리지 확보 방법 - Amazon EC2 Windows 환경에서 -)

  • Kang, Sungwook;Choi, Jungsun;Choi, Jaeyoung
    • Journal of Internet Computing and Services / v.17 no.2 / pp.1-9 / 2016
  • In cloud computing environments, users are provided with infrastructure such as CPU, memory, network, and storage as IaaS (Infrastructure as a Service). However, storage instances cannot support the maximum storage capacity that SQL servers can use, because the capacity of the instances provided by service providers is usually limited. In this paper, we propose a method of securing mass storage capacity for SQL servers by sharing network disks with limited storage capacity. We confirmed through experiments with Amazon EBS on Amazon EC2 Windows environments that it is possible to secure mass storage capacity exceeding the maximum storage capacity provided by a single instance, and that the overall performance of the SQL servers can be improved by increasing disk capacity and performance.
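As a loosely related illustration of provisioning extra network disks on EC2 (not the sharing method proposed in the paper), the sketch below creates and attaches additional EBS volumes to an instance with boto3; the instance ID, availability zone, and sizes are placeholders, and combining the disks into one large volume for SQL Server is not shown.

```python
# Sketch: create and attach several EBS volumes to one EC2 instance with boto3.
# Instance ID, zone, and sizes are placeholders; pooling the disks into a single
# large volume for SQL Server is a separate step not shown here.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical Windows instance
AVAILABILITY_ZONE = "us-east-1a"
VOLUME_SIZE_GIB = 500
DEVICE_NAMES = ["xvdf", "xvdg", "xvdh"]

def attach_extra_volumes():
    volume_ids = []
    for device in DEVICE_NAMES:
        vol = ec2.create_volume(
            AvailabilityZone=AVAILABILITY_ZONE,
            Size=VOLUME_SIZE_GIB,
            VolumeType="gp2",
        )
        volume_id = vol["VolumeId"]
        # Wait until the volume is ready before attaching it.
        ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
        ec2.attach_volume(
            VolumeId=volume_id,
            InstanceId=INSTANCE_ID,
            Device=f"/dev/{device}",
        )
        volume_ids.append(volume_id)
    return volume_ids

if __name__ == "__main__":
    print(attach_extra_volumes())
```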

The Study of Sensor Data Integration for Medical Information Processing in a Cloud Computing (클라우드 컴퓨팅에서 의료 정보 처리를 위한 센서 데이터 통합에 대한 연구)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.285-287 / 2015
  • Recently, the development of sensors and mobile communication devices has opened a number of possibilities in the medical and related fields. However, as this data is generated, it is difficult to match metadata and standard units. Data integration is required to use data generated by sensors of different specifications efficiently. Accordingly, in this paper we propose a method that uses an ontology to integrate the data generated by existing sensors and new sensors. The ontology maps sensors to standard items and also accounts for differences in type and structure. The mapping comprises two parts: data mapping and metadata mapping. The standard items created in this way serve as the data exchange format between services. This can solve the heterogeneity problem caused by diverse sensors.

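A heavily simplified sketch of the data/metadata mapping idea follows: sensor-specific field names and units are mapped onto standard items with a plain dictionary rather than a full ontology, and all field names, units, and conversions are invented for the example.

```python
# Simplified sketch of mapping sensor-specific fields/units to standard items.
# Field names, units, and conversions are invented; a real system would use an ontology.

# Metadata mapping: (sensor field, sensor unit) -> (standard item, converter)
METADATA_MAP = {
    ("bodyTemp", "F"): ("body_temperature_celsius", lambda v: (v - 32) * 5.0 / 9.0),
    ("temp_c", "C"): ("body_temperature_celsius", lambda v: float(v)),
    ("hr", "bpm"): ("heart_rate_bpm", lambda v: float(v)),
}

def to_standard(sensor_record, field_units):
    """Data mapping: convert one raw sensor record into standard items."""
    standard = {}
    for field, value in sensor_record.items():
        key = (field, field_units.get(field))
        if key in METADATA_MAP:
            item, convert = METADATA_MAP[key]
            standard[item] = convert(value)
    return standard

if __name__ == "__main__":
    record_a = {"bodyTemp": 98.6, "hr": 72}
    print(to_standard(record_a, {"bodyTemp": "F", "hr": "bpm"}))
    # {'body_temperature_celsius': 37.0, 'heart_rate_bpm': 72.0}
```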

Web-based Distributed Experimental Frame for Discrete Event Simulation System (이산사건 시뮬레이션 시스템을 위한 웹 기반 분산 실험 틀)

  • Jung, Inho;Choi, Jaewoong;Choi, Changbeom
    • Journal of the Korea Society for Simulation / v.26 no.2 / pp.9-17 / 2017
  • Social phenomena have become more complicated than in past decades, and simulation engineers need more computing power to study them. Therefore, the computational resource requirements of modeling and simulation environments are increasing. From the simulation perspective, it is necessary to allocate computational resources flexibly so that simulations can be performed within the available budget. To accommodate these requirements, cloud services have emerged as an environment in which computing resources can be used flexibly. This paper proposes a web-based simulation framework consisting of a front-end that reconstructs the simulation model on the web and a back-end that executes the discrete event simulation. The paper also presents a case study showing that the web-based simulation framework achieves better overall runtime than a standalone simulation framework.
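For readers unfamiliar with the back-end's role, the sketch below is a minimal discrete event simulation loop driven by a priority queue of timestamped events; it illustrates the general mechanism only and is not the framework described in the paper.

```python
# Minimal discrete event simulation loop: events are processed in time order
# from a priority queue; the model here (a single-server queue) is illustrative.
import heapq
import random

def run_single_server_queue(n_customers=5, seed=1):
    random.seed(seed)
    events = []  # heap of (time, sequence, kind)
    seq = 0
    # Schedule all arrivals up front.
    t = 0.0
    for _ in range(n_customers):
        t += random.expovariate(1.0)           # inter-arrival time
        heapq.heappush(events, (t, seq, "arrival")); seq += 1

    server_free_at = 0.0
    while events:
        time_now, _, kind = heapq.heappop(events)
        if kind == "arrival":
            start = max(time_now, server_free_at)
            service = random.expovariate(1.5)  # service time
            server_free_at = start + service
            heapq.heappush(events, (server_free_at, seq, "departure")); seq += 1
        else:
            print(f"departure at t={time_now:.2f}")

if __name__ == "__main__":
    run_single_server_queue()
```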

A Study in the Efficient Collection and Integration of a Sensed Data in a Cloud Computing Environment (클라우드 컴퓨팅 환경에서 센싱된 데이터의 효율적 수집 및 통합에 관한 연구)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.324-325 / 2016
  • A sensor network-based service collects data using sensors, becomes aware of the situation by analyzing that data, and the service provider then provides a service suited to the user through context awareness. However, as this data is generated, it is difficult to match metadata and standard units. Data integration is required to use data generated by sensors of different specifications efficiently. Accordingly, in this paper we propose a method that uses an ontology to integrate the data generated by existing sensors and new sensors. The ontology maps sensors to standard items and also accounts for differences in type and structure. The mapping comprises two parts: data mapping and metadata mapping. The standard items created in this way serve as the data exchange format between services. This can solve the heterogeneity problem caused by diverse sensors.


Fine Grained Resource Scaling Approach for Virtualized Environment (가상화 환경에서 세밀한 자원 활용률 적용을 위한 스케일 기법)

  • Lee, Donhyuck;Oh, Sangyoon
    • Journal of the Korea Society of Computer and Information / v.18 no.7 / pp.11-21 / 2013
  • Operating large-scale computing resources such as data centers has recently become easier thanks to virtualization technology, which virtualizes servers and enables flexible resource provisioning. Most public cloud services provide automatic scaling in the form of scale-in or scale-out, and these approaches work well to satisfy users' service level agreements (SLAs). However, a different scaling approach is required to operate private clouds, which have a smaller amount of computing resources than the vast resources of public clouds. In this paper, we propose a hybrid server scaling architecture and related algorithms that use both scale-in and scale-out to achieve a higher resource utilization rate for private clouds. We use dynamic resource allocation and live migration to run the proposed algorithm. The proposed system aims to provide fine-grained resource scaling in steps, so that private cloud systems can keep their services stable and reduce server management cost by optimizing server utilization. The experimental results show that our approach achieves better resource utilization than a scale-out approach based on the number of users.
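As a rough sketch of stepwise, utilization-driven scaling decisions (not the paper's algorithm), the function below compares cluster-wide CPU utilization against upper and lower bounds and prefers a fine-grained per-server adjustment before adding or removing a server; all thresholds, step sizes, and action names are invented.

```python
# Sketch of a stepwise hybrid scaling decision: try a per-server adjustment first,
# fall back to adding/removing a server. Thresholds and step sizes are invented.
from dataclasses import dataclass

@dataclass
class Server:
    cpu_used: float       # cores in use
    cpu_allocated: float  # cores currently allocated to its VMs
    cpu_capacity: float   # physical cores

UPPER, LOWER, STEP = 0.80, 0.30, 1.0  # utilization bounds and core step

def scaling_decision(servers):
    used = sum(s.cpu_used for s in servers)
    allocated = sum(s.cpu_allocated for s in servers)
    utilization = used / allocated if allocated else 0.0

    if utilization > UPPER:
        # Prefer a fine-grained step: allocate more cores on a host with headroom.
        for s in servers:
            if s.cpu_allocated + STEP <= s.cpu_capacity:
                return ("scale-up", s, STEP)
        return ("scale-out", None, 1)          # no headroom: add a server
    if utilization < LOWER and len(servers) > 1:
        # Shrink allocations first; consolidate only when allocations are minimal.
        for s in servers:
            if s.cpu_allocated - STEP >= s.cpu_used:
                return ("scale-down", s, STEP)
        return ("scale-in", None, 1)           # migrate VMs off and power down a host
    return ("hold", None, 0)

if __name__ == "__main__":
    cluster = [Server(6.5, 7.0, 8.0), Server(7.2, 8.0, 8.0)]
    print(scaling_decision(cluster))
```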

Adaptive VM Allocation and Migration Approach using Fuzzy Classification and Dynamic Threshold (퍼지 분류 및 동적 임계 값을 사용한 적응형 VM 할당 및 마이그레이션 방식)

  • Mateo, John Cristopher A.;Lee, Jaewan
    • Journal of Internet Computing and Services / v.18 no.4 / pp.51-59 / 2017
  • With the growth of cloud computing, it is important to consider resource management techniques that minimize the overall cost of management. In cloud environments, each host's utilization and the virtual machines' requests, which depend on user preferences, are dynamic in nature. To solve this problem, efficient methods of allocating virtual machines to hosts should be studied for cases where the classification of virtual machines and hosts is undetermined. To reduce the number of active hosts and thereby energy consumption, thresholds can be used to trigger the migration of VMs to other hosts. By using fuzzy logic to classify the resource requests of virtual machines and the resource utilization of hosts, we propose an adaptive VM allocation and migration approach. The allocation strategy classifies VMs according to their resource requests and then assigns each VM to the host with the lowest resource utilization. For migrating VMs from overutilized hosts, the resource utilization of each host is used to compute an upper threshold, and the virtual machines that contribute most to a host's high resource utilization are chosen for migration. We evaluated our work through simulations, and the results show that our approach performs significantly better than other VM allocation and migration strategies.
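A compact sketch of the two ingredients, triangular fuzzy membership for classifying a VM's CPU request and a dynamic upper threshold for spotting overutilized hosts, is given below; the membership ranges and the mean-plus-deviation threshold are illustrative choices rather than the paper's exact parameters.

```python
# Sketch: fuzzy classification of VM CPU requests plus a dynamic host threshold.
# Membership ranges and the threshold formula are illustrative choices.
import statistics

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_request(cpu_request):
    """Return the fuzzy class ('small'/'medium'/'large') with the highest membership."""
    memberships = {
        "small": triangular(cpu_request, -0.1, 0.0, 0.5),
        "medium": triangular(cpu_request, 0.2, 0.5, 0.8),
        "large": triangular(cpu_request, 0.5, 1.0, 1.1),
    }
    return max(memberships, key=memberships.get)

def dynamic_upper_threshold(host_utilizations, k=1.0):
    """Mean plus k standard deviations of current host utilization, capped at 1.0."""
    mean = statistics.mean(host_utilizations)
    spread = statistics.pstdev(host_utilizations)
    return min(1.0, mean + k * spread)

def place_vm(host_utilizations):
    """Pick the least-utilized host (index), as in a lowest-utilization policy."""
    return min(range(len(host_utilizations)), key=host_utilizations.__getitem__)

if __name__ == "__main__":
    hosts = [0.35, 0.55, 0.90]
    print(classify_request(0.6), dynamic_upper_threshold(hosts), place_vm(hosts))
```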

Analysis of Priority of Technical Factors for Enabling Cloud Computing Services (클라우드 컴퓨팅 서비스 활성화를 위한 기술적 측면 특성요인의 중요도 우선순위 분석)

  • Kang, Da-Yeon;Hwang, Jong-Ho
    • Journal of Digital Convergence / v.17 no.8 / pp.123-130 / 2019
  • In the full-fledged Internet of Things era, various types of information will be gathered through IoT devices, and the vast amount of collected information will yield new information through the analysis process. A flexible and scalable cloud computing system is advantageous for storing this generated information effectively. The main determinants of accepting such a cloud system are therefore viewed as motivating factors (economy, efficiency, etc.) and hindering factors (transition costs, security issues, etc.), and the purpose of this study is to determine which detailed factors play a major role in decisions to accept the new system. The factors needed to determine the main priorities are defined as the technical determinants of system acceptance obtained through a literature review; a questionnaire is prepared based on the derived factors, and a survey is conducted with the relevant experts. An AHP analysis then derives the final priorities by performing pairwise comparisons between the components of each decision unit. The results of this study will serve as an important basis for decisions on the acceptance (enabling) of the technology.
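The pairwise-comparison step of AHP can be made concrete with a small worked example: the sketch below derives priority weights from a 3x3 comparison matrix via the principal eigenvector (power iteration) and reports the consistency ratio; the comparison values are invented for illustration.

```python
# Sketch: AHP priority weights from a pairwise comparison matrix (values invented).
import numpy as np

# Saaty's random consistency index for n = 1..5 (standard published values).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_priorities(matrix):
    """Return (weights, consistency ratio) for a reciprocal pairwise comparison matrix."""
    a = np.asarray(matrix, dtype=float)
    n = a.shape[0]
    # Principal eigenvector by power iteration.
    w = np.ones(n) / n
    for _ in range(100):
        w_next = a @ w
        w_next /= w_next.sum()
        if np.allclose(w, w_next, atol=1e-10):
            break
        w = w_next
    lam_max = (a @ w / w).mean()        # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)        # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0
    return w, cr

if __name__ == "__main__":
    # Hypothetical comparisons among three technical factors.
    comparisons = [
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ]
    weights, cr = ahp_priorities(comparisons)
    print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```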