• Title/Summary/Keyword: Cloud Center


Price Competition in Duopoly Multimedia Cloud Service Market (복점 멀티미디어 클라우드 서비스 시장에서의 가격 경쟁)

  • Lee, Doo Ho
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.4
    • /
    • pp.79-90
    • /
    • 2019
  • As an increasing number of cloud service providers begin to offer cloud computing services, they form a competitive market in which they compete for users. Due to different resource configurations and service workloads, users may observe different response times for their service requests and experience different levels of service quality. To compete for cloud users, it is crucial for each cloud service provider to determine an optimal price that best corresponds to its service quality while also guaranteeing maximum profit. To achieve this goal, the underlying rationale and characteristics of this competitive market must be clarified. In this paper, we analyze price competition in a multimedia cloud service market with two service providers. We characterize the nature of non-cooperative games in a duopoly multimedia cloud service market, with the goal of capturing how each cloud service provider determines its optimal price to compete with the other and maximize its own profit. To do this, we introduce a queueing model that characterizes the service process in a multimedia cloud data center. Based on the performance measures of the proposed queueing model, we formulate a price competition problem in a duopoly multimedia cloud service market. By solving this problem, we obtain the optimal equilibrium prices.
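
The interplay the abstract describes can be sketched as a toy best-response game. In this minimal model, each provider is an M/M/1 queue, and users split demand so that the "full price" (price plus a delay cost) is equal across providers, a Wardrop-style split; the specific demand model, the parameter values, and the grid/bisection search are illustrative assumptions, not the paper's actual formulation.

```python
# Toy duopoly price game: each provider is an M/M/1 queue with service
# rate mu_i; users split total demand so that p + alpha * W is equalized,
# where W = 1/(mu - lam) is the M/M/1 mean response time. All parameters
# are illustrative, not taken from the paper.

def split_demand(p1, p2, mu1, mu2, total, alpha=1.0):
    """Bisection for lam1 in (0, total) equalizing full price across providers."""
    def full_price(p, mu, lam):
        return p + alpha / (mu - lam)  # price + delay cost
    lo, hi = 1e-6, total - 1e-6
    for _ in range(40):
        mid = (lo + hi) / 2
        if full_price(p1, mu1, mid) > full_price(p2, mu2, total - mid):
            hi = mid  # provider 1 too expensive at this load; shift demand away
        else:
            lo = mid
    return (lo + hi) / 2

def best_response(p_other, mu_self, mu_other, total):
    """Grid-search the price maximizing own profit p * lam."""
    best_p, best_profit = 0.0, -1.0
    for i in range(1, 400):
        p = i * 0.01
        lam = split_demand(p, p_other, mu_self, mu_other, total)
        profit = p * lam
        if profit > best_profit:
            best_p, best_profit = p, profit
    return best_p

mu1, mu2, total = 3.0, 2.0, 1.5
p1 = p2 = 1.0
for _ in range(15):  # iterate best responses toward an equilibrium
    p1 = best_response(p2, mu1, mu2, total)
    p2 = best_response(p1, mu2, mu1, total)
print("equilibrium prices:", round(p1, 2), round(p2, 2))
```

The faster provider (higher mu) can sustain a higher price for the same full price, which is the qualitative effect the paper's equilibrium analysis quantifies.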

Personalized Healthcare System for Chronic Disease Care in Cloud Environment

  • Jeong, Sangjin;Kim, Yong-Woon;Youn, Chan-Hyun
    • ETRI Journal
    • /
    • v.36 no.5
    • /
    • pp.730-740
    • /
    • 2014
  • The rapid increase in the number of patients with chronic diseases is an important public healthcare issue in many countries, and it has accelerated studies on healthcare systems that can extract and process patient data whenever and wherever needed. A patient with a chronic disease conducts self-management in an out-of-hospital environment, particularly at home, so it is important to provide integrated and personalized healthcare services for effective care. To help provide effective care for chronic disease patients, we propose a service flow and a new cloud-based personalized healthcare system architecture supporting both at-home and at-hospital environments. The system considers the different characteristics of the at-hospital and at-home environments and provides various chronic disease care services. A prototype implementation and a predicted cost model are provided to show the effectiveness of the system. The proposed personalized healthcare system can support cost-effective disease care in an at-hospital environment and personalized self-management of chronic disease in an at-home environment.

Provably-Secure Public Auditing with Deduplication

  • Kim, Dongmin;Jeong, Ik Rae
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2219-2236
    • /
    • 2017
  • With cloud storage services, users can handle an enormous amount of data in an efficient manner. However, due to the widespread popularization of cloud storage, users have raised concerns about the integrity of outsourced data, since they no longer possess the data locally. To address these concerns, many auditing schemes have been proposed that allow users to check the integrity of their outsourced data without retrieving it in full. Yuan and Yu proposed a public auditing scheme with a deduplication property, in which the cloud server does not store data duplicated between users. In this paper, we analyze the weaknesses of Yuan and Yu's scheme and present modifications that improve its security. We also define two types of adversaries and prove that our proposed scheme is secure against them under formal security models.

THE $^{13}CO$ DISTRIBUTION AND CORRELATION WITH EXTINCTION IN L134

  • Minn, Young Key;Lee, Hye Kyung
    • Journal of The Korean Astronomical Society
    • /
    • v.29 no.1
    • /
    • pp.75-81
    • /
    • 1996
  • We mapped the $^{13}CO$ line in the dark nebula L134 using the 14-m Taeduck radio telescope with a 57 arcsec beam and one-beam spacing. The cloud has a spherical shape with an intensity peak ridge extending from northwest to southeast. Both the half-width and the radial velocity of the lines peak near the cloud center, and the radial velocity decreases from the cloud center toward the north and south. The integrated line intensity distributions in the space-velocity plane show some structure and a velocity gradient. The $^{13}CO$, $H_2CO$, and dark clouds are closely related in space in shape, outer boundary, and intensity peak positions. The $^{13}CO$ integrated line intensity is linearly proportional to the visual extinction.


Energy-aware Multi-dimensional Resource Allocation Algorithm in Cloud Data Center

  • Nie, Jiawei;Luo, Juan;Yin, Luxiu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.9
    • /
    • pp.4320-4333
    • /
    • 2017
  • Energy-efficient virtual resource allocation has become a hot research topic in cloud computing. However, most existing allocation schemes cannot ensure that each type of resource is fully utilized. To solve this problem, this paper proposes a virtual machine (VM) allocation algorithm based on multi-dimensional resources, considering the diversity of users' requests. First, we analyze the usage of each resource dimension of the physical machines (PMs) and build a D-dimensional resource state model. Second, we introduce an energy-resource state metric (PAR) and propose an energy-aware multi-dimensional resource allocation algorithm, called MRBEA, that allocates resources according to the resource state and energy consumption of the PMs. Third, we validate the effectiveness of the proposed algorithm on real-world datasets. Experimental results show that MRBEA performs better in terms of energy consumption, SLA violations, and the number of VM migrations.
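
The multi-dimensional placement idea can be sketched as a greedy heuristic. This stand-in scores each feasible PM by how tightly and how evenly a VM's placement would fill its CPU, memory, and bandwidth dimensions; the actual MRBEA metric (PAR) also folds in measured energy consumption, and the scoring function here is an illustrative assumption.

```python
# Greedy multi-dimensional VM placement sketch. Each PM has a capacity
# vector (CPU cores, GB RAM, Gbps); a VM goes to the feasible PM whose
# post-placement utilizations are high on average but balanced across
# dimensions, so no single resource is exhausted while others sit idle.

def place(vm, pms):
    """vm: demand vector; pms: list of dicts with 'cap' and 'used' vectors.
    Returns the index of the chosen PM, or None if no PM fits."""
    best_i, best_score = None, None
    for i, pm in enumerate(pms):
        new_used = [u + d for u, d in zip(pm["used"], vm)]
        if any(u > c for u, c in zip(new_used, pm["cap"])):
            continue  # infeasible in at least one dimension
        utils = [u / c for u, c in zip(new_used, pm["cap"])]
        mean = sum(utils) / len(utils)
        imbalance = max(utils) - min(utils)
        score = mean - imbalance  # tight packing, low skew
        if best_score is None or score > best_score:
            best_i, best_score = i, score
    return best_i

pms = [
    {"cap": [16, 64, 10], "used": [8, 16, 2]},  # CPU-heavy so far
    {"cap": [16, 64, 10], "used": [2, 40, 1]},  # memory-heavy so far
]
for vm in [[4, 8, 1], [2, 16, 2], [8, 8, 4]]:
    i = place(vm, pms)
    pms[i]["used"] = [u + d for u, d in zip(pms[i]["used"], vm)]
    print("VM", vm, "-> PM", i)
```

Penalizing imbalance is what distinguishes multi-dimensional schemes from single-metric bin packing: a CPU-hungry VM is steered toward the memory-heavy PM, keeping both resource types usable.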

The Parallax Correction to Improve Cloud Location Error of Geostationary Meteorological Satellite Data (정지궤도 기상위성자료의 구름위치오류 개선을 위한 시차보정)

  • Lee, Won-Seok;Kim, Young-Seup;Kim, Do-Hyeong;Chung, Chu-Yong
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.2
    • /
    • pp.99-105
    • /
    • 2011
  • This research presents a method to correct cloud location errors caused by parallax and shows how much the method reduces the position error. The procedure has two steps: the first step retrieves a corrected satellite zenith angle from the original satellite zenith angle; the second step adjusts the location of the cloud using the azimuth angle and the corrected satellite zenith angle from the first step. The position error due to parallax can be as large as 60 km for a satellite zenith angle of 70 degrees and a cloud height of 15 km. Validation against MODIS (Moderate Resolution Imaging Spectroradiometer) data shows that the correction method properly adjusts the original cloud position error and can increase the utilization of geostationary satellite data.
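
The scale of the error the abstract quotes can be illustrated with the first-order geometry. A flat-surface approximation puts the apparent cloud displacement at h·tan(θ) for cloud-top height h and local satellite zenith angle θ; the paper's full correction also adjusts the zenith angle for Earth curvature, which enlarges the shift at high zenith angles toward the ~60 km it reports for θ = 70° and h = 15 km. The function below is an illustrative sketch, not the paper's algorithm.

```python
# First-order parallax displacement of a cloud seen from a geostationary
# satellite (flat-surface approximation, ignoring Earth curvature).
import math

def parallax_shift_km(cloud_height_km, zenith_deg):
    """Horizontal offset between apparent and true cloud position."""
    return cloud_height_km * math.tan(math.radians(zenith_deg))

for zen in (30, 50, 70):
    shift = parallax_shift_km(15, zen)
    print(f"h=15 km, zenith={zen} deg -> shift ~ {shift:.1f} km")
```

Even this simplified geometry already yields a ~41 km shift at θ = 70° for a 15 km cloud top, showing why uncorrected high-latitude cloud positions from a geostationary platform are unusable for precise applications.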

Design and Implementation of System for Estimating Diameter at Breast Height and Tree Height using LiDAR point cloud data

  • Yim, Jong-Su;Kim, Dong-Hyeon;Ko, Chi-Ung;Kim, Dong-Geun;Cho, Hyung-Ju
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.2
    • /
    • pp.99-110
    • /
    • 2023
  • In this paper, we propose a system, termed ForestLi, that can accurately estimate diameter at breast height (DBH) and tree height using LiDAR point cloud data. The ForestLi system processes LiDAR point cloud data through the following steps: downsampling, outlier removal, ground segmentation, ground height normalization, stem extraction, individual tree segmentation, and DBH and tree height measurement. A commercial system for processing LiDAR point cloud data, such as LiDAR360, requires the user to manually correct errors in lower-vegetation and individual tree segmentation. In contrast, the ForestLi system automatically removes LiDAR points that correspond to lower vegetation, improving the accuracy of DBH and tree height estimation. This enables the ForestLi system to reduce the total processing time as well as enhance the accuracy of measuring DBH and tree height compared to the LiDAR360 system. We performed an empirical study confirming that the ForestLi system outperforms the LiDAR360 system in terms of total processing time and the accuracy of DBH and tree height measurements.
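
Two of the early pipeline stages listed above can be sketched on a toy point cloud: voxel-grid downsampling and ground height normalization. The voxel size, grid cell size, and per-cell minimum-z ground model below are illustrative simplifications of ForestLi's real ground-segmentation step, not its implementation.

```python
# Sketch of two early stages of a LiDAR forestry pipeline:
# (1) voxel-grid downsampling: keep one centroid per occupied voxel;
# (2) ground height normalization: subtract a per-cell ground estimate
#     (here simply the minimum z in each 2-D grid cell) from every point,
#     so tree heights can be read directly from the z coordinate.
import numpy as np

def voxel_downsample(points, voxel=0.5):
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.ravel(inv)
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):  # centroid of the points in each voxel
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

def normalize_height(points, cell=2.0):
    keys = np.floor(points[:, :2] / cell).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.ravel(inv)
    ground = np.full(inv.max() + 1, np.inf)
    np.minimum.at(ground, inv, points[:, 2])  # per-cell minimum z
    out = points.copy()
    out[:, 2] -= ground[inv]
    return out

rng = np.random.default_rng(0)
pts = rng.uniform([0, 0, 0], [10, 10, 20], size=(5000, 3))
down = voxel_downsample(pts)
norm = normalize_height(down)
print(len(pts), "->", len(down), "points after downsampling")
```

After normalization, a point's z value approximates its height above ground, which is what the later stem-extraction and tree-height stages operate on.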

Outsourcing decryption algorithm of Verifiable transformed ciphertext for data sharing

  • Guangwei Xu;Chen Wang;Shan Li;Xiujin Shi;Xin Luo;Yanglan Gan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.4
    • /
    • pp.998-1019
    • /
    • 2024
  • Mobile cloud computing is a very attractive service paradigm that outsources users' data computing and storage from mobile devices to cloud data centers. To protect data privacy, users often encrypt their data before outsourcing to ensure it can be shared securely. However, the bilinear and power operations involved in encryption and decryption make it impractical for mobile devices with weak computational power and limited network transmission capability to obtain decryption results correctly. To this end, this paper proposes an outsourcing decryption algorithm for verifiable transformed ciphertext. First, the algorithm uses a key-blinding technique to divide the user's private key into two parts, an authorization key and a decryption secret key. Then, after obtaining the authorization key and the user's outsourced decryption request, the cloud data center performs the outsourcing decryption operation to achieve partial decryption of the encrypted data. A verifiable random function prevents the semi-trusted cloud data center from failing to perform the outsourcing decryption operation as required, so that the outsourced decryption is verifiable. Finally, the algorithm uses an authorization period to control the final decryption by the authorized user. Theoretical and experimental analyses show that the proposed algorithm reduces the computational overhead of ciphertext decryption while ensuring the verifiability of outsourcing decryption.
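
The key-splitting idea behind outsourced decryption can be illustrated with textbook ElGamal over a prime-order group. This is NOT the paper's pairing-based construction, and the parameters are toy-sized and insecure; it only shows the general pattern: the secret key is split into a cloud share and a user share, the cloud performs the heavy partial exponentiation, and the lightweight device finishes decryption.

```python
# Toy split-key outsourced decryption with textbook ElGamal.
# Secret key x is split as x = x1 + x2 (mod group order); the cloud,
# holding only x1, computes the expensive c1^x1; the user combines it
# with its own share x2 to recover the plaintext. Illustrative only.
import secrets

P = 2**127 - 1  # a Mersenne prime; toy-sized, NOT a secure group
G = 3

def keygen():
    x = secrets.randbelow(P - 1)
    return x, pow(G, x, P)          # secret key, public key

def encrypt(m, y):
    r = secrets.randbelow(P - 1)
    return pow(G, r, P), (m * pow(y, r, P)) % P

def split_key(x):
    x1 = secrets.randbelow(P - 1)   # cloud share (blinded)
    return x1, (x - x1) % (P - 1)   # user share: x = x1 + x2 mod order

def cloud_partial(c1, x1):
    return pow(c1, x1, P)           # heavy modular exponentiation, server-side

def user_finish(c1, c2, partial, x2):
    s = (partial * pow(c1, x2, P)) % P  # s = c1^(x1+x2) = c1^x
    return (c2 * pow(s, P - 2, P)) % P  # m = c2 / s (Fermat inverse, P prime)

x, y = keygen()
c1, c2 = encrypt(42, y)
x1, x2 = split_key(x)
m = user_finish(c1, c2, cloud_partial(c1, x1), x2)
print(m)  # → 42
```

The cloud learns nothing about x from x1 alone (x1 is uniformly random), which is the essence of the key-blinding step; the paper additionally makes the cloud's partial result verifiable, which this sketch omits.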

A Hierarchical Context Dissemination Framework for Managing Federated Clouds

  • Famaey, Jeroen;Latre, Steven;Strassner, John;Turck, Filip De
    • Journal of Communications and Networks
    • /
    • v.13 no.6
    • /
    • pp.567-582
    • /
    • 2011
  • The growing popularity of the Internet has caused the size and complexity of communications and computing systems to increase greatly in recent years. To alleviate this increased management complexity, novel autonomic management architectures have emerged, in which many automated components manage the network's resources in a distributed fashion. However, to achieve effective collaboration, these management components need to be able to exchange information efficiently and in a timely fashion. In this article, we propose a context dissemination framework that addresses this problem. To achieve scalability, the management components are structured in a hierarchy. The framework facilitates the aggregation and translation of information as it is propagated through the hierarchy. Additionally, by way of semantics, context is filtered based on meaning and disseminated intelligently according to dynamically changing context requirements. This significantly reduces the exchange of superfluous context and thus further increases scalability. The large size of modern federated cloud computing infrastructures makes the presented context dissemination framework ideally suited to improving their management efficiency and scalability. We identify the specific context requirements for the management of a cloud data center and apply our context dissemination approach to it. Additionally, we performed an extensive evaluation of the framework in a large-scale cloud data center scenario to characterize the benefits of our approach in terms of scalability and reasoning time.
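
The aggregation-up-the-hierarchy idea can be sketched in a few lines. Each node summarizes its children's context (here, free CPU capacity) before passing it upward, so a parent sees one aggregated value rather than every raw metric; the node names, the metric, and the simple sum aggregator are illustrative assumptions, and the paper's semantic filtering and translation are not modeled.

```python
# Bottom-up context aggregation in a management hierarchy: servers report
# to racks, racks to the data-center root. Each level forwards a single
# summarized value instead of all raw child metrics, which is what keeps
# the context exchange scalable as the hierarchy grows.

class Node:
    def __init__(self, name, free_cpu=0, children=None):
        self.name, self.free_cpu = name, free_cpu
        self.children = children or []

    def aggregate(self):
        """Aggregated free CPU of this node's whole subtree."""
        return self.free_cpu + sum(c.aggregate() for c in self.children)

leaves = [Node(f"server-{i}", free_cpu=i) for i in range(4)]
racks = [Node("rack-0", children=leaves[:2]), Node("rack-1", children=leaves[2:])]
root = Node("datacenter", children=racks)
print("total free CPU:", root.aggregate())  # → 6
```

In the framework described above, this summarization would additionally be filtered semantically, so an upstream component only receives context matching its declared requirements.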

Secure and Efficient Privacy-Preserving Identity-Based Batch Public Auditing with Proxy Processing

  • Zhao, Jining;Xu, Chunxiang;Chen, Kefei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.1043-1063
    • /
    • 2019
  • With delegating proxy to process data before outsourcing, data owners in restricted access could enjoy flexible and powerful cloud storage service for productivity, but still confront with data integrity breach. Identity-based data auditing as a critical technology, could address this security concern efficiently and eliminate complicated owners' public key certificates management issue. Recently, Yu et al. proposed an Identity-Based Public Auditing for Dynamic Outsourced Data with Proxy Processing (https://doi.org/10.3837/tiis.2017.10.019). It aims to offer identity-based, privacy-preserving and batch auditing for multiple owners' data on different clouds, while allowing proxy processing. In this article, we first demonstrate this scheme is insecure in the sense that malicious cloud could pass integrity auditing without original data. Additionally, clouds and owners are able to recover proxy's private key and thus impersonate it to forge tags for any data. Secondly, we propose an improved scheme with provable security in the random oracle model, to achieve desirable secure identity based privacy-preserving batch public auditing with proxy processing. Thirdly, based on theoretical analysis and performance simulation, our scheme shows better efficiency over existing identity-based auditing scheme with proxy processing on single owner and single cloud effort, which will benefit secure big data storage if extrapolating in real application.