• Title/Summary/Keyword: Cloud Architecture


Economic Evaluation of Cloud Computing Investment Alternatives (클라우드 컴퓨팅 투자안의 경제성 평가)

  • Kim, Tae-Ha; Yang, Ji-Youn; Yang, Hee-Dong
    • Journal of Korea Society of Industrial Information Systems / v.16 no.3 / pp.121-135 / 2011
  • We provide an economic evaluation model to help managers make reasonable decisions about investing in the appropriate type of cloud computing. Cloud computing can be classified into public, private, and hybrid architectures, and we evaluate their attractiveness using traditional NPV and real-option methods. We conduct an economic analysis comparing the traditional software delivery model with various types of cloud computing, and we compare the modes of cloud computing against each other using the passive NPV and the dynamic real-option method. For a more objective and conservative evaluation of the investment alternatives, we eliminate conventional benefits that are often subjective or hard to measure, and count only the reduction of investment and maintenance costs as benefits. We argue that hybrid and public cloud computing can be undervalued when their intrinsic options, such as abandonment, expansion, and contraction, are ignored.
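
The NPV side of the comparison described above can be sketched as follows. All cash-flow figures and the discount rate are hypothetical, and the real-option component of the paper's analysis is not modelled here:

```python
def npv(rate, cash_flows):
    """Net present value of a cash-flow series; cash_flows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical figures in $k: year 0 is the upfront investment (negative),
# years 1-4 count only cost reductions as benefit, per the paper's approach.
on_premise = [-500, 80, 80, 80, 80]       # traditional software delivery model
public_cloud = [-50, 150, 150, 150, 150]  # pay-as-you-go, low upfront cost

rate = 0.10  # assumed discount rate
print(round(npv(rate, on_premise), 1))    # negative: heavy upfront investment
print(round(npv(rate, public_cloud), 1))  # positive under these assumptions
```

Under a passive NPV view, the alternative with the higher NPV is preferred; the paper's point is that this view omits the option value (abandon, expand, contract) that favours cloud architectures.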

An Adaptation of Consistency Criteria for Applications in the Cloud (클라우드 환경에서 응용에 맞는 일관성의 적용)

  • Kim, Chi-Yeon
    • The Journal of the Korea institute of electronic communication sciences / v.7 no.2 / pp.341-347 / 2012
  • In cloud computing, an enterprise or a client can use computing resources that they do not own. For Web 2.0 applications such as Facebook, it is difficult to predict the peak popularity of a service; cloud computing offers a solution to this problem without high cost, and has therefore become widely popular. One advantage of cloud computing is high availability. However, strict consistency fits poorly with the shared-nothing architecture typical of clouds, so weaker consistency criteria have been proposed, including eventual consistency, which relaxes traditional consistency and has been adopted by many cloud applications. In this paper, we survey consistency criteria that suit cloud computing and discuss which of them can be adapted to various cloud applications.
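
The eventual-consistency idea summarised above can be illustrated with a minimal last-writer-wins (LWW) sketch (not from the paper): two replicas receive the same updates in different orders yet converge once all updates are delivered:

```python
class Replica:
    """A key-value replica that applies updates with last-writer-wins (LWW)."""
    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def apply(self, key, timestamp, value):
        # Accept an update only if it is newer than what we already hold.
        current = self.store.get(key)
        if current is None or timestamp > current[0]:
            self.store[key] = (timestamp, value)

    def read(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None

# The same updates delivered in different orders to two replicas:
updates = [("x", 1, "a"), ("x", 3, "c"), ("x", 2, "b")]
r1, r2 = Replica(), Replica()
for u in updates:
    r1.apply(*u)
for u in reversed(updates):
    r2.apply(*u)

# Both replicas converge on the same value despite different interleavings --
# the essence of eventual consistency.
print(r1.read("x"), r2.read("x"))
```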

ACCESS CONTROL MODEL FOR DATA STORED ON CLOUD COMPUTING

  • Mateen, Ahmed; Zhu, Qingsheng; Afsar, Salman; Rehan, Akmal; Mumtaz, Imran; Ahmad, Wasi
    • International Journal of Advanced Culture Technology / v.7 no.4 / pp.208-221 / 2019
  • This research focuses on protecting clients' data in cloud computing, i.e., data-storage protection problems and how to limit unauthenticated access to information. Existing approaches are reviewed and an access control model is then recommended. Cloud computing can be described as an internet-based technology with shared, adaptable resources that clients can use as a service; collectively, it delivers software and hardware over the internet. It has become a remarkably popular technology because of its minimal cost, adaptability, and scalability to clients' needs. Despite its prevalence, many organizations are reluctant to move to cloud computing because of protection problems, particularly around clients' information: management has expressed concern that classified and sensitive information may be stored by service providers at arbitrary locations worldwide. Several access models are available, yet they do not satisfy the protection obligations of service providers, and the cloud is constantly under attack by hackers, compromising data integrity, availability, and protection. This research presents a model that keeps the requirements of service providers in view and upgrades information protection in terms of integrity, availability, and security. The developed model helps reluctant clients decide to move to the cloud while weighing the uncertainty associated with cloud computing.

The use and potential applications of point clouds in simulation of solar radiation for solar access in urban contexts

  • Alkadri, Miktha F.; Turrin, Michela; Sariyildiz, Sevil
    • Advances in Computational Design / v.3 no.4 / pp.319-338 / 2018
  • High-performing architecture should be designed by taking into account the mutual dependency between the new building and the local context. Performative architecture plays an important role in averting unforeseen failures after the building has been built, particularly those related to microclimate impacts that affect human comfort. The concept of solar envelopes helps designers construct the developable mass of a building design considering solar access and site obstruction. However, current analysis methods using solar envelopes lack integration of detailed information about the existing context during the simulation process. In architectural design, current site models often fail to preserve both the complex geometry and the surface characteristics of the context. The emerging applications of point clouds offer a great possibility to overcome these limitations, since they include attribute information such as XYZ (position) and RGB (color). This study presents a comparative analysis between manually built 3D models and models generated from point cloud data. The modelling comparisons focus on the relevant factors of solar radiation and a set of simulations that calculate performance indicators for selected portions of the models. The experimental results emphasize an introduction of the design approach and the dataset visibility of the 3D existing environments. This paper ultimately aims at improving current architectural decision-support environments by increasing the correspondence between the digital models used for performance analysis and the real environments (the design context) during the conceptual design phase.

An Overview of Mobile Edge Computing: Architecture, Technology and Direction

  • Rasheed, Arslan; Chong, Peter Han Joo; Ho, Ivan Wang-Hei; Li, Xue Jun; Liu, William
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.10 / pp.4849-4864 / 2019
  • Modern applications such as augmented reality, connected vehicles, video streaming and gaming have stringent requirements on latency, bandwidth and computation resources. The explosion in data generation by mobile devices has further exacerbated the situation. Mobile Edge Computing (MEC) is a recent addition to the edge computing paradigm that amalgamates cloud computing capabilities with cellular communications. The concept of MEC is to relocate cloud capabilities to the edge of the network to yield ultra-low latency, high computation, high bandwidth, low burden on the core network, enhanced quality of experience (QoE), and efficient resource utilization. In this paper, we provide a comprehensive overview of different traits of MEC including its use cases, architecture, computation offloading, security, economic aspects, research challenges, and potential future directions.
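
The computation-offloading trade-off that MEC surveys typically discuss can be sketched with a simple, assumed latency model (all figures hypothetical): offloading pays off when transmission time plus edge execution time beats local execution time:

```python
def offload_latency(task_bits, task_cycles, uplink_bps, edge_cps):
    """Total latency when offloading: transmission time + edge execution time."""
    return task_bits / uplink_bps + task_cycles / edge_cps

def local_latency(task_cycles, device_cps):
    """Latency when executing entirely on the mobile device."""
    return task_cycles / device_cps

# Hypothetical task: 2 Mb of input data, 1e9 CPU cycles of work.
task_bits, task_cycles = 2e6, 1e9
uplink_bps = 50e6   # 50 Mbps radio uplink
device_cps = 1e9    # 1 GHz device CPU
edge_cps = 10e9     # 10 GHz of edge-server capacity

t_local = local_latency(task_cycles, device_cps)                         # 1.0 s
t_edge = offload_latency(task_bits, task_cycles, uplink_bps, edge_cps)   # 0.14 s
print("offload" if t_edge < t_local else "local")
```

Real offloading decisions also weigh energy, queueing at the edge server, and result-return latency, which this sketch omits.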

Optimization of Data Placement using Principal Component Analysis based Pareto-optimal method for Multi-Cloud Storage Environment

  • Latha, V.L. Padma; Reddy, N. Sudhakar; Babu, A. Suresh
    • International Journal of Computer Science & Network Security / v.21 no.12 / pp.248-256 / 2021
  • Now that we are in the big data era, data has taken on a new significance as storage capacity has exploded from terabytes to petabytes at a breakneck pace. As cloud computing expands and becomes more widely accepted, many businesses and institutions opt to store their applications and data there. Cloud storage's concept of a nearly infinite storage resource pool makes data storage and access scalable and readily available. Most users, however, favour a single cloud because of the simplicity and inexpensive storage costs it offers in the short run. Single-cloud data storage, on the other hand, raises concerns such as vendor lock-in, privacy leakage and unavailability. With geographically dispersed cloud storage providers, multicloud storage can mitigate these risks. One of the key challenges in this storage system is to place user data in a cost-effective and highly available manner. A multicloud storage architecture is given in this study. Next, a multi-objective optimization problem is defined to minimise total cost and maximise data availability at the same time; it can be solved using a technique based on the non-dominated sorting genetic algorithm II (NSGA-II) to obtain a set of non-dominated solutions known as the Pareto-optimal set. When consumers cannot pick from the Pareto-optimal set directly, a method based on Principal Component Analysis (PCA) is presented to find the best solution. Thorough tests based on a variety of real-world cloud storage scenarios show that the proposed method performs as expected.
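
The Pareto-optimal set mentioned above can be illustrated with a brute-force non-dominated filter (a sketch, not NSGA-II itself; the placements and figures are hypothetical):

```python
def pareto_front(placements):
    """Return non-dominated (cost, availability) placements:
    minimise cost, maximise availability.  A placement is dominated if
    another placement is at least as good on both objectives and differs."""
    front = []
    for cand in placements:
        dominated = any(
            other != cand and other[0] <= cand[0] and other[1] >= cand[1]
            for other in placements
        )
        if not dominated:
            front.append(cand)
    return front

# Hypothetical data placements across providers: (monthly cost $, availability)
placements = [(100, 0.990), (120, 0.999), (150, 0.999), (90, 0.950), (110, 0.970)]
print(sorted(pareto_front(placements)))
```

NSGA-II finds an approximation of this front for large search spaces by evolving a population with non-dominated sorting; the PCA step in the paper then ranks the front's members to pick a single recommendation.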

De-Centralized Information Flow Control for Cloud Virtual Machines with Blowfish Encryption Algorithm

  • Gurav, Yogesh B.; Patil, Bankat M.
    • International Journal of Computer Science & Network Security / v.21 no.12 / pp.235-247 / 2021
  • Today, cloud computing has become a major demand of many organizations, mainly because its shared infrastructure offers higher computing efficiency, lower cost and greater flexibility. Security, however, remains a hurdle that blocks the success of the cloud computing platform. Therefore, a novel Multi-tenant Decentralized Information Flow Control (MT-DIFC) model is introduced in this research work. The proposed system encapsulates four types of entities: (1) the central authority (CA), (2) the encryption proxy (EP), (3) the cloud server (CS) and (4) multi-tenant cloud virtual machines. Our contribution resides within the encryption proxy (EP). Initially, the trust level of all users within each cloud is computed using the proposed two-stage trust computational model, wherein users are categorized as primary and secondary; the distinction varies with the application and the data owner's preference. Based on the computed trust level, access privileges are granted to cloud users. In the EP, the ciphertext information-flow security strategy is implemented using the Blowfish encryption model. For both data encryption and decryption, key generation is the crucial and challenging part. In this research work, a new optimal key generation is carried out within the Blowfish encryption algorithm, with both encryption and decryption accomplished using the newly proposed optimal key. The optimal key is selected using a new Self-Improved Cat and Mouse Based Optimizer (SI-CMBO), an advanced version of the standard Cat and Mouse Based Optimizer. The proposed model is validated in terms of encryption time, decryption time, and resistance to known-plaintext attacks (KPA).

Cyber-Physical Computing: Leveraging Cloud computing for Ubiquitous Healthcare Applications (사이버 물리 컴퓨팅 : 유비쿼터스 건강 관리 응용에 대한 레버리징 클라우드컴퓨팅)

  • Abid, Hassan; Jin, Wang; Lee, Sung-Young; Lee, Young-Koo
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.41-43 / 2011
  • Cyber-physical systems tightly integrate computation, networking and physical objects to sense, monitor and control the physical world. This paper presents a novel architecture that combines two next-generation technologies, cyber-physical systems and cloud computing, to develop a ubiquitous-healthcare infrastructure. Through this infrastructure, patients and elderly people receive remote assistance, monitoring of their health conditions and medication support while remaining close to home, which leads to major cost savings. However, various challenges need to be overcome before building such systems, including real-time responsiveness, reliability, stability and privacy. In this paper, we propose an architecture that addresses these challenges.

A Cloud Service Brokerage Architecture for Enhanced Business Processes (비즈니스 프로세스 가치 향상을 위한 클라우드 서비스 브로커 아키텍처)

  • Lee, Moonsoo; Choi, Eunmi
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.184-187 / 2013
  • With the paradigm shift in information technology driven by cloud computing, enterprises across industries have adopted cloud computing to reduce operating and investment costs and have discussed success cases; however, approaches that use cloud services to improve enterprise processes are still scarce among Korean cases. This paper presents an architecture for building process automation by applying WS-BPEL, an SOA (Service Oriented Architecture) technology that supports service orchestration, on a CSB (Cloud Service Brokerage) platform that manages cloud services. By applying, alongside SaaS applications, IaaS for infrastructure services and PaaS for building compatibility and interoperation with other processes, we propose a system architecture for introducing a cloud service broker that can enhance the value of business processes.

Scalable Big Data Pipeline for Video Stream Analytics Over Commodity Hardware

  • Ayub, Umer; Ahsan, Syed M.; Qureshi, Shavez M.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.4 / pp.1146-1165 / 2022
  • A huge amount of data in the form of videos and images is being produced owing to advancements in sensor technology. The use of low-performance commodity hardware, coupled with resource-heavy image processing and analysis approaches, poses a bottleneck for timely decision making when inferring and extracting actionable insights from this data. Current GPU-assisted and cloud-based video analysis techniques give significant performance gains, but their usage is constrained by financial considerations and extremely complex architecture-level details. In this paper we propose a data pipeline system that uses open-source tools such as Apache Spark, Kafka and OpenCV running over commodity hardware for video stream processing and image processing in a distributed environment. Experimental results show that our proposed approach eliminates the need for GPU-based hardware and cloud computing infrastructure to achieve efficient video stream processing for face detection with increased throughput, scalability and better performance.
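
The pipeline pattern behind such systems can be sketched with a stdlib-only producer/consumer stand-in (no Kafka, Spark or OpenCV here; the per-frame analysis is a placeholder):

```python
import queue
import threading

def producer(frames, q):
    """Stands in for a Kafka topic: pushes raw video frames into the pipeline."""
    for frame in frames:
        q.put(frame)
    q.put(None)  # sentinel: end of stream

def consumer(q, results):
    """Stands in for a stream-processing worker: pulls frames and analyses them."""
    while True:
        frame = q.get()
        if frame is None:
            break
        # Placeholder for per-frame work such as OpenCV face detection.
        results.append(("analysed", frame))

frames = list(range(5))          # dummy frames
q = queue.Queue(maxsize=2)       # bounded buffer provides backpressure
results = []
worker = threading.Thread(target=consumer, args=(q, results))
worker.start()
producer(frames, q)
worker.join()
print(len(results))
```

In the actual system, Kafka decouples ingestion from processing across machines and Spark parallelises the per-frame analysis; the bounded queue above plays the same backpressure role on a single host.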