• Title/Summary/Keyword: 클라우드 게임 (cloud gaming)

Search Results: 54

Data Block based User Authentication for Outsourced Data (아웃소싱 데이터 보호를 위한 데이터 블록 기반의 상호 인증 프로토콜)

  • Hahn, Changhee; Kwon, Hyunsoo; Kim, Daeyeong; Hur, Junbeom
    • Journal of KIISE / v.42 no.9 / pp.1175-1184 / 2015
  • Recently, there has been an explosive increase in the volume of multimedia data available as a result of the development of multimedia technologies. More and more data is becoming available on a variety of web sites, and it has become increasingly cost-prohibitive for a single data server to store and process multimedia files locally. Therefore, many service providers outsource data to cloud storage to reduce costs. Such behavior raises one serious concern: how can data users be authenticated in a secure and efficient way? The most widely used password-based authentication methods suffer from numerous security disadvantages. Multi-factor authentication protocols based on a variety of channels, such as SMS, biometrics, or hardware tokens, may improve security but inevitably reduce usability. To this end, we present a data block-based authentication scheme that is secure and preserves usability: users do nothing more than enter a password. In addition, the proposed scheme can be effectively used to revoke user rights. To the best of our knowledge, ours is the first data block-based authentication scheme for outsourced data that is proven secure without degrading usability. An experiment conducted on the Amazon EC2 cloud service shows that the proposed scheme guarantees nearly constant-time user authentication.
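As a rough illustration of the block-based, password-only flavor of such a scheme, the following sketch has the verifier challenge random block indices of the outsourced file and the prover answer with an HMAC under a password-derived key. The KDF parameters, block size, and challenge format are illustrative assumptions, not the paper's actual protocol.

```python
import hashlib
import hmac
import os
import secrets

# Hypothetical sketch of block-based challenge-response authentication.
# All parameters below are illustrative, not the paper's construction.
BLOCK_SIZE = 4096

def derive_key(password: str, salt: bytes) -> bytes:
    # Slow KDF so the password is never used directly as a MAC key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def split_blocks(data: bytes) -> list[bytes]:
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def respond(key: bytes, blocks: list[bytes], challenge: list[int]) -> bytes:
    # MAC the challenged blocks, bound to their indices, in challenge order.
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for idx in challenge:
        mac.update(idx.to_bytes(4, "big"))
        mac.update(blocks[idx])
    return mac.digest()

# --- demo ---
data = os.urandom(5 * BLOCK_SIZE)           # stand-in for the outsourced file
blocks = split_blocks(data)
salt = os.urandom(16)
key = derive_key("user-password", salt)     # both sides can derive this key

challenge = [secrets.randbelow(len(blocks)) for _ in range(3)]
proof = respond(key, blocks, challenge)     # prover side
expected = respond(key, blocks, challenge)  # verifier side
assert hmac.compare_digest(proof, expected)
print("authenticated")
```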

Constant-Size Ciphertext-Policy Attribute-Based Data Access and Outsourceable Decryption Scheme (고정 크기 암호 정책 속성 기반의 데이터 접근과 복호 연산 아웃소싱 기법)

  • Hahn, Changhee; Hur, Junbeom
    • Journal of KIISE / v.43 no.8 / pp.933-945 / 2016
  • Sharing data among multiple users on public storage, e.g., the cloud, is efficient because the cloud provides on-demand computing service anytime and anywhere. Secure data sharing is achieved by fine-grained access control. Existing symmetric and public-key encryption schemes are not suitable for secure data sharing because they support only a 1-to-1 relationship between a ciphertext and a secret key. Attribute-based encryption supports fine-grained access control; however, its ciphertext size grows linearly with the number of attributes. Additionally, its decryption process has a high computational cost, making it impractical in resource-constrained environments. In this study, we propose an efficient attribute-based secure data sharing scheme with outsourceable decryption. The proposed scheme guarantees constant-size ciphertexts irrespective of the number of attributes. For static attributes, the computational cost to the user is reduced by delegating approximately 95.3% of the decryption operations to the more powerful storage system, while 72.3% of the decryption operations are outsourced for dynamic attributes.
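The outsourced-decryption idea can be illustrated with a toy key-blinding example: the user hands the cloud a blinded key, the cloud performs the heavy exponentiation, and the user finishes with a cheap final step. The sketch below uses plain ElGamal in a small prime-order group as a stand-in; the paper's scheme is pairing-based CP-ABE, which this deliberately does not reproduce.

```python
import secrets

# Toy sketch of outsourced decryption via key blinding. In the real
# pairing-based scheme the cloud's share is many pairings and the user's
# final step is a single cheap operation; here plain ElGamal stands in.
P = 2879                 # toy safe prime (use >= 2048 bits in practice)
Q = (P - 1) // 2         # prime order of the quadratic-residue subgroup
G = 4                    # generator of the order-Q subgroup

x = secrets.randbelow(Q - 1) + 1      # user's secret key
h = pow(G, x, P)                      # public key

# Encrypt a message m lying in the subgroup.
m = pow(G, 123, P)
k = secrets.randbelow(Q - 1) + 1
c1, c2 = pow(G, k, P), (m * pow(h, k, P)) % P

# Outsourcing: blind x with random z; the cloud never learns x itself.
z = secrets.randbelow(Q - 1) + 1
tk = (x * pow(z, -1, Q)) % Q          # transformation key sent to the cloud

partial = pow(c1, tk, P)              # cloud: heavy exponentiation
s = pow(partial, z, P)                # user: unblinds to recover c1^x
recovered = (c2 * pow(s, -1, P)) % P  # user: one inverse + one multiply

assert recovered == m
print("decryption outsourced correctly")
```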

IoT Edge Architecture Model to Prevent Blockchain-Based Security Threats (블록체인 기반의 보안 위협을 예방할 수 있는 IoT 엣지 아키텍처 모델)

  • Yoon-Su Jeong
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.77-84 / 2024
  • Over the past few years, IoT edges have begun to emerge on top of new low-latency communication protocols such as 5G. Despite their enormous advantages, however, IoT edges pose new security threats, requiring new security solutions to address them. In this paper, we propose a cloud-based IoT edge architecture model that complements IoT systems. The proposed model applies machine learning to network traffic data extracted from IoT edge devices to prevent security threats in advance. In addition, by allocating some of the security data and some data processing and management functions to local nodes, the proposed model reduces the load on the access network (edge), ensures its security, and protects its vulnerable parts. The proposed model also virtualizes various IoT functions as a name service and deploys hardware functions and sufficient computational resources to local nodes as needed.
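A minimal sketch of the edge-side idea, assuming an anomaly detector trained on per-flow traffic features; the feature set and the choice of IsolationForest are illustrative assumptions, not specified by the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy per-flow features: [packets/s, mean packet size, distinct ports].
# Train on traffic considered normal at the edge node.
normal = rng.normal(loc=[100, 512, 3], scale=[20, 60, 1], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

flows = np.array([
    [105, 500, 3],     # ordinary traffic
    [9000, 64, 150],   # scan-like burst
])
print(model.predict(flows))  # 1 = normal, -1 = flagged as anomalous
```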

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the wealth of information generated while operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, a separate log processing system is needed to gather, store, categorize, and analyze the log data generated while processing clients' business. However, in existing computing environments it is difficult to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the many functions required to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for fast and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after recovery from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for unstructured log data, and such strict schemas make it difficult to expand across nodes when rapidly growing data must be distributed. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily expand through node dispersion as data volume grows rapidly; they are non-relational databases with a structure well suited to unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented store with a schema-free structure. MongoDB is adopted because its flexible schema makes unstructured log data easy to process, it facilitates node expansion when data volume grows rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies them by log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log-insertion performance evaluation over various chunk sizes.
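A minimal sketch of the collector-plus-MongoDB flow under assumed names: records needing real-time dashboards are routed toward MySQL, and everything else lands in a schema-free MongoDB collection that the graph generator can later aggregate. The routing rule, connection URI, and field names are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: the log collector classifies records and routes
# them either to MySQL (real-time analysis) or MongoDB (batch analysis).
from pymongo import MongoClient

REALTIME_TYPES = {"error", "fraud_alert"}   # assumed classification rule

def needs_realtime(record: dict) -> bool:
    # Classification step of the log collector module (simplified).
    return record.get("type") in REALTIME_TYPES

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["events"]        # schema-free collection

batch = [
    {"type": "login", "user": "u123", "ts": "2013-06-01T09:00:00Z"},
    {"type": "transfer", "user": "u123", "amount": 50000,
     "branch": "Seoul", "ts": "2013-06-01T09:02:11Z"},
    {"type": "fraud_alert", "user": "u999", "ts": "2013-06-01T09:03:40Z"},
]

realtime = [r for r in batch if needs_realtime(r)]   # would go to MySQL
logs.insert_many([r for r in batch if not needs_realtime(r)])

# The kind of per-type aggregation the log graph generator would plot.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```

Note how the two inserted documents have different shapes (the transfer record carries `amount` and `branch`); MongoDB's schema-free collections accept both without any schema migration, which is exactly the flexibility the abstract cites for unstructured bank logs.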