Edge-Cloud Systems

Edge-Centric Metamorphic IoT Device Platform for Efficient On-Demand Hardware Replacement in Large-Scale IoT Applications (대규모 IoT 응용에 효과적인 주문형 하드웨어의 재구성을 위한 엣지 기반 변성적 IoT 디바이스 플랫폼)

  • Moon, Hyeongyun;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.12 / pp.1688-1696 / 2020
  • The paradigm of Internet-of-Things (IoT) systems is shifting from cloud-based to edge-based architectures to avoid the delays caused by network congestion and server overload, as well as the security issues raised by data transmission. However, edge-based IoT systems have serious weaknesses, such as a lack of performance and flexibility, owing to various resource limitations. Application-specific hardware can be implemented in the edge device to improve performance, but because the function is fixed, performance improves only for those specific applications. This paper introduces an edge-centric metamorphic IoT (mIoT) platform that can deploy a variety of hardware through on-demand partial reconfiguration despite the limited hardware resources of the edge device, thereby increasing both the performance and the flexibility of the edge device. According to the experimental results, the edge-centric mIoT platform, which executes the reconfiguration algorithm at the edge, reduced the number of server accesses by up to 82.2% compared with previous studies in which the reconfiguration algorithm was executed on the server.
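
As a concrete illustration of the edge-side decision loop described above, here is a minimal sketch in which the edge device reconfigures from a local bitstream cache and contacts the server only on a miss. All names (BITSTREAM_CACHE, fetch_from_server, program_pr_region) and the task trace are hypothetical, not from the paper.

```python
# Hypothetical sketch of an edge-side on-demand partial-reconfiguration loop.
BITSTREAM_CACHE = {}   # locally stored partial bitstreams, keyed by accelerator name
server_accesses = 0

def fetch_from_server(accel: str) -> bytes:
    """Placeholder for downloading a partial bitstream from the cloud server."""
    global server_accesses
    server_accesses += 1
    return b"<bitstream for %s>" % accel.encode()

def program_pr_region(bitstream: bytes) -> None:
    """Placeholder for writing a bitstream into the FPGA's reconfigurable region."""
    pass

def run_task(accel: str) -> None:
    # Deciding at the edge avoids a server round-trip whenever the bitstream is cached.
    if accel not in BITSTREAM_CACHE:
        BITSTREAM_CACHE[accel] = fetch_from_server(accel)
    program_pr_region(BITSTREAM_CACHE[accel])

for task in ["fft", "aes", "fft", "fft", "aes"]:
    run_task(task)
print(server_accesses)  # 2: only the first request per accelerator hits the server
```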

A Context-aware Task Offloading Scheme in Collaborative Vehicular Edge Computing Systems

  • Jin, Zilong;Zhang, Chengbo;Zhao, Guanzhe;Jin, Yuanfeng;Zhang, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.2 / pp.383-403 / 2021
  • With the development of mobile edge computing (MEC), new application technologies such as self-driving, augmented reality (AR), and traffic perception have emerged. Nevertheless, the high latency and low reliability of traditional cloud computing solutions make it difficult to meet the requirements of the growing number of smart cars (SCs) running computation-intensive applications. Hence, this paper studies an efficient offloading decision and resource allocation scheme in collaborative vehicular edge computing networks with multiple SCs and multiple MEC servers to reduce latency. To solve this problem effectively, we propose a context-aware offloading strategy based on the differential evolution (DE) algorithm that considers vehicle mobility, roadside unit (RSU) coverage, and vehicle priority. On this basis, an autoregressive integrated moving average (ARIMA) model is employed to predict idle computing resources from the base station traffic in different periods. Simulation results demonstrate that the context-aware vehicular task offloading (CAVTO) optimization scheme can reduce the system delay significantly.
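
To make the DE step concrete, the toy sketch below uses SciPy's differential evolution to pick per-vehicle offloading fractions that minimize an invented latency objective. The rates, task sizes, and latency model are assumptions for illustration, not the paper's system model, and the ARIMA prediction step is omitted.

```python
# Toy DE-based offloading: choose, for each of N smart cars, the fraction of its
# task offloaded to an MEC server so that total latency is minimized.
import numpy as np
from scipy.optimize import differential_evolution

N = 5
local_rate = np.array([2.0, 1.5, 3.0, 2.5, 1.0])  # Mbits/s processed locally (assumed)
edge_rate = 8.0                                    # shared MEC processing rate (assumed)
task_size = np.array([4.0, 6.0, 3.0, 5.0, 7.0])    # Mbits per task (assumed)

def total_latency(x):
    # x[i] = fraction of car i's task offloaded; the edge serves offloaded load serially.
    local = (1 - x) * task_size / local_rate
    edge = np.sum(x * task_size) / edge_rate
    return float(np.max(local) + edge)

result = differential_evolution(total_latency, bounds=[(0.0, 1.0)] * N, seed=0)
print(result.x.round(2), result.fun)
```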

A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei;Huang, Wei;Han, Li;Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.3892-3912 / 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base-station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate content to cache according to actual needs and determining the most appropriate locations for it have therefore emerged as pressing issues in multi-access edge computing. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. Firstly, the cache value generated by cache placement is calculated using the cache capacity, data popularity, and node replacement rate. Secondly, the cache placement problem is modeled according to the cache value and the costs of data object acquisition and replacement. The model is then transformed into a combinatorial optimization problem, and the cache objects are placed on appropriate data nodes using a tabu search algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, a multi-access edge computing experimental environment is built. Experimental results show that CPBCU provides significant improvements in cache service rate, data response time, and number of replacements compared with other cache placement algorithms.
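
The sketch below shows the tabu search mechanic in miniature, assigning objects to nodes to maximize a randomly generated utility matrix. The utility values, tabu tenure, and move structure are invented for illustration and stand in for the paper's comprehensive-utility model.

```python
# Minimal tabu-search sketch for cache placement.
import random

random.seed(0)
objects, nodes = 6, 3
utility = [[random.random() for _ in range(nodes)] for _ in range(objects)]

def score(placement):
    return sum(utility[o][placement[o]] for o in range(objects))

current = [random.randrange(nodes) for _ in range(objects)]
best, tabu = list(current), []
for _ in range(50):
    # Explore single-object relocation moves that are not currently tabu.
    moves = [(o, n) for o in range(objects) for n in range(nodes)
             if n != current[o] and (o, n) not in tabu]
    o, n = max(moves, key=lambda m: score(current[:m[0]] + [m[1]] + current[m[0]+1:]))
    tabu.append((o, current[o]))   # forbid moving this object back for a while
    tabu = tabu[-5:]               # fixed tabu tenure of 5 (assumed)
    current[o] = n
    if score(current) > score(best):
        best = list(current)
print(best, round(score(best), 3))
```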

A Learning-based Power Control Scheme for Edge-based eHealth IoT Systems

  • Su, Haoru;Yuan, Xiaoming;Tang, Yujie;Tian, Rui;Sun, Enchang;Yan, Hairong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4385-4399 / 2021
  • Internet of Things (IoT) eHealth systems composed of wireless body area networks (WBANs) have emerged recently. Sensor nodes are placed around or in the human body to collect physiological data, and WBANs serve many applications, for instance health monitoring. Because of the limited battery size, the design of WBAN protocols should consider energy efficiency and time delay in addition to speed, reliability, and accuracy. To address these problems, this paper adopts an end-edge-cloud orchestrated network architecture and proposes a transmission power control scheme based on a reinforcement learning algorithm. The priority of sensing data is classified according to the application. A system utility function is modeled from the channel factors, the energy utility, and the successful-transmission conditions, and the optimization problem is mapped to a Q-learning model. Following this online power control protocol, the power levels of both the sensor-to-coordinator and coordinator-to-edge-server links can be adjusted according to the current channel condition. The network performance is evaluated by simulation, and the results show that the proposed power control protocol achieves higher system energy efficiency, delivery ratio, and throughput.
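
A minimal tabular Q-learning sketch of such an online power control loop, assuming a coarse three-level channel-state discretization, three power levels, and an invented success/energy reward. The paper's actual utility function and state space are not reproduced here.

```python
# Tabular Q-learning for transmit-power control (toy model).
import random

random.seed(1)
channel_states = [0, 1, 2]   # bad / fair / good channel (assumed discretization)
power_levels = [1, 2, 3]     # low / medium / high transmit power (assumed)
Q = {(s, a): 0.0 for s in channel_states for a in power_levels}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, power):
    # Success is more likely on good channels and at high power (invented model).
    success = random.random() < 0.3 * state / 2 + 0.2 * power
    reward = (5.0 if success else 0.0) - 1.0 * power  # utility minus energy cost
    next_state = random.choice(channel_states)         # i.i.d. channel for simplicity
    return reward, next_state

state = 1
for _ in range(5000):
    a = (random.choice(power_levels) if random.random() < eps
         else max(power_levels, key=lambda p: Q[(state, p)]))
    r, s2 = step(state, a)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(s2, p)] for p in power_levels)
                              - Q[(state, a)])
    state = s2

# Learned power level per channel state:
print({s: max(power_levels, key=lambda p: Q[(s, p)]) for s in channel_states})
```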

Deep Reinforcement Learning-Based Edge Caching in Heterogeneous Networks

  • Choi, Yoonjeong;Lim, Yujin
    • Journal of Information Processing Systems / v.18 no.6 / pp.803-812 / 2022
  • With the increasing number of mobile device users worldwide, caching content on mobile edge computing (MEC) devices close to users can reduce transmission latency compared with receiving the content from a server or the cloud. However, because MEC has limited storage capacity, the types and sizes of content to be cached must be determined. In this study, we investigate a caching strategy that increases the hit ratio of small base stations (SBSs) for mobile users in a heterogeneous network consisting of one macro base station (MBS) and multiple SBSs. When users can access several SBSs, the hit ratio can be improved by reducing duplicate content and increasing the diversity of the content across SBSs. We propose a Deep Q-Network (DQN)-based caching strategy that considers time-varying content popularity and content redundancy across multiple SBSs. Content is stored in the SBSs in divided form using maximum distance separable (MDS) codes to enhance its diversity. Experiments in various environments show that the proposed caching strategy outperforms the other methods in terms of hit ratio.
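
A skeletal DQN in PyTorch showing the general shape of such a caching agent: a state vector of request statistics and cache flags, an action that selects content to cache, and a hit-based reward. The environment, dimensions, and reward are invented, and the simplifications (no target network, no MDS coding) are deliberate.

```python
# Minimal DQN skeleton for a caching decision (toy environment).
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
n_contents = 8
state_dim, n_actions = 2 * n_contents, n_contents  # request counts + cache flags

q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1
replay = []

def act(state):
    if random.random() < eps:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch):
    s, a, r, s2 = zip(*batch)
    s, s2 = torch.stack(s), torch.stack(s2)
    a = torch.tensor(a)
    r = torch.tensor(r, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():   # no separate target network in this simplified sketch
        target = r + gamma * q_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

state = torch.zeros(state_dim)
for t in range(2000):
    a = act(state)
    req = random.randrange(n_contents)     # toy content request
    reward = 1.0 if a == req else 0.0      # reward = cache hit
    next_state = torch.rand(state_dim)     # toy next state
    replay.append((state, a, reward, next_state))
    if len(replay) >= 32:
        train_step(random.sample(replay, 32))
    state = next_state
```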

The Performance Study of a Virtualized Multicore Web System

  • Lu, Chien-Te;Yeh, C.S. Eugene;Wang, Yung-Chung;Yang, Chu-Sing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5419-5436 / 2016
  • Enhancing the performance of computing systems has been an important topic since the invention of computers. The leading-edge technologies of multicore processing and virtualization dramatically influence the development of current IT systems. We study the performance attributes of response time (RT), throughput, efficiency, and scalability of a virtualized Web system running on a multicore server. We build virtual machines (VMs) for a Web application and use distributed stress tests to measure RTs and throughputs under varied combinations of virtual cores (VCs) and VM instances. Their gains, efficiencies, and scalabilities are also computed and compared. Our experimental and analytic results indicate: 1) a system can perform and scale much better by adopting multiple single-VC VMs than a single multiple-VC VM; 2) the system capacity gain is proportional to the number of VM instances run, but not to the number of VCs allocated in a VM; 3) a system with more VMs or VCs has higher physical CPU utilization but lower vCPU utilization; 4) the maximum throughput gain is less than the VM or VC gain; 5) per-core computing efficiency does not correlate with the quantity of VCs or VMs employed. These outcomes can provide valuable guidelines for selecting the instance types offered by public cloud providers and for load-balancing planning in Web systems.
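
For clarity, the short sketch below computes the gain and per-VM efficiency metrics these findings refer to, using invented throughput measurements; the numbers are placeholders, not the paper's data.

```python
# Illustrative gain/efficiency calculation from stress-test throughputs.
baseline = 250.0                           # throughput of 1 VM with 1 VC (assumed, req/s)
measured = {1: 250.0, 2: 480.0, 4: 900.0}  # VM count -> throughput (assumed, req/s)

for vms, tput in measured.items():
    gain = tput / baseline       # capacity gain over the single-VM baseline
    efficiency = gain / vms      # per-VM (per-core) efficiency
    print(f"{vms} VMs: gain={gain:.2f}x, efficiency={efficiency:.2f}")
# Gain grows with VM count but sub-linearly, so per-core efficiency drops,
# mirroring finding 4) that throughput gain is less than VM gain.
```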

Task Scheduling in Fog Computing - Classification, Review, Challenges and Future Directions

  • Alsadie, Deafallah
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.89-100 / 2022
  • With advances in Internet of Things (IoT) technology and cloud computing, billions of physical devices have been interconnected for sharing and collecting data in different applications. Despite many advancements, some latency-sensitive applications in the real world are not feasible due to the existing constraints of IoT devices and the distance between the cloud and the IoT devices. To address the issues of latency-sensitive applications, fog computing has been developed, making computing and storage resources available at the edge of the network, near the IoT devices. However, fog computing suffers from many limitations, such as heterogeneity, storage capability, processing capability, and memory constraints. It therefore requires adequate task scheduling methods to utilize computing resources optimally at the fog layer. This work presents a comprehensive review of task scheduling methods in fog computing. It analyses the task scheduling methods developed for fog computing environments in multiple dimensions and compares them to highlight their advantages and disadvantages. Finally, it presents promising research directions for fellow researchers in the fog computing environment.
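
As a concrete (and deliberately simple) example of the decision these schedulers make, the sketch below greedily assigns each task to the fog node with the earliest estimated finish time. Node speeds and task demands are invented, and the methods surveyed in the paper are considerably more sophisticated.

```python
# Toy earliest-finish-time task assignment across heterogeneous fog nodes.
tasks = [5.0, 3.0, 8.0, 2.0, 6.0]   # task compute demands (MI, assumed)
node_speed = [2.0, 1.0, 4.0]        # fog node speeds (MIPS, assumed)
node_busy_until = [0.0, 0.0, 0.0]   # time at which each node's queue drains

for demand in tasks:
    finish = [node_busy_until[n] + demand / node_speed[n]
              for n in range(len(node_speed))]
    chosen = min(range(len(node_speed)), key=lambda n: finish[n])
    node_busy_until[chosen] = finish[chosen]
    print(f"task({demand} MI) -> node {chosen}, finishes at {finish[chosen]:.2f}s")
```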

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems / v.9 no.12 / pp.291-306 / 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve the quality of life and the productivity of businesses. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the Internet of Things (IoT) promotes the learning and intelligence capabilities of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen near the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, in which distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve the privacy of users. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices, and we believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform. It also surveys privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.
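
As a toy illustration of one inference architecture such reviews cover, the sketch below splits a small PyTorch model between a device part and an edge part. The model, split point, and input are invented, and the network transfer of the intermediate tensor is elided.

```python
# Device-edge split inference: shallow layers run on the IoT device,
# deeper layers run on the edge node (toy model, not from the paper).
import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),    # cheap enough for the end device
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),   # offloaded to the edge node
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
device_part = full_model[:2]   # runs on the IoT device
edge_part = full_model[2:]     # runs on the edge server

x = torch.randn(1, 3, 32, 32)        # raw sensor frame (toy input)
intermediate = device_part(x)        # sent over the network instead of raw data
logits = edge_part(intermediate)
print(logits.shape)                  # torch.Size([1, 10])
```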

Resource Management in 5G Mobile Networks: Survey and Challenges

  • Chien, Wei-Che;Huang, Shih-Yun;Lai, Chin-Feng;Chao, Han-Chieh
    • Journal of Information Processing Systems / v.16 no.4 / pp.896-914 / 2020
  • With the rapid growth of network traffic, the large number of connected devices, and richer application services, the traditional network is facing several challenges. Beyond improving the current network architecture and hardware specifications, effective resource management defines the development trend of 5G. Although many potential technologies have been proposed to solve some of the 5G challenges, such as multiple-input multiple-output (MIMO), software-defined networking (SDN), network functions virtualization (NFV), edge computing, and millimeter-wave communication, research in 5G continues to enrich its functions and move toward B5G mobile networks. In this paper, focusing on the resource allocation issues of 5G core networks and radio access networks, we address the latest technological developments and discuss the current challenges for resource management in 5G.

Energy-Aware Data-Preprocessing Scheme for Efficient Audio Deep Learning in Solar-Powered IoT Edge Computing Environments (태양 에너지 수집형 IoT 엣지 컴퓨팅 환경에서 효율적인 오디오 딥러닝을 위한 에너지 적응형 데이터 전처리 기법)

  • Yeontae Yoo;Dong Kun Noh
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.4 / pp.159-164 / 2023
  • Solar energy-harvesting IoT devices prioritize maximizing the utilization of harvested energy, owing to the periodic recharging nature of solar energy, rather than minimizing energy consumption. Meanwhile, research on edge AI, which performs machine learning near the data source instead of in the cloud, is actively being conducted for reasons such as data confidentiality and privacy, response time, and cost. One such research area performs various audio AI applications using audio data collected from multiple IoT devices in an IoT edge computing environment. In most studies, however, the IoT devices only transmit sensing data to the edge server, and all processing, including data preprocessing, is performed on the edge server. This not only overloads the edge server but also causes network congestion by transmitting data that is unnecessary for learning. On the other hand, delegating data preprocessing to each IoT device introduces another problem: increased blackout time due to energy shortages in the devices. In this paper, we aim to alleviate the increase in device blackout time, while also mitigating the problems of server-centric edge AI environments, by determining where data is preprocessed based on the energy state of each IoT device. In the proposed scheme, an IoT device performs the preprocessing steps, which include sound discrimination and noise removal, and transmits the result to the server only if it has more energy available than the threshold required for its basic operation.
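
A minimal sketch of the energy-adaptive placement decision described above, with invented threshold and cost constants; the actual sound-discrimination and noise-removal steps, and the paper's energy model, are not reproduced here.

```python
# Energy-adaptive preprocessing placement (toy constants, assumed units of mJ).
BASIC_OPERATION_THRESHOLD = 20.0  # energy reserved for sensing/communication (assumed)
PREPROCESS_COST = 5.0             # energy per preprocessing run (assumed)

def handle_audio_frame(battery_mj: float, frame: bytes) -> str:
    """Decide where this frame is preprocessed, given the device's energy state."""
    if battery_mj - PREPROCESS_COST > BASIC_OPERATION_THRESHOLD:
        # Enough surplus energy: discriminate sound / remove noise locally,
        # then send only the useful, cleaned-up data.
        return "preprocess on device, send cleaned frame"
    # Otherwise fall back to the server-centric path to avoid device blackout.
    return "send raw frame, server preprocesses"

print(handle_audio_frame(40.0, b"..."))  # device-side preprocessing
print(handle_audio_frame(22.0, b"..."))  # raw transmission to the edge server
```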