• Title/Summary/Keyword: Edge-Cloud Systems

Intelligent Resource Management Schemes for Systems, Services, and Applications of Cloud Computing Based on Artificial Intelligence

  • Lim, JongBeom;Lee, DaeWon;Chung, Kwang-Sik;Yu, HeonChang
    • Journal of Information Processing Systems / v.15 no.5 / pp.1192-1200 / 2019
  • Recently, artificial intelligence techniques have been widely used in computer science fields such as the Internet of Things, big data, cloud computing, and mobile computing. In particular, resource management is of utmost importance for maintaining the quality of services, service-level agreements, and system availability. In this paper, we review and analyze various ways to meet the requirements of cloud resource management based on artificial intelligence. We divide cloud resource management techniques based on artificial intelligence into three categories: fog computing systems, edge-cloud systems, and intelligent cloud computing systems. The aim of the paper is to propose an intelligent resource management scheme that manages mobile resources by monitoring device statuses and predicting their future stability using an artificial intelligence technique. We explore how the proposed resource management scheme can be extended to various cloud-based systems.
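
The abstract does not specify which artificial intelligence technique drives the stability prediction, so the following is only a minimal illustrative sketch: a hypothetical stability score built from recent availability and CPU-load samples, used to filter mobile devices before placing work on them. All names, weights, and thresholds are assumptions, not the authors' scheme.

```python
# Hypothetical sketch (not from the paper): scoring a mobile device's future
# stability from its recent status history, then keeping only stable devices.

from statistics import mean

def stability_score(uptime_samples, cpu_load_samples):
    """Return a score in [0, 1]; higher means the device is predicted to stay available.

    uptime_samples   -- recent availability observations (1 = reachable, 0 = not)
    cpu_load_samples -- recent CPU utilization observations in [0, 1]
    """
    availability = mean(uptime_samples)          # fraction of time the device was reachable
    headroom = 1.0 - mean(cpu_load_samples)      # spare compute capacity
    return 0.7 * availability + 0.3 * headroom   # weights are illustrative only

def select_stable_devices(devices, threshold=0.6):
    """Keep devices whose predicted stability exceeds the threshold."""
    return [d for d in devices
            if stability_score(d["uptime"], d["cpu"]) >= threshold]

if __name__ == "__main__":
    devices = [
        {"id": "phone-1", "uptime": [1, 1, 1, 0, 1], "cpu": [0.2, 0.3, 0.25]},
        {"id": "phone-2", "uptime": [1, 0, 0, 0, 1], "cpu": [0.9, 0.95, 0.8]},
    ]
    print([d["id"] for d in select_stable_devices(devices)])  # -> ['phone-1']
```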

Task Scheduling and Resource Management Strategy for Edge Cloud Computing Using Improved Genetic Algorithm

  • Xiuye Yin;Liyong Chen
    • Journal of Information Processing Systems / v.19 no.4 / pp.450-464 / 2023
  • To address the problems of large system overhead and low timeliness when dealing with task scheduling in mobile edge cloud computing, a task scheduling and resource management strategy for edge cloud computing based on an improved genetic algorithm was proposed. First, a user task scheduling system model based on edge cloud computing was constructed using the Shannon theorem, including calculation, communication, and network models. In addition, a multi-objective optimization model covering delay and energy consumption was constructed to minimize the weighted sum of the two objectives. Finally, the selection, crossover, and mutation operations of the genetic algorithm were improved using a best-reservation selection algorithm and a normal-distribution crossover operator. The improved genetic algorithm was then applied to the multi-objective problem to obtain the optimal solution, that is, the best computing task scheduling scheme. Experimental analysis of the proposed strategy on the MATLAB simulation platform shows that its energy loss does not exceed 50 J and its time delay is 23.2 ms, both better than those of the comparison strategies.
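
As a rough illustration of the strategy the abstract describes (a weighted delay-plus-energy objective optimized by a genetic algorithm with elitist "best reservation" selection and a normal-distribution crossover), the sketch below shows one possible toy implementation. The cost matrices, weights, and GA parameters are placeholders, not the paper's model.

```python
# Illustrative sketch: GA minimizing a weighted sum of delay and energy, with
# elitist ("best reservation") selection and a normal-distribution crossover.

import random

W_DELAY, W_ENERGY = 0.5, 0.5          # weights of the two objectives
N_TASKS, N_NODES = 20, 4              # tasks to schedule, available edge nodes

def cost(schedule, delay, energy):
    """Weighted-sum objective: lower is better. schedule[i] = node of task i."""
    return sum(W_DELAY * delay[t][n] + W_ENERGY * energy[t][n]
               for t, n in enumerate(schedule))

def gaussian_crossover(parent_a, parent_b):
    """Blend parents around their midpoint with normally distributed noise."""
    child = []
    for a, b in zip(parent_a, parent_b):
        x = (a + b) / 2 + random.gauss(0, 1)
        child.append(int(min(max(round(x), 0), N_NODES - 1)))
    return child

def evolve(delay, energy, pop_size=30, generations=100, mutation_rate=0.05):
    pop = [[random.randrange(N_NODES) for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: cost(s, delay, energy))
        elite = pop[: pop_size // 5]                 # best reservation: elites survive unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = gaussian_crossover(a, b)
            if random.random() < mutation_rate:      # mutation: reassign one random task
                child[random.randrange(N_TASKS)] = random.randrange(N_NODES)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda s: cost(s, delay, energy))

if __name__ == "__main__":
    delay  = [[random.random() for _ in range(N_NODES)] for _ in range(N_TASKS)]
    energy = [[random.random() for _ in range(N_NODES)] for _ in range(N_TASKS)]
    best = evolve(delay, energy)
    print("best schedule:", best, "cost:", round(cost(best, delay, energy), 3))
```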

A Cloud-Edge Collaborative Computing Task Scheduling and Resource Allocation Algorithm for Energy Internet Environment

  • Song, Xin;Wang, Yue;Xie, Zhigang;Xia, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2282-2303 / 2021
  • To solve the problems of heavy computing load and system transmission pressure in the energy internet (EI), we establish a three-tier cloud-edge integrated EI network based on cloud-edge collaborative computing to achieve a tradeoff between energy consumption and system delay. A joint optimization problem for resource allocation and task offloading in the three-tier cloud-edge integrated EI network is formulated to minimize the total system cost under the constraints of the task scheduling binary variables of each sensor node, the maximum uplink transmit power of each sensor node, the limited computation capability of the sensor node, and the maximum computation resource of each edge server; this is a Mixed Integer Non-linear Programming (MINLP) problem. To solve it, we propose a joint task offloading and resource allocation algorithm (JTOARA), which is decomposed into three sub-problems: uplink transmission power allocation, computation resource allocation, and offloading scheme selection. The power allocation of each sensor node is obtained by a bisection search algorithm, which converges quickly, while the computation resource allocation is derived by a line optimization method and convex optimization theory. Finally, to achieve the optimal task offloading, we propose a cloud-edge collaborative computation offloading scheme based on game theory and prove the existence of a Nash equilibrium. The simulation results demonstrate that our proposed algorithm improves output performance compared with the conventional algorithms, and its performance is close to that of the enumerative algorithm.
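
The per-node power allocation step is described as a bisection search; the sketch below illustrates the idea under assumed forms: a Shannon-rate transmission model and a convex weighted energy-plus-delay cost, with bisection applied to the cost's derivative. The constants and the cost expression are illustrative assumptions rather than the paper's formulation.

```python
# Minimal sketch of a bisection-based power allocation:
# cost(p) = w_e * p * (D / rate(p)) + w_t * (D / rate(p)),
# where rate(p) = B * log2(1 + g * p / N0) follows the Shannon formula.
# All constants and the cost form are illustrative assumptions, not the paper's.

import math

B, g, N0 = 1e6, 1e-3, 1e-9        # bandwidth (Hz), channel gain, noise power
D = 1e6                           # task size in bits
W_E, W_T = 0.5, 0.5               # weights for energy and delay

def rate(p):
    return B * math.log2(1 + g * p / N0)

def cost(p):
    t = D / rate(p)               # transmission delay
    return W_E * p * t + W_T * t  # weighted energy + delay

def d_cost(p, eps=1e-6):
    """Numerical derivative of the cost with respect to transmit power."""
    return (cost(p + eps) - cost(p - eps)) / (2 * eps)

def bisection_power(p_min=1e-3, p_max=1.0, tol=1e-6):
    """Find the power where the cost derivative crosses zero (or hit a bound)."""
    if d_cost(p_min) > 0:         # cost already increasing: smallest power is best
        return p_min
    if d_cost(p_max) < 0:         # cost still decreasing: cap at the power budget
        return p_max
    lo, hi = p_min, p_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if d_cost(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    p_star = bisection_power()
    print(f"optimal uplink power ~ {p_star:.4f} W, cost ~ {cost(p_star):.4f}")
```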

A Study of Mobile Edge Computing System Architecture for Connected Car Media Services on Highway

  • Lee, Sangyub;Lee, Jaekyu;Cho, Hyeonjoong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.12 / pp.5669-5684 / 2018
  • A new mobile edge network architecture is required to cope with increasing traffic, quality requirements, advanced driver assistance systems for autonomous driving, and new cloud computing demands on the highway. This article proposes a hierarchical cloud computing architecture that enhances performance through adaptive data load distribution among buses acting as edge computing servers. The vehicular dynamic cloud is based on a wireless architecture in which Wireless Local Area Network and Long Term Evolution Advanced communication are used for data transmission between moving buses and cars. The main advantages of the proposed architecture are a reduced data load on the top-layer cloud server and effective data distribution on congested highways where moving vehicles request video on demand (VOD) services from the server. Through NS-2 network simulations modeled on a real environment, we conducted experiments to validate the proposed architecture and show its feasibility and effectiveness for connected car media services on the highway.

An Edge AI Device based Intelligent Transportation System

  • Jeong, Youngwoo;Oh, Hyun Woo;Kim, Soohee;Lee, Seung Eun
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.166-173 / 2022
  • Recently, studies have been conducted on intelligent transportation systems (ITS) that provide safety and convenience to humans. Systems that compose an ITS adopt architectures based on cloud computing, which relies on high-performance general-purpose processors or graphics processing units. However, an architecture that uses only cloud computing requires high network bandwidth and consumes much power. Therefore, applying edge computing to ITS is essential for solving these problems. In this paper, we propose an edge artificial intelligence (AI) device based ITS. Edge AI, which is applicable to various systems in ITS, has been applied to license plate recognition. We implemented the edge AI on a field-programmable gate array (FPGA). The accuracy of the edge AI for license plate recognition was 0.94. Finally, we synthesized the edge AI logic with Magnachip/Hynix 180 nm CMOS technology, and the power consumption measured using the Synopsys Design Compiler tool was 482.583 mW.

Cloudification of On-Chip Flash Memory for Reconfigurable IoTs using Connected-Instruction Execution (연결기반 명령어 실행을 이용한 재구성 가능한 IoT를 위한 온칩 플래쉬 메모리의 클라우드화)

  • Lee, Dongkyu;Cho, Jeonghun;Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.2 / pp.103-111 / 2019
  • IoT-driven large-scale systems consist of connected things running on-chip executable embedded software. These lightweight embedded things have limited hardware space, especially a small on-chip flash memory. In addition, on-chip embedded software in flash memory is not easy to update at runtime to keep up with the latest services in IoT-driven applications, so it is becoming important to develop lightweight IoT devices that support various software within the limited on-chip flash memory. Remote instruction execution in the cloud via IoT connectivity enables high-performance software execution with unlimited software instructions in the cloud and low-power streaming of instruction execution on IoT edge devices. In this paper, we propose a cloud-IoT asymmetric structure that provides high-performance instruction execution in the cloud while keeping code executable at low power in a lightweight IoT edge environment using remote instruction execution. We propose a simulation-based approach to determine an efficient partitioning of the software runtime between the cloud and the IoT edge, and we evaluate instruction cloudification by measuring the execution time under the proposed structure. A cloud-connected instruction set simulator is newly introduced to emulate the behavior of the processor. Experimental results of cloud-IoT connected software execution using remote instructions show the feasibility of cloudifying on-chip code flash memory. The simulation environment for cloud-connected code execution successfully emulates the architectural operations of on-chip flash memory in the cloud, so that various software services in the IoT can be accelerated and performed at low power through cloudification of remote instruction execution. The execution time of the program is reduced by 50% and the memory space is reduced by 24% when cloud-connected code execution is used.
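
The paper determines the cloud/edge split of the software runtime through simulation; the following is a hypothetical, much-simplified sketch of such a partitioning decision, comparing an estimated local MCU execution time against an estimated remote (instruction-streaming) time while respecting an on-chip flash budget. All timing constants and the greedy rule are assumptions, not the paper's simulator.

```python
# Hypothetical sketch (not the paper's simulator): deciding, per code block,
# whether to keep it in scarce on-chip flash or execute it remotely in the
# cloud via connected-instruction streaming. Cost numbers are assumptions.

def remote_time(n_instructions, cloud_ips=1e9, rtt=0.02, bytes_per_instr=4,
                bandwidth=1e6):
    """Estimated time to stream and execute a block of instructions in the cloud."""
    transfer = rtt + (n_instructions * bytes_per_instr) / bandwidth
    return transfer + n_instructions / cloud_ips

def local_time(n_instructions, mcu_ips=1e7):
    """Estimated time to execute the block on the low-power MCU."""
    return n_instructions / mcu_ips

def partition(blocks, flash_budget):
    """Greedy split: offload blocks that are faster remotely, keep the rest
    on-chip as long as they fit in the flash budget (in bytes)."""
    local, remote, used = [], [], 0
    for name, n_instr, size in blocks:
        if remote_time(n_instr) < local_time(n_instr) or used + size > flash_budget:
            remote.append(name)
        else:
            local.append(name)
            used += size
    return local, remote

if __name__ == "__main__":
    blocks = [("init", 5_000, 2_000), ("dsp_filter", 2_000_000, 12_000),
              ("report", 10_000, 3_000)]
    print(partition(blocks, flash_budget=16_000))  # 'report' spills to the cloud
```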

Multi-access Edge Computing Scheduler for Low Latency Services (저지연 서비스를 위한 Multi-access Edge Computing 스케줄러)

  • Kim, Tae-Hyun;Kim, Tae-Young;Jin, Sunggeun
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.6 / pp.299-305 / 2020
  • We have developed a scheduler that additionally considers network performance by extending Kubernetes, which was developed to manage large numbers of containers on cloud computing nodes. The network delay characteristics of the compute nodes are learned during server operation, and the learned results are used in a placement algorithm that considers them together with the existing metrics of CPU, memory, and volume. We confirmed that the placement algorithm provides low-delay network services.
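
The abstract describes extending the Kubernetes scheduler to weigh a learned network-delay estimate alongside CPU, memory, and volume. The sketch below is not the authors' extension; it only illustrates, in plain Python, how such a combined node-scoring rule might look, with an exponential moving average standing in for the learned delay model. Weights and field names are assumptions.

```python
# Illustrative sketch (not the authors' code): ranking candidate nodes the way a
# network-aware scheduler might, combining CPU/memory/volume headroom with a
# network-delay estimate learned from past measurements.

def update_delay_estimate(prev_ms, sample_ms, alpha=0.2):
    """Exponential moving average of observed network delay for a node."""
    return sample_ms if prev_ms is None else (1 - alpha) * prev_ms + alpha * sample_ms

def score(node, w_cpu=0.25, w_mem=0.25, w_vol=0.2, w_net=0.3, max_delay_ms=100.0):
    """Higher is better. Each resource term is the free fraction; the network
    term rewards nodes whose learned delay is far below max_delay_ms."""
    cpu_free = 1 - node["cpu_used"] / node["cpu_cap"]
    mem_free = 1 - node["mem_used"] / node["mem_cap"]
    vol_free = 1 - node["vol_used"] / node["vol_cap"]
    net_term = max(0.0, 1 - node["delay_ms"] / max_delay_ms)
    return w_cpu * cpu_free + w_mem * mem_free + w_vol * vol_free + w_net * net_term

def place(candidate_nodes):
    """Pick the node with the best combined score."""
    return max(candidate_nodes, key=score)["name"]

if __name__ == "__main__":
    nodes = [
        {"name": "edge-a", "cpu_used": 2, "cpu_cap": 8, "mem_used": 4, "mem_cap": 16,
         "vol_used": 10, "vol_cap": 100, "delay_ms": 5.0},
        {"name": "cloud-b", "cpu_used": 1, "cpu_cap": 32, "mem_used": 8, "mem_cap": 64,
         "vol_used": 50, "vol_cap": 1000, "delay_ms": 60.0},
    ]
    # refresh edge-a's learned delay with a new 7 ms measurement
    nodes[0]["delay_ms"] = update_delay_estimate(nodes[0]["delay_ms"], 7.0)
    print(place(nodes))  # the low-delay edge node wins for latency-sensitive pods
```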

Design and Evaluation of a Fault-tolerant Publish/Subscribe System for IoT Applications (IoT 응용을 위한 결함 포용 발행/구독 시스템의 설계 및 평가)

  • Bae, Ihn-Han
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1101-1113 / 2021
  • The rapid growth of sense-and-respond applications and the emerging cloud computing model present a new challenge: providing publish/subscribe middleware as a scalable and elastic cloud service. The publish/subscribe interaction model is a promising solution for scalable data dissemination over wide-area networks. In addition, there has been some work on publish/subscribe messaging paradigms that guarantee reliability and availability in the face of node and link failures. These publish/subscribe systems are commonly used in information-centric networks and edge-fog-cloud infrastructures for the IoT, which uses an edge-fog-cloud infrastructure to efficiently process the massive amounts of sensing data collected from the surrounding environment. In this paper, we propose a quorum-based hierarchical fault-tolerant publish/subscribe system (QHFPS) to enable reliable delivery of messages in the presence of link and node failures. QHFPS efficiently distributes IoT messages to the publish/subscribe brokers in fog overlay layers on the basis of a proposed extended stepped grid (xS-grid) quorum that provides tolerance to node failures and network partitions. We evaluate the performance of QHFPS with an analytical model in three aspects: number of transmitted pub/sub messages, average subscription delay, and subscription delivery rate.
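
The xS-grid quorum itself is the paper's contribution and is not reproduced here; for context, the sketch below shows a classic Maekawa-style grid quorum, where each broker's quorum is its grid row plus its grid column, so any two quorums intersect even when some brokers fail. The layout and identifiers are illustrative.

```python
# Basic grid-quorum sketch for context only; the paper's extended stepped grid
# (xS-grid) quorum is its own construction and is not reproduced here.

def grid_quorum(broker_id, rows, cols):
    """Return the set of broker ids in the same row or column as broker_id,
    assuming brokers 0..rows*cols-1 are laid out row-major in a rows x cols grid."""
    r, c = divmod(broker_id, cols)
    row_members = {r * cols + j for j in range(cols)}
    col_members = {i * cols + c for i in range(rows)}
    return row_members | col_members

if __name__ == "__main__":
    rows, cols = 3, 4                      # 12 brokers in a 3x4 grid
    q0 = grid_quorum(0, rows, cols)
    q7 = grid_quorum(7, rows, cols)
    print(sorted(q0))                      # row 0 plus column 0
    print(sorted(q0 & q7))                 # non-empty intersection: quorums always overlap
```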

Performance Comparison and Optimal Selection of Computing Techniques for Corridor Surveillance (회랑감시를 위한 컴퓨팅 기법의 성능 비교와 최적 선택 연구)

  • Gyeong-rae Jo;Seok-min Hong;Won-hyuck Choi
    • Journal of Advanced Navigation Technology / v.27 no.6 / pp.770-775 / 2023
  • Recently, as the amount of digital data increases exponentially, the importance of data processing systems is being emphasized, and their selection and construction are becoming more important. In this study, the performance of cloud computing (CC), edge computing (EC), and UAV-based intelligent edge computing (UEC) was compared as a way to address this problem, and the characteristics, strengths, and weaknesses of each method were analyzed. In particular, the study focused on real-time, large-volume data processing situations such as corridor surveillance. The experiments assumed a specific scenario and imposed a penalty on the infrastructure, which made it possible to evaluate performance in realistic situations more accurately. In addition, the effectiveness and limitations of each computing method were clarified, providing guidance for more effective system selection.

An Overview of Mobile Edge Computing: Architecture, Technology and Direction

  • Rasheed, Arslan;Chong, Peter Han Joo;Ho, Ivan Wang-Hei;Li, Xue Jun;Liu, William
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.10 / pp.4849-4864 / 2019
  • Modern applications such as augmented reality, connected vehicles, video streaming and gaming have stringent requirements on latency, bandwidth and computation resources. The explosion in data generation by mobile devices has further exacerbated the situation. Mobile Edge Computing (MEC) is a recent addition to the edge computing paradigm that amalgamates cloud computing capabilities with cellular communications. The concept of MEC is to relocate cloud capabilities to the edge of the network to yield ultra-low latency, high computation, high bandwidth, low burden on the core network, enhanced quality of experience (QoE), and efficient resource utilization. In this paper, we provide a comprehensive overview of different traits of MEC including its use cases, architecture, computation offloading, security, economic aspects, research challenges, and potential future directions.