• Title/Summary/Keyword: task offloading


A Task Offloading Approach using Classification and Particle Swarm Optimization (분류와 Particle Swarm Optimization을 이용한 태스크 오프로딩 방법)

  • Mateo, John Cristopher A.;Lee, Jaewan
    • Journal of Internet Computing and Services / v.18 no.1 / pp.1-9 / 2017
  • Recent innovations in cloud computing research, such as the application of bio-inspired computing techniques, have brought a new class of solutions to offloading mechanisms. With the growing adoption of mobile devices, mobile cloud computing can also benefit from bio-inspired techniques. Energy-efficient offloading mechanisms are needed to reduce the total energy consumption of mobile cloud systems, but previous works did not consider energy consumption when making task-distribution decisions. This paper proposes Particle Swarm Optimization (PSO) as a strategy for offloading from a cloudlet to data centers, where each task is represented as a particle during the optimization. Before PSO is applied, the collected tasks are classified with K-means clustering on the cloudlet; this reduces the number of particles and locates the best data center for a specific task instead of considering every task during the PSO process. Simulation results show that the proposed PSO approach excels at choosing data centers with respect to energy consumption, at the cost of slightly more processing time than the other approaches.
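
To make the pipeline concrete, the following is a minimal Python sketch of the clustering-then-PSO idea, not the authors' implementation: the tasks and per-data-center energy factors are made up, a tiny K-means step groups the tasks, and a basic PSO loop searches for a cluster-to-data-center assignment under an assumed energy cost with a congestion penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical workload: each task = (CPU demand, input size); 3 data centers
tasks = rng.uniform(1.0, 10.0, size=(40, 2))
dc_energy_per_cycle = np.array([0.8, 1.0, 1.3])   # assumed per-DC energy factors
K, N_DC, N_PARTICLES = 4, 3, 12

def kmeans(points, k, iters=20):
    """Tiny K-means: returns cluster labels for the task set."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

labels = kmeans(tasks, K)
cluster_demand = np.array([tasks[labels == j, 0].sum() for j in range(K)])

def energy_cost(assignment):
    """Assumed cost: per-DC energy factor times assigned load, plus a quadratic
    congestion term so that piling every cluster onto one center is penalized."""
    load = np.zeros(N_DC)
    for j, dc in enumerate(assignment):
        load[dc] += cluster_demand[j]
    return float(np.sum(dc_energy_per_cycle * load + 0.01 * load ** 2))

# Basic PSO: each particle encodes a cluster-to-data-center assignment
pos = rng.uniform(0, N_DC - 1, size=(N_PARTICLES, K))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.full(N_PARTICLES, np.inf)
gbest, gbest_cost = np.zeros(K, dtype=int), np.inf

for _ in range(50):
    for i in range(N_PARTICLES):
        cand = np.clip(np.rint(pos[i]), 0, N_DC - 1).astype(int)
        c = energy_cost(cand)
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = pos[i].copy(), c
        if c < gbest_cost:
            gbest, gbest_cost = cand, c
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, N_DC - 1)

print("cluster -> data center:", gbest.tolist(), "| estimated energy:", round(gbest_cost, 2))
```

Because the particles range over cluster assignments rather than individual tasks, the swarm stays small, which mirrors the paper's motivation for clustering before PSO.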

An Offloading Scheduling Strategy with Minimized Power Overhead for Internet of Vehicles Based on Mobile Edge Computing

  • He, Bo;Li, Tianzhang
    • Journal of Information Processing Systems / v.17 no.3 / pp.489-504 / 2021
  • By distributing computing tasks among devices at the edge of the network, edge computing uses virtualization, distributed computing, and parallel computing technologies to let users dynamically obtain computing power, storage space, and other services as needed. Applying edge computing architectures to the Internet of Vehicles can effectively ease the tension between the large amount of computation required by low-latency vehicle applications and the limited, unevenly distributed resources of vehicles. In this paper, a predictive offloading strategy based on the MEC load state is proposed, which considers both reducing the delay of returning calculation results over the RSU multi-hop backhaul and reducing the queuing time of tasks at MEC servers. First, a delay factor and an energy consumption factor are introduced according to the characteristics of tasks, and the costs of local execution and of offloading to MEC servers are defined. Then, from the perspective of the vehicles, a delay preference factor and an energy consumption preference factor are introduced to define the cost of executing each computing task. Furthermore, a mathematical optimization model for minimizing the power overhead is constructed under delay and power consumption constraints, and the simulated annealing algorithm is used to solve it. Simulation results show that this strategy effectively reduces system power consumption by shortening the task execution delay. Finally, by comparing the two costs, each computing task can be executed locally or offloaded to an MEC server, so that the strategy not only meets the delay and energy consumption requirements but also achieves the lowest cost.
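
A rough sketch of this kind of decision model, under heavily simplified assumptions: the delay/energy cost of local versus MEC execution is computed per task with made-up parameters (CPU frequencies, uplink rate, preference factors), and a plain simulated annealing loop searches the binary offloading vector, in the spirit of the paper's solution step.

```python
import math, random

random.seed(1)

# Hypothetical task set: (CPU cycles, data size in bits)
tasks = [(random.uniform(1e8, 5e8), random.uniform(1e5, 1e6)) for _ in range(20)]

F_LOCAL, F_MEC = 1e9, 8e9        # CPU frequencies (Hz), assumed
RATE = 5e6                        # uplink rate (bit/s), assumed
P_TX, KAPPA = 0.5, 1e-27          # transmit power (W) and local energy coefficient, assumed
ALPHA, BETA = 0.5, 0.5            # delay / energy preference factors
DELAY_LIMIT = 1.0                 # per-task delay constraint (s), assumed

def task_cost(task, offload):
    cycles, bits = task
    if offload:
        delay = bits / RATE + cycles / F_MEC      # transmission + MEC execution
        energy = P_TX * (bits / RATE)             # energy spent transmitting
    else:
        delay = cycles / F_LOCAL
        energy = KAPPA * F_LOCAL ** 2 * cycles    # simple local CPU energy model
    penalty = 10.0 if delay > DELAY_LIMIT else 0.0
    return ALPHA * delay + BETA * energy + penalty

def total_cost(decisions):
    return sum(task_cost(t, d) for t, d in zip(tasks, decisions))

# Simulated annealing over the binary offloading vector
state = [random.random() < 0.5 for _ in tasks]
cost = total_cost(state)
T = 1.0
while T > 1e-3:
    i = random.randrange(len(state))
    cand = state[:]
    cand[i] = not cand[i]                         # flip one offloading decision
    c = total_cost(cand)
    if c < cost or random.random() < math.exp((cost - c) / T):
        state, cost = cand, c
    T *= 0.99                                     # geometric cooling schedule

print("offloaded tasks:", sum(state), "of", len(state), "| cost:", round(cost, 4))
```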

Task offloading scheme based on the DRL of Connected Home using MEC (MEC를 활용한 커넥티드 홈의 DRL 기반 태스크 오프로딩 기법)

  • Lim, Ducsun;Sohn, Kyu-Seek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.61-67 / 2023
  • The rise of 5G and the proliferation of smart devices have underscored the significance of multi-access edge computing (MEC). Amid this trend, interest in efficiently processing computation-intensive and latency-sensitive applications has increased. This study investigates a novel task offloading strategy that accounts for a probabilistic MEC environment to address these challenges. First, we consider the frequency of dynamic task requests and unstable wireless channel conditions, and propose a method for minimizing vehicle power consumption and latency. We then develop a deep reinforcement learning (DRL) based offloading technique that balances local computation against offloading transmission power. We analyze the power consumption and queuing latency of vehicles using the deep deterministic policy gradient (DDPG) and deep Q-network (DQN) techniques. Finally, we derive and validate the optimal performance enhancement strategy in a vehicle-based MEC environment.
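
The paper uses DDPG and DQN agents; the toy sketch below substitutes a far simpler tabular Q-learning agent in an invented environment, purely to illustrate the state/action/reward framing (queue level and channel quality as state, local-versus-offload as action, negative latency-plus-transmit-power as reward).

```python
import random
from collections import defaultdict

random.seed(2)

ACTIONS = ("local", "offload")   # simplified action set

def step(state, action):
    """Toy environment: state = (queue_level 0-4, channel 0-2). Returns reward, next state."""
    queue, channel = state
    if action == "offload":
        tx_power = 0.6 - 0.15 * channel          # better channel -> less transmit power
        latency = 0.2 + 0.1 * (2 - channel)
        queue = max(queue - 1, 0)
    else:
        tx_power = 0.0
        latency = 0.3 + 0.1 * queue              # local queue increases the wait
        queue = min(queue + (1 if random.random() < 0.5 else 0), 4)
    reward = -(latency + tx_power)               # minimize latency + power
    channel = random.randrange(3)                # channel quality varies randomly
    return reward, (queue, channel)

# Tabular Q-learning (stand-in for the DQN/DDPG agents used in the paper)
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (0, 1)
for _ in range(20000):
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# Learned policy for a medium-quality channel, by local queue level
for q_level in range(5):
    s = (q_level, 1)
    print("queue", q_level, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```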

Energy efficiency task scheduling for battery level-aware mobile edge computing in heterogeneous networks

  • Xie, Zhigang;Song, Xin;Cao, Jing;Xu, Siyang
    • ETRI Journal / v.44 no.5 / pp.746-758 / 2022
  • This paper focuses on a mobile edge-computing-enabled heterogeneous network. A battery level-aware task-scheduling framework is proposed to improve the energy efficiency and prolong the operating hours of battery-powered mobile devices. The formulated optimization problem is a typical mixed-integer nonlinear programming problem. To solve this nondeterministic polynomial (NP)-hard problem, a decomposition-based task-scheduling algorithm is proposed. Using an alternating optimization technique, the original problem is divided into three subproblems. In the outer loop, task offloading decisions are produced by a pruning search algorithm for the task-offloading subproblem. In the inner loop, closed-form solutions for the computational resource-allocation subproblems are derived using the Lagrangian multiplier method. The transmit power-allocation subproblem is then proven to be unimodal and is solved using a gradient-based bisection search algorithm. The simulation results demonstrate that the proposed framework achieves better energy efficiency than other frameworks, and the impact of the battery level-aware scheme on the operating hours of battery-powered mobile devices is also investigated.
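
As an illustration of the bisection step described above (not the paper's exact formulation), the sketch below maximizes a standard energy-efficiency curve, achievable rate divided by transmit-plus-circuit power, which is unimodal in the transmit power; all link parameters are assumed.

```python
import math

# Assumed link parameters (illustrative only)
B, GAIN, NOISE, P_CIRCUIT = 1e6, 1e-6, 1e-9, 0.1   # bandwidth, channel gain, noise, circuit power

def rate(p):
    """Shannon rate for transmit power p (bit/s)."""
    return B * math.log2(1.0 + p * GAIN / NOISE)

def energy_efficiency(p):
    """Bits per Joule; unimodal in the transmit power p."""
    return rate(p) / (p + P_CIRCUIT)

def d_ee(p, h=1e-6):
    """Numerical derivative of the energy-efficiency curve (gradient direction)."""
    return (energy_efficiency(p + h) - energy_efficiency(p - h)) / (2 * h)

def bisection_power(p_min=1e-3, p_max=2.0, tol=1e-6):
    """Gradient-based bisection: shrink the interval toward the derivative's zero crossing."""
    lo, hi = p_min, p_max
    if d_ee(hi) > 0:          # efficiency still rising at p_max -> optimum is the boundary
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d_ee(mid) > 0:
            lo = mid          # still climbing: optimum lies to the right
        else:
            hi = mid          # already descending: optimum lies to the left
    return 0.5 * (lo + hi)

p_star = bisection_power()
print(f"optimal transmit power ~ {p_star:.4f} W, "
      f"EE ~ {energy_efficiency(p_star) / 1e6:.3f} Mbit/J")
```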

Service Mobility Support Scheme in SDN-based Fog Computing Environment (SDN 기반 Fog Computing 환경에서 서비스 이동성 제공 방안)

  • Kyung, Yeun-Woong;Kim, Tae-Kook
    • Journal of Internet of Things and Convergence / v.6 no.3 / pp.39-44 / 2020
  • In this paper, we propose an SDN-based fog computing service mobility support scheme. The fog computing architecture has attracted attention because it enables task offloading services for IoT (Internet of Things) devices, which have limited computing and power resources. However, since mobile as well as static IoT devices are candidate targets for fog computing services, an efficient task offloading scheme that takes mobility into account is required. Especially for IoT services that require low-latency responses, the delay of re-establishing a connection and offloading tasks to a new fog computing node after a handover can cause QoS (Quality of Service) degradation. Therefore, this paper proposes an efficient service mobility support scheme that combines task migration with flow rule pre-installation. Task migration preserves service connectivity when the serving fog computing node has to change. In addition, pre-installing flow rules into the forwarding nodes along the post-handover path reduces the connection delay and the service interruption time.
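
A conceptual sketch of the two mechanisms, using a toy topology and dictionary-based flow tables rather than a real SDN controller API: one function pre-installs forwarding rules along a predicted post-handover path, and another stands in for handing the task state over to the new fog node. All node names and rule fields are invented for illustration.

```python
from collections import deque

# Toy topology: forwarding nodes as an adjacency list (assumed, not from the paper)
TOPOLOGY = {
    "ap1": ["s1"], "ap2": ["s2"],
    "s1": ["ap1", "s3"], "s2": ["ap2", "s3"],
    "s3": ["s1", "s2", "fog1", "fog2"],
    "fog1": ["s3"], "fog2": ["s3"],
}
flow_tables = {node: [] for node in TOPOLOGY}   # per-node flow rules

def shortest_path(src, dst):
    """Plain BFS over the toy topology."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def pre_install(device_id, predicted_ap, target_fog):
    """Pre-install forwarding rules along the predicted post-handover path."""
    path = shortest_path(predicted_ap, target_fog)
    for hop, nxt in zip(path, path[1:]):
        flow_tables[hop].append({"match": device_id, "out": nxt})
    return path

def migrate_task(task_state, old_fog, new_fog):
    """Conceptual task migration: hand the task state to the new fog node."""
    print(f"migrating task state ({len(task_state)} bytes) {old_fog} -> {new_fog}")

# Device currently served via ap1/fog1; a handover to ap2 is predicted.
path = pre_install("dev-42", predicted_ap="ap2", target_fog="fog2")
migrate_task(b"serialized-task-state", "fog1", "fog2")
print("pre-installed path:", " -> ".join(path))
print("rules on s3:", flow_tables["s3"])
```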

Analysis of partial offloading effects according to network load (네트워크 부하에 따른 부분 오프로딩 효과 분석)

  • Baik, Jae-Seok;Nam, Kwang-Woo;Jang, Min-Seok;Lee, Yon-Sik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.591-593 / 2022
  • This paper proposes a partial offloading system for minimizing application service processing latency in an FEC (Fog/Edge Computing) environment, and analyzes the offloading effect of the proposed system against local-only and edge-server-only processing under varying network load. The proposed system includes a partial offloading algorithm based on the reconstructive linearization of multi-branch structures, as well as an optimal collaboration algorithm between mobile devices and edge servers [1,2]. The experiment was conducted by applying layer scheduling to a logical CNN model with a DAG topology. The experimental results show that, compared to local-only or edge-only execution, the proposed system always provides efficient task processing strategies and lower processing latency.


Response Time Analysis Considering Sensing Data Synchronization in Mobile Cloud Applications (모바일 클라우드 응용에서 센싱 데이터 동기화를 고려한 응답 시간 분석)

  • Min, Hong;Heo, Junyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.3 / pp.137-141 / 2015
  • Mobile cloud computing uses cloud services to overcome the resource constraints of mobile devices. Offloading means that a task that would otherwise run on the mobile device is delegated to the cloud, and many studies related to the resulting energy consumption have been conducted. In this paper, we design a response time model that takes sensing data synchronization into account in order to estimate the efficiency of an offloading scheme in terms of response time. The proposed model considers the synchronization of the required sensing data to improve the accuracy of response time estimation when the cloud processes a task requested by a mobile device. Simulation results show that the response time is affected by the generation rate of new sensing data and by the synchronization period.
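
The following is a small illustrative model in the spirit of the abstract, with invented parameters and a simplified staleness assumption: the offloading response time adds an expected wait for the next sensing-data synchronization whenever the cloud's copy of the data is likely to be stale.

```python
# Minimal response-time model sketch (assumed parameters, not the paper's exact model)

def offload_response_time(t_up, t_proc_cloud, t_down, sync_period, data_gen_rate):
    """
    Estimated response time when a task is offloaded to the cloud and the cloud
    must hold sensing data no older than one synchronization period.

    sync_period   : interval between sensing-data synchronizations (s)
    data_gen_rate : rate at which new sensing data is produced (readings/s)
    """
    # Probability that at least one new reading appeared since the last sync,
    # assuming readings arrive roughly uniformly within a period.
    p_stale = min(1.0, data_gen_rate * sync_period)
    # If the cloud copy is stale, the request waits on average half a period
    # for the next synchronization before processing can start.
    expected_sync_wait = p_stale * (sync_period / 2.0)
    return t_up + expected_sync_wait + t_proc_cloud + t_down

def local_response_time(t_proc_local):
    """Local execution needs no upload and always sees the newest sensing data."""
    return t_proc_local

# Example: offloading wins only while the synchronization wait stays small.
for sync_period in (0.05, 0.2, 0.5, 1.0):
    t_off = offload_response_time(t_up=0.08, t_proc_cloud=0.05, t_down=0.02,
                                  sync_period=sync_period, data_gen_rate=2.0)
    print(f"sync={sync_period:.2f}s  offload={t_off:.3f}s  local={local_response_time(0.30):.3f}s")
```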

UAV-MEC Offloading and Migration Decision Algorithm for Load Balancing in Vehicular Edge Computing Network (차량 엣지 컴퓨팅 네트워크에서 로드 밸런싱을 위한 UAV-MEC 오프로딩 및 마이그레이션 결정 알고리즘)

  • Shin, A Young;Lim, Yujin
    • KIPS Transactions on Computer and Communication Systems / v.11 no.12 / pp.437-444 / 2022
  • Recently, research on mobile edge services has been conducted to handle computation-intensive and latency-sensitive tasks occurring in wireless networks. However, MEC servers fixed on the ground cannot flexibly cope with situations where task processing requests increase sharply, such as during commuting hours. To solve this problem, technologies that provide edge services using UAVs (Unmanned Aerial Vehicles) have emerged. Unlike ground MEC servers, UAVs have limited battery capacity, so it is necessary to optimize energy efficiency through load balancing between UAV MEC servers. Therefore, in this paper, we propose a load balancing technique that considers both the energy state of the UAVs and the mobility of the vehicles. The proposed technique is composed of a task offloading scheme using a genetic algorithm and a task migration scheme using Q-learning. To evaluate the performance of the proposed technique, experiments were conducted with varying vehicle speeds and numbers of vehicles, and performance was analyzed in terms of load variance, energy consumption, communication overhead, and delay constraint satisfaction rate.
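
The sketch below covers only the offloading half of the proposed technique: a plain genetic algorithm assigns tasks to UAV-MEC servers so as to reduce load variance while penalizing heavily loading low-battery UAVs. Task demands, battery levels, and the fitness weighting are made up, and the Q-learning migration step is omitted.

```python
import random, statistics

random.seed(3)

N_TASKS, N_UAV = 30, 4
task_load = [random.uniform(1.0, 5.0) for _ in range(N_TASKS)]   # assumed task demands
uav_energy = [random.uniform(0.5, 1.0) for _ in range(N_UAV)]    # remaining battery fraction

def fitness(assign):
    """Lower is better: load variance plus a penalty for loading low-battery UAVs."""
    loads = [0.0] * N_UAV
    for t, u in enumerate(assign):
        loads[u] += task_load[t]
    battery_penalty = sum(loads[u] * (1.0 - uav_energy[u]) for u in range(N_UAV))
    return statistics.pvariance(loads) + battery_penalty

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)            # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    return [random.randrange(N_UAV) if random.random() < rate else g for g in chrom]

# Plain generational GA over task-to-UAV assignments
pop = [[random.randrange(N_UAV) for _ in range(N_TASKS)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:10]                            # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)]
    pop = parents + children

best = min(pop, key=fitness)
loads = [round(sum(task_load[t] for t, u in enumerate(best) if u == k), 1) for k in range(N_UAV)]
print("per-UAV load:", loads, "| fitness:", round(fitness(best), 3))
```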

Delayed offloading scheme for IoT tasks considering opportunistic fog computing environment (기회적 포그 컴퓨팅 환경을 고려한 IoT 테스크의 지연된 오프로딩 제공 방안)

  • Kyung, Yeunwoong
    • Journal of Internet of Things and Convergence / v.6 no.4 / pp.89-92 / 2020
  • With the spread of various IoT (Internet of Things) services, there has been a great deal of research on task offloading for IoT devices. Since conventional cloud-computing-based offloading suffers from service response delays and core network load, fog-computing-based offloading, whose computing resources sit close to the IoT devices, has attracted attention. However, even in a fog computing architecture, load can become concentrated on the fog computing node when the number of requests increases. To solve this problem, the opportunistic fog computing concept, which offloads tasks to available computing resources such as cars and drones, has been introduced. In previous fog and opportunistic-fog-node research, offloading is performed immediately whenever a service request occurs, which means requests can be offloaded to opportunistic fog nodes only while such nodes happen to be available. However, as long as the service response delay requirement is still met, there is no need to offload a request immediately, and the load can be distributed by making the best use of the opportunistic fog nodes. Therefore, this paper proposes a delayed offloading scheme that satisfies the response delay requirements while offloading requests to the opportunistic fog nodes as efficiently as possible.
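
A toy simulation of the delayed-offloading idea with invented numbers: in each time slot a request is handed to an opportunistic fog node if one is available, deferred while the response-delay budget still allows it, and finally sent to the fixed fog node as a fallback.

```python
import random

random.seed(4)

def can_defer(remaining_deadline, expected_exec_time, expected_tx_time, margin=0.1):
    """Defer offloading as long as the deadline still leaves room for transfer + execution."""
    return remaining_deadline > (expected_tx_time + expected_exec_time) * (1 + margin)

def delayed_offload(deadline, exec_time, tx_time, p_fog_available, slot=0.1):
    """
    At each slot: offload to an opportunistic fog node if one is available;
    otherwise keep waiting while the deadline still allows it, and fall back
    to the (loaded) fixed fog node at the last possible moment.
    """
    elapsed = 0.0
    while True:
        if random.random() < p_fog_available:
            return "opportunistic fog node", elapsed
        if not can_defer(deadline - elapsed - slot, exec_time, tx_time):
            return "fixed fog node (fallback)", elapsed
        elapsed += slot          # wait one more slot before deciding again

targets = [delayed_offload(deadline=1.0, exec_time=0.2, tx_time=0.1,
                           p_fog_available=0.3)[0] for _ in range(1000)]
share = targets.count("opportunistic fog node") / len(targets)
print(f"requests served by opportunistic fog nodes: {share:.0%}")
```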

Partial Offloading System of Multi-branch Structures in Fog/Edge Computing Environment (FEC 환경에서 다중 분기구조의 부분 오프로딩 시스템)

  • Lee, YonSik;Ding, Wei;Nam, KwangWoo;Jang, MinSeok
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1551-1558 / 2022
  • In this paper, we propose a two-tier cooperative computing system comprising a mobile device and an edge server for the partial offloading of multi-branch structures in fog/edge computing environments. The proposed system includes an algorithm that splits application service processing by applying reconstructive linearization techniques to multi-branch structures, as well as an optimal collaboration algorithm based on partial offloading between the mobile device and the edge server. Furthermore, we formulate computation offloading and CNN layer scheduling as latency minimization problems and simulate the effectiveness of the proposed system. The experimental results show that the proposed algorithm suits both DAG and chain topologies, adapts well to different network conditions, and provides efficient task processing strategies and processing times compared to local-only or edge-only execution. The proposed system can also serve as a basis for research on optimizing models for executing application services on mobile devices and on distributing edge resource workloads efficiently.
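
The latency-minimization formulation can be illustrated with a small split-point search over a chain-topology model. The per-layer latencies, feature-map sizes, and bandwidths below are invented; split 0 and split n correspond to edge-only and local-only execution, respectively.

```python
# Optimal split-point search for a chain-topology model (illustrative numbers only)

# Per-layer compute latency on the mobile device and the edge server (seconds, assumed),
# and the size of each layer's output feature map (MB, assumed).
mobile_lat = [0.040, 0.120, 0.110, 0.090, 0.030]
edge_lat   = [0.005, 0.015, 0.014, 0.011, 0.004]
out_size   = [1.0, 0.5, 0.25, 0.1, 0.01]
input_size = 4.0                      # raw input size (MB)

def partial_latency(split, bandwidth_mbps):
    """Run layers [0, split) on the device, send the intermediate output,
    then run layers [split, end) on the edge server."""
    tx_mb = input_size if split == 0 else out_size[split - 1]
    tx = tx_mb * 8.0 / bandwidth_mbps                    # MB -> Mbit -> seconds
    return sum(mobile_lat[:split]) + tx + sum(edge_lat[split:])

def best_split(bandwidth_mbps):
    n = len(mobile_lat)
    # split = 0 -> edge-only (send raw input); split = n -> local-only (no transfer)
    candidates = {s: partial_latency(s, bandwidth_mbps) for s in range(n)}
    candidates[n] = sum(mobile_lat)
    s = min(candidates, key=candidates.get)
    return s, candidates[s]

for bw in (5, 20, 100):               # congested, moderate, and lightly loaded network (Mbit/s)
    s, lat = best_split(bw)
    print(f"bandwidth {bw:>3} Mbit/s -> split after layer {s}, latency {lat * 1000:.0f} ms")
```

In the paper's setting, the reconstructive linearization of multi-branch structures is what reduces a DAG-shaped model to a chain of this kind before such a split search is applied.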