• Title/Summary/Keyword: Task scheduling algorithm

A Dynamic Voltage Scaling Algorithm for Low-Energy Hard Real-Time Applications using Execution Time Profile (실행 시간 프로파일을 이용한 저전력 경성 실시간 프로그램용 동적 전압 조절 알고리즘)

  • 신동군;김지홍
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.11
    • /
    • pp.601-610
    • /
    • 2002
  • Intra-task voltage scheduling (IntraVS), which adjusts the supply voltage within an individual task boundary, is an effective technique for developing low-power applications. In this paper, we propose a novel intra-task voltage scheduling algorithm for hard real-time applications based on average-case execution time. Unlike the conventional IntraVS algorithm, where voltage scaling decisions are based on the worst-case execution cycles, the proposed algorithm improves energy efficiency by controlling the execution speed based on average-case execution cycles while meeting the real-time constraints. The experimental results using an MPEG-4 decoder program show that the proposed algorithm reduces the energy consumption by up to 34% over the conventional IntraVS algorithm.
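
As a rough illustration of the average-case-based speed-setting idea summarized above (not the authors' exact algorithm), the sketch below picks the lowest clock frequency that lets the average-case remaining cycles fill the time budget while reserving enough full-speed time for the worst case; the cycle counts, time budget, and `f_max` are assumed values.

```python
def choose_speed(time_left, wcec_left, acec_left, f_max):
    """Pick a clock frequency (Hz) for the next program region.

    time_left : seconds until the task deadline
    wcec_left : worst-case execution cycles still remaining
    acec_left : average-case execution cycles still remaining
    """
    # Reserve enough time to run the cycles beyond the average case at f_max,
    # so the worst case still meets the deadline; spend the remaining budget
    # on the average-case cycles at the lowest frequency that fits.
    slack = time_left - (wcec_left - acec_left) / f_max
    if slack <= 0:
        return f_max                      # no slack left: run at full speed
    return min(f_max, acec_left / slack)  # lowest frequency that still fits

# Example (made-up numbers): 10 ms to the deadline, 8e6 worst-case and
# 5e6 average-case cycles remain, 1 GHz maximum frequency.
print(choose_speed(10e-3, 8e6, 5e6, f_max=1e9))  # about 7.1e8 Hz
```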

Effective Task Scheduling and Dynamic Resource Optimization based on Heuristic Algorithms in Cloud Computing Environment

  • Nzanywayingoma, Frederic;Yang, Yang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.12
    • /
    • pp.5780-5802
    • /
    • 2017
  • A cloud computing system consists of distributed resources in a dynamic and decentralized environment. Therefore, using cloud computing resources efficiently and obtaining maximum profit remain challenging problems for cloud service providers and cloud service users, which makes efficient scheduling important. To schedule cloud resources, numerous heuristic algorithms such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Cuckoo Search (CS) have been adopted. This paper proposes a Modified Particle Swarm Optimization (MPSO) algorithm to address the above-mentioned issues. We first formulate an optimization problem and then propose a Modified PSO optimization technique. The performance of MPSO was evaluated against PSO and GA. Our experimental results show that the proposed MPSO minimizes task execution time and maximizes the resource utilization rate.
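
The following is a minimal, generic PSO sketch for mapping tasks to VMs with makespan as the fitness; it does not reproduce the paper's specific MPSO modifications, and the task lengths, VM speeds, and PSO constants are assumptions.

```python
import random

task_len = [40, 12, 25, 33, 8, 19, 27, 15]   # instructions per task (assumed)
vm_speed = [10, 5, 8]                         # instructions/sec per VM (assumed)
N_TASK, N_VM = len(task_len), len(vm_speed)

def decode(position):
    """Round each continuous coordinate to a VM index."""
    return [int(x) % N_VM for x in position]

def makespan(assign):
    load = [0.0] * N_VM
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, N_VM) for _ in range(N_TASK)] for _ in range(n_particles)]
    vel = [[0.0] * N_TASK for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [makespan(decode(p)) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_TASK):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = makespan(decode(pos[i]))
            if fit < pbest_fit[i]:                 # update personal best
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit < gbest_fit:                # update global best
                    gbest, gbest_fit = pos[i][:], fit
    return decode(gbest), gbest_fit

print(pso())   # (task-to-VM assignment, its makespan)
```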

Scheduling for Guaranteeing QoS of Continuous Multimedia Traffic (연속적 멀티미디어 트래픽의 서비스 질 보장을 위한 스케쥴링)

  • 길아라
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.1
    • /
    • pp.22-32
    • /
    • 2003
  • Many multimedia applications in distributed environments generate packets with real-time characteristics for continuous audio/video data and transmit them according to real-time task scheduling theories. In this paper, we model the traffic for continuous media in distributed multimedia applications based on high-bandwidth networks and introduce the PDMA algorithm, a hard real-time task scheduling theory for guaranteeing the QoS requested by clients. Furthermore, we propose an admission control that prevents a new request from interfering with the current services, so that the high quality of service of the applications is maintained. Since the proposed admission control is a sufficient condition for the PDMA algorithm, the PDMA algorithm is always able to find a feasible schedule for any set of messages that satisfies it. Therefore, if the set of messages including the new request satisfies the admission control, the new request is accepted and the new traffic is generated; otherwise, the new request is rejected. Finally, we present simulation results showing that scheduling with the proposed admission control is of practical use.
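
As a hedged illustration of the admission-control idea (accept a new stream only if the whole message set remains schedulable), the sketch below uses a simple utilization-bound test as a stand-in for the paper's PDMA schedulability condition; the stream parameters and the bound are assumptions.

```python
def utilization(streams):
    # Each stream: (transmission_time_per_period, period) in the same time unit.
    return sum(c / p for c, p in streams)

def admit(current_streams, new_stream, bound=1.0):
    """Accept the new stream only if the combined set stays schedulable."""
    if utilization(current_streams + [new_stream]) <= bound:
        current_streams.append(new_stream)   # start generating the new traffic
        return True
    return False                              # reject: current QoS is preserved

streams = [(2, 10), (1, 5)]       # existing audio/video message streams (assumed)
print(admit(streams, (3, 20)))    # True  -> admitted
print(admit(streams, (8, 10)))    # False -> rejected, existing services untouched
```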

Genetic algorithm-based scheduling for ground support of multiple satellites and antennae considering operation modes

  • Lee, Junghyun;Kim, Haedong;Chung, Hyun;Ko, Kwanghee
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.17 no.1
    • /
    • pp.89-100
    • /
    • 2016
  • Given the unpredictability of the space environment, satellite communications are manually performed by exchanging telecommands and telemetry. Ground support for orbiting satellites is available only during limited periods of ground antenna visibility, which can result in conflicts when multiple satellites are present. This problem can be regarded as a scheduling problem of allocating antenna support (tasks) to limited visibility (resources). To mitigate unforeseen errors and costs associated with manual scheduling and mission planning, we propose a novel method based on a genetic algorithm to solve the ground support problem of multiple satellites and antennae with visibility conflicts. Numerous scheduling parameters, including user priority, emergency, profit, contact interval, support time, and remaining resource, are considered to provide maximum benefit to users and real applications. The modeling and formulae are developed in accordance with the characteristics of satellite communication. To validate the proposed algorithm, 20 satellites and 3 ground antennae on the Korean peninsula are assumed and modeled using the Satellite Tool Kit (STK). The proposed algorithm is applied to two operation modes: (i) telemetry, tracking, and command and (ii) payload. The results of the present study show near-optimal scheduling in both operation modes and demonstrate the applicability of the proposed algorithm to actual mission control systems.
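
A minimal sketch of one plausible GA encoding for this problem: each gene says whether a visibility pass is supported, and the fitness rewards priority, emergency, and profit while penalizing overlapping use of one antenna. The weights, pass data, and penalty are assumptions, and the standard GA loop (selection, crossover, mutation) is omitted.

```python
# Each pass: (satellite, antenna, start, end, priority, emergency, profit)
passes = [
    ("SAT-1", "ANT-A", 0, 10, 3, 0, 5.0),
    ("SAT-2", "ANT-A", 5, 15, 5, 1, 7.0),
    ("SAT-3", "ANT-B", 0, 12, 2, 0, 4.0),
]
W_PRIORITY, W_EMERGENCY, W_PROFIT, PENALTY = 1.0, 10.0, 0.5, 100.0

def fitness(chromosome):
    """chromosome[i] is True if pass i is supported, False otherwise."""
    score, used = 0.0, []
    for gene, (sat, ant, start, end, pri, emg, prof) in zip(chromosome, passes):
        if not gene:
            continue
        score += W_PRIORITY * pri + W_EMERGENCY * emg + W_PROFIT * prof
        # Penalize two supported passes that need the same antenna at once.
        for ant2, s2, e2 in used:
            if ant == ant2 and start < e2 and s2 < end:
                score -= PENALTY
        used.append((ant, start, end))
    return score

print(fitness([True, True, True]))   # conflict on ANT-A -> heavily penalized
print(fitness([False, True, True]))  # conflict-free selection scores higher
```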

Performance Enhancement of On-Line Scheduling Algorithm for IRIS Real-Time Tasks using Partial Solution (부분 해를 이용한 IRIS 실시간 태스크용 온-라인 스케줄링 알고리즘의 성능향상)

  • 심재홍;최경희;정기현
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.1
    • /
    • pp.12-21
    • /
    • 2003
  • In this paper, we propose an on-line scheduling algorithm with the goal of maximizing the total reward of IRIS (Increasing Reward with Increasing Service) real-time tasks that have reward functions and arrive dynamically into the system. We focus on enhancing the performance of the scheduling algorithm, which is based on the following two main ideas. First, we show that the problem of maximizing the total reward of dynamic tasks can also be solved by finding the minimum of the maximum derivatives of the reward functions. Second, we observe that only a few of the scheduled tasks are serviced before a new task arrives, and the remaining tasks are rescheduled together with the new task. Based on this observation, the proposed algorithm does not schedule all tasks in the system at every scheduling point, but only a part of them. The performance of the proposed algorithm is verified through simulations for various cases. The simulation results show that the computational complexity of the proposed algorithm is $O(N^2)$ in the worst case, which is equal to that of the previous algorithms, but close to $O(N)$ on average.
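
As an illustration of the marginal-reward (derivative) idea mentioned above, the sketch below greedily hands each small slice of available time to the task whose concave reward function currently grows fastest; the reward functions, weights, and slice size are assumptions, not the paper's formulation.

```python
import math

# reward_i(t) = weights[i] * ln(1 + t) for service time t (concave, increasing)
weights = [4.0, 2.0, 1.0]

def derivative(i, t):
    """Marginal reward of task i after t units of service."""
    return weights[i] / (1.0 + t)

def allocate(total_time, slice_=0.01):
    service = [0.0] * len(weights)
    for _ in range(int(total_time / slice_)):
        # Give the next slice to the task with the steepest reward curve.
        i = max(range(len(weights)), key=lambda k: derivative(k, service[k]))
        service[i] += slice_
    reward = sum(w * math.log(1 + t) for w, t in zip(weights, service))
    return service, reward

print(allocate(3.0))   # service times per task and the total reward achieved
```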

An expert system for intelligent scheduling in flexible manufacturing cell (유연생산셀의 지능형 스케쥴링을 위한 전문가 시스템)

  • 전병선;박승규;이노성;안인석;서기성;이동헌;우광방
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1993.10a
    • /
    • pp.1111-1116
    • /
    • 1993
  • In this study, we discuss the design of an expert system for the scheduling of an FMC (Flexible Manufacturing Cell) consisting of several versatile machines. Due to the NP-hard nature of the problem, scheduling a several-machine FMC is a very complex task. Thus, we propose two heuristic scheduling algorithms for solving the problem and constitute the algorithm base of the ISS (Intelligent Scheduling System) using them. Based on the rules in the rule base, the best alternative among the various algorithms in the algorithm base is selected and applied in controlling the FMC. To show the efficiency of the ISS, the scheduling output of the ISS and existing dynamic dispatching rules were tested and compared. The results indicate that the ISS is superior to the existing dynamic dispatching rules on various performance indexes.
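
A hedged sketch of the rule-base concept: inspect the current cell state and select one dispatching heuristic from an algorithm base. The rules, thresholds, and job fields below are illustrative stand-ins, not the paper's knowledge base.

```python
def pick_rule(queue, now):
    """queue: list of jobs, each a dict with 'proc_time' and 'due'."""
    if not queue:
        return "IDLE"
    slack = [j["due"] - now - j["proc_time"] for j in queue]
    if min(slack) < 0:                 # some job can no longer finish on time
        return "EDD"                   # earliest due date first
    if len(queue) > 5:                 # congested cell
        return "SPT"                   # shortest processing time first
    return "FIFO"

def dispatch(queue, now):
    rule = pick_rule(queue, now)
    if rule == "EDD":
        return min(queue, key=lambda j: j["due"])
    if rule == "SPT":
        return min(queue, key=lambda j: j["proc_time"])
    return queue[0] if queue else None

jobs = [{"proc_time": 4, "due": 20}, {"proc_time": 2, "due": 8}]
print(dispatch(jobs, now=5))   # light, non-urgent load -> FIFO picks the first job
```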

End-to-End Laxity-based Priority Assignment for Distributed Real-Time Systems (분산 실시간 시스템을 위한 양극단 여유도 기반의 우선순위 할당 방법)

  • Kim, Hyoung-Yuk;Park, Hong-Seong
    • Proceedings of the KIEE Conference
    • /
    • 2004.05a
    • /
    • pp.59-61
    • /
    • 2004
  • Research on scheduling distributed real-time systems has some weak points, such as not scheduling both sporadic and periodic tasks and messages, or being unable to guarantee end-to-end constraints because precedence relations between sporadic tasks are omitted. This paper describes the application model of sporadic tasks with precedence constraints in a distributed real-time system. It is shown that existing scheduling methods such as Rate Monotonic scheduling are not appropriate for systems having sporadic tasks with precedence constraints. Therefore, this paper proposes an end-to-end laxity-based priority assignment algorithm which considers the practical laxity of a task and allocates a proper priority to it.
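
As a rough illustration of laxity-based priority assignment for a task chain with an end-to-end deadline (not the paper's exact formula), the sketch below computes each subtask's laxity as the end-to-end deadline minus the worst-case work remaining from that subtask onward, and gives smaller laxity a higher priority.

```python
def laxities(wcets, e2e_deadline):
    """wcets: worst-case execution times of the chain's subtasks, in order."""
    lax = []
    for i in range(len(wcets)):
        remaining = sum(wcets[i:])          # work from subtask i to chain end
        lax.append(e2e_deadline - remaining)
    return lax

def assign_priorities(wcets, e2e_deadline):
    lax = laxities(wcets, e2e_deadline)
    order = sorted(range(len(wcets)), key=lambda i: lax[i])
    # priority 0 is the highest; smaller laxity -> higher priority
    return {subtask: prio for prio, subtask in enumerate(order)}

print(laxities([2, 3, 4], e2e_deadline=20))          # [11, 13, 16]
print(assign_priorities([2, 3, 4], e2e_deadline=20)) # earlier subtasks first
```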

Managing Deadline-constrained Bag-of-Tasks Jobs on Hybrid Clouds with Closest Deadline First Scheduling

  • Wang, Bo;Song, Ying;Sun, Yuzhong;Liu, Jun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.7
    • /
    • pp.2952-2971
    • /
    • 2016
  • Outsourcing jobs to a public cloud is a cost-effective way to satisfy peak resource demand when the local cloud has insufficient resources. In this paper, we study the management of deadline-constrained bag-of-tasks jobs on hybrid clouds. We present a binary nonlinear programming (BNP) problem to model hybrid cloud management that minimizes the rent cost of the public cloud while completing the jobs within their respective deadlines. To solve this BNP problem in polynomial time, we propose a heuristic algorithm. The main idea is to assign the task closest to its deadline to the current core until the core cannot finish any task within its deadline. When there is no available core, the algorithm adds an available physical machine (PM) with the most capacity or rents a new virtual machine (VM) with the highest cost-performance ratio. As there may be a workload imbalance between or among cores on a PM/VM after task assignment, we propose a task reassigning algorithm to balance them. Extensive experimental results show that our heuristic algorithm saves 16.2%-76% of the rent cost and improves resource utilization by 47.3%-182.8% while satisfying deadline constraints, compared with the first-fit decreasing algorithm, and that our task reassigning algorithm improves the makespan of tasks by up to 47.6%.
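
A simplified sketch of the closest-deadline-first idea described above: tasks are taken in increasing deadline order and packed onto the current core until the next task would miss its deadline, at which point a fresh core is opened (standing in for powering on a PM or renting a VM). The paper's cost model and reassignment step are omitted, and the task set is made up.

```python
def cdf_schedule(tasks):
    """tasks: list of (exec_time, deadline).
    Returns a list of cores, each a list of (task, start, finish)."""
    cores, current, t = [], [], 0.0
    for exec_time, deadline in sorted(tasks, key=lambda x: x[1]):
        if t + exec_time > deadline:          # current core cannot make it
            if current:
                cores.append(current)
            current, t = [], 0.0              # "open" a fresh core
        start = t                             # note: a task that cannot meet its
        t += exec_time                        # deadline even alone is still placed
        current.append(((exec_time, deadline), start, t))
    if current:
        cores.append(current)
    return cores

tasks = [(3, 5), (2, 4), (4, 12), (5, 9)]
for i, core in enumerate(cdf_schedule(tasks)):
    print("core", i, core)                    # two cores suffice for this set
```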

Genetic Scheduling Method for Distributed Parallel Systems (분산병렬 시스템에서 유전자 알고리즘을 이용한 스케쥴링 방법)

  • Kim, Hwa-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.1B
    • /
    • pp.27-32
    • /
    • 2003
  • This paper presents the Genetic Algorithm based Task Scheduling (GATS) method for scheduling programs with diverse embedded parallelism types in distributed parallel systems, which consist of a set of loosely coupled parallel and vector machines connected via high-speed networks. Distributed parallel processing tries to solve computationally intensive problems that have several types of parallelism on a suite of high-performance and parallel machines in a manner that best utilizes the capabilities of each machine. When scheduling in distributed parallel systems, the matching of parallelism characteristics between tasks and parallel machines, rather than load balancing, should be carefully handled, together with minimizing communication cost, in order to obtain more speedup. This paper proposes initialization methods for the initial population and knowledge-based mutation methods to accommodate parallelism type matching in genetic algorithms.
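
As a hedged illustration of parallelism-type matching during GA initialization, the sketch below seeds chromosomes by preferentially placing each task on a machine whose architecture matches its dominant parallelism type, keeping some random placements for diversity; the type labels and the matching probability are assumptions, not the paper's method.

```python
import random

task_type = ["vector", "data_parallel", "vector", "task_parallel"]   # assumed
machines = {"M1": "vector", "M2": "data_parallel", "M3": "task_parallel"}

def seed_chromosome(match_prob=0.8):
    """One chromosome: a machine name per task, biased toward type matches."""
    chrom = []
    for ttype in task_type:
        matching = [m for m, mt in machines.items() if mt == ttype]
        if matching and random.random() < match_prob:
            chrom.append(random.choice(matching))        # matched placement
        else:
            chrom.append(random.choice(list(machines)))  # random, for diversity
    return chrom

population = [seed_chromosome() for _ in range(10)]
print(population[0])
```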

A Workflow Scheduling Technique Using Genetic Algorithm in Spot Instance-Based Cloud

  • Jung, Daeyong;Suh, Taeweon;Yu, Heonchang;Gil, JoonMin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.9
    • /
    • pp.3126-3145
    • /
    • 2014
  • Cloud computing is a computing paradigm in which users can rent computing resources from service providers according to their requirements. A spot instance in cloud computing helps a user obtain resources at a lower cost. However, a crucial weakness of spot instances is that the resources can become unreliable at any time due to the fluctuation of instance prices, which increases the failure time of users' jobs. In this paper, we propose a Genetic Algorithm (GA)-based workflow scheduling scheme that can find the optimal task size for each instance in a spot instance-based cloud computing environment without increasing users' budgets. Our scheme reduces total task execution time even if an out-of-bid situation occurs in an instance. The simulation results, based on a before-and-after GA comparison, reveal that our scheme achieves performance improvements in terms of reducing the task execution time on average by 7.06%. Additionally, the cost in our scheme is similar to that when GA is not applied. Therefore, our scheme can achieve better performance than the existing scheme by optimizing the task size allocated to each available instance throughout the evolutionary process of GA.
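
A hedged sketch of the kind of fitness such a GA could evaluate: a chromosome gives the number of workflow tasks handed to each spot instance, and fitness is the expected makespan when an instance may hit an out-of-bid event and lose in-progress work. The failure model, instance speeds, and probabilities are illustrative, not the paper's model, and the GA loop itself uses standard crossover/mutation over such vectors.

```python
inst_speed = [4.0, 3.0, 2.0]        # tasks per hour for each spot instance (assumed)
out_of_bid_p = [0.10, 0.05, 0.20]   # chance of one out-of-bid event per hour (assumed)

def expected_makespan(chromosome):
    """chromosome[i] = number of workflow tasks assigned to instance i."""
    finish = []
    for n, speed, p in zip(chromosome, inst_speed, out_of_bid_p):
        base = n / speed                     # failure-free running time
        expected_retries = base * p          # expected out-of-bid events
        finish.append(base * (1 + expected_retries))
    return max(finish)

# Two candidate chromosomes for 18 tasks; a GA would evolve such vectors while
# keeping the total number of tasks constant.
print(expected_makespan([8, 6, 4]))    # balanced assignment
print(expected_makespan([12, 4, 2]))   # overloads the fastest instance
```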