• Title/Summary/Keyword: task allocation algorithm

Search results: 72

Channel Allocation and Task Scheduling Scheme Using Artificial Intelligence (인공지능 기법을 이용한 채널할당과 태스크 스케줄링 기법)

  • Heo, Bo-Jin;Son, Dong-Cheol;Kim, Chang-Seok;Lee, Sang-Yong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.52-57 / 2007
  • In a mobile communication network, where limited resources must be used efficiently, any scheme for allocating radio traffic channels to multimedia service requests is constrained by the peculiarities of the wireless environment. A base station must both allocate channels for the per-service traffic demands arriving from several radio subscriber boards and schedule the resulting jobs on its main board, so it faces the problem of mapping two very different environments, the radio interface and the CPU, onto each other. For a cellular system serving voice and data calls simultaneously, this paper proposes a method that uses a genetic algorithm, an artificial-intelligence technique, to combine frequency allocation reflecting multimedia traffic characteristics with job scheduling, together with a job-scheduling scheme suited to this combination. (See the sketch below.)

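As a rough illustration of how a genetic algorithm can drive channel allocation of the kind this entry describes, the following minimal sketch evolves assignments of traffic requests to channels. It is not the paper's method: the chromosome encoding, the balance-oriented fitness function, and all parameters (NUM_REQUESTS, NUM_CHANNELS, population settings) are illustrative assumptions.

```python
import random

NUM_REQUESTS = 20      # pending voice/data call requests (assumed)
NUM_CHANNELS = 8       # available traffic channels (assumed)
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 100, 0.05

def fitness(chromosome):
    # Penalize channel overload: reward assignments that spread the
    # requests evenly across channels (assumed objective).
    load = [chromosome.count(c) for c in range(NUM_CHANNELS)]
    return -sum((l - NUM_REQUESTS / NUM_CHANNELS) ** 2 for l in load)

def crossover(a, b):
    # One-point crossover between two parent assignments.
    cut = random.randrange(1, NUM_REQUESTS)
    return a[:cut] + b[cut:]

def mutate(chromosome):
    # Reassign each request to a random channel with small probability.
    return [random.randrange(NUM_CHANNELS) if random.random() < MUTATION_RATE else g
            for g in chromosome]

population = [[random.randrange(NUM_CHANNELS) for _ in range(NUM_REQUESTS)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[:POP_SIZE // 2]          # keep the fitter half
    children = [mutate(crossover(*random.sample(elite, 2)))
                for _ in range(POP_SIZE - len(elite))]
    population = elite + children

best = max(population, key=fitness)
print("best channel assignment:", best)
```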

Performance Evaluation of Scheduling Algorithms according to Communication Cost in the Grid System of Co-allocation Environment (Co-allocation 환경의 그리드 시스템에서 통신비용에 따른 스케줄링 알고리즘의 성능 분석)

  • Kang, Oh-Han;Kang, Sang-Seong;Kim, Jin-Suk
    • The KIPS Transactions: Part A / v.14A no.2 / pp.99-106 / 2007
  • Grid computing, which harnesses geographically distributed heterogeneous systems, is drawing attention as a new paradigm for next-generation parallel and distributed computing. Communication cost matters greatly in grids because they provide an integrated virtual computing service over many computer systems connected by a high-speed network. To reduce execution time, a scheduling algorithm for grid environments should therefore consider communication cost as well as the computing capability of the resources. Most existing scheduling algorithms, however, either ignore communication cost by assuming that all tasks are handled within a single cluster, or disregard its overhead when tasks are processed across multiple clusters. This paper analyzes existing scheduling algorithms and, more importantly, compares and evaluates them with communication cost taken into account in a co-allocation environment, in which a single task is executed in parts across many clusters. (See the sketch below.)
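
To make the trade-off concrete, here is a minimal sketch of a scheduler that estimates each cluster's completion time as communication overhead plus execution time and picks the cheaper cluster. The task and cluster figures and the linear cost model are invented for illustration; they are not taken from the paper.

```python
# (name, workload in millions of instructions) -- assumed values
tasks = [("t1", 40.0), ("t2", 25.0), ("t3", 60.0)]

# cluster -> (compute speed in MIPS, per-task communication cost in s)
clusters = {"c1": (10.0, 2.0), "c2": (40.0, 5.0)}

def completion_time(workload, speed, comm_cost):
    # Estimated total time = data-transfer overhead + execution time.
    return comm_cost + workload / speed

for name, workload in tasks:
    best = min(clusters, key=lambda c: completion_time(workload, *clusters[c]))
    t = completion_time(workload, *clusters[best])
    print(f"{name} -> {best} (estimated completion {t:.1f}s)")
```

A fast remote cluster only wins here when the workload is large enough to amortize its higher transfer cost, which is exactly the effect the paper's comparison studies.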

Optimizing Energy-Latency Tradeoff for Computation Offloading in SDIN-Enabled MEC-based IIoT

  • Zhang, Xinchang;Xia, Changsen;Ma, Tinghuai;Zhang, Lejun;Jin, Zilong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.4081-4098 / 2022
  • To resolve the tension between computation-intensive industrial applications and resource-constrained Edge Devices (EDs) in the Industrial Internet of Things (IIoT), this paper proposes a novel computation task offloading scheme for SDIN-enabled, MEC-based IIoT. To reduce task completion latency and the energy consumption of EDs, a joint optimization method is proposed that jointly optimizes the local CPU-cycle frequency, the offloading decision, and the allocation of wireless and computation resources. The task offloading problem is then formulated as a Mixed Integer Nonlinear Programming (MINLP) problem, a large-scale NP-hard problem. To solve it within an acceptable time complexity, a sub-optimal algorithm, GPCOA, based on hybrid evolutionary computation, is proposed. Simulation results reveal that the proposed method outperforms the baseline methods, and the optimization results show that the latency-related weight is effective in reducing task execution delay and improving energy efficiency. (See the sketch below.)
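
The sketch below illustrates the general shape of a weighted energy-latency offloading objective like the one this abstract describes. The CPU energy model, transmit power, rates, and weights are generic textbook assumptions rather than the paper's formulation, and GPCOA itself is not reproduced.

```python
def local_cost(cycles, freq, kappa=1e-27, w_latency=0.5):
    # Local execution: delay = cycles/frequency; dynamic CPU energy
    # modeled as kappa * cycles * freq^2 (assumed model).
    latency = cycles / freq
    energy = kappa * cycles * freq ** 2
    return w_latency * latency + (1 - w_latency) * energy

def offload_cost(data_bits, rate, cycles, server_freq,
                 tx_power=0.1, w_latency=0.5):
    # Offloading: upload delay plus edge computation delay; the ED only
    # spends transmit energy (assumed model).
    latency = data_bits / rate + cycles / server_freq
    energy = tx_power * (data_bits / rate)
    return w_latency * latency + (1 - w_latency) * energy

# Offload when the weighted edge-side cost beats the weighted local cost.
task = dict(cycles=5e8, data_bits=2e6)
decision = ("offload" if offload_cost(task["data_bits"], 1e6,
                                      task["cycles"], 5e9)
            < local_cost(task["cycles"], 1e9) else "local")
print(decision)
```

Shifting w_latency toward 1 favors fast completion over battery life, which mirrors the latency-related weight the abstract highlights.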

A Study on the Efficient Task Scheduling by the Reconstructed Task Graph (태스크 그래프의 재구성에 의한 효율적 태스크 스케줄링에 관한 연구)

  • Byun, Seung-Hwan;Yoo, Kwan-Jong
    • The Transactions of the Korea Information Processing Society / v.4 no.9 / pp.2235-2246 / 1997
  • This paper presents an effective heuristic task scheduling algorithm for multiprocessor systems. Task scheduling, defined as the allocation of m tasks onto n processors (m > n), requires solving several problems that are NP-hard. The goal is either to obtain the minimum execution time by mapping the tasks onto a given system topology, or to find a minimal system topology that keeps the total execution time within a bound. To this end, the paper performs scheduling by redefining the task graph as a reconstructed task graph (RTG): nodes are merged or copied so that the number of nodes on each level of the task graph equals the number of processors in the system topology, after which the RTG is scheduled directly onto the topology. This method yields fast scheduling times, a simple procedure, and near-optimal execution time without post-scheduling steps such as refinement or duplication. (See the sketch below.)

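As a toy illustration of the level-balancing idea behind an RTG, the sketch below merges or copies nodes until each level of a task graph holds exactly one node per processor. The merge and copy rules here are simplified assumptions; the paper's actual reconstruction rules are more involved.

```python
def reconstruct_level(level_nodes, num_procs):
    # Each node is (name, computation cost).
    nodes = list(level_nodes)
    while len(nodes) > num_procs:          # too many nodes: merge the two cheapest
        nodes.sort(key=lambda n: n[1])
        (a, ca), (b, cb) = nodes[0], nodes[1]
        nodes = [(f"{a}+{b}", ca + cb)] + nodes[2:]
    while len(nodes) < num_procs:          # too few nodes: duplicate the costliest
        name, cost = max(nodes, key=lambda n: n[1])
        nodes.append((f"{name}_copy{len(nodes)}", cost))
    return nodes

# A two-level task graph balanced for a 3-processor topology (assumed data).
graph_levels = [[("A", 5)], [("B", 3), ("C", 4), ("D", 2), ("E", 6)]]
rtg = [reconstruct_level(level, 3) for level in graph_levels]
print(rtg)   # every level now has exactly 3 nodes, one per processor
```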

A Method for Distributed Database Processing with Optimized Communication Cost in Dataflow model (데이터플로우 모델에서 통신비용 최적화를 이용한 분산 데이터베이스 처리 방법)

  • Jun, Byung-Uk
    • Journal of Internet Computing and Services / v.8 no.1 / pp.133-142 / 2007
  • Processing large databases is one of the most important technologies in the information society. Since most large databases are geographically distributed, distributed database processing has come to the fore. Communication and data compression are its base technologies; to exploit them fully, the execution time of each task, the size of the data, and the communication time between processors must all be considered. In this paper, a dataflow scheme and a vertically layered allocation algorithm are used to optimize distributed processing of large databases. The basic idea of the method is to rearrange processes in light of the communication time between processors. The paper also introduces models for measuring execution time, output data size, and communication time in order to implement the proposed scheme. (See the sketch below.)

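Here is a tiny sketch of the kind of placement decision such communication-time-aware rearrangement rests on: run a process remotely only if the parallel overlap it buys outweighs the transfer time it costs. The processor layout, costs, and bandwidth figure are all illustrative assumptions.

```python
# Tasks A and B are independent, but both results are consumed on
# processor P0. Keeping B on a remote processor lets A and B overlap,
# at the price of shipping B's output back to P0.
def best_placement(exec_a, exec_b, out_b_bytes, bandwidth):
    remote = max(exec_a, exec_b + out_b_bytes / bandwidth)  # overlap + transfer
    local = exec_a + exec_b                                 # serial on P0
    return "B on a remote processor" if remote < local else "B on P0"

print(best_placement(exec_a=4.0, exec_b=5.0, out_b_bytes=2e6, bandwidth=1e6))
```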

Honey Bee Based Load Balancing in Cloud Computing

  • Hashem, Walaa;Nashaat, Heba;Rizk, Rawya
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.12 / pp.5694-5711 / 2017
  • Cloud computing technology is growing very quickly, so the resource allocation process must be managed carefully. In this paper, a load balancing algorithm based on honey bee behavior (LBA_HB) is proposed. Its main goal is to distribute the workload across multiple network links in a way that avoids both underutilization and overutilization of resources. This is achieved by allocating each incoming task to a virtual machine (VM) that meets two conditions: the number of tasks currently being processed by that VM is lower than on the other VMs, and the deviation of the VM's processing time from the average processing time of all VMs is below a threshold value. The proposed algorithm is compared with the honey bee, ant colony, modified throttled, and round robin scheduling algorithms. Experimental results show the efficiency of the proposed algorithm in terms of execution time, response time, makespan, standard deviation of load, and degree of imbalance. (See the sketch below.)
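
A compact sketch of the two-condition selection rule described above: pick the least-loaded VM whose processing time stays close to the all-VM average. The VM data and threshold are illustrative assumptions, not values from the paper.

```python
vms = [
    {"id": 0, "tasks": 3, "proc_time": 12.0},
    {"id": 1, "tasks": 5, "proc_time": 30.0},
    {"id": 2, "tasks": 2, "proc_time": 14.0},
]
THRESHOLD = 5.0   # assumed deviation bound

def pick_vm(vms, threshold):
    avg = sum(v["proc_time"] for v in vms) / len(vms)
    # Condition 2: processing time must not deviate too far from the average.
    balanced = [v for v in vms if abs(v["proc_time"] - avg) < threshold]
    # Condition 1: among those, take the VM with the fewest queued tasks.
    return min(balanced or vms, key=lambda v: v["tasks"])

print("allocate task to VM", pick_vm(vms, THRESHOLD)["id"])
```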

A Classification-Based Virtual Machine Placement Algorithm in Mobile Cloud Computing

  • Tang, Yuli;Hu, Yao;Zhang, Lianming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.5 / pp.1998-2014 / 2016
  • In recent years, cloud computing services based on smartphones and other mobile terminals have developed rapidly. Cloud computing offers massive storage capacity and high-speed computing power and can meet the needs of different types of users; against this background, mobile cloud computing (MCC) is booming. This paper puts forward a new classification-based virtual machine placement (CBVMP) algorithm for MCC, aimed at improving the efficiency of virtual machine (VM) allocation and correcting the imbalanced utilization of underlying physical resources in large cloud data centers. Simulation experiments on the CloudSim platform show that the new algorithm improves both VM placement efficiency and the utilization of the underlying physical resources. (See the sketch below.)
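
The abstract does not spell out the classification mechanism, so the sketch below shows one plausible reading under loudly stated assumptions: VM requests are classified by CPU demand, large VMs are spread across hosts while small ones are packed, which tends to even out host utilization. None of the class boundaries, host figures, or placement rules come from the paper.

```python
# Hypothetical classification-based placement; all values are assumptions.
HOSTS = [{"id": h, "capacity": 16, "used": 0} for h in range(3)]

def classify(vcpus):
    # Assumed demand classes; the paper's actual classes are not specified.
    return "small" if vcpus <= 2 else "medium" if vcpus <= 4 else "large"

def place(vcpus):
    candidates = [h for h in HOSTS if h["capacity"] - h["used"] >= vcpus]
    if not candidates:
        return None
    if classify(vcpus) == "large":
        host = min(candidates, key=lambda h: h["used"])   # spread big VMs
    else:
        host = max(candidates, key=lambda h: h["used"])   # pack small VMs
    host["used"] += vcpus
    return host["id"]

for req in [1, 4, 8, 2, 4]:
    print(f"{classify(req)} VM ({req} vCPU) -> host {place(req)}")
```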

Dynamic Task Assignment Using A Quasi-Dual Graph Model (의사 쌍대 그래프 모델을 이용한 동적 태스크 할당 방법)

  • 김덕수;박용진
    • Journal of the Korean Institute of Telematics and Electronics / v.20 no.6 / pp.62-68 / 1983
  • We propose a quasi-dual graph model that takes dynamic module assignment and relocation into account in order to assign a task optimally to two processors with different processing capabilities. An optimal module partitioning and allocation minimizing total processing cost is obtained by applying a shortest-path algorithm with time complexity O(n^2) to this graph model. (See the sketch below.)

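To give a feel for shortest-path-style two-processor assignment, the sketch below solves a simplified chain-of-modules version with dynamic programming, which is equivalent to a shortest path through a small layered graph. The cost table, the uniform communication cost, and the chain restriction are assumptions for illustration; the paper's quasi-dual graph model is more general.

```python
# exec_cost[i] = (cost of module i on processor 0, on processor 1);
# comm_cost is paid whenever adjacent modules sit on different processors.
# All numbers are invented for illustration.
exec_cost = [(4, 7), (6, 2), (5, 5), (3, 8)]
comm_cost = 3

def assign(exec_cost, comm_cost):
    n = len(exec_cost)
    best = list(exec_cost[0])   # best[p]: min cost with current module on p
    choice = [[0, 1]]           # back-pointers; entry 0 is a placeholder
    for i in range(1, n):
        new, picks = [], []
        for p in (0, 1):
            stay, switch = best[p], best[1 - p] + comm_cost
            picks.append(p if stay <= switch else 1 - p)
            new.append(exec_cost[i][p] + min(stay, switch))
        best, choice = new, choice + [picks]
    p = 0 if best[0] <= best[1] else 1
    path = [p]
    for i in range(n - 1, 0, -1):   # walk back-pointers to the first module
        p = choice[i][p]
        path.append(p)
    return min(best), path[::-1]

total, placement = assign(exec_cost, comm_cost)
print("minimum total cost:", total, "module placement:", placement)
```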

Load Balancing Approach to Enhance the Performance in Cloud Computing

  • Rassan, Iehab AL;Alarif, Noof
    • International Journal of Computer Science & Network Security / v.21 no.2 / pp.158-170 / 2021
  • Virtualization technologies are being adopted and broadly utilized in many fields and at different levels. In cloud computing, load balancing across large numbers of distributed virtual machines is a complex optimization problem of central importance for cloud systems and data centers, since overloading or underloading tasks on VMs can cause longer execution times, machine failures, high power consumption, and other issues. A load balancing mechanism is therefore an important aspect of cloud computing that helps overcome various performance problems. In this research, we propose a new approach that combines the advantages of task allocation algorithms such as round robin and random allocation with threshold techniques such as VM utilization and allocation counts using a least-connection mechanism. We performed extensive simulations and experiments that augment different scheduling policies to address the resource utilization problem without compromising other performance measures such as makespan and task execution time. The proposed system gave better results than the original round robin because it takes the dynamic state of the system into consideration. (See the sketch below.)
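
A brief sketch of the hybrid idea described above: round-robin order with a utilization threshold and a least-connection fallback. The threshold value and VM fields are illustrative assumptions, not the paper's exact policy.

```python
from itertools import cycle

vms = [{"id": i, "utilization": u, "connections": c}
       for i, (u, c) in enumerate([(0.92, 4), (0.40, 7), (0.55, 2)])]
UTIL_THRESHOLD = 0.80   # assumed overload threshold
rr = cycle(vms)         # round-robin order over the VM pool

def next_vm():
    # Try one full round-robin pass, skipping VMs above the threshold.
    for _ in range(len(vms)):
        vm = next(rr)
        if vm["utilization"] <= UTIL_THRESHOLD:
            return vm
    # All overloaded: fall back to the VM with the fewest active connections.
    return min(vms, key=lambda v: v["connections"])

print("task goes to VM", next_vm()["id"])
```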

SS-DRM: Semi-Partitioned Scheduling Based on Delayed Rate Monotonic on Multiprocessor Platforms

  • Senobary, Saeed;Naghibzadeh, Mahmoud
    • Journal of Computing Science and Engineering / v.8 no.1 / pp.43-56 / 2014
  • Semi-partitioned scheduling is a new approach to allocating tasks on multiprocessor platforms: by splitting some tasks between processors, it improves processor utilization. In this paper, a new semi-partitioned scheduling algorithm called SS-DRM is proposed for multiprocessor platforms. Its scheduling policy is based on the delayed rate monotonic algorithm, a modified version of the rate monotonic algorithm that can achieve higher processor utilization and can safely schedule any two-task system whose total utilization does not exceed the capacity of a single processor. It is first formally proven that any task set feasible under the rate monotonic algorithm is also feasible under the delayed rate monotonic algorithm. The existing allocation method is then extended to the delayed rate monotonic algorithm, and two improvements are proposed that let SS-DRM achieve higher processor utilization than the rate monotonic algorithm. Simulation results show that SS-DRM improves on previous work in terms of processor utilization, the number of required processors, and the number of created subtasks. (See the sketch below.)
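
For context on the partitioning side that semi-partitioned schemes extend, here is a minimal first-fit packing sketch using the classical Liu-Layland rate monotonic utilization bound; where no processor can accept a task, a semi-partitioned scheme like SS-DRM would split it instead of failing. The task set and packing order are illustrative assumptions, and the delayed rate monotonic test itself is not reproduced.

```python
def rm_bound(n):
    # Liu & Layland utilization bound for n tasks under rate monotonic.
    return n * (2 ** (1 / n) - 1)

def first_fit(tasks, num_procs):
    procs = [[] for _ in range(num_procs)]
    for c, t in sorted(tasks, key=lambda x: x[1]):   # rate-monotonic order
        for p in procs:
            util = sum(ci / ti for ci, ti in p) + c / t
            if util <= rm_bound(len(p) + 1):
                p.append((c, t))
                break
        else:
            return None   # would need task splitting, as semi-partitioning does
    return procs

tasks = [(1, 4), (2, 6), (3, 10), (1, 5)]   # (execution time, period), assumed
print(first_fit(tasks, 2))
```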