• Title/Summary/Keyword: Scheduling Policy


Real-Time Scheduling Method to Assign Virtual CPUs in the Multicore Mobile Virtualization System (멀티코아 모바일 가상화 시스템에서 가상 CPU 할당 실시간 스케줄링 방법)

  • Kang, Yongho;Keum, Kimoon;Kim, Seongjong;Jin, Kwangyoun;Kim, Jooman
    • Journal of Digital Convergence
    • /
    • v.12 no.3
    • /
    • pp.227-235
    • /
    • 2014
  • Mobile virtualization is an approach to mobile device management in which two virtual platforms are installed on a single wireless device. A smartphone, for example, might have one virtual environment for business use and one for personal use. Mobile virtualization can also allow one device to run two different operating systems, so the same phone can run both RTOS and Android apps. In this paper, we propose techniques for virtualizing the cores of a multicore processor, allowing any number of vCPUs exposed to an OS to be reassigned to any subset of the pCPUs, and we also propose a real-time scheduling method for assigning those vCPUs to pCPUs. The technique suggested in this paper solves the problem of real-time tasks being delayed while interrupts are handled, and it processes them faster than the previous algorithm.
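
As a rough illustration of the kind of vCPU-to-pCPU reassignment this abstract describes, the sketch below is an assumption for illustration, not the paper's algorithm: the `assign_vcpus` function, its fields, and the least-loaded placement rule are all hypothetical. It places real-time vCPUs first and then balances the remaining vCPUs across an arbitrary subset of pCPUs.

```python
# Hypothetical sketch (not the paper's actual algorithm): reassigning a set of
# vCPUs onto an arbitrary subset of pCPUs, placing real-time vCPUs first so
# they are less likely to wait behind other guests' work.

def assign_vcpus(vcpus, pcpus):
    """vcpus: list of dicts like {"id": 0, "realtime": True, "load": 0.4}
    pcpus: list of pCPU ids the guest is currently allowed to use."""
    load = {p: 0.0 for p in pcpus}      # accumulated load per pCPU
    mapping = {}
    # Real-time vCPUs first, then the heaviest remaining vCPUs.
    ordered = sorted(vcpus, key=lambda v: (not v["realtime"], -v["load"]))
    for v in ordered:
        target = min(load, key=load.get)          # least-loaded pCPU so far
        mapping[v["id"]] = target
        load[target] += v["load"]
    return mapping

print(assign_vcpus(
    [{"id": 0, "realtime": True, "load": 0.6},
     {"id": 1, "realtime": False, "load": 0.3},
     {"id": 2, "realtime": False, "load": 0.2}],
    pcpus=[0, 1]))
```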

Modeling and Performance Evaluation of a Multiprocessor Operating System Using the DEVS Formalism (DEVS 형식론을 이용한 다중프로세서 운영체제의 모델링 및 성능평가)

  • 홍준성
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 1994.10a
    • /
    • pp.32-32
    • /
    • 1994
  • In this example, a message-passing multicomputer system with a general interconnection network is considered. Since multicomputer systems adopted wormhole-routing networks, the topology of the interconnection network is no longer a major consideration for process management and resource sharing. There is an independent operating system kernel on each node, and it communicates with other kernels using a message-passing mechanism. Based on this architecture, the problem is how much performance degradation occurs when processors are shared on a multicomputer system. Processor sharing between application programs is a very important decision for system performance. In most cases, application programs running on massively parallel computer systems are not very user-interactive, so the main performance index is system throughput. Each application program has its own communication patterns, and sharing processors causes serious performance degradation in the worst case, for example when one processor is shared by two processes and other processes are waiting for messages from those processes. Considering this problem is therefore important, because it indicates whether the system should allow processor sharing or not. The simulation input has many parameters: the number of threads per task, the communication patterns between threads, data generation, and also defects in the random input data. Many parallel application programs have specific communication patterns with distinct computation and communication phases, and this phase information cannot be obtained from random input data. If trace data from some real applications were available, the problem could be simulated more realistically; on the other hand, simulation results will be wasteful unless sufficient trace data with various communication patterns is gathered. In this project, random input data are used for the simulation, and the only controllable parameters are the number of threads per task and the mapping strategy. First, each task runs independently; after that, each task shares one or more processors with other tasks. As more processors are shared, performance degrades, and from this degradation rate we can estimate the overhead of processor sharing. The process scheduling policy can affect the simulation results; for process scheduling, a priority queue and a FIFO queue are implemented to support round-robin scheduling and priority scheduling.
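
To make the last point concrete, here is a minimal sketch of the two ready-queue disciplines the abstract mentions; it is not the DEVS model itself, and the class names are invented for illustration.

```python
# Minimal sketch (assumed, not the paper's DEVS model): a FIFO queue used for
# round-robin scheduling and a priority queue used for priority scheduling of
# processes on a shared processor.
import heapq
from collections import deque

class RoundRobinQueue:
    def __init__(self):
        self.q = deque()
    def push(self, proc):            # proc: any process identifier
        self.q.append(proc)
    def pop(self):                   # next process to run; caller re-pushes it
        return self.q.popleft()

class PriorityQueue:
    def __init__(self):
        self.q = []
        self.seq = 0                 # tie-breaker keeps FIFO order within a priority
    def push(self, proc, priority):
        heapq.heappush(self.q, (priority, self.seq, proc))
        self.seq += 1
    def pop(self):
        return heapq.heappop(self.q)[2]

rr = RoundRobinQueue()
for p in ("A", "B", "C"):
    rr.push(p)
print(rr.pop())                      # A

pq = PriorityQueue()
pq.push("low", 10); pq.push("high", 1)
print(pq.pop())                      # high
```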


Efficient Support for Adaptive Bandwidth Scheduling in Video Servers (비디오 서버에서의 효율적인 대역폭 스케줄링 지원)

  • Lee, Won-Jun
    • The KIPS Transactions:PartC
    • /
    • v.9C no.2
    • /
    • pp.297-306
    • /
    • 2002
  • Continuous multimedia applications require a guaranteed retrieval and transfer rate for streaming data, which conventional file server mechanisms generally do not provide. In this paper we describe a dynamically negotiated admission control and disk bandwidth scheduling framework for Continuous Media (CM, e.g., video) servers. The framework consists of two parts: a reserve-based admission control mechanism and a scheduler for continuous media streams with dynamic resource allocation, which achieves higher utilization than a non-dynamic scheduler by effectively sharing available resources among contending streams to improve overall QoS. Using our policy, we could increase the number of simultaneously running clients that could be supported and could ensure a good response ratio and better resource utilization under heavy traffic requirements.
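
A minimal sketch of a reserve-based admission test, in the spirit of the framework described above; the class, its fields, and the bandwidth numbers are assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a new stream is admitted only if its reserved rate
# still fits within the disk bandwidth budget promised to existing streams.

class AdmissionController:
    def __init__(self, disk_bandwidth_mbps):
        self.capacity = disk_bandwidth_mbps
        self.reserved = 0.0          # sum of rates promised to admitted streams

    def admit(self, stream_rate_mbps):
        if self.reserved + stream_rate_mbps <= self.capacity:
            self.reserved += stream_rate_mbps
            return True              # stream gets a guaranteed reservation
        return False                 # admitting it would violate existing guarantees

    def release(self, stream_rate_mbps):
        self.reserved = max(0.0, self.reserved - stream_rate_mbps)

ac = AdmissionController(disk_bandwidth_mbps=400)
print(ac.admit(80))    # True  -- 80 of 400 Mbps reserved
print(ac.admit(350))   # False -- would exceed the disk bandwidth budget
```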

A Cache buffer and Read Request-aware Request Scheduling Method for NAND flash-based Solid-state Disks (캐시 버퍼와 읽기 요청을 고려한 낸드 플래시 기반 솔리드 스테이트 디스크의 요청 스케줄링 기법)

  • Bang, Kwanhu;Park, Sang-Hoon;Lee, Hyuk-Jun;Chung, Eui-Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.143-150
    • /
    • 2013
  • Solid-state disks (SSDs) have been widely used in high-performance personal computers and servers due to their good characteristics and performance. NAND flash-based SSDs, which take a large portion of the whole NAND flash market, are the major type of SSD. They usually integrate a cache buffer built from DRAM and use the write-back policy for better performance. Unfortunately, this policy makes existing scheduling methods less effective at the interface (I/F) level of SSDs. Therefore, in this paper, we propose a scheduling method for the I/F that takes the cache buffer into consideration. The proposed method considers the hit/miss status of the cache buffer and gives higher priority to read requests. As a result, requests whose data hit in the cache buffer can be handled in advance, and read requests, which have a larger effect on whole-system performance than write requests, experience shorter latency. The experimental results show that the proposed scheduling method improves read latency by 26%.
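
The prioritization the abstract describes can be sketched roughly as follows; the exact precedence between cache hits and reads is an assumption here, and the `schedule` function and its request format are invented for illustration.

```python
# Hedged sketch (assumed details, not the paper's exact method): order pending
# SSD interface requests so that cache-buffer hits are served first and read
# requests are preferred over writes.

def schedule(requests, cached_pages):
    """requests: list of dicts like {"op": "read" or "write", "page": 42}
    cached_pages: set of page numbers currently held in the DRAM cache buffer."""
    def key(req):
        hit = req["page"] in cached_pages
        is_read = req["op"] == "read"
        # False sorts before True: hits first, then reads; stable sort keeps arrival order.
        return (not hit, not is_read)
    return sorted(requests, key=key)

pending = [{"op": "write", "page": 7},
           {"op": "read",  "page": 3},
           {"op": "read",  "page": 7}]   # page 7 is in the cache buffer
print(schedule(pending, cached_pages={7}))
# -> cache-hit requests on page 7 first (read before write), then the read miss on page 3
```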

A study on the scheduling of multiple products production through a single facility (단일시설에 의한 다품종소량생산의 생산계획에 관한 연구)

  • Kwak, Soo-Il;Lee, Kwang-Soo;Won, Young-Jong
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.1 no.1
    • /
    • pp.151-170
    • /
    • 1976
  • There are many production processes that intermittently produce several different kinds of products for stock through one set of physical facilities. In this case, an important question is what size of production run should be produced once we do a set-up for a product in order to minimize the total cost, that is, the sum of the set-up, carrying, and stock-out costs. This problem is commonly called the scheduling of multiple products through a single facility in the production management field. Despite the very common occurrence of this type of production process, no one has yet devised a method for determining the optimal production schedule. The purpose of this study is to develop quantitative analytical models that can be used practically and give us rational production schedules. The study shows improved models through an application to a can-manufacturing plant. In this thesis the economic production quantity (EPQ) model was used as a basic model to develop quantitative analytical models for this scheduling problem, and two cases were considered: one with stock-out cost and the other without stock-out cost. The first analytical model was developed for the scheduling of products through a single facility. In this model we first calculate No, the optimal number of production runs per year that minimizes the total annual cost, and then No_i for each product. If No_i is significantly different from No, the schedule can be adjusted by trial and error in order to fit the product into the basic No schedule either more or less frequently, as dictated by No_i; but such a trial-and-error schedule is considered inefficient. The second analytical model was developed by reinterpreting the calculating process of the economic production quantity model. In this model we obtained two relationships: one between the optimal number of set-ups for the i-th item and the optimal total number of set-ups, and the other between the optimal average inventory investment for the i-th item and the optimal total average inventory investment. From these relationships we can determine how much average inventory investment per year would be required if a rational policy based on m·No set-ups per year for m products were followed and, alternatively, how many set-ups per year would be required if a rational policy were followed that kept an established total average inventory investment. We also learned that the relationship between the number of set-ups and the average inventory investment takes the form of a hyperbola. There is, however, no reason to say that the first analytical model is superior to the second. The first model is useful for a basic production schedule, while the second model is efficient for obtaining an improved production schedule, in the sense of reducing the total cost. Another merit of the second model is that, unlike the first model, where we have to know all the inventory costs for each product, we can obtain an improved production schedule with unknown inventory costs. The application of these quantitative analytical models to the Pohang can-manufacturing plant shows this point.
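
For context, the sketch below works through the textbook common-cycle EPQ calculation that this kind of model builds on; the variable names and numbers are illustrative assumptions, not the thesis's data.

```python
# Illustrative common-cycle EPQ calculation: choose No, the number of
# production runs per year shared by all products, to balance total set-up
# cost against total inventory-carrying cost.
from math import sqrt

def optimal_runs_per_year(products):
    """products: list of dicts with annual demand D, annual production rate P,
    unit holding cost h (per unit per year), and set-up cost S (per run)."""
    carrying = sum(p["h"] * p["D"] * (1 - p["D"] / p["P"]) / 2 for p in products)
    setup = sum(p["S"] for p in products)
    return sqrt(carrying / setup)

items = [
    {"D": 20000, "P": 60000, "h": 0.10, "S": 50.0},
    {"D":  8000, "P": 40000, "h": 0.25, "S": 80.0},
]
No = optimal_runs_per_year(items)
print(round(No, 2), "runs per year; lot size of item 0 =", round(items[0]["D"] / No))
```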


A Web-based QoS-guaranteed Traffic Control system (웹 기반의 QoS 보장형 트래픽 제어 시스템)

  • 이명섭;신경철;류명춘;박찬현
    • Proceedings of the IEEK Conference
    • /
    • 2002.06a
    • /
    • pp.45-48
    • /
    • 2002
  • This paper presents a QoS-guaranteed traffic control system which supports the QoS of real-time packet transmission for multimedia communication. The traffic control system presented in this paper applies the integrated service model and provides QoS for packet transmission by determining the packet transmission rate according to the network manager's policy and the optimal resource allocation for the end-to-end traffic load. It also provides QoS for real-time packet transmission through the AWF2Q+ scheduling algorithm and a per-class queuing method.
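
As a simplified illustration of per-class weighted fair queuing (not the AWF2Q+ algorithm itself), the sketch below stamps each packet with a virtual finish time and always transmits the smallest stamp; the class and the weights are assumptions.

```python
# Generic per-class weighted-fair-queuing illustration: classes with larger
# weights accumulate virtual finish time more slowly, so they are served more often.
import heapq

class PerClassWFQ:
    def __init__(self, weights):          # e.g. {"voice": 4, "best_effort": 1}
        self.weights = weights
        self.finish = {c: 0.0 for c in weights}   # last virtual finish time per class
        self.heap = []
        self.seq = 0

    def enqueue(self, cls, size_bytes):
        self.finish[cls] += size_bytes / self.weights[cls]
        heapq.heappush(self.heap, (self.finish[cls], self.seq, cls, size_bytes))
        self.seq += 1

    def dequeue(self):
        _, _, cls, size = heapq.heappop(self.heap)
        return cls, size

wfq = PerClassWFQ({"voice": 4, "best_effort": 1})
wfq.enqueue("best_effort", 1500)
wfq.enqueue("voice", 1500)
print(wfq.dequeue())   # ('voice', 1500) -- higher weight gives an earlier virtual finish
```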


Process scheduling policy based on fairness (Fairness에 중점을 둔 프로세스 스케줄링 기법)

  • Kang, Seong-Yong;Jang, Hak-Beom;Choi, Hyoung-Kee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.04a
    • /
    • pp.964-966
    • /
    • 2011
  • Operating systems support a variety of process scheduling techniques. In this paper, we design and implement a scheduling technique focused on fairness in Linux, an open-source operating system, and propose a method for resolving priority inversion and starvation among processes.
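
A minimal fairness-oriented scheduler in the spirit of this abstract might look like the sketch below; it is an assumption for illustration (a weighted virtual-runtime queue), not the design proposed in the paper.

```python
# Minimal fairness sketch: each process accumulates weighted virtual runtime,
# and the process with the smallest value runs next, so no runnable process
# can be starved indefinitely.
import heapq

class FairScheduler:
    def __init__(self):
        self.heap = []                         # (vruntime, pid)
        self.vruntime = {}
        self.weight = {}

    def add(self, pid, weight=1.0):
        self.weight[pid] = weight
        # New processes start at the current minimum so they neither starve nor dominate.
        self.vruntime[pid] = min(self.vruntime.values(), default=0.0)
        heapq.heappush(self.heap, (self.vruntime[pid], pid))

    def pick_next(self, timeslice_ms=4.0):
        vr, pid = heapq.heappop(self.heap)
        # Lower weight -> vruntime grows faster -> runs less often, but always returns.
        self.vruntime[pid] = vr + timeslice_ms / self.weight[pid]
        heapq.heappush(self.heap, (self.vruntime[pid], pid))
        return pid

s = FairScheduler()
s.add("editor", weight=2.0)
s.add("batch", weight=1.0)
print([s.pick_next() for _ in range(6)])   # the editor runs about twice as often
```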

Rough-cut simulation algorithm for the decision of operations policy at the block paint shop in shipbuilding (조선 도장공정 운영전략 수립을 위한 사전모의실험기법)

  • Chung, Kuy-Hoon;Park, Chang-Kyu;Min, Sang-Gyu;Park, Ju-Chull;Cho, Kyu-Kab
    • IE interfaces
    • /
    • v.14 no.1
    • /
    • pp.59-66
    • /
    • 2001
  • This paper introduces a case study performed at the block paint shop of Hyundai Heavy Industries. First of all, the overall production processes of shipbuilding and the related research activities conducted by Korean researchers are reviewed from the viewpoint of production planning. Then the HYPOS (Hyundai heavy industries Painting shop Operation System) project is briefly described; it has several modules, including a planning module, a scheduling module, a work order module, and a DB (database) management module. Although the HYPOS system has several modules, this paper mainly focuses on the planning module, which utilizes the rough-cut simulation algorithm to decide the policy for the production schedule. HYPOS has been in operation since June 2000, and operation data have been collected for a future evaluation of the performance of the HYPOS system. The evaluation of HYPOS will be made public as soon as enough operation data are available.


Short-Time Production Scheduling and Parts Routing for Flexible Assembly Lines (유연한 조립 시스템의 단기 생산 스케듈링과 라우팅에 관한 연구)

  • Sin, Ok-Geun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.6
    • /
    • pp.823-830
    • /
    • 1995
  • A reactive piloting policy for Flexible Assembly Lines (FAL) is proposed, in which the sequencing of operations as well as the assignment of tasks to manipulators are not predetermined but driven by the actual state of the FAL. For each work-in-process item coming from a manipulator, the next destination is determined by minimizing a temporal criterion that takes into account the time needed to reach the destination, the load of the destination manipulator, the duration of the operation to be completed at that manipulator, and the availability of product components in that manipulator. The purpose of the proposed piloting policy is to manufacture a given quantity of products as rapidly as possible by balancing the amount of work allocated to manipulators and to reduce the effort required for scheduling the production of short series of diversified products. After introducing the characteristics of assembly processes and the FAL model, the proposed algorithm is evaluated by simulation, and the simulations show satisfactory results.
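
The kind of temporal criterion described above could be sketched as follows; the weights, field names, and the fixed penalty for missing components are assumptions, not the paper's formulation.

```python
# Hedged sketch: pick the next manipulator for a work-in-process item by
# minimizing transfer time plus expected waiting, processing time, and a
# penalty when the required components are not yet available.
def next_destination(candidates):
    """candidates: list of dicts like
    {"name": "M2", "transfer": 12.0, "queue_load": 30.0,
     "op_duration": 25.0, "components_ready": True}   (times in seconds)"""
    def criterion(m):
        penalty = 0.0 if m["components_ready"] else 60.0   # assumed fixed penalty
        return m["transfer"] + m["queue_load"] + m["op_duration"] + penalty
    return min(candidates, key=criterion)["name"]

print(next_destination([
    {"name": "M1", "transfer": 5.0,  "queue_load": 80.0, "op_duration": 20.0, "components_ready": True},
    {"name": "M2", "transfer": 12.0, "queue_load": 30.0, "op_duration": 25.0, "components_ready": True},
    {"name": "M3", "transfer": 8.0,  "queue_load": 10.0, "op_duration": 22.0, "components_ready": False},
]))   # -> M2: lightly loaded and has its components available
```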


Dynamic Resource Adjustment Operator Based on Autoscaling for Improving Distributed Training Job Performance on Kubernetes (쿠버네티스에서 분산 학습 작업 성능 향상을 위한 오토스케일링 기반 동적 자원 조정 오퍼레이터)

  • Jeong, Jinwon;Yu, Heonchang
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.7
    • /
    • pp.205-216
    • /
    • 2022
  • One of the many tools used for distributed deep learning training is Kubeflow, which runs on Kubernetes, a container orchestration tool. TensorFlow jobs can be managed using the existing operator provided by Kubeflow. However, for distributed deep learning training jobs based on the parameter server architecture, the scheduling policy used by the existing operator does not consider the task affinity of the distributed training job and does not provide the ability to dynamically allocate or release resources. This can lead to long job completion times and a low resource utilization rate. Therefore, in this paper we propose a new operator that efficiently schedules distributed deep learning training jobs to minimize the job completion time and increase the resource utilization rate. We implemented the new operator by modifying the existing operator and conducted experiments to evaluate its performance. The experimental results showed that our scheduling policy achieved an average job completion time reduction rate of up to 84% and an average CPU utilization increase rate of up to 92%.
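
A hedged illustration of an autoscaling-style resource adjustment decision is sketched below; the thresholds, function name, and scaling step are assumptions, not the operator implemented in the paper.

```python
# Illustrative decision rule: grow or shrink a training job's worker count
# from observed CPU utilization, within the cluster's remaining capacity.
def adjust_workers(current, cpu_util, free_slots,
                   scale_up_at=0.85, scale_down_at=0.30, max_workers=16):
    """current: running worker replicas; cpu_util: mean utilization in [0, 1];
    free_slots: worker slots the cluster could still accommodate."""
    if cpu_util > scale_up_at and free_slots > 0 and current < max_workers:
        return current + min(free_slots, max_workers - current, 2)   # add up to 2 workers
    if cpu_util < scale_down_at and current > 1:
        return current - 1                                           # release one worker
    return current                                                   # keep the allocation

print(adjust_workers(current=4, cpu_util=0.92, free_slots=3))  # 6 -- scale up
print(adjust_workers(current=4, cpu_util=0.20, free_slots=0))  # 3 -- scale down
```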