• Title/Summary/Keyword: Parameter-scheduling

Development of a Resource Leveling Model Using Optimization

  • Jin-Lee Kim;Ralph D. Ellis
    • International conference on construction engineering and project management
    • /
    • 2005.10a
    • /
    • pp.558-563
    • /
    • 2005
  • This paper presents a GA-based optimization algorithm for a resource leveling model. The model levels the resources of a set of non-critical activities that experience conflicts simultaneously, up to a resource-rate level assumed by the planner, using a pair-wise comparison of the activities under consideration. A parameter called the future float is adopted as an indicator for assigning leveling priorities to the sets of conflicting activities. A construction project network example is worked out to demonstrate the performance of the proposed method. The histogram obtained with the proposed algorithm is shown to be the same as, or very close to, the one produced by the existing resource leveling method based on the least total float rule, which shifts non-critical activities individually.
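
To make the shifting idea concrete, the following is a minimal, hypothetical Python sketch of float-based leveling: non-critical activities are moved within their float to flatten the daily resource histogram. It does not reproduce the paper's GA or its future-float priority rule.

```python
# Hypothetical sketch of float-based resource leveling: shift non-critical
# activities within their total float to flatten the daily resource
# histogram. Illustrative only; not the paper's GA or priority rule.

def daily_usage(schedule, horizon):
    """Sum resource rates per day for activities given as (start, duration, rate)."""
    usage = [0] * horizon
    for start, duration, rate in schedule.values():
        for day in range(start, start + duration):
            usage[day] += rate
    return usage

def level(schedule, floats, horizon):
    """Greedily shift each non-critical activity within its float to the
    start day that minimizes the peak of the resource histogram."""
    for act, total_float in floats.items():
        start, duration, rate = schedule[act]
        best_start, best_peak = start, float("inf")
        for s in range(start, start + total_float + 1):
            schedule[act] = (s, duration, rate)
            peak = max(daily_usage(schedule, horizon))
            if peak < best_peak:
                best_start, best_peak = s, peak
        schedule[act] = (best_start, duration, rate)
    return schedule

# Toy example: two non-critical activities competing on days 0-2.
schedule = {"A": (0, 3, 4), "B": (0, 2, 5)}
print(level(schedule, {"A": 0, "B": 3}, horizon=10))
```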

Study on Accelerating Distributed ML Training in Orchestration

  • Su-Yeon Kim;Seok-Jae Moon
    • International journal of advanced smart convergence
    • /
    • v.13 no.3
    • /
    • pp.143-149
    • /
    • 2024
  • As the size of data and models in machine learning training continues to grow, training on a single server is becoming increasingly challenging. Consequently, distributed machine learning, which spreads the computational load across multiple machines, is becoming more important. However, several unresolved issues remain in improving the performance of distributed machine learning, including communication overhead, inter-node synchronization, data imbalance and bias, and resource management and scheduling. In this paper, we propose ParamHub, which uses orchestration to accelerate training. The system monitors the performance of each node after the first iteration and reallocates resources to slow nodes, thereby speeding up training. This ensures that resources go to the nodes that need them, maximizing the efficiency of resource utilization and enabling all nodes to progress uniformly, which shortens the overall training time. Furthermore, the method improves the system's scalability and flexibility, allowing effective application in clusters of various sizes.
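
As a rough illustration of the reallocation loop ParamHub is described as performing, here is a hedged Python sketch: nodes slower than the mean iteration time receive resource shares taken from faster ones. All names and thresholds are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of straggler-driven reallocation: after an iteration,
# nodes slower than the mean receive extra resource shares taken from
# faster nodes. Thresholds and names are illustrative assumptions.

def rebalance(iter_times, shares, step=0.1):
    """iter_times: node -> seconds for the last iteration;
    shares: node -> current resource share (sums to 1.0)."""
    mean = sum(iter_times.values()) / len(iter_times)
    slow = [n for n, t in iter_times.items() if t > 1.1 * mean]
    fast = [n for n, t in iter_times.items() if t < 0.9 * mean]
    for donor in fast:
        for receiver in slow:
            delta = min(step * shares[donor], shares[donor])
            shares[donor] -= delta       # total share is conserved
            shares[receiver] += delta
    return shares

print(rebalance({"n1": 12.0, "n2": 8.0, "n3": 8.5},
                {"n1": 0.33, "n2": 0.34, "n3": 0.33}))
```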

An Adaptive Delay Control based on the Transmission Urgency of the Packets in the Wireless Networks

  • Jeong, Dae-In
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.1A
    • /
    • pp.44-53
    • /
    • 2010
  • This paper proposes a traffic management policy for delay control in wireless networks. The EDD (Earliest Due Date) scheme is adopted as the packet scheduling policy, so that service is provided in order of the transmission urgency of the backlogged packets. In addition, we derive a formula that determines the contention window, one of the MAC parameters, with the goal of minimizing the non-work-conserving behavior of the traditional MAC scheme. This method eliminates the burden of the class-wise parameter settings typically required for priority control. Simulations are performed to validate the proposed scheme against a policy that adopts class-level queue management, such as the IEEE 802.11e standard. Smaller delays and higher rates of delay guarantees are observed throughout the experiments.
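
The EDD service order itself is simple to sketch; the following Python fragment serves backlogged packets by earliest deadline. The paper's contention-window formula is not reproduced here.

```python
# Minimal EDD (Earliest Due Date) scheduling sketch: packets are served
# in deadline order. Illustrates only the service discipline, not the
# paper's MAC-parameter derivation.
import heapq

class EddScheduler:
    def __init__(self):
        self._heap = []  # (deadline, sequence, packet)
        self._seq = 0    # tie-breaker keeps FIFO order among equal deadlines

    def enqueue(self, packet, deadline):
        heapq.heappush(self._heap, (deadline, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        """Return the packet whose transmission is most urgent."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

s = EddScheduler()
s.enqueue("video-frame", deadline=5)
s.enqueue("voice-sample", deadline=2)
print(s.dequeue())  # voice-sample: earliest due date served first
```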

Energy Efficient Wireless Sensor Networks Using Linear-Programming Optimization of the Communication Schedule

  • Tabus, Vlad;Moltchanov, Dmitri;Koucheryavy, Yevgeni;Tabus, Ioan;Astola, Jaakko
    • Journal of Communications and Networks
    • /
    • v.17 no.2
    • /
    • pp.184-197
    • /
    • 2015
  • This paper builds on a recent method, chain routing with even energy consumption (CREEC), for designing a wireless sensor network with a chain topology and for scheduling the communication so as to ensure even average energy consumption across the network. Here, a new suboptimal design is proposed and compared with the CREEC design. The chain topology in CREEC is reconfigured after each group of n converge-casts with the goal of making the energy consumption along the new paths between the nodes in the chain as even as possible. The new method designs a single near-optimal Hamiltonian circuit, used to obtain multiple chains that differ only in their terminal nodes at different converge-casts. The advantage of the new scheme is that, for the whole life of the network, most of the communication takes place between the same pairs of nodes, keeping topology reconfigurations to a minimum. The optimal scheduling of the communication between the network and the base station to maximize network lifetime, given the chosen minimum-length circuit, becomes a simple linear programming problem that needs to be solved only once, at the initialization stage. The maximum lifetime obtained when using any combination of chains is shown to be upper bounded by the solution of a suitable linear programming problem. The upper bounds show that the proposed method provides near-optimal solutions for several wireless sensor network parameter sets.
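
The one-shot lifetime-maximization LP the abstract describes can be sketched as follows, assuming per-node energy costs per chain variant; the energy numbers are made up and scipy is used only for illustration.

```python
# Sketch of the kind of linear program solved once at initialization:
# choose how many converge-casts to run on each chain variant so that
# total lifetime (total converge-casts) is maximized under per-node
# energy budgets. All numbers are illustrative, not from the paper.
from scipy.optimize import linprog

# energy[i][k]: energy node i spends per converge-cast on chain variant k
energy = [[1.0, 0.5],
          [0.5, 1.0],
          [0.8, 0.8]]
budget = [100.0, 100.0, 120.0]  # per-node energy budgets

# Maximize x_0 + x_1  <=>  minimize -(x_0 + x_1)
res = linprog(c=[-1.0, -1.0], A_ub=energy, b_ub=budget,
              bounds=[(0, None), (0, None)])
print("max converge-casts:", -res.fun, "per-chain counts:", res.x)
```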

Dynamic Resource Adjustment Operator Based on Autoscaling for Improving Distributed Training Job Performance on Kubernetes

  • Jeong, Jinwon;Yu, Heonchang
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.7
    • /
    • pp.205-216
    • /
    • 2022
  • One of the many tools used for distributed deep learning training is Kubeflow, which runs on Kubernetes, a container orchestration tool. TensorFlow jobs can be managed using the existing operator provided by Kubeflow. However, for distributed deep learning training jobs based on the parameter server architecture, the scheduling policy used by the existing operator does not consider the task affinity of the distributed training job and does not provide the ability to dynamically allocate or release resources. This can lead to long job completion times and a low resource utilization rate. Therefore, in this paper we propose a new operator that efficiently schedules distributed deep learning training jobs to minimize the job completion time and increase the resource utilization rate. We implemented the new operator by modifying the existing operator and conducted experiments to evaluate its performance. The results show that our scheduling policy reduced the average job completion time by up to 84% and increased average CPU utilization by up to 92%.
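
As an illustrative, non-authoritative sketch of the autoscaling decision such an operator might make, the Python function below scales worker replicas out when the cluster has idle capacity and the workers are saturated, and in when workers sit idle. The thresholds and names are assumptions, not the paper's policy.

```python
# Hypothetical autoscaling decision for a parameter-server training job:
# scale worker replicas out when the cluster has idle capacity and the
# workers are saturated, in when workers are mostly idle.

def desired_workers(current, cluster_cpu_util, job_cpu_util,
                    min_workers=1, max_workers=16):
    if cluster_cpu_util < 0.6 and job_cpu_util > 0.9:
        # Idle cluster capacity and saturated workers: scale out.
        return min(current + 1, max_workers)
    if job_cpu_util < 0.4:
        # Workers mostly idle (e.g. waiting on the parameter server): scale in.
        return max(current - 1, min_workers)
    return current

print(desired_workers(current=4, cluster_cpu_util=0.5, job_cpu_util=0.95))  # 5
```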

Dynamic slot allocation scheme for rt-VBR services in the wireless ATM networks

  • Yang, Seong-Ryoung;Lim, In-Taek;Heo, Jeong-Seok
    • The KIPS Transactions:PartC
    • /
    • v.9C no.4
    • /
    • pp.543-550
    • /
    • 2002
  • This paper proposes a dynamic slot allocation method for real-time VBR (rt-VBR) services in wireless ATM networks. The proposed method is characterized by a contention-based mechanism for reservation requests and a contention-free polling scheme for transferring the dynamic parameters. The base station scheduler allocates a dynamic-parameter minislot to each wireless terminal for transferring the residual lifetime and the number of requested slots as the dynamic parameters. The scheduling algorithm uses a priority scheme based on the maximum cell transfer delay parameter. Based on the received dynamic parameters, the scheduler allocates the uplink slots to the wireless terminal with the most stringent delay requirement. The simulation results show that the proposed method guarantees the delay constraint of rt-VBR services while significantly reducing the cell loss rate.
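
The delay-priority grant rule in the abstract can be sketched as follows: given each terminal's reported residual lifetime and slot request, the scheduler grants slots in order of the most stringent deadline. This is a toy model, not the paper's scheduler.

```python
# Sketch of delay-priority uplink slot allocation: terminals reporting the
# smallest residual lifetime are served first until the frame is full.

def allocate_slots(requests, slots_per_frame):
    """requests: list of (terminal, residual_lifetime, n_slots)."""
    grants = {}
    remaining = slots_per_frame
    for terminal, lifetime, n_slots in sorted(requests, key=lambda r: r[1]):
        granted = min(n_slots, remaining)
        if granted:
            grants[terminal] = granted
            remaining -= granted
    return grants

reqs = [("T1", 12, 3), ("T2", 4, 5), ("T3", 8, 2)]
print(allocate_slots(reqs, slots_per_frame=8))  # T2 first, then T3, then T1
```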

Optimal Scheduling of Drug Treatment for HIV Infection: Continuous Dose Control and Receding Horizon Control

  • Hyungbo Shim;Han, Seung-Ju;Chung, Chung-Choo;Nam, Sang-Won;Seo, Jin-Heon
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.3
    • /
    • pp.282-288
    • /
    • 2003
  • It is known that HIV (Human Immunodeficiency Virus) infection, which causes AIDS after some latent period, is a dynamic process that can be modeled mathematically. The effects of available anti-viral drugs, which prevent HIV from infecting healthy cells, can also be included in the model. In this paper we illustrate how control theory can be applied to a model of HIV infection. In particular, the drug dose is regarded as the control input, and the goal is to excite an immune response so that the symptoms of an infected patient do not develop into AIDS. Finite horizon optimal control is employed to obtain the optimal schedule of drug doses, since the model is highly nonlinear and we want maximum performance in enhancing the immune response. From the simulation studies, we found that a gradual reduction of the drug dose is important for optimality. We also demonstrate that the obtained open-loop optimal control is vulnerable to parameter variation of the model and to measurement noise. To overcome this difficulty, we finally present nonlinear receding horizon control to incorporate feedback into the drug treatment.
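
A toy receding-horizon loop in the spirit of the paper is sketched below: at each step a dose sequence is optimized over a short horizon on a drastically simplified, made-up infection model, only the first dose is applied, and the state is re-measured. This is conceptual only; it is not the paper's HIV model or cost function.

```python
# Conceptual receding-horizon control loop on a made-up scalar
# "viral load" model: optimize doses over a short horizon, apply the
# first dose, re-measure, repeat. Not the paper's model or cost.
import numpy as np
from scipy.optimize import minimize

def simulate(v, doses, dt=0.1):
    """Toy viral-load dynamics: growth damped by dose; returns trajectory."""
    traj = [v]
    for u in doses:
        v = v + dt * (0.5 * v - 2.0 * u * v)
        traj.append(v)
    return traj

def cost(doses, v0):
    # Penalize viral load and, mildly, total drug use.
    return sum(simulate(v0, doses)) + 0.1 * sum(doses)

v, horizon = 10.0, 5
for step in range(20):
    res = minimize(cost, x0=np.full(horizon, 0.5), args=(v,),
                   bounds=[(0.0, 1.0)] * horizon)
    u0 = res.x[0]              # apply only the first dose of the plan
    v = simulate(v, [u0])[-1]  # "measure" the new state, then re-optimize
print(f"final viral load: {v:.3f}")
```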

Optimal Scheduling of Drug Treatment for HIV Infection: Continuous Dose Control and Receding Horizon Control

  • Shim, H.;Han, S.J.;Jeong, I.S.;Huh, Y.H.;Chung, C.C.;Nam, S.W.;Seo, J.H.
Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1951-1956
    • /
    • 2003
  • It is known that HIV (Human Immunodeficiency Virus) infection, which causes AIDS after some latent period, is a dynamic process that can be modeled mathematically. The effects of available anti-viral drugs, which prevent HIV from infecting healthy cells, can also be included in the model. In this paper we illustrate how control theory can be applied to a model of HIV infection. In particular, the drug dose is regarded as the control input, and the goal is to excite an immune response so that the symptoms of an infected patient do not develop into AIDS. Finite horizon optimal control is employed to obtain the optimal schedule of drug doses, since the model is highly nonlinear and we want maximum performance in enhancing the immune response. From the simulation studies, we find that a gradual reduction of the drug dose is important for optimality. We also demonstrate that the obtained open-loop optimal control is vulnerable to parameter variation of the model and to measurement noise. To overcome this difficulty, we finally present nonlinear receding horizon control to incorporate feedback into the drug treatment.

Delay Bound Analysis of Networks based on Flow Aggregation

  • Joung, Jinoo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.1
    • /
    • pp.107-112
    • /
    • 2020
  • We analyze a flow-aggregate (FA) based network delay guarantee framework with the generalized minimal interleaved regulator (IR) initially suggested by the IEEE 802.1 time-sensitive networking (TSN) task group (TG). The framework comprises multiple networks with minimal IRs attached at their output ports to suppress burst cascades, and uses FAs within a network to reduce scheduling complexity. We analyze the framework over various topologies and parameter sets and conclude that the low-complexity FA-based framework can yield better performance than the high-complexity integrated services (IntServ) system, especially for large network sizes and large FA sizes.
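
For orientation, a classic per-hop network-calculus bound can be composed along a path as sketched below: a token-bucket flow of burst b crossing a rate-latency server of rate R and latency T is delayed at most T + b/R per hop. The paper's bounds for interleaved regulators are more refined; the numbers here are illustrative.

```python
# Classic network-calculus per-hop delay bound for a token-bucket flow
# (burst b) at a rate-latency server (rate R, latency T): D <= T + b/R.
# Summing per-hop bounds is a loose end-to-end composition, shown only
# for orientation; the paper's IR-based bounds are tighter.

def hop_delay_bound(burst, service_rate, latency):
    assert service_rate > 0
    return latency + burst / service_rate

hops = [(2000.0, 1e6, 0.001), (2000.0, 5e5, 0.002)]  # (bits, bit/s, s)
print(sum(hop_delay_bound(b, R, T) for b, R, T in hops))
```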

Design of Temperature based Gain Scheduled Controller for Wide Temperature Variation

  • Jeong, Jae Hyeon;Kim, Jung Han
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.30 no.8
    • /
    • pp.831-838
    • /
    • 2013
  • This paper focuses on the design of an efficient temperature controller for a plant with a wide range of operating temperatures. The greater the temperature difference a plant experiences, the larger the nonlinearity it is exposed to in terms of heat transfer. For this reason, we divided the temperature range into five sections, each of which was modeled using ARMAX (autoregressive moving average with exogenous input). The movement of the dominant poles of the sectioned system was analyzed and, based on the variation of the system parameters with temperature, optimal control parameters were obtained through simulation and experiments. From the configurations for each section of the temperature range, a temperature-based gain-scheduled controller (TBGSC) was designed to handle the parameter variation of the plant. Experiments showed that the TBGSC achieved improved performance compared with an existing proportional integral derivative (PID) controller.
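
Gain scheduling over temperature sections is easy to sketch; the Python fragment below switches PID gain sets according to the measured temperature. The five sections and the gains are placeholders, not the paper's identified values.

```python
# Sketch of temperature-based gain scheduling: the operating range is
# split into sections, each with its own PID gains, and the controller
# switches gain sets by measured temperature. Sections and gains are
# placeholders, not the paper's identified values.

SECTIONS = [  # (t_low, t_high, kp, ki, kd) -- illustrative
    (-40.0,   0.0, 8.0, 0.50, 1.0),
    (  0.0,  40.0, 6.0, 0.40, 0.8),
    ( 40.0,  80.0, 5.0, 0.30, 0.6),
    ( 80.0, 120.0, 4.0, 0.25, 0.5),
    (120.0, 160.0, 3.0, 0.20, 0.4),
]

class GainScheduledPid:
    def __init__(self):
        self._integral = 0.0
        self._prev_err = 0.0

    def _gains(self, temp):
        for t_low, t_high, kp, ki, kd in SECTIONS:
            if t_low <= temp < t_high:
                return kp, ki, kd
        return SECTIONS[-1][2:]  # clamp outside the modeled range

    def update(self, setpoint, temp, dt):
        kp, ki, kd = self._gains(temp)
        err = setpoint - temp
        self._integral += err * dt
        deriv = (err - self._prev_err) / dt
        self._prev_err = err
        return kp * err + ki * self._integral + kd * deriv

pid = GainScheduledPid()
print(pid.update(setpoint=100.0, temp=25.0, dt=0.1))
```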