• Title/Summary/Keyword: integer programming

Search Results: 810

Development of a Sea-Transportation Planning Support System for a Car Carrier Shipping Company (자동차 운반선사의 해상운송계획 지원 시스템 개발)

  • Jeong, Jae-Un;Choe, Hyeong-Rim;Kim, Hyeon-Su;Park, Byeong-Ju
    • Proceedings of the Korea Intelligent Information System Society Conference / 2007.11a / pp.556-563 / 2007
  • The growth of international cargo volume has invigorated sea transportation, which in turn has intensified competition among market participants, so strengthening competitiveness by improving the efficiency of sea transportation has become increasingly important. In Korea in particular, the growth of automobile import and export volume has highlighted the importance of efficient transportation planning for car carrier shipping companies. This study analyzes the current state of car carrier operations to identify problems in transportation planning and develops a sea-transportation planning support system for car carriers to resolve them. By systematizing the sea-transportation planning process for car carriers, the system improves both the speed and the quality of plan generation, and by enabling systematic management of plans (revisions, changes, and so on) it helps users make better decisions. To account for the various exceptional situations that arise from the characteristics of sea transportation and automobile cargo, an IP (Integer Programming) model is used to generate optimal plans that maximize profit or minimize cost, and the support system allows practitioners to efficiently revise the changes that occur after a plan has been established.

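As a rough illustration of the kind of IP model this abstract refers to, the sketch below selects cargo lots for a single voyage to maximize profit under a capacity limit. It uses PuLP with invented lot names, profits, space requirements, and vessel capacity; the paper's actual model covers far more of the planning exceptions it mentions.

```python
import pulp

# Hypothetical data: candidate cargo lots for one voyage of a car carrier.
lots = ["KR-US-sedan", "KR-EU-suv", "JP-US-truck"]
profit = {"KR-US-sedan": 120, "KR-EU-suv": 200, "JP-US-truck": 150}     # profit per lot
space = {"KR-US-sedan": 800, "KR-EU-suv": 1500, "JP-US-truck": 1100}    # car-equivalent units
capacity = 2500                                                         # vessel deck capacity

model = pulp.LpProblem("car_carrier_lot_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("take", lots, cat="Binary")           # 1 if the lot is carried

model += pulp.lpSum(profit[l] * x[l] for l in lots)             # objective: maximize profit
model += pulp.lpSum(space[l] * x[l] for l in lots) <= capacity  # deck capacity constraint

model.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: int(x[l].value()) for l in lots}, "profit =", pulp.value(model.objective))
```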

A Joint Allocation Algorithm of Computing and Communication Resources Based on Reinforcement Learning in MEC System

  • Liu, Qinghua;Li, Qingping
    • Journal of Information Processing Systems / v.17 no.4 / pp.721-736 / 2021
  • For a mobile edge computing (MEC) system supporting a dense network, a joint allocation algorithm of computing and communication resources based on reinforcement learning is proposed. The energy consumption of task execution is defined as the maximum energy consumption of any user's task execution in the system. Considering the constraints on task offloading, power allocation, transmission rate, and computation resource allocation, the joint task offloading and resource allocation problem is modeled as minimizing the maximum task-execution energy consumption. As a mixed-integer nonlinear programming problem, it is difficult to solve directly with traditional optimization methods, so this paper uses a reinforcement learning algorithm. The Markov decision process and the theoretical basis of reinforcement learning are then introduced to underpin the simulation experiments. Based on reinforcement learning and joint allocation of communication resources, data-task offloading and the power-control strategy are jointly optimized for each terminal device, and the local computing and task offloading models are built. The simulation results show that the total task computation cost of the proposed algorithm is 5%-10% less than that of the two comparison algorithms under the same task input, and more than 5% less than that of the two new comparison algorithms.
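To make the min-max energy objective concrete, here is a minimal single-state (bandit-style) reinforcement-learning sketch over joint offloading decisions. The per-user energy figures are invented, and the paper's power, rate, and resource constraints are omitted; this is only a toy reading of the learning loop.

```python
import random

# Hypothetical toy setting: 3 users, each task either runs locally (0) or offloads (1).
LOCAL_E = [4.0, 6.0, 5.0]      # local-execution energy per user (assumed)
OFFLOAD_E = [2.0, 7.0, 3.0]    # transmission energy per user when offloading (assumed)

def max_energy(action):
    """System cost from the abstract: the maximum per-user task-execution energy."""
    return max(OFFLOAD_E[i] if a else LOCAL_E[i] for i, a in enumerate(action))

actions = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
Q = {a: 0.0 for a in actions}          # single-state Q-table over joint offloading decisions
alpha, epsilon = 0.1, 0.2

for _ in range(2000):
    a = random.choice(actions) if random.random() < epsilon else max(Q, key=Q.get)
    reward = -max_energy(a)            # minimizing max energy == maximizing its negative
    Q[a] += alpha * (reward - Q[a])    # bandit-style update (no next state in this toy)

best = max(Q, key=Q.get)
print("best offloading decision:", best, "max energy:", max_energy(best))
```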

A Heuristic for Drone-Utilized Blood Inventory and Delivery Planning (드론 활용 혈액 재고/배송계획 휴리스틱)

  • Jang, Jin-Myeong;Kim, Hwa-Joong;Son, Dong-Hoon
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.106-116 / 2021
  • This paper considers a joint problem of blood inventory planning at hospitals and blood delivery planning from blood centers to hospitals, in order to alleviate the blood service imbalance between big and small hospitals occurring in practice. The joint problem is to determine delivery timing, delivery quantity, delivery means such as medical drones and legacy blood vehicles, and inventory levels so as to minimize inventory and delivery costs while satisfying hospitals' blood demand over a planning horizon. The problem is formulated as a mixed integer programming model that incorporates practical constraints such as blood lifespan and drone specifications. To solve it, this paper employs a Lagrangian relaxation technique and suggests a time-efficient Lagrangian heuristic algorithm. The performance of the suggested heuristic is evaluated through computational experiments on randomly generated problem instances that mimic real data from the Korean Red Cross in Seoul and other reliable sources. The results show that the suggested heuristic obtains near-optimal solutions in a short amount of time. In addition, we discuss how changes in blood lifespan, the number of planning periods, the number of hospitals, and drone specifications affect the performance of the suggested Lagrangian heuristic.
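The sketch below illustrates the core of a Lagrangian relaxation loop of the general kind mentioned above, on an invented single-constraint delivery toy: dualize the coupling (demand) constraint, solve the relaxed binary problem by inspection, and update the multiplier with a subgradient step. A full Lagrangian heuristic, like the paper's, would additionally repair the relaxed solution into a feasible delivery/inventory plan.

```python
# Toy Lagrangian relaxation (invented numbers, not the paper's model): choose deliveries
# x_i in {0,1} to minimize sum(c_i * x_i) while meeting demand sum(a_i * x_i) >= d.
c = [5.0, 4.0, 6.0, 3.0]    # cost of each candidate delivery (drone or vehicle trip)
a = [10, 8, 12, 5]          # units of blood each candidate delivery supplies
d = 20                      # hospital demand to cover

def relaxed_solution(lam):
    # With the demand constraint dualized, each x_i is chosen by its reduced cost alone.
    return [1 if c[i] - lam * a[i] < 0 else 0 for i in range(len(c))]

lam = 0.0
for it in range(200):
    x = relaxed_solution(lam)
    g = d - sum(a[i] * x[i] for i in range(len(c)))   # subgradient of the dualized constraint
    lam = max(0.0, lam + (0.5 / (it + 1)) * g)        # diminishing-step multiplier update

x = relaxed_solution(lam)
print("multiplier ~ %.3f" % lam, "relaxed selection:", x)
# A real heuristic would now repair x into a feasible plan and use the dual value as a bound.
```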

A Cloud-Edge Collaborative Computing Task Scheduling and Resource Allocation Algorithm for Energy Internet Environment

  • Song, Xin;Wang, Yue;Xie, Zhigang;Xia, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2282-2303 / 2021
  • To solve the problems of heavy computing load and system transmission pressure in the energy internet (EI), we establish a three-tier cloud-edge integrated EI network based on cloud-edge collaborative computing to achieve a tradeoff between energy consumption and system delay. A joint optimization problem for resource allocation and task offloading in the three-tier cloud-edge integrated EI network is formulated to minimize the total system cost under constraints on the task-scheduling binary variables of each sensor node, the maximum uplink transmit power of each sensor node, the limited computation capability of each sensor node, and the maximum computation resource of each edge server; this is a Mixed Integer Non-linear Programming (MINLP) problem. To solve it, we propose a joint task offloading and resource allocation algorithm (JTOARA), which is decomposed into three subproblems: uplink transmit-power allocation, computation resource allocation, and offloading scheme selection. The power allocation of each sensor node is obtained by a bisection search algorithm, which converges quickly, while the computation resource allocation is derived by a line optimization method and convex optimization theory. Finally, to achieve the optimal task offloading, we propose a cloud-edge collaborative computation offloading scheme based on game theory and prove the existence of a Nash equilibrium. The simulation results demonstrate that our proposed algorithm improves output performance compared with the conventional algorithms, and its performance is close to that of the enumerative algorithm.
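As an illustration of the bisection idea used for the power-allocation subproblem, the sketch below finds the minimum transmit power meeting a target uplink rate for a single sensor node. The Shannon-type rate model and all constants are assumptions for illustration, not the paper's formulation.

```python
import math

# Illustrative bisection for one sensor node's uplink power (invented parameters).
B, g, N0 = 1e6, 1e-7, 1e-13       # bandwidth (Hz), channel gain, noise power (assumed)
target_rate = 2e6                 # required uplink rate in bit/s (assumed)

def rate(p):
    return B * math.log2(1.0 + p * g / N0)   # Shannon-type rate, increasing in p

lo, hi = 0.0, 1.0                 # power bracket in watts; rate(hi) must exceed the target
for _ in range(60):               # 60 halvings -> precision far below hardware resolution
    mid = (lo + hi) / 2.0
    if rate(mid) < target_rate:
        lo = mid
    else:
        hi = mid
print("minimum power ~ %.6f W, rate ~ %.0f bit/s" % (hi, rate(hi)))
```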

IRSML: An intelligent routing algorithm based on machine learning in software defined wireless networking

  • Duong, Thuy-Van T.;Binh, Le Huu
    • ETRI Journal / v.44 no.5 / pp.733-745 / 2022
  • In software-defined wireless networking (SDWN), optimal routing is one of the most effective ways to improve performance. Routing can be done by many different methods, the most common of which formulate an integer linear programming (ILP) problem and build optimal routing metrics. These methods usually focus on a single routing objective, such as minimizing the packet blocking probability, minimizing end-to-end delay (EED), or maximizing network throughput; it is difficult to consider multiple objectives concurrently in one routing algorithm. In this paper, we investigate the application of machine learning to routing control in SDWN and propose an intelligent routing algorithm based on machine learning to improve network performance. The proposed algorithm can optimize multiple routing objectives. Our idea is to combine supervised learning (SL) and reinforcement learning (RL) to discover new routes. SL is used to predict link performance metrics, including EED, quality of transmission (QoT), and packet blocking probability (PBP). The routing itself is done by the RL method: the Q-value in the fundamental RL equation stores the PBP, which is used for route selection, while the learning-rate coefficient is changed flexibly to enforce the routing constraints during learning; these constraints include QoT and EED. Our performance evaluations based on OMNeT++ show that the proposed algorithm significantly improves network performance in terms of QoT, EED, packet delivery ratio, and network throughput compared with other well-known routing algorithms.
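A toy version of the Q-value idea described above is sketched below: Q-values accumulate an additive approximation of path blocking probability, and the learning rate is switched when a choice would violate an end-to-end delay bound, which is one simple reading of the "flexibly changed" learning rate. The topology, link metrics, delay bound, and penalty rule are all invented.

```python
import random

# Toy Q-learning route selection in the spirit of the abstract (all numbers assumed).
neighbors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
link_pbp = {("A", "B"): 0.05, ("A", "C"): 0.02, ("B", "D"): 0.01, ("C", "D"): 0.03}
link_eed = {("A", "B"): 4.0, ("A", "C"): 9.0, ("B", "D"): 3.0, ("C", "D"): 2.0}   # ms
EED_MAX = 10.0                                   # end-to-end delay bound (assumed)

Q = {link: 0.0 for link in link_pbp}             # Q approximates path blocking (additively)
for _ in range(3000):
    node, eed = "A", 0.0                         # every episode routes from A toward D
    while neighbors[node]:
        nxt = random.choice(neighbors[node])     # uniform exploration in this toy
        over = eed + link_eed[(node, nxt)] > EED_MAX
        alpha = 1.0 if over else 0.1             # violations are learned at full rate
        best_next = min((Q[(nxt, m)] for m in neighbors[nxt]), default=0.0)
        target = 1.0 if over else link_pbp[(node, nxt)] + best_next  # 1.0 = treat as blocked
        Q[(node, nxt)] += alpha * (target - Q[(node, nxt)])
        eed += link_eed[(node, nxt)]
        node = nxt

route, node = ["A"], "A"
while neighbors[node]:                           # greedy route extraction from learned Q
    nxt = min(neighbors[node], key=lambda m: Q[(node, m)])
    route.append(nxt)
    node = nxt
print("selected route:", route)
```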

Energy efficiency task scheduling for battery level-aware mobile edge computing in heterogeneous networks

  • Xie, Zhigang;Song, Xin;Cao, Jing;Xu, Siyang
    • ETRI Journal / v.44 no.5 / pp.746-758 / 2022
  • This paper focuses on a mobile edge-computing-enabled heterogeneous network. A battery level-aware task-scheduling framework is proposed to improve energy efficiency and prolong the operating hours of battery-powered mobile devices. The formulated optimization problem is a typical mixed-integer nonlinear programming problem. To solve this nondeterministic polynomial (NP)-hard problem, a decomposition-based task-scheduling algorithm is proposed. Using an alternating optimization technique, the original problem is divided into three subproblems. In the outer loop, task offloading decisions are obtained using a pruning search algorithm for the task offloading subproblem. In the inner loop, closed-form solutions for the computational resource allocation subproblems are derived using the Lagrangian multiplier method. It is then proven that the transmit power-allocation subproblem is unimodal, and this subproblem is solved using a gradient-based bisection search algorithm. The simulation results demonstrate that the proposed framework achieves better energy efficiency than other frameworks. Additionally, the impact of the battery level-aware scheme on the operating hours of battery-powered mobile devices is investigated.
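The sketch below shows a gradient-based bisection of the sort mentioned for the unimodal transmit-power subproblem: bisect on the sign of a numerical derivative of an assumed energy-efficiency objective. The objective and constants are illustrative assumptions, not the paper's model.

```python
import math

# Gradient-based bisection over a unimodal (quasi-concave) objective (assumed parameters).
B, g, N0, p_circ = 1e6, 1e-7, 1e-13, 0.1   # bandwidth, channel gain, noise, circuit power

def efficiency(p):
    return B * math.log2(1.0 + p * g / N0) / (p + p_circ)    # bits per joule, unimodal in p

def d_efficiency(p, h=1e-6):
    return (efficiency(p + h) - efficiency(p - h)) / (2 * h)  # numerical gradient

lo, hi = 1e-6, 5.0                 # feasible transmit-power range in watts (assumed)
for _ in range(60):                # the gradient's sign tells which half keeps the maximizer
    mid = (lo + hi) / 2.0
    if d_efficiency(mid) > 0:
        lo = mid
    else:
        hi = mid
p_star = (lo + hi) / 2.0
print("optimal power ~ %.4f W, efficiency ~ %.3e bit/J" % (p_star, efficiency(p_star)))
```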

An Efficient Service Function Chains Orchestration Algorithm for Mobile Edge Computing

  • Wang, Xiulei;Xu, Bo;Jin, Fenglin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4364-4384 / 2021
  • Dynamic network states and terminal mobility make service function chain (SFC) orchestration mechanisms based on static and deterministic assumptions hard to apply in SDN/NFV mobile edge computing networks. Designing dynamic, online SFC orchestration mechanisms can greatly improve the execution efficiency of compute-intensive and resource-hungry applications in mobile edge computing networks. In order to increase the overall profit of the service provider and reduce the resource cost, the system running time is divided into a sequence of time slots and a dynamic orchestration scheme based on an improved column generation algorithm is proposed for each slot. First, the SFC dynamic orchestration problem is formulated as an integer linear programming (ILP) model based on a layered graph. Then, to reduce the computation cost, a column generation model is used to simplify the ILP model. Finally, a two-stage heuristic algorithm based on a greedy strategy is proposed. Four metrics are defined, and the performance of the proposed algorithm is evaluated by simulation. The results show that our proposal provides more than a 30% reduction in run time and about a 12% improvement in service deployment success ratio compared to the Viterbi-algorithm-based mechanism.
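To give a flavor of a greedy placement heuristic (not the paper's exact two-stage algorithm, which works on a layered-graph ILP with column generation), the sketch below places each VNF of a chain on the cheapest node that still has capacity, with invented nodes, costs, and demands.

```python
# Tiny greedy SFC-placement sketch (nodes, capacities, costs, and demands are invented).
nodes = {"edge1": {"cpu": 8, "cost": 1.0}, "edge2": {"cpu": 6, "cost": 1.2},
         "cloud": {"cpu": 32, "cost": 2.0}}
chain = [("firewall", 2), ("nat", 1), ("dpi", 4)]   # (VNF, CPU demand), in traversal order

placement, remaining = {}, {n: v["cpu"] for n, v in nodes.items()}
for vnf, demand in chain:
    # Stage-1 idea: keep only nodes that can host the VNF; stage-2 idea: pick the cheapest.
    feasible = [n for n in nodes if remaining[n] >= demand]
    if not feasible:
        placement = None                  # a real orchestrator would reject or reroute the SFC
        break
    best = min(feasible, key=lambda n: nodes[n]["cost"])
    placement[vnf] = best
    remaining[best] -= demand

print("placement:", placement, "remaining CPU:", remaining)
```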

Traffic Forecast Assisted Adaptive VNF Dynamic Scaling

  • Qiu, Hang;Tang, Hongbo;Zhao, Yu;You, Wei;Ji, Xinsheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.11 / pp.3584-3602 / 2022
  • NFV realizes flexible and rapid software deployment and management of network functions in the cloud network, and provides network services in the form of chained virtual network functions (VNFs). However, using VNFs to provide quality-guaranteed services is still a challenge because of the inherent difficulty of intelligently scaling VNFs to handle traffic fluctuations. Most existing works scale VNFs with fixed-capacity instances; that is, they take instances of the same size and determine a suitable deployment location without considering the resource distribution of the cloud network. This paper proposes a traffic-forecast-assisted proactive VNF scaling approach that adapts instance capacity to node resources. We first model VNF scaling as an integer quadratic program and then propose a proactive adaptive VNF scaling (PAVS) approach. The approach employs an efficient LSTM-based traffic forecasting method to predict upcoming traffic demands. With the forecast demands, we design a resource-aware new-VNF-instance deployment algorithm to scale out under-provisioned VNFs and a redundant VNF-instance management mechanism to scale in over-provisioned VNFs. Trace-driven simulation demonstrates that the proposed approach responds to traffic fluctuations in advance and reduces the total cost significantly.
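A minimal sketch of forecast-driven scaling in the spirit of PAVS follows. A simple exponential-smoothing forecaster stands in for the paper's LSTM, and the traffic trace, instance capacity, and current instance count are invented.

```python
import math

# Forecast-then-scale sketch; exponential smoothing is only a stand-in for the LSTM.
traffic = [120, 130, 150, 170, 160, 180, 210, 240]   # requests/s observed per time slot (assumed)

def forecast_next(series, alpha=0.5):
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level       # exponential smoothing update
    return level

instance_capacity = 60        # requests/s one VNF instance of this size can absorb (assumed)
running = 3                   # currently deployed instances (assumed)

predicted = forecast_next(traffic)
needed = math.ceil(predicted / instance_capacity)
if needed > running:
    print("scale out: deploy %d new instance(s) before the next slot" % (needed - running))
elif needed < running:
    print("scale in: mark %d instance(s) redundant" % (running - needed))
else:
    print("keep the current %d instance(s)" % running)
print("predicted traffic ~ %.1f req/s" % predicted)
```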

Optimizing Energy-Latency Tradeoff for Computation Offloading in SDIN-Enabled MEC-based IIoT

  • Zhang, Xinchang;Xia, Changsen;Ma, Tinghuai;Zhang, Lejun;Jin, Zilong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.4081-4098 / 2022
  • To tackle the contradiction between computation-intensive industrial applications and resource-constrained Edge Devices (EDs) in the Industrial Internet of Things (IIoT), a novel computation task offloading scheme for SDIN-enabled MEC-based IIoT is proposed in this paper. To reduce task completion latency and the energy consumption of EDs, a joint optimization method is proposed that jointly optimizes the local CPU-cycle frequency, the offloading decision, and the allocation of wireless and computation resources. The task offloading problem is thereby formulated as a Mixed Integer Nonlinear Programming (MINLP) problem, which is a large-scale NP-hard problem. To solve this problem with acceptable time complexity, a sub-optimal algorithm, GPCOA, based on hybrid evolutionary computation is proposed. Simulation outcomes reveal that the proposed method outperforms the baseline methods, and the optimization results show that the latency-related weight is effective in reducing task execution delay and improving energy efficiency.
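As a hedged illustration of the hybrid evolutionary idea behind GPCOA, the sketch below runs a small genetic algorithm over binary offloading vectors against an invented weighted latency-plus-energy cost; it is not the paper's algorithm or cost model.

```python
import random

# Compact genetic-algorithm sketch for offloading decisions (all costs are invented).
N = 6                                                       # number of edge devices
local_cost = [random.uniform(4, 9) for _ in range(N)]       # weighted latency+energy if local
offload_cost = [random.uniform(2, 7) for _ in range(N)]     # per-device cost if offloaded
congestion = 1.5                                            # extra cost per crowded offload

def fitness(chrom):                                         # lower is better
    k = sum(chrom)
    base = sum(offload_cost[i] if chrom[i] else local_cost[i] for i in range(N))
    return base + congestion * max(0, k - 2)                # penalize crowding the edge server

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                                      # truncation selection
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        cut = random.randint(1, N - 1)
        child = a[:cut] + b[cut:]                           # one-point crossover
        if random.random() < 0.2:                           # bit-flip mutation
            i = random.randrange(N)
            child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
print("offloading decision:", best, "cost ~ %.2f" % fitness(best))
```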

A Heuristic Algorithm of an Efficient Berth Allocation for a Public Container Terminal (공공 컨테이너 터미널의 효율적인 선석할당을 위한 발견적 알고리즘 개발에 관한 연구)

  • Keum, J.S.
    • Journal of Korean Port Research / v.11 no.2 / pp.191-202 / 1997
  • As the suitability of berth allocation ultimately has a significant influence on the performance of a berth, a great deal of attention should be given to it. Generally, a berth allocation problem involves conflicting interests between servers and users. In addition, there is considerable uncertainty caused by factors such as departure delays, inclement weather en route, poor handling equipment, and a lack of storage space, all of which contribute to the uncertainty of arrival and berthing times. Thus, it is necessary to establish berth allocation planning that reflects the positions of the interested parties and the ambiguity of the parameters. For this, the berth allocation problem is formulated as a fuzzy 0-1 integer program that introduces the concept of Maximum Position Shift (MPS). However, this approach has limitations in computational time and memory as the problem size increases, and with respect to integration with other sub-systems such as the ship planning and yard planning systems. To address this, this paper focuses on developing an efficient heuristic algorithm as a new technique for obtaining an effective solution. The suggested algorithm is verified through illustrative examples and an empirical application to BCTOC.

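To illustrate the Maximum Position Shift (MPS) concept referenced above, the sketch below brute-forces single-berth service orders in which no ship moves more than MPS positions from its arrival order and picks the one with the least weighted waiting time. The ship data and weights are invented, and the paper's fuzzy 0-1 model and heuristic are far richer.

```python
from itertools import permutations

# Brute-force MPS illustration (single berth; ship data and priority weights are invented).
ships = [  # (name, arrival time, handling time, priority weight), listed in arrival order
    ("V1", 0, 5, 1.0), ("V2", 1, 3, 2.0), ("V3", 2, 4, 1.0), ("V4", 3, 2, 3.0)]
MPS = 1    # a ship may be served at most one position away from its arrival position

def weighted_wait(order):
    t, total = 0, 0.0
    for pos in order:                       # serve ships in the chosen order
        name, arrival, handling, w = ships[pos]
        start = max(t, arrival)
        total += w * (start - arrival)      # priority-weighted waiting time
        t = start + handling
    return total

best = None
for order in permutations(range(len(ships))):
    if all(abs(service_pos - arrival_pos) <= MPS
           for service_pos, arrival_pos in enumerate(order)):   # enforce the MPS limit
        cost = weighted_wait(order)
        if best is None or cost < best[0]:
            best = (cost, order)

cost, order = best
print("service order:", [ships[i][0] for i in order], "weighted waiting time:", cost)
```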