• Title/Summary/Keyword: Path-Based Reward Function

Hybrid Learning for Vision-and-Language Navigation Agents

  • Oh, Suntaek; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering, v.9 no.9, pp.281-290, 2020
  • The Vision-and-Language Navigation (VLN) task is a complex intelligence problem that requires both visual and language comprehension skills. In this paper, we propose a new learning model for vision-and-language navigation agents. The model adopts hybrid learning, combining imitation learning based on demonstration data with reinforcement learning based on action rewards. It can therefore counter both weaknesses: imitation learning's tendency to be biased toward the demonstration data, and reinforcement learning's relatively low data efficiency. In addition, the proposed model uses a novel path-based reward function designed to overcome the limitations of existing goal-based reward functions. We demonstrate the high performance of the proposed model through various experiments using the Matterport3D simulation environment and the R2R benchmark dataset.
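
The abstract does not give the reward formula itself; the minimal Python sketch below contrasts a goal-based shaping reward with a path-based one that credits the agent for staying close to the demonstrated route. The function names, position arrays, and nearest-waypoint distance are assumptions for illustration, not the paper's formulation.

    import numpy as np

    def goal_based_reward(pos, next_pos, goal):
        """Goal-based shaping: credit only progress toward the goal."""
        return np.linalg.norm(pos - goal) - np.linalg.norm(next_pos - goal)

    def path_based_reward(pos, next_pos, ref_path):
        """Path-based shaping (illustrative): credit progress toward the
        demonstrated reference path, so an agent that nears the goal while
        straying from the instructed route is not rewarded for doing so.
        ref_path: (N, d) array of waypoints from the demo trajectory."""
        def dist_to_path(p):
            return min(np.linalg.norm(p - w) for w in ref_path)
        return dist_to_path(pos) - dist_to_path(next_pos)

In the hybrid setting the abstract describes, a shaped reward of this kind would drive the reinforcement-learning term while a separate imitation loss anchors the policy to the demonstrations.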

Weight Adjustment Scheme Based on Hop Count in Q-routing for Software Defined Networks-enabled Wireless Sensor Networks

  • Godfrey, Daniel; Jang, Jinsoo; Kim, Ki-Il
    • Journal of Information and Communication Convergence Engineering, v.20 no.1, pp.22-30, 2022
  • The reinforcement learning algorithm has proven its potential for solving sequential decision-making problems under uncertainty, such as finding paths along which to route data packets in wireless sensor networks. With reinforcement learning, computing the optimum path requires careful definition of the so-called reward function, a linear function that aggregates multiple objective functions into a single objective and yields a numerical value (reward) to be maximized. In a typical linear reward function, the objectives to be optimized are combined as a weighted sum with fixed weighting factors for all learning agents. This study proposes a reinforcement learning-based routing protocol for wireless sensor networks in which different learning agents prioritize different objectives by assigning different weighting factors to the aggregated objectives of the reward function. We assign the weighting factors in a sensor node's reward function according to its hop-count distance to the sink node, and we expect this approach to enhance the effectiveness of multi-objective reinforcement learning for wireless sensor networks through a balanced trade-off among competing parameters. Furthermore, we propose an SDN (Software Defined Networks) architecture with multiple controllers for constant network monitoring, allowing learning agents to adapt to the dynamics of network conditions. Simulation results show that the proposed scheme enhances the performance of wireless sensor networks under varied conditions, such as node density and traffic intensity, with a good trade-off among competing performance metrics.
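
As an illustration of the hop-count-dependent weighting the abstract describes, the Python sketch below assigns weights to three commonly optimized objectives (residual energy, delay, congestion, assumed normalized to [0, 1] with higher meaning better). The specific weighting rule and coefficients are assumptions, not the paper's values.

    def routing_reward(hops_to_sink, max_hops, energy, delay, congestion):
        """Weighted-sum reward with hop-count-dependent weights (sketch).
        Nodes near the sink relay the most traffic, so they weight residual
        energy more heavily; distant nodes weight end-to-end delay more."""
        h = hops_to_sink / max_hops                # normalized hop distance in [0, 1]
        w_energy = 0.5 * (1.0 - h) + 0.1           # heavier near the sink
        w_delay = 0.5 * h + 0.1                    # heavier far from the sink
        w_congestion = 1.0 - (w_energy + w_delay)  # remainder (0.3 here)
        return w_energy * energy + w_delay * delay + w_congestion * congestion

Because the weights vary per agent rather than being fixed network-wide, a Q-routing update driven by this reward lets each node's trade-off reflect its position in the topology.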

Leveraging Reinforcement Learning for Generating Construction Workers' Moving Path: Opportunities and Challenges

  • Kim, Minguk; Kim, Tae Wan
    • International Conference on Construction Engineering and Project Management, 2022.06a, pp.1085-1092, 2022
  • Travel distance is a parameter commonly used in the objective function of Construction Site Layout Planning (CSLP) automation models. To obtain travel distance, common approaches such as linear distance, shortest-distance algorithms, visibility graphs, and access road paths concentrate only on identifying the shortest path. However, humans do not necessarily follow the single shortest path; within a reasonable range, they may choose a safer and more comfortable path according to their situation. Thus, paths generated by these approaches may differ from the actual paths of workers, which may reduce the reliability of the optimized construction site layout. To solve this problem, this paper adopts reinforcement learning (RL), inspired by concepts from cognitive science and behavioral psychology, to generate realistic paths that mimic the decision-making and wayfinding behavior of workers on a construction site. To this end, human wayfinding tendencies and the characteristics of the walking environment of construction sites are investigated, and the importance of taking these into account when simulating the actual paths of workers is emphasized. Furthermore, a simulation developed by mapping the identified tendencies to the reward design shows that the RL agent behaves like a real construction worker. Based on the research findings, opportunities and challenges are identified. This study contributes to simulating the likely paths of workers with deep RL, which can be used to calculate the travel distance in CSLP automation models and thus provide more reliable solutions.
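
The abstract describes mapping wayfinding tendencies into the reward design without giving the mapping itself; the sketch below is a hypothetical grid-world step reward in that spirit. The Cell fields, coefficients, and reward terms are all assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class Cell:
        is_hazard_zone: bool = False     # e.g., beneath crane loads, near excavations
        is_rough_terrain: bool = False   # muddy or obstructed ground
        on_marked_walkway: bool = False  # designated pedestrian route

    def step_reward(cell: Cell, moved_distance: float, goal_reached: bool) -> float:
        """Step reward trading distance off against safety and comfort, so
        the agent accepts a modest detour for a safer, easier route."""
        r = -0.01 * moved_distance       # mild pressure toward shorter paths
        if cell.is_hazard_zone:
            r -= 1.0
        if cell.is_rough_terrain:
            r -= 0.2
        if cell.on_marked_walkway:
            r += 0.05
        if goal_reached:
            r += 10.0
        return r

Tuning the ratio between the distance penalty and the safety and comfort terms is what moves the learned paths away from the pure shortest path and toward the detours real workers take.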

Obstacle Avoidance for Unmanned Air Vehicles Using Monocular-SLAM with Chain-Based Path Planning in GPS Denied Environments

  • Bharadwaja, Yathirajam; Vaitheeswaran, S.M.; Ananda, C.M.
    • Journal of Aerospace System Engineering, v.14 no.2, pp.1-11, 2020
  • Detecting obstacles and generating a suitable path to avoid them in real time is a prime mission requirement for UAVs. In areas close to buildings and people, detecting obstacles in the path and estimating the vehicle's own position (egomotion) in GPS-degraded/denied environments are usually addressed with vision-based Simultaneous Localization and Mapping (SLAM) techniques. This poses both possibilities and challenges for feasible path generation under vehicle-dynamics constraints in the configuration space. In this paper, a near-real-time feasible path is generated within the ORB-SLAM framework using a chain-based path planning approach in a force field, with dynamic constraints on path length and minimum turn radius. The chain-based approach generates a set of nodes that move in a force field, permitting rapid modification of the path in real time as the reward function changes. This differs from the usual approach of generating potentials over the entire search space around the UAV; instead, a set of connected waypoints forms a simulated chain. The popular ORB-SLAM, well suited to real-time operation, is used to build the map of the environment and estimate the UAV position, and the UAV path is then generated continuously in the shortest time to navigate to the goal position. The principal contributions are (a) a chain-based path planning approach with built-in obstacle avoidance in conjunction with ORB-SLAM for the first time, (b) generation of the path with minimal overhead, and (c) implementation in near real time.
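
To make the chain-based idea concrete, here is a minimal relaxation step for a chain of waypoints in a force field. The force laws, gains, and ranges are assumptions for illustration rather than the paper's formulation, and the dynamic constraints (path length, minimum turn radius) would be enforced as additional projections after each step.

    import numpy as np

    def update_chain(chain, goal, obstacles, step=0.05, k_goal=1.0, k_obs=0.5):
        """One relaxation step of a waypoint chain (illustrative sketch).
        chain: (N, 3) array of waypoints; the endpoints are held fixed.
        Each interior node feels an attraction toward the goal, a repulsion
        from nearby mapped obstacles, and a spring force smoothing the chain."""
        new_chain = chain.copy()
        for i in range(1, len(chain) - 1):
            p = chain[i]
            to_goal = goal - p
            f = k_goal * to_goal / (np.linalg.norm(to_goal) + 1e-9)
            for obs in obstacles:
                d = p - obs
                dist = np.linalg.norm(d) + 1e-9
                if dist < 1.0:                       # repulsion range (assumed)
                    f += k_obs * d / dist**3
            f += 0.5 * (chain[i - 1] + chain[i + 1] - 2.0 * p)  # smoothing spring
            new_chain[i] = p + step * f
        return new_chain

Because only the node positions are updated, the plan can be re-relaxed cheaply whenever the map or the reward changes, which is what gives the approach its near-real-time character.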

Development of Stochastic Decision Model for Estimation of Optimal In-depth Inspection Period of Harbor Structures

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers, v.28 no.2, pp.63-72, 2016
  • An expected-discounted cost model based on the RRP (Renewal Reward Process), referred to as a stochastic decision model, has been developed to estimate the optimal period of in-depth inspection, one of the critical issues in the life-cycle maintenance management of harbor structures such as rubble-mound breakwaters. A mathematical model, formulated as a function of the probability distribution of the service life, simultaneously adopts PIM (Periodic Inspection and Maintenance) and CBIM (Condition-Based Inspection and Maintenance) policies so as to overcome the limitations of other models, and all monitoring and repair costs in the model are discounted over time. From an analytical solution derived under the condition of a constant failure-rate function, together with sensitivity analyses over a variety of distribution functions and conditions, it is confirmed that the present solution is more versatile than the existing solution, which was derived in a very simplified setting. Additionally, even in the case where the probability distribution of the service life is estimated through a stochastic process, the present model is well suited to interpreting the nonlinearity of the deterioration process. In particular, an MCS (Monte Carlo Simulation)-based sample path method is used to evaluate the parameters of a damage intensity function in the stochastic process. Finally, the stochastic decision model can satisfactorily be applied to the armor units of rubble-mound breakwaters: the optimal in-depth inspection periods can be determined by minimizing the expected total cost rate with respect to the behavioral features of the damage process, the serviceability limit level, and the consequence of failure of the structure.
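
The paper's discounted model is more elaborate, but it generalizes the classic undiscounted renewal-reward cost rate, which is worth stating as an anchor. In the sketch below, $c_I$ is the cost of a cycle ending in a planned inspection/repair at period $T$, $c_F$ the cost of a cycle ending in failure, and $F$ the service-life distribution with survival function $\bar{F} = 1 - F$; these symbols are illustrative, not the paper's notation.

    C(T) = \frac{c_I\,\bar{F}(T) + c_F\,F(T)}{\int_0^T \bar{F}(t)\,dt},
    \qquad T^{*} = \operatorname*{arg\,min}_{T > 0} C(T)

The paper's model additionally discounts monitoring and repair costs over time and combines periodic with condition-based inspection, but the optimal in-depth inspection period is likewise found by minimizing an expected cost rate of this form.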

Determination of Ship Collision Avoidance Path using Deep Deterministic Policy Gradient Algorithm

  • Kim, Dong-Ham; Lee, Sung-Uk; Nam, Jong-Ho; Furukawa, Yoshitaka
    • Journal of the Society of Naval Architects of Korea, v.56 no.1, pp.58-65, 2019
  • The stability, reliability, and efficiency of smart ships are important issues, as interest in autonomous ships has recently been high. An automatic collision avoidance system is an essential function of an autonomous ship: it detects the possibility of collision and automatically takes avoidance actions with economy and safety in mind. To construct an automatic collision avoidance system using reinforcement learning, this work mathematically formulates the sequential decision problem of ship collision as a Markov Decision Process (MDP). A reinforcement learning environment is constructed from the ship maneuvering equations, and the three key components of the MDP (state, action, and reward) are defined. The state uses parameters describing the relationship between own ship and target ship, the action is the perpendicular distance away from the target course, and the reward is defined as a function considering safety and economics. To solve the sequential decision problem, the Deep Deterministic Policy Gradient (DDPG) algorithm, which can express a continuous action space and search for an optimal action policy, is utilized. The collision avoidance system is then tested in an assumed 90° intersection encounter situation and yields satisfactory results.
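
The abstract specifies only that the reward weighs safety against economy; the sketch below is a hypothetical Python form of such a reward. The safety distance, coefficients, and linear penalty shapes are assumptions, not the paper's values.

    def collision_avoidance_reward(dist_to_target_ship, cross_track_offset,
                                   safe_dist=1852.0, k_safety=1.0, k_econ=1e-3):
        """Reward balancing safety and economy (illustrative sketch).
        dist_to_target_ship: current range to the target ship [m].
        cross_track_offset: perpendicular deviation from the planned course [m]."""
        r = 0.0
        if dist_to_target_ship < safe_dist:       # safety: penalize closing in
            r -= k_safety * (safe_dist - dist_to_target_ship) / safe_dist
        r -= k_econ * abs(cross_track_offset)     # economy: penalize deviation
        return r

Because the action (the course offset itself) is continuous, DDPG's actor-critic pair can optimize a reward of this kind directly, without discretizing the action space.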