• Title/Abstract/Keyword: Q learning

424 search results

Multiple Reward Reinforcement learning control of a mobile robot in home network environment

  • Kang, Dong-Oh; Lee, Jeun-Woo
    • Proceedings of ICCAS 2003, Institute of Control, Robotics and Systems (ICROS) / pp.1300-1304 / 2003
  • This paper deals with the control of a mobile robot in a home network environment. The home network allows the mobile robot to communicate with sensors, obtain their measurements, and adapt to changes in the environment. To maintain good control performance despite changes in the home network environment, we use a fuzzy inference system with multiple-reward reinforcement learning, which enables the mobile robot to pursue multiple control objectives and adapt itself to changes in the home network. A multiple-reward fuzzy Q-learning method is proposed for this purpose: multiple Q-values are maintained, and max-min optimization is applied to improve the fuzzy rules. To show the effectiveness of the proposed method, simulation results are given for home network environments such as LAN and wireless LAN.

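The max-min selection step described in the abstract above can be sketched as follows; the state/action dimensions, the two reward signals, and the toy interaction loop are illustrative assumptions, not details from the paper (which applies the idea inside a fuzzy inference system):

```python
import numpy as np

# One Q-table per objective; the chosen action maximizes the worst-case
# (minimum) value across objectives -- the max-min selection rule.
rng = np.random.default_rng(0)
n_objectives, n_states, n_actions = 2, 5, 3
Q = np.zeros((n_objectives, n_states, n_actions))
alpha, gamma = 0.1, 0.9

def select_action(state):
    # max-min optimization: best action under the least favorable objective
    worst_case = Q[:, state, :].min(axis=0)          # shape: (n_actions,)
    return int(worst_case.argmax())

def update(state, action, rewards, next_state):
    # independent Q-update per objective, each with its own reward signal
    for k in range(n_objectives):
        best_next = Q[k, next_state].max()
        Q[k, state, action] += alpha * (
            rewards[k] + gamma * best_next - Q[k, state, action])

# toy interaction loop with random transitions and two reward signals
for _ in range(200):
    s = int(rng.integers(n_states))
    a = select_action(s) if rng.random() > 0.3 else int(rng.integers(n_actions))
    update(s, a, rewards=(rng.random(), rng.random()),
           next_state=int(rng.integers(n_states)))
```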

Topic-Directed Web Spidering Using Reinforcement Learning

  • 임수연
    • Journal of Korean Institute of Intelligent Systems / Vol. 15, No. 4 / pp.395-399 / 2005
  • This paper proposes the HIGH-Q learning algorithm, based on reinforcement learning, for faster and more accurate retrieval of Web documents on a specific topic. The goal of reinforcement learning is to maximize the reward received from the environment, and a reinforcement learning agent learns through trial-and-error interaction with its external environment. To show that the proposed algorithm is fast and efficient in the given environment, we performed experiments comparing it with breadth-first search and evaluated the results. The results show that the reinforcement learning method, which uses discounted future rewards, reduces the number of pages that must be visited to find the correct answers, and therefore retrieves them both more accurately and more quickly.
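A minimal sketch of how a discounted-future-reward estimate can steer a spider's frontier, in the spirit of the abstract above; the link graph, relevance scores, and the `q_estimate` recursion are invented for illustration and are not the paper's HIGH-Q algorithm:

```python
import heapq

# tiny link graph and on-topic relevance scores (invented)
graph = {"root": ["a", "b"], "a": ["goal"], "b": ["c"], "c": [], "goal": []}
relevance = {"root": 0.0, "a": 0.3, "b": 0.1, "c": 0.0, "goal": 1.0}
gamma = 0.8

def q_estimate(page, depth=3):
    # discounted future reward: relevance now plus the best reachable later
    if depth == 0 or not graph[page]:
        return relevance[page]
    return relevance[page] + gamma * max(
        q_estimate(nxt, depth - 1) for nxt in graph[page])

def spider(start, target):
    # frontier ordered by estimated value (max-heap via negated priority)
    frontier = [(-q_estimate(start), start)]
    visited, fetched = set(), 0
    while frontier:
        _, page = heapq.heappop(frontier)
        if page in visited:
            continue
        visited.add(page)
        fetched += 1
        if page == target:
            return fetched
        for nxt in graph[page]:
            heapq.heappush(frontier, (-q_estimate(nxt), nxt))
    return fetched

pages_fetched = spider("root", "goal")
```

On this toy graph the value-guided spider fetches 3 pages before reaching the on-topic page, whereas breadth-first search would fetch all 4.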

Development of Semi-Active Control Algorithm Using Deep Q-Network

  • 김현수; 강주원
    • Journal of the Korean Association for Spatial Structures / Vol. 21, No. 1 / pp.79-86 / 2021
  • Control performance of a smart tuned mass damper (TMD) mainly depends on the control algorithm, and many control strategies have been proposed for semi-active control devices. Recently, machine learning has begun to be applied to the development of vibration control algorithms. In this study, reinforcement learning was employed to develop a semi-active control algorithm for a smart TMD composed of a magnetorheological (MR) damper. For this purpose, an 11-story building structure with a smart TMD was selected to construct the reinforcement learning environment, and a time history analysis of the example structure subjected to earthquake excitation was conducted in the learning procedure. A deep Q-network (DQN) was used as the learning agent; the command voltage sent to the MR damper is determined by the action produced by the DQN. Parametric studies on the hyper-parameters of the DQN were performed by numerical simulation. After adequate training with appropriate hyper-parameters, a DQN model for controlling the seismic response of the example structure with a smart TMD was obtained; the developed model can effectively control the smart TMD to reduce the seismic responses of the example structure.
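A minimal numpy stand-in for the decision loop described above: a linear (not deep) Q-approximator maps a structural-response state to one of several discrete MR-damper command voltages via epsilon-greedy selection and a semi-gradient update. All dimensions, voltage levels, and the toy dynamics are assumptions for illustration; the paper uses a deep Q-network and a full time-history analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
state_dim = 4                            # assumed response measurements
voltages = np.array([0.0, 2.5, 5.0])     # assumed candidate command voltages
W = rng.normal(0.0, 0.1, (len(voltages), state_dim))   # Q(s, a) = (W @ s)[a]
alpha, gamma, epsilon = 0.01, 0.95, 0.1

def act(state):
    # epsilon-greedy over the estimated value of each voltage level
    if rng.random() < epsilon:
        return int(rng.integers(len(voltages)))
    return int((W @ state).argmax())

def train_step(state, action, reward, next_state):
    # semi-gradient update toward the bootstrapped Q-learning target
    target = reward + gamma * (W @ next_state).max()
    td_error = target - W[action] @ state
    W[action] += alpha * td_error * state

# toy loop: the reward penalizes large responses (a stand-in for the
# seismic response of the example structure)
state = rng.normal(size=state_dim)
for _ in range(500):
    a = act(state)
    next_state = 0.9 * state + rng.normal(scale=0.1, size=state_dim)
    reward = -float(np.abs(next_state).sum())
    train_step(state, a, reward, next_state)
    state = next_state

command_voltage = float(voltages[act(state)])
```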

Optimizing Energy Efficiency in Mobile Ad Hoc Networks: An Intelligent Multi-Objective Routing Approach

  • Sun Beibei
    • IEMEK Journal of Embedded Systems and Applications / Vol. 19, No. 2 / pp.107-114 / 2024
  • Mobile ad hoc networks are self-configuring networks of mobile devices that communicate without relying on a fixed infrastructure. Traditional routing protocols in such networks struggle to select efficient and reliable routes because of the networks' dynamic nature, caused by the unpredictable mobility of nodes; this often results in a failure to meet the low-delay and low-energy-consumption requirements crucial for such networks. To overcome these challenges, this paper introduces a novel multi-objective, adaptive routing scheme based on the Q-learning reinforcement learning algorithm. The proposed scheme dynamically adjusts itself to measured network states such as traffic congestion and mobility, and uses Q-learning to select routes in a decentralized manner, considering factors such as energy consumption, load balancing, and the selection of stable links. We formulate the multi-objective optimization problem and discuss adaptive adjustment of the Q-learning parameters to handle the dynamic nature of the network. To speed up the learning process, the scheme incorporates informative shaped rewards that give the learning agents additional guidance toward better solutions. Implemented on top of the widely used AODV routing protocol, the proposed approach demonstrates better energy efficiency and message delivery delay than traditional AODV, even in highly dynamic network environments. These findings show the potential of reinforcement learning for efficient routing in ad hoc networks, paving the way for future advances in mobile ad hoc networking.
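The decentralized route selection with a shaped reward can be sketched as follows; the four-node topology, link costs, and reward weights are invented, and the real scheme runs on AODV with adaptively tuned parameters:

```python
import random

random.seed(0)
neighbors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
delay  = {("A", "B"): 1.0, ("A", "C"): 3.0, ("B", "D"): 1.0, ("C", "D"): 1.0}
energy = {("A", "B"): 0.5, ("A", "C"): 0.2, ("B", "D"): 0.5, ("C", "D"): 0.5}
# Q[node][next_hop]: estimated (negative) cost of reaching destination D
Q = {u: {v: 0.0 for v in neighbors[u]} for u in neighbors}
alpha, gamma, w_delay, w_energy = 0.2, 0.9, 1.0, 1.0

def hop_reward(u, v):
    # shaped per-hop reward: penalize both link delay and transmission energy
    return -(w_delay * delay[(u, v)] + w_energy * energy[(u, v)])

for _ in range(300):                     # routing episodes from A to D
    node = "A"
    while node != "D":
        if random.random() < 0.2:        # exploration
            nxt = random.choice(neighbors[node])
        else:                            # greedy next hop
            nxt = max(Q[node], key=Q[node].get)
        r = hop_reward(node, nxt)
        future = max(Q[nxt].values()) if Q[nxt] else 0.0
        Q[node][nxt] += alpha * (r + gamma * future - Q[node][nxt])
        node = nxt

best_next_hop = max(Q["A"], key=Q["A"].get)
```

With these costs the learned next hop from A is B, the route with the smaller combined delay-plus-energy penalty.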

A Learning-Based Algorithm for the Traveling Salesman Problem

  • 임준묵; 배성민; 서재준
    • Journal of the Korean Institute of Industrial Engineers / Vol. 32, No. 1 / pp.61-73 / 2006
  • This paper deals with the traveling salesman problem (TSP) with stochastic travel times. In practice, the travel time between demand points changes by day and time of day because of traffic interference and congestion. Since almost all previous studies focus on the TSP with deterministic travel times, their results are difficult to apply directly to logistics problems, many of which involve stochastic elements such as stochastic travel times; an efficient solution method for the TSP with stochastic travel times is therefore needed. Previous research shows that Q-learning can deal with stochastic environments and that a neural network can be used to calculate the Q-values of the Q-learning algorithm. In this paper, we suggest an algorithm for the TSP with stochastic travel times that integrates Q-learning and a neural network, and we evaluate its validity through computational experiments. From the simulation results, we conclude that the route obtained by the suggested algorithm gives comparatively reliable travel times in logistics situations with stochastic travel times.
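A tabular sketch of Q-learning on a tiny TSP with stochastic travel times (the paper pairs Q-learning with a neural network to approximate the Q-values; at this size a table suffices). The distance matrix and noise model are made up:

```python
import random

random.seed(0)
n = 4
mean_time = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]

def travel_time(i, j):
    # stochastic travel time: the mean distorted by traffic conditions
    return mean_time[i][j] * random.uniform(0.8, 1.2)

Q = {}                               # Q[(city, visited)][next_city]
alpha, gamma, epsilon = 0.1, 1.0, 0.2

def q_table(city, visited):
    cands = [c for c in range(n) if c not in visited]
    return Q.setdefault((city, visited), {c: 0.0 for c in cands})

for _ in range(3000):                # training episodes starting at city 0
    city, visited = 0, frozenset([0])
    while len(visited) < n:
        q = q_table(city, visited)
        nxt = (random.choice(list(q)) if random.random() < epsilon
               else max(q, key=q.get))
        r = -travel_time(city, nxt)
        nv = visited | {nxt}
        if len(nv) == n:             # close the tour back to city 0
            r -= travel_time(nxt, 0)
            future = 0.0
        else:
            future = max(q_table(nxt, nv).values())
        q[nxt] += alpha * (r + gamma * future - q[nxt])
        city, visited = nxt, nv

# greedy tour extracted from the learned Q-values
city, visited, tour = 0, frozenset([0]), [0]
while len(visited) < n:
    q = q_table(city, visited)
    city = max(q, key=q.get)
    tour.append(city)
    visited = visited | {city}
```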

Variable Selection of Feature Pattern using SVM-based Criterion with Q-Learning in Reinforcement Learning

  • 김차영
    • Journal of Internet Computing and Services / Vol. 20, No. 4 / pp.21-27 / 2019
  • For the large volumes of data collected in RNA sequencing (RNA-seq), selecting clearly discriminative feature patterns is useful, but defining such features is not easy. This is because big data inherently contains a great deal of redundancy. To address this issue, computational approaches to feature selection have adopted various machine learning techniques such as random forests, K-nearest neighbors, and support vector machines (SVM). Among these, the SVM-based recursive feature elimination algorithm (SVM-RFE) has been studied steadily by many researchers. This paper proposes a discriminative feature selection method for big-data processing of RNA-seq data that combines SVM-RFE with Q-learning from reinforcement learning, extracting in fine detail the vectors whose importance increases. We applied the proposed algorithm to publicly available ribosomal protein cluster data from big-data repositories such as NCBI-GEO, and compared its results with those of a previously published algorithm applying Welch's t-test to SVM. The comparison shows that the algorithm proposed in this paper performs somewhat better.
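The SVM-RFE backbone the abstract builds on can be sketched as follows: train a linear classifier, drop the feature with the smallest weight magnitude, refit, and repeat. A plain numpy logistic regression stands in for the SVM, the data is synthetic, and the paper's Q-learning refinement is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 6
X = rng.normal(size=(n, d))
true_w = np.array([3.0, -2.0, 0.0, 0.0, 0.0, 0.0])   # only features 0, 1 matter
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def fit_linear(X, y, steps=500, lr=0.1):
    # plain gradient-descent logistic regression as the SVM stand-in
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

active = list(range(d))
ranking = []                             # elimination order, least important first
while len(active) > 1:
    w = fit_linear(X[:, active], y)
    drop = active[int(np.abs(w).argmin())]   # RFE criterion: smallest |weight|
    ranking.append(drop)
    active.remove(drop)
ranking.append(active[0])                # the last survivor is most important

top_two = set(ranking[-2:])
```

On this synthetic data the two informative features survive to the end of the elimination.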

Interaction and Flow as the Antecedents of e-Learner Satisfaction

  • 문철우; 김재현
    • The Journal of Korean Association of Computer Education / Vol. 14, No. 3 / pp.63-72 / 2011
  • For working students who pursue their studies in cyberspace, course satisfaction is a highly dynamic and multidimensional process that reflects each individual's academic needs and abilities. This study analyzes the direct and indirect effects on satisfaction of instructor-student interaction, student-student interaction, flow, content quality and structure, real-time Q&A, and offline supplementary lectures that complement cyber lectures, targeting working students enrolled in a cyber graduate school of business. Rather than focusing on testing causal relationships, we compared the strength of the causal paths between groups, centered on courses students perceived as interesting versus those they judged difficult. The analysis showed that for the group answering about courses perceived as difficult, the path coefficients from instructor-student interaction to satisfaction, from content quality to flow, from Q&A to instructor-student interaction, and from Q&A to student-student interaction were higher than for the group that chose courses perceived as interesting. Conversely, the path coefficients from student-student interaction to satisfaction and from content structure to flow were higher for the group that chose interesting courses. Based on these results, implications for e-learning design are briefly presented.


Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk; Cho, Soo-Sin; Kim, In-Cheol
    • Korea Intelligent Information Systems Society conference proceedings / The Pacific Asian Conference on Intelligent Systems 2001 / pp.59-64 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such an environment. Reinforcement learning is the branch of machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward; it differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment, yet can still learn the optimal policy if the agent can visit every state-action pair infinitely often. The biggest problem of monolithic reinforcement learning, however, is that straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a weight according to its contribution to the reward. In addition to handling large state spaces effectively, AMMQL can therefore adapt to environmental changes better than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to the dynamic positioning of robot soccer agents.

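The adaptive mediation idea can be sketched as follows: each module keeps its own Q-table, and the mediator combines their action values with weights that shift toward modules contributing more reward. The sizes, reward signals, and the specific weight-update rule here are assumptions, not the paper's formulation:

```python
import random

random.seed(0)
n_states, n_actions, n_modules = 4, 3, 2
Q = [[[0.0] * n_actions for _ in range(n_states)] for _ in range(n_modules)]
weight = [1.0 / n_modules] * n_modules   # mediation weights, one per module
alpha, gamma, beta = 0.1, 0.9, 0.05

def mediated_action(s):
    # flexible merge: weighted sum of module Q-values (plain MQL merges
    # module outputs by a fixed rule instead)
    combined = [sum(weight[m] * Q[m][s][a] for m in range(n_modules))
                for a in range(n_actions)]
    return combined.index(max(combined))

def update(s, a, rewards, s2):
    # rewards[m]: the share of the global reward attributed to module m
    for m in range(n_modules):
        best = max(Q[m][s2])
        Q[m][s][a] += alpha * (rewards[m] + gamma * best - Q[m][s][a])
    # shift mediation weights toward modules that contribute more reward
    total = sum(rewards) or 1.0
    for m in range(n_modules):
        weight[m] += beta * (rewards[m] / total - weight[m])

# toy loop: module 0's reward signal dominates, so its weight should grow
for _ in range(300):
    s = random.randrange(n_states)
    a = (mediated_action(s) if random.random() > 0.3
         else random.randrange(n_actions))
    update(s, a, (random.random(), 0.2 * random.random()),
           random.randrange(n_states))
```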

Dynamic CBDT: Extension of CBDT via the Reinforcement Method of Q-learning

  • 진영균; 장형수
    • Korean Institute of Information Scientists and Engineers (KIISE) conference proceedings / 2006 Fall Conference, Vol. 33, No. 2 (B) / pp.194-199 / 2006
  • This paper proposes "Dynamic CBDT", a new decision-making algorithm that extends "Case-based Decision Theory" (CBDT), an algorithm for decision making under uncertainty, to sequences of dynamically linked decision problems by applying the reinforcement technique of Q-learning, a representative reinforcement learning method. The efficiency of Dynamic CBDT relative to the CBDT algorithm is verified through Tetris experiments.

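An illustrative sketch of the combination described in the abstract: actions are valued by similarity-weighted utilities of remembered cases (CBDT), and stored utilities are revised toward a Q-learning-style bootstrapped target. The similarity kernel, toy task, and update constants are invented; the paper's experiments use Tetris:

```python
import math
import random

random.seed(0)
memory = []                           # stored cases: (problem, action, utility)
actions = [0, 1]
alpha, gamma = 0.3, 0.9

def similarity(p, q):
    # assumed similarity measure between decision problems
    return math.exp(-10.0 * (p - q) ** 2)

def cbdt_value(problem, action):
    # CBDT: similarity-weighted average of remembered utilities
    cases = [(similarity(problem, p), u) for p, a, u in memory if a == action]
    if not cases:
        return 0.0
    total = sum(w for w, _ in cases)
    return sum(w * u for w, u in cases) / total

def best_action(problem):
    return max(actions, key=lambda a: cbdt_value(problem, a))

def dynamic_update(problem, action, reward, next_problem):
    # Q-learning-style target: observed reward plus discounted value of
    # the next problem, used to revise the utilities of similar cases
    target = reward + gamma * max(cbdt_value(next_problem, a) for a in actions)
    for i, (p, a, u) in enumerate(memory):
        if a == action:
            w = similarity(problem, p)
            memory[i] = (p, a, u + alpha * w * (target - u))
    memory.append((problem, action, reward))

# toy task: action 1 pays off for problems above 0.5, action 0 below
problem = random.random()
for _ in range(200):
    a = best_action(problem) if random.random() > 0.3 else random.choice(actions)
    reward = 1.0 if (problem > 0.5) == (a == 1) else -1.0
    nxt = random.random()
    dynamic_update(problem, a, reward, nxt)
    problem = nxt
```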

A Reinforcement Learning-Based Method for the Cooperative Control of Mobile Robots

  • 김재희; 조재승; 권인소
    • Institute of Control, Robotics and Systems (ICROS) conference proceedings / 1997 Korea Automatic Control Conference; KEPCO Seoul Training Center; 17-18 Oct. 1997 / pp.648-651 / 1997
  • This paper proposes methods for the cooperative control of multiple mobile robots and constructs a robotic soccer system in which cooperation is implemented as a pass play between two robots. To play a soccer game, elementary actions such as shooting and moving were designed, and Q-learning, one of the popular methods for reinforcement learning, is used to determine which actions to take. In simulation, learning succeeds when the initial positions of the ball and robots are arranged deliberately, so that cooperative play can be accomplished.
