• Title/Abstract/Keywords: Deep Q-Network

심층 순환 Q 네트워크 기반 목적 지향 대화 시스템 (Goal Oriented Dialogue System Based on Deep Recurrent Q Network)

  • 박건우;김학수
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리)
    • /
    • 한국정보과학회언어공학연구회 2018년도 제30회 한글 및 한국어 정보처리 학술대회
    • /
    • pp.147-150
    • /
    • 2018
  • A goal-oriented dialogue system is built by combining sub-models such as natural language understanding, a dialogue manager, and natural language generation, which makes it vulnerable to error propagation from the lower-level models. To address this problem, we combine the natural language understanding model and the dialogue manager into a single network and propose a deep Q-network that is robust to such errors. In this paper, we propose a goal-oriented dialogue system based on a deep recurrent Q-network, which applies a deep Q-network to an LSTM, a recurrent neural network capable of capturing the overall flow of a dialogue. Experimental results show that the proposed deep recurrent Q-network achieves 1.0%p and 6.7%p higher precision than an LSTM and a deep Q-network, respectively.
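The combination described above, an LSTM that tracks the dialogue flow with a Q-value head over system actions, can be pictured with a minimal sketch. The layer sizes, action count, and utterance encoding below are assumptions for illustration, not the authors' configuration.

```python
# Minimal DRQN sketch (hypothetical sizes): an LSTM encodes the dialogue history
# turn by turn, and a linear head maps each hidden state to Q-values over system
# actions, so NLU and dialogue management share one network.
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, utterance_dim=300, hidden_dim=128, num_actions=20):
        super().__init__()
        self.lstm = nn.LSTM(utterance_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, utterances, hidden=None):
        # utterances: (batch, turns, utterance_dim) -- encoded user turns
        out, hidden = self.lstm(utterances, hidden)
        q_values = self.q_head(out)   # one Q-value per turn and action
        return q_values, hidden

# Example: Q-values for a batch of 2 dialogues, 5 turns each
model = DRQN()
q, _ = model(torch.randn(2, 5, 300))
print(q.shape)  # torch.Size([2, 5, 20])
```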

지도학습과 강화학습을 이용한 준능동 중간층면진시스템의 최적설계 (Optimal Design of Semi-Active Mid-Story Isolation System using Supervised Learning and Reinforcement Learning)

  • 강주원;김현수
    • 한국공간구조학회논문집
    • /
    • Vol. 21, No. 4
    • /
    • pp.73-80
    • /
    • 2021
  • A mid-story isolation system was proposed for seismic response reduction of high-rise buildings and showed good control performance. The control performance of a mid-story isolation system can be enhanced by introducing semi-active control devices into the isolation system. The seismic response reduction capacity of a semi-active mid-story isolation system mainly depends on the control algorithm. In this study, an AI (artificial intelligence)-based control algorithm was developed for the control of a semi-active mid-story isolation system. An actual structure, the Shiodome Sumitomo building in Japan, which has a mid-story isolation system, was used as the example structure, and an MR (magnetorheological) damper was used to form the semi-active mid-story isolation system in the example model. In the numerical simulation, a seismic response prediction model was generated by a supervised learning model, an RNN (recurrent neural network). The Deep Q-network (DQN) reinforcement learning algorithm was employed to develop the control algorithm. The numerical simulation results showed that the DQN algorithm can effectively control a semi-active mid-story isolation system, successfully reducing seismic responses.
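As a rough illustration of how a DQN can drive a semi-active device, the sketch below performs one DQN update for an agent whose discrete actions are MR-damper command levels. The state variables, network sizes, and discretization are assumed for illustration; the transition batch would come from a structural response simulator, which is not modeled here.

```python
# Hedged sketch of a one-step DQN update for a semi-active device: the action is
# a discrete MR-damper command level (assumed discretization, not the paper's).
import torch
import torch.nn as nn

NUM_COMMAND_LEVELS = 5   # assumed discretization of the damper command voltage
STATE_DIM = 4            # e.g. displacements/velocities around the isolation layer (assumed)
GAMMA = 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_COMMAND_LEVELS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_COMMAND_LEVELS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(states, actions, rewards, next_states):
    # Q(s, a) for the actions actually taken
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # standard DQN target: r + gamma * max_a' Q_target(s', a')
        target = rewards + GAMMA * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with a random batch of 8 transitions
loss = dqn_update(torch.randn(8, STATE_DIM),
                  torch.randint(0, NUM_COMMAND_LEVELS, (8,)),
                  torch.randn(8), torch.randn(8, STATE_DIM))
```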

심층 강화학습 기반의 대학 전공과목 추천 시스템 (Recommendation System of University Major Subject based on Deep Reinforcement Learning)

  • 임덕선;민연아;임동균
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 23, No. 4
    • /
    • pp.9-15
    • /
    • 2023
  • Existing simple statistics-based recommendation systems use only students' course history data, so students have great difficulty finding the classes they prefer. To address this, this study proposes a personalized major-subject recommendation system based on deep reinforcement learning (DRL). The system measures the similarity between students based on structured data such as department, academic year, and course history, and recommends the most suitable major subjects by jointly considering information on each subject and students' course evaluations. This paper shows that the DRL-based recommendation system provides useful information to university students choosing major subjects and achieves better performance than a statistics-based recommendation system. In simulations, the deep reinforcement learning-based recommendation system improved the course prediction rate by about 20% compared with the statistics-based recommendation system. Based on these results, we propose a new system that provides personalized subject recommendations reflecting students' course evaluations. This system is expected to be of great help to students in finding major subjects that match their preferences and goals.
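As a loose illustration only (all feature names, dimensions, and the course catalog below are hypothetical, not the paper's design), the sketch combines a cosine-similarity measure between student feature vectors with a Q-network that scores candidate major subjects.

```python
# Hedged sketch: cosine similarity between student feature vectors, plus a
# Q-network that scores candidate major subjects for one student state.
import torch
import torch.nn as nn
import torch.nn.functional as F

def student_similarity(a, b):
    # a, b: feature vectors built from department, year, course history, etc. (assumed encoding)
    return F.cosine_similarity(a.unsqueeze(0), b.unsqueeze(0)).item()

class CourseQNet(nn.Module):
    def __init__(self, state_dim=32, num_courses=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_courses))

    def forward(self, state):
        return self.net(state)   # one Q-value per candidate course

student_state = torch.randn(32)
top_k = torch.topk(CourseQNet()(student_state), k=5).indices
print(top_k)  # indices of the 5 highest-scoring courses to recommend
```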

Resource Allocation Strategy of Internet of Vehicles Using Reinforcement Learning

  • Xi, Hongqi;Sun, Huijuan
    • Journal of Information Processing Systems
    • /
    • Vol. 18, No. 3
    • /
    • pp.443-456
    • /
    • 2022
  • An efficient and reasonable resource allocation strategy can greatly improve the service quality of the Internet of Vehicles (IoV). However, most current allocation methods suffer from an overestimation problem, and it is difficult for them to provide high-performance IoV network services. To solve this problem, this paper proposes a network resource allocation strategy based on the double deep Q-network (DDQN) model. First, the method builds a refined IoV model, including a communication model, a user-layer computing model, an edge-layer offloading model, and a mobility model, close to actual complex IoV application scenarios. Then, the DDQN model is used to solve the mathematical model of resource allocation. By decoupling the selection of the target Q-value action from the calculation of the target Q-value, overestimation is avoided, so the method can provide higher-quality network services and ensure superior computing and processing performance in actual complex scenarios. Finally, simulation results show that the proposed method keeps the network delay within 65 ms and shows excellent network performance in highly concurrent, complex scenarios with a task data volume of 500 kbits.
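The decoupling mentioned in the abstract, selecting the next action with the online network and evaluating it with the target network, is the core of DDQN's fix for overestimation. A minimal sketch, with state and action sizes chosen arbitrarily rather than taken from the paper:

```python
# DDQN target computation: the online network selects the next action, the
# target network evaluates it. Plain DQN would instead take the max over the
# target network's own Q-values, which causes overestimation.
import torch
import torch.nn as nn

state_dim, num_actions, gamma = 10, 6, 0.99   # assumed sizes, not from the paper
online_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, num_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, num_actions))

def ddqn_target(rewards, next_states, dones):
    with torch.no_grad():
        # 1) action selection with the online network
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # 2) action evaluation with the target network
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q

# Example shapes: a batch of 32 transitions
targets = ddqn_target(torch.zeros(32), torch.randn(32, state_dim), torch.zeros(32))
print(targets.shape)  # torch.Size([32])
```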

스마트 제어알고리즘 개발을 위한 강화학습 리워드 설계 (Reward Design of Reinforcement Learning for Development of Smart Control Algorithm)

  • 김현수;윤기용
    • 한국공간구조학회논문집
    • /
    • Vol. 22, No. 2
    • /
    • pp.39-46
    • /
    • 2022
  • Recently, machine learning has been widely used to solve optimization problems in various engineering fields. In this study, machine learning is applied to the development of a control algorithm for a smart control device for the reduction of seismic responses. For this purpose, the Deep Q-network (DQN) reinforcement learning algorithm was employed to develop the control algorithm. A single-degree-of-freedom (SDOF) structure with a smart tuned mass damper (TMD) was used as the example structure. The smart TMD system was composed of an MR (magnetorheological) damper instead of a passive damper. The reward design of the reinforcement learning mainly affects the control performance of the smart TMD, and various hyper-parameters were investigated to optimize the control performance of the DQN-based control algorithm. Usually, decreasing the time step of a numerical simulation is desirable for increasing the accuracy of the simulation results. However, the numerical simulation results showed that decreasing the time step for reward calculation might degrade the control performance of the DQN-based control algorithm. Therefore, a proper time step for reward calculation should be selected in the DQN training process.
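The time-step observation in the abstract can be pictured as follows: the structural response is simulated at a fine step, while the reward handed to the DQN agent is aggregated over a coarser reward interval. The reward definition, interval lengths, and response signals below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the time-step issue: simulate at a fine dt, but aggregate the
# reward over a coarser reward interval before passing it to the DQN agent.
import numpy as np

SIM_DT = 0.001        # simulation time step [s] (assumed)
REWARD_DT = 0.02      # reward / control decision time step [s] (assumed)
STEPS_PER_REWARD = int(REWARD_DT / SIM_DT)

def reward_from_window(uncontrolled_disp, controlled_disp):
    # Reward = reduction of peak displacement over the reward window, relative
    # to the uncontrolled response (one common choice, assumed here).
    return float(np.max(np.abs(uncontrolled_disp)) - np.max(np.abs(controlled_disp)))

# Example: accumulate 20 simulation sub-steps into one reward sample
unc = np.random.randn(STEPS_PER_REWARD) * 0.05
con = unc * 0.6
print(reward_from_window(unc, con))
```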

딥 러닝을 이용한 자동 댓글 생성에 관한 연구 (A Study on Automatic Comment Generation Using Deep Learning)

  • 최재용;성소윤;김경철
    • 한국게임학회 논문지
    • /
    • Vol. 18, No. 5
    • /
    • pp.83-92
    • /
    • 2018
  • Recently, deep learning research in many fields has produced results approaching human judgment. In the game industry, online communities and SNS have become important enough to determine whether a game succeeds. This study uses deep learning to build a system that can participate in online communities and SNS, with the goal of reading text written by people online, generating responses to it, and posting them to Twitter according to a schedule. We built models that generate text and posting schedules using recurrent neural networks (RNNs), and implemented a program that feeds news titles into the models at the scheduled times, receives the generated comments, and posts them to Twitter. The results of this study are expected to be applicable to revitalizing online game communities, Q&A services, and the like.
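A minimal sketch of RNN-based comment generation (a GRU language model seeded with an encoded news title and sampling one token at a time); the vocabulary, dimensions, and sampling scheme are assumptions, not the model described in the paper.

```python
# Hedged sketch: a GRU language model takes an encoded news title as its initial
# hidden state and samples a reply token by token until an end-of-sequence token.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 5000, 128, 256   # assumed sizes

embed = nn.Embedding(VOCAB, EMB)
gru = nn.GRU(EMB, HID, batch_first=True)
out_proj = nn.Linear(HID, VOCAB)

def generate(title_hidden, bos_id=1, eos_id=2, max_len=30):
    # title_hidden: (1, 1, HID) encoding of the news headline (assumed encoder output)
    tokens, h = [bos_id], title_hidden
    for _ in range(max_len):
        x = embed(torch.tensor([[tokens[-1]]]))      # (1, 1, EMB)
        out, h = gru(x, h)
        probs = torch.softmax(out_proj(out[:, -1]), dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        if nxt == eos_id:
            break
        tokens.append(nxt)
    return tokens[1:]

print(generate(torch.zeros(1, 1, HID)))  # sampled comment as token ids
```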

Deep Reinforcement Learning-Based Edge Caching in Heterogeneous Networks

  • Yoonjeong, Choi; Yujin, Lim
    • Journal of Information Processing Systems
    • /
    • Vol. 18, No. 6
    • /
    • pp.803-812
    • /
    • 2022
  • With the increasing number of mobile device users worldwide, utilizing mobile edge computing (MEC) devices close to users for content caching can reduce transmission latency compared with receiving content from a server or the cloud. However, because MEC devices have limited storage capacity, it is necessary to determine the types and sizes of content to be cached. In this study, we investigate a caching strategy that increases the hit ratio from small base stations (SBSs) for mobile users in a heterogeneous network consisting of one macro base station (MBS) and multiple SBSs. If there are several SBSs that users can access, the hit ratio can be improved by reducing duplicate content and increasing the diversity of content across SBSs. We propose a Deep Q-Network (DQN)-based caching strategy that considers time-varying content popularity and content redundancy across multiple SBSs. Content is stored in the SBSs in a divided form using maximum distance separable (MDS) codes to enhance content diversity. Experiments in various environments show that the proposed caching strategy outperforms the other methods in terms of hit ratio.
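A rough sketch of the caching decision described above: the SBS state combines time-varying content popularity with how much of each content is already held (as MDS-coded fragments) at neighboring SBSs, and the DQN picks the next content to cache. The sizes and state layout are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch of a DQN caching decision at one SBS: the state concatenates
# content popularity with the fraction of each content cached nearby, and the
# action selects which content to (partially) cache next.
import torch
import torch.nn as nn

NUM_CONTENTS = 50
STATE_DIM = 2 * NUM_CONTENTS       # [popularity | neighbor redundancy] per content (assumed)

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, NUM_CONTENTS))

popularity = torch.rand(NUM_CONTENTS)        # e.g. from a Zipf-like popularity model
neighbor_cached = torch.rand(NUM_CONTENTS)   # fraction of MDS fragments held at nearby SBSs
state = torch.cat([popularity, neighbor_cached])

action = q_net(state).argmax().item()        # content index to cache next
print(f"cache MDS fragments of content {action}")
```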

다중 에이전트 강화학습을 이용한 RC보 최적설계 기술개발 (Development of Optimal Design Technique of RC Beam using Multi-Agent Reinforcement Learning)

  • 강주원;김현수
    • 한국공간구조학회논문집
    • /
    • Vol. 23, No. 2
    • /
    • pp.29-36
    • /
    • 2023
  • Reinforcement learning (RL) is widely applied in various engineering fields. In particular, RL has shown successful performance for control problems such as vehicles, robotics, and active structural control systems. However, little research on the application of RL to optimal structural design has been conducted to date. In this study, the possibility of applying RL to the structural design of a reinforced concrete (RC) beam was investigated. The RC beam structural design problem introduced in a previous study was used for a comparative study. Deep Q-network (DQN) is a well-known RL algorithm that performs well in discrete action spaces, so it was used in this study. The action of the DQN agent must represent the design variables of the RC beam; however, there are too many design variables to represent with the action of a conventional DQN. To solve this problem, a multi-agent DQN was used in this study. For a more effective reinforcement learning process, DDQN (double Q-learning), an advanced version of the conventional DQN, was employed. The multi-agent DDQN was trained for the optimal structural design of RC beams satisfying ACI 318 (American Concrete Institute) without any hand-labeled dataset. The five DDQN agents provide actions for the beam width, beam depth, main rebar size, number of main rebars, and shear stirrup size, respectively. The five agents were trained for 10,000 episodes, and the performance of the multi-agent DDQN was evaluated with 100 test design cases. This study shows that the multi-agent DDQN algorithm can successfully provide structural design results for RC beams.
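The five-agent split can be sketched as follows: each DDQN agent owns one design variable and selects an index into its own discrete catalog, and the five selections together form one candidate beam design. The catalog values and state features below are placeholders, not ACI 318 tables or the paper's settings.

```python
# Hedged sketch of the five-agent action split for RC beam design: one small
# Q-network per design variable, each with its own discrete catalog of options.
import torch
import torch.nn as nn

catalogs = {
    "beam_width_mm": [250, 300, 350, 400],
    "beam_depth_mm": [400, 450, 500, 550, 600],
    "main_rebar_size": ["D16", "D19", "D22", "D25"],
    "num_main_rebar": [2, 3, 4, 5, 6],
    "stirrup_size": ["D10", "D13"],
}

STATE_DIM = 8   # e.g. span, loads, material strengths (assumed features)
agents = {name: nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                              nn.Linear(64, len(options)))
          for name, options in catalogs.items()}

state = torch.randn(STATE_DIM)
design = {name: catalogs[name][agents[name](state).argmax().item()]
          for name in catalogs}
print(design)   # one candidate beam design assembled from the five agents
```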

정리정돈을 위한 Q-learning 기반의 작업계획기 (Tidy-up Task Planner based on Q-learning)

  • 양민규;안국현;송재복
    • 로봇학회논문지
    • /
    • Vol. 16, No. 1
    • /
    • pp.56-63
    • /
    • 2021
  • As the use of robots in service areas increases, research has been conducted to replace human tasks in daily life with robots. Among them, this study focuses on the tidy-up task on a desk using a robot arm. The order in which tidy-up motions are carried out has a great impact on the success rate of the task. Therefore, in this study, a neural network-based method for determining the priority of tidy-up motions from the input image is proposed. Reinforcement learning, which shows good performance in sequential decision-making processes, is used to train such a task planner. The training process is conducted in a virtual tidy-up environment configured the same as the actual tidy-up environment. To transfer the learning results from the virtual environment to the actual environment, the input image is preprocessed into a segmented image. In addition, using a neural network that excludes unnecessary tidy-up motions from the priority during the tidy-up operation increases the success rate of the task planner. Experiments were conducted in the real world to verify the proposed task planning method.
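The exclusion of unnecessary motions mentioned above can be sketched as a masking step: Q-values predicted from the segmented image are masked for motions that are not needed before the priority ordering is taken. The motion count and validity flags below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: mask out unnecessary tidy-up motions before ranking by Q-value.
import torch

num_motions = 6
q_values = torch.randn(num_motions)                          # output of the task-planner network
valid = torch.tensor([1, 1, 0, 1, 0, 1], dtype=torch.bool)   # 0 = unnecessary motion

masked_q = q_values.masked_fill(~valid, float("-inf"))
priority = torch.argsort(masked_q, descending=True)
print(priority[: int(valid.sum())])   # execute only the valid motions, highest Q first
```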

MEC 환경에서 심층 강화학습을 이용한 오프로딩 기법의 성능비교 (Performance Comparison of Deep Reinforcement Learning based Computation Offloading in MEC)

  • 문성원;임유진
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2022년도 춘계학술발표대회
    • /
    • pp.52-55
    • /
    • 2022
  • With the exponential growth of smart mobile devices in the 5G era, multi-access edge computing (MEC) has emerged as a promising technology. To provide computation-intensive services within a low latency budget, offloading to MEC servers is being studied; in particular, offloading in MEC system environments where the task arrival rate and the wireless channel state are stochastic has attracted attention. In this paper, to minimize a vehicle's power consumption and latency, we propose deep reinforcement learning-based offloading schemes that allocate computing resources for local execution and transmission power for offloading. We compare and analyze a Deep Deterministic Policy Gradient (DDPG)-based scheme and a Deep Q-network (DQN)-based scheme in terms of the vehicle's power consumption and queuing delay.
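The action-space difference between the two compared schemes can be sketched as follows: the DQN agent selects one of a few discretized transmit-power levels, while the DDPG actor outputs a continuous allocation. The state features, power range, and network sizes are assumptions for illustration, not the paper's models.

```python
# Hedged sketch of the DQN vs. DDPG action spaces for offloading power control:
# DQN picks a discrete power level, the DDPG actor outputs a continuous fraction.
import torch
import torch.nn as nn

STATE_DIM = 6                              # e.g. queue length, channel gain, task arrival (assumed)
POWER_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0]   # watts, assumed discretization for the DQN scheme

dqn = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, len(POWER_LEVELS)))
ddpg_actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                           nn.Linear(64, 1), nn.Sigmoid())   # fraction of the maximum power

state = torch.randn(STATE_DIM)
dqn_power = POWER_LEVELS[dqn(state).argmax().item()]
ddpg_power = 2.0 * ddpg_actor(state).item()   # scale to the same 0-2 W range
print(dqn_power, ddpg_power)
```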