• Title/Summary/Keyword: Deep Reinforcement Learning

Enhancing Location Privacy through P2P Network and Caching in Anonymizer

  • Liu, Peiqian;Xie, Shangchen;Shen, Zihao;Wang, Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1653-1670 / 2022
  • The fear that location privacy may be compromised greatly hinders the development of location-based services. Accordingly, several schemes for location privacy protection based on a distributed peer-to-peer architecture have been proposed. Most of them assume that mobile terminals trust each other, which does not match realistic scenarios, and they cannot specify the required level of location privacy protection. This paper therefore proposes a scheme combining location-attribute-based security authentication with a private data-sharing group, so that mobile terminals in the peer-to-peer network trust each other while a trusted-but-curious terminal cannot access the initiator's query request. A new identifier is designed to allow mobile terminals to customize the protection strength. In addition, a caching mechanism that accounts for cache capacity is introduced, and a cache replacement policy based on deep reinforcement learning is proposed to reduce communication with the location-based service server and thereby protect location privacy. Experiments show the effectiveness and efficiency of the proposed scheme.
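
The abstract does not specify the state, action, or reward design of the learned cache replacement policy, so the following is only a minimal sketch, assuming the agent scores each cached query result and evicts the lowest-valued one; all feature names are illustrative.

```python
# Hypothetical sketch of a DRL-driven cache replacement step (names are illustrative).
import random
import torch
import torch.nn as nn

class EvictionQNet(nn.Module):
    """Scores each cached entry; the lowest-valued entry is evicted."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, entry_features: torch.Tensor) -> torch.Tensor:
        # entry_features: (num_entries, n_features) -> (num_entries,) Q-values
        return self.net(entry_features).squeeze(-1)

def choose_victim(qnet, entry_features, epsilon=0.1):
    """Epsilon-greedy choice of the cache entry to replace."""
    if random.random() < epsilon:
        return random.randrange(entry_features.shape[0])
    with torch.no_grad():
        q_values = qnet(entry_features)
    return int(torch.argmin(q_values))  # evict the least valuable entry

# Example: 5 cached query results described by (recency, frequency, distance) features
qnet = EvictionQNet(n_features=3)
victim = choose_victim(qnet, torch.rand(5, 3))
```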

Development of Artificial Intelligence Janggi Game based on Machine Learning Algorithm (기계학습 알고리즘 기반의 인공지능 장기 게임 개발)

  • Jang, Myeonggyu;Kim, Youngho;Min, Dongyeop;Park, Kihyeon;Lee, Seungsoo;Woo, Chongwoo
    • Journal of Information Technology Services / v.16 no.4 / pp.137-148 / 2017
  • Research on artificial intelligence has expanded explosively across many fields since the advent of AlphaGo. In particular, applications of multi-layer neural networks such as deep learning, together with various machine learning algorithms, are being actively studied. This paper describes the development of an artificial intelligence Janggi game based on a reinforcement learning algorithm and Monte Carlo Tree Search (MCTS) using accumulated game data. Previous artificial intelligence game programs were mostly developed with the minimax algorithm, which depends only on the results of tree search; they can neither use real data from game experts nor improve their performance through learning. Our approach overcomes these limitations as follows. First, we collect Janggi experts' game data, which reflect abundant real game results. Second, we build a graph structure from the game data, which removes redundant moves. Third, we apply the reinforcement learning and MCTS algorithms to select the best next move. In addition, the learned graph is stored via object serialization to provide continuity across games. Experiments were conducted in two settings: first, our system played against another AI-based system currently served on the internet; second, it played against Janggi experts with winning records above 50%. Experimental results show that our system's winning rate is significantly higher.
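
The paper's exact coupling of reinforcement learning with the expert-game graph is not described in the abstract; the sketch below only shows the standard UCT selection and backpropagation steps of MCTS that such a system typically builds on.

```python
# Minimal UCT selection and backpropagation steps of MCTS, as commonly used in board-game AI.
import math

class Node:
    def __init__(self, move=None, parent=None):
        self.move = move
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_value = 0.0

def uct_select(node, c=1.41):
    """Pick the child maximizing the UCT score: exploitation + exploration."""
    def score(child):
        if child.visits == 0:
            return float("inf")          # always try unvisited moves first
        exploit = child.total_value / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=score)

def backpropagate(node, value):
    """Propagate a simulation result up to the root."""
    while node is not None:
        node.visits += 1
        node.total_value += value
        node = node.parent
```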

Experimental Study of Hybrid Super Coating (HSC) and Cast Reinforcement for Masonry Wall (하이브리드 슈퍼코팅(HSC)과 유리섬유를 통한 조적조 내진보강 연구)

  • Lee, Ga Yoon;Moon, A hea;Lee, Seung Jun;Kim, Jae Hyun;Lee, Kihak
    • Journal of the Earthquake Engineering Society of Korea / v.25 no.5 / pp.213-221 / 2021
  • Many Korean masonry structures constructed since 1970 have been found to be vulnerable to earthquakes because they lack efficient lateral force resistance. Many studies have shown that brick and mortar suddenly experience brittle fracture and out-of-plane collapse once they reach the inelastic range. This study evaluated the seismic retrofitting of unreinforced masonry with Hybrid Super Coating (HSC) and Cast, manufactured using glass fiber. Four types of specimen were constructed and analyzed using compression, flexural tensile, diagonal compression, and triplet tests: the original specimen (BR-OR), one-layer HSC (BR-HS-O), two-layer HSC (BR-HS-B), and one-layer HSC with Cast (BR-CT-HS-O). The specimen responses were presented and discussed in terms of load-displacement curves, maximum strength, and crack propagation. The compressive strength of the retrofitted specimens increased slightly, while their flexural tensile strength increased significantly. In addition, HSC and Cast produced a considerable increase in the ductile response of specimens before failure. Diagonal compression test results showed that HSC delayed brittle cracks between the mortar and bricks, allowing larger displacement before failure than the original brick. The triplet test results confirmed that the bonding strength of the retrofitted specimens also increased. The application of HSC and Cast was found to effectively restrain brittle failure and delay the collapse of masonry wall structures.

Lane Change Methodology for Autonomous Vehicles Based on Deep Reinforcement Learning (심층강화학습 기반 자율주행차량의 차로변경 방법론)

  • DaYoon Park;SangHoon Bae;Trinh Tuan Hung;Boogi Park;Bokyung Jung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.1 / pp.276-290 / 2023
  • Several efforts are currently underway in Korea with the goal of commercializing autonomous vehicles. Hence, various studies are emerging on autonomous vehicles that drive safely and quickly according to operating guidelines. This study examines the path search of an autonomous vehicle from a microscopic viewpoint and demonstrates the efficiency gained by learning the vehicle's lane-change behavior through Deep Q-Learning. The SUMO simulator was used for this purpose. The scenario starts in a random lane at the origin and requires a lane change into the third lane to make a right turn at the destination. The analysis compared simulation-based lane changing with and without Deep Q-Learning. With Deep Q-Learning applied, the average traffic speed improved by about 40% compared to the case without it, the average waiting time was reduced by about 2 seconds, and the average queue length by about 2.3 vehicles.
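
As a rough illustration of the setup described above, the sketch below shows one epsilon-greedy lane-change step driven through SUMO's TraCI Python API; the reward terms and coefficients are assumptions, not the paper's actual design.

```python
# Illustrative epsilon-greedy lane-change step via SUMO/TraCI; reward terms are assumed.
import random
import traci  # SUMO's Python API (the paper's exact interface is not specified)

ACTIONS = [-1, 0, +1]  # one lane to the right, keep lane, one lane to the left

def lane_change_step(q_values, veh_id, epsilon=0.1):
    """Pick and apply a lane-change action, advance the simulation, return (action, reward)."""
    if random.random() < epsilon:
        action = random.randrange(len(ACTIONS))
    else:
        action = max(range(len(ACTIONS)), key=lambda a: q_values[a])
    delta = ACTIONS[action]
    if delta != 0:
        target = traci.vehicle.getLaneIndex(veh_id) + delta  # bounds checking omitted
        traci.vehicle.changeLane(veh_id, target, duration=2.0)
    traci.simulationStep()
    # assumed reward: favour speed, penalise waiting time
    speed = traci.vehicle.getSpeed(veh_id)
    wait = traci.vehicle.getWaitingTime(veh_id)
    return action, speed - 0.5 * wait
```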

Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity (오프 폴리시 강화학습에서 몬테 칼로와 시간차 학습의 균형을 사용한 적은 샘플 복잡도)

  • Kim, Chayoung;Park, Seohee;Lee, Woosik
    • Journal of Internet Computing and Services / v.21 no.5 / pp.1-7 / 2020
  • Deep neural networks (DNNs), used as function approximators in reinforcement learning (RL), can in theory produce realistic results. In empirical benchmarks, temporal difference (TD) learning shows better results than Monte Carlo (MC) learning. However, some previous works show that MC is better than TD when rewards are very sparse or delayed. Other recent research shows that when the agent's observation of the environment is partial in complex control tasks, MC prediction is superior to TD-based methods. Most of these environments can be treated with 5-step or 20-step Q-learning, where experiments proceed without long roll-outs to alleviate performance degradation. In other words, in noisy environments, regardless of the controlled roll-out length, it is better to learn with MC, which is more robust to noisy rewards than TD, or with methods nearly identical to MC. These studies break with the received view that TD is better than MC and suggest that combining MC and TD can outperform either alone. Therefore, based on these prior results, this study exploits a random balance mixing TD and MC in RL, without the complicated reward-dependent formulas used in earlier studies. Comparing a DQN using the random MC/TD mixture with the well-known DQN using only TD-based learning, we demonstrate through experiments in OpenAI Gym that the TD/MC mixture matches or improves on well-tuned TD learning.
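
The abstract does not give the exact mixing rule, so the following is a minimal sketch of one way to form a randomly balanced target between the one-step TD target and the full Monte Carlo return.

```python
# Sketch of a randomly balanced MC/TD target for off-policy Q-learning.
# The paper's actual mixing rule is not specified; here a random weight lambda
# is drawn per update as one simple interpretation of "random balance".
import random

def mixed_target(rewards, next_q_max, gamma=0.99):
    """
    rewards: list of rewards r_t, r_{t+1}, ... to the end of the episode
    next_q_max: max_a Q(s_{t+1}, a) from the target network
    """
    td_target = rewards[0] + gamma * next_q_max                     # one-step TD target
    mc_return = sum((gamma ** k) * r for k, r in enumerate(rewards))  # full MC return
    lam = random.random()                                            # random balance in [0, 1]
    return lam * mc_return + (1.0 - lam) * td_target
```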

Research on Deep Learning Performance Improvement for Similar Image Classification (유사 이미지 분류를 위한 딥 러닝 성능 향상 기법 연구)

  • Lim, Dong-Jin;Kim, Taehong
    • The Journal of the Korea Contents Association / v.21 no.8 / pp.1-9 / 2021
  • Deep learning in computer vision has improved rapidly over a short period, but large-scale training data and computing power remain essential, and deriving an optimal network model still involves time-consuming trial and error. In this study, we propose a method for improving similar-image classification performance based on the Confusion Rate (CR), which considers only the characteristics of the data itself, independent of network optimization or data augmentation. The proposed method improves the performance of a deep learning model by calculating the CR for images in a dataset with similar characteristics and reflecting it in the weights of the loss function. The CR-based recognition method is also advantageous for identifying highly similar images because it performs recognition while taking the similarity between classes into account. When applied to a ResNet-18 model, the proposed method improved performance by 0.22% on HanDB and 3.38% on Animal-10N. The proposed method is expected to serve as a basis for artificial intelligence research using the noisy labeled data that accompanies large-scale training data.
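
The paper's precise CR definition and weighting formula are not given in the abstract; the sketch below assumes the CR is derived from a confusion matrix and folded into the class weights of a cross-entropy loss.

```python
# Sketch of reflecting a per-class confusion rate (CR) in the loss weights.
# The actual CR formula and weighting scheme are assumptions.
import torch
import torch.nn as nn

def cr_weighted_loss(confusion_matrix: torch.Tensor) -> nn.CrossEntropyLoss:
    cm = confusion_matrix.float()
    totals = cm.sum(dim=1).clamp(min=1.0)     # samples per true class
    correct = cm.diagonal()                   # correctly classified samples
    confusion_rate = 1.0 - correct / totals   # CR in [0, 1] per class
    weights = 1.0 + confusion_rate            # more-confused classes weigh more
    return nn.CrossEntropyLoss(weight=weights)

# usage: criterion = cr_weighted_loss(cm); loss = criterion(logits, labels)
```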

A Study on Ship Route Generation with Deep Q Network and Route Following Control

  • Min-Kyu Kim;Hyeong-Tak Lee
    • Journal of Navigation and Port Research / v.47 no.2 / pp.75-84 / 2023
  • Ships need to ensure safety during navigation, which makes route determination highly important, and the route must be accompanied by a route-following controller that can track it accurately. This study proposes a method for automatically generating a ship route based on a deep reinforcement learning algorithm and following it with a route-following controller. To generate the route, under-keel clearance was applied to secure the ship's safety, and navigation chart information was used to apply regulations related to ship navigation. For the experiment, a target ship with a draft of 8.23 m was designated. The target route departs from Busan Port and arrives at the pilot boarding place of Ulsan Port. As the route-following controller, a velocity-type fuzzy PID controller that compensates for the limitations of a linear controller was applied. Using the deep Q-network, a route with a total distance of 62.22 km and 81 waypoints was generated. To simplify the route, the Douglas-Peucker algorithm was introduced, reducing the total distance to 55.67 km and the number of waypoints to 3. An experiment was then conducted in which the target ship followed the generated route. The results revealed that the velocity-type fuzzy PID controller had less overshoot and a faster settling time. It also reduced the ship's energy loss because the change in rudder angle was smooth. This study can serve as a basic study of automatic route generation and suggests a method that combines ship route generation with route-following control.
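
The DQN route generator itself is not detailed in the abstract, but the Douglas-Peucker simplification step it mentions is standard; a minimal implementation is sketched below.

```python
# Standard Douglas-Peucker polyline simplification, used here to thin out waypoints.
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Keep only points that deviate from the endpoint chord by more than epsilon."""
    if len(points) < 3:
        return points
    dists = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1   # farthest point
    if dists[idx - 1] > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```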

Path Planning with Obstacle Avoidance Based on Double Deep Q Networks (이중 심층 Q 네트워크 기반 장애물 회피 경로 계획)

  • Yongjiang Zhao;Senfeng Cen;Seung-Je Seong;J.G. Hur;Chang-Gyoon Lim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.2 / pp.231-240 / 2023
  • It remains a challenge for robots to learn to avoid obstacles automatically in path planning using deep reinforcement learning (DRL). More and more researchers use DRL to train robots in simulated environments and verify that DRL can achieve automatic obstacle avoidance. However, because different environments, robots, and sensors introduce their own influencing factors, automatic obstacle avoidance is rarely realized in real scenarios. To learn automatic path planning with obstacle avoidance in a real scene, we designed a simple testbed containing a wall and an obstacle and mounted a camera on the robot. The robot's goal is to reach the end point from the start point as quickly as possible without hitting the wall. For the robot to learn to avoid the wall and the obstacle, we use double deep Q-networks (DDQN) to verify the feasibility of DRL for automatic obstacle avoidance. The robot used in the experiment is a JetBot, and the approach can be applied to robot task scenarios that require obstacle avoidance in automated path planning.
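
The state encoding from the JetBot camera is not specified, but the core of DDQN is the decoupled target computation, sketched below: the online network selects the next action and the target network evaluates it.

```python
# Double DQN target: online net selects the action, target net evaluates it,
# which reduces the Q-value overestimation of vanilla DQN.
import torch

def ddqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)  # selection
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # evaluation
        return rewards + gamma * (1.0 - dones) * next_q
```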

Uncertainty Sequence Modeling Approach for Safe and Effective Autonomous Driving (안전하고 효과적인 자율주행을 위한 불확실성 순차 모델링)

  • Yoon, Jae Ung;Lee, Ju Hong
    • Smart Media Journal / v.11 no.9 / pp.9-20 / 2022
  • Deep reinforcement learning (RL) is an end-to-end, data-driven control method that is widely used in the autonomous driving domain. However, conventional RL approaches are difficult to apply to autonomous driving tasks because of problems such as inefficiency, instability, and uncertainty, all of which matter greatly in this domain. Although recent studies have attempted to solve these problems, they are computationally expensive and rely on special assumptions. In this paper, we propose a new algorithm, MCDT, that addresses inefficiency, instability, and uncertainty by introducing uncertainty sequence modeling to the autonomous driving domain. The sequence modeling method, which views reinforcement learning as a sequence generation problem conditioned on high rewards, avoids the disadvantages of existing studies, provides efficiency and stability, and also accounts for safety by integrating uncertainty estimation techniques. The proposed method was tested in the OpenAI Gym CarRacing environment, and the experimental results show that the MCDT algorithm delivers efficient, stable, and safe performance compared to existing reinforcement learning methods.
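
The abstract does not describe MCDT's architecture; the sketch below only illustrates one common way to attach uncertainty estimation to a sequence-modeling policy, namely Monte Carlo dropout over repeated forward passes, and the policy interface is assumed.

```python
# Sketch of Monte Carlo dropout uncertainty for a sequence-modeling policy.
# The policy model and its interface are assumptions, not the paper's MCDT design.
import torch

def mc_dropout_action(policy, obs_sequence, n_samples=20):
    """Sample the policy with dropout active; return the mean action and its spread."""
    policy.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([policy(obs_sequence) for _ in range(n_samples)])
    mean_action = samples.mean(dim=0)
    uncertainty = samples.std(dim=0)  # high std -> low confidence, act more conservatively
    return mean_action, uncertainty
```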

The Effect of Segment Size on Quality Selection in DQN-based Video Streaming Services (DQN 기반 비디오 스트리밍 서비스에서 세그먼트 크기가 품질 선택에 미치는 영향)

  • Kim, ISeul;Lim, Kyungshik
    • Journal of Korea Multimedia Society / v.21 no.10 / pp.1182-1194 / 2018
  • Dynamic Adaptive Streaming over HTTP (DASH) is expected to evolve to meet the increasing demand for seamless video streaming services in the near future. DASH performance depends heavily on the client's adaptive quality selection algorithm, which is not included in the standard. Existing algorithms are essentially procedural and cannot easily capture and reflect every variation of dynamic network and traffic conditions across diverse network environments. To solve this problem, this paper proposes a novel quality selection mechanism based on the Deep Q-Network (DQN) model: the DQN-based DASH Adaptive Bitrate (ABR) mechanism. The proposed mechanism adopts a new reward calculation method based on five major performance metrics to reflect the current conditions of networks and devices in real time. In addition, the size of the next consecutive video segment to be downloaded is considered as a major learning feature to reflect a variety of video encodings. Experimental results show that the proposed mechanism quickly selects a suitable video quality even in high-error-rate environments, significantly reducing the frequency of quality changes compared to the existing algorithm while improving average video quality during playback.
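
The five performance metrics are not enumerated in the abstract, so the reward sketch below is purely illustrative: it combines bitrate, quality-switch, rebuffering, buffer-level, and segment-size terms with assumed coefficients.

```python
# Illustrative ABR reward for a DQN agent; all terms and coefficients are assumptions,
# not the paper's actual five-metric reward design.
def abr_reward(bitrate_mbps, prev_bitrate_mbps, rebuffer_s, buffer_s, segment_mb):
    quality = bitrate_mbps                               # favour higher quality
    smoothness = -abs(bitrate_mbps - prev_bitrate_mbps)  # penalise quality switches
    stall = -4.3 * rebuffer_s                            # penalise rebuffering
    buffer_bonus = 0.1 * buffer_s                        # reward a healthy playback buffer
    size_cost = -0.05 * segment_mb                       # account for segment size
    return quality + smoothness + stall + buffer_bonus + size_cost
```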