• Title/Summary/Keyword: cooperative AI

Search results: 33

ETRI AI Strategy #4: Expanding AI Open Platform (ETRI AI 실행전략 4: AI 개방형 플랫폼 제공 확대)

  • Kim, S.M.; Hong, A.R.; Yeon, S.J.
    • Electronics and Telecommunications Trends / v.35 no.7 / pp.36-45 / 2020
  • The methods and processes of research and development (R&D) are changing as we develop artificial intelligence (AI), and so is the way R&D results are disseminated. In the R&D process, using and participating in open-source ecosystems has become more important, so we need to be prepared for open source. For product and service development, a combination of AI algorithms, data, and computing power is needed. In this paper, we introduce ETRI AI Strategy #4, "Expanding AI Open Platform." It consists of two key tasks: building an AI open-source platform (OSP) to create a cooperative AI R&D ecosystem, and systematizing the "x+AI" open platform (XOP) to disseminate AI technologies into the ecosystem.

Design of Omok AI using Genetic Algorithm and Game Trees and Their Parallel Processing on the GPU (유전 알고리즘과 게임 트리를 병합한 오목 인공지능 설계 및 GPU 기반 병렬 처리 기법)

  • Ahn, Il-Jun; Park, In-Kyu
    • Journal of KIISE: Computer Systems and Theory / v.37 no.2 / pp.66-75 / 2010
  • This paper proposes an efficient method for designing and implementing the artificial intelligence (AI) of the game 'omok' on the GPU. The proposed AI has a cooperative structure that combines a min-max game tree with a genetic algorithm. Since the evaluation function is computationally intensive but can be applied independently to many candidates in the solution space, it is computed on the GPU in a massively parallel way. The implementation on NVIDIA CUDA shows that the GPU significantly outperforms the CPU: the parallel game tree and the genetic algorithm run more than 400 times and 300 times faster on the GPU, respectively. In the proposed cooperative AI, a selective search using the genetic algorithm is performed after the full search using the game tree, both to search the solution space more efficiently and to avoid thread overflow. Experimental results show that the proposed algorithm enhances the AI significantly and makes it run within the time limit given by the game's rules.
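
The tree-search-then-GA refinement described above can be illustrated with a small CPU-only Python sketch. It is a toy under stated assumptions: the 15x15 board encoding, the neighborhood-counting evaluation, and the GA parameters are illustrative stand-ins, and the massively parallel CUDA evaluation of candidates is replaced by a plain loop.

```python
import numpy as np

BOARD = 15                      # standard omok (gomoku) board size
EMPTY, ME, OPP = 0, 1, 2

def evaluate_candidates(board, moves, player=ME):
    """Toy evaluation: score each candidate move by the stones in its 5x5
    neighborhood. The paper's evaluation function is far richer; this stands
    in for the CUDA kernel that scores many candidates in parallel."""
    scores = np.empty(len(moves))
    for i, (r, c) in enumerate(moves):
        window = board[max(0, r - 2):r + 3, max(0, c - 2):c + 3]
        scores[i] = np.sum(window == player) - 0.5 * np.sum(window == OPP)
    return scores

def ga_refine(board, moves, generations=20, pop_size=16, seed=0):
    """Selective search step: evolve a population of candidate-move indices
    after the full (shallow) game-tree search has pruned the solution space.
    Selection and mutation only; the paper's GA is more elaborate."""
    rng = np.random.default_rng(seed)
    population = rng.choice(len(moves), size=pop_size)
    for _ in range(generations):
        fitness = evaluate_candidates(board, [moves[i] for i in population])
        order = np.argsort(fitness)[::-1]
        survivors = population[order[: pop_size // 2]]   # keep the better half
        mutants = (survivors + rng.integers(-2, 3, size=survivors.shape)) % len(moves)
        population = np.concatenate([survivors, mutants])
    best = population[np.argmax(evaluate_candidates(board, [moves[i] for i in population]))]
    return moves[best]

board = np.zeros((BOARD, BOARD), dtype=int)
board[7, 7] = ME
candidates = [(r, c) for r in range(BOARD) for c in range(BOARD) if board[r, c] == EMPTY]
print("suggested move:", ga_refine(board, candidates))
```

In the paper, the candidate scoring is what runs on the GPU; here it is a plain loop, which is enough to show where the game-tree pass hands off to the GA.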

Build reinforcement learning AI process for cooperative play with users (사용자와의 협력 플레이를 위한 강화학습 인공지능 프로세스 구축)

  • Jung, Won-Joe
    • Journal of Korea Game Society / v.20 no.1 / pp.57-66 / 2020
  • The goal is to implement a reinforcement learning AI that replaces the less favored Supporter role in MOBA games. ML_Agent was used to implement the game rules, environment, observation information, rewards, and punishments. The experiment was divided into groups P and C, and the cumulative reward values and the numbers of deaths were compared to draw conclusions. In group C, the mean cumulative reward value was 3.3 higher than in group P, and the total mean number of deaths was 3.15 lower, confirming that the agent performed cooperative play that minimized deaths and maximized rewards.
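
The abstract does not spell out the reward design, so the following is only a hypothetical sketch of how a supporter agent's reward and punishment terms (rewarding assistance, penalizing deaths) might be shaped. The StepOutcome fields and coefficients are invented for illustration and are not taken from the paper or from ML_Agent.

```python
from dataclasses import dataclass

@dataclass
class StepOutcome:
    ally_healed: float      # health restored to the carry this step (assumed signal)
    assists: int            # kills the supporter assisted on this step
    died: bool              # whether the supporter died this step

def supporter_reward(outcome: StepOutcome) -> float:
    """Hypothetical shaping: encourage cooperative play (healing, assists)
    and discourage deaths, mirroring the reward/punishment split described."""
    reward = 0.01 * outcome.ally_healed + 1.0 * outcome.assists
    if outcome.died:
        reward -= 2.0       # punishment term: deaths should be minimized
    return reward

# example: a step where the supporter healed 150 HP and earned one assist
print(supporter_reward(StepOutcome(ally_healed=150, assists=1, died=False)))  # 2.5
```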

The Effect of Appreciative Inquiry on Positive Psychological Capital and Organizational Commitment of New Nurses (긍정적 탐구 활동이 신규간호사의 긍정심리자본과 조직몰입에 미치는 효과)

  • Kim, Hyunju; Yi, Young Hee
    • Journal of Korean Critical Care Nursing / v.12 no.3 / pp.13-23 / 2019
  • Purpose: The purpose of this study was to determine whether appreciative inquiry (AI) is an effective intervention for increasing the positive psychological capital and organizational commitment of new nurses. Method: The study used a nonequivalent control group pretest-posttest design. The participants were 60 new nurses in a tertiary hospital in Seoul. The experimental group received two AI education classes and in-unit AI activities, while the control group received the existing education program. Results: There was no statistically significant difference in positive psychological capital or organizational commitment between the experimental and control groups over time. Satisfaction with the AI education scored 3.69, higher than the average. The experimental group members were satisfied with the program because the AI education helped them adapt, and the in-unit AI activities made the staff more cooperative and the unit atmosphere more positive. Conclusion: When applying AI activities to new nurses to promote positive psychological capital and organizational commitment, it is necessary to provide a workshop in which participants can fully concentrate on the education and to extend the activity period to one year in order to maintain the effects.

Stochastic Initial States Randomization Method for Robust Knowledge Transfer in Multi-Agent Reinforcement Learning (멀티에이전트 강화학습에서 견고한 지식 전이를 위한 확률적 초기 상태 랜덤화 기법 연구)

  • Dohyun Kim; Jungho Bae
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.4 / pp.474-484 / 2024
  • Reinforcement learning, which is also studied in the defense field, suffers from poor sample efficiency and requires a large amount of data to train. Transfer learning has been introduced to address this problem, but its effectiveness is sometimes marginal because the model does not effectively leverage prior knowledge. In this study, we propose a stochastic initial state randomization (SISR) method that promotes generalized and sufficient knowledge transfer, enabling robust transfer. We developed a simulation environment involving a cooperative robot transportation task. Experimental results show that the task succeeds when SISR is applied and fails when it is not. We also analyzed how the amount of state information collected by the agents changes when SISR is applied.
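
Stochastic initial state randomization, as described, amounts to perturbing the environment's reset distribution so transferred knowledge is exercised over varied start conditions. Below is a minimal Python wrapper sketch under assumptions not in the paper: that the state is a real-valued vector and that small zero-mean Gaussian noise is an adequate perturbation.

```python
import numpy as np

class StochasticInitialStateWrapper:
    """Adds zero-mean Gaussian noise to the initial state returned by reset(),
    so the transferred policy is trained over a distribution of start conditions.
    Assumes the state is a real-valued vector; the noise scale is illustrative."""

    def __init__(self, env, sigma=0.05, seed=0):
        self.env = env
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def reset(self):
        state = np.asarray(self.env.reset(), dtype=float)
        return state + self.rng.normal(0.0, self.sigma, size=state.shape)

    def step(self, action):
        return self.env.step(action)      # transitions are left untouched

class DummyEnv:
    """Stand-in for the paper's cooperative transportation environment."""
    def reset(self):
        return [0.0, 0.0, 0.0]
    def step(self, action):
        return [0.0, 0.0, 0.0], 0.0, True, {}

env = StochasticInitialStateWrapper(DummyEnv(), sigma=0.1)
print(env.reset())   # a slightly different start state on every call
```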

Making Levels More Challenging with a Cooperative Strategy of Ghosts in Pac-Man (고스트들의 협력전술에 의한 팩맨게임 난이도 제고)

  • Choi, Taeyeong; Na, Hyeon-Suk
    • Journal of Korea Game Society / v.15 no.5 / pp.89-98 / 2015
  • The artificial intelligence (AI) of non-player characters (NPCs), especially opponents, is a key element for adjusting difficulty in game design. Smart opponents can make games more challenging and give players more diverse experiences, even in the same game environment. Since players interact with more than one opponent in most of today's games, coordinating opponent characters has become more important than ever. In this paper, we introduce a cooperative strategy based on the A* algorithm for the enemies' AI in the Pac-Man game. A survey of 17 human testers shows that levels with our cooperative opponents are more difficult, but also more interesting, than those with either the original Pac-Man ghost personalities or non-cooperative greedy opponents.
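
Grid-based A* is standard, and the cooperative element can be approximated by giving each ghost a different target cell around Pac-Man instead of letting all of them chase the same cell. The sketch below shows plain A* with a Manhattan heuristic plus such a target assignment; the surrounding offsets are an illustrative assumption, not the paper's actual tactic.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """Grid A* with a Manhattan-distance heuristic; cells with value 1 are walls."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                        # tie-breaker so the heap never compares cells
    frontier = [(h(start), next(tie), start)]
    parent, g_cost = {start: None}, {start: 0}
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node == goal:                 # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt], parent[nxt] = ng, node
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), nxt))
    return None                          # goal unreachable

def assign_ghost_targets(pacman, ghosts):
    """Crude cooperation: each ghost aims at a different cell around Pac-Man
    so they surround rather than stack up; the offsets are illustrative only."""
    offsets = [(0, 0), (2, 0), (-2, 0), (0, 2)]
    return [(pacman[0] + dr, pacman[1] + dc) for (dr, dc), _ in zip(offsets, ghosts)]

# toy maze: an 8x8 open grid, three ghosts converging on Pac-Man at (4, 4)
grid = [[0] * 8 for _ in range(8)]
ghosts = [(0, 0), (7, 7), (0, 7)]
paths = [astar(grid, g, t) for g, t in zip(ghosts, assign_ghost_targets((4, 4), ghosts))]
print([p[:3] for p in paths])            # first few steps of each ghost's route
```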

Case Study for the Application of PBL in Engineering School : Focused on an Artificial Intelligence Class (공과대학에서 문제중심학습 적용 사례 연구 : 인공지능 과목을 중심으로)

  • Lee, Keunsoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.4 / pp.154-160 / 2018
  • This thesis aims to develop PBL (problem-based learning) problems, to have groups of students create their own problems, and to confirm the effectiveness of PBL as applied to an AI (artificial intelligence) course in an engineering school. Modern industrial society needs competent people with abilities in cooperative learning, self-controlled learning, integrated knowledge application, and creative problem-solving. Universities need to offer their students the opportunity to improve their problem-solving and cooperative learning abilities in order to train the competent people that society demands, and PBL activity is an appropriate learning method for accomplishing these goals. The study subjects were 37 sophomore students at H University studying 'AI'. Five PBL problems were given to the class over a period of 15 weeks. The students wrote and submitted a reflective journal after finishing each PBL activity, and filled out a class evaluation form assessing the performance of each member when the 5th PBL problem activity was completed. The study shows that the students experienced the effectiveness of PBL in many areas, such as comprehension of the studied contents (86.48%), comprehension of cooperative learning (94.59%), authentic experience (75.67%), problem-solving skills (89.18%), presentation skills (97.29%), creativity improvement (81.08%), knowledge acquisition ability (86.48%), communication ability (97.29%), integrated knowledge application (78.37%), self-directed study ability (86.48%), and confidence (97.29%). Through these activities, the students were able to realize that PBL plays an important role in their learning, preparing and enhancing their ability to think creatively, work systematically, and speak confidently as they become competitive engineers equipped with the knowledge and skills that modern industrial society demands.

Cooperative Robot for Table Balancing Using Q-learning (테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발)

  • Kim, Yewon; Kang, Bo-Yeong
    • The Journal of Korea Robotics Society / v.15 no.4 / pp.404-412 / 2020
  • Everyday human tasks often involve at least two people moving objects such as tables and beds, and the balance of such objects changes with each person's actions. However, many previous studies performed such tasks with robots alone, without factoring in human cooperation. Therefore, in this paper, we propose a cooperative table-balancing robot based on Q-learning that enables cooperative work between a human and a robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action without high-performance equipment. Human actions are classified with a deep learning model, AlexNet, with an accuracy of 96.9% under 10-fold cross-validation. Q-learning was carried out over 2,000 episodes with 200 trials, and the results show that the Q-function converged stably within this number of episodes; the converged Q-function determined the Q-learning policy for the robot's actions. A video of the robot cooperating with a human on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
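
A minimal tabular Q-learning loop of the kind reported (converging over roughly 2,000 episodes) could look like the sketch below. The discrete action names, the state encoding implied by the keys, and the learning-rate/discount/exploration values are assumptions for illustration, not the paper's settings.

```python
import random
from collections import defaultdict

# Assumed discrete robot motions and hyperparameters (not from the paper).
ACTIONS = ["raise_left", "raise_right", "hold"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)                 # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy choice over the discretized table-tilt state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning backup, applied at each trial of an episode."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# example: the table is tilted left, the robot raises its left side, gets +1
q_update("tilt_left", "raise_left", 1.0, "level")
print(Q[("tilt_left", "raise_left")])   # 0.1 after one update
```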

Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback (표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발)

  • Jeon, Haein; Kang, Jeonghun; Kang, Bo-Yeong
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.264-272 / 2022
  • Human-robot cooperative tasks are increasingly required in our daily life with the development of robotics and artificial intelligence technology. Interactive reinforcement learning lets a robot learn a task by receiving feedback from an experienced human trainer during training. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To address these limitations, this paper studies training strategies in which a robot learns a table-balancing task interactively using deep reinforcement learning with feedback from a human's facial expressions. In the proposed system, the robot learns the cooperative table-balancing task using a Deep Q-Network (DQN), a deep reinforcement learning technique, with human facial-expression feedback. In the experiments, the proposed system achieved an optimal-policy convergence rate of up to 83.3% in training and a success rate of up to 91.6% in testing, showing improved performance compared to the model without facial expression feedback.
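
One plausible reading of the approach is that the recognized facial expression is mapped to a scalar and folded into the environment reward before the DQN update. The PyTorch sketch below shows that reward-shaping idea with a standard DQN loss; the expression-to-scalar mapping, network sizes, and hyperparameters are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical mapping from a recognized facial expression to a feedback scalar.
FACIAL_FEEDBACK = {"positive": +0.5, "neutral": 0.0, "negative": -0.5}

class QNet(nn.Module):
    """Small fully connected Q-network; sizes are placeholders."""
    def __init__(self, state_dim=4, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One DQN loss on a batch of (state, action, env_reward, expression, next_state, done),
    with the facial-expression scalar added to the environment reward (reward shaping)."""
    s, a, r_env, expr, s_next, done = batch
    r = r_env + torch.tensor([FACIAL_FEEDBACK[e] for e in expr])
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(dim=1).values * (1 - done)
    return F.smooth_l1_loss(q_sa, target)

# toy usage with random tensors standing in for table-balancing states
q_net, target_net = QNet(), QNet()
batch = (torch.randn(2, 4), torch.tensor([0, 2]), torch.zeros(2),
         ["positive", "negative"], torch.randn(2, 4), torch.zeros(2))
print(dqn_loss(q_net, target_net, batch))
```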

Integrated System of Mobile Manipulator with Speech Recognition and Deep Learning-based Object Detection (음성인식과 딥러닝 기반 객체 인식 기술이 접목된 모바일 매니퓰레이터 통합 시스템)

  • Jang, Dongyeol; Yoo, Seungryeol
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.270-275 / 2021
  • Most early cooperative robots were intended to repeat simple tasks in a fixed space, so they differed little from industrial robots. However, research on improving workers' productivity and supplementing humans' limited working hours is expanding, and there have been active attempts to use cooperative robots as service robots by applying AI technology. In line with these changes, we built a mobile manipulator that can improve a worker's efficiency and fully replace one person. First, we combined a cooperative robot with a mobile robot. Second, we applied speech recognition and deep learning-based object detection. Finally, we integrated all the systems with ROS (Robot Operating System). The system can communicate with workers by voice, drive autonomously, and perform pick-and-place tasks.
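
As a rough illustration of the ROS-side glue described (speech in, navigation or pick-and-place out), the rospy node below routes recognized speech text to a task-command topic. The topic names and the plain-string command protocol are assumptions; the actual speech recognizer, object detector, and manipulator interfaces are omitted, and running it requires a ROS environment.

```python
#!/usr/bin/env python
# Minimal rospy sketch: route recognized speech to a task-command topic.
# Topic names ("/speech_text", "/task_command") are illustrative assumptions.
import rospy
from std_msgs.msg import String

def on_speech(msg):
    text = msg.data.lower()
    if "pick" in text:
        command_pub.publish(String(data="pick_and_place"))   # trigger manipulation
    elif "go" in text or "move" in text:
        command_pub.publish(String(data="navigate"))         # trigger autonomous driving
    else:
        rospy.loginfo("unrecognized command: %s", text)

if __name__ == "__main__":
    rospy.init_node("speech_task_router")
    command_pub = rospy.Publisher("/task_command", String, queue_size=1)
    rospy.Subscriber("/speech_text", String, on_speech)
    rospy.spin()
```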