• Title/Summary/Keyword: robot actor

Search Results: 23

The Influence of the Appearance of 'Robot Actor' on the Features of the Theater ('로봇배우'의 등장이 연극의 특성에 미치는 영향)

  • Park, Yeon-Joo;Oh, Se-Kon
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.11
    • /
    • pp.507-515
    • /
    • 2019
  • The positive effects of the 'robot actor', born in the age of artificial intelligence, on the characteristics of theater (comprehensiveness, liveness, duality, planning) are as follows: collaboration with robot engineers increases comprehensiveness; because the robot can respond, varied reactions are maintained in every performance; and a heightened illusion can be provided in robot-themed works in which the 'robot actor' plays the role of a 'robot'. On the other hand, authority concentrated in the director can reduce comprehensiveness, and the 'robot actor' cannot reproduce the sweat or breath of a 'human actor', so its duality remains incomplete in itself. In addition, there is a high risk that improvisation within the scope of planning will appear as an abrupt reaction, which may limit the acting of the 'human actor'. Based on these findings, it is considered necessary for 'philosophy', 'science', and 'art' to anticipate the development of artificial intelligence together, and to study how to redefine the direction and identity of the arts and of theater going forward.

Development of Robot Performance Platform Interoperating with an Industrial Robot Arm and a Humanoid Robot Actor (산업용 로봇 Arm과 휴머노이드 로봇 액터를 연동한 로봇 공연 플랫폼 개발)

  • Cho, Jayang;Kim, Jinyoung;Lee, Sulhee;Lee, Sang-won;Kim, Hyungtae
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.487-496
    • /
    • 2020
  • As a next-generation technology for robot performances, a RAoRA (Robot Actor on Robot Arm) structure is proposed, in which a humanoid robot actor is joined to a robot arm. Mechanical analysis, machine design, and fabrication were performed for motions combining the robot arm and the humanoid robot actor. Kinematic analysis of the 3D model, spline interpolation of positions, a motion-control algorithm, and control devices were developed for the robot actor's movements. Preliminary visualization, simulation tools, and an integrated operation console were constructed so that non-professionals can produce content intuitively and safely. An air walk, a natural walk close to the floor or a slow ascension into the air, was applied to test the developed platform. The RAoRA also executed a performance with a five-minute running time. Finally, the proposed robot-performance platform presented intense, lifelike motions that were impossible in conventional robot performances.
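As a rough illustration of the spline interpolation of positions mentioned in the abstract (the paper does not specify the spline family; a Catmull-Rom spline is assumed here purely for the sketch), waypoint joint positions can be densified into a smooth trajectory:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between waypoints p1 and p2 at t in [0, 1]."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def smooth_trajectory(waypoints, steps_per_segment=10):
    """Densify a list of scalar joint positions into a smooth trajectory."""
    pts = [waypoints[0]] + waypoints + [waypoints[-1]]  # pad endpoints
    traj = []
    for i in range(1, len(pts) - 2):
        for s in range(steps_per_segment):
            t = s / steps_per_segment
            traj.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t))
    traj.append(waypoints[-1])
    return traj
```

The spline passes through every waypoint, which matters when an operator console lets non-professionals place key poses directly.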

Intelligent Warehousing: Comparing Cooperative MARL Strategies

  • Yosua Setyawan Soekamto;Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.205-211
    • /
    • 2024
  • Effective warehouse management requires advanced resource planning to optimize profits and space. Robots offer a promising solution, but their effectiveness relies on embedded artificial intelligence. Multi-agent reinforcement learning (MARL) enhances robot intelligence in these environments. This study explores various MARL algorithms using the Multi-Robot Warehouse Environment (RWARE) to determine their suitability for warehouse resource planning. Our findings show that cooperative MARL is essential for effective warehouse management. IA2C outperforms MAA2C and VDA2C on smaller maps, while VDA2C excels on larger maps. IA2C's decentralized approach, focusing on cooperation over collaboration, allows for higher reward collection in smaller environments. However, as map size increases, reward collection decreases due to the need for extensive exploration. This study highlights the importance of selecting the appropriate MARL algorithm based on the specific warehouse environment's requirements and scale.
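The IA2C/VDA2C comparison above is run in the RWARE environment. As an illustration only, a hypothetical tabular sketch of the independent-learner idea behind IA2C (each agent keeps its own critic and actor preferences, updated from its own reward, with no shared critic) might look like:

```python
def ia2c_update(agents, transitions, lr=0.1, gamma=0.99):
    """Independent A2C sketch: each agent updates its own tabular critic V
    and actor preferences from its own (state, action, reward, next_state),
    with no parameter or value sharing between agents."""
    for agent, (state, action, reward, next_state) in zip(agents, transitions):
        td_target = reward + gamma * agent["V"].get(next_state, 0.0)
        advantage = td_target - agent["V"].get(state, 0.0)
        agent["V"][state] = agent["V"].get(state, 0.0) + lr * advantage
        key = (state, action)
        agent["pref"][key] = agent["pref"].get(key, 0.0) + lr * advantage
```

VDA2C, by contrast, would decompose a joint value across agents, which is what the study finds pays off on larger maps.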

Robot Locomotion via RLS-based Actor-Critic Learning (RLS 기반 Actor-Critic 학습을 이용한 로봇이동)

  • Kim, Jong-Ho;Kang, Dae-Sung;Park, Joo-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.11a
    • /
    • pp.234-237
    • /
    • 2005
  • Among the many methods for reinforcement learning, the actor-critic approach based on policy iteration has proven its potential through numerous applications. Actor-critic learning requires actor training for the control-input selection strategy and critic training for value-function approximation. This paper proposes a new algorithm that uses RLS (recursive least squares), which guarantees fast convergence, for the critic's learning, and the policy gradient for the actor's learning. The performance of the proposed method is then verified experimentally.
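As a sketch of the RLS idea used for the critic (the feature vectors, targets, and dimensions here are illustrative, not taken from the paper), one recursive least-squares step for a linear value function V(s) = wᵀφ(s) updates the weights and the inverse-covariance matrix together:

```python
def rls_update(w, P, phi, target, lam=1.0):
    """One recursive least-squares step for a linear model w . phi.
    w: weight list, P: inverse-covariance matrix (list of lists),
    phi: feature vector, target: regression target, lam: forgetting factor."""
    n = len(w)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [p / denom for p in Pphi]                      # gain vector
    err = target - sum(w[i] * phi[i] for i in range(n))  # prediction error
    w = [w[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return w, P
```

In the actor-critic setting the target would be a TD target r + γV(s'), and the resulting TD error would drive the actor's policy-gradient step.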


Kernel-based actor-critic approach with applications

  • Chu, Baek-Suk;Jung, Keun-Woo;Park, Joo-Young
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.11 no.4
    • /
    • pp.267-274
    • /
    • 2011
  • Recently, actor-critic methods have drawn significant interest in the area of reinforcement learning, and several algorithms have been studied along the lines of the actor-critic strategy. In this paper, we consider a new type of actor-critic algorithm employing kernel methods, which have recently been shown to be very effective tools in various fields of machine learning, and we investigate combining the actor-critic strategy with kernel methods. More specifically, this paper studies actor-critic algorithms utilizing kernel-based least-squares estimation and the policy gradient; in the critic's part, the study uses a sliding-window-based kernel least-squares method, which leads to fast and efficient value-function estimation in a nonparametric setting. The applicability of the considered algorithms is illustrated via a robot locomotion problem and a tunnel ventilation control problem.
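A minimal sketch of a sliding-window kernel least-squares estimator of the kind described for the critic (the Gaussian kernel, ridge term, and window size are assumptions for illustration; the paper's exact formulation may differ): only the most recent samples are kept, and a regularized kernel system is solved over that window.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

class SlidingWindowKLS:
    """Nonparametric value estimation by kernel least squares over
    the last `window` (input, target) samples."""
    def __init__(self, window=20, sigma=1.0, ridge=1e-3):
        self.window, self.sigma, self.ridge = window, sigma, ridge
        self.xs, self.ys = [], []

    def add(self, x, y):
        self.xs.append(x); self.ys.append(y)
        self.xs = self.xs[-self.window:]   # drop samples outside the window
        self.ys = self.ys[-self.window:]

    def predict(self, x):
        n = len(self.xs)
        K = [[gaussian_kernel(self.xs[i], self.xs[j], self.sigma)
              + (self.ridge if i == j else 0.0) for j in range(n)]
             for i in range(n)]
        alpha = solve(K, self.ys[:])       # (K + ridge*I) alpha = y
        return sum(a * gaussian_kernel(x, xi, self.sigma)
                   for a, xi in zip(alpha, self.xs))
```

The fixed window is what keeps each update cheap: the linear system never grows beyond `window` equations, regardless of how long the agent runs.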

Robot Locomotion via RLS-based Actor-Critic Learning (RLS 기반 Actor-Critic 학습을 이용한 로봇이동)

  • Kim, Jong-Ho;Kang, Dae-Sung;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.7
    • /
    • pp.893-898
    • /
    • 2005
  • Due to the merits that only a small amount of computation is needed for solutions and that stochastic policies can be handled explicitly, the actor-critic algorithm, a class of reinforcement learning methods, has recently attracted a lot of interest in the area of artificial intelligence. The actor-critic network consists of the actor network for selecting control inputs and the critic network for estimating value functions; in the training stage, the actor and critic networks adopt the strategy of changing their parameters adaptively in order to select excellent control inputs and to yield accurate approximations of value functions as quickly as possible. In this paper, we consider a new actor-critic algorithm employing an RLS (Recursive Least Squares) method for critic learning and policy gradients for actor learning. The applicability of the considered algorithm is illustrated with experiments on a two-link robot arm.

Control of Crawling Robot using Actor-Critic Fuzzy Reinforcement Learning (액터-크리틱 퍼지 강화학습을 이용한 기는 로봇의 제어)

  • Moon, Young-Joon;Lee, Jae-Hoon;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.519-524
    • /
    • 2009
  • Recently, reinforcement learning methods have drawn much interest in the area of machine learning. Dominant approaches in reinforcement learning research include the value-function approach, the policy-search approach, and the actor-critic approach; pertinent to this paper are algorithms studied for problems with continuous states and continuous actions along the lines of the actor-critic strategy. In particular, this paper focuses on presenting a method combining the so-called ACFRL (actor-critic fuzzy reinforcement learning), an actor-critic type of reinforcement learning based on fuzzy theory, with RLS-NAC, which is based on RLS filters and natural actor-critic methods. The presented method is applied to a control problem for crawling robots, and results comparing learning performance are reported.
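As a hypothetical illustration of the fuzzy-actor idea in ACFRL (the membership functions and rule structure below are assumptions, not taken from the paper), a fuzzy actor can blend learnable rule consequents by membership weight to produce a continuous action, with Gaussian noise for exploration:

```python
import random

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_actor(state, rule_params, noise=0.1):
    """Each rule is ((a, b, c), consequent): a triangular membership over the
    state and a learnable consequent. The action is the membership-weighted
    mean of the consequents, plus exploration noise."""
    weights = [tri(state, a, b, c) for (a, b, c), _ in rule_params]
    total = sum(weights) or 1.0
    mean = sum(w * cons for w, (_, cons) in zip(weights, rule_params)) / total
    return mean + random.gauss(0.0, noise)
```

In an actor-critic loop, the critic's TD error would adjust the consequents, shifting the blended action toward higher-value joint commands.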

Active assisted-living system using a robot in WSAN (WSAN에서 로봇을 활용한 능동 생활지원 시스템)

  • Kim, Hong-Seok;Yi, Soo-Yeong;Choi, Byoung-Wook
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.3
    • /
    • pp.177-184
    • /
    • 2009
  • This paper presents an active assisted-living system in a wireless sensor and actor network (WSAN), in which a mobile robot plays the role of an actor. In order to provide assisted-living services to elderly people, position recognition of the sensor node attached to the user and localization of the mobile robot must be performed at the same time. For this purpose, we use the received signal strength indication (RSSI) to find the person's position, along with ubiquitous sensor nodes including an ultrasonic sensor, which both transmit sensor information and perform localization like a global positioning system. Active services include moving to the elderly person upon detection by an activity sensor, visual tracking, and voice chatting through the remote monitoring system.
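RSSI-based position estimates like the one above typically rest on a log-distance path-loss model; a minimal sketch (the reference RSSI at 1 m and the path-loss exponent are illustrative values, and real deployments calibrate both per environment):

```python
def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Estimate distance in meters from an RSSI reading (dBm) using the
    log-distance path-loss model:
        rssi = rssi_at_1m - 10 * n * log10(d)  =>  d = 10^((rssi_at_1m - rssi) / (10 n))
    path_loss_exp (n) is ~2 in free space, higher indoors."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))
```

Distances from three or more fixed sensor nodes can then be combined by trilateration to place the user, while the ultrasonic nodes refine the robot's own pose.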


Human-like Whole Body Motion Generation of Humanoid Based on Simplified Human Model (단순인체모델 기반 휴머노이드의 인간형 전신동작 생성)

  • Kim, Chang-Hwan;Kim, Seung-Su;Ra, Syung-Kwon;You, Bum-Jae
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.4
    • /
    • pp.287-299
    • /
    • 2008
  • People have expected a humanoid robot to move as naturally as a human being does. The natural movements of a humanoid robot may provide people with safer physical services and communicate with persons more correctly through motion. This work presents a methodology to generate natural motions for a humanoid robot, converted from human motion-capture data. The methodology produces not only kinematically mapped motions but also dynamically mapped ones. The kinematic mapping reflects human-likeness in the converted motions, while the dynamic mapping ensures the movement stability of the humanoid robot's whole-body motions. The methodology consists of three processes: (a) human modeling, (b) kinematic mapping, and (c) dynamic mapping. The optimization-based human modeling gives the ZMP (Zero Moment Point) and COM (Center of Mass) time trajectories of an actor. Those trajectories are modified for a humanoid robot through the kinematic mapping. In addition to modifying the ZMP and COM trajectories, the lower-body (pelvis and legs) motion of the actor is then scaled kinematically and converted into a motion feasible for the humanoid robot, considering dynamical aspects. The KIST humanoid robot, Mahru, imitated a dancing motion to evaluate the methodology, showing good agreement with the original motion.
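The ZMP trajectory extracted from a simplified human model can be illustrated with the standard cart-table relation (a flat-ground, point-mass approximation, not the paper's full model):

```python
G = 9.81  # gravitational acceleration (m/s^2)

def zmp_from_com(com_x, com_z, com_x_acc):
    """Cart-table (point-mass) ZMP on flat ground:
        x_zmp = x_com - (z_com / g) * x''_com
    so the ZMP shifts opposite to the COM's horizontal acceleration."""
    return com_x - (com_z / G) * com_x_acc
```

Evaluating this along the COM time trajectory yields the ZMP trajectory; keeping it inside the support polygon is the stability condition the dynamic mapping enforces.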


Improved Deep Q-Network Algorithm Using Self-Imitation Learning (Self-Imitation Learning을 이용한 개선된 Deep Q-Network 알고리즘)

  • Sunwoo, Yung-Min;Lee, Won-Chang
    • Journal of IKEEE
    • /
    • v.25 no.4
    • /
    • pp.644-649
    • /
    • 2021
  • Self-Imitation Learning is a simple off-policy actor-critic algorithm that makes an agent find an optimal policy by exploiting past good experiences. When Self-Imitation Learning is combined with reinforcement learning algorithms that have an actor-critic architecture, it shows performance improvements in various game environments. However, its applications have been limited to reinforcement learning algorithms with an actor-critic architecture. In this paper, we propose a method of applying Self-Imitation Learning to Deep Q-Network, a value-based deep reinforcement learning algorithm, and train it in various game environments. By comparing the proposed algorithm against ordinary Deep Q-Network training results, we also show that Self-Imitation Learning can be applied to Deep Q-Network to improve its performance.
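A minimal sketch of the self-imitation criterion, adapted here to a value-based setting as the paper proposes (the tabular Q dictionary and two-action space are illustrative assumptions): past transitions are imitated only when their discounted return exceeds the agent's current estimate.

```python
def sil_targets(episode, q_values, gamma=0.99):
    """Self-imitation target selection. episode: list of (state, action, reward);
    q_values: dict mapping (state, action) -> current Q estimate (two actions,
    0 and 1, assumed). Returns (state, action, return) triples worth imitating,
    i.e. those whose discounted return R beats the current max-Q estimate."""
    returns, R = [], 0.0
    for (_, _, r) in reversed(episode):
        R = r + gamma * R
        returns.append(R)
    returns.reverse()
    targets = []
    for (s, a, _), R in zip(episode, returns):
        adv = R - max(q_values.get((s, b), 0.0) for b in (0, 1))
        if adv > 0:  # only imitate past experiences better than expected
            targets.append((s, a, R))
    return targets
```

The clipped advantage max(R − V, 0) is the heart of Self-Imitation Learning; here the max-Q estimate stands in for the state value, which is what makes the criterion usable without an explicit critic network.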