• Title/Summary/Keyword: Action Selection/Learning

Teaching-based Perception-Action Learning under an Ethology-based Action Selection Mechanism (동물 행동학 기반 행동 선택 메커니즘하에서의 교시 기반 행동 학습 방법)

  • Moon, Ji-Sub;Lee, Sang-Hyoung;Suh, Il-Hong
    • Proceedings of the IEEK Conference / 2008.06a / pp.1147-1148 / 2008
  • In this paper, we propose an action-learning method based on teaching. By adopting this method, we can handle exception cases that cannot be handled by an Ethology-based Action Selection mechanism. The proposed method is verified on an AIBO robot as well as on the EASE platform.

A Motivation-Based Action-Selection-Mechanism Involving Reinforcement Learning

  • Lee, Sang-Hoon;Suh, Il-Hong;Kwon, Woo-Young
    • International Journal of Control, Automation, and Systems / v.6 no.6 / pp.904-914 / 2008
  • An action-selection mechanism (ASM) has been proposed that works as a fully connected finite state machine to deal with sequential behaviors and to allow a state in the task program to migrate to any state in the task, in which a primitive node associated with a state and its transition conditions can be easily inserted or deleted. Such a primitive node can also be learned by a shortest-path-finding-based reinforcement learning technique. Specifically, we define a behavioral motivation, having a state-dependent value, as a primitive node for action selection, and then sequentially construct a network of behavioral motivations in such a way that the value of a parent node is allowed to flow into a child node by a releasing mechanism. A vertical path in the network represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new behavior sequence is learned. To show the validity of our proposed ASM, experimental results of a mobile robot performing a pushing-a-box-into-a-goal (PBIG) task are illustrated.
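
The motivation network described in this abstract can be illustrated with a toy sketch. The Python snippet below is purely hypothetical (none of the class or function names come from the paper): it models a tree of behavioral-motivation nodes in which a parent's value flows into its children through a releasing gate, and the leaf reached by the highest-valued vertical path is the selected behavior.

```python
# Hypothetical sketch of a behavioral-motivation network: each node holds
# a state-dependent motivation value, a parent's value flows into its
# children when the releasing mechanism fires, and the leaf with the
# highest accumulated value along its vertical path is selected.

class MotivationNode:
    def __init__(self, name, base_value=0.0):
        self.name = name
        self.base_value = base_value   # state-dependent motivation
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

    def released_value(self, inherited=0.0, release=True):
        # Parent value flows into the child only when the releasing
        # gate is open (e.g. the parent's precondition is satisfied).
        return self.base_value + (inherited if release else 0.0)

def select_behavior(root):
    # Walk the tree; the leaf with the highest accumulated value along
    # its vertical path names the selected behavior.
    best = (None, float("-inf"))
    stack = [(root, 0.0)]
    while stack:
        node, inherited = stack.pop()
        value = node.released_value(inherited)
        if not node.children and value > best[1]:
            best = (node.name, value)
        for child in node.children:
            stack.append((child, value))
    return best[0]

# A toy "push box into goal" sequence: approach -> push.
root = MotivationNode("task", base_value=1.0)
approach = root.add_child(MotivationNode("approach_box", base_value=0.5))
push = approach.add_child(MotivationNode("push_box", base_value=0.3))
wander = root.add_child(MotivationNode("wander", base_value=0.2))

print(select_behavior(root))  # push_box accumulates 1.0 + 0.5 + 0.3
```

Here the vertical path task → approach_box → push_box accumulates the largest value, so the sequenced behavior wins over the flat "wander" alternative.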

An Action Selection Mechanism and Learning Algorithm for Intelligent Robot (지능로봇을 위한 행동선택 및 학습구조)

  • Yoon, Young-Min;Lee, Sang-Hoon;Suh, Il-Hong
    • Proceedings of the KIEE Conference / 2004.11c / pp.496-498 / 2004
  • An action-selection mechanism is proposed to deal with sequential behaviors, where associations between stimuli and behaviors are learned by a shortest-path-finding-based reinforcement learning technique. To be specific, we define a behavioral motivation as a primitive node for action selection, and then sequentially construct a network of behavioral motivations. A vertical path of the network represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new sequential behavior is learned. To show the validity of our proposed ASM, some experimental results on a "pushing-box-into-a-goal task" of a mobile robot are illustrated.

A Novel Action Selection Mechanism for Intelligent Service Robots

  • Suh, Il-Hong;Kwon, Woo-Young;Lee, Sang-Hoon
    • Proceedings of the ICROS (Institute of Control, Robotics and Systems) Conference / 2003.10a / pp.2027-2032 / 2003
  • For action selection as well as learning, simple associations between stimulus and response have been employed in most of the literature. But for successful task accomplishment, it is required that an animat can learn and express behavioral sequences. In this paper, we propose a novel action-selection mechanism to deal with sequential behaviors. For this, we define a behavioral motivation as a primitive node for action selection, and then hierarchically construct a network of behavioral motivations. A vertical path of the network represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new sequential behavior is learned. To show the validity of our proposed ASM, three 2-D grid-world simulations are illustrated.

Intelligent Robot Design: Intelligent Agent Based Approach (지능로봇: 지능 에이전트를 기초로 한 접근방법)

  • Kang, Jin-Shig
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.4 / pp.457-467 / 2004
  • In this paper, a robot is considered as an agent, and a robot structure is presented that consists of multiple sub-agents with the diverse capabilities required for a robot, such as perception, intelligence, and action. Each sub-agent is in turn composed of micro-agents ($\mu$-agents), each charged with an elementary action. The robot control structure has two sub-agents: a behavior-based reactive controller and an action-selection sub-agent. The action-selection sub-agent selects an action based on high-level actions and high performance, and has a learning mechanism based on reinforcement learning. The presented robot structure makes it easy to give intelligence to each element of action and offers a new approach to multi-robot control. The presented robot is simulated for two goals, chaotic exploration and obstacle avoidance, and is fabricated using an 8-bit microcontroller and tested experimentally.

Improvement Plan of Employment Camp using Action Learning : based on the case of learning community in P university (액션러닝을 활용한 취업캠프 개선방안 : P대학 학습공동체 사례를 중심으로)

  • LEE, Jian;KIM, Hyojeong;LEE, Yoona;JEONG, Yuseop;PARK, Suhong
    • Journal of Fisheries and Marine Sciences Education / v.29 no.3 / pp.677-688 / 2017
  • The purpose of this study is to analyze an action learning lesson on improving the job support program for P university students. As a research method, the related classes were conducted during the semester for students enrolled in the 'Human Resource Development' course at P university, and the learners' reflection journals and interview data were analyzed. As a result, the process moved through the problem selection stage and the team construction and team building stage, then searched for the root cause of the problem, clarified the problem, derived possible solutions, determined priorities, and created an action plan. Ten solutions to the practical problems of the job camps were derived. Through two interviews with field experts, these were narrowed to six final solutions focused on promoting employment and on students' participation in post-camp management: in order of priority, job board integration, vendor selection based on student feedback, an improved post-camp questionnaire, public relations using KakaoTalk, additional selection criteria for recruiting, and provision of recorded images of the camp. The results of this study suggest that the university's employment support program will strengthen students' employment competitiveness and serve as basic data for customized employment support programs.

A Study of Cooperative Algorithm in Multi Robots by Reinforcement Learning

  • Hong, Seong-Woo;Park, Gyu-Jong;Bae, Jong-Il;Ahn, Doo-Sung
    • Proceedings of the ICROS (Institute of Control, Robotics and Systems) Conference / 2001.10a / pp.149.1-149 / 2001
  • In a multi-robot environment, the action selection strategy is important for the cooperation and coordination of multiple agents. However, the overlap of actions selected individually by each robot makes the acquisition of cooperative behaviors less efficient. In addition, a complex and dynamic environment makes cooperation even more difficult. In this paper, we propose a control algorithm that enables each robot in a multi-robot system to determine its action for effective cooperation. We propose a cooperative algorithm with reinforcement learning to determine action selection, so that when the environment changes, each robot selects an appropriate behavior strategy intelligently. We employ ...

Action Selection by Voting with Learning Capability for a Behavior-based Control Approach (행동기반 제어방식을 위한 득점과 학습을 통한 행동선택기법)

  • Jeong, S.M.;Oh, S.R.;Yoon, D.Y.;You, B.J.;Chung, C.C.
    • Proceedings of the KIEE Conference / 2002.11c / pp.163-168 / 2002
  • The voting algorithm for action selection performs self-improvement via a reinforcement learning algorithm in a dynamic environment. The proposed voting algorithm improves the navigation of the robot by adapting the eligibility of the behaviors and determining the Command Set Generator (CGS). The navigator using the proposed voting algorithm corresponds with the CGS by giving weight values and taking reward values. It is necessary to decide which command set controls the mobile robot at a given time and to select among the candidate actions. The command sets were learned online by means of Q-learning. The action selector compares the Q-values of the navigator across heterogeneous behaviors. Finally, real-world experiments were carried out. Results show good performance for command-set selection as well as convergence of the Q-values.
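
As a rough, hypothetical sketch of the voting idea described above (not the paper's actual algorithm; all class names, behaviors, and numbers are invented for illustration), each behavior casts weighted votes over candidate actions, and the received reward adapts the behaviors' weights online:

```python
# Illustrative voting-based action selection: behaviors vote for
# candidate actions with preferences in [0, 1]; votes are scaled by
# per-behavior weights, and a reward signal adapts those weights.

ACTIONS = ["forward", "left", "right"]

class VotingSelector:
    def __init__(self, behaviors, lr=0.1):
        self.weights = {b: 1.0 for b in behaviors}
        self.lr = lr

    def select(self, votes):
        # votes: {behavior: {action: preference in [0, 1]}}
        tally = {a: 0.0 for a in ACTIONS}
        for behavior, prefs in votes.items():
            for action, pref in prefs.items():
                tally[action] += self.weights[behavior] * pref
        return max(tally, key=tally.get)

    def reinforce(self, votes, chosen, reward):
        # Behaviors that voted for the chosen action gain (or lose)
        # weight in proportion to the received reward.
        for behavior, prefs in votes.items():
            self.weights[behavior] += self.lr * reward * prefs.get(chosen, 0.0)

selector = VotingSelector(["avoid_obstacle", "seek_goal"])
votes = {"avoid_obstacle": {"left": 1.0},
         "seek_goal": {"forward": 0.8, "left": 0.1}}
action = selector.select(votes)       # "left" wins the weighted tally
selector.reinforce(votes, action, reward=1.0)
print(action, selector.weights)
```

After a positive reward, the obstacle-avoidance behavior (which voted strongly for the chosen action) gains the most weight, so its future votes count for more.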

An Analysis of Action Learning Process in Education Programs for Senior Officials, Engineers, Chief Executive Officers (고위공직 후보자-엔지니어-최고경영자 교육 프로그램의 액션러닝 프로세스 분석)

  • Jung, Hyun-Kon;Moon, Sung-Han
    • Journal of Digital Convergence / v.10 no.1 / pp.87-104 / 2012
  • The purpose of this study was to analyze and present the action learning process in education programs for senior officials, engineers, and chief executive officers. The main contents of this study focus on analysis of orientation activities for each step of the action learning process: project selection, problem clarification, review of data research and analysis, the process of seeking alternatives and selecting execution items, and comparison and analysis of execution results.

Implementation of the Agent using Universal On-line Q-learning by Balancing Exploration and Exploitation in Reinforcement Learning (강화 학습에서의 탐색과 이용의 균형을 통한 범용적 온라인 Q-학습이 적용된 에이전트의 구현)

  • 박찬건;양성봉
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.672-680 / 2003
  • A shopbot is a software agent whose goal is to maximize a buyer's satisfaction by automatically gathering price and quality information of goods, as well as service information, from on-line sellers. In response to shopbots' activities, sellers on the Internet need agents called pricebots that can help them maximize their own profits. In this paper we adopt Q-learning, one of the model-free reinforcement learning methods, as a price-setting algorithm for pricebots. A Q-learned agent increases profitability and eliminates cyclic price wars when compared with agents using the myoptimal (myopically optimal) pricing strategy. Q-learning needs to select a sequence of state-action pairs for convergence. When the uniform random method of selecting state-action pairs is used, the number of accesses to the Q-table needed to obtain the optimal Q-values is quite large; therefore, it is not appropriate for universal on-line learning in a real-world environment. This phenomenon occurs because uniform random selection reflects the uncertainty of exploitation of the optimal policy. In this paper, we propose a Mixed Nonstationary Policy (MNP), which consists of both an auxiliary Markov process and the original Markov process. MNP tries to keep a balance between exploration and exploitation in reinforcement learning. Our experimental results show that the Q-learning agent using MNP converges to the optimal Q-values about 2.6 times faster on average than uniform random selection.
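
The paper's MNP policy is not reproduced here; as a simpler, standard stand-in for balancing exploration and exploitation, the following sketch uses epsilon-greedy tabular Q-learning on a made-up two-state pricing toy (all states, actions, and reward values are invented for illustration):

```python
# Epsilon-greedy tabular Q-learning: with probability epsilon the agent
# explores a random action, otherwise it exploits the current best
# Q-value. Toy pricing problem: "high_price" pays off in state 0,
# "low_price" in state 1, and the state alternates each step.

import random

def epsilon_greedy(Q, state, actions, epsilon):
    if random.random() < epsilon:
        return random.choice(actions)                      # explore
    return max(actions, key=lambda a: Q[(state, a)])       # exploit

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.3):
    states, actions = [0, 1], ["low_price", "high_price"]
    reward = {(0, "high_price"): 1.0, (0, "low_price"): 0.2,
              (1, "low_price"): 1.0, (1, "high_price"): 0.2}
    Q = {(s, a): 0.0 for s in states for a in actions}
    state = 0
    for _ in range(episodes):
        action = epsilon_greedy(Q, state, actions, epsilon)
        r = reward[(state, action)]
        next_state = 1 - state                             # deterministic toggle
        best_next = max(Q[(next_state, a)] for a in actions)
        # Standard Q-learning update toward the bootstrapped target.
        Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
        state = next_state
    return Q

random.seed(0)
Q = q_learning()
print(round(Q[(0, "high_price")], 2), round(Q[(0, "low_price")], 2))
```

Without the exploration branch, the agent can lock onto whichever action its zero-initialized Q-table happens to favor first; the epsilon term guarantees every state-action pair keeps being sampled, which is the convergence concern the abstract raises.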