• Title/Summary/Keyword: Action Learning Process


Comparative Study on Self-leadership, Team Efficacy, Problem Solving Process and Task Satisfaction of Nursing Students in Response to Clinical Training (임상 실습과제 방법에 따른 간호학생의 셀프리더십, 팀효능감, 문제해결과정 및 과제만족도 비교연구)

  • Kim, Jung Hyo;Park, Mi Kyung
    • The Journal of Korean Academic Society of Nursing Education
    • /
    • v.20 no.4
    • /
    • pp.482-490
    • /
    • 2014
  • Purpose: This research compares self-leadership, team efficacy, problem solving processes and task satisfaction in response to the teaching methods applied to nursing students, and determines whether variations exist. Method: This research used a nonequivalent control group pretest-posttest design. The subjects were 36 learners taught with the action learning method and 39 learners taught with the conventional nursing course method, and the research took place from October through December 2012. Results: Prior to the training, the general features and measurable variables of the two groups were similar; after the training, self-leadership, team efficacy, problem solving process and task satisfaction were elevated in both groups compared to pre-training. In particular, the action learning group scored significantly higher than the nursing course group in the problem solving process (t=2.92, p=.005) and task satisfaction (t=2.54, p=.013). Conclusion: It is recommended that educators not only conduct the practice training course with existing teaching methods, but also incorporate action learning.

A Survey on Deep Reinforcement Learning Libraries (심층강화학습 라이브러리 기술동향)

  • Shin, S.J.;Cho, C.L.;Jeon, H.S.;Yoon, S.H.;Kim, T.Y.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.6
    • /
    • pp.87-99
    • /
    • 2019
  • Reinforcement learning is a machine learning paradigm in which an agent repeats an observation-action-reward process to assess and predict the values of possible future action sequences, incrementally reinforcing the desired behavior for a given observation. Thanks to recent advancements in deep learning, reinforcement learning has evolved into deep reinforcement learning, which has produced promising results in various control and optimization domains such as games, robotics, autonomous vehicles, computing, and industrial control. Alongside this trend, a number of programming libraries have been developed for bringing deep reinforcement learning into a variety of applications. In this article, we briefly review and summarize 10 representative deep reinforcement learning libraries and compare them from a development project perspective.
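
As a concrete instance of the observation-action-reward loop described above, the sketch below runs tabular Q-learning on a toy chain environment. The environment, hyperparameters, and epsilon-greedy selection are illustrative assumptions, not drawn from any of the surveyed libraries.

```python
import random

class ChainEnv:
    """Toy 5-state chain: action 1 moves right, action 0 moves left;
    reaching the last state yields reward 1.0 and ends the episode."""
    N_STATES, N_ACTIONS = 5, 2

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = min(self.s + 1, self.N_STATES - 1) if a == 1 else max(self.s - 1, 0)
        done = self.s == self.N_STATES - 1
        return self.s, (1.0 if done else 0.0), done

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """The observation-action-reward loop with a tabular Q-learning update."""
    rng = random.Random(seed)
    env = ChainEnv()
    Q = [[0.0] * ChainEnv.N_ACTIONS for _ in range(ChainEnv.N_STATES)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = env.step(a)
            # one-step temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
print([round(max(row), 2) for row in Q])  # learned state values rise toward the goal
```

The libraries surveyed in the article generalize this loop: neural networks replace the Q-table, and the environment is supplied through a standard interface rather than hand-written.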

Reflective Abstraction and Operational Instruction of Mathematics (반영적 추상화와 조작적 수학 학습-지도)

  • 우정호;홍진곤
    • Journal of Educational Research in Mathematics
    • /
    • v.9 no.2
    • /
    • pp.383-404
    • /
    • 1999
  • This study began with an epistemological question about the nature of mathematical cognition in relation to the learner's activity. By examining Piaget's theory of 'reflective abstraction', which can be an answer to this question, we tried to draw suggestions for mathematics education in practice. 'Reflective abstraction' is formed through the coordination of the epistemic subject's actions, while 'empirical abstraction' is formed from the characteristics of observable concrete objects. The reason Piaget distinguished these two kinds of abstraction is that the foundation for the peculiar objectivity and necessity of mathematics can be drawn from the coordination of actions shared by all epistemic subjects. Moreover, because the mechanism of reflective abstraction, unlike empirical abstraction, does not construct a new operation by simply changing the result of the previous construction, but forms a re-construction that includes the previously constructed structure as a special case, the system developed by this mechanism is able to maintain its rationality throughout. The re-construction of the intellectual system through reflective abstraction can be explained as a continuous spiral alternation between two complementary processes, 'réfléchissement' and 'réflexion': réfléchissement moves the action to a higher level through the processes of 'intériorisation' and 'thématisation'; réflexion is a process of 'équilibration' between the assimilation and the accommodation of the imbalance caused by the movement of the level. The operational learning principles of theorists like Aebli, who intended to embody Piaget's operational constructivism, attempt to explain the construction of operations through 'internalization' of actions, but do not sufficiently emphasize the integration of the structure through the 'coordination' of actions and the ensuing discontinuous evolution of the learning level.
Thus, based on this examination of the essential characteristics and mechanism of reflective abstraction, this study presents the following principles of teaching and learning: ① the principle of the operational interpretation of knowledge, ② the principle of the structural interpretation of operations, ③ the principle of intériorisation, ④ the principle of thématisation, ⑤ the principle of coordination, réflexion, and integration, ⑥ the principle of the discontinuous evolution of the learning level.

Case Study on Dynamics of RDA PLA Model with Agri-SMEs (농업인 참여식 실천학습모델 개발과 성과분석 -농촌진흥청 강소농 사업을 중심으로-)

  • Kim, Sa Gyun;Lee, Mi Hwa;Park, Heun Dong
    • Journal of Agricultural Extension & Community Development
    • /
    • v.19 no.3
    • /
    • pp.551-579
    • /
    • 2012
  • This case study aims to explore how the RDA PLA model affects agri-SMEs' empowerment. The model, an agri-business management renovation program built around a main workshop, was conducted from March to December 2011 with agri-SMEs and extension officials nationwide by the RDA. In particular, the packaged action learning process in the model used participatory action research. This study collected data through participant observation, interviews, situational analysis and a systematic review of discourse as qualitative methods. To validate and identify empirical results, the study also used statistical analysis as part of a mixed method. Incorporating various pedagogic methods and business coaching skills, the model proceeded from a workshop at the RDA to on-farm business coaching as follow-up, CoPs' activities, and local ATCs' extension services by each actor. The dynamic process and the effects of each stage led to changes in the farmers' innovative knowledge, skills, attitudes, practices and aspirations regarding their farm businesses. The RDA PLA model was developed on the basis of previous practices and research, which provided a configured picture of the holistic action learning process. For the statistical research, this study focused on 279 farmers who had participated in the program as respondents; it shows that their income and benefits increased through their renovative practices in farm business. The sampling group was surveyed on four indicators: products, customers, quality and cost. The 15% level of contribution of education to economic impact is quoted from a previous paper. Even with some limitations of the public sector, the RDA PLA model actively suggests a paradigm shift in agricultural HRD and the development of an alternative extension-service system.

RULE-BASE SIZE-REDUCTION TECHNIQUES IN A LEARNING FUZZY CONTROLLER

  • Lembessis, E.;Tanscheit, R.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.761-764
    • /
    • 1993
  • In this paper we consider techniques for reducing the number of rules generated in learning fuzzy controllers of the state-space action-reinforcement type; the techniques can be simply implemented and behave well in the presence of process noise. Fewer rules lead to better performance, less contradiction in estimating the controller action, and shorter execution times, and they make it easier for a human to comprehend the generated rules and possibly intervene.

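
The abstract above does not spell out the reduction techniques, so the sketch below illustrates two generic ones on a hypothetical rule representation: pruning rarely fired rules and merging near-duplicate consequents. The `Rule` fields, thresholds, and fire-count weighting are assumptions for illustration, not the paper's method.

```python
from dataclasses import dataclass

# Hypothetical rule representation (an assumption for illustration): each
# learned rule maps a discretized state cell to a control-action estimate,
# with a count of how often the rule fired during learning.
@dataclass
class Rule:
    cell: tuple      # discretized state, e.g. (error_level, delta_error_level)
    action: float    # consequent: estimated control action
    fired: int       # usage count accumulated during learning

def reduce_rulebase(rules, min_fired=3, merge_tol=0.05):
    """Two generic size-reduction steps:
    1) prune rules that fired too rarely to be trustworthy;
    2) merge rules in the same cell whose consequents nearly agree."""
    kept = [r for r in rules if r.fired >= min_fired]          # step 1: prune
    merged = {}
    for r in kept:                                             # step 2: merge
        m = merged.get(r.cell)
        if m is not None and abs(m.action - r.action) <= merge_tol:
            total = m.fired + r.fired
            # fire-count-weighted average of the two consequents
            m.action = (m.action * m.fired + r.action * r.fired) / total
            m.fired = total
        elif m is None:
            merged[r.cell] = r
        # else: conflicting consequents in one cell; keep the earlier rule
    return list(merged.values())

rules = [Rule((0, 1), 0.50, 10), Rule((0, 1), 0.52, 6),
         Rule((1, 1), -0.30, 1), Rule((2, 0), 0.10, 8)]
print(len(reduce_rulebase(rules)))  # → 2: one rule pruned, two rules merged
```

A smaller rule base also directly serves the paper's comprehension goal: a human can scan a handful of consolidated rules, whereas hundreds of raw learned rules obscure the controller's behavior.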
Implementation of the Agent using Universal On-line Q-learning by Balancing Exploration and Exploitation in Reinforcement Learning (강화 학습에서의 탐색과 이용의 균형을 통한 범용적 온라인 Q-학습이 적용된 에이전트의 구현)

  • 박찬건;양성봉
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.7_8
    • /
    • pp.672-680
    • /
    • 2003
  • A shopbot is a software agent whose goal is to maximize buyers' satisfaction by automatically gathering the price and quality information of goods, as well as services, from on-line sellers. In response to shopbots' activities, sellers on the Internet need agents called pricebots that can help them maximize their own profits. In this paper we adopt Q-learning, one of the model-free reinforcement learning methods, as a price-setting algorithm for pricebots. A Q-learning agent increases profitability and eliminates cyclic price wars when compared with agents using the myoptimal (myopically optimal) pricing strategy. Q-learning needs to select a sequence of state-action pairs for convergence. When the uniform random method is used to select state-action pairs, the number of accesses to the Q-table needed to obtain the optimal Q-values is quite large, so it is not appropriate for universal on-line learning in a real-world environment. This phenomenon occurs because uniform random selection reflects the uncertainty of exploiting the optimal policy. In this paper, we propose a Mixed Nonstationary Policy (MNP), which consists of both an auxiliary Markov process and the original Markov process. MNP tries to keep a balance between exploration and exploitation in reinforcement learning. Our experimental results show that the Q-learning agent using MNP converges to the optimal Q-values about 2.6 times faster on average than uniform random selection.
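
The abstract does not specify the Mixed Nonstationary Policy, so the sketch below instead shows a standard way to balance exploration and exploitation when selecting actions: Boltzmann (softmax) selection, where a temperature parameter interpolates between uniform random choice and greedy exploitation. All names and parameters here are illustrative, not the paper's MNP.

```python
import math
import random

def softmax_action(q_values, temperature, rng=random):
    """Boltzmann (softmax) exploration: a high temperature behaves like
    uniform random selection (pure exploration); a low temperature
    approaches greedy selection (pure exploitation)."""
    prefs = [math.exp(q / temperature) for q in q_values]
    total = sum(prefs)
    # sample an action index with probability proportional to its preference
    r, acc = rng.random() * total, 0.0
    for action, p in enumerate(prefs):
        acc += p
        if r <= acc:
            return action
    return len(prefs) - 1  # guard against floating-point round-off

rng = random.Random(0)
q_row = [1.0, 0.0]  # estimated values of two hypothetical price-setting actions
greedy_share = sum(softmax_action(q_row, 0.1, rng) == 0 for _ in range(1000)) / 1000
random_share = sum(softmax_action(q_row, 100.0, rng) == 0 for _ in range(1000)) / 1000
print(greedy_share, random_share)  # low temperature exploits; high one explores
```

Annealing the temperature downward over time gives the behavior the abstract motivates: broad exploration early in learning, concentrated exploitation once the Q-values are trustworthy.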

Design of Autonomous Mobile Robot System Based on Artificial Immune Network and Internet (인공 면역망과 인터넷에 의한 자율이동로봇 시스템 설계)

  • Lee, Dong-Je;Lee, Min-Jung;Choi, Young-Kiu
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.50 no.11
    • /
    • pp.522-531
    • /
    • 2001
  • Recently, conventional artificial intelligence (AI) approaches have been employed to build action selectors for autonomous mobile robots (AMRs). However, in these approaches, the decision-making process for choosing an action among multiple competence modules is still an open question. Much research has focused on reactive planning systems such as the biological immune system. In this paper, we construct an action selector for an AMR based on an artificial immune network and the Internet. Information from vision sensors is used for the antibodies. We propose a learning method for the artificial immune network that uses an evolutionary algorithm to produce antibodies automatically. An Internet-based environment for the AMR action selector shows the usefulness of the proposed learning artificial immune network.

Fuzzy Q-learning using Distributed Eligibility (분포 기여도를 이용한 퍼지 Q-learning)

  • 정석일;이연정
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.5
    • /
    • pp.388-394
    • /
    • 2001
  • Reinforcement learning is a kind of unsupervised learning method in which an agent learns control rules from experiences acquired through interactions with its environment. Eligibility is used to resolve the credit-assignment problem, one of the important problems in reinforcement learning. Conventional eligibilities, such as the accumulating eligibility and the replacing eligibility, make ineffective use of the rewards acquired during the learning process, since only the one executed action for a visited state is learned. In this paper, we propose a new eligibility, called the distributed eligibility, with which not only the executed action but also neighboring actions in a visited state are learned. A fuzzy Q-learning algorithm using the proposed eligibility is applied to a cart-pole balancing problem, which shows the superiority of the proposed method over conventional methods in terms of learning speed.

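
A minimal sketch of the distributed-eligibility idea: the executed action receives full credit for the TD error, while neighboring actions in the same state receive a reduced share. The neighborhood weighting (`spread`) and one-step update are assumed for illustration; the paper's exact fuzzy formulation may differ.

```python
def distributed_eligibility(n_actions, taken, spread=0.5):
    """Eligibility over a discrete action set: 1.0 for the executed action
    and a reduced share (`spread`, an assumed weighting) for its immediate
    neighbors, so they also absorb part of the TD error."""
    e = [0.0] * n_actions
    e[taken] = 1.0
    if taken > 0:
        e[taken - 1] = spread
    if taken < n_actions - 1:
        e[taken + 1] = spread
    return e

def td_update(q_row, taken, td_error, alpha=0.1, spread=0.5):
    """Apply one TD update to a state's row of Q-values, scaled per action
    by its distributed eligibility."""
    e = distributed_eligibility(len(q_row), taken, spread)
    return [q + alpha * td_error * ei for q, ei in zip(q_row, e)]

q = td_update([0.0] * 5, taken=2, td_error=1.0)
print(q)  # neighboring actions 1 and 3 also learn: [0.0, 0.05, 0.1, 0.05, 0.0]
```

This is what makes the eligibility "distributed": a single reward updates a neighborhood of actions rather than one table entry, which is especially natural when nearby actions (e.g. similar cart forces) have similar effects.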
A RESEARCH ANALYSIS ON EFFECTIVE LEARNING IN INTERNATIONAL CONSTRUCTION JOINT VENTURES

  • L.T. Zhang;W.F. Wong;Charles Y.J. Cheah
    • International conference on construction engineering and project management
    • /
    • 2007.03a
    • /
    • pp.450-458
    • /
    • 2007
  • This paper presents the results of a statistical analysis and the research findings focusing on the learning aspect of the process of international joint ventures (IJVs). The content of this paper is derived from a sample of 96 field cases based on a proposed conceptual model of effective learning for international construction joint ventures (ICJVs). The paper presents a brief review of the conceptual model and its hypotheses and summarizes the key results of the statistical analysis, including factor and multiple regression analyses, testing the validity of the proposed conceptual model and its associated research hypotheses. Among other findings, the research confirms that ICJVs provide an excellent platform for in-action learning for construction organizations and suggests that good learning outcomes can be reaped by a company that has a clear learning intent from the beginning and subsequently takes corresponding learning actions during the full process of the joint venture.

Computational Model of a Mirror Neuron System for Intent Recognition through Imitative Learning of Objective-directed Action (목적성 행동 모방학습을 통한 의도 인식을 위한 거울뉴런 시스템 계산 모델)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.6
    • /
    • pp.606-611
    • /
    • 2014
  • The understanding of another's behavior is a fundamental cognitive ability for primates, including humans. Recent neurophysiological studies suggest that this ability rests on a direct matching from visual observation onto an individual's own motor repertoire. Mirror neurons are known as the core regions for this matching and are treated as providing the functionality of intent recognition, on the basis of imitative learning of an observed action acquired from visual information about a goal-directed action. In this paper, we review previous work modeling the function and mechanisms of mirror neurons and propose a computational model of a mirror neuron system that can be used in human-robot interaction environments. The major focus of the computational model is the reproduction of an individual's motor repertoire with different embodiments. The model aims at a continuous process that combines sensory evidence, prior task knowledge and a goal-directed matching of action observation and execution. We also propose a biologically inspired, plausible equation model.