• Title/Summary/Keyword: Learning Agent


Development of the e-Learning Contents for the First Programming Course (초보자 프로그래밍 개발을 위한 e-Learning 콘텐츠 개발)

  • Kim Jung-Sook
    • KSCI Review
    • /
    • v.14 no.1
    • /
    • pp.213-219
    • /
    • 2006
  • Customized e-Learning services are needed to keep pace not only with advances in wireless, mobile, and hardware technology, but also with advances in multimedia processing. In particular, beginners taking their first programming course must be provided with personalized learning: they require repeated practice to acquire programming skills, and they show different learning outcomes depending on their individual capability. In this paper, we develop new e-Learning contents that provide an individualized service for each learner, together with a simulation of program execution, to maximize the learning effect.

  • PDF

A Learning Agent for Automatic Bookmark Classification (북 마크 자동 분류를 위한 학습 에이전트)

  • Kim, In-Cheol;Cho, Soo-Sun
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.455-462
    • /
    • 2001
  • The World Wide Web has become one of the major services provided through the Internet. When searching the vast web space, users rely on bookmarking facilities to record sites of interest encountered during navigation. A typical problem with bookmarking is that the list of bookmarks loses coherent organization once it becomes too lengthy, ceasing to function as a practical finding aid. To maintain the bookmark file in an efficient, organized manner, the user has to classify every bookmark newly added to the file and update the folders. This paper introduces our learning agent, called BClassifier, which automatically classifies bookmarks by analyzing the contents of the corresponding web documents. The chief source of training examples is the set of bookmarks the user has already classified into folders by subject. Additionally, web pages found under the top categories of the Yahoo site are collected and included as training examples to diversify the subject categories represented. Our agent employs the naive Bayesian learning method, a well-tested, probability-based categorization technique. The paper also outlines and evaluates the outcome of our experiments, including a comparison of the naive Bayesian method against other learning methods such as k-Nearest Neighbor and TFIDF.

  • PDF
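The naive Bayesian categorization the abstract describes can be sketched in a few lines of Python. The folder names, training documents, and whitespace tokenization below are illustrative assumptions, not BClassifier's actual data or features:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (folder, text) pairs, e.g. already-classified bookmarks."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for folder, text in docs:
        class_counts[folder] += 1
        for w in text.lower().split():
            word_counts[folder][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    """Pick the folder with the highest log posterior, with Laplace smoothing."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for c in class_counts:
        lp = math.log(class_counts[c] / total)          # log prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[c][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [
    ("sports", "football match score goal team"),
    ("sports", "tennis open final match"),
    ("tech", "python agent learning code"),
    ("tech", "web browser bookmark software"),
]
model = train_nb(docs)
print(classify("new football team goal", *model))  # → sports
```

In the paper's setting, the training pairs would come from the user's existing bookmark folders plus pages crawled from Yahoo's top categories.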

Modeling and Simulation on One-vs-One Air Combat with Deep Reinforcement Learning (깊은강화학습 기반 1-vs-1 공중전 모델링 및 시뮬레이션)

  • Moon, Il-Chul;Jung, Minjae;Kim, Dongjun
    • Journal of the Korea Society for Simulation
    • /
    • v.29 no.1
    • /
    • pp.39-46
    • /
    • 2020
  • The utilization of artificial intelligence (AI) in engagements has been a key research topic in the defense field over the last decade. Pursuing this utilization requires a realistic simulation in which to train an AI engagement agent on a synthetic but realistic battlefield. This paper is a case study of training an AI agent to operate with hardware realism in air-warfare dog-fighting. In particular, it models the pursuit of an opponent in a gun-only dog-fighting engagement, in which the AI agent must decide on the pursuit style and intensity. We developed a realistic hardware simulator and trained the agent with reinforcement learning. The training was successful, resulting in a lead pursuit with decreased engagement time and a high reward.
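A reward signal of the kind the abstract reports (closing on the opponent and shorter engagements score higher) might be shaped as in the sketch below. The function, arguments, and weights are hypothetical illustrations, not the paper's actual reward:

```python
def pursuit_reward(distance, prev_distance, step, hit, max_steps=500):
    """Hypothetical shaping reward for a gun-only pursuit: closing on the
    opponent scores, each time step costs, and an early hit earns a bonus."""
    r = (prev_distance - distance) * 0.1    # closure rate favors a lead pursuit
    r -= 0.01                               # per-step penalty shortens engagements
    if hit:
        r += 10.0 * (1 - step / max_steps)  # earlier hits are worth more
    return r

print(pursuit_reward(90.0, 100.0, step=10, hit=False))  # → 0.99
```

An agent maximizing this kind of signal is pushed toward the outcome the paper observed: a lead pursuit that ends the engagement quickly.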

A Theoretical Investigation on Agency to Facilitate the Understanding of Student-Centered Learning Communities in Science Classrooms (학생 중심의 과학 학습 공동체 이해를 위한 행위주체성에 대한 이론적 고찰)

  • Ha, Heesoo;Kim, Heui-Baik
    • Journal of The Korean Association For Science Education
    • /
    • v.39 no.1
    • /
    • pp.101-113
    • /
    • 2019
  • This study aims to explore which aspects of student agency have previously been studied and the ways agent practices have been investigated in learning communities in research on science education. Results reveal five aspects of agency related to students' actions in a learning community: epistemic agency, transformative agency, educated action in science, disciplinary agency, and material agency. We delineated how agency is captured in epistemic practices, as described in the literature on each of the aforementioned aspects. We also probed into the three approaches by which previous research has examined the practices of students as agents that construct learning communities. These approaches are (a) the investigation of students' actions as representative of the agency of an entire learning community, (b) the exploration of the effects of focused student action on the structure of activity, and (c) the investigation of interactions between students as agents. We discussed the implications of previous research on the basis of each approach to understanding the diverse features of student-centered learning communities. The present work contributes to the exploration and support of students' practices as agents in the learning communities in science classrooms.

Understanding and Designing Teachable Agent (교수가능 에이전트(Teachable Agent)의 개념적 이해와 설계방안)

  • 김성일;김원식;윤미선;소연희;권은주;최정선;김문숙;이명진;박태진
    • Korean Journal of Cognitive Science
    • /
    • v.14 no.3
    • /
    • pp.13-21
    • /
    • 2003
  • This study presents a design for a Teachable Agent (TA) and its theoretical background. A TA is an intelligent agent that students, acting as tutors, teach, pose questions to, and give feedback to using a concept map. The TA consists of four independent modules: a Teach Module, a Q&A Module, a Test Module, and a Resource Module. In the Teach Module, students teach the TA by constructing a concept map. In the Q&A Module, students and the TA ask and answer each other's questions through an interactive window. To assess the TA's knowledge and provide feedback to students, the Test Module contains a set of predetermined questions that the TA should pass. From the Resource Module, students can search for and look up the information they need to teach, ask questions, and give feedback whenever they want. The TA is expected to give student tutors an active role in learning and a positive attitude toward the subject matter by enhancing both their cognition and their motivation.

  • PDF
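The Teach and Q&A Modules can be illustrated with a minimal concept-map sketch: the student tutor adds labeled propositions, and the TA answers only from what it has been taught. The class, relation names, and example facts are assumptions for illustration, not the paper's implementation:

```python
class ConceptMap:
    """Toy concept map: nodes are concepts, labeled edges are propositions
    taught by the student tutor (Teach Module)."""
    def __init__(self):
        self.edges = {}  # (concept, relation) -> concept

    def teach(self, source, relation, target):
        self.edges[(source, relation)] = target

    def answer(self, source, relation):
        # Q&A Module sketch: answer from taught knowledge, or admit ignorance
        return self.edges.get((source, relation), "I don't know yet")

ta = ConceptMap()
ta.teach("mammal", "is-a", "animal")
ta.teach("whale", "is-a", "mammal")
print(ta.answer("whale", "is-a"))      # → mammal
print(ta.answer("whale", "lives-in"))  # → I don't know yet
```

A Test Module in this sketch would simply run a fixed list of `answer` queries and compare them against expected targets.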

Cognitive Approach for Building Intelligent Agent (지능 에이전트 구현의 인지적 접근)

  • Tae Kang-Soo
    • Journal of Internet Computing and Services
    • /
    • v.5 no.2
    • /
    • pp.97-105
    • /
    • 2004
  • The reason an intelligent agent cannot understand the representation of its own perception or activity lies in the traditional syntactic approach, which translates a semantic feature into a simulated string. To implement an autonomously learning intelligent agent, Cohen introduced an experientially grounded semantic approach in which the system learns a contentful representation of physical schemas by physically interacting with the environment through its own sensors and effectors. We propose that negation is a meta-level schema that enables an agent to recognize its own physical schemas. To improve planning efficiency, Graphplan introduces control rules that manipulate inconsistencies between planning operators, but it cannot cognitively understand negation and suffers from a redundancy problem. By introducing a negative function, not, IPP solves this problem, but its approach is still syntactic and inefficient in both time and space. In this paper, we propose that representing a negative fact as a positive atom, which we call an opposite concept, is a very efficient technique for implementing a cognitive agent, and we present empirical results supporting this hypothesis.

  • PDF
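The "opposite concept" idea, a negative fact stored as a positive atom so the state remains a set of positives, can be sketched as follows. The predicate pairs and the door example are hypothetical illustrations, not the paper's domain:

```python
# Each predicate is paired with its opposite concept; asserting one
# retracts the other, so negation never appears explicitly in the state.
OPPOSITE = {"open": "closed", "closed": "open", "on": "off", "off": "on"}

def assert_fact(state, predicate, obj):
    state.discard((OPPOSITE[predicate], obj))  # retract the contradictory atom
    state.add((predicate, obj))
    return state

state = set()
assert_fact(state, "open", "door")
assert_fact(state, "closed", "door")  # i.e. not-open(door), as a positive atom
print(state)  # → {('closed', 'door')}
```

A planner over such states never needs a special `not` operator: checking "not open(door)" is just checking membership of `('closed', 'door')`.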

Build reinforcement learning AI process for cooperative play with users (사용자와의 협력 플레이를 위한 강화학습 인공지능 프로세스 구축)

  • Jung, Won-Joe
    • Journal of Korea Game Society
    • /
    • v.20 no.1
    • /
    • pp.57-66
    • /
    • 2020
  • The goal is to use reinforcement learning to implement an AI that replaces the less-favored Supporter role in MOBA games. The game rules, environment, observation information, rewards, and punishments are implemented with ML-Agents. The experiment was divided into groups P and C, and the cumulative reward values and numbers of deaths were compared to draw conclusions. In group C, the mean cumulative reward was 3.3 higher than in group P, and the total mean number of deaths was 3.15 lower. This confirms that the agent performed cooperative play that minimized deaths and maximized rewards.
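A reward-and-punishment scheme of the kind the abstract describes for a Supporter agent might look like the sketch below. The event types and every weight are illustrative assumptions, not the paper's actual values:

```python
def supporter_reward(healed, assisted_kills, ally_deaths, own_deaths):
    """Hypothetical per-episode reward for a MOBA Supporter: cooperative
    actions are rewarded, deaths (especially the agent's own) are punished."""
    r = 0.0
    r += 0.2 * healed          # reward healing allies
    r += 1.0 * assisted_kills  # reward assists over solo aggression
    r -= 1.0 * ally_deaths     # punish failing to protect allies
    r -= 2.0 * own_deaths      # punish the Supporter's own death most
    return r

print(supporter_reward(healed=2, assisted_kills=1, ally_deaths=0, own_deaths=1))
```

Under such a scheme, maximizing cumulative reward and minimizing deaths, the comparison the paper reports between groups, are two views of the same objective.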

Emotional Intelligence System for Ubiquitous Smart Foreign Language Education Based on Neural Mechanism

  • Dai, Weihui;Huang, Shuang;Zhou, Xuan;Yu, Xueer;Ivanović, Mirjana;Xu, Dongrong
    • Journal of Information Technology Applications and Management
    • /
    • v.21 no.3
    • /
    • pp.65-77
    • /
    • 2014
  • Ubiquitous learning has aroused great interest and is becoming a new way to deliver foreign language education in today's society. However, how to increase learners' initiative and community cohesion remains an issue that deserves more profound research. Emotional intelligence can detect a learner's emotional reactions online and thereby stimulate interest and willingness to participate by adjusting teaching skills and creating fun learning experiences; this is, in fact, the core concept of smart education. Building on previous research, this paper derives a neural mechanism model for analyzing learners' emotional characteristics in a ubiquitous environment, and discusses the intelligent monitoring and automatic recognition of emotions from learners' speech signals and behavior data by a multi-agent system. Finally, a framework for an emotional intelligence system is proposed for smart foreign language education in ubiquitous learning.

Strategic Coalition for Improving Generalization Ability of Multi-agent with Evolutionary Learning (진화학습을 이용한 다중에이전트의 일반화 성능향상을 위한 전략적 연합)

  • 양승룡;조성배
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.2
    • /
    • pp.101-110
    • /
    • 2004
  • In dynamic systems, such as social and economic systems, complex interactions emerge among members, and their behaviors adapt to the changing environment. In many cases, an individual's behavior can be modeled as a stimulus-response system in a dynamic environment. In this paper, we use the Iterated Prisoner's Dilemma (IPD) game, which is simple yet capable of representing complex problems, to model such dynamic systems. We propose strategic coalitions consisting of many agents and simulate their emergence in a co-evolutionary learning environment. We also introduce the concept of confidence for agents in a coalition and show how such confidences help improve the generalization ability of the whole coalition. Experimental results demonstrate that co-evolutionary learning with coalitions and confidence yields better-performing strategies that generalize well.
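The IPD setting the abstract builds on can be sketched with the standard payoff matrix and two textbook strategies; the strategies and round count are illustrative, not the paper's evolved coalitions:

```python
# Standard IPD payoffs: (row player, column player) for Cooperate/Defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated game; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # copy opponent's last move
always_defect = lambda opp: "D"
print(play(tit_for_tat, always_defect))  # → (9, 14)
```

In the paper's setup, such strategies would be evolved rather than hand-written, and a coalition would combine many agents' moves weighted by confidence.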

Region-based Q-learning for Autonomous Mobile Robot Navigation (자율 이동 로봇의 주행을 위한 영역 기반 Q-learning)

  • 차종환;공성학;서일홍
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.174-174
    • /
    • 2000
  • Q-learning, based on discrete state and action spaces, is the most widely used reinforcement learning method. However, it requires a great deal of memory and a long time to learn all actions of every state when applied to real mobile robot navigation, where the state and action spaces are continuous. Region-based Q-learning is a reinforcement learning method that estimates the action values of a real state by using a triangular action distribution model and the state's relationship with previously defined and learned neighboring states. This paper proposes a new region-based Q-learning method that assigns a reward only when the agent reaches the target, and that escapes locally optimal paths by adjusting the random action rate. Applied to mobile robot navigation, this method uses less memory, lets the robot move smoothly, and learns an optimal solution quickly. Computer simulations are presented to show the validity of our method.

  • PDF
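Two of the abstract's ideas, a reward assigned only at the target and an adjusted random-action rate, can be shown with plain tabular Q-learning on a toy 1-D corridor. The corridor task, decay schedule, and all constants are illustrative assumptions; the paper's region-based value estimation over continuous states is not reproduced here:

```python
import random

random.seed(0)  # deterministic for illustration

def q_learning(n_states=10, goal=9, episodes=300, alpha=0.5, gamma=0.9):
    """Tabular Q-learning with a sparse goal-only reward and decaying epsilon."""
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    for ep in range(episodes):
        eps = max(0.05, 1.0 - ep / episodes)    # adjusted random action rate
        s = 0
        while s != goal:
            if random.random() < eps:
                a = random.randrange(2)         # explore
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # exploit
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0      # reward only at the target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
print([0 if q[0] > q[1] else 1 for q in Q[:-1]])  # greedy action per state
```

The high early epsilon lets the agent wander off locally optimal paths; as epsilon decays, the greedy policy converges toward moving right in every state.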