• Title/Summary/Keyword: Dynamic Learning Control

Search results: 353

Drives and Motion Control Teaching based on Distance Laboratory and Remote Experiments

  • Vogelsberger, Markus A.;Macheiner, Peter;Bauer, Pavol;Wolbank, Thomas M.
    • Journal of Power Electronics
    • /
    • v.10 no.6
    • /
    • pp.579-586
    • /
    • 2010
  • This paper presents the organisation and the technical structure of a remotely controlled laboratory in the field of highly dynamic drives and motion control. It is part of the PEMCWebLab project, whose goal is to provide students with practical experience on real systems in the field of power electronics and drives. The whole project is based on clear targets and leading ideas. A set of experiments can be performed remotely on a real system to identify, step by step, a two-axis positioning system and to design different cascaded control loops. Each experiment is defined by its goals, the content of how to achieve them, and a verification of the results as well as the achieved learning outcomes. After a short description of the PEMCWebLab project, the structure of the remote control is presented together with the hardware applied. One important point is error handling, as real machines and power electronics are involved. Finally, a selection of experiments is presented to show the graphical user interface and the sequence of the laboratory.

Training Avatars Animated with Human Motion Data (인간 동작 데이타로 애니메이션되는 아바타의 학습)

  • Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.231-241
    • /
    • 2006
  • Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions becomes the bottleneck for interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on a machine learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interactions with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
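The Q-learning idea the abstract relies on can be sketched independently of the motion data. The tabular version below, run on a hypothetical one-dimensional corridor standing in for the avatar's state space, shows the trial-and-error value update at the heart of the approach; the environment, reward, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import random

def train_q_learning(n_states, n_actions, step, episodes=2000,
                     alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn action values by trial and error."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection with random tie-breaking
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                best = max(Q[s])
                a = random.choice([x for x in range(n_actions) if Q[s][x] == best])
            s2, r, done = step(s, a)
            # temporal-difference update toward reward + discounted next value
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# toy corridor: action 1 moves right, action 0 moves left; reward at state 4
def corridor_step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4
```

After training, the greedy policy at each state prefers the action leading toward the rewarded state, which is the behavior-selection mechanism the paper scales up to motion data.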

Analysis on Lightweight Methods of On-Device AI Vision Model for Intelligent Edge Computing Devices (지능형 엣지 컴퓨팅 기기를 위한 온디바이스 AI 비전 모델의 경량화 방식 분석)

  • Hye-Hyeon Ju;Namhi Kang
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.1-8
    • /
    • 2024
  • On-device AI technology, which runs AI models on edge devices to support real-time processing and enhance privacy, is attracting attention. As intelligent IoT is applied to various industries, services utilizing on-device AI technology are increasing significantly. However, general deep learning models require substantial computational resources for inference and learning. Therefore, various lightweighting methods such as quantization and pruning have been suggested for operating deep learning models on embedded edge devices. Among these methods, this paper analyzes how to lighten deep learning models and apply them to edge computing devices, focusing on pruning technology. In particular, we use dynamic and static pruning techniques to evaluate the inference speed, accuracy, and memory usage of a lightweight AI vision model. The analysis in this paper can be applied to intelligent video control systems or video security systems in autonomous vehicles, where real-time processing is highly required. In addition, it is expected that the content can be used more effectively in various IoT services and industries.
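Static magnitude pruning, one of the techniques the paper compares, can be illustrated in a few lines. The sketch below zeroes the smallest-magnitude weights of a layer; the flat-list representation and the threshold rule are simplifications for illustration (dynamic pruning would instead decide which units to skip at inference time, per input).

```python
def magnitude_prune(weights, sparsity):
    """Static magnitude pruning: zero out the smallest-magnitude weights.

    weights  -- a layer's parameters as a flat list of floats
    sparsity -- fraction of weights to remove, e.g. 0.5 for 50 %
    """
    k = int(len(weights) * sparsity)       # number of weights to drop
    if k == 0:
        return list(weights)
    # threshold: magnitude of the k-th smallest weight (ties may drop extras)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

The zeroed weights can then be stored sparsely and skipped during inference, which is where the memory and speed gains measured in the paper come from.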

The Influences of Computer-Assisted Instruction Emphasizing the Particulate Nature of Matter and Problem-Solving Strategy on High School Students' Learning in Chemistry (물질의 입자성과 문제 해결 전략을 강조한 컴퓨터 보조 수업이 고등학생들의 화학 학습에 미치는 효과)

  • Noh, Tae-Hee;Kim, Chang-Min;Cha, Jeong-Ho;Jeon, Kyung-Moon
    • Journal of The Korean Association For Science Education
    • /
    • v.18 no.3
    • /
    • pp.337-345
    • /
    • 1998
  • This study examined the influences of computer-assisted instruction (CAI) upon high school students' conceptual understanding, algorithmic problem-solving ability, learning motivation, and attitudes toward chemistry instruction. CAI programs were designed to supply animated molecular motions emphasizing the particulate, dynamic nature of matter, and immediate feedback according to students' response types at each stage of a four-stage problem-solving strategy (understanding, planning, solving, and reviewing). The CAI and control groups (2 classes) were selected from a girls' high school in Seoul and taught about the gas laws for four class hours. Data analysis indicated that the students in the CAI group scored significantly higher than those in the control group on the tests of conceptual understanding and algorithmic problem-solving ability. In addition, the students in the CAI group performed significantly better on the tests of learning motivation and attitudes toward chemistry instruction.


Functional Electrical Stimulation with Augmented Feedback Training Improves Gait and Functional Performance in Individuals with Chronic Stroke: A Randomized Controlled Trial

  • Yu, Kyung-Hoon;Kang, Kwon-Young
    • The Journal of Korean Physical Therapy
    • /
    • v.29 no.2
    • /
    • pp.74-79
    • /
    • 2017
  • Purpose: The purpose of this study was to compare the effects of FES gait training with augmented feedback to FES alone on gait and functional performance in individuals with chronic stroke. Methods: This study used a pretest-posttest randomized control design. The subjects who signed the agreement were randomly divided into an experimental group (n=12) and a control group (n=12). The experimental group performed two types of augmented feedback training (knowledge of performance and knowledge of results) together with FES, while the control group received FES on the TA and GM without augmented feedback and then walked for 30 minutes over 40 meters. Both groups received training five times a week for four weeks. Results: The group that received FES with augmented feedback training showed significantly greater improvement in single limb support (SLS) and gait velocity than the group that received FES alone. In addition, the timed up and go (TUG) test and six-minute walk test (6MWT) showed significant improvement in the group that received FES with augmented feedback compared to the group that received FES alone. Conclusion: Compared with existing FES gait training, augmented feedback yielded improvements in gait parameters, walking ability, and dynamic balance. Augmented feedback can be an important method for motivating motor learning in stroke patients.

High Performance Control of IPMSM using AIPI Controller (AIPI 제어기를 이용한 IPMSM의 고성능 제어)

  • Kim, Do-Yeon;Ko, Jae-Sub;Choi, Jung-Sik;Jung, Chul-Ho;Jung, Byung-Jin;Chung, Dong-Hwa
    • Proceedings of the KIEE Conference
    • /
    • 2009.04b
    • /
    • pp.225-227
    • /
    • 2009
  • The conventional fixed-gain PI controller is very sensitive to step changes of the command speed, parameter variations, and load disturbances. Precise speed control of an interior permanent magnet synchronous motor (IPMSM) drive is a complex issue due to the nonlinear coupling among its winding currents and the rotor speed, as well as the nonlinear electromagnetically developed torque. Therefore, there is a need to tune the PI controller parameters online to ensure optimum drive performance over a wide range of operating conditions. This paper proposes an artificial intelligent PI (AIPI) controller for an IPMSM drive using an adaptive learning mechanism (ALM) and a fuzzy neural network (FNN). The proposed controller is developed to ensure accurate speed control of the IPMSM drive under system disturbances, with speed estimation performed by an artificial neural network (ANN) controller. The PI controller parameters are optimized by the ALM-FNN over all possible operating conditions in a closed-loop vector control scheme. The validity of the proposed controller is verified by results under different dynamic operating conditions.
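The online gain-tuning idea can be sketched with a much simpler stand-in for the ALM-FNN: a PI controller whose gains are nudged by a gradient-style rule driven by the tracking error. The plant, gains, and adaptation rule below are illustrative assumptions, not the paper's design.

```python
class AdaptivePI:
    """PI speed controller whose gains are adjusted online.

    The adaptation rule is a simple MIT-rule-style gradient step driven by
    the tracking error -- a hypothetical stand-in for the paper's ALM-FNN.
    """
    def __init__(self, kp=1.0, ki=0.5, rate=1e-3):
        self.kp, self.ki, self.rate = kp, ki, rate
        self.integral = 0.0

    def update(self, ref, meas, dt):
        e = ref - meas
        self.integral += e * dt
        u = self.kp * e + self.ki * self.integral
        # nudge the gains in the direction that reduces the squared error
        self.kp += self.rate * e * e
        self.ki += self.rate * e * self.integral
        return u
```

On a simple first-order plant this drives the output to the reference while the gains drift toward better values; a real IPMSM drive involves coupled d-q current loops and torque nonlinearities that this sketch deliberately ignores.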


Nonlinear intelligent control systems subjected to earthquakes by fuzzy tracking theory

  • Z.Y. Chen;Y.M. Meng;Ruei-Yuan Wang;Timothy Chen
    • Smart Structures and Systems
    • /
    • v.33 no.4
    • /
    • pp.291-300
    • /
    • 2024
  • Model uncertainty, system delay, and drive dynamics can be considered normal uncertainties, and the main source of uncertainty in a seismic control system is related to the nature of the simulated seismic error. In this case, optimizing the management strategy for one particular seismic record will not yield the best results for another. In this article, we propose a framework for the online management of active structural control systems under seismic uncertainty. For this purpose, the concept of reinforcement learning is used for online optimization of the active structural control software. The controller consists of a differential controller with an unplanned gain ratio, whose gain is tuned using an online reinforcement learning algorithm. In addition, the proposed controller includes a dynamic state forecaster to address the delay problem. To evaluate the performance of the proposed controllers, thousands of ground motion data sets were processed and grouped according to their spectra using fuzzy clustering techniques with spatial hazard estimation. Finally, the controller is implemented in a laboratory-scale configuration and its operation is simulated on a vibration table using cluster locations and some actual seismic data. The test results show that the proposed controller effectively withstands strong seismic interference with delay. The goals of this paper contribute towards access to adequate, safe, and affordable housing and basic services, the promotion of inclusive and sustainable urbanization and participation, and the implementation of sustainable, disaster-resilient buildings and sustainable human settlement planning and management. The simulation results are believed to be achievable in the near future through the ongoing development of AI and control theory.

Learning based relay selection for reliable content distribution in smart class application

  • Kim, Taehong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.2894-2909
    • /
    • 2015
  • As the number of mobile devices such as smartphones and tablets explodes, the need for new services and applications is also rapidly increasing. The smart class application is one such emerging application, in which most contents are distributed to all members of a class simultaneously. Relay nodes must be selected to cover radio shadow areas and extend coverage, but existing algorithms in a smart class environment suffer from high control packet overhead and delay from exchanging topology information among all pairs of nodes to select relay nodes. In addition, the relay selection procedure must be repeated to adapt to dynamic topology changes caused by link status changes or device movement. This paper proposes a learning-based relay selection algorithm to overcome these problems. The key idea is that every node keeps track of its relay quality in a fully distributed manner, where the RQI (Relay Quality Indicator) is newly defined to measure both the ability to receive packets from the content source and the ability to successfully relay them to successors. The RQI of each node is updated whenever it receives or relays a broadcast packet, and the node with the higher RQI is selected as a relay node in a distributed, run-time manner. Thus, the proposed algorithm not only removes the overhead of obtaining prior knowledge to select relay nodes, but also adapts to dynamic topology changes. Network simulation and experimental results show that the proposed algorithm provides efficient and reliable content distribution to all members of a smart class, as well as adaptability against network dynamics.
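A minimal sketch of the RQI bookkeeping might look as follows; the exponential-moving-average smoothing and the product combination of the two qualities are assumptions for illustration, not the paper's exact formula.

```python
class RelayNode:
    """Tracks a Relay Quality Indicator (RQI) using only local observations.

    RQI combines how reliably the node hears the content source and how
    reliably its own relays reach successors.  The EMA weighting (alpha)
    and the product combination are illustrative assumptions.
    """
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.recv_quality = 0.0   # ability to receive from the source
        self.relay_quality = 0.0  # ability to deliver to successors

    def on_receive(self, success):
        # updated whenever a broadcast packet arrives (or is missed)
        self.recv_quality += self.alpha * (float(success) - self.recv_quality)

    def on_relay(self, delivered, attempted):
        # updated whenever this node relays a packet to its successors
        ratio = delivered / attempted if attempted else 0.0
        self.relay_quality += self.alpha * (ratio - self.relay_quality)

    @property
    def rqi(self):
        return self.recv_quality * self.relay_quality

def select_relay(nodes):
    """Pick the node with the highest RQI at run time."""
    return max(nodes, key=lambda n: n.rqi)
```

Because each node maintains its own RQI from packets it already sees, no extra topology-discovery traffic is needed, which is the overhead reduction the paper claims.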

Development of Autonomous Algorithm Using an Online Feedback-Error Learning Based Neural Network for Nonholonomic Mobile Robots (온라인 피드백 에러 학습을 이용한 이동 로봇의 자율주행 알고리즘 개발)

  • Lee, Hyun-Dong;Myung, Byung-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.5
    • /
    • pp.602-608
    • /
    • 2011
  • In this study, a method of designing a neurointerface using a neural network (NN) is proposed for controlling nonholonomic mobile robots. Following the concept of virtual master-slave robots, a partially stable inverse dynamic model of the master robot is acquired online through the NN by applying a feedback-error learning method, in which the feedback controller is assumed to be a PD compensator for the nonholonomic robot. The NN for online feedback-error learning is composed of an input layer with six units for the inputs $x_i$, i=1~6, a hidden layer with two units for the hidden outputs $o_j$, j=1~2, and an output layer with two units for the outputs ${\tau}_k$, k=1~2. A tracking control problem is demonstrated through simulations of a nonholonomic mobile robot with two independent driving wheels. The initial q value was set to [0, 5, ${\pi}$].
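The feedback-error learning scheme can be sketched on a one-degree-of-freedom toy plant: the PD compensator's output serves as the teaching signal for an online-trained feedforward model. A linear model stands in for the paper's six-input NN, and the plant, gains, and learning rate are illustrative assumptions.

```python
def feedback_error_learning(plant_step, ref, steps=5000, dt=0.01,
                            kp=8.0, kd=2.0, lr=0.2):
    """Feedback-error learning on a 1-DOF toy plant.

    A linear feedforward model u_ff = w[0]*q_ref + w[1]*dq_ref stands in
    for the NN.  The PD compensator's output u_fb is used as the teaching
    (error) signal for the feedforward model -- the core idea of
    feedback-error learning.
    """
    w = [0.0, 0.0]          # feedforward weights (the "network")
    q, dq = 0.0, 0.0        # plant position and velocity
    for _ in range(steps):
        x = [ref, 0.0]                              # desired position, velocity
        u_ff = w[0] * x[0] + w[1] * x[1]            # learned inverse model
        u_fb = kp * (ref - q) + kd * (0.0 - dq)     # PD compensator
        # delta rule: the feedback command is the error on the feedforward model
        w[0] += lr * u_fb * x[0] * dt
        w[1] += lr * u_fb * x[1] * dt
        q, dq = plant_step(q, dq, u_ff + u_fb, dt)
    return w, q

# toy plant: unit mass on a spring with viscous friction, q'' = u - 4q - dq
def spring_step(q, dq, u, dt):
    ddq = u - 4.0 * q - dq
    return q + dq * dt, dq + ddq * dt
```

As the feedforward model absorbs the plant's inverse dynamics, the PD term shrinks toward zero, leaving the learned model to do the work; the paper applies the same principle with an NN and a nonholonomic two-wheel robot.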

An Auto Obstacle Collision Avoidance System using Reinforcement Learning and Motion VAE (강화학습과 Motion VAE 를 이용한 자동 장애물 충돌 회피 시스템 구현)

  • Zheng Si;Taehong Gu;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.4
    • /
    • pp.1-10
    • /
    • 2024
  • In the fields of computer animation and robotics, reaching a destination while avoiding obstacles has always been a difficult task, and generating appropriate motions while planning a route is even more challenging. Recently, researchers have been actively studying how to generate character motions by modifying and utilizing the VAE (Variational Auto-Encoder), a data-driven generative model. Building on this, in this study the latent space of the MVAE model is learned using a reinforcement learning method[1]. With the learned policy, the character can reach its destination with natural motions while avoiding both static and dynamic obstacles. The character easily avoids obstacles moving in random directions, and it is shown experimentally that performance is improved and learning time is greatly reduced compared to existing approaches.