• Title/Summary/Keyword: multi-action

Search results: 367

A STUDY OF THE MULTI-ACTION FORGING DIE SET CONTROLLED BY THE SCREWS MECHANISM

  • Yang Jin-Bin;Fang Jue-Jung
    • Proceedings of the Korean Society for Technology of Plasticity Conference / 2003.10b / pp.198-201 / 2003
  • The multi-action forging process is one of the developing directions of forging technology. In this study, a multi-action die driven by a screw mechanism is designed and developed, and forging simulations are conducted using plasticine to investigate the optimum conditions for the screw design. The results show that the design variables are optimal when the upper screw rod has a diameter of 30 mm and a screw angle of 60° and the lower screw tube has an outer diameter of 60 mm and a screw angle of 23.4°. This makes the relative velocity between the upper punch and the die two to one, which is the expected condition. The material flow in the plasticine forgings is uniform. Therefore, it is feasible to use the screw set as the multi-action mechanism for controlling the movement of the multi-action forging die set.

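A quick way to sanity-check the reported two-to-one relative velocity is to compare the leads of the two screws. The sketch below is a minimal check, assuming the common helix relation lead = π·d·tan(screw angle) and that both screws rotate at the same speed; neither assumption is stated in the abstract.

```python
import math

def screw_lead(diameter_mm: float, helix_angle_deg: float) -> float:
    """Axial advance per revolution, assuming lead = pi * d * tan(helix angle)."""
    return math.pi * diameter_mm * math.tan(math.radians(helix_angle_deg))

# Optimum design variables reported in the abstract
upper_lead = screw_lead(30.0, 60.0)    # upper screw rod
lower_lead = screw_lead(60.0, 23.4)    # lower screw tube

# If both screws turn at the same rate, axial velocities scale with lead,
# so the punch-to-die velocity ratio is the ratio of the leads.
print(f"upper lead = {upper_lead:6.1f} mm/rev")
print(f"lower lead = {lower_lead:6.1f} mm/rev")
print(f"velocity ratio ~ {upper_lead / lower_lead:.2f} : 1")  # ~2 : 1
```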

An Action Decision and Execution Method of Robotic Soccer System based on Neural Networks (신경회로망을 이용한 로봇축구 시스템의 행동결정 및 행동실행 방법)

  • Lee, Kyoung-Tae;Kim, Hak-Il;Kim, Choon-Woo
    • Proceedings of the KIEE Conference / 1998.11b / pp.543-545 / 1998
  • Robotic soccer is a multi-agent system that plays a soccer game under given rules. The system consists of three mobile robots, a vision sensor, an action decision module, an action execution module, and a communication module. This paper presents a new action decision method using multi-layer neural networks.

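The abstract does not give the network layout, so the following is only an illustrative sketch of an action decision step with a small multi-layer network: a hypothetical state vector (ball and robot positions) is mapped to one of a few discrete actions. The feature layout, layer sizes, and action set are assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state: (ball_x, ball_y, robot_x, robot_y, robot_heading)
ACTIONS = ["shoot", "pass", "defend", "chase"]

# One hidden layer; weights would normally be trained, here they are random.
W1 = rng.standard_normal((5, 16))
b1 = np.zeros(16)
W2 = rng.standard_normal((16, len(ACTIONS)))
b2 = np.zeros(len(ACTIONS))

def decide_action(state: np.ndarray) -> str:
    """Forward pass: tanh hidden layer, softmax output, pick the best action."""
    h = np.tanh(state @ W1 + b1)
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return ACTIONS[int(np.argmax(probs))]

state = np.array([0.3, -0.1, 0.0, 0.0, 0.5])
print(decide_action(state))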

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper, we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network completes two tasks: AU detection and facial view detection. AU detection is a multi-label problem, and facial view detection is a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Our method is effective and performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991, which shows that our multi-task, multi-label model achieves good performance on both tasks.
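
As a rough illustration of the multi-task, multi-label idea (not the authors' architecture, which uses residual blocks and multilayer fusion), the sketch below shares a backbone between a multi-label AU head trained with a per-unit sigmoid loss and a single-label view head trained with a softmax loss. The feature size, AU count, view count, and stand-in backbone are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskAUNet(nn.Module):
    def __init__(self, feat_dim: int = 128, num_aus: int = 10, num_views: int = 9):
        super().__init__()
        # Stand-in backbone; the paper uses a residual network with multilayer fusion.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.au_head = nn.Linear(feat_dim, num_aus)      # multi-label: one logit per AU
        self.view_head = nn.Linear(feat_dim, num_views)  # single-label: one class per view

    def forward(self, x):
        f = self.backbone(x)
        return self.au_head(f), self.view_head(f)

model = MultiTaskAUNet()
images = torch.randn(4, 3, 64, 64)
au_labels = torch.randint(0, 2, (4, 10)).float()  # multi-label targets
view_labels = torch.randint(0, 9, (4,))           # single-label targets

au_logits, view_logits = model(images)
loss = (nn.BCEWithLogitsLoss()(au_logits, au_labels)
        + nn.CrossEntropyLoss()(view_logits, view_labels))
loss.backward()
```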

Explicit Dynamic Coordination Reinforcement Learning Based on Utility

  • Si, Huaiwei;Tan, Guozhen;Yuan, Yifu;Peng, Yanfei;Li, Jianping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.792-812 / 2022
  • Multi-agent systems often need coordination to learn a task more effectively. Although the introduction of deep learning has addressed the state space problem, multi-agent learning remains infeasible because of the joint action space. A large-scale joint action space can be made sparse through an implicit or explicit coordination structure, which ensures reasonable coordinated actions. In general, a multi-agent system is dynamic, which makes the relations among agents and the coordination structure dynamic as well. An explicit coordination structure can therefore better represent the coordinative relationships among agents and achieve better coordination. Inspired by the maximization of social group utility, we dynamically construct a factor graph as an explicit coordination structure to express the coordinative relationships according to the utilities among agents, and we estimate the joint action values based on local utility transfer over the factor graph. We apply these techniques to multiple intelligent vehicle systems, where the state and action spaces are large and there are many interactions among agents. The results on multiple intelligent vehicle systems demonstrate the efficiency and effectiveness of our proposed methods.
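
As an illustration of why a coordination structure tames the joint action space, the sketch below scores joint actions with pairwise utility factors defined only on the edges of a hypothetical coordination graph, rather than a full joint utility table. Exact maximization by enumeration stands in for the paper's utility-transfer message passing; the graph, utilities, and sizes are made up.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

n_agents, n_actions = 4, 3
# Coordination graph: only these agent pairs interact.
edges = [(0, 1), (1, 2), (2, 3)]
# One pairwise utility table per edge (would come from learned values).
pair_utils = {e: rng.standard_normal((n_actions, n_actions)) for e in edges}

def joint_utility(joint_action):
    """Sum of pairwise utilities over the coordination graph edges."""
    return sum(pair_utils[(i, j)][joint_action[i], joint_action[j]] for i, j in edges)

# Brute-force maximization; message passing on the factor graph would
# avoid enumerating all n_actions ** n_agents joint actions.
best = max(itertools.product(range(n_actions), repeat=n_agents), key=joint_utility)
print("best joint action:", best, "utility:", round(joint_utility(best), 3))
```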

Cooperative Action Controller of Multi-Agent System (다 개체 시스템의 협동 행동제어기)

  • Kim, Young-Back;Jang, Hong-Min;Kim, Dae-Jun;Choi, Young-Kiu;Kim, Sung-Shin
    • Proceedings of the KIEE Conference / 1999.07g / pp.3024-3026 / 1999
  • This paper presents a cooperative action controller for a multi-agent system. To achieve an objective, i.e., to win a game, each robot must have its own roles and actions and work with the others. The presented cooperative action controller consists of role selection, action selection, and execution layers. In the first layer, a fuzzy logic controller is used. In the second layer, each robot selects its own action and generates its own path trajectory. In the third layer, each robot performs its own action based on the velocity information sent from the main computer. Finally, simulations show that each robot selects proper roles and cooperates in its actions under the proposed controller.

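The abstract only names the three-layer structure (fuzzy role selection, action/path selection, velocity-based execution). The sketch below mirrors that layering with made-up triangular memberships, roles, and velocity values; the actual fuzzy rules, action sets, and velocity protocol are not given in the abstract.

```python
def fuzzy_role(dist_to_ball: float) -> str:
    """Layer 1: pick a role from simple (hypothetical) triangular memberships."""
    attacker = max(0.0, 1.0 - dist_to_ball / 1.0)     # close to the ball
    defender = max(0.0, min(dist_to_ball / 1.0, (2.0 - dist_to_ball) / 1.0))
    goalie   = max(0.0, (dist_to_ball - 1.0) / 1.0)   # far from the ball
    return max([("attacker", attacker), ("defender", defender), ("goalie", goalie)],
               key=lambda x: x[1])[0]

def select_action(role: str) -> str:
    """Layer 2: map the role to an action / path intent."""
    return {"attacker": "drive_to_ball", "defender": "block_lane", "goalie": "hold_goal"}[role]

def execute(action: str) -> tuple:
    """Layer 3: turn the action into wheel velocities (placeholder values)."""
    return {"drive_to_ball": (0.8, 0.8), "block_lane": (0.4, 0.6), "hold_goal": (0.0, 0.0)}[action]

for d in (0.2, 1.0, 1.8):
    role = fuzzy_role(d)
    print(d, role, execute(select_action(role)))
```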

A Study of Cooperative Algorithm in Multi Robots by Reinforcement Learning

  • Hong, Seong-Woo;Park, Gyu-Jong;Bae, Jong-Il;Ahn, Doo-Sung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.149.1-149 / 2001
  • In a multi-robot environment, the action selection strategy is important for the cooperation and coordination of multiple agents. However, the overlap of actions selected individually by each robot makes the acquisition of cooperative behaviors less efficient. In addition, a complex and dynamic environment makes cooperation even more difficult. In this paper, we therefore propose a control algorithm that enables each robot to determine its action for effective cooperation in a multi-robot system. We propose a cooperative algorithm with reinforcement learning to determine the action selection. When the environment changes, each robot intelligently selects an appropriate behavior strategy. We employ ...

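Since the abstract only names reinforcement learning for action selection, the sketch below shows a plain tabular Q-learning update with epsilon-greedy selection as one concrete instance; the states, actions, and reward shaping for the multi-robot task are hypothetical, not the authors' formulation.

```python
import random
from collections import defaultdict

ACTIONS = ["approach", "support", "wait"]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

Q = defaultdict(float)  # Q[(state, action)] -> value, one table per robot

def select_action(state):
    """Epsilon-greedy action selection over the robot's own Q-table."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy interaction loop with a made-up reward signal.
state = "far_from_ball"
for _ in range(100):
    action = select_action(state)
    reward = 1.0 if action == "approach" else 0.0
    next_state = "near_ball" if action == "approach" else "far_from_ball"
    update(state, action, reward, next_state)
    state = next_state
```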

Multi Agent Multi Action system for AI care service for elderly living alone based on radar sensor (레이더 센서 기반 독거노인 AI 돌봄 서비스를 위한 다중 에이전트 다중 액션 시스템)

  • Chae-Byeol Lee;Kwon-Taeg Choi;Jung-Ho Ahn;Kyu-Chang Jang
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.67-68 / 2023
  • The Multi Agent Multi Action approach proposed in this paper implements a conversational system with greater scalability than the conventional Single Agent Single Action structure. By dividing the system into multiple agents, each responsible for handling a specific action, a more flexible and efficient conversational system can be built; grouping agents specialized for different tasks maximizes work efficiency and improves the user experience.

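A minimal sketch of the routing idea described in the abstract: instead of one agent handling every action, utterances are dispatched to specialized agents, each owning a small set of actions. The keyword-based router, agent names, and actions below are illustrative assumptions; a real care service would use intent classification and richer handlers.

```python
class Agent:
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # action name -> handler

    def handle(self, action, utterance):
        return self.actions[action](utterance)

# Hypothetical agents specialized for different care tasks.
health_agent = Agent("health", {"check_vitals": lambda u: "Checking vital signs."})
safety_agent = Agent("safety", {"fall_alert": lambda u: "Notifying the caregiver."})
chat_agent   = Agent("chat",   {"small_talk": lambda u: "How was your day?"})

# Router: picks (agent, action) for an utterance via simple keyword matching.
ROUTES = [("dizzy", health_agent, "check_vitals"),
          ("fell", safety_agent, "fall_alert")]

def route(utterance):
    for keyword, agent, action in ROUTES:
        if keyword in utterance:
            return agent.handle(action, utterance)
    return chat_agent.handle("small_talk", utterance)

print(route("I feel a bit dizzy today"))   # -> health agent
print(route("I just fell in the kitchen")) # -> safety agent
print(route("I'm bored"))                  # -> chat agent
```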

Human Action Recognition Via Multi-modality Information

  • Gao, Zan;Song, Jian-Ming;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • Journal of Electrical Engineering and Technology / v.9 no.2 / pp.739-748 / 2014
  • In this paper, we propose pyramid appearance and global structure action descriptors on both RGB and depth motion history images, together with a model-free method for human action recognition. In the proposed algorithm, we first construct motion history images (MHIs) for both the RGB and depth channels, while depth information is used to filter the RGB information. Different action descriptors are then extracted from the depth and RGB MHIs to represent the actions, and a multi-modality collaborative representation and recognition model, in which the multi-modality information enters the objective function naturally and information fusion and action recognition are performed together, is proposed to classify human actions. To demonstrate the superiority of the proposed method, we evaluate it on the MSR Action3D and DHA datasets, well-known datasets for human action recognition. Large-scale experiments show that our descriptors are robust, stable, and efficient; their performance is better than that of state-of-the-art algorithms, and the combined descriptors perform much better than any single descriptor. Moreover, our proposed model outperforms state-of-the-art methods on both the MSR Action3D and DHA datasets.
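
The descriptors are built on motion history images. The sketch below shows the standard MHI recurrence (set to τ where the frame difference exceeds a threshold, otherwise decay by one), which can be applied to the RGB and depth streams separately; the threshold, τ, and frame sizes here are arbitrary, and the pyramid and global-structure descriptors from the paper are not reproduced.

```python
import numpy as np

def update_mhi(mhi: np.ndarray, prev: np.ndarray, curr: np.ndarray,
               tau: float = 30.0, thresh: float = 25.0) -> np.ndarray:
    """One step of the classic motion history image recurrence."""
    motion = np.abs(curr.astype(np.float32) - prev.astype(np.float32)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))

# Toy sequence: the same recurrence works for depth frames as for grayscale RGB.
frames = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(10)]
mhi = np.zeros((120, 160), dtype=np.float32)
for prev, curr in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, curr)

print(mhi.min(), mhi.max())  # recent motion keeps high values, old motion decays
```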

Modulation in Action Potentials of Rat Hippocampal Neurons Measured on Multi-Channel Electrodes During Ultrasound Stimulation (다채널 전극을 이용한 초음파 자극 시 쥐 해마 신경 세포의 활동 전위 검출)

  • Han, H.S.;Jeon, H.J.;Hwang, S.Y.;Lee, Y.N.;Byun, K.M.;Jun, S.B.;Kim, T.S.
    • Journal of Biomedical Engineering Research / v.34 no.4 / pp.177-181 / 2013
  • It is known that ultrasound affects action potentials in neurons, but the underlying principles of ultrasonic neural stimulation have not yet been clearly elucidated. In this study, we measured the action potentials of rat hippocampal neurons cultured on multi-electrode arrays during ultrasound stimulation. On most of the electrodes, it was observed that ultrasound stimulation increased the frequency of action potentials (i.e., spikes).
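
A minimal sketch of the kind of spike-rate comparison described: detect threshold crossings on a channel's trace and compare firing rates inside and outside the stimulation window. The synthetic signal, threshold rule, and window times are assumptions; the paper's actual recording and spike-sorting pipeline is not described in the abstract.

```python
import numpy as np

fs = 25_000                      # sampling rate (Hz), typical for MEA recordings
t = np.arange(0, 10.0, 1.0 / fs) # 10 s trace
rng = np.random.default_rng(2)

# Synthetic channel: noise plus extra negative spikes during the "ultrasound" window.
signal = rng.normal(0, 5, t.size)
stim_on, stim_off = 4.0, 6.0
spike_times = np.concatenate([rng.uniform(0, 10, 20),
                              rng.uniform(stim_on, stim_off, 20)])
for st in spike_times:
    signal[int(st * fs)] -= 60.0  # simple negative-going spike

def spike_count(x, lo, hi, thresh=-30.0):
    """Count negative-going threshold crossings between lo and hi seconds."""
    seg = x[int(lo * fs):int(hi * fs)]
    below = seg < thresh
    return int(np.count_nonzero(below[1:] & ~below[:-1]))

rate_stim = spike_count(signal, stim_on, stim_off) / (stim_off - stim_on)
rate_base = spike_count(signal, 0.0, stim_on) / stim_on
print(f"baseline: {rate_base:.1f} Hz, during stimulation: {rate_stim:.1f} Hz")
```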

Multi-Agent Reinforcement Learning Model based on Fuzzy Inference (퍼지 추론 기반의 멀티에이전트 강화학습 모델)

  • Lee, Bong-Keun;Chung, Jae-Du;Ryu, Keun-Ho
    • The Journal of the Korea Contents Association / v.9 no.10 / pp.51-58 / 2009
  • Reinforcement learning is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. In the multi-agent case especially, the state space and action space become enormous compared with the single-agent case, so the most effective available measures must be taken when selecting the action strategy for effective reinforcement learning. This paper proposes a multi-agent reinforcement learning model based on a fuzzy inference system in order to improve learning speed and select effective actions in a multi-agent setting. The paper verifies an effective action selection strategy through evaluation tests based on RoboCup Keepaway, one of the useful test-beds for multi-agent systems. Our proposed model can be applied to evaluate the efficiency of various intelligent multi-agent systems and also to the strategy and tactics of robot soccer systems.
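
The abstract combines fuzzy inference with reinforcement learning to cope with the large state space. The sketch below shows one common way to do that, using fuzzy membership degrees to blend the action values of a few coarse state labels (a fuzzy Q-learning flavor); the membership shapes, state labels, and Keepaway-style actions are illustrative assumptions, not the authors' rule base.

```python
import numpy as np

ACTIONS = ["hold_ball", "pass_near", "pass_far"]
STATES = ["opponent_close", "opponent_medium", "opponent_far"]

# Q-table over fuzzy state labels x actions; normally learned, random here.
rng = np.random.default_rng(3)
Q = rng.standard_normal((len(STATES), len(ACTIONS)))

def memberships(opponent_dist: float) -> np.ndarray:
    """Triangular membership degrees of the crisp distance in each fuzzy state."""
    close  = max(0.0, 1.0 - opponent_dist / 5.0)
    medium = max(0.0, 1.0 - abs(opponent_dist - 5.0) / 5.0)
    far    = max(0.0, min((opponent_dist - 5.0) / 5.0, 1.0))
    mu = np.array([close, medium, far])
    return mu / mu.sum()

def fuzzy_action(opponent_dist: float) -> str:
    """Blend per-state action values by membership, then pick the best action."""
    mu = memberships(opponent_dist)
    blended_q = mu @ Q            # weighted combination of per-state action values
    return ACTIONS[int(np.argmax(blended_q))]

for d in (1.0, 5.0, 9.0):
    print(d, fuzzy_action(d))
```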