• Title/Abstract/Keyword: Environmental Adaptive Robot

Search results: 14 items (processing time: 0.021 s)

환경 적응형 로봇의 기계식 중력보상 기반 다리 구조 (Leg Structure based on Counterbalance Mechanism for Environmental Adaptive Robot)

  • 박희창;오장석;조용준;윤해룡;홍형길;강민수;박관형;송재복
    • 한국기계가공학회지
    • /
    • Vol. 21, No. 8
    • /
    • pp.9-18
    • /
    • 2022
  • As the COVID-19 pandemic continues, the demand for robotic technology that can be applied to face-to-face tasks such as delivery and transportation is increasing. Although these technologies have been developed and applied in various industries, the robots can only be operated in tidy indoor environments and have limitations in terms of payload. To overcome these problems, we developed a 2-degree-of-freedom (DOF) environment-adaptive robot leg with a double 1-DOF counterbalance mechanism (CBM) based on a wire and roller. The double 1-DOF CBM is applied to the two revolute joints of the proposed robot leg to compensate for the weight of the mobile robot platform and part of the payload. In addition, the link of the robot leg is designed as a parallelogram structure based on a belt and pulley to enable efficient control of the mobile platform. In this study, we propose the principle and structure of a CBM suitable for the robot leg and the design of a counterbalance robot leg module for environment-adaptive control. Further, we verify the performance of the proposed counterbalance robot leg through dynamic simulations and experiments.
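As a rough illustration of the gravity-compensation idea described in the abstract above (not the paper's wire-roller mechanism), the sketch below uses the classic zero-free-length-spring counterbalance for a single revolute joint: choosing the spring stiffness so that k·a·b = m·g·l cancels the gravity torque at every joint angle. All masses, lengths, and attachment offsets are assumed values for illustration only.

```python
import numpy as np

# Illustrative 1-DOF gravity compensation check (not the paper's exact CBM).
# A link of mass m whose center of mass lies a distance l from the joint
# produces a gravity torque m*g*l*sin(theta) about the joint (theta measured
# from the vertical).  A zero-free-length spring of stiffness k, attached a
# distance a above the joint and a distance b along the link, produces a
# restoring torque k*a*b*sin(theta); choosing k = m*g*l/(a*b) cancels gravity.

m, g, l = 8.0, 9.81, 0.25      # link mass [kg], gravity [m/s^2], COM offset [m]  (assumed)
a, b = 0.10, 0.15              # spring attachment offsets [m]                    (assumed)
k = m * g * l / (a * b)        # stiffness that balances gravity exactly

theta = np.linspace(0.0, np.pi / 2, 10)        # joint angles to check
tau_gravity = m * g * l * np.sin(theta)        # torque the joint must resist
tau_spring = k * a * b * np.sin(theta)         # torque supplied by the compensator
residual = tau_gravity - tau_spring            # torque left for the actuator

for th, res in zip(theta, residual):
    print(f"theta = {np.degrees(th):5.1f} deg  residual torque = {res:+.3e} N*m")
```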

외력 대처 기능을 갖는 사각 보행 로보트 적응 걸음새에 관한 연구 (A study on an adaptive gait for a quadruped walking robot under external forces)

  • 강동오;이연정;이승하;홍예선
    • 전자공학회논문지B
    • /
    • Vol. 33B, No. 9
    • /
    • pp.1-12
    • /
    • 1996
  • In this paper, we propose an adaptive gait by which a quadruped walking robot can walk against external disturbances. This adaptive gait mechanism makes it possible for a quadruped walking robot to change its gait and accommodate external disturbances from various environmental factors. Under the assumption that external disturbances can be converted to an external force acting on the body of a quadruped walking robot, we propose a new criterion for the stability margin of a walking robot by using an effective mass center based on the zero moment point under an unknown external force. As a solution for an adaptive gait against external disturbances, a method of altitude control and reflexive direction control is suggested. Using an algorithmic search method for an optimal stride of the quadruped robot, the gait stability margin is optimized when the robot changes its direction at any instant during and after the reflexive direction control. To verify the efficiency of the proposed approach, some simulation results are provided.

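A minimal sketch of the stability-margin idea from the abstract above, under simplifying assumptions of my own (the external force is taken to act at the body's center of mass, and the "effective mass center" is approximated by projecting the resultant of gravity and that force onto the ground): the margin is the smallest distance from that projected point to the edges of the support polygon. The foot positions, mass, and force values are illustrative, not from the paper.

```python
import numpy as np

def effective_mass_center(com, mass, f_ext, g=9.81):
    """Project the resultant of gravity and an external force (assumed here to
    act at the COM) from the COM onto the ground plane z = 0; return the 2D
    piercing point used as a stand-in for the effective mass center."""
    resultant = np.array([f_ext[0], f_ext[1], f_ext[2] - mass * g])
    if resultant[2] >= 0:                       # resultant not pointing into the ground
        raise ValueError("robot is being lifted; margin undefined")
    t = -com[2] / resultant[2]                  # scale factor to reach z = 0
    return (com + t * resultant)[:2]

def stability_margin(point, support_polygon):
    """Smallest signed distance from the point to the edges of a convex,
    counter-clockwise support polygon; positive means inside."""
    margins = []
    n = len(support_polygon)
    for i in range(n):
        p1, p2 = support_polygon[i], support_polygon[(i + 1) % n]
        edge = p2 - p1
        normal = np.array([edge[1], -edge[0]]) / np.linalg.norm(edge)  # outward for CCW
        margins.append(-np.dot(point - p1, normal))                    # + if inside
    return min(margins)

# Example: 0.6 m x 0.4 m support rectangle, 30 kg body, 40 N lateral push (all assumed).
feet = [np.array(p) for p in [(0.3, 0.2), (-0.3, 0.2), (-0.3, -0.2), (0.3, -0.2)]]
com = np.array([0.0, 0.0, 0.35])
emc = effective_mass_center(com, mass=30.0, f_ext=np.array([40.0, 0.0, 0.0]))
print("effective mass center:", emc, " stability margin:", stability_margin(emc, feet))
```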

An Adaptive Goal-Based Model for Autonomous Multi-Robot Using HARMS and NuSMV

  • Kim, Yongho;Jung, Jin-Woo;Gallagher, John C.;Matson, Eric T.
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 16, No. 2
    • /
    • pp.95-103
    • /
    • 2016
  • In a dynamic environment, autonomous robots often encounter unexpected situations that they have to deal with in order to continue their mission. We propose an adaptive goal-based model that allows cyber-physical systems (CPS) to update their environmental model and helps them analyze whether their goals are attainable from the current state using the updated environmental model and their capabilities. The information exchange approach utilizes the Human-Agent-Robot-Machine-Sensor (HARMS) model to exchange messages between CPS. The model validation method uses NuSMV, one of the model checking tools, to check whether the system can continue its mission toward the goal in the given environment. We explain a practical setup of the model in a situation in which homogeneous robots that have the same capability work in the same environment.
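The abstract above checks, after each environment-model update, whether the goal is still attainable. As a tiny illustration of that check only (a plain breadth-first reachability test standing in for an actual NuSMV run, with made-up state names), the sketch below shows how an updated transition relation can flip the answer.

```python
from collections import deque

def goal_attainable(transitions, start, goal):
    """Breadth-first reachability: can the system still reach `goal` from
    `start` under the current transition relation?  This stands in for the
    temporal-logic query one would hand to a model checker such as NuSMV."""
    frontier, visited = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return False

# Hypothetical environment model: rooms connected by passages (names assumed).
model = {"dock": ["hall"], "hall": ["room_a", "room_b"], "room_a": ["target"], "room_b": []}
print(goal_attainable(model, "dock", "target"))   # True: mission can proceed

model["room_a"] = []                              # update: passage to target now blocked
print(goal_attainable(model, "dock", "target"))   # False: replan or report failure
```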

사용자 적응 인터페이스를 사용한 이동로봇의 원격제어 (Remote Control of a Mobile Robot Using Human Adaptive Interface)

  • 황창순;이상룡;박근영;이춘영
    • 제어로봇시스템학회논문지
    • /
    • Vol. 13, No. 8
    • /
    • pp.777-782
    • /
    • 2007
  • Human-robot interaction (HRI) through a haptic interface plays an important role in controlling robot systems remotely. The augmented use of bio-signals in the haptic interface is an emerging research area. To take the operator's state into account in HRI, we used bio-signals such as ECG and blood pressure in the proposed force reflection interface. The variation of the operator's state is monitored by processing these bio-signals. The standard deviation of the R-R intervals and the blood pressure were used to adaptively adjust the force reflection generated from the environmental conditions. Our main idea is to change the pattern of force reflection according to the state of the human operator. A set of experiments shows promising results for our concept of a human adaptive interface.
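The abstract above adjusts force reflection from the variability of R-R intervals, but the exact mapping is not given there. The sketch below is therefore only an assumed, illustrative rule: the reflected-force gain shrinks as the R-R variability drifts away from a calm baseline; the baseline, gain range, and synthetic signals are all placeholders.

```python
import numpy as np

def force_reflection_gain(rr_intervals_s, baseline_std_s=0.05,
                          min_gain=0.3, max_gain=1.0):
    """Map the standard deviation of recent R-R intervals to a gain scaling the
    force fed back to the operator.  The baseline value and the linear mapping
    are illustrative assumptions, not the rule used in the paper."""
    deviation = abs(np.std(rr_intervals_s) - baseline_std_s)
    gain = max_gain - (deviation / baseline_std_s) * (max_gain - min_gain)
    return float(np.clip(gain, min_gain, max_gain))

# Example: force measured in the remote environment, scaled before reflection.
rng = np.random.default_rng(0)
calm_rr = rng.normal(0.85, 0.04, size=60)       # seconds between heartbeats (synthetic)
agitated_rr = rng.normal(0.70, 0.12, size=60)
env_force = 5.0                                 # [N] from the environment force sensor
print("calm     ->", force_reflection_gain(calm_rr) * env_force, "N reflected")
print("agitated ->", force_reflection_gain(agitated_rr) * env_force, "N reflected")
```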

2차원 라이다 센서 데이터 분류를 이용한 적응형 장애물 회피 알고리즘 (Adaptive Obstacle Avoidance Algorithm using Classification of 2D LiDAR Data)

  • 이나라;권순환;유혜정
    • 센서학회지
    • /
    • Vol. 29, No. 5
    • /
    • pp.348-353
    • /
    • 2020
  • This paper presents an adaptive method to avoid obstacles in various environmental settings, using a two-dimensional (2D) LiDAR sensor for mobile robots. While the conventional reaction-based smooth nearness diagram (SND) algorithms use a fixed safety distance criterion, the proposed algorithm autonomously changes the safety criterion considering the obstacle density around the robot. A fixed safety criterion for the whole SND obstacle avoidance process can induce inefficient motion control in terms of travel distance and action smoothness. We applied a multinomial logistic regression algorithm, softmax regression, to classify 2D LiDAR point clouds into seven obstacle-structure classes. The trained model was used to recognize the current obstacle density situation from newly obtained 2D LiDAR data. Through this classification, the robot adaptively modifies the safety distance criterion according to changes in its environment. We experimentally verified that the motion controls generated by the proposed adaptive algorithm were smoother and more efficient compared to those of the conventional SND algorithms.
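A compact sketch of the pipeline named in the abstract above: multinomial logistic (softmax) regression over per-scan features, with the predicted class selecting a safety distance. The feature set, the synthetic training data, and the class-to-distance table are assumptions for illustration; the paper's actual features and values are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def scan_features(ranges, close_thresh=1.0):
    """Illustrative per-scan features (assumed, not the paper's feature set):
    mean range, minimum range, and fraction of 'close' returns."""
    ranges = np.asarray(ranges, dtype=float)
    return [ranges.mean(), ranges.min(), (ranges < close_thresh).mean()]

rng = np.random.default_rng(0)
# Synthetic placeholders standing in for labeled scans of 7 obstacle-structure classes.
X = rng.uniform(0.2, 8.0, size=(700, 3))
y = rng.integers(0, 7, size=700)

# With the default lbfgs solver, multiclass LogisticRegression fits a
# multinomial (softmax) model, matching the classifier named in the abstract.
clf = LogisticRegression(max_iter=500).fit(X, y)

# Hypothetical mapping from obstacle-density class to SND safety distance [m].
safety_distance = {0: 1.2, 1: 1.0, 2: 0.9, 3: 0.8, 4: 0.6, 5: 0.5, 6: 0.4}

new_scan = rng.uniform(0.2, 8.0, size=360)       # one synthetic 360-beam scan
cls = int(clf.predict([scan_features(new_scan)])[0])
print(f"obstacle class {cls} -> safety distance {safety_distance[cls]} m")
```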

격자위상혼합지도방식과 적응제어 알고리즘을 이용한 SLAM 성능 향상 (Increasing the SLAM performance by integrating the grid-topology based hybrid map and the adaptive control method)

  • 김수현;양태규
    • 전기학회논문지
    • /
    • Vol. 58, No. 8
    • /
    • pp.1605-1614
    • /
    • 2009
  • The technique of simultaneous localization and mapping (SLAM) is the most important research topic in mobile robotics. In the process of building a map in its available memory, the robot memorizes environmental information in the form of a grid or a topology. Several approaches to this technique have been presented so far, but most of them use either a grid-based map or a topology-based map. In this paper we propose a framework for solving the SLAM problem that links map covering, map building, localization, path finding, and obstacle avoidance in an automatic way. Several algorithms integrating grid and topology maps are considered, and this makes the SLAM performance faster and more stable. The proposed scheme uses an occupancy grid map to represent the environment and then formulates topological information for path finding with the A* algorithm. The mapping process is shown and the shortest path is decided on the grid-based map. Topological information such as direction and distance is then calculated in the simulator program and transmitted to the robot hardware devices. The localization process and dynamic obstacle avoidance are accomplished using the topological information on the grid map. While mapping and moving, the pose of the robot is adjusted for correct localization by implementing an additional pixel-based image layer and tracking some features. A laser range finder and an electronic compass are implemented on the mobile robot, and the DC geared motor wheels are individually controlled by an adaptive PD control method. Simulation and experimental results show that the performance and efficiency of the proposed scheme are increased.
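The abstract above plans on an occupancy grid with A* and then sends direction/distance information to the hardware. A minimal sketch of those two steps, assuming a 4-connected grid, unit step cost, and a simple heading convention; cell size and the toy map are placeholders, not the paper's setup.

```python
import heapq
import itertools
import math

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = occupied);
    returns the list of cells from start to goal, or None if unreachable."""
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])      # Manhattan heuristic
    tie = itertools.count()                                   # keeps heap comparisons on numbers
    open_set = [(h(start, goal), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                                 # already expanded with best cost
            continue
        came_from[cell] = parent
        if cell == goal:
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), math.inf):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc), goal), ng, next(tie), (nr, nc), cell))
    return None

def to_direction_distance(path, cell_size=0.1):
    """Collapse the cell path into (heading [deg], distance [m]) segments, the
    kind of topological command the abstract transmits to the robot."""
    segments, i = [], 0
    while i < len(path) - 1:
        dr, dc = path[i + 1][0] - path[i][0], path[i + 1][1] - path[i][1]
        j = i + 1
        while j < len(path) - 1 and (path[j + 1][0] - path[j][0], path[j + 1][1] - path[j][1]) == (dr, dc):
            j += 1
        segments.append((math.degrees(math.atan2(dr, dc)), (j - i) * cell_size))
        i = j
    return segments

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)
print(to_direction_distance(path))
```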

Goal-oriented Geometric Model Based Intelligent System Architecture for Adaptive Robotic Motion Generation in Dynamic Environment

  • Lee, Dong-Hun;Hwang, Kyung-Hun;Chung, Chae-Wook;Kuc, Tae-Yong
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 ICCAS 2005
    • /
    • pp.2568-2574
    • /
    • 2005
  • The control architecture of behavior-based robotics can be divided into two types of controller: deliberative and reactive. The typical deliberative type, though slow in reaction speed, is well suited to realizing higher intelligence because of its ability to plan ahead on the basis of an environmental model over time, while the reactive type is suitable for lower intelligence because it realizes fast reactive actions directly from sensor inputs without a complete environmental model. Looking at the environments in which robots are actually used, we can see that they are mostly subject to uncertain and unknown dynamic changes depending on time and place, even though some prior knowledge exists. When either of these two types is used alone, robot performance may therefore deteriorate, and in some cases the robots cannot carry out their desired tasks at all. Accordingly, this paper suggests a Goal-oriented Geometric Model (GGM) based intelligent system architecture that leads robots to perform their jobs under variously changing environments, and applies the suggested architecture to robot navigation. When robots navigate in human environments that change in various ways over time, they can respond appropriately to the changing environment by recognizing the current state and acting accordingly. Extending this concept to the highest level of the hierarchy, rather than restricting it to robot actions, allows the algorithm to be applied to the various small jobs required to carry out a large main job.

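The abstract above argues for arbitrating between deliberative planning and reactive behaviour based on the recognized state. The sketch below is only a toy illustration of that arbitration pattern (not the paper's GGM architecture): a reactive layer pre-empts the deliberative plan when sensing contradicts the model, and asks the higher layer to replan. All thresholds and action names are assumed.

```python
def recognize_state(sensor_range_m, expected_clear_m=1.0):
    """Very coarse state recognition: is the world as the plan expects?"""
    return "blocked" if sensor_range_m < expected_clear_m else "clear"

def reactive_action(sensor_range_m):
    """Fast, model-free response used when the environment has changed."""
    return "stop_and_turn" if sensor_range_m < 0.5 else "slow_down"

def deliberative_step(plan):
    """Next action of a plan computed in advance from the environment model."""
    return plan.pop(0) if plan else "idle"

def hybrid_controller(plan, sensor_range_m):
    """Arbitrate between the two layers based on the recognized state."""
    if recognize_state(sensor_range_m) == "blocked":
        return reactive_action(sensor_range_m), "replan"      # higher layer notified
    return deliberative_step(plan), "continue"

plan = ["go_to_corridor", "enter_room", "approach_target"]
for reading in [3.0, 2.5, 0.4, 2.0]:                          # simulated range readings [m]
    action, status = hybrid_controller(plan, reading)
    print(f"range {reading} m -> action: {action}, status: {status}")
```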

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • 한국지능정보시스템학회:학술대회논문집
    • /
    • 한국지능정보시스템학회 The Pacific Asian Conference on Intelligent Systems 2001
    • /
    • pp.59-64
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments because of the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them in a more flexible way by assigning a different weight to each module according to its contribution to the rewards. Therefore, in addition to effectively resolving the problem of a large state space, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.

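A minimal sketch of the mediation idea described in the abstract above: actions are selected from a weighted sum of per-module Q-values, and the weights drift toward each module's share of the received reward. The paper's exact weight-update rule is not reproduced, so the update below, the module names, and all sizes are illustrative assumptions.

```python
import numpy as np

class AMMQLSketch:
    """Illustrative mediator over several Q-learning modules: actions come from
    a weighted sum of module Q-values, and the weights move toward each
    module's estimated contribution to reward (assumed update rule)."""

    def __init__(self, n_modules, n_states, n_actions, alpha=0.1, gamma=0.9, eta=0.05):
        self.q = np.zeros((n_modules, n_states, n_actions))   # one Q-table per module
        self.w = np.full(n_modules, 1.0 / n_modules)          # mediation weights
        self.alpha, self.gamma, self.eta = alpha, gamma, eta

    def act(self, module_states):
        combined = sum(w * self.q[m, s] for m, (w, s) in enumerate(zip(self.w, module_states)))
        return int(np.argmax(combined))

    def learn(self, module_states, action, module_rewards, next_module_states):
        for m, (s, r, s2) in enumerate(zip(module_states, module_rewards, next_module_states)):
            # Standard per-module Q-learning update.
            target = r + self.gamma * self.q[m, s2].max()
            self.q[m, s, action] += self.alpha * (target - self.q[m, s, action])
        # Assumed mediation step: weights move toward each module's reward share.
        total = sum(abs(r) for r in module_rewards) or 1.0
        share = np.array([abs(r) / total for r in module_rewards])
        self.w = (1 - self.eta) * self.w + self.eta * share
        self.w /= self.w.sum()

# Toy usage: two modules (e.g. "keep position" and "chase ball"), 4 states, 3 actions.
agent = AMMQLSketch(n_modules=2, n_states=4, n_actions=3)
a = agent.act([0, 2])
agent.learn([0, 2], a, module_rewards=[0.0, 1.0], next_module_states=[1, 3])
print("weights after one step:", agent.w)
```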

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • 한국시뮬레이션학회:학술대회논문집
    • /
    • 한국시뮬레이션학회 The Seoul International Simulation Conference 2001
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments because of the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them in a more flexible way by assigning a different weight to each module according to its contribution to the rewards. Therefore, in addition to effectively resolving the problem of a large state space, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.


PSO를 이용한 인공면역계 기반 자율분산로봇시스템의 군 제어 (Swarm Control of Distributed Autonomous Robot System based on Artificial Immune System using PSO)

  • 김준엽;고광은;박승민;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 18, No. 5
    • /
    • pp.465-470
    • /
    • 2012
  • This paper proposes a distributed autonomous control method for swarm robot behavior strategies based on an artificial immune system, together with an optimization strategy for the artificial immune system. The behavior strategies of the swarm robots depend on the task distribution in the environment, and the dynamics of the system environment must be considered. In this paper, the behavior strategies are divided into dispersion and aggregation. To apply the artificial immune system, an individual of the swarm is regarded as a B-cell, each task distribution in the environment as an antigen, a behavior strategy as an antibody, and a control parameter as a T-cell, respectively. The execution process of the proposed method is as follows: when the environmental condition changes, each agent selects an appropriate behavior strategy, and its behavior strategy is stimulated and suppressed by other agents through communication. Finally, the most stimulated strategy is adopted as the swarm behavior strategy. To select the behavior strategy more accurately, an optimized learning procedure for the parameters of the antigen-to-antibody stimulus function of the artificial immune system is required. In this paper, a particle swarm optimization algorithm is applied to this learning procedure. The proposed method shows more adaptive and robust results than the existing system in terms of the swarm robots' learning and adaptation to changing tasks.
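As a rough illustration of the combination described in the abstract above, the sketch below uses a heavily simplified antibody-stimulation rule (an antigen term from local task density plus a communication term from neighbours) and a minimal particle swarm optimization loop that tunes the two gains of that rule. The stimulus function, the toy cost, and all scenario values are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def stimulation(params, task_density, neighbor_votes):
    """Simplified stimulation of the two antibodies [dispersion, aggregation]:
    an antigen term driven by local task density plus a communication term
    from neighbouring agents (a stand-in for the paper's immune dynamics)."""
    k_antigen, k_comm = params
    antigen = np.array([1.0 - task_density, task_density])    # sparse tasks -> disperse
    return k_antigen * antigen + k_comm * neighbor_votes

def swarm_cost(params, scenarios):
    """Cost used by PSO: fraction of scenarios in which the most-stimulated
    strategy differs from the desired one (a toy objective for illustration)."""
    wrong = 0
    for density, votes, desired in scenarios:
        wrong += int(np.argmax(stimulation(params, density, votes)) != desired)
    return wrong / len(scenarios)

# Toy scenarios: (task density, neighbours' votes [disperse, aggregate], desired strategy).
scenarios = [(0.1, np.array([0.8, 0.2]), 0), (0.9, np.array([0.3, 0.7]), 1),
             (0.4, np.array([0.6, 0.4]), 0), (0.7, np.array([0.2, 0.8]), 1)]

# Minimal particle swarm optimization over the two stimulus-function gains.
n_particles, dims, iters = 12, 2, 40
pos = rng.uniform(0.0, 2.0, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([swarm_cost(p, scenarios) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_particles, dims)), rng.random((n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 2.0)
    cost = np.array([swarm_cost(p, scenarios) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("tuned gains [k_antigen, k_comm]:", gbest, " cost:", swarm_cost(gbest, scenarios))
```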