• Title/Summary/Keyword: learning trajectory

Reinforcement Learning-based Search Trajectory Generation and Stiffness Tuning for Connector Assembly (커넥터 조립을 위한 강화학습 기반의 탐색 궤적 생성 및 로봇의 임피던스 강성 조절 방법)

  • Kim, Yong-Geon;Na, Minwoo;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.4
    • /
    • pp.455-462
    • /
    • 2022
  • Since electric connectors such as power connectors have a small assembly tolerance and a complex shape, the assembly process is performed manually by workers. In particular, it is difficult to overcome assembly errors, and the error-correction process makes assembly time-consuming, which makes the task difficult to automate. To deal with this problem, a reinforcement learning-based assembly strategy using contact states was proposed to perform the assembly process quickly in an unstructured environment. The method learns to generate a search trajectory that quickly finds the hole based on the contact state obtained from force/torque data, and it also learns the stiffness needed to avoid excessive contact forces during assembly. To verify the proposed method, the power connector assembly process was performed 200 times, yielding an assembly success rate of 100% for translation errors within ±4 mm and rotation errors within ±3.5°. Furthermore, the assembly time was about 2.3 s, including a search time of about 1 s, which is faster than previous methods.
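
The abstract describes the approach only at a high level; the sketch below is a hypothetical illustration of a contact-state-conditioned policy that jointly selects a search parameter and a Cartesian stiffness. The contact-state classifier, the action set, the reward, and the bandit-style update are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: choose (spiral-search pitch, stiffness) from the contact
# state inferred from force/torque data, and reinforce with an assembly reward.
import numpy as np

N_CONTACT_STATES = 4                      # assumed discretization of contact states
ACTIONS = [(0.5, 300.0), (0.5, 800.0),    # (search pitch [mm], stiffness [N/m])
           (1.0, 300.0), (1.0, 800.0)]

Q = np.zeros((N_CONTACT_STATES, len(ACTIONS)))   # action-value table

def classify_contact(ft_sample: np.ndarray) -> int:
    """Map a 6-D force/torque sample to a discrete contact state (toy heuristic)."""
    force_mag = np.linalg.norm(ft_sample[:3])
    return int(min(force_mag // 5.0, N_CONTACT_STATES - 1))

def select_action(state: int, eps: float = 0.1) -> int:
    """Epsilon-greedy choice of search pitch and stiffness."""
    if np.random.rand() < eps:
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[state]))

def update(state: int, action: int, reward: float, lr: float = 0.1) -> None:
    """Value update; the reward could penalize search time and peak contact force."""
    Q[state, action] += lr * (reward - Q[state, action])

# One (simulated) assembly attempt.
ft = np.array([3.0, -1.0, 6.0, 0.05, 0.0, 0.02])
s = classify_contact(ft)
a = select_action(s)
update(s, a, reward=-1.2)                 # e.g., -(search_time + k * peak_force)
print("contact state", s, "-> pitch/stiffness", ACTIONS[a])
```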

Fast Motion Planning of Wheel-legged Robot for Crossing 3D Obstacles using Deep Reinforcement Learning (심층 강화학습을 이용한 휠-다리 로봇의 3차원 장애물극복 고속 모션 계획 방법)

  • Soonkyu Jeong;Mooncheol Won
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.2
    • /
    • pp.143-154
    • /
    • 2023
  • In this study, a fast motion planning method for the swing motion of a 6x6 wheel-legged robot to traverse large obstacles and gaps is proposed. The motion planning method presented in the previous paper, which was based on trajectory optimization, took up to tens of seconds and was limited to two-dimensional, structured vertical obstacles and trenches. A deep neural network based on a one-dimensional Convolutional Neural Network (CNN) is introduced to generate keyframes, which are then used to represent smooth reference commands for the six leg angles along the robot's path. The network is initially trained with behavioral cloning on a dataset gathered from previous trajectory-optimization simulations, and its performance is then improved through reinforcement learning using a one-step REINFORCE algorithm. The trained model increased the speed of motion planning by up to 820 times and improved the success rate of obstacle crossing under harsh conditions such as low friction and high roughness.
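
The two-stage training idea (behavioral cloning to expert keyframes, then one-step REINFORCE fine-tuning) can be sketched as follows. This is a minimal illustration assuming a PyTorch model; the network size, the Gaussian action model, and the placeholder reward are assumptions, not the paper's exact setup.

```python
# Minimal sketch: 1-D CNN keyframe generator trained by behavioral cloning,
# then fine-tuned with a one-step REINFORCE update.
import torch
import torch.nn as nn

class KeyframeNet(nn.Module):
    """1-D CNN mapping a terrain height profile to keyframe leg-angle commands."""
    def __init__(self, profile_len=64, n_keyframes=5, n_legs=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * profile_len, n_keyframes * n_legs)

    def forward(self, x):                  # x: (batch, 1, profile_len)
        return self.head(self.conv(x))     # (batch, n_keyframes * n_legs)

net = KeyframeNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stage 1: behavioral cloning on (profile, expert keyframe) pairs; toy placeholders here.
profile = torch.randn(8, 1, 64)
expert = torch.randn(8, 5 * 6)
bc_loss = nn.functional.mse_loss(net(profile), expert)
opt.zero_grad(); bc_loss.backward(); opt.step()

# Stage 2: one-step REINFORCE. Sample keyframes from a Gaussian around the network
# output, roll out in simulation (not shown), and weight log-probability by return.
mean = net(profile)
dist = torch.distributions.Normal(mean, 0.05)
sample = dist.sample()
reward = torch.rand(8)                     # placeholder for the rollout return
rl_loss = -(reward * dist.log_prob(sample).sum(dim=1)).mean()
opt.zero_grad(); rl_loss.backward(); opt.step()
```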

A Study on Track Record and Trajectory Control of Articulated Robot Based on Monitoring Simulator for Smart Factory

  • Kim, Hee-Jin;Dong, Guen-Han;Kim, Dong-Ho;Jang, Gi-Won;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.23 no.2_1
    • /
    • pp.149-161
    • /
    • 2020
  • We describe a new approach to the trajectory control and track record of an articulated manipulator based on a monitoring simulator for smart factories. A learning control algorithm was applied in real-time control to provide enhanced motion control performance for robotic manipulators. The proposed control scheme is simple in structure, fast in computation, and suitable for real-time control. Moreover, it does not require accurate dynamic modeling or values of the manipulator parameters and payload. The performance of the proposed controller is illustrated by simulation and experimental results, obtained through the monitoring simulator, for a robot manipulator consisting of six joints in joint space and Cartesian space.

Sequence-to-Sequence based Mobile Trajectory Prediction Model in Wireless Network (무선 네트워크에서 시퀀스-투-시퀀스 기반 모바일 궤적 예측 모델)

  • Bang, Sammy Yap Xiang;Yang, Huigyu;Raza, Syed M.;Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.05a
    • /
    • pp.517-519
    • /
    • 2022
  • In a 5G network environment, proactive mobility management is essential because 5G mobile networks provide new services with ultra-low latency through the dense deployment of small cells. A system that actively controls device handover is therefore becoming important, and predicting the mobile trajectory during handover is essential. A sequence-to-sequence model is a kind of deep learning model that converts sequences from one domain into sequences in another domain, and it is mainly used in natural language processing. In this paper, we developed a system for predicting mobile trajectories in a wireless network environment using a sequence-to-sequence model. Handover speed can be increased by utilizing our sequence-to-sequence model in an actual mobile network environment.
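
A sequence-to-sequence predictor for this setting can be sketched as a small encoder-decoder that reads past 2-D positions and autoregressively emits future ones. The GRU layers, hidden size, and prediction horizon below are assumptions, not the paper's exact model.

```python
# Hedged sketch of a seq2seq mobile-trajectory predictor: past positions in,
# future positions out, decoded autoregressively.
import torch
import torch.nn as nn

class Seq2SeqTrajectory(nn.Module):
    def __init__(self, hidden=64, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, past):                       # past: (batch, T_past, 2)
        _, h = self.encoder(past)                  # summarize the observed track
        step = past[:, -1:, :]                     # start decoding from the last position
        preds = []
        for _ in range(self.horizon):              # autoregressive decoding
            dec_out, h = self.decoder(step, h)
            step = self.out(dec_out)               # next predicted (x, y)
            preds.append(step)
        return torch.cat(preds, dim=1)             # (batch, horizon, 2)

model = Seq2SeqTrajectory()
past_track = torch.randn(4, 10, 2)                 # 10 observed positions per device
future = model(past_track)                         # 5 predicted positions each
print(future.shape)                                # torch.Size([4, 5, 2])
```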

Multi-Cattle Tracking Algorithm with Enhanced Trajectory Estimation in Precision Livestock Farms

  • Shujie Han;Alvaro Fuentes;Sook Yoon;Jongbin Park;Dong Sun Park
    • Smart Media Journal
    • /
    • v.13 no.2
    • /
    • pp.23-31
    • /
    • 2024
  • In a precision cattle farm, reliably tracking the identity of each animal is necessary. Effective tracking of cattle within farm environments presents a unique challenge, particularly the need to minimize excessive, fragmented tracking trajectories. To address this, we introduce a trajectory playback decision tree algorithm that re-evaluates and cleans tracking results based on spatio-temporal relationships among trajectories. This approach treats each trajectory as metadata, resulting in more realistic and accurate tracking outcomes. Extensive comparisons with popular tracking models demonstrate the algorithm's robustness, with consistent performance gains across evaluation metrics: HOTA, AssA, and IDF1 reach 68.81%, 79.31%, and 84.81%, respectively.
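
The idea of treating trajectories as metadata and cleaning them via spatio-temporal relationships can be illustrated with a deliberately simplified fragment-merging rule. The thresholds and the greedy chaining below are assumptions; the paper uses a trajectory playback decision tree rather than this exact logic.

```python
# Simplified illustration: merge tracklet fragments whose spatio-temporal gap is small.
from dataclasses import dataclass

@dataclass
class Tracklet:
    track_id: int
    start_frame: int
    end_frame: int
    start_xy: tuple
    end_xy: tuple

def can_merge(a: Tracklet, b: Tracklet, max_gap_frames=30, max_gap_px=80.0) -> bool:
    """True if b plausibly continues a: small time gap and small spatial jump."""
    if not (0 < b.start_frame - a.end_frame <= max_gap_frames):
        return False
    dx = b.start_xy[0] - a.end_xy[0]
    dy = b.start_xy[1] - a.end_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_gap_px

def clean_tracklets(tracklets):
    """Greedily chain fragments in temporal order, reassigning ids of merged ones."""
    tracklets = sorted(tracklets, key=lambda t: t.start_frame)
    for i, a in enumerate(tracklets):
        for b in tracklets[i + 1:]:
            if b.track_id != a.track_id and can_merge(a, b):
                b.track_id = a.track_id      # fold the fragment into the earlier track
                break
    return tracklets

frags = [Tracklet(1, 0, 100, (10, 10), (50, 40)),
         Tracklet(2, 110, 300, (55, 45), (200, 180))]
print([t.track_id for t in clean_tracklets(frags)])   # [1, 1]
```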

Deep Video Stabilization via Optical Flow in Unstable Scenes (동영상 안정화를 위한 옵티컬 플로우의 비지도 학습 방법)

  • Bohee Lee;Kwangsu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.115-127
    • /
    • 2023
  • Video stabilization is a camera technology whose importance is gradually increasing as the personal media market grows. For deep learning-based video stabilization, existing methods collect pairs of videos before and after stabilization, but creating such synchronized data takes a great deal of time and effort. Recently, to solve this problem, unsupervised learning methods that use only unstable video data have been proposed. In this paper, we propose a network that learns a stabilized trajectory from unstable video alone, without paired unstable/stable videos, using a Convolutional Auto Encoder, one of the unsupervised learning approaches. Optical flow data is used as the network input and output, and the flow is mapped onto grid units to simplify the network and minimize noise. In addition, to generate a stabilized trajectory in an unsupervised manner, we define a loss function that smooths the input optical flow data. Comparison of the results confirms that the network learns as intended by this loss function.
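
A minimal sketch of the described setup, assuming grid-sampled optical flow stored as a 2-channel image per frame: a convolutional autoencoder maps measured flow to a smoothed flow, and an unsupervised loss keeps the output close to the input while penalizing frame-to-frame jitter. The architecture and loss weights are illustrative, not the authors'.

```python
# Sketch: convolutional autoencoder over grid optical flow with a smoothing loss.
import torch
import torch.nn as nn

class FlowAutoEncoder(nn.Module):
    """Encodes a grid-sampled optical-flow field and decodes a smoothed flow field."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),
        )

    def forward(self, flows):            # flows: (T, 2, H, W), one grid flow per frame
        return self.dec(self.enc(flows))

def unsupervised_loss(pred, flows, w_smooth=10.0):
    """Stay close to the measured flow while smoothing it over time, in the
    spirit of the trajectory-smoothing loss described in the abstract."""
    fidelity = (pred - flows).abs().mean()
    temporal = (pred[1:] - pred[:-1]).abs().mean()   # penalize frame-to-frame jitter
    return fidelity + w_smooth * temporal

model = FlowAutoEncoder()
grid_flows = torch.randn(8, 2, 16, 16)               # 8 frames of 16x16 grid flow
loss = unsupervised_loss(model(grid_flows), grid_flows)
loss.backward()
```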

A Learning Controller for Gait Control of Biped Walking Robot using Fourier Series Approximation

  • Lim, Dong-cheol;Kuc, Tae-yong
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.85.4-85
    • /
    • 2001
  • A learning controller is presented for the repetitive walking motion of a biped robot. The learning control scheme learns the approximate inverse dynamics input of the biped walking robot and uses the learned input pattern to generate an input profile for walking motions different from the one learned. In the learning controller, a PID feedback controller stabilizes the transient response of the robot dynamics, while the feedforward learning controller computes the desired actuator torques for feedforward nonlinear dynamics compensation in steady state. It is shown that all the error signals in the learning control system are bounded and that the robot motion trajectory converges asymptotically to the desired one. The proposed learning control scheme is ...
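
The feedback-plus-learned-feedforward structure, with the feedforward input represented by Fourier series coefficients updated between walking cycles, can be sketched as below. The gains, number of harmonics, and update rule are assumptions for illustration, written for a single joint.

```python
# Sketch: PD feedback for transient stabilization plus a Fourier-series
# feedforward term whose coefficients are corrected after each walking cycle.
import numpy as np

N_HARMONICS = 5
T_CYCLE = 2.0                                   # walking-cycle period [s] (assumed)
coeffs = np.zeros(2 * N_HARMONICS + 1)          # learned Fourier coefficients

def fourier_basis(t: float) -> np.ndarray:
    w = 2.0 * np.pi / T_CYCLE
    basis = [1.0]
    for k in range(1, N_HARMONICS + 1):
        basis += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.array(basis)

def feedforward(t: float) -> float:
    return float(coeffs @ fourier_basis(t))

def feedback(err: float, derr: float, kp=80.0, kd=6.0) -> float:
    return kp * err + kd * derr                 # PD part of the feedback loop

def learn_cycle(times, errors, gain=0.05):
    """After each cycle, project the tracking error onto the basis and
    correct the feedforward coefficients (iterative learning step)."""
    global coeffs
    for t, e in zip(times, errors):
        coeffs += gain * e * fourier_basis(t)

# Usage: torque command for one joint at time t within the cycle.
t, err, derr = 0.3, 0.02, -0.1
tau = feedback(err, derr) + feedforward(t)
```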

Discrete-time learning control for robotic manipulators

  • Suzuki, Tatsuya;Yasue, Masanori;Okuma, Shigeru;Uchikawa, Yoshiki
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1989.10a
    • /
    • pp.1069-1074
    • /
    • 1989
  • A discrete-time learning control for robotic manipulators is studied using its pulse transfer function. First, a discrete-time learning stability condition applicable to single-input two-output systems is derived. Second, the stability of a learning algorithm that uses only the position signal is studied; in this case, when the sampling period is small, the algorithm is not stable because of an unstable zero of the system. Third, the stability of an algorithm that uses both position and velocity signals is studied; in this case, the learning control system that is unstable with the position signal alone can be stabilized. Finally, simulation results on the trajectory control of robotic manipulators using the discrete-time learning control are shown, and they agree well with the analytical results.
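
A representative pair of discrete-time learning updates makes the comparison concrete; the notation and gains below are assumptions for illustration, not the paper's exact formulation.

```latex
% Generic discrete-time learning updates (notation assumed, not the paper's).
% u_k(i): input at sample i of trial k;  e_k(i) = q_d(i) - q_k(i): position error;
% \dot{e}_k(i): velocity error.
\[
\begin{aligned}
  \text{position only:}\qquad
    & u_{k+1}(i) = u_k(i) + \gamma_p\, e_k(i+1), \\
  \text{position and velocity:}\qquad
    & u_{k+1}(i) = u_k(i) + \gamma_p\, e_k(i+1) + \gamma_v\, \dot{e}_k(i+1).
\end{aligned}
\]
```

In the position-only form, the unstable zero of the pulse transfer function at small sampling periods rules out a convergent choice of $\gamma_p$, whereas the added velocity term provides the extra freedom needed to satisfy the stability condition, in line with the abstract.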

A general dynamic iterative learning control scheme with high-gain feedback

  • Kuc, Tae-Yong;Nam, Kwanghee
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1989.10a
    • /
    • pp.1140-1145
    • /
    • 1989
  • A general dynamic iterative learning control scheme is proposed for a class of nonlinear systems. By relying on a stabilizing high-gain feedback loop, it is possible to show that the feedforward control input error forms a Cauchy sequence over the iterations, which results in uniform convergence of the system state trajectory to the desired one.
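
A generic form of such a scheme, with symbols that are assumptions rather than the paper's notation, combines a high-gain feedback term with an iteratively refined feedforward input:

```latex
% High-gain feedback plus an iteratively refined feedforward term (symbols assumed).
\[
\begin{aligned}
  u_k(t)     &= \underbrace{K\, e_k(t)}_{\text{high-gain feedback}}
                + \underbrace{v_k(t)}_{\text{learned feedforward}},
  \qquad e_k(t) = x_d(t) - x_k(t), \\
  v_{k+1}(t) &= v_k(t) + \Gamma\, e_k(t).
\end{aligned}
\]
```

For a sufficiently large gain $K$, the sequence of feedforward input errors can be argued to be Cauchy in the iteration index $k$, which is the mechanism behind the uniform convergence of the state trajectory described in the abstract.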

DNN-Based Adaptive Optimal Learning Controller for Uncertain Robot Systems (동적 신경망에 기초한 불확실한 로봇 시스템의 적응 최적 학습제어기)

  • 정재욱;국태용;이택종
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.6
    • /
    • pp.1-10
    • /
    • 1997
  • This paper presents an adaptive optimal learning controller for uncertain robot systems, which makes use of simple DNN (dynamic neural network) units to estimate uncertain parameters and learn the unknown desired optimal input. With the aid of a Lyapunov function, it is shown that all the error signals in the system are bounded and that the robot trajectory converges to the desired one globally and exponentially. The effectiveness of the proposed controller is shown by applying it to a 2-DOF robot manipulator.
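
Purely as an illustration of the kind of structure the abstract suggests (feedback plus an estimated optimal feedforward input driven by the tracking error), with all symbols assumed and written per joint for simplicity:

```latex
% Illustrative adaptive-learning structure (all notation assumed):
% feedback plus an estimated optimal input, with the estimate driven by e = q_d - q.
\[
\begin{aligned}
  \tau(t) &= K_p\, e(t) + K_d\, \dot{e}(t) + \hat{u}^{*}(t),
  \qquad \hat{u}^{*}(t) = \hat{\theta}(t)^{\top} \phi\!\left(q,\dot{q},t\right), \\
  \dot{\hat{\theta}}(t) &= \Gamma\, \phi\!\left(q,\dot{q},t\right) e(t).
\end{aligned}
\]
```

Here $\phi$ stands in for the outputs of the DNN units and $\hat{\theta}$ for the estimated parameters; a Lyapunov function over the tracking and estimation errors is what would yield the boundedness and convergence properties mentioned in the abstract.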
