• Title/Summary/Keyword: learning trajectory

Traffic Information Service Model Considering Personal Driving Trajectories

  • Han, Homin;Park, Soyoung
    • Journal of Information Processing Systems
    • /
    • v.13 no.4
    • /
    • pp.951-969
    • /
    • 2017
  • In this paper, we propose a new traffic information service model that collects traffic information sensed by individual vehicles in real time using smart devices, and enables drivers to share traffic information on all roads in real time through an application installed on a smart device. In particular, when a driver requests traffic information for a specific area, the proposed driver-personalized service model provides traffic information along the predicted driving directions in advance, based on learning from each driver's driving records. To do this, we propose a traffic information management model that processes and manages, in real time, the large volume of traffic information and traffic information requests generated online by each vehicle. We also propose a road-node-based indexing technique to efficiently store and manage the location-based traffic information provided by each vehicle. Finally, we propose a driving learning and prediction model based on the hidden Markov model to predict each driver's driving directions from that driver's driving records. We analyze the traffic information processing performance of the proposed model and the accuracy of the driving prediction model using traffic information collected from actual driving vehicles across the entire area of Seoul, together with driving records and experimental data.
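As a rough illustration of the direction-prediction idea above, the sketch below learns node-to-node transition probabilities from past trips and predicts the most likely next road node. It is a first-order Markov simplification (the paper uses a hidden Markov model); the trip data and node names are invented:

```python
from collections import defaultdict

def train_transitions(trips):
    """Count road-node transitions from past driving records."""
    counts = defaultdict(lambda: defaultdict(int))
    for trip in trips:
        for a, b in zip(trip, trip[1:]):
            counts[a][b] += 1
    # normalize counts into transition probabilities
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def predict_next(model, node):
    """Most probable next road node given the current one."""
    nxt = model.get(node)
    return max(nxt, key=nxt.get) if nxt else None

# invented driving records: each trip is a sequence of road nodes
trips = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
model = train_transitions(trips)
```

With such a model, traffic information for the predicted next segment can be pushed to the driver before it is requested.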

A Stay Detection Algorithm Using GPS Trajectory and Points of Interest Data

  • Eunchong Koh;Changhoon Lyu;Goya Choi;Kye-Dong Jung;Soonchul Kwon;Chigon Hwang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.176-184
    • /
    • 2023
  • Points of interest (POIs) are widely used in tourism recommendations and to provide information about areas of interest. Currently, situation judgement using POI and GPS data is mainly rule-based. However, this approach has the limitation that inferences can only be made using predefined POI information. In this study, we propose an algorithm that uses POI data, GPS data, and schedule information to calculate the current speed, location, schedule matching, movement trajectory, and POI coverage, and uses machine learning to determine whether the user is staying or moving. Based on the input data, clusters are labelled by the k-means algorithm as an unsupervised learning step, and the labelled result is used to train an SVM model that outputs the probability of moving or staying. The algorithm can thus adjust the schedule using the travel schedule, POI data, and GPS information. The results show that the algorithm does not rely on predefined information but makes judgements from GPS and POI data in real time, which is more flexible and reliable than traditional rule-based approaches. The stay detection algorithm using GPS movement trajectories and POIs developed in this study can therefore optimize tourism scheduling and is expected to provide substantial value for tourism services.
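A minimal sketch of the clustering-then-classification idea, assuming speed is the dominant feature: a tiny 1-D k-means splits GPS speed samples into a "stay" and a "move" cluster, and a nearest-centroid rule stands in for the paper's SVM stage. The speed values are invented:

```python
def kmeans_1d(xs, iters=20):
    """Tiny 1-D 2-means: split speed samples into two clusters
    (the slow cluster stands for 'stay', the fast one for 'move')."""
    cents = [min(xs), max(xs)]
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            groups[0 if abs(x - cents[0]) <= abs(x - cents[1]) else 1].append(x)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    return sorted(cents)

speeds = [0.1, 0.3, 0.2, 4.8, 5.2, 5.0]    # invented GPS speed samples (m/s)
cents = kmeans_1d(speeds)

def is_staying(speed, cents=cents):
    """Nearest-centroid decision, standing in for the trained SVM."""
    return abs(speed - cents[0]) < abs(speed - cents[1])
```

The key property matches the abstract's claim: the decision boundary is learned from the data stream itself rather than from predefined rules.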

An indoor localization system for estimating human trajectories using a foot-mounted IMU sensor and step classification based on LSTM

  • Ts.Tengis;B.Dorj;T.Amartuvshin;Ch.Batchuluun;G.Bat-Erdene;Kh.Temuulen
    • International journal of advanced smart convergence
    • /
    • v.13 no.1
    • /
    • pp.37-47
    • /
    • 2024
  • This study presents a system that determines a person's location in an indoor environment, where GPS signals are inaccessible, using a single IMU sensor attached to the tip of the shoe. By detecting footfalls, it is possible to accurately determine human location and trajectory, correcting errors originating from the Inertial Measurement Unit (IMU) with advanced machine learning algorithms. Although various techniques exist to identify stepping, our study recognized stepping with 98.7% accuracy using a Long Short-Term Memory (LSTM) model. Building on these enhancements, this article demonstrates a technique for generating a 200-meter trajectory with a 2.1% error margin. Indoor pedestrian navigation systems relying on foot-mounted inertial measurement units have shown encouraging outcomes.
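For context, the classic threshold baseline that LSTM step classifiers improve on can be sketched as follows: the foot is flagged as stationary when the acceleration magnitude stays near gravity, and velocity drift is reset at each such stance phase (a zero-velocity update). All thresholds and sample values here are assumptions, not the paper's parameters:

```python
def detect_stance(accel_norms, g=9.81, tol=0.3):
    """Threshold baseline for stance detection: the foot is at rest
    when the acceleration magnitude stays near gravity."""
    return [abs(a - g) < tol for a in accel_norms]

def integrate_velocity(accels, stance, dt=0.01):
    """Integrate linear acceleration, resetting velocity to zero at each
    detected stance phase (zero-velocity update) to bound IMU drift."""
    v, out = 0.0, []
    for a, s in zip(accels, stance):
        v = 0.0 if s else v + a * dt
        out.append(v)
    return out
```

Replacing the fixed threshold with a learned LSTM classifier, as the paper does, makes stance detection robust to gait variation.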

Repetitive learning method for trajectory control of robot manipulators using disturbance observer

  • Kim, Bong-Keun;Chung, Wan-Kyun;Youm, Youngil
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10a
    • /
    • pp.99-102
    • /
    • 1996
  • A novel iterative learning control scheme comprising a unique feedforward learning controller and a disturbance observer is proposed. The disturbance observer compensates for disturbances due to parameter variations, mechanical nonlinearities, unmodeled dynamics, and external disturbances. The convergence and robustness of the proposed controller are proved using a method based on the Lyapunov stability theorem. Numerical simulation results verify the effectiveness of the proposed control scheme.
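The feedforward learning idea can be shown on a toy scalar plant, y[t] = u[t] + d[t], where d lumps together the disturbances the observer must cancel. Each trial replays the task and corrects the input with the previous trial's error; the peak tracking error then shrinks geometrically. This is a deliberately simplified sketch, not the paper's manipulator dynamics or observer structure:

```python
def run_ilc(yd, d, trials=20, gamma=0.6):
    """Iterative learning trials on a toy plant y[t] = u[t] + d[t].
    yd: desired trajectory, d: lumped (unknown) disturbance."""
    u = [0.0] * len(yd)
    peak_errors = []
    for _ in range(trials):
        y = [ut + dt for ut, dt in zip(u, d)]          # plant with disturbance
        e = [ydt - yt for ydt, yt in zip(yd, y)]       # tracking error
        u = [ut + gamma * et for ut, et in zip(u, e)]  # feedforward learning update
        peak_errors.append(max(abs(x) for x in e))
    return peak_errors

errs = run_ilc(yd=[1.0, 2.0, 1.0], d=[0.3, -0.2, 0.1])
```

Here the error contracts by a factor of (1 - gamma) per trial, so the learned input converges to yd - d, implicitly cancelling the disturbance.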

Orientation Control of Mobile Robot Using Fuzzy-Neural Control Technique (퍼지-뉴럴 제어기법에 의한 이동형 로봇의 자세 제어)

  • 김종수
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1997.10a
    • /
    • pp.82-87
    • /
    • 1997
  • This paper presents a new approach to the design of a cruise control system for a mobile robot with two drive wheels. The proposed control scheme uses a Gaussian function as the unit function in the fuzzy-neural network, and a back-propagation algorithm to train the fuzzy-neural network controller in the framework of the specialized learning architecture. A learning controller is proposed consisting of two neural-network-fuzzy modules based on independent reasoning and a connection net with fixed weights to simplify the neural-network-fuzzy structure. The performance of the proposed controller is shown by computer simulation of trajectory tracking of the speed and azimuth of a mobile robot driven by two independent wheels.
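A minimal sketch of the Gaussian unit function mentioned above, and of how membership degrees can be combined into a crisp output. The combination rule and all parameter names are illustrative assumptions, not the paper's exact network:

```python
import math

def gaussian_unit(x, center, width):
    """Gaussian membership function used as the unit function
    in a fuzzy-neural network; center and width are trainable."""
    return math.exp(-((x - center) / width) ** 2)

def fuzzy_neural_out(x, centers, widths, weights):
    """Crisp output: rule outputs averaged, weighted by membership
    degrees (a common defuzzification choice, assumed here)."""
    mus = [gaussian_unit(x, c, w) for c, w in zip(centers, widths)]
    return sum(m * wt for m, wt in zip(mus, weights)) / sum(mus)
```

Because the Gaussian is smooth, the output is differentiable in centers, widths, and weights, which is what makes back-propagation training possible.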

ANALYSIS OF LEARNING CONTROL SYSTEMS WITH FEEDBACK(Application to One Link Manipulators)

  • Hashimoto, H.;Kang, Seong-Yun;Jianxin Xu;F. Harashima
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1987.10a
    • /
    • pp.886-891
    • /
    • 1987
  • In this paper, we present an effective method to control robotic systems with an iterative learning algorithm. The method is based on the learning control law introduced in this paper, whose key ideas are avoiding derivatives of the system state and ignoring high-frequency effects on system performance. As the estimate of the unknown information improves, the learning control algorithm drives the system toward the desired trajectory, and the tracking error asymptotically converges to zero. To verify its utility, a one-degree-of-freedom manipulator was used in the experiments, and the results illustrate that this control scheme is effective.
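The two ideas named above can be sketched directly: instead of differentiating the state, the error signal is low-pass filtered (here with a simple moving average, an assumed stand-in for the paper's filtering) before it corrects the input:

```python
def smooth(e, w=3):
    """Moving-average filter: discard high-frequency error content
    instead of differentiating the system state."""
    return [sum(e[max(0, t - w + 1):t + 1]) / (t - max(0, t - w + 1) + 1)
            for t in range(len(e))]

def learning_update(u, e, gain=0.5):
    """One betterment step: correct the input with the filtered error."""
    return [ut + gain * et for ut, et in zip(u, smooth(e))]
```

A sharp one-sample error spike is thus spread and attenuated before it enters the learned input, which is what keeps high-frequency disturbances from destabilizing the iteration.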

The Azimuth and Velocity Control of a Mobile Robot with Two Drive Wheels by Neural-Fuzzy Control Method (뉴럴-퍼지제어기법에 의한 두 구동휠을 갖는 이동 로봇의 자세 및 속도 제어)

  • 한성현
    • Journal of Ocean Engineering and Technology
    • /
    • v.11 no.1
    • /
    • pp.84-95
    • /
    • 1997
  • This paper presents a new approach to the design of speed and azimuth control for a mobile robot with two drive wheels. The proposed control scheme uses a Gaussian function as the unit function in the fuzzy-neural network, and a back-propagation algorithm to train the fuzzy-neural network controller in the framework of the specialized learning architecture. A learning controller is proposed consisting of two neural-network-fuzzy modules based on independent reasoning and a connection net with fixed weights to simplify the neural-network-fuzzy structure. The performance of the proposed controller is shown by computer simulation of trajectory tracking of the speed and azimuth of a mobile robot driven by two independent wheels.

Indirect Decentralized Repetitive Control for the Multiple Dynamic Subsystems

  • Lee, Soo-Cheol
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.23 no.1
    • /
    • pp.1-22
    • /
    • 1997
  • Learning control refers to controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented a theory of indirect decentralized learning control based on use of indirect adaptive control concepts employing simultaneous identification and control. This paper extends these results to apply to the indirect repetitive control problem in which a periodic (i.e., repetitive) command is given to a control system. Decentralized indirect repetitive control algorithms are presented that have guaranteed convergence to zero tracking error under very general conditions. The original motivation of the repetitive control and learning control fields was learning in robots doing repetitive tasks such as on an assembly line. This paper starts with decentralized discrete time systems, and progresses to the robot application, modeling the robot as a time varying linear system in the neighborhood of the desired trajectory. Decentralized repetitive control is natural for this application because the feedback control for link rotations is normally implemented in a decentralized manner, treating each link as if it is independent of the other links.

Implementation and Performance Evaluation of RTOS-Based Dynamic Controller for Robot Manipulator (Real-Time OS 기반의 로봇 매니퓰레이터 동력학 제어기의 구현 및 성능평가)

  • Kho, Jaw-Won;Lim, Dong-Cheal
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.57 no.2
    • /
    • pp.109-114
    • /
    • 2008
  • In this paper, a dynamic learning controller for a robot manipulator is implemented using a real-time operating system with capabilities such as multitasking, intertask communication and synchronization, event-driven and priority-driven scheduling, and real-time clock control. A controller hardware system based on the VME bus and related devices is developed and used to implement a dynamic learning control scheme for the robot manipulator. The real-time performance of the proposed dynamic learning controller is tested and evaluated for tracking of the desired trajectory and compared with that of a conventional servo controller.

RL-based Path Planning for SLAM Uncertainty Minimization in Urban Mapping (도시환경 매핑 시 SLAM 불확실성 최소화를 위한 강화 학습 기반 경로 계획법)

  • Cho, Younghun;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.2
    • /
    • pp.122-129
    • /
    • 2021
  • For the Simultaneous Localization and Mapping (SLAM) problem, different paths yield different SLAM results, since SLAM follows the trail of its input data. Active SLAM, which determines where to sense next, can suggest a better path for a better SLAM result during the data acquisition step. In this paper, we use reinforcement learning to decide where to perceive. By setting coverage of the entire target area as the goal and uncertainty as a negative reward, the reinforcement learning network finds an optimal path that minimizes trajectory uncertainty and maximizes map coverage. However, most active SLAM research is performed in indoor or aerial environments where robots can move in every direction. In an urban environment, vehicles can only move along the road structure and must obey traffic rules. A graph structure can efficiently express the road environment, with crossroads as nodes and streets as edges. In this paper, we propose a novel method to find an optimal SLAM path using a graph structure and reinforcement learning.
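The graph-plus-RL formulation can be sketched with tabular Q-learning on a toy road network: crossroads are nodes, streets are edges, entering a node costs its (assumed) SLAM uncertainty, and reaching the goal earns a reward. The graph, uncertainty values, and reward shaping below are invented for illustration; the paper's network and reward design are more elaborate:

```python
import random

# Toy road network: crossroads as nodes, streets as edges, with an
# assumed per-node SLAM-uncertainty cost for traversing to that node.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
uncertainty = {"A": 0.0, "B": 0.9, "C": 0.1, "D": 0.2}

def q_learn(start="A", goal="D", episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)                       # deterministic toy run
    Q = {(s, a): 0.0 for s in graph for a in graph[s]}
    for _ in range(episodes):
        s = start
        for _ in range(10):
            # epsilon-greedy choice among the streets leaving this crossroad
            a = (random.choice(graph[s]) if random.random() < eps
                 else max(graph[s], key=lambda n: Q[(s, n)]))
            r = -uncertainty[a] + (1.0 if a == goal else 0.0)
            best = max(Q[(a, n)] for n in graph[a])
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            s = a
            if s == goal:
                break
    return Q

Q = q_learn()
```

After training, the greedy policy from A routes through the low-uncertainty crossroad C rather than the high-uncertainty B, illustrating how the negative uncertainty reward shapes the mapping path.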