• Title/Summary/Keyword: robot tracking


Implementation of Camera-Based Autonomous Driving Vehicle for Indoor Delivery using SLAM (SLAM을 이용한 카메라 기반의 실내 배송용 자율주행 차량 구현)

  • Kim, Yu-Jung;Kang, Jun-Woo;Yoon, Jung-Bin;Lee, Yu-Bin;Baek, Soo-Whang
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.687-694 / 2022
  • In this paper, we propose an autonomous vehicle platform that delivers goods to a designated destination based on a SLAM (Simultaneous Localization and Mapping) map generated indoors with Visual SLAM technology. To generate the indoor SLAM map, a depth camera was installed on top of a small autonomous vehicle platform, and a tracking camera was added for accurate position estimation within the map. In addition, a convolutional neural network (CNN) was used to recognize the destination label, and a driving algorithm was applied to arrive accurately at the destination. A prototype of the indoor delivery autonomous vehicle was manufactured, the accuracy of the SLAM map was verified, and a destination label recognition experiment was performed with the CNN. The resulting increase in the label recognition success rate verified the suitability of the implemented autonomous vehicle for indoor delivery.
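The destination-label recognition step lends itself to a short illustration. The paper does not disclose its CNN architecture, so the sketch below is only a minimal, assumed PyTorch classifier; the layer sizes, input resolution, and number of destination labels are illustrative placeholders, not the authors' design.

```python
# Minimal sketch of a destination-label classifier, assuming a small
# PyTorch CNN; the architecture and class count are placeholders.
import torch
import torch.nn as nn

class LabelClassifier(nn.Module):
    def __init__(self, num_labels: int = 4):  # assumed number of destination labels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_labels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB crops from the onboard camera, resized to 64x64
        return self.classifier(self.features(x))

# Usage: pick the most probable destination label for one camera frame.
model = LabelClassifier()
frame = torch.rand(1, 3, 64, 64)           # placeholder image tensor
label_id = model(frame).argmax(dim=1)      # index of the recognized destination
```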

Study of Deep Learning Based Specific Person Following Mobility Control for Logistics Transportation (물류 이송을 위한 딥러닝 기반 특정 사람 추종 모빌리티 제어 연구)

  • Yeong Jun Yu;SeongHoon Kang;JuHwan Kim;SeongIn No;GiHyeon Lee;Seung Yong Lee;Chul-hee Lee
    • Journal of Drive and Control / v.20 no.4 / pp.1-8 / 2023
  • In recent years, robots have been utilized in various industries to reduce workload and enhance work efficiency. A person-following mobility platform offers users convenience by autonomously tracking specific locations and targets without the need for additional equipment such as forklifts or carts. In this paper, deep learning techniques were employed to recognize individuals and assign each a unique identifier, so that a specific person can be recognized even among multiple individuals. The distance and angle between the robot and the targeted individual are then transmitted to the respective controllers. Furthermore, this study explored control methods for a mobility platform that tracks a specific person, using Simultaneous Localization and Mapping (SLAM) and Proportional-Integral-Derivative (PID) control techniques. In the PID method, a genetic algorithm is employed to extract the optimal gain values, and PID performance is subsequently evaluated through simulation. The SLAM method generates a map by synchronizing data from a 2D LiDAR and a depth camera using Real-Time Appearance-Based Mapping (RTAB-Map). Experiments compare and analyze the performance of the two control methods, visualizing the paths of both the human and the following mobility platform.
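As a rough illustration of the distance/angle PID loop described above, the sketch below assumes the follower turns the two errors into forward and angular velocity commands; the gains, desired following gap, and time step are illustrative placeholders, not the GA-tuned values from the paper.

```python
# Minimal sketch of a person-following PID loop (assumed interface).
from dataclasses import dataclass

@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def step(self, error: float, dt: float) -> float:
        # Standard PID law: u = Kp*e + Ki*integral(e) + Kd*de/dt
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller regulates the following distance, the other the bearing angle.
dist_pid = PID(kp=0.8, ki=0.05, kd=0.1)   # assumed gains (the paper tunes them with a GA)
ang_pid = PID(kp=1.2, ki=0.0, kd=0.2)

def follow_step(distance_m: float, angle_rad: float, dt: float = 0.05):
    target_distance = 1.0                                  # assumed desired gap to the person [m]
    v = dist_pid.step(distance_m - target_distance, dt)    # forward velocity command
    w = ang_pid.step(angle_rad, dt)                        # angular velocity command (drive angle error to 0)
    return v, w
```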

Interactive Motion Retargeting for Humanoid in Constrained Environment (제한된 환경 속에서 휴머노이드를 위한 인터랙티브 모션 리타겟팅)

  • Nam, Ha Jong;Lee, Ji Hye;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.1-8 / 2017
  • In this paper, we introduce a technique to retarget human motion data to a humanoid body in a constrained environment. We assume that the given motion data includes detailed interactions, such as holding an object by hand or avoiding obstacles. In addition, we assume that the humanoid joint structure differs from the human joint structure, and that the shape of the surrounding environment differs from that at the time of the original motion. Under such conditions, it is difficult to preserve the context of the interaction shown in the original motion data if a retargeting technique that considers only the change of body shape is used. Our approach is to separate the problem into two smaller problems and solve them independently: one is to retarget the motion data to a new skeleton, and the other is to preserve the context of interactions. We first retarget the given human motion data to the target humanoid body, ignoring the interaction with the environment. Then, we precisely deform the shape of the environmental model to match the humanoid motion so that the original interaction is reproduced. Finally, we set spatial constraints between the humanoid body and the environmental model and restore the environmental model to its original shape. To demonstrate the usefulness of our method, we conducted an experiment using Boston Dynamics' Atlas robot. We expect that our method can help with the humanoid motion tracking problem in the future.
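The final stage described above, pinning a spatial constraint between the humanoid and the environment while the environmental model is restored to its original shape, can be sketched in miniature. The example below is an assumed toy setup, not the paper's method: a planar 2-link arm with finite-difference IK stands in for the humanoid, and a single interpolated contact point stands in for the deforming environmental model.

```python
# Toy sketch: keep the hand pinned to a contact point on the environment
# while the environment is interpolated from its deformed shape back to
# its original shape. All geometry and values are illustrative.
import numpy as np

def forward_kinematics(angles, link_lengths=(0.4, 0.3)):
    """Planar 2-link arm: returns the hand (end-effector) position."""
    a1, a2 = angles
    l1, l2 = link_lengths
    return np.array([l1 * np.cos(a1) + l2 * np.cos(a1 + a2),
                     l1 * np.sin(a1) + l2 * np.sin(a1 + a2)])

def solve_ik(target, angles, iters=200, step=0.5):
    """Tiny numerical IK: nudge joint angles so the hand reaches the target."""
    angles = np.array(angles, dtype=float)
    for _ in range(iters):
        err = target - forward_kinematics(angles)
        # Finite-difference Jacobian of the hand position w.r.t. joint angles.
        J = np.zeros((2, 2))
        for j in range(2):
            d = np.zeros(2); d[j] = 1e-5
            J[:, j] = (forward_kinematics(angles + d) - forward_kinematics(angles)) / 1e-5
        angles += step * J.T @ err     # Jacobian-transpose update
    return angles

# Contact point on the environment in its deformed and original shapes (assumed values).
contact_deformed = np.array([0.55, 0.20])
contact_original = np.array([0.45, 0.35])

angles = np.array([0.3, 0.5])
for t in np.linspace(0.0, 1.0, 10):
    # Restore the environment model toward its original shape...
    contact = (1.0 - t) * contact_deformed + t * contact_original
    # ...while the spatial constraint keeps the hand on the moving contact point.
    angles = solve_ik(contact, angles)
```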