• Title/Summary/Keyword: Mobile robot recognition

Search Result 227

Cooperative mobile robots using fuzzy algorithm

  • Ji, Seunghwan;Kim, Hyuntae;Park, Minkee;Park, Mignon
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1992.10b
    • /
    • pp.468-472
    • /
    • 1992
  • In recent years, much research on autonomous mobile robots has been conducted. However, it has focused on environment recognition and the processing needed to decide on a motion; cooperative multi-robot systems, which must be able to avoid collisions and communicate with each other, have not been studied much. This paper deals with the cooperative motion of two robots, "Meari 1" and "Meari 2", built in our laboratory, based on communication between the two. Because interference arises in the communication during cooperative motion of multiple robots, many restrictive conditions are required. Therefore, we have designed the robot system so that communication between the robots is available while mutual interference is precluded, and we used fuzzy inference to overcome the instability of the sensor data.

  • PDF
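The fuzzy-inference idea in the abstract above, smoothing an unstable sensor reading into a control decision, can be sketched minimally as a Mamdani-style rule evaluation with weighted-average defuzzification. The membership breakpoints, rule outputs, and function names below are illustrative assumptions, not values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(d):
    """Map an obstacle distance (m) to a speed command via three fuzzy rules:
    near -> slow (0.1), medium -> cruise (0.5), far -> fast (1.0)."""
    near = tri(d, -1.0, 0.0, 1.0)
    medium = tri(d, 0.5, 1.5, 2.5)
    far = min(1.0, max(0.0, (d - 2.0) / 1.0))   # open right shoulder: 1 beyond 3 m
    den = near + medium + far
    num = near * 0.1 + medium * 0.5 + far * 1.0
    return num / den if den else 0.0            # weighted-average defuzzification
```

Because each rule fires partially, a noisy reading near a membership boundary changes the output gradually instead of flipping it, which is the stability benefit the abstract alludes to.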

Active assisted-living system using a robot in WSAN (WSAN에서 로봇을 활용한 능동 생활지원 시스템)

  • Kim, Hong-Seok;Yi, Soo-Yeong;Choi, Byoung-Wook
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.3
    • /
    • pp.177-184
    • /
    • 2009
  • This paper presents an active assisted-living system in a wireless sensor and actor network (WSAN), in which a mobile robot plays the role of the actor. To provide assisted-living services to elderly people, position recognition of the sensor node attached to the user and localization of the mobile robot must be performed at the same time. For this purpose, we use the received signal strength indication (RSSI) to find the position of the person, together with ubiquitous sensor nodes that include an ultrasonic sensor and perform both transmission of sensor information and GPS-like localization. The active services include moving to the elderly person upon detection by an activity sensor, as well as visual tracking and voice chatting through a remote monitoring system.

  • PDF
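As a rough illustration of the RSSI-based positioning the abstract describes, a common approach converts RSSI to range with a log-distance path-loss model and then trilaterates from several anchor nodes. The calibration constants and helper names below are assumptions for the sketch, not values from the paper:

```python
import math

def rssi_to_distance(rssi_dbm, a=-40.0, n=2.0):
    """Log-distance path-loss model: d = 10^((A - RSSI) / (10 n)).
    A (RSSI at 1 m) and n (path-loss exponent) are assumed calibration values."""
    return 10 ** ((a - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, distances):
    """Least-squares (x, y) from >= 3 anchor points and estimated ranges.
    Subtracting the first range equation from the others linearizes the system."""
    (x1, y1), d1 = anchors[0], distances[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(d1**2 - di**2 - x1**2 + xi**2 - y1**2 + yi**2)
    # Solve the 2x2 normal equations directly to stay dependency-free.
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

In practice RSSI ranges are noisy, which is why the paper complements them with ultrasonic sensor nodes for localization.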

Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.47-49
    • /
    • 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area was the application of Deep Learning (DL) to computer vision, with outstanding everyday applications such as face recognition, object detection, and more. However, robots in general still depend on human input in areas such as localization and navigation. In this paper, we propose a case study of robot navigation based on deep reinforcement learning. We look into the benefits of switching from traditional ROS-based navigation algorithms to machine learning approaches and methods. We describe the state of the art by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and Deep Reinforcement Learning (DRL) before focusing on visual navigation based on DRL. The case study is a prelude to real-life deployment, in which a mobile navigation agent learns to navigate unknown areas.

  • PDF
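The RL concepts the paper introduces can be illustrated with tabular Q-learning, the update rule that DRL methods approximate with a neural network. The toy corridor, rewards, and hyperparameters below are illustrative assumptions, not the paper's setup:

```python
import random

# Toy 1-D corridor: states 0..4, goal at the right end; actions move left/right.
N, GOAL = 5, 4
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

random.seed(0)
for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01        # assumed reward shaping
        # Bellman backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# Greedy policy after training: move right everywhere.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
```

DRL replaces the Q table with a network over high-dimensional inputs (e.g. camera images), which is what makes visual navigation feasible.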

Real-time 3D multi-pedestrian detection and tracking using 3D LiDAR point cloud for mobile robot

  • Ki-In Na;Byungjae Park
    • ETRI Journal
    • /
    • v.45 no.5
    • /
    • pp.836-846
    • /
    • 2023
  • Mobile robots are used in modern life; however, object recognition is still insufficient to realize robot navigation in crowded environments. Mobile robots must rapidly and accurately recognize the movements and shapes of pedestrians to navigate safely in pedestrian-rich spaces. This study proposes real-time, accurate, three-dimensional (3D) multi-pedestrian detection and tracking using a 3D light detection and ranging (LiDAR) point cloud in crowded environments. The pedestrian detection module quickly segments a sparse 3D point cloud into individual pedestrians using a lightweight convolutional autoencoder and a connected-component algorithm. The multi-pedestrian tracking module identifies the same pedestrians across consecutive frames by considering motion and appearance cues. In addition, it estimates pedestrians' dynamic movements with various patterns by adaptively mixing heterogeneous motion models. We evaluate the computational speed and accuracy of each module using the KITTI dataset. We demonstrate that our integrated system, which rapidly and accurately recognizes pedestrian movement and appearance using a sparse 3D LiDAR, is applicable to robot navigation in crowded spaces.
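The connected-component step of a pipeline like the one above can be sketched as distance-threshold clustering of projected LiDAR points. The paper pairs this with a convolutional autoencoder, which is omitted here, and the distance threshold is an assumption:

```python
from collections import deque

def cluster_points(points, eps=0.5):
    """Group 2-D points into connected components, where two points are
    connected if they lie within `eps` meters of each other (BFS flood fill).
    Returns a list of clusters, each a list of point indices."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, comp = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            xi, yi = points[i]
            # Gather all still-unvisited neighbors of point i.
            near = [j for j in unvisited
                    if (points[j][0] - xi) ** 2 + (points[j][1] - yi) ** 2
                    <= eps ** 2]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                comp.append(j)
        clusters.append(comp)
    return clusters
```

Each resulting cluster is a candidate pedestrian segment that a classifier (the autoencoder in the paper) would then accept or reject.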

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.1
    • /
    • pp.101-110
    • /
    • 2004
  • The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity to each other as much as possible, and their interactions must occur naturally. It is desirable for a robot to carry out human following as one such human-friendly movement. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents providing humans with services. A mobile robot is controlled to follow a walking human using distributed intelligent sensors as stably and precisely as possible. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimation caused by the point-object assumption are compensated for using a Kalman filter. To generate the shortest-time trajectory for following the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
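The Kalman-filter compensation mentioned above can be sketched as a 1-D constant-velocity filter over the person's position measurements; the process and measurement noise values are illustrative assumptions, not the paper's tuning:

```python
def kalman_track(measurements, dt=0.1, q=1e-3, r=0.25):
    """Smooth 1-D position measurements with a constant-velocity Kalman filter.
    State is (position x, velocity v); q is process noise, r measurement noise."""
    x, v = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements[1:]:
        # Predict: x' = x + v*dt; P' = F P F^T + Q with F = [[1, dt], [0, 1]].
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0]).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z - x                        # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out
```

The estimated velocity is exactly the quantity the paper feeds into its shortest-time trajectory generation.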

Navigation Control of Mobile Robot based on VFF to Avoid Local-Minimum in a Corridor Environment (복도환경의 지역최소점 회피가 가능한 VFF 기반의 이동로봇 주행제어)

  • Jin, Tae-Seok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.4
    • /
    • pp.759-764
    • /
    • 2011
  • This paper deals with a method that uses an amended virtual force field technique to avoid obstacles ahead (walls, obstacles, etc.) while navigating, using environmental information recognized by an ultrasonic ring and a pan/tilt CCD camera mounted on a mobile robot. We explain the robot system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent books and review papers cover them thoroughly. Results from experimental runs based on the virtual force field (VFF) method are presented to support the validity of the aforementioned architecture of a mobile service robot for local navigation and obstacle avoidance in autonomous mobile robots. We conclude by discussing possible future extensions of the project. The results show that the proposed algorithm can identify obstacles in indoor environments and guide the robot to the goal location safely.
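A minimal sketch of the classic virtual force field computation underlying methods like this one: a constant-magnitude attraction toward the goal plus repulsion from each obstacle within an influence radius. The gains and radius are assumptions for illustration, not the paper's amended formulation:

```python
import math

def vff_force(robot, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Return the (fx, fy) steering force at the robot's (x, y) position."""
    # Constant-magnitude attraction toward the goal.
    gx, gy = goal[0] - robot[0], goal[1] - robot[1]
    dg = math.hypot(gx, gy) or 1e-9
    fx, fy = k_att * gx / dg, k_att * gy / dg
    # Repulsion from each obstacle closer than the influence radius d0,
    # growing rapidly as the robot approaches it.
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

The local-minimum problem in the title arises exactly when these two force terms cancel, e.g. in a corridor dead end, which is what the paper's amendment addresses.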

Research Trends and Case Study on Keypoint Recognition and Tracking for Augmented Reality in Mobile Devices (모바일 증강현실을 위한 특징점 인식, 추적 기술 및 사례 연구)

  • Choi, Heeseung;Ahn, Sang Chul;Kim, Ig-Jae
    • Journal of the HCI Society of Korea
    • /
    • v.10 no.2
    • /
    • pp.45-55
    • /
    • 2015
  • In recent years, keypoint recognition and tracking technologies have been considered crucial tasks in many practical systems for markerless augmented reality. These technologies are widely studied in many research areas, including computer vision, robot navigation, and human-computer interaction. Moreover, due to the rapid growth of the mobile market for augmented reality applications, several effective keypoint-based matching and tracking methods have been introduced with mobile embedded systems in mind. In this paper, we therefore analyze recent research trends in keypoint-based recognition and tracking in terms of several core components: keypoint detection, description, matching, and tracking. We also present one of our own projects related to mobile augmented reality, a mobile tour guide system, based on real-time recognition and tracking of tour maps on mobile devices.
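The matching component surveyed above can be sketched as brute-force Hamming matching with a nearest/second-nearest ratio test, the standard pairing for binary descriptors such as BRIEF/ORB on mobile hardware. The toy integer descriptors below are stand-ins for real 256-bit descriptor strings:

```python
def hamming(a, b):
    """Hamming distance between two integer-encoded binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, ratio=0.8):
    """Return (query_idx, train_idx) pairs whose best match is clearly better
    than the second best (Lowe's ratio test), rejecting ambiguous matches."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

The ratio test is what keeps repetitive texture (common on printed tour maps) from producing spurious matches.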

Detection of Faces Located at a Long Range with Low-resolution Input Images for Mobile Robots (모바일 로봇을 위한 저해상도 영상에서의 원거리 얼굴 검출)

  • Kim, Do-Hyung;Yun, Woo-Han;Cho, Young-Jo;Lee, Jae-Jeon
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.257-264
    • /
    • 2009
  • This paper proposes a novel face detection method that finds tiny faces located at a long range even in low-resolution input images captured by a mobile robot. The proposed approach can locate extremely small face regions of 12×12 pixels. We solve the tiny-face detection problem with a system that consists of multiple detectors, including a mean-shift color tracker, short- and long-range face detectors, and an omega-shape detector. The proposed method adopts a long-range face detector trained well enough to detect tiny faces at a long range and limits its operation to a search region that is automatically determined by the mean-shift color tracker and the omega-shape detector. By limiting the face-search region as much as possible, the proposed method can accurately detect tiny faces at a long distance even in a low-resolution image, and sharply decreases false positives. According to experimental results on realistic databases, the performance of the proposed approach is at a practical level for various robot applications such as face recognition of non-cooperative users, human following, and gesture recognition for long-range interaction.

  • PDF
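The search-region limiting idea, running the expensive long-range detector only inside a region proposed by the tracker, can be sketched as expanding and clipping the tracker's bounding box; the margin factor here is an assumption, not the paper's value:

```python
def search_region(track_box, image_size, margin=0.5):
    """Expand a tracker box (x, y, w, h) by `margin` of its size on each side,
    clipped to the image bounds, to get the detector's search region."""
    x, y, w, h = track_box
    iw, ih = image_size
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(iw, x + w + mx), min(ih, y + h + my)
    return (x0, y0, x1 - x0, y1 - y0)
```

Scanning only this region both speeds up detection and removes most of the background that would otherwise generate false positives.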

Robot Knowledge Framework of a Mobile Robot for Object Recognition and Navigation (이동 로봇의 물체 인식과 주행을 위한 로봇 지식 체계)

  • Lim, Gi-Hyun;Suh, Il-Hong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.6
    • /
    • pp.19-29
    • /
    • 2007
  • This paper introduces a robot knowledge framework, represented with multiple classes, levels, and layers, to implement robot intelligence for a mobile robot in a real environment. Our robot knowledge framework consists of four knowledge classes (KClass), axioms, rules, a hierarchy of three knowledge levels (KLevel), and three ontology layers (OLayer). The four KClasses are the perception, model, activity, and context classes. One type of rule is used for unidirectional reasoning, and the other type for bidirectional reasoning. The framework enables a robot to integrate knowledge ranging from its own sensor data and primitive behaviors up to symbolic data and contextual information, regardless of the class of knowledge. With the integrated knowledge, a robot can answer queries not only through unidirectional reasoning between two adjacent layers but also through bidirectional reasoning among several layers, even with uncertain and partial information. To verify the framework, several experiments on object recognition and navigation were performed successfully.
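The kind of unidirectional, rule-based reasoning the framework applies between layers can be sketched as forward chaining over if-then rules. The facts and rules below are invented examples for illustration, not the paper's ontology:

```python
def forward_chain(facts, rules):
    """Apply rules (premise set -> conclusion) until no new facts are derived.
    Returns the closed set of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules lifting perception-level facts to object- and context-level ones.
rules = [
    (frozenset({"shape:cylindrical", "color:red"}), "object:cup"),
    (frozenset({"object:cup", "location:table"}), "context:meal_area"),
]
derived = forward_chain({"shape:cylindrical", "color:red", "location:table"}, rules)
```

The second rule only fires after the first, showing how knowledge derived at one layer can feed reasoning at the next.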

Sliding Active Camera-based Face Pose Compensation for Enhanced Face Recognition (얼굴 인식률 개선을 위한 선형이동 능동카메라 시스템기반 얼굴포즈 보정 기술)

  • 장승호;김영욱;박창우;박장한;남궁재찬;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.155-164
    • /
    • 2004
  • Recently, there have been remarkable developments in intelligent robot systems. Notable features of an intelligent robot are that it can track a user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition over other biometrics is that the coerciveness and physical contact usually involved in acquiring biometric characteristics are absent. However, the accuracy of face recognition is lower than that of other biometrics because of the dimensionality reduction in the image acquisition step and the various changes associated with face pose and background. Many factors deteriorate face recognition performance, such as the distance from the camera to the face, changes in lighting, pose changes, and changes in facial expression. In this paper, we implement a new sliding active camera system to compensate for the pose variations that degrade face recognition performance, acquire frontal face images, and apply PCA and HMM methods to improve recognition. The proposed face recognition algorithm can be used for intelligent surveillance systems and mobile robot systems.
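The PCA side of the recognition step can be illustrated with the eigenface idea: project mean-centered image vectors onto the top principal components and classify by nearest neighbor in that subspace. The data below are random stand-ins for face images, not data from the paper:

```python
import numpy as np

def fit_pca(X, k):
    """X: (n_samples, n_pixels). Returns (mean, k x n_pixels component matrix)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                       # rows of Vt are principal directions

def project(x, mean, comps):
    """Coordinates of image vector x in the k-dimensional eigenface subspace."""
    return comps @ (x - mean)

def nearest_label(x, gallery, labels, mean, comps):
    """Classify x by its nearest gallery image in the PCA subspace."""
    q = project(x, mean, comps)
    dists = [np.linalg.norm(q - project(g, mean, comps)) for g in gallery]
    return labels[int(np.argmin(dists))]

# Toy gallery: two well-separated "identities" in a 16-pixel space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 16)), rng.normal(5.0, 0.1, (5, 16))])
labels = ["a"] * 5 + ["b"] * 5
mean, comps = fit_pca(X, 2)
pred = nearest_label(rng.normal(0.0, 0.1, 16), X, labels, mean, comps)
```

The sliding active camera's job in the paper is to make the probe image frontal, which is precisely the condition under which a subspace method like this works best.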