• Title/Summary/Keyword: human and robot tracking


A Novel Two-Level Pitch Detection Approach for Speaker Tracking in Robot Control

  • Hejazi, Mahmoud R.;Oh, Han;Kim, Hong-Kook;Ho, Yo-Sung
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2005.06a
    • /
    • pp.89-92
    • /
    • 2005
  • Using natural speech commands to control a robot is an interesting topic in the field of robotics. In this paper, our main focus is on verifying the speaker who gives a command, to decide whether he/she is authorized to issue commands. Among the possible dynamic features of natural speech, the pitch period is one of the most important for characterizing speech signals, and it usually differs from person to person. However, current pitch detection techniques have not yet reached the desired level of accuracy and robustness; when the signal is noisy or there are multiple pitch streams, the performance of most techniques degrades. In this paper, we propose a two-level approach for pitch detection which, compared with standard pitch detection algorithms, not only increases accuracy but also makes the performance more robust to noise. In the first level of the proposed approach, we discriminate voiced from unvoiced signals with a neural classifier that uses cepstrum sequences of speech as its input feature set. Voiced signals are then further processed in the second level by a modified standard AMDF-based pitch detection algorithm to determine their pitch periods precisely. The experimental results show that the accuracy of the proposed system is better than that of conventional pitch detection algorithms for speech signals in both clean and noisy environments.
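
The paper's modified AMDF is not reproduced in the abstract; as background, here is a minimal sketch of a plain (unmodified) AMDF pitch estimator in Python, assuming a 16 kHz mono frame and a 60-400 Hz search range (both values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def amdf_pitch(frame, fs=16000, f_min=60.0, f_max=400.0):
    """Estimate the pitch of one voiced frame with a plain AMDF.

    frame: 1-D float array holding a single voiced speech frame.
    Returns (pitch_hz, lag_samples). This is the basic AMDF, not the
    modified variant proposed in the paper.
    """
    lag_min = int(fs / f_max)                       # shortest lag to test
    lag_max = min(int(fs / f_min), len(frame) - 1)  # longest lag to test
    amdf = np.empty(lag_max - lag_min + 1)
    for i, lag in enumerate(range(lag_min, lag_max + 1)):
        # Average magnitude difference between the frame and its shifted copy
        amdf[i] = np.mean(np.abs(frame[lag:] - frame[:-lag]))
    best_lag = lag_min + int(np.argmin(amdf))       # deepest valley = pitch period
    return fs / best_lag, best_lag
```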


Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.167-174
    • /
    • 2013
  • In this paper, we find the face in color images and implement the ability to track it. Face tracking is the task of finding face regions in an image using the functions of a computer system, and this function is necessary for a robot. However, face tracking cannot be performed simply by extracting skin color from the image, because the face in an image varies with conditions such as lighting and facial expression. In this paper, we use a skin-color pixel extraction function with an added lighting compensation function and implement the entire processing system, including confirming a region as a face by finding the features of the eyes, nose, and mouth. The lighting compensation function is an adjusted sine function; although its output is not suited to human vision, it yielded about a 4% improvement. Face features are detected by amplifying and reducing the pixel values and comparing the resulting images, and the positions of the eyes, nose, and lips are detected. Face tracking efficiency was good.
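
The paper's adjusted-sine lighting compensation is not detailed in the abstract; the sketch below substitutes a simple gray-world normalization and a standard YCrCb skin-color threshold (both are stand-in assumptions, not the authors' exact functions), using OpenCV:

```python
import cv2
import numpy as np

def skin_mask(bgr):
    """Return a binary mask of skin-colored pixels after a simple
    gray-world lighting normalization (stand-in for the paper's
    adjusted-sine compensation)."""
    img = bgr.astype(np.float32)
    # Gray-world normalization: scale each channel to a common mean.
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / means
    img = np.clip(img, 0, 255).astype(np.uint8)

    # Threshold skin tones in YCrCb space (commonly used Cr/Cb ranges).
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Remove small speckles before searching for eye/nose/mouth features.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```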

People Tracking and Accompanying Algorithm for Mobile Robot Using Kinect Sensor and Extended Kalman Filter (키넥트센서와 확장칼만필터를 이용한 이동로봇의 사람추적 및 사람과의 동반주행)

  • Park, Kyoung Jae;Won, Mooncheol
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.38 no.4
    • /
    • pp.345-354
    • /
    • 2014
  • In this paper, we propose a real-time algorithm for estimating the relative position and velocity of a person with respect to a robot using a Kinect sensor and an extended Kalman filter (EKF). Additionally, we propose an algorithm for controlling the robot in the proximity of a person in a variety of modes. The algorithm detects the head and shoulder regions of the person using a histogram of oriented gradients (HOG) and a support vector machine (SVM). The EKF algorithm estimates the relative positions and velocities of the person with respect to the robot using data acquired by a Kinect sensor. We tested the various modes of proximity movement for a human in indoor situations. The accuracy of the algorithm was verified using a motion capture system.
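
A minimal EKF sketch for the tracking step, assuming a constant-velocity model for the person's relative state [x, y, vx, vy] and a range/bearing measurement derived from the Kinect data; the state layout, frame rate, and noise values are illustrative assumptions, not the paper's:

```python
import numpy as np

dt = 1.0 / 30.0                        # Kinect frame period (assumed)
F = np.array([[1, 0, dt, 0],           # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
Q = np.diag([1e-3, 1e-3, 1e-2, 1e-2])  # process noise (illustrative)
R = np.diag([2e-3, 1e-3])              # range/bearing noise (illustrative)

def ekf_step(x, P, z):
    """One predict/update cycle. x = [px, py, vx, vy], z = [range, bearing]."""
    # Predict with the linear motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Nonlinear measurement model h(x) = [sqrt(px^2 + py^2), atan2(py, px)].
    px, py = x[0], x[1]
    r = np.hypot(px, py)
    h = np.array([r, np.arctan2(py, px)])
    H = np.array([[ px / r,     py / r,    0, 0],
                  [-py / r**2,  px / r**2, 0, 0]])
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```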

Vision Chip for Edge and Motion Detection with a Function of Output Offset Cancellation (출력옵셋의 제거기능을 가지는 윤곽 및 움직임 검출용 시각칩)

  • Park, Jong-Ho;Kim, Jung-Hwan;Suh, Sung-Ho;Shin, Jang-Kyoo;Lee, Min-Ho
    • Journal of Sensor Science and Technology
    • /
    • v.13 no.3
    • /
    • pp.188-194
    • /
    • 2004
  • With remarkable advances in CMOS (complementary metal-oxide-semiconductor) process technology, a variety of vision sensors with signal processing circuits for complicated functions are actively being developed. In particular, as the principles of signal processing in the human retina have been revealed, a series of vision chips imitating the human retina have been reported. The human retina is able to detect the edges and motion of an object effectively. Among the several functions of the retina, edge detection is accomplished by the cells called photoreceptors, horizontal cells, and bipolar cells. We designed a CMOS vision chip by modeling the retinal cells involved in edge and motion detection as hardware. The designed vision chip was fabricated using a 0.6 μm CMOS process, and its characteristics were measured. Having reliable output characteristics, this chip can be used at the input stage of many applications, such as target tracking systems, fingerprint recognition systems, and human-friendly robot systems.
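
The chip itself is analog hardware, but the retinal pathway it mimics (a photoreceptor signal smoothed by horizontal cells and differenced by bipolar cells) amounts to a center-surround operation; a software analogue using a blurred-subtraction sketch is shown below (this is my approximation of the concept, not the chip's circuit):

```python
import cv2
import numpy as np

def retina_edge_map(gray):
    """Software analogue of the retinal edge pathway: the photoreceptor
    signal minus the horizontal-cell (spatially smoothed) signal gives a
    bipolar-cell-like edge response."""
    photoreceptor = gray.astype(np.float32)
    horizontal = cv2.GaussianBlur(photoreceptor, (9, 9), sigmaX=3)  # lateral smoothing
    bipolar = photoreceptor - horizontal                            # center minus surround
    return cv2.normalize(bipolar, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```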

Tracking Control of 3-Wheels Omni-Directional Mobile Robot Using Fuzzy Azimuth Estimator (퍼지 방위각 추정기를 이용한 세 개의 전 방향 바퀴 구조의 이동로봇시스템의 개발)

  • Kim, Sang-Dae;Kim, Seung-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.10
    • /
    • pp.3873-3879
    • /
    • 2010
  • Home service robots do not work on fixed tasks like industrial robots; because they share the same indoor space with humans, they have to operate in much more flexible and varied environments. Most of them are developed on the basis of a wheeled mobile platform, in the same way as vehicle robots for factory automation. These days, omni-directional wheels are used in mobile robots to obtain holonomic system characteristics. A holonomic robot using omni-directional wheels is capable of driving in any direction, but trajectory control for an omni-directional mobile robot is not easy. In particular, azimuth control, which involves a sensor uncertainty problem, is much more difficult. This paper develops a trajectory controller for a 3-wheel omni-directional mobile robot using a fuzzy azimuth estimator. A trajectory controller for an omni-directional mobile robot, in which each motor is controlled by an individual PID law to follow the speed command from the inverse kinematics, needs precise sensing of the robot's azimuth and an exact estimate of the reference azimuth value, but azimuth perception sensors have inherent imprecision and uncertainty. In this paper, these problems are solved by fuzzy logic inference, which can be applied straightforwardly to control the mobile robot by means of the fuzzy behavior-based scheme already existing in the literature. Finally, the good performance of the developed mobile robot is confirmed through live tests of the path control task.
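
A sketch of the inverse kinematics that feed the per-wheel PID loops, assuming three omni wheels spaced 120° apart at radius L from the robot center (the geometry and numbers are generic assumptions, not values from the paper):

```python
import numpy as np

WHEEL_ANGLES = np.deg2rad([90.0, 210.0, 330.0])  # wheel positions around the body (assumed layout)
L = 0.15                                         # wheel distance from center [m] (assumed)

def omni3_inverse_kinematics(vx, vy, omega):
    """Map a body-frame velocity command (vx, vy in m/s, omega in rad/s)
    to the three wheel rim speeds [m/s] that the PID loops should track."""
    return np.array([
        -np.sin(a) * vx + np.cos(a) * vy + L * omega
        for a in WHEEL_ANGLES
    ])

# Example: drive forward at 0.3 m/s while rotating slowly.
print(omni3_inverse_kinematics(0.0, 0.3, 0.1))
```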

A Study of 3D World Reconstruction and Dynamic Object Detection using Stereo Images (스테레오 영상을 활용한 3차원 지도 복원과 동적 물체 검출에 관한 연구)

  • Seo, Bo-Gil;Yoon, Young Ho;Kim, Kyu Young
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.10
    • /
    • pp.326-331
    • /
    • 2019
  • In the real world there are both dynamic and static objects, but an autonomous vehicle or mobile robot cannot distinguish between them, even though a human can do so easily. Distinguishing static objects from dynamic objects clearly is important for an autonomous vehicle or mobile robot to drive successfully and stably. To do this, various sensor systems can be used, such as cameras and LiDAR. Stereo camera images are often used for autonomous driving; they serve object recognition tasks such as segmentation, classification, and tracking, as well as navigation tasks such as 3D world reconstruction. This study suggests a method to distinguish static and dynamic objects online using stereo vision for an autonomous vehicle or mobile robot. The method was applied to a 3D world map reconstructed from stereo vision for navigation and achieved 99.81% accuracy.
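
A minimal OpenCV sketch of the stereo step underlying such a reconstruction: compute a disparity map with semi-global matching and convert it to depth, assuming rectified image pairs and a known focal length and baseline (all numbers below are placeholders, not the paper's calibration):

```python
import cv2
import numpy as np

# Placeholder calibration values; real ones come from stereo calibration.
FOCAL_PX = 700.0      # focal length in pixels (assumed)
BASELINE_M = 0.12     # camera baseline in meters (assumed)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)

def depth_from_stereo(left_gray, right_gray):
    """Return a per-pixel depth map [m] from a rectified grayscale pair."""
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disp.shape, np.inf, np.float32)
    valid = disp > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]   # Z = f * B / d
    return depth
```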

The Basic Position Tracking Technology of Power Connector Receptacle based on the Image Recognition (영상인식 기반 파워 컨넥터 리셉터클의 위치 확인을 위한 기초 연구)

  • Ko, Yun-Seok
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.2
    • /
    • pp.309-314
    • /
    • 2017
  • Recently, fields such as service robots, autonomous electric vehicles, and torpedo ladle cars operated autonomously to improve the efficiency of steel-mill management have been receiving great attention, but the development of an automatic power supply that needs no human intervention remains a problem. In this paper, a position tracking technology for a power connector receptacle based on computer vision is studied, which can recognize the power connector receptacle and identify its position, and finally its feasibility is verified using the OpenCV library.
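
The abstract does not say which OpenCV routine is used; a hedged sketch of one common choice, normalized cross-correlation template matching, for locating a receptacle template in a camera frame:

```python
import cv2

def locate_receptacle(frame_gray, template_gray, threshold=0.8):
    """Return the (x, y) top-left corner of the best template match,
    or None if the correlation score falls below the threshold."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None
```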

Shape Prediction Method for Electromagnet-Embedded Soft Catheter Robot (전자석 내장형 소프트 카테터 로봇 형상 예측 방법)

  • Sanghyun Lee;Donghoon Son
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.1
    • /
    • pp.39-44
    • /
    • 2024
  • This study introduces a novel method for predicting the shape of soft catheter robots embedded with electromagnets. As an advancement in the realm of soft robotics, these catheter robots are crafted from flexible and pliable materials, ensuring enhanced safety and adaptability during interactions with human tissues. Given the pivotal role of catheters in minimally invasive surgeries (MIS), our design stands out by facilitating active control over the orientation and intensity of the inbuilt electromagnets. This ensures precise targeting and manipulation of the catheter segments. The research encompasses a comprehensive breakdown of the magnetic modeling, tracking algorithms, experimental layout, and analytical techniques. Both simulation and experimental results validate the efficacy of our method, underscoring its potential to augment accuracy in MIS and revolutionize healthcare-oriented soft robotics.
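
The paper's magnetic model is not reproduced in the abstract; as background, a point-dipole field sketch, which is a usual starting point for modeling an embedded electromagnet (treating the coil as a point dipole is my assumption, not necessarily the authors' model):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(m, r):
    """Magnetic flux density [T] at displacement r [m] from a point dipole
    with moment m [A*m^2]:  B = mu0/(4*pi) * (3*(m.rhat)*rhat - m) / |r|^3."""
    m = np.asarray(m, float)
    r = np.asarray(r, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / d**3
```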

Estimation of a Gaze Point in 3D Coordinates using Human Head Pose (휴먼 헤드포즈 정보를 이용한 3차원 공간 내 응시점 추정)

  • Shin, Chae-Rim;Yun, Sang-Seok
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.177-179
    • /
    • 2021
  • This paper proposes a method of estimating the location of the target point at which an interactive robot gazes in an indoor space. RGB images are extracted from low-cost webcams, the user's head pose is obtained from the face detection (OpenFace) module, and geometric constraints are applied to estimate the user's gaze direction in 3D space. The coordinates of the target point at which the user stares are finally measured through the relationship between the estimated gaze direction and the table plane.
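
A sketch of the final geometric step: intersect the estimated gaze ray (head position plus gaze direction from the head pose) with the table plane, written here as n·p = d, to obtain the 3D gaze point. The numbers in the example are illustrative only:

```python
import numpy as np

def gaze_point_on_plane(head_pos, gaze_dir, plane_n, plane_d):
    """Intersect the ray p(t) = head_pos + t * gaze_dir (t >= 0) with the
    plane n.p = d. Returns the 3D gaze point, or None when the user looks
    away from (or parallel to) the plane."""
    head_pos = np.asarray(head_pos, float)
    gaze_dir = np.asarray(gaze_dir, float)
    denom = np.dot(plane_n, gaze_dir)
    if abs(denom) < 1e-9:
        return None                        # gaze parallel to the table
    t = (plane_d - np.dot(plane_n, head_pos)) / denom
    return head_pos + t * gaze_dir if t >= 0 else None

# Example: table top at z = 0.75 m, user looking down and forward.
print(gaze_point_on_plane([0.0, 0.0, 1.4], [0.0, 0.8, -0.6],
                          np.array([0.0, 0.0, 1.0]), 0.75))
```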


Audio-Visual Fusion for Sound Source Localization and Improved Attention (음성-영상 융합 음원 방향 추정 및 사람 찾기 기술)

  • Lee, Byoung-Gi;Choi, Jong-Suk;Yoon, Sang-Suk;Choi, Mun-Taek;Kim, Mun-Sang;Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.35 no.7
    • /
    • pp.737-743
    • /
    • 2011
  • Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.
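
A minimal sketch of the audio half of such a fusion: estimate the time difference of arrival between two microphones with GCC-PHAT and convert it to a bearing, assuming a known microphone spacing and sample rate (illustrative values, not the robot's actual array or the paper's method):

```python
import numpy as np

FS = 16000          # sample rate [Hz] (assumed)
MIC_DIST = 0.2      # microphone spacing [m] (assumed)
C_SOUND = 343.0     # speed of sound [m/s]

def gcc_phat_bearing(sig_a, sig_b):
    """Estimate the sound-source bearing [deg] relative to a two-mic pair
    from the GCC-PHAT cross-correlation peak."""
    n = len(sig_a) + len(sig_b)
    spec = np.fft.rfft(sig_a, n) * np.conj(np.fft.rfft(sig_b, n))
    spec /= np.abs(spec) + 1e-12                     # PHAT weighting
    cc = np.fft.irfft(spec, n)
    max_shift = int(FS * MIC_DIST / C_SOUND)         # physically possible delays
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tdoa = (np.argmax(np.abs(cc)) - max_shift) / FS  # delay in seconds
    return np.degrees(np.arcsin(np.clip(tdoa * C_SOUND / MIC_DIST, -1, 1)))
```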