• Title/Summary/Keyword: Robot Interaction

Metabolic Rate Estimation for ECG-based Human Adaptive Appliance in Smart Homes (인간 적응형 가전기기를 위한 거주자 심박동 기반 신체활동량 추정)

  • Kim, Hyun-Hee;Lee, Kyoung-Chang;Lee, Suk
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.5
    • /
    • pp.486-494
    • /
    • 2014
  • Intelligent homes consist of ubiquitous sensors, home networks, and a context-aware computing system. These homes are expected to offer many services such as intelligent air-conditioning, lighting control, health monitoring, and home security. In order to realize these services, many researchers have worked on various research topics including smart sensors with low power consumption, home network protocols, resident and location detection, context-awareness, and scenario and service control. This paper presents a real-time metabolic rate estimation method based on measured heart rate for human adaptive appliances (air-conditioners, lighting, etc.). These estimation results can provide valuable information for controlling smart appliances so that they can adjust themselves according to the status of residents. The heart rate based method has been experimentally compared with a location-based method on a test bed.
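A minimal sketch of the kind of heart-rate-to-metabolic-rate mapping the abstract describes. The heart-rate-reserve formulation and all constants below are illustrative assumptions, not the calibration reported in the paper.

```python
def metabolic_rate_mets(heart_rate, resting_hr=65.0, max_hr=190.0,
                        resting_mets=1.0, max_mets=10.0):
    """Estimate metabolic rate (METs) from heart rate via heart-rate reserve.

    Assumes a linear relation between %HRR and metabolic rate; the constants
    are illustrative, not the calibration used in the paper.
    """
    hrr = (heart_rate - resting_hr) / (max_hr - resting_hr)  # % heart-rate reserve
    hrr = max(0.0, min(1.0, hrr))                            # clamp to [0, 1]
    return resting_mets + hrr * (max_mets - resting_mets)


if __name__ == "__main__":
    # An appliance controller could threshold this estimate to adapt its set point.
    for hr in (62, 85, 120):
        print(hr, "bpm ->", round(metabolic_rate_mets(hr), 2), "METs")
```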

3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.1458-1463
    • /
    • 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, 3D information about objects has been required in real time for many of the aforementioned applications. 3D tracking is a difficult problem to solve because explicit 3D information about objects in the scene is lost during the image formation process of the camera. Many recent vision systems therefore use a stereo camera, especially for 3D tracking. 3D feature based tracking (3DFBT), one of the 3D tracking approaches using stereo vision, has many advantages compared to other tracking methods. If we assume that the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, existing calibration methods are based on an accurate camera model, so modelling error and weakness to lens distortion are embedded. Therefore, this paper proposes a 3D feature based tracking method using an SVM to solve the reconstruction problem.
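The abstract's idea of sidestepping explicit calibration by learning the stereo reconstruction can be sketched with a support vector regressor that maps matched pixel coordinates to 3D points. The synthetic pinhole cameras and scikit-learn pipeline below are assumptions for illustration, not the authors' formulation.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic training pairs: stereo correspondences (uL, vL, uR, vR) -> (X, Y, Z).
# In practice these would come from a calibration target; here two illustrative
# pinhole cameras with a 0.12 m baseline project random 3D points.
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 2], [1, 1, 6], size=(500, 3))
f, baseline = 500.0, 0.12
uL = f * pts3d[:, 0] / pts3d[:, 2]
uR = f * (pts3d[:, 0] - baseline) / pts3d[:, 2]
v = f * pts3d[:, 1] / pts3d[:, 2]
features = np.column_stack([uL, v, uR, v])

# The regressor learns the pixel->3D mapping directly, absorbing model error
# and lens distortion instead of relying on an explicit camera model.
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)))
model.fit(features, pts3d)

print(np.round(model.predict(features[:3]), 2))  # reconstructed points
print(np.round(pts3d[:3], 2))                    # ground truth for comparison
```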

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.3
    • /
    • pp.313-321
    • /
    • 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have been lacking in accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose in harsh environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method with an average head pose mean error of less than $4.5^{\circ}$ in real time.
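A toy PyTorch module illustrating the joint detection-plus-pose idea: a shared trunk with one head for face/non-face logits and one for yaw-pitch-roll regression. The layer sizes, input resolution, and head structure are assumptions, not the network reported in the paper.

```python
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    """Toy multi-task CNN: face/non-face logits plus yaw-pitch-roll regression."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
        )
        self.face_head = nn.Linear(128, 2)   # face / non-face logits
        self.pose_head = nn.Linear(128, 3)   # yaw, pitch, roll (degrees)

    def forward(self, x):
        h = self.trunk(x)
        return self.face_head(h), self.pose_head(h)

# One 32x32 grayscale crop in, detection logits and pose angles out.
net = FacePoseNet()
logits, pose = net(torch.zeros(1, 1, 32, 32))
print(logits.shape, pose.shape)
```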

Robust 3D visual tracking for moving object using pan/tilt stereo cameras (Pan/Tilt스테레오 카메라를 이용한 이동 물체의 강건한 시각추적)

  • Cho, Che-Seung;Chung, Byeong-Mook;Choi, In-Su;Nho, Sang-Hyun;Lim, Yoon-Kyu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.9 s.174
    • /
    • pp.77-84
    • /
    • 2005
  • In most vision applications, we are frequently confronted with determining the position of an object continuously. Generally, intertwined processes are needed for target tracking, composed of a tracking process and a control process. Each of these processes can be studied independently, but in an actual implementation we must consider the interaction between them to achieve robust performance. In this paper, robust real-time visual tracking against a complex background is considered. A common approach to increasing the robustness of a tracking system is to use known geometric models (CAD models, etc.) or to attach a marker. In cases where the object has an arbitrary shape or it is difficult to attach a marker to it, we present a method to track the target easily by specifying the color and shape of a part of the object in advance. Robust detection can be achieved by integrating voting-based visual cues. A Kalman filter is used to estimate the motion of the moving object in 3D space, and the algorithm is tested on a pan/tilt robot system. Experimental results show that fusing cues and motion estimation in a tracking system yields robust performance.
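The motion-estimation stage can be sketched as a constant-velocity Kalman filter over 3D position measurements from the stereo head. The noise covariances and frame rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 0.033                                    # frame period in seconds (assumed)
F = np.eye(6)                                 # state: [x, y, z, vx, vy, vz]
F[:3, 3:] = dt * np.eye(3)                    # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
Q = 1e-3 * np.eye(6)                          # process noise (assumed)
R = 1e-2 * np.eye(3)                          # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a 3D position measurement z."""
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # update state
    P = (np.eye(6) - K @ H) @ P               # update covariance
    return x, P

x, P = np.zeros(6), np.eye(6)
for t in range(5):                            # fake measurements along a line
    x, P = kalman_step(x, P, np.array([0.1 * t, 0.0, 1.0]))
print(np.round(x[:3], 3), np.round(x[3:], 3)) # filtered position and velocity
```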

Development of Bio-sensor-Based Feature Extraction and Emotion Recognition Model (바이오센서 기반 특징 추출 기법 및 감정 인식 모델 개발)

  • Cho, Ye Ri;Pae, Dong Sung;Lee, Yun Kyu;Ahn, Woo Jin;Lim, Myo Taeg;Kang, Tae Koo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.11
    • /
    • pp.1496-1505
    • /
    • 2018
  • Emotion recognition technology is necessary for human-computer interaction. There are many cases where one cannot communicate without considering the other's emotion, so emotion recognition technology is an essential element in the field of communication. In this regard, it is highly useful in various fields. Various bio-sensors are used for human emotion recognition and can be used to measure emotions. This paper proposes a system for recognizing human emotions using two physiological sensors. For emotion classification, Russell's two-dimensional emotion model was used, and a classification method based on personality was proposed by extracting sensor-specific features. In addition, the emotion model was divided into four emotions using the Support Vector Machine classification algorithm. Finally, the proposed emotion recognition system was evaluated through a practical experiment.
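A hedged sketch of the four-quadrant classification step on Russell's valence-arousal plane using an SVM. The toy two-dimensional features stand in for the paper's bio-sensor features and personalized feature extraction, which are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Toy feature vectors (stand-ins for bio-sensor statistics) labeled by quadrant
# of Russell's valence-arousal plane:
# 0: positive/high arousal, 1: negative/high arousal,
# 2: negative/low arousal,  3: positive/low arousal.
rng = np.random.default_rng(1)
centers = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2, 3], 50)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict([[0.9, 0.8], [-1.1, -0.7]]))  # expected quadrants: 0 and 2
```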

An integrate information technology model during earthquake dynamics

  • Chen, Chen-Yuan;Chen, Ying-Hsiu;Yu, Shang-En;Chen, Yi-Wen;Li, Chien-Chung
    • Structural Engineering and Mechanics
    • /
    • v.44 no.5
    • /
    • pp.633-647
    • /
    • 2012
  • Applying Information Technology (IT) in practical engineering has become one of the most important issues in the past few decades, especially in areas such as internal solitary waves, intelligent robot interaction, artificial intelligence, fuzzy Lyapunov methods, tension leg platforms (TLP), and consumer and service quality. Beyond affecting traditional teaching modes or increasing interaction with users, IT can also be connected with current society by collecting the latest information from the internet; it is clearly a technology that keeps pace with current trends. Therefore, learning how to use IT facilities is becoming one of engineers' essential skills. In addition to studying how well engineers learn to operate IT facilities and apply them to teaching, how engineers' general information literacy affects the results of learning IT is also discussed. This research introduces the combined TAM and TPB model to understand how engineers use IT facilities.

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in the human-robot interaction (HRI) field for tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, applying ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. Inaccurate positions of facial features decrease the performance of emotion recognition. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
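The LK tracking stage can be sketched with OpenCV's pyramidal Lucas-Kanade routine; here arbitrary seed points and synthetic frames stand in for the ASM landmarks and camera images, so this illustrates only the tracking step, not the full ASM pipeline or the occlusion handling.

```python
import cv2
import numpy as np

# Two synthetic consecutive grayscale frames with a small bright blob that
# shifts a few pixels (stand-ins for real camera frames).
prev_frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(prev_frame, (160, 120), 5, 255, -1)
next_frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(next_frame, (164, 122), 5, 255, -1)

# Seed point to track (placeholder for the ASM facial landmarks).
prev_pts = np.array([[[160.0, 120.0]]], dtype=np.float32)

next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_frame, next_frame, prev_pts, None, winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked points; in the full framework, lost points
# would be re-initialized from a fresh ASM fit to cope with partial occlusion.
tracked = next_pts[status.flatten() == 1]
print(tracked)
```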

Measurement on range of two degrees of freedom motion for analytic generation of workspace (작업영역의 해석적 생성을 위한 2자유도 동작의 동작범위 측정)

  • Kee, Dohyung (기도형)
    • Journal of the Ergonomics Society of Korea
    • /
    • v.15 no.2
    • /
    • pp.15-24
    • /
    • 1996
  • To generate workspace analytically using robot kinematics, data on the range of human joint motion, especially the range of two-degree-of-freedom motion, are needed. However, these data have not been investigated up to now. Therefore, in this research, we investigate the interaction effect of motions with two degrees of freedom occurring simultaneously at the shoulder, virtual hip (L5/S1), and hip joints, respectively, for 47 young male students. When a motion with two degrees of freedom occurred at a joint such as the shoulder, virtual hip, or hip joint, it was found from the results of ANOVA that the action of one degree of freedom of motion may either decrease or increase the effective range of the other degree of freedom. In other words, shoulder flexion decreased as the shoulder was adducted, or abducted beyond $60^{\circ}$ toward the maximum degree of abduction, while shoulder flexion increased as the joint was abducted up to $60^{\circ}$. Flexion also decreased with lateral bending at the virtual hip joint, and as the hip was adducted or abducted from the neutral position. It is expected that workspace can be generated more precisely based on the data on the range of two-degree-of-freedom joint motion measured in this study.
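The interaction test described in the abstract can be sketched as an ANOVA on range-of-motion measurements grouped by the position of the other degree of freedom. The toy data and statsmodels formula below are illustrative, not the study's dataset or its full factorial design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Toy data: shoulder flexion range measured at three abduction levels for
# 30 subjects (the real study measured 47 subjects and several joints).
rng = np.random.default_rng(2)
abduction = np.repeat(["neutral", "60deg", "max"], 30)
flexion = (170.0
           - 10.0 * (abduction == "60deg")
           - 25.0 * (abduction == "max")
           + rng.normal(0.0, 5.0, 90))
df = pd.DataFrame({"abduction": abduction, "flexion": flexion})

# ANOVA: does abduction level affect the attainable flexion range?
model = ols("flexion ~ C(abduction)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```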

Emotion Recognition Based on Human Gesture (인간의 제스쳐에 의한 감정 인식)

  • Song, Min-Kook;Park, Jin-Bae;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.1
    • /
    • pp.46-51
    • /
    • 2007
  • This paper presents gesture analysis for human-robot interaction. Understanding human emotions through gesture is one of the skills necessary for computers to interact intelligently with their human counterparts. Gesture analysis consists of several processes such as detecting the hand, extracting features, and recognizing emotions. For efficient operation, we recognize gestures with an HMM (Hidden Markov Model). We constructed a large gesture database, with which we verified our method. As a result, our method was successfully integrated and operated in a mobile system.
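A hedged sketch of the HMM classification idea: one Gaussian HMM per emotion trained on gesture feature sequences, with recognition by maximum log-likelihood. The hmmlearn models, synthetic sequences, and emotion labels below are assumptions, not the paper's database or feature set.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(3)

def make_sequences(offset, n_seq=20, length=30, dim=4):
    """Synthetic gesture feature sequences (stand-ins for hand trajectories)."""
    return [offset + rng.standard_normal((length, dim)) for _ in range(n_seq)]

# Train one HMM per emotion class on that class's gesture sequences.
classes = {"joy": 0.0, "anger": 2.0, "sadness": -2.0}   # illustrative labels
models = {}
for label, offset in classes.items():
    seqs = make_sequences(offset)
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models[label] = m

# Recognition: choose the class whose HMM assigns the highest log-likelihood.
test = make_sequences(2.0, n_seq=1)[0]
print(max(models, key=lambda k: models[k].score(test)))  # expected: "anger"
```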

Shared Vehicle Teleoperation using a Virtual Driving Interface (가상 운전 인터페이스를 활용한 자동차 협력 원격조종)

  • Kim, Jae-Seok;Lee, Kwang-Hyun;Ryu, Jee-Hwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.3
    • /
    • pp.243-249
    • /
    • 2015
  • In direct vehicle teleoperation, a human operator drives a vehicle at a distance through a pair of master and slave devices. However, if there is time delay, it is difficult to drive the vehicle remotely due to the slow response. In order to address this problem, we introduce a novel methodology of shared vehicle teleoperation using a virtual driving interface. The methodology was developed with four components: 1) a virtual driving environment, 2) an interface for the virtual driving environment, 3) a path generator based on the virtual driving trajectory, and 4) a path following controller. Experimental results showed the effectiveness of the proposed approach in both simple and cluttered driving environments. In the experiments, we compared two sampling methods, fixed sampling time and user-defined instants, and finally a merged method showed the best remote driving performance in terms of completion time and number of collisions.
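The path-following component can be sketched with a pure-pursuit steering law over the waypoints produced from the virtual driving trajectory. The geometry, look-ahead distance, and wheelbase below are illustrative assumptions, not the controller described in the paper.

```python
import numpy as np

def pure_pursuit_steer(pose, path, lookahead=3.0, wheelbase=2.5):
    """Steering angle toward the first waypoint beyond the look-ahead distance.

    pose: (x, y, heading) of the vehicle; path: Nx2 waypoints generated from
    the virtual driving trajectory. All parameters are illustrative.
    """
    x, y, yaw = pose
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    idx = np.argmax(d > lookahead) if np.any(d > lookahead) else len(path) - 1
    dx, dy = path[idx, 0] - x, path[idx, 1] - y
    local_y = -np.sin(yaw) * dx + np.cos(yaw) * dy   # target in vehicle frame
    curvature = 2.0 * local_y / (lookahead ** 2)     # pure-pursuit curvature
    return float(np.arctan(wheelbase * curvature))   # bicycle-model steering angle

# Follow a gently curving reference path starting at the origin.
path = np.column_stack([np.linspace(0, 20, 50), np.linspace(0, 5, 50)])
print(round(pure_pursuit_steer((0.0, 0.0, 0.0), path), 3))
```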