• Title/Summary/Keyword: facial robot

84 search results

Real Time Face Detection and Recognition based on Embedded System (임베디드 시스템 기반 실시간 얼굴 검출 및 인식)

  • Lee, A-Reum;Seo, Yong-Ho;Yang, Tae-Kyu
    • Journal of The Institute of Information and Telecommunication Facilities Engineering / v.11 no.1 / pp.23-28 / 2012
  • In this paper, we propose and develop a fast and efficient real-time face detection and recognition method that can run on an embedded system instead of a high-performance desktop. In the face detection stage, a face is detected by locating the eye region, one of the most salient facial features, after applying various image-processing methods; in the face recognition stage, the detected face is compared with a prepared face database using a template matching algorithm. We also optimized the algorithm so that it runs successfully on the embedded system, and performed face detection and recognition experiments on an embedded board to verify its performance. The developed method can be applied to automatic doors, mobile computing environments, and various robots.
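The detect-then-template-match pipeline summarized in this abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; the zero-mean normalized cross-correlation score and the `recognize` helper are assumptions, since the abstract does not name a similarity measure.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def recognize(face, database):
    """Return the database identity whose template best matches `face`."""
    scores = {name: ncc(face, tmpl) for name, tmpl in database.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Tiny synthetic example: two 8x8 "templates" and a noisy query of one of them.
rng = np.random.default_rng(0)
db = {"alice": rng.integers(0, 256, (8, 8)), "bob": rng.integers(0, 256, (8, 8))}
query = db["alice"] + rng.normal(0, 5, (8, 8))
name, score = recognize(query, db)
```

On embedded hardware, the appeal of template matching is exactly this simplicity: one dense correlation per database entry, no training step.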


Gaze Direction Estimation Method Using Support Vector Machines (SVMs) (Support Vector Machines을 이용한 시선 방향 추정방법)

  • Liu, Jing;Woo, Kyung-Haeng;Choi, Won-Ho
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.379-384 / 2009
  • A method for detecting and tracking human gaze is an important requirement for HMI (Human-Machine Interface), for example in a human-serving robot. This paper proposes a novel three-dimensional (3D) human gaze estimation method that combines face recognition, orientation estimation, and SVMs (Support Vector Machines). A total of 2,400 images covering a pan range of $-90^{\circ}{\sim}90^{\circ}$ and a tilt range of $-40^{\circ}{\sim}70^{\circ}$ at intervals of $10^{\circ}$ were used. A stereo camera was used to obtain the global coordinates of the midpoint between the eyes, and Gabor filter banks with horizontal and vertical orientations at 4 scales were used to extract the facial features. The experimental results show that the error rate of the proposed method is substantially lower than Liddell's.
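The feature-extraction step this abstract mentions, a Gabor bank with horizontal and vertical orientations at 4 scales, can be sketched as below. Kernel size, sigma, and wavelengths are assumptions, and the SVM classification stage that consumes these features is omitted.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor kernel: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, scales=(2, 4, 8, 16), thetas=(0.0, np.pi / 2)):
    """Mean absolute filter response per (scale, orientation) pair:
    a horizontal/vertical bank with 4 scales, 8 features in total."""
    feats = []
    for lam in scales:
        for th in thetas:
            k = gabor_kernel(15, lam / 2, th, lam)
            # Circular convolution via FFT is enough for a sketch.
            resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
            feats.append(resp.mean())
    return np.array(feats)

rng = np.random.default_rng(1)
face = rng.random((32, 32))   # stand-in for a cropped face patch
fv = gabor_features(face)
```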

Performance Evaluation Method of User Identification and User Tracking for Intelligent Robots Using Face Images (얼굴영상을 이용한 지능형 로봇의 개인식별 및 사용자 추적 성능평가 방법)

  • Kim, Dae-Jin;Park, Kwang-Hyun;Hong, Ji-Man;Jeong, Young-Sook;Choi, Byoung-Wook
    • The Journal of Korea Robotics Society / v.4 no.3 / pp.201-209 / 2009
  • In this paper, we deal with methods for evaluating the performance of user identification and user tracking for intelligent robots using face images. The paper presents general approaches to standard evaluation methods for improving intelligent robot systems as well as their algorithms. The evaluation methods proposed here can be combined with evaluation methods for algorithms that detect the face region and facial components, in order to measure the overall performance of face recognition in intelligent robots.


Toothache Caused by Sialolithiasis of the Submandibular Gland

  • Kim, Jae-Jeong;Lee, Hee Jin;Kim, Young-Gun;Kwon, Jeong-Seung;Choi, Jong-Hoon;Ahn, Hyung-Joon
    • Journal of Oral Medicine and Pain / v.43 no.3 / pp.87-91 / 2018
  • Sialolithiasis is the most frequent disease of the salivary glands, causing swelling and/or pain at the affected site. We report a 44-year-old woman who presented with severe pain in the lower left second molar region without swelling. Sialoliths in her left submandibular gland were confirmed by radiographic examination. After robot-assisted sialoadenectomy, the pain did not recur, but facial paralysis and an unaesthetic scar remained.

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.1076-1094 / 2022
  • Technology for emotion recognition is an essential part of human personality analysis. Existing approaches to defining personality characteristics have relied on surveys, but communication often cannot take place without considering emotions, so emotion recognition technology is an essential element of communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically including facial expressions, speech, and biometric responses, so emotions can be recognized from images, voice signals, and physiological signals, the last of which are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. First, the existing binary arousal-valence scheme was subdivided into four levels to classify emotions in more detail; then, based on current High/Low classification techniques, the model was further subdivided into multiple levels. Finally, signal characteristics were extracted using a 1-D Convolutional Neural Network (CNN) and classified into sixteen emotions. Although CNNs are typically used to learn from 2-D images, 1-D sensor data was used as the input in this paper. The proposed emotion recognition system was then evaluated with measurements from actual sensors.
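The 1-D convolution over sensor windows that this abstract describes can be sketched as a single convolution-plus-ReLU layer in NumPy. The filter count, width, and stride below are illustrative, not the paper's architecture.

```python
import numpy as np

def conv1d(signal, kernels, stride=1):
    """Valid-mode 1-D convolution of a single-channel signal with a bank of
    kernels, followed by ReLU: the basic building block of a 1-D CNN."""
    k = kernels.shape[1]
    steps = (len(signal) - k) // stride + 1
    out = np.empty((kernels.shape[0], steps))
    for i in range(steps):
        window = signal[i * stride : i * stride + k]
        out[:, i] = kernels @ window       # one dot product per filter
    return np.maximum(out, 0.0)            # ReLU

rng = np.random.default_rng(2)
sig = rng.standard_normal(128)             # e.g. a short physiological window
bank = rng.standard_normal((4, 7)) * 0.1   # 4 filters of width 7
fm = conv1d(sig, bank, stride=2)           # feature map: 4 x 61
```

Stacking such layers and finishing with a 16-way classifier head would mirror the sixteen-emotion setup described above.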

Development of An Autonomous Medicine Delivery Robot Using Facial Recognition for Unlocking Mechanisms (얼굴인식 알고리즘을 활용한 잠금해제 및 자율주행 약제배송로봇 개발)

  • Yu-Kyeong Kim;Ye-Rin Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.874-875 / 2023
  • This paper proposes a contactless medicine delivery robot to prevent the spread of infectious diseases such as COVID-19. The proposed robot identifies human faces through real-time image processing using an OpenCV and Q-Learning based model. The patient's face, age, and prescribed medication are registered in a patient database. When the face captured by the camera matches a face in the database, the locking mechanism is released and the patient is allowed to receive the medication. Delivery is also verified a second time through a mobile application. The robot proposed in this paper can therefore contribute effectively to preventing the spread of infectious diseases in inpatient wards by delivering medication to patients without face-to-face contact.
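The unlock decision described in this abstract, matching the camera face against the patient database and releasing the lock on a match, can be sketched with embedding similarity. The embeddings, threshold, and record fields here are hypothetical, since the paper's OpenCV/Q-Learning pipeline is not specified in detail.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def try_unlock(live_embedding, patients, threshold=0.8):
    """Compare a live face embedding against registered patients; return the
    matched record (name, medication) if any similarity clears the threshold."""
    best = max(patients, key=lambda p: cosine(live_embedding, p["embedding"]))
    if cosine(live_embedding, best["embedding"]) >= threshold:
        return {"unlocked": True, "name": best["name"],
                "medication": best["medication"]}
    return {"unlocked": False}

# Synthetic 128-dim "embeddings" standing in for real face features.
rng = np.random.default_rng(3)
reg = rng.standard_normal(128)
patients = [
    {"name": "patient_a", "medication": "antibiotic", "embedding": reg},
    {"name": "patient_b", "medication": "analgesic",
     "embedding": rng.standard_normal(128)},
]
live = reg + rng.standard_normal(128) * 0.1   # noisy re-capture of patient_a
result = try_unlock(live, patients)
```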

Application of Calibration Techniques to Enhance Accuracy of Markerless Surgical Robotic System for Intracerebral Hematoma Surgery (뇌혈종 제거 수술을 위한 무마커 수술 유도 로봇 시스템의 정확도 향상을 위한 캘리브레이션 기법)

  • Park, Kyusic;Yoon, Hyon Min;Shin, Sangkyun;Cho, Hyunchul;Kim, Youngjun;Kim, Laehyun;Lee, Deukhee
    • Korean Journal of Computational Design and Engineering / v.20 no.3 / pp.246-253 / 2015
  • In this paper, we propose calibration methods that can be applied to a markerless surgical robotic system for intracerebral hematoma (ICH) surgery. This surgical robotic system requires no additional patient imaging, using only the CT images initially taken for diagnosis, and applies a markerless registration method rather than stereotactic frames. Overall, our system has many advantages over conventional ICH surgeries: it is non-invasive, involves much less radiation exposure, and, most importantly, reduces the total operation time. In this paper, we focus specifically on the application of calibration methods and their verification, one of the most critical factors determining the accuracy of the system. Based on the hand-eye calibration method, we implemented three calibrations between the coordinates of the robot's end-effector and the coordinates of a 3D facial surface scanner. Phantom tests were conducted to validate the feasibility and accuracy of the proposed calibration methods and the surgical robotic system.
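Hand-eye calibration, which the abstract names as the basis of its three methods, seeks a fixed transform X relating the robot's end-effector to the scanner such that AX = XB holds for every motion pair (A measured by the robot, B by the scanner). The sketch below only sets up and checks that constraint on synthetic motions; actually solving for X (e.g. by Tsai-Lenz) is omitted, and the transforms are invented.

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def hom(R, t):
    """4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical ground-truth transform X between end-effector and scanner.
X = hom(rot_z(0.3), np.array([0.1, -0.05, 0.2]))

# Each robot motion A induces the scanner motion B = X^-1 A X.
A_list = [hom(rot_z(t), np.array([t, 0.0, 0.1])) for t in (0.5, 1.0, 1.5)]
B_list = [np.linalg.inv(X) @ A @ X for A in A_list]

# A candidate calibration is consistent iff A X = X B for every motion pair;
# the worst-case entrywise residual measures calibration error.
residual = max(np.abs(A @ X - X @ B).max() for A, B in zip(A_list, B_list))
```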

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.14 no.4 / pp.553-561 / 2010
  • This paper proposes a face tracking method that can be effectively applied to a robot's vision system. The proposed algorithm tracks facial areas after detecting the area of video motion. Movement is detected by computing the difference image between two consecutive frames and then removing noise with a median filter and erosion and dilation operations. To extract skin color from the moving area, the color information of sample images is used. The skin color region and the background are separated by evaluating similarity with membership functions generated from MIN-MAX values as fuzzy data. Within the face candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space, and the face region is tracked using the features of the detected eyes and mouth from a knowledge base. The experiment used 1,500 video frames from 10 subjects, 150 frames per subject. The results show a 95.7% detection rate (the motion areas of 1,435 frames were detected) and 97.6% successful face tracking (1,401 faces were tracked).
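Two steps from this abstract, frame differencing for motion detection and a MIN-MAX fuzzy membership for skin color, can be sketched as follows. The difference threshold and the trapezoid slope are assumptions, and the morphological cleanup and eye/mouth stages are omitted.

```python
import numpy as np

def motion_mask(prev, curr, thresh=20):
    """Binary motion mask from the absolute difference of two frames."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def skin_membership(channel, lo, hi, slope=10.0):
    """Trapezoidal fuzzy membership built from the MIN-MAX of sample skin
    values: 1 inside [lo, hi], ramping linearly to 0 over `slope` units."""
    below = np.clip(1 - (lo - channel) / slope, 0, 1)
    above = np.clip(1 - (channel - hi) / slope, 0, 1)
    return np.minimum(below, above)

rng = np.random.default_rng(4)
f0 = rng.integers(0, 256, (16, 16))
f1 = f0.copy()
f1[4:8, 4:8] = 255                      # a "moving" patch between frames
mask = motion_mask(f0, f1, thresh=20)

vals = np.array([80.0, 100.0, 120.0, 140.0])
mu = skin_membership(vals, lo=90, hi=130, slope=10)   # MIN=90, MAX=130
```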

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • Among the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems are face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue, but conventional methods have lacked the accuracy, robustness, or processing speed needed for practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network that performs multi-task learning on a small grayscale image: it jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state of the art, with an average head pose error of less than $4.5^{\circ}$ in real time.

Adaptive Skin Color Segmentation in a Single Image using Image Feedback (영상 피드백을 이용한 단일 영상에서의 적응적 피부색 검출)

  • Do, Jun-Hyeong;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.112-118 / 2009
  • Skin color segmentation techniques have been widely used for face/hand detection and tracking in many applications, such as diagnosis systems using facial information, human-robot interaction, and image retrieval systems. For video, the skin color model for a target is commonly updated every frame so that tracking remains robust to illumination change. For a single image, however, most studies employ a fixed skin color model, which may result in a low detection rate or high false-positive error. In this paper, we propose a novel method for effective skin color segmentation in a single image, which iteratively modifies the segmentation conditions using feedback from the skin color region segmented in the given image.
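The feedback loop this abstract describes, segment with the current skin-color condition and then refit the condition to the segmented pixels, can be sketched in one channel dimension. The interval refit via mean ± k·std is an assumption standing in for the paper's actual condition update.

```python
import numpy as np

def adaptive_skin_range(channel, lo, hi, k=2.0, iters=3):
    """Iteratively refine a skin-color interval: segment with the current
    interval, then re-fit it to mean +/- k*std of the segmented pixels
    (the image-feedback loop)."""
    for _ in range(iters):
        picked = channel[(channel >= lo) & (channel <= hi)]
        if picked.size == 0:
            break
        lo, hi = picked.mean() - k * picked.std(), picked.mean() + k * picked.std()
    return lo, hi

rng = np.random.default_rng(5)
skin = rng.normal(110, 5, 500)        # hypothetical skin-tone channel values
clutter = rng.uniform(0, 255, 500)    # background pixels
pixels = np.concatenate([skin, clutter])
lo, hi = adaptive_skin_range(pixels, 0, 255)   # start from a loose interval
```

Each pass shrinks the interval toward the dominant skin cluster, which is the single-image analogue of the per-frame model update used in video.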