• Title/Summary/Keyword: Head and face

Search Result 462

Emotional Head Robot System Using 3D Character (3D 캐릭터를 이용한 감정 기반 헤드 로봇 시스템)

  • Ahn, Ho-Seok;Choi, Jung-Hwan;Baek, Young-Min;Shamyl, Shamyl;Na, Jin-Hee;Kang, Woo-Sung;Choi, Jin-Young
    • Proceedings of the KIEE Conference / 2007.04a / pp.328-330 / 2007
  • Emotion is becoming one of the important elements of intelligent service robots. Emotional communication can create a more comfortable relationship between humans and robots. We developed an emotional head robot system using a 3D character. We designed an emotional engine for generating the robot's emotions. The results of face recognition and hand recognition are used as the input data of the emotional engine. The 3D character expresses nine emotions and speaks about its own emotional status. The head robot has a memory of a degree of attraction, which can be changed by the input data. We tested the head robot and confirmed its functions.
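
As one illustrative reading of the engine described in this abstract, the sketch below feeds recognition results into an attraction memory that selects one of nine emotion labels. The class name, update amounts, and label set are all assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of an emotional engine: recognition results update an
# internal "attraction" memory, which is mapped to one of nine emotion labels.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised",
            "fearful", "disgusted", "bored", "excited"]

class EmotionalEngine:
    def __init__(self):
        self.attraction = 0.0  # memory of a degree of attraction

    def update(self, face_recognized, hand_recognized):
        # Recognition results drive the attraction memory up or down
        # (the increments here are assumed values).
        self.attraction += 0.2 if face_recognized else -0.1
        self.attraction += 0.1 if hand_recognized else 0.0
        self.attraction = max(-1.0, min(1.0, self.attraction))

    def emotion(self):
        # Map the continuous attraction value to one of the nine labels.
        idx = int((self.attraction + 1.0) / 2.0 * (len(EMOTIONS) - 1))
        return EMOTIONS[idx]
```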


3D Head Pose Estimation Using The Stereo Image (스테레오 영상을 이용한 3차원 포즈 추정)

  • 양욱일;송환종;이용욱;손광훈
    • Proceedings of the IEEK Conference / 2003.07e / pp.1887-1890 / 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm using stereo images. Given a pair of stereo images, we automatically extract several important facial feature points using the disparity map, the Gabor filter, and the Canny edge detector. To detect the facial feature region, we propose a region-dividing method using the disparity map. In an indoor head-and-shoulder stereo image, the face region has a larger disparity than the background, so we separate the face region from the background by the divergence of the disparity. To estimate the 3D head pose, we propose a 2D-3D Error Compensated SVD (EC-SVD) algorithm. We estimate the 3D coordinates of the facial features using the correspondence of the stereo images, and then estimate the head pose of an input image using the EC-SVD method. Experimental results show that the proposed method is capable of estimating the pose accurately.
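
The region-dividing idea in this abstract — the head-and-shoulder foreground has larger disparity than the background — can be sketched as a simple threshold on a disparity map. The threshold ratio below is an illustrative assumption, a simplification of the paper's divergence-based criterion.

```python
import numpy as np

def face_region_mask(disparity, thresh_ratio=0.5):
    """Separate the face/foreground region from the background by
    thresholding the disparity map: indoors, the head-and-shoulder
    foreground has larger disparity than the background.
    thresh_ratio is an assumed illustrative parameter."""
    threshold = disparity.max() * thresh_ratio
    return disparity > threshold
```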


Characteristics and Outcomes of Patients with Bicycle-Related Injuries at a Regional Trauma Center in Korea

  • Lee, Yoonhyun;Lee, Min Ho;Lee, Dae Sang;Kim, Maru;Jo, Dae Hyun;Park, Hyosun;Cho, Hangjoo
    • Journal of Trauma and Injury / v.34 no.3 / pp.147-154 / 2021
  • Purpose: We analyzed the characteristics and outcomes of patients with bicycle-related injuries at a regional trauma center in northern Gyeonggi Province as a first step toward the development of improved prevention measures and treatments. Methods: The records of 239 patients who were injured in different types of bicycle-related accidents and transported to a single regional trauma center between January 2017 and December 2018 were examined. This retrospective single-center study used data from the Korea Trauma Database. Results: In total, 239 patients experienced bicycle-related accidents, most of whom were males (204, 85.4%), and 46.9% of the accidents were on roads for automobiles. Forty patients (16.7%) had an Injury Severity Score (ISS) of 16 or more. There were 125 patients (52.3%) with head/neck/face injuries, 97 patients (40.6%) with injuries to the extremities, 59 patients (24.7%) with chest injuries, and 21 patients (8.8%) with abdominal injuries. Patients who had head/neck/face injuries and an Abbreviated Injury Score (AIS) ≥3 were more likely to experience severe trauma (ISS ≥16). In addition, only 13 of 125 patients (10.4%) with head/neck/face injuries were wearing helmets, and patients with injuries in this region who were not wearing helmets had a 3.9-fold increased odds ratio of severe injury (AIS ≥2). Conclusions: We suggest that comprehensive accident prevention measures, including safety training and expansion of safety facilities, should be implemented at the governmental level, and that helmet wearing should be more strictly enforced to prevent injuries to the head, neck, and face.

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering / v.11 no.11 / pp.465-472 / 2022
  • In this paper, a style synthesis network based on StyleGAN and a video synthesis network are trained together to generate style-synthesized videos. To address the problem that gaze or expression does not transfer stably, 3D face reconstruction technology is applied to control important features such as the pose, gaze, and expression of the head using 3D face information. In addition, by training the discriminators of the Head2Head network for dynamics, mouth shape, image, and gaze, it is possible to create a stable style-synthesized video with greater plausibility and consistency. Using the FaceForensics and MetFaces datasets, we confirmed improved performance in converting one video into another while maintaining the consistent movement of the target face, and in generating natural data through video synthesis using 3D face information from the source video's face.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes both 3D head motion tracking and facial expression control. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, with a non-parametric HT skin color model and template matching, we can detect the facial region efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points are composed of head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, the facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed using Radial Basis Functions (RBF). From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
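
The RBF step at the end of this abstract amounts to scattered-data interpolation: control (feature) points are given known displacements, and non-feature vertices around them are moved by the interpolated field. The Gaussian kernel and its width below are assumptions for illustration, not the paper's exact choice.

```python
import numpy as np

def rbf_deform(vertices, controls, displacements, width=1.0):
    """Displace `vertices` by a Gaussian-RBF field that exactly
    reproduces `displacements` at the `controls` points."""
    # Pairwise Gaussian kernel between control points.
    d2 = ((controls[:, None, :] - controls[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * width ** 2))
    weights = np.linalg.solve(K, displacements)   # one weight row per control
    # Evaluate the interpolant at every vertex and add the offset.
    d2v = ((vertices[:, None, :] - controls[None, :, :]) ** 2).sum(-1)
    return vertices + np.exp(-d2v / (2.0 * width ** 2)) @ weights
```

At a control point the kernel weight is 1, so the prescribed displacement is reproduced exactly; far from all controls the field decays to zero and vertices stay put.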

Automatic 3D Head Pose-Normalization using 2D and 3D Interaction (자동 3차원 얼굴 포즈 정규화 기법)

  • Yu, Sun-Jin;Kim, Joong-Rock;Lee, Sang-Youn
    • Proceedings of the IEEK Conference / 2007.07a / pp.211-212 / 2007
  • Pose variation presents a significant problem in 2D face recognition. To solve this problem, various approaches use a 3D face acquisition system that can generate multi-view images. However, this creates another pose estimation problem in terms of normalizing the 3D face data. This paper presents a 3D head pose-normalization method using 2D and 3D interaction. The proposed method uses 2D information with the AAM (Active Appearance Model) and 3D information with a 3D normal vector. In order to verify the performance of the proposed method, we designed an experiment using 2.5D face recognition. Experimental results show that the proposed method is robust against pose variation.
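
A minimal sketch of pose normalization with a 3D face normal: estimate the plane normal from three landmarks (both eyes and the nose tip is one plausible choice; the landmark set is an assumption) and rotate the face so the normal aligns with the camera's z-axis via Rodrigues' rotation.

```python
import numpy as np

def normalize_pose(points, p_eye_l, p_eye_r, p_nose):
    """Rotate `points` so the face-plane normal aligns with +z."""
    n = np.cross(p_eye_r - p_eye_l, p_nose - p_eye_l)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                  # rotation axis (scaled by sin)
    c = float(n @ z)                    # cosine of the rotation angle
    if np.allclose(v, 0.0):             # already frontal
        return points.copy()
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)   # Rodrigues' rotation formula
    return points @ R.T
```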


Development of the Photogrammetric Method of Head Through 3-Dimensional Approach (3차원적 접근 방식을 통한 머리 부위 사진 측정법의 개발)

  • Kim, Woong;Nam, Yun-Ja;Kim, Min-Hyo
    • Journal of the Ergonomics Society of Korea / v.24 no.4 / pp.7-13 / 2005
  • We developed an accurate and reliable photogrammetric method that can be used instead of the direct measurement method and the three-dimensional scanning method. Our research was restricted to the head. Using a three-dimensional approach, we calibrated the distorted image of a photograph and obtained linear equations of the camera rays. We then assigned z values to landmarks on the head, obtained three-dimensional coordinates for each landmark by substituting those z values into the linear equations of the camera rays, and finally calculated the measurement results from those three-dimensional coordinates. When we compared the results obtained by 'Venus Face Measurement (VFM)', a program we developed by applying our method, with the results obtained by the direct measurement method, VFM showed very accurate and reliable results. In conclusion, the photogrammetric method developed in this study was verified to be an outstanding measurement method and a substitute for the direct measurement method and the three-dimensional scanning method.
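
The reconstruction step described above — substituting an assigned z value into the linear equation of a camera ray — can be sketched as follows; the ray parametrization (origin plus direction) is an assumed illustrative form of those linear equations.

```python
import numpy as np

def point_on_ray_at_z(origin, direction, z):
    """Return the 3D point where the camera ray x = origin + t*direction
    reaches the assigned depth z (assumes direction[2] != 0)."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    t = (z - origin[2]) / direction[2]   # solve the z-component for t
    return origin + t * direction
```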

Head Detection based on Foreground Pixel Histogram Analysis (전경픽셀 히스토그램 분석 기반의 머리영역 검출 기법)

  • Choi, Yoo-Joo;Son, Hyang-Kyoung;Park, Jung-Min;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information / v.14 no.11 / pp.179-186 / 2009
  • In this paper, we propose a head detection method based on vertical and horizontal pixel histogram analysis in order to overcome the drawbacks of previous head detection approaches that use Haar-like feature-based face detection. In the proposed method, we create vertical and horizontal foreground pixel histogram images from the background subtraction image, which represent the number of foreground pixels at each vertical or horizontal position. We then extract the feature points of the head region by applying the Harris corner detection method to the foreground pixel histogram images and analyzing the corner points. The proposed method shows robust head detection results even for face images in which the forehead is covered by hair, or for back-view images, in which previous approaches cannot detect the head region.
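
The vertical and horizontal foreground pixel histograms described above are straightforward to compute from a binary background-subtraction mask; the corner analysis on the histogram images is omitted from this sketch.

```python
import numpy as np

def foreground_histograms(mask):
    """Given a binary background-subtraction mask (1 = foreground),
    count foreground pixels per column and per row. The head appears
    as a narrow peak at the top of the silhouette."""
    vertical = mask.sum(axis=0)     # foreground count per column
    horizontal = mask.sum(axis=1)   # foreground count per row
    return vertical, horizontal
```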

A PROPORTIONAL ANALYSIS OF SOFT TISSUE PROFILE IN KOREAN YOUNG ADULTS (성인 정상 교합자의 연조직 비율에 관한 두부 X-선 계측학적 분석)

  • Lee, Jeong-Hwa;Nahm, Dong-Seok
    • The Korean Journal of Orthodontics / v.24 no.2 / pp.405-417 / 1994
  • The purpose of this study was to investigate the proportional characteristics of the soft tissue profile in Korean young adults. The sample consisted of 50 young adults (25 males and 25 females) who had a pleasing profile and normal occlusion. Soft tissue proportional analysis was performed on lateral cephalograms taken in the natural head position. The results were as follows: 1. The mean and standard deviation of the proportional analysis were obtained. 2. Horizontal and vertical dimensions were larger in males, but facial proportions showed no sexual difference except the upper/lower face height ratio (p<0.05), which was larger in females than in males. 3. Vertical dimensions, except SN-ST, had a high correlation with horizontal dimensions. 4. The head positioning error of the natural head position was smaller than the inter-individual variability of the SN line.


Detection Method of Face Rotation Angle for Crosstalk Cancellation (크로스토크 제거를 위한 얼굴 방위각 검출 기법)

  • Han, Sang-Il;Cha, Hyung-Tai
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.1 / pp.58-65 / 2007
  • The method of 3D sound realization using two speakers has two advantages: it is cheap and easy to build. In this case, the crosstalk between the two speakers has to be eliminated. To calculate and remove the effect of the crosstalk, it is essential to find the rotation angle of the human head correctly. In this paper, we suggest an algorithm to find the head angle in a two-channel system. We first detect the face area in the given image using Haar-like features. After that, the eyes are detected using a pre-processor and morphology methods. Finally, we calculate the face rotation angle from the face and eye locations. Experiments on various face images show that the proposed method performs much better than conventional methods.
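
As a rough, simplified sketch of the final step (the paper combines the face and eye locations; the geometry below is an assumption, not the paper's formula): when the head turns, the eye midpoint shifts horizontally inside the face box, and the normalized offset gives an azimuth estimate.

```python
import math

def head_azimuth(face_cx, face_half_w, eye_mid_x):
    """Rough head azimuth (degrees) from the horizontal offset of the
    eye midpoint relative to the face-box center. Assumes a spherical
    head model, so the offset maps to the angle through arcsine."""
    offset = (eye_mid_x - face_cx) / face_half_w
    offset = max(-1.0, min(1.0, offset))   # guard against detection noise
    return math.degrees(math.asin(offset))
```

A centered eye midpoint gives 0 degrees (frontal face); an offset of half the face half-width gives 30 degrees.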