• Title/Summary/Keyword: 3차원 얼굴모델 (3D face model)

Search Results: 126

Improvement of Face Components Detection using Neck Removal (목 부분의 제거를 통한 얼굴 검출 향상 기법)

  • Yoon, Ga-Rim; Yoon, Yo-Sup; Kim, Young-Bong
    • Proceedings of the Korea Contents Association Conference / 2004.11a / pp.321-326 / 2004
  • Many researchers have studied texturing a 3D face model with front and side pictures of an ordinary person. It is very important to detect the exact positions of the eyes, nose, and mouth in the side pictures. Previous methods first found the position of one component such as the eye, nose, or mouth and then extracted the other face components using their positional correlation. The detection results depend heavily on correctly removing the neck from the images. Therefore, we present a new algorithm that removes the neck completely and thus improves the detection rate of face components. To do this, we use RGB values and their differences. A rough code sketch of this idea appears after this entry.

  • PDF
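
The abstract above does not spell out the exact RGB rule, so the following Python sketch is only a guess at how a skin mask plus a per-row width test might be used to cut an image off at the neck; the skin-color thresholds and the 60% width ratio are assumptions, not values from the paper.

```python
# Hypothetical neck-removal sketch; all thresholds are illustrative, not the authors'.
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Rough skin detector on an HxWx3 uint8 RGB image using R/G/B differences."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    return (r > 95) & (r > g) & (g > b) & ((r - g) > 15)

def remove_neck(rgb: np.ndarray) -> np.ndarray:
    """Blank out rows below the face where the skin region narrows (the neck)."""
    mask = skin_mask(rgb)
    widths = mask.sum(axis=1)              # skin pixels per row
    if widths.max() == 0:
        return mask                        # no skin found
    widest_row = int(np.argmax(widths))
    out = mask.copy()
    for y in range(widest_row, mask.shape[0]):
        # Assume the neck begins where skin width drops below 60% of the face width.
        if widths[y] < 0.6 * widths[widest_row]:
            out[y:, :] = False
            break
    return out
```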

A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 남기환; 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.783-788 / 2002
  • Recently, communication systems have been moving toward using voice data together with face images during speech, because this provides a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3D face model to the speaking face image sequence. Then, to obtain feature information, we compute the amount of variation of the fitted 3D shape model across the image sequence and use these variations as recognition parameters. The intensity gradient values obtained from the variation of the 3D feature points are used to separate recognition units in the sequential images. We then apply a discrete HMM algorithm in the recognition process, using multiple observation sequences that fully reflect the variation of the 3D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels. A minimal sketch of the discrete-HMM scoring step follows this entry.
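
The abstract names a discrete HMM over quantized 3D feature variations but does not give the model topology or codebook, so the sketch below only illustrates the scoring step: a hand-rolled scaled forward algorithm that picks the viseme model with the highest likelihood. The toy 2-state, 3-symbol models are invented for the example.

```python
# Minimal discrete-HMM scoring sketch (scaled forward algorithm).
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log P(obs | HMM) for a discrete HMM.

    obs : sequence of observation symbol indices (e.g. quantized lip/jaw variations)
    pi  : (N,)   initial state probabilities
    A   : (N, N) state transition probabilities
    B   : (N, M) emission probabilities for M discrete symbols
    """
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        scale = alpha.sum()                    # rescale to avoid underflow
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, obs[t]]
    return log_lik + np.log(alpha.sum())

def classify(obs, models):
    """Pick the viseme/phoneme whose HMM gives the highest likelihood."""
    return max(models, key=lambda k: forward_log_likelihood(obs, *models[k]))

# Toy usage with two hypothetical 2-state, 3-symbol models.
models = {
    "a": (np.array([0.7, 0.3]),
          np.array([[0.8, 0.2], [0.3, 0.7]]),
          np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])),
    "o": (np.array([0.5, 0.5]),
          np.array([[0.6, 0.4], [0.4, 0.6]]),
          np.array([[0.2, 0.2, 0.6], [0.5, 0.4, 0.1]])),
}
print(classify([0, 1, 2, 2, 1], models))
```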

Face Detection Algorithm using Kinect-based Skin Color and Depth Information for Multiple Faces Detection (Kinect 디바이스에서 피부색과 깊이 정보를 융합한 여러 명의 얼굴 검출 알고리즘)

  • Yun, Young-Ji; Chien, Sung-Il
    • The Journal of the Korea Contents Association / v.17 no.1 / pp.137-144 / 2017
  • Face detection is still a challenging task under severe face pose variations and complex backgrounds. This paper proposes an effective algorithm that can detect single or multiple faces based on skin color detection and depth information. We use a Gaussian mixture model (GMM) for skin color detection in a color image. The depth information comes from the 3D depth sensor of the Kinect V2 device and is useful for segmenting a human body from the background. A labeling process then removes non-face regions using several features. Experimental results show that the proposed face detection algorithm provides robust detection performance even under variable conditions and complex backgrounds. A hedged sketch of this color-depth fusion appears below.
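
As a rough illustration of the skin-color GMM plus depth gating described above (the training data, the likelihood threshold, the depth cutoff, and the blob-size filter as a stand-in for the paper's feature-based non-face removal are all assumptions), one could combine scikit-learn's GaussianMixture with connected-component labeling:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy import ndimage

def fit_skin_gmm(skin_pixels: np.ndarray, n_components: int = 3) -> GaussianMixture:
    """Fit a GMM to (N, 3) skin-color samples collected offline."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(skin_pixels.astype(float))
    return gmm

def detect_faces(rgb, depth, gmm, max_depth_mm=2000, min_area=1500, ll_thresh=-12.0):
    """Return bounding boxes (x0, y0, x1, y1) of candidate face regions.

    rgb   : (H, W, 3) color image
    depth : (H, W) depth map in millimetres from the Kinect V2 sensor
    """
    h, w, _ = rgb.shape
    # 1) Skin likelihood from the color GMM.
    ll = gmm.score_samples(rgb.reshape(-1, 3).astype(float)).reshape(h, w)
    skin = ll > ll_thresh
    # 2) Depth gate: keep only pixels close enough to belong to a person.
    near = (depth > 0) & (depth < max_depth_mm)
    candidate = skin & near
    # 3) Connected-component labeling, then drop small blobs.
    labels, n = ndimage.label(candidate)
    boxes = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if (labels[sl] == i).sum() >= min_area:
            boxes.append((sl[1].start, sl[0].start, sl[1].stop, sl[0].stop))
    return boxes
```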

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo; Choi Soo-Mi; Kim Hae-Hwang; Kim Yong-Guk
    • The KIPS Transactions: Part B / v.12B no.7 s.103 / pp.795-802 / 2005
  • In this paper, we present a facial expression recognition and synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. To recognize facial expressions, we first detect the face area within the image acquired from the camera and then apply a normalization procedure for geometric and illumination corrections. For classifying a facial expression, we found that the best results are obtained when Gabor wavelets are combined with the enhanced Fisher model; in our case the output is a weighting over the 7 emotions. This weighting information, transmitted to the PDA over a mobile network, is used for non-photorealistic facial expression animation. To render a 3D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than linear interpolation. A rough sketch of the Gabor-plus-Fisher step appears below.
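
The combination of Gabor wavelets with an enhanced Fisher model could look roughly like the sketch below; the filter-bank parameters, the PCA dimension, and the use of scikit-learn's PCA + LDA pipeline as a stand-in for the enhanced Fisher model are assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_features(gray: np.ndarray) -> np.ndarray:
    """Concatenate responses of a small Gabor filter bank (5 scales x 8 orientations)."""
    feats = []
    for lambd in (4, 6, 8, 12, 16):                    # wavelengths in pixels
        for theta in np.arange(0, np.pi, np.pi / 8):   # 8 orientations
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0,
                                      ktype=cv2.CV_32F)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(cv2.resize(np.abs(resp), (16, 16)).ravel())  # downsample
    return np.concatenate(feats)

def train_expression_model(images, labels):
    """images: normalized grayscale face crops; labels: one of the 7 emotions."""
    X = np.stack([gabor_features(im) for im in images])
    model = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
    model.fit(X, labels)
    return model

# model.predict_proba(gabor_features(face)[None]) would then give the 7-way
# emotion weighting that is sent to the PDA for animation.
```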

Morphable Model to Interpolate Difference between Number of Pixels and Number of Vertices (픽셀 수와 정점들 간의 차이를 보완하는 Morphable 모델)

  • Ko, Bang-Hyun; Moon, Hyeon-Joon; Kim, Yong-Guk; Moon, Seung-Bin; Lee, Jong-Weon
    • The Journal of the Korea Contents Association / v.7 no.3 / pp.1-8 / 2007
  • Images acquired from systems such as CCTV and robots contain many human faces. Because of the rapid increase in visual data, we cannot process them manually; they must be processed automatically. Furthermore, companies require automatic security systems to protect their new technology. Various options are available, including face recognition, iris recognition, and fingerprint recognition; face recognition is preferable since it does not require direct contact. However, the standard 2D method is limited, so morphable models may be recommended as an alternative. The original morphable model, made by MPI, contains a large quantity of data such as texture and geometry data. This paper presents a Geometrix-based morphable model designed to reduce this data capacity. A minimal sketch of the linear morphable-model idea follows this entry.
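
A morphable model in its simplest linear form is a mean shape plus a weighted sum of basis shapes; the sketch below illustrates only that idea and does not model the MPI or Geometrix data formats mentioned above.

```python
import numpy as np

class MorphableModel:
    def __init__(self, mean_shape: np.ndarray, shape_basis: np.ndarray):
        """
        mean_shape  : (3V,)   stacked x, y, z coordinates of V vertices
        shape_basis : (K, 3V) K principal shape components
        """
        self.mean = mean_shape
        self.basis = shape_basis

    def synthesize(self, coeffs: np.ndarray) -> np.ndarray:
        """Return a (V, 3) vertex array for the given K coefficients."""
        shape = self.mean + coeffs @ self.basis
        return shape.reshape(-1, 3)

    def fit_to_points(self, target: np.ndarray, reg: float = 1e-2) -> np.ndarray:
        """Ridge-regularized least-squares coefficients reproducing (V, 3) target vertices."""
        d = target.ravel() - self.mean
        A = self.basis @ self.basis.T + reg * np.eye(self.basis.shape[0])
        return np.linalg.solve(A, self.basis @ d)
```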

Performance Improvement of Facial Gesture-based User Interface Using MediaPipe Face Mesh (MediaPipe Face Mesh를 이용한 얼굴 제스처 기반의 사용자 인터페이스의 성능 개선)

  • Jinwang Mok; Noyoon Kwak
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.125-134 / 2023
  • The purpose of this paper is to improve the performance of a previous facial gesture-based user interface, which recognizes facial gestures from the 3D coordinates of seven landmarks selected from the MediaPipe Face Mesh model, generates the corresponding user events, and executes the corresponding commands. The proposed method applies adaptive moving-average processing to the cursor positions to stabilize the cursor by alleviating micro-tremor, and improves performance by blocking momentary opening/closing discrepancies between the two eyes when both eyes are opened or closed at the same time. A usability evaluation of the proposed facial gesture interface confirmed that the average recognition rate of facial gestures increased to 98.7%, compared with 95.8% in the previous research. A sketch of one possible adaptive smoothing rule appears below.
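
The abstract does not give the exact adaptation rule for the moving average, so the following sketch assumes a speed-dependent exponential smoother: heavy smoothing for tiny, tremor-like movements and light smoothing for deliberate ones. Feeding a MediaPipe Face Mesh landmark (for example, the nose tip) through such a filter before mapping it to screen coordinates is one plausible reading of the cursor stabilization step.

```python
import math

class AdaptiveCursorFilter:
    """Exponential smoothing whose strength relaxes as the cursor moves faster,
    so micro-tremor is damped while deliberate motions stay responsive."""

    def __init__(self, slow_alpha=0.15, fast_alpha=0.85, speed_scale=40.0):
        self.slow_alpha = slow_alpha    # heavy smoothing for tiny movements
        self.fast_alpha = fast_alpha    # light smoothing for large movements
        self.speed_scale = speed_scale  # pixel speed at which smoothing is mostly released
        self.x = self.y = None

    def update(self, x: float, y: float) -> tuple[float, float]:
        if self.x is None:
            self.x, self.y = x, y
            return x, y
        speed = math.hypot(x - self.x, y - self.y)
        t = min(speed / self.speed_scale, 1.0)
        alpha = self.slow_alpha + t * (self.fast_alpha - self.slow_alpha)
        self.x += alpha * (x - self.x)
        self.y += alpha * (y - self.y)
        return self.x, self.y
```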

A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 김동수; 남기환; 한준희; 배철수; 나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1998.11a / pp.181-185 / 1998
  • Recently, communication systems have been moving toward using voice data together with face images during speech, because this provides a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3D face model to the speaking face image sequence. Then, to obtain feature information, we compute the amount of variation of the fitted 3D shape model across the image sequence and use these variations as recognition parameters. The intensity gradient values obtained from the variation of the 3D feature points are used to separate recognition units in the sequential images. We then apply a discrete HMM algorithm in the recognition process, using multiple observation sequences that fully reflect the variation of the 3D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.

  • PDF

Human Face Recognition and 3-D Human Face Modelling (얼굴 영상 인식 및 3차원 얼굴 모델 구현 알고리즘)

  • 이효종; 이지항
    • Proceedings of the IEEK Conference / 2000.11c / pp.113-116 / 2000
  • Human face recognition and 3D human face reconstruction are studied in this paper. To find the facial feature points, we detect edges in the input image and analyze the accumulated histogram of the edge information. A generic face model, implemented with OpenGL and built from 500 polygons, is used to display the 3D face model. To make the 3D face model more realistic, we propose a group-matching mapping method between the detected facial feature points and those of the generic face model. A personalized 3D face model that resembles the real face can be generated automatically in less than 5 seconds on a Pentium PC. A rough sketch of the edge-histogram step follows this entry.

  • PDF
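
One plausible reading of the "accumulated histogram of edge information" step is to sum edge pixels per image row and pick the strongest peaks as the eye, nose, and mouth row candidates; the Canny thresholds and the peak-picking rule below are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def feature_rows(gray: np.ndarray, n_peaks: int = 3):
    """Return candidate row positions of eyes/nose/mouth from edge density.

    gray : uint8 grayscale face image
    """
    edges = cv2.Canny(gray, 80, 160)
    row_hist = edges.sum(axis=1).astype(float)     # accumulated edge histogram per row
    # Smooth so that eyebrow/eye edges merge into single peaks.
    smooth = np.convolve(row_hist, np.ones(9) / 9.0, mode="same")
    # Take the strongest local maxima as feature-row candidates.
    peaks = [y for y in range(1, len(smooth) - 1)
             if smooth[y] >= smooth[y - 1] and smooth[y] >= smooth[y + 1]]
    peaks.sort(key=lambda y: smooth[y], reverse=True)
    return sorted(peaks[:n_peaks])
```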

Study of Model Based 3D Facial Modeling for Virtual Reality (가상현실에 적용을 위한 모델에 근거한 3차원 얼굴 모델링에 관한 연구)

  • 한희철; 권중장
    • Proceedings of the IEEK Conference / 2000.11c / pp.193-196 / 2000
  • In this paper, we present a model-based 3D facial modeling method for virtual reality applications that uses only one frontal face photograph. We extract facial features from the photograph and deform the mesh of a basic 3D model according to those features, and then apply texture mapping for greater similarity. Experiments show that the modeling technique is useful for movies, virtual reality applications, games, the clothing industry, and 3D video conferencing. A sketch of one possible mesh-deformation rule appears after this entry.

  • PDF
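
The abstract says the basic 3D model's mesh is modified by the extracted facial features, but not how; the sketch below assumes a simple Gaussian-weighted interpolation of 2D landmark offsets over the generic mesh, which is only one of many possible deformation rules.

```python
import numpy as np

def deform_mesh(vertices, model_landmarks, image_landmarks, sigma=0.15):
    """Bend a generic head mesh toward detected 2-D features.

    vertices        : (V, 3) generic mesh; x, y assumed normalized to [0, 1], z is depth
    model_landmarks : (L, 2) x, y of landmark vertices on the generic model
    image_landmarks : (L, 2) x, y of the same landmarks found in the photograph
    """
    offsets = image_landmarks - model_landmarks            # (L, 2) 2-D corrections
    out = vertices.copy()
    for i, v in enumerate(vertices):
        d2 = np.sum((model_landmarks - v[:2]) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))                 # nearby landmarks dominate
        if w.sum() > 1e-8:
            out[i, :2] += (w[:, None] * offsets).sum(axis=0) / w.sum()
    return out
```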

Dynamic Emotion Model in 3D Affect Space for a Mascot-Type Facial Robot (3차원 정서 공간에서 마스코트 형 얼굴 로봇에 적용 가능한 동적 감정 모델)

  • Park, Jeong-Woo; Lee, Hui-Sung; Jo, Su-Hun; Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.2 no.3 / pp.282-287 / 2007
  • Humanoid and android robots are emerging as the trend shifts from industrial robots to personal robots, so human-robot interaction will increase. The ultimate objective of humanoid and android robotics is a robot that is like a human, and in this respect implementing a robot's facial expressions is necessary for making a human-like robot. This paper proposes a dynamic emotion model in a 3D affect space that allows a mascot-type facial robot to display human-like and more recognizable expressions. A minimal sketch of such dynamics follows this entry.

  • PDF
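
The abstract does not define the dynamics, so the following sketch only illustrates the general idea of a "dynamic" emotion state in a 3D affect space: the state is pulled toward the current stimulus and relaxes back to neutral over time. The axis names, gains, and decay rate are assumptions, not the authors' parameters.

```python
import numpy as np

class DynamicEmotion:
    def __init__(self, attraction=2.0, decay=0.5):
        self.state = np.zeros(3)      # e.g. (arousal, valence, stance) in affect space
        self.attraction = attraction  # pull toward the current stimulus point
        self.decay = decay            # relaxation toward the neutral origin

    def step(self, stimulus: np.ndarray, dt: float = 0.05) -> np.ndarray:
        """First-order dynamics: move toward the stimulus, relax toward neutral."""
        self.state += dt * (self.attraction * (stimulus - self.state)
                            - self.decay * self.state)
        return self.state.copy()
```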