• Title/Summary/Keyword: Face Contour Extraction


Extraction of Face and Components Using Color, Contour, and Structural Information of Face (얼굴의 색상, 윤곽선, 구조적 정보를 이용한 얼굴 및 구성요소 추출)

  • 선영범;김진태
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2001.06a
    • /
    • pp.142-145
    • /
    • 2001
  • In this paper, facial components are segmented and extracted at high speed as part of face extraction. Three kinds of information are used for efficient segmentation and extraction. First, facial color information is used to locate the face within the background. Second, facial contour information is used to extract the facial components. Third, the structural information of the face is used to extract the remaining facial components, starting from the components already obtained with the color and contour information.
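The first step above, locating the face by color, is commonly done by thresholding the chrominance channels. A minimal sketch follows; the specific Cb/Cr ranges are a widely used heuristic, not values taken from this paper.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of likely skin pixels via fixed Cb/Cr thresholds
    in YCbCr space (illustrative ranges, not the paper's own)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Standard RGB -> YCbCr chrominance conversion (offset form).
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The resulting mask would then be cleaned up (e.g. with morphology) before taking the largest connected component as the face candidate.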

  • PDF

Tree-inspired Chair Modeling (나무 성장 시뮬레이션을 이용한 의자 모델링 기법)

  • Zhang, Qimeng;Byun, Hae Won
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.5
    • /
    • pp.29-38
    • /
    • 2017
  • We propose a method for tree-inspired chair modeling that can generate a tree-branch pattern in the skeleton of an arbitrary chair shape. Unlike existing methods that merge multiple-input models, the proposed method requires only one mesh as input, namely the contour mesh of the user's desired part, to model the chair with a branch pattern generated by tree-growth simulation. We propose a new method for the efficient extraction of the contour-mesh region in the tree-branch pattern. First, we extract the contour mesh based on the face area of the input mesh. We then use the front and back mesh information to generate a skeleton mesh that reconstructs the connection information. In addition, to obtain the tree-branch pattern matching the shape of the input model, we propose a three-way tree-growth simulation method that considers the tangent vector of the shape surface. The proposed method reveals a new type of furniture modeling by using an existing furniture model and simple parameter values to model tree branches shaped appropriately for the input model skeleton. Our experiments demonstrate the performance and effectiveness of the proposed method.
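The abstract's tree-growth simulation is not specified in detail; as a generic stand-in, branch patterns of this kind are often generated recursively, with each branch spawning rotated, shortened children. A minimal 2D sketch, with invented parameters:

```python
import math

def grow(x, y, angle, depth, length=1.0, spread=math.pi / 6):
    """Recursively generate 2D branch segments: each branch spawns two
    children rotated +/- `spread`, shortened by 0.7 per level. A generic
    illustration, not the paper's tangent-guided three-way growth."""
    if depth == 0:
        return []
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments = [((x, y), (x2, y2))]
    for da in (-spread, spread):
        segments += grow(x2, y2, angle + da, depth - 1, length * 0.7, spread)
    return segments
```

The paper's method additionally constrains growth by the tangent vectors of the input mesh surface so the pattern follows the chair's skeleton.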

Facial Contour Extraction in Moving Pictures by using DCM mask and Initial Curve Interpolation of Snakes (DCM 마스크와 스네이크의 초기곡선 보간에 의한 동영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.4 s.310
    • /
    • pp.58-66
    • /
    • 2006
  • In this paper, we apply a DCM (Dilation of Color and Motion information) mask and Active Contour Models (Snakes) to extract the facial outline in moving pictures with complex backgrounds. First, we propose the DCM mask, which is made by applying morphological dilation and an AND operation to combine facial color and motion information, and use this mask to detect the facial region apart from the complex background and to remove noise in the image energy. Also, initial curves are set automatically according to the rotational degree estimated from the geometric ratio of facial elements, to overcome the sensitivity of Active Contour Models to their initial curves. Both edge intensity and brightness are used as the image energy of the snakes so that contours can be extracted at parts with weak edges. For experiments, we acquired a total of 480 frames covering various head poses of sixteen persons with both eyes visible, by taking pictures indoors and by capturing broadcast images. As a result, a more elaborate facial contour was extracted at an average processing time of 0.28 seconds when using initial curves interpolated according to the facial rotation degree together with the combined image energy of edge intensity and brightness.
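The "dilation then AND" construction of the DCM mask can be sketched directly: dilate each cue so small misalignments between the color and motion masks still overlap, then intersect them. A minimal NumPy version, with the structuring element and iteration count chosen for illustration:

```python
import numpy as np

def dilate(mask, iterations=1):
    """3x3 binary dilation built from shifted ORs (no SciPy needed)."""
    m = mask.copy()
    for _ in range(iterations):
        p = np.pad(m, 1)  # pad with False so borders stay valid
        m = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:] |
             p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:] |
             p[1:-1, 1:-1])
    return m

def dcm_mask(color_mask, motion_mask, iterations=2):
    """Sketch of the DCM idea: dilate each cue, then AND them so only
    regions supported by both color and motion survive."""
    return dilate(color_mask, iterations) & dilate(motion_mask, iterations)
```

In the paper this mask both isolates the facial region from the background and suppresses noise in the snake's image energy.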

TAD driven whole dentition distalization with special considerations for incisal/gingival display and occlusal canting (전치부 및 치은의 노출량과 교합평면의 캔팅을 고려한 미니스크류를 이용한 전치열의 원심이동)

  • Paik, Cheol-Ho
    • The Journal of the Korean dental association
    • /
    • v.57 no.6
    • /
    • pp.333-343
    • /
    • 2019
  • Many orthodontists face difficulties in aligning incisors in an esthetically critical position, because the individual perception of beauty fluctuates with time and trend. Temporary anchorage device (TAD) can aid in attaining this critical incisor position, which determines an attractive smile, the amount of incisor display, and lip contour. Borderline cases can be treated without extraction and the capricious minds of patients can be satisfied with regard to the incisor position through whole dentition distalization using TAD. Mild to moderate bimaxillary protrusion cases can be treated with TAD-driven en masse retraction without premolar extraction. Patients with Angle's Class III malocclusion can be the biggest beneficiaries because both sufficient maxillary incisal display, through intrusion of mandibular incisors, and distalization of the mandibular dentition are successfully achieved. In addition, TAD can be used to correct various other malocclusions, such as canting of the occlusal plane and dental/alveolus asymmetry.

  • PDF

A Study on the Improvement of the Facial Image Recognition by Extraction of Tilted Angle (기울기 검출에 의한 얼굴영상의 인식의 개선에 관한 연구)

  • 이지범;이호준;고형화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.7
    • /
    • pp.935-943
    • /
    • 1993
  • In this paper, a robust recognition system for tilted facial images was developed. First, a standard facial image and a tilted facial image are captured by a CCTV camera and transformed into binary images. The binary image is processed with the Laplacian edge operator to obtain a contour image. We trace and delete the outermost edge line and use the inner contour lines. We label four inner contour lines in order, extract the left and right eyes using the known distance relationship, and calculate the slope from the coordinates of the two eyes. Finally, we rotate the tilted image according to the slope information and calculate ten distance features between facial elements. To make the system invariant to image scale, we normalize these features by the distance between the left and right eyes. Experimental results show an 88% recognition rate for twenty-five face images when the tilt angle is considered, and 60% when it is not.
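The two geometric steps in the abstract, estimating the slope from the eye coordinates and normalizing distance features by the inter-eye distance, can be sketched as follows (function names are ours, not the paper's):

```python
import math

def tilt_angle(left_eye, right_eye):
    """Angle in radians of the line through the two eye centers;
    rotating the image by the negative of this angle uprights the face."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)

def normalize_features(features, left_eye, right_eye):
    """Divide each distance feature by the inter-eye distance so the
    descriptor is invariant to image scale."""
    d = math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    return [f / d for f in features]
```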

  • PDF

Extraction of Facial Feature Parameters by Pixel Labeling (화소 라벨링에 의한 얼굴 특징 인수 추출)

  • 김승업;이우범;김욱현;강병욱
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.2
    • /
    • pp.47-54
    • /
    • 2001
  • The main purpose of this study is to propose an algorithm for extracting facial features. To achieve this goal, the study first produces a binary image from the input color image and calculates region areas after pixel labeling in variable block units. Second, the perimeter of each region is calculated by contour following, and degrees of resemblance for area, perimeter, circularity, and shape are computed from the area and perimeter values. Third, parameters describing the features of the eyes, nose, and mouth are extracted using these degrees of resemblance together with the general structure and characteristics (symmetrical distances) of the face, yielding the feature parameters of the frontal face. In this study, twelve facial feature parameters were extracted from 297 test images taken from 100 people, with an extraction rate of 92.93%.
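A standard way to turn the measured area and perimeter into the "degree of a circle" the abstract mentions is the compactness ratio; whether the paper uses exactly this formula is an assumption, but it is the common choice:

```python
import math

def circularity(area, perimeter):
    """Compactness 4*pi*A / P**2: equals 1.0 for a perfect circle and
    decreases for elongated regions -- a plausible 'degree of a circle'
    measure for scoring eye/nose/mouth candidate regions."""
    return 4.0 * math.pi * area / perimeter ** 2
```

For example, a circle of radius 2 (area 4π, perimeter 4π) scores exactly 1, while a unit square (area 1, perimeter 4) scores π/4.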

  • PDF

Facial Feature Extraction for Face Expression Recognition (얼굴 표정인식을 위한 얼굴요소 추출)

  • 이경희;고재필;변혜란;이일병;정찬섭
    • Science of Emotion and Sensibility
    • /
    • v.1 no.1
    • /
    • pp.33-40
    • /
    • 1998
  • This paper presents a method for extracting the face and its main components, the eyes and mouth, which is an essential step in face recognition. Face region extraction uses a kind of template matching based on a statistical model, without using motion or color information, under complex backgrounds. The statistical model consists of eigenfaces generated by the Hotelling transform of the input face images, which allow a complex face image to be represented by a few principal component values. To extract the face regardless of its size, the image brightness, and its position, the image is scanned with search windows of graduated sizes, an image enhancement technique is applied, and the face is extracted by projecting the image onto the eigenface space and reconstructing it. Facial components are extracted by edge detection that reflects the characteristics of each component and by analyzing projection histograms of the binarized image to find the bounding regions of the eyes and mouth. Previous work on contour extraction from face images has mainly used deformable templates for the eyes and mouth, which have geometric shapes, and snakes (Active Contour Models) for the eyebrows and facial outline, which vary more in shape. Unlike these studies, this paper experiments with extracting the contours of the eyes and mouth using snakes, with appropriate parameter selection and a defined energy function. Relatively good results are obtained for face region extraction under complex backgrounds and for extracting the eye and mouth regions and their contours within the extracted face region.
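The eigenface projection-and-reconstruction step (the Hotelling transform is PCA) can be sketched with an SVD; the function names and the SVD route are our illustration, not the paper's implementation:

```python
import numpy as np

def fit_eigenfaces(X, k):
    """X: rows are flattened face images. Returns the mean face and the
    top-k eigenfaces via SVD of the mean-centered data (Hotelling/PCA)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct(x, mean, eigenfaces):
    """Project a face onto eigenface space and reconstruct it; a small
    reconstruction error indicates a face-like search window."""
    coeffs = eigenfaces @ (x - mean)
    return mean + eigenfaces.T @ coeffs
```

Scanning each search window through this project-and-reconstruct step and thresholding the reconstruction error is the usual way such a model detects faces independently of position and scale.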

  • PDF

Recognition of Resident Registration Card using ART2-based RBF Network and face Verification (ART2 기반 RBF 네트워크와 얼굴 인증을 이용한 주민등록증 인식)

  • Kim Kwang-Baek;Kim Young-Ju
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.1-15
    • /
    • 2006
  • In Korea, a resident registration card carries various personal information such as the present address, the resident registration number, a face picture, and a fingerprint. The plastic-type resident card currently in use is easy to forge or alter, and forgery techniques grow more sophisticated over time, so whether a card is forged is difficult to judge by visual examination alone. This paper proposes an automatic recognition method for a resident card that recognizes the resident registration number using a newly proposed refined ART2-based RBF network and authenticates the face picture by a template image matching method. The proposed method first extracts the areas containing the resident registration number and the date of issue from the card image by applying Sobel masking, median filtering, and horizontal smearing operations in turn. To improve the extraction of individual codes from these areas, the original image is binarized using a high-frequency passing filter, and CDM masking is applied to the binarized image to enhance the image information of the individual codes. Finally, the individual codes, which are the targets of recognition, are extracted by applying a 4-directional contour tracking algorithm to the extracted areas of the binarized image. This paper also proposes a refined ART2-based RBF network to recognize the individual codes, which applies ART2 as the learning structure of the middle layer and dynamically adjusts the learning rate of the middle and output layers with a fuzzy control method to improve learning performance. In addition, for precise judgement of forgery, the proposed method supports face authentication using a face template database and a template image matching method.
For performance evaluation, this paper generated altered versions of an original resident card image, such as a forged face picture, added noise, variations of contrast, variations of intensity, and image blurring, and used these images along with the originals in experiments. The results showed that the proposed method is excellent at recognizing the individual codes and authenticating the face for automatic recognition of a resident card.
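Of the preprocessing steps above, horizontal smearing is the simplest to illustrate: short background runs between foreground pixels on each row are filled, so the characters of one text line merge into a single locatable blob. A minimal sketch (the gap threshold is an invented example value):

```python
import numpy as np

def horizontal_smear(binary, gap=3):
    """Fill background runs of length <= `gap` between foreground pixels
    on each row, merging nearby characters into one horizontal blob."""
    out = binary.copy()
    for row in out:                      # each row is a view into `out`
        cols = np.flatnonzero(row)       # foreground column indices
        for a, b in zip(cols[:-1], cols[1:]):
            if b - a <= gap:
                row[a:b] = True          # bridge the short gap
    return out
```

After smearing, the bounding boxes of the blobs give candidate regions for the registration number and the date of issue.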

  • PDF

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.562-567
    • /
    • 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human Robot Interaction) and HCI (Human Computer Interaction). Using facial expressions, a system can produce various reactions corresponding to the emotional state of the user, and service agents such as intelligent robots can infer suitable services to provide. In this article, we address the issue of expressive face modeling using an advanced Active Appearance Model for facial emotion recognition. We consider the six universal emotional categories defined by Ekman. In the human face, emotions are most widely expressed through the eyes and mouth. To recognize a person's emotion from a facial image, feature points such as Ekman's Action Units (AUs) must be extracted. The Active Appearance Model (AAM) is one of the commonly used methods for facial feature extraction, and it can be applied to construct AUs. Because the traditional AAM depends on the setting of the initial model parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. Finally, after several iterations, we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with the Bayesian network.
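The final inference step, mapping detected AUs to an emotion category with a Bayesian network, can be sketched in its simplest (naive-Bayes) form; the priors, likelihoods, and AU labels below are invented for illustration and are not the paper's learned network:

```python
def emotion_posterior(prior, likelihood, observed_aus):
    """Naive-Bayes sketch of AU -> emotion inference: combine a prior
    over emotion categories with per-emotion AU likelihoods for the
    observed (active) Action Units, then normalize."""
    scores = {}
    for emotion, p in prior.items():
        for au in observed_aus:
            # 0.01 is an arbitrary floor for AUs unseen for this emotion.
            p *= likelihood[emotion].get(au, 0.01)
        scores[emotion] = p
    total = sum(scores.values())
    return {emotion: s / total for emotion, s in scores.items()}
```

For example, observing AU12 (lip-corner puller), which is far more likely under "happy" than under "sad", shifts the posterior toward "happy"; a full Bayesian network would additionally model dependencies between AUs rather than treating them as independent.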