• Title/Summary/Keyword: Facial Pose


Automatic Face Extraction with Unification of Brightness Distribution in Candidate Region and Triangle Structure among Facial Features (후보영역의 밝기 분산과 얼굴특징의 삼각형 배치구조를 결합한 얼굴의 자동 검출)

  • 이칠우;최정주
    • Journal of Korea Multimedia Society
    • /
    • v.3 no.1
    • /
    • pp.23-33
    • /
    • 2000
  • In this paper, we describe an algorithm that can extract human faces in natural poses from complex backgrounds. The method builds on the observation that, within appropriately scaled blocks, the pixels of a facial region have nearly the same gray level. Based on this idea, we develop a hierarchical process: first, a block image with a pyramid structure is generated from the input image; next, candidate facial regions in the block image are quickly determined; finally, the detailed facial features (organs) are located. To find the features easily, we introduce a local gray-level transform that emphasizes small dark regions, and we evaluate the geometric triangle constraints among the facial features. The merit of our method is that it is largely free of the parameter-assignment problem, since the algorithm relies on simple brightness computations; consequently, robust systems that do not depend on specific parameter values can be constructed easily. (A rough illustrative sketch follows this entry.)

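The entry above searches a block pyramid for nearly uniform-brightness candidate regions and then checks a triangle arrangement of facial features. Below is a minimal NumPy sketch of that idea, not the authors' implementation; the block size, variance threshold, and triangle tolerances are illustrative assumptions.

```python
import numpy as np

def block_pyramid(gray, block=8):
    """Block-wise mean and standard deviation over non-overlapping block x block cells."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    cells = gray[:h, :w].reshape(h // block, block, w // block, block)
    return cells.mean(axis=(1, 3)), cells.std(axis=(1, 3))

def candidate_blocks(gray, block=8, max_std=18.0):
    """Candidate face blocks: cells whose brightness is nearly uniform (assumed threshold)."""
    _, std = block_pyramid(gray, block)
    return np.argwhere(std < max_std)          # (row, col) indices of candidate cells

def is_face_triangle(left_eye, right_eye, mouth, tol=0.35):
    """Rough geometric test in image coordinates (y grows downward):
    eyes roughly level, mouth below and centred between them."""
    le, re, m = map(np.asarray, (left_eye, right_eye, mouth))
    eye_dist = np.linalg.norm(re - le)
    if eye_dist == 0:
        return False
    level = abs(le[1] - re[1]) < tol * eye_dist          # eyes nearly horizontal
    below = m[1] - (le[1] + re[1]) / 2 > 0.5 * eye_dist  # mouth clearly below the eyes
    centered = abs(m[0] - (le[0] + re[0]) / 2) < tol * eye_dist
    return bool(level and below and centered)

if __name__ == "__main__":
    img = (np.random.rand(128, 128) * 255).astype(np.float32)
    print(candidate_blocks(img).shape)
    print(is_face_triangle((40, 50), (80, 52), (60, 95)))
```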

Facial Contour Extraction in PC Camera Images using Active Contour Models (동적 윤곽선 모델을 이용한 PC 카메라 영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2005.11a
    • /
    • pp.633-638
    • /
    • 2005
  • Face extraction is an important component of human interfaces, biometrics, and security. In this paper, we apply a DCM (Dilation of Color and Motion) filter and Active Contour Models to extract the facial outline. First, the DCM filter is built by applying morphological dilation to the combination of a skin-color image and a previously dilated frame-difference image. This filter is used to remove the complex background and to detect the facial outline. Because Active Contour Models are strongly affected by their initial curves, we estimate the rotation angle using the geometric ratios of the face, eyes, and mouth. We use edgeness and intensity as the image energy in order to extract the outline in regions with weak edges. We acquired head-pose images with both eyes visible from five people in an indoor space with a complex background. In experiments with a total of 125 images (25 per person), the average extraction rate of the facial outline was 98.1% and the average processing time was 0.2 seconds. (A rough sketch of the DCM filtering step follows this entry.)

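As a rough illustration of the DCM (Dilation of Color and Motion) idea described above, the sketch below combines a dilated skin-color mask with a dilated frame-difference mask using OpenCV. The HSV skin range, kernel size, and thresholds are assumptions, not values from the paper, and the Active Contour step is not reproduced.

```python
import cv2
import numpy as np

def dcm_mask(prev_bgr, curr_bgr, kernel_size=7):
    """Rough stand-in for a DCM-style filter: combine a skin-color mask
    with a dilated frame-difference (motion) mask and dilate the result."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)

    # Skin-color mask in HSV (a common heuristic range, assumed for illustration).
    hsv = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))

    # Motion mask from the absolute frame difference.
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    _, motion = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    motion = cv2.dilate(motion, kernel)

    # Combine and dilate once more so the mask covers the facial outline.
    return cv2.dilate(cv2.bitwise_and(skin, motion), kernel)

if __name__ == "__main__":
    prev = np.zeros((120, 160, 3), np.uint8)
    curr = prev.copy()
    curr[40:80, 60:100] = (80, 130, 200)   # a moving skin-toned patch
    mask = dcm_mask(prev, curr)
    print(mask.shape, int(mask.max()))
```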

3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models : Quantitative Analysis on Aligning Process (ASMs을 이용한 특징점 추출에 기반한 3D 얼굴데이터의 정렬 및 정규화 : 정렬 과정에 대한 정량적 분석)

  • Shin, Dong-Won;Park, Sang-Jun;Ko, Jae-Pil
    • Korean Journal of Computational Design and Engineering
    • /
    • v.13 no.6
    • /
    • pp.403-411
    • /
    • 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same holds for facial meshes in 3D face recognition. Most 3D face recognition methods mention 3D alignment but do not describe their approach in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. We present a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. The rotational transform on each axis is defined with respect to a reference position; in the aligning process, this transform brings input 3D faces with large pose variations to the reference frontal view. The facial part is then cropped from the aligned face using a sphere centered at the nose tip of the 3D face. The cropped face is shifted and placed in a frame of specified size for normalization. Subsequently, interpolation is carried out to sample the face at equal intervals and to fill holes, and color interpolation is performed at the same intervals. The outputs are normalized 2D and 3D faces that can be used for face recognition. Finally, we carry out two sets of experiments to measure alignment errors and evaluate the performance of the suggested process. (A rough sketch of the landmark-based alignment step follows this entry.)
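
The sketch below illustrates the landmark-driven alignment step described above: a rigid transform estimated from the eye and mouth positions (here via the Kabsch/SVD solution, used as a stand-in for the paper's per-axis rotations) is applied to the whole mesh, followed by a spherical crop around the nose tip. The reference landmark coordinates and sphere radius are assumptions.

```python
import numpy as np

def rigid_align(src_pts, ref_pts):
    """Best-fit rotation and translation mapping src_pts onto ref_pts
    (Kabsch/SVD solution), used here in place of explicit per-axis rotations."""
    src_c = src_pts - src_pts.mean(axis=0)
    ref_c = ref_pts - ref_pts.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ ref_c)
    d = np.sign(np.linalg.det(u @ vt))            # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    t = ref_pts.mean(axis=0) - src_pts.mean(axis=0) @ r
    return r, t

def align_face(vertices, landmarks, ref_landmarks):
    """Apply the landmark-derived transform to every mesh vertex."""
    r, t = rigid_align(landmarks, ref_landmarks)
    return vertices @ r + t

def crop_sphere(vertices, nose_tip, radius=90.0):
    """Keep vertices inside a sphere centred at the nose tip (radius is illustrative)."""
    return vertices[np.linalg.norm(vertices - nose_tip, axis=1) <= radius]

if __name__ == "__main__":
    # Hypothetical frontal reference: left eye, right eye, mouth centre (x, y, z).
    ref = np.array([[-30.0, 30.0, 0.0], [30.0, 30.0, 0.0], [0.0, -40.0, 10.0]])
    rng = np.random.default_rng(0)
    verts = rng.normal(scale=50.0, size=(1000, 3))
    lm = ref + rng.normal(scale=1.0, size=ref.shape)       # slightly perturbed pose
    aligned = align_face(verts, lm, ref)
    print(aligned.shape, crop_sphere(aligned, nose_tip=np.array([0.0, 0.0, 60.0])).shape)
```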

Design of RBFNNs Pattern Classifier Realized with the Aid of PSO and Multiple Point Signature for 3D Face Recognition (3차원 얼굴 인식을 위한 PSO와 다중 포인트 특징 추출을 이용한 RBFNNs 패턴분류기 설계)

  • Oh, Sung-Kwun;Oh, Seung-Hun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.797-803
    • /
    • 2014
  • In this paper, a 3D face recognition system is designed using polynomial-based RBFNNs. In 2D face recognition, recognition performance is degraded by external environmental factors such as illumination and facial pose; 3D face recognition is used to compensate for these shortcomings. In the preprocessing stage, the acquired 3D face shapes, which vary with the capture angle, are transformed into frontal shapes through pose compensation, and the depth data of the face shape are extracted using Multiple Point Signature, so that overall face depth information is obtained from two or more reference points. Using the extracted high-dimensional data directly degrades both learning speed and recognition performance, so we apply principal component analysis (PCA) to reduce the dimensionality. Parameter optimization is carried out with the aid of PSO for effective training and recognition. The proposed pattern classifier is evaluated on a dataset acquired in the IC & CI Lab. (A rough sketch of a PCA-plus-RBF classification pipeline follows this entry.)
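
Below is a loose sketch of the PCA-then-RBF classification stage outlined above. It substitutes a plain least-squares RBF classifier with sample-based centers for the paper's polynomial RBFNN trained with PSO; the component count, kernel width, and toy data are assumptions, and the pose-compensation and Multiple Point Signature steps are not reproduced.

```python
import numpy as np

def pca_fit(x, n_components=20):
    """PCA via SVD: return the mean and leading principal axes."""
    mean = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    return mean, vt[:n_components]

def rbf_features(x, centers, gamma):
    """Gaussian RBF activations of x with respect to the given centers."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def train_rbf_classifier(x, y, n_classes, centers, gamma=0.01):
    """Least-squares output weights on one-hot targets (a simple stand-in
    for the paper's polynomial RBFNN with PSO-tuned parameters)."""
    phi = rbf_features(x, centers, gamma)
    t = np.eye(n_classes)[y]
    w, *_ = np.linalg.lstsq(phi, t, rcond=None)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    depth = rng.normal(size=(60, 500))            # toy "depth signature" vectors
    labels = rng.integers(0, 3, size=60)
    mean, axes = pca_fit(depth, n_components=10)  # reduce 500-D features to 10-D
    z = (depth - mean) @ axes.T
    centers = z[::6]                              # every 6th sample as an RBF center
    w = train_rbf_classifier(z, labels, 3, centers)
    pred = rbf_features(z, centers, 0.01).dot(w).argmax(axis=1)
    print("training accuracy:", (pred == labels).mean())
```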

Analysis of Advertisement Types of Global Fashion Brands : A study focused on the trends of photo image components and styles of expression in global fashion advertisements. (글로벌 패션브랜드 광고의 유형 분석 - 패션광고 사진이미지 구성요소와 표현형식을 중심으로 -)

  • Chang, Gyeong-Hae
    • Journal of the Korea Fashion and Costume Design Association
    • /
    • v.19 no.4
    • /
    • pp.17-27
    • /
    • 2017
  • This study analyzes the trends of photo image components and forms of expression in global fashion advertising photos. First, photo image components are classified into seven categories: location (indoor-outdoor), the model's movement, pose, facial expression, gender, race and number of models. The forms of expression are classified into six categories: direct expression, sensual expression, symbolic expression, storytelling expression, dramatic expression, and sexual expression. With the aforementioned classifications, the trends were studied for three years from 2013 to 2015. The analysis result indicates the following: for the details of photo image components, the portion of indoor photos, static poses and conscious facial expressions was over 60% of the total for every season of the 3 years, while there was a slight increase in the number of models and the diversity of races. For the forms of expression, the sensual expression showed the largest portion accounting for over 50% of the total, followed by direct expression and storytelling expression. The findings from this study show that the trends of photo image components and forms of expression in global fashion advertisements are changing. Therefore, domestic companies will need to develop photo image components and forms of expression in line with the changing global fashion advertisement trends.


Facial Boundary Detection using an Active Contour Model (활성 윤곽선 모델을 이용한 얼굴 경계선 추출)

  • Chang Jae Sik;Kim Eun Yi;Kim Hang Joon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.1
    • /
    • pp.79-87
    • /
    • 2005
  • This paper presents an active contour model for extracting accurate facial regions in complex environments. In the model, a contour is represented as the zero level set of a level set function φ and evolved via level set partial differential equations. Unlike general active contours, skin-color information represented by a 2D Gaussian model is used for evolving and stopping the curve, which makes the proposed method robust to noise and varying pose. To assess its effectiveness, the method was tested on several natural scenes and the results were compared with those of geodesic active contours. Experimental results demonstrate the superior performance of the proposed method. (A rough sketch of the skin-color term follows this entry.)
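
The sketch below shows only the skin-color term described above: a 2D Gaussian fitted to chrominance samples (Cb, Cr here, as an assumption) whose per-pixel likelihood could serve to slow and stop an evolving curve. The level set PDE itself is not reproduced.

```python
import numpy as np

def fit_skin_gaussian(chroma_samples):
    """Fit a 2D Gaussian (mean, covariance) to skin chrominance samples,
    e.g. (Cb, Cr) pairs taken from labelled skin pixels."""
    mean = chroma_samples.mean(axis=0)
    cov = np.cov(chroma_samples, rowvar=False)
    return mean, cov

def skin_likelihood(chroma, mean, cov):
    """Per-pixel 2D Gaussian likelihood: high inside skin regions, low outside.
    In a level set formulation this map could act as the term that slows and
    stops the evolving curve near the facial boundary."""
    inv = np.linalg.inv(cov)
    diff = chroma - mean
    m2 = np.einsum("...i,ij,...j->...", diff, inv, diff)   # squared Mahalanobis distance
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * m2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    skin = rng.normal(loc=[110.0, 150.0], scale=[8.0, 6.0], size=(500, 2))  # toy Cb, Cr samples
    mean, cov = fit_skin_gaussian(skin)
    image_chroma = rng.uniform(0, 255, size=(64, 64, 2))
    p = skin_likelihood(image_chroma, mean, cov)
    print(p.shape, float(p.max()))
```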

Face Detection for Cast Searching in Video (비디오 등장인물 검색을 위한 얼굴검출)

  • Paik Seung-ho;Kim Jun-hwan;Yoo Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.10C
    • /
    • pp.983-991
    • /
    • 2005
  • Human faces appear frequently in video content such as dramas and provide useful information for video content analysis, so face detection plays an important role in applications such as face recognition and face image database management. In this paper, we propose a face detection algorithm that uses scene change detection as a preprocessing step for indexing and cast searching in video. The proposed algorithm consists of three stages: scene change detection, face region detection, and eye and mouth detection. Experimental results show that the proposed algorithm detects faces successfully over a wide range of variations in scale, rotation, pose, and position, and that performance on profile images is improved by 24% compared with conventional methods that use color components. (A rough sketch of the scene-change preprocessing step follows this entry.)
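
As a rough illustration of the scene-change preprocessing described above, the sketch below compares frame histograms to pick keyframes and then runs a stock OpenCV Haar cascade on them as a stand-in for the paper's color-based face, eye, and mouth detection. The histogram-correlation threshold is an assumption, and the code assumes the cascade file bundled with opencv-python is available.

```python
import cv2
import numpy as np

def is_scene_change(prev_gray, curr_gray, threshold=0.5):
    """Histogram-correlation test between consecutive frames; a low correlation
    is treated as a shot boundary (the threshold is an illustrative choice)."""
    h1 = cv2.calcHist([prev_gray], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([curr_gray], [0], None, [64], [0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) < threshold

def detect_faces(gray):
    """Stand-in face detector (OpenCV Haar cascade) applied only on keyframes."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def index_video(frames):
    """Run face detection only where a scene change is detected."""
    results, prev = [], None
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or is_scene_change(prev, gray):
            results.append((i, detect_faces(gray)))
        prev = gray
    return results

if __name__ == "__main__":
    frames = [np.full((120, 160, 3), v, np.uint8) for v in (10, 10, 200)]
    print([(i, len(f)) for i, f in index_video(frames)])
```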

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.332-339
    • /
    • 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges owing to focusing problems and motion blurring. Multiple frames captured under varying spatial or temporal settings provide additional information that can be used to improve classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. Candidate scores are selected during the score validation process, after the scores are normalized; the validation step removes bad scores that could degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, or majority voting. Degraded facial images are used to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion-blur point-spread functions are applied to the test images to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme. (A rough sketch of the score-fusion stage follows this entry.)
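
The sketch below illustrates the three-stage decision-level fusion described above: normalization, validation, and combination by the maximum, averaging, or majority-voting rule. The sum-to-one normalization and margin-based validation criterion are illustrative stand-ins for the paper's specific choices.

```python
import numpy as np

def normalize_scores(scores):
    """Normalise each frame's (non-negative) class scores to sum to one."""
    return scores / np.maximum(scores.sum(axis=1, keepdims=True), 1e-12)

def validate_scores(scores, min_margin=0.2):
    """Keep only frames whose top score clearly separates from the runner-up
    (an illustrative stand-in for the paper's score-validation rule)."""
    top2 = np.sort(scores, axis=1)[:, -2:]
    return scores[(top2[:, 1] - top2[:, 0]) >= min_margin]

def combine_scores(scores, rule="averaging"):
    """Combine the surviving frame scores with one of the three fusion rules."""
    if rule == "maximum":
        return int(scores.max(axis=0).argmax())
    if rule == "averaging":
        return int(scores.mean(axis=0).argmax())
    if rule == "majority":
        votes = scores.argmax(axis=1)
        return int(np.bincount(votes, minlength=scores.shape[1]).argmax())
    raise ValueError(rule)

if __name__ == "__main__":
    # Toy scores: 4 frames x 3 classes; the third frame is an ambiguous "bad" frame.
    raw = np.array([[2.0, 5.0, 1.0],
                    [1.5, 4.0, 1.0],
                    [3.0, 3.1, 3.0],
                    [0.5, 6.0, 2.0]])
    s = validate_scores(normalize_scores(raw))   # the ambiguous frame is dropped
    for rule in ("maximum", "averaging", "majority"):
        print(rule, combine_scores(s, rule))
```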

A Simple Eye Detection Algorithm for Embedded System (임베디드 시스템을 위한 눈 찾기 알고리즘)

  • Lee Yung-Jae;Kim Ik-Dong;Choi Mi-Soon;Shim Jae-Chang
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2004.11a
    • /
    • pp.883-886
    • /
    • 2004
  • Many facial feature extraction applications and systems have been developed in the field of face recognition, and most of them use the eyes as a key feature of the human face. In this paper we present a simple and fast eye detection algorithm for embedded systems. The eyes are important facial features because of the attributes they have: for example, the darkest regions in a face are usually the pupils, and the eyes always come as a roughly parallel pair. Using these attributes, our algorithm works well under various lighting conditions, face sizes, and poses such as panning and tilting. With these constraints on the eyes and anthropometric knowledge of the human face, we detect the eyes in an image, and the experimental results demonstrate successful eye detection. (A rough sketch of the dark-region pairing idea follows this entry.)

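The sketch below illustrates the dark-region pairing idea from the entry above: threshold the darkest band of a face image, take connected components, and keep a pair of blobs that is roughly horizontal and plausibly spaced. All thresholds and geometric ratios here are illustrative assumptions, not the paper's.

```python
import cv2
import numpy as np

def find_eye_pair(face_gray, dark_fraction=0.2, max_tilt=0.25):
    """Candidate eye centres: the darkest blobs that form a roughly horizontal
    pair at a plausible separation (all thresholds are illustrative)."""
    h, w = face_gray.shape
    lo, hi = float(face_gray.min()), float(face_gray.max())
    dark = (face_gray <= lo + dark_fraction * (hi - lo)).astype(np.uint8) * 255
    n, _, stats, centroids = cv2.connectedComponentsWithStats(dark)
    # Label 0 is the background; drop very small blobs as noise.
    blobs = [centroids[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= 5]
    best = None
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (x1, y1), (x2, y2) = blobs[i], blobs[j]
            dx, dy = abs(x1 - x2), abs(y1 - y2)
            if dx == 0:
                continue
            plausible = 0.2 * w < dx < 0.7 * w      # anthropometric eye separation
            level = dy / dx < max_tilt              # the pair is roughly horizontal
            upper = max(y1, y2) < 0.6 * h           # eyes sit in the upper face
            if plausible and level and upper and (best is None or dx > best[2]):
                best = (blobs[i], blobs[j], dx)
    return None if best is None else (tuple(best[0]), tuple(best[1]))

if __name__ == "__main__":
    face = np.full((100, 100), 180, np.uint8)
    cv2.circle(face, (30, 40), 4, 20, -1)   # synthetic left "pupil"
    cv2.circle(face, (70, 41), 4, 20, -1)   # synthetic right "pupil"
    print(find_eye_pair(face))
```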

Fast and Robust Face Detection based on CNN in Wild Environment (CNN 기반의 와일드 환경에 강인한 고속 얼굴 검출 방법)

  • Song, Junam;Kim, Hyung-Il;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.8
    • /
    • pp.1310-1319
    • /
    • 2016
  • Face detection is the first step in a wide range of face applications. However, detecting faces in the wild is still a challenging task due to the wide range of variations in pose, scale, and occlusion. Recently, many deep learning methods have been proposed for face detection, but further improvements are required in the wild. Another important issue in face detection is computational complexity: current state-of-the-art deep learning methods require a large number of patches to deal with varying scales and arbitrary image sizes, which increases the computational cost. To reduce the complexity while achieving better detection accuracy, we propose a fully convolutional network-based face detector that can take arbitrarily sized input and produce feature maps (heat maps) matching the input image size. To deal with various face scales, a multi-scale network architecture that utilizes the facial components when learning the feature maps is proposed, and a multi-task learning technique is designed on top of it to improve detection performance. Extensive experiments have been conducted on the FDDB dataset. The experimental results show that the proposed method outperforms state-of-the-art methods with an accuracy of 82.33% at 517 false alarms, while significantly improving computational efficiency. (A rough sketch of a fully convolutional heat-map detector follows this entry.)
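
As a rough illustration of the fully convolutional heat-map idea described above, the PyTorch sketch below defines a tiny network with no fully connected layers, so an arbitrarily sized image yields a proportionally sized face-probability map. The architecture, channel counts, and strides are assumptions and omit the paper's multi-scale and multi-task design.

```python
import torch
import torch.nn as nn

class HeatmapFaceNet(nn.Module):
    """Tiny fully convolutional detector: because there are no fully connected
    layers, any input size yields a proportionally sized face-probability heat map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # A 1x1 convolution plays the role of a sliding classifier head.
        self.score = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        return torch.sigmoid(self.score(self.features(x)))   # face-probability map

if __name__ == "__main__":
    net = HeatmapFaceNet().eval()
    with torch.no_grad():
        for h, w in [(240, 320), (480, 640)]:                 # arbitrary input sizes
            heat = net(torch.rand(1, 3, h, w))
            print((h, w), "->", tuple(heat.shape))            # heat map scales with input
```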