• Title/Summary/Keyword: Pose Recognition (포즈인식)

115 search results

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.117-124
    • /
    • 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features (both eyes, the nose, and the mouth) within the detected face area using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points across every frame in real time using optical flow, and determines the direction of the face from the tracked points. To avoid accepting false feature positions when coordinates are lost during optical-flow tracking, the method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation score of this re-examination, the process either re-detects the facial features or continues tracking them while determining the face direction. The template matching stores the locations of four facial features (the left eye, the right eye, the nose tip, and the mouth) during the detection phase, and re-evaluates this information by detecting new facial features from the input image whenever the similarity measure between the stored information and the features traced by optical flow exceeds a certain threshold. The proposed approach automatically alternates between the feature-detection and feature-tracking phases and estimates face pose stably in real time. Experiments show that the proposed method estimates face direction efficiently.
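The template-matching validation step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the normalized cross-correlation measure and the 0.7 threshold are assumptions chosen for the example.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def validate_tracked_features(stored_patches, current_patches, threshold=0.7):
    """Return True if every tracked feature patch still matches its stored
    template; False signals that feature detection should be re-run."""
    return all(ncc(s, c) >= threshold
               for s, c in zip(stored_patches, current_patches))
```

A caller would store one patch per feature (both eyes, nose tip, mouth) at detection time and run `validate_tracked_features` on the patches at the optical-flow-tracked positions each frame.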

Development of Learning Algorithm using Brain Modeling of Hippocampus for Face Recognition (얼굴인식을 위한 해마의 뇌모델링 학습 알고리즘 개발)

  • Oh, Sun-Moon;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.55-62
    • /
    • 2005
  • In this paper, we propose a face recognition system using HNMA (Hippocampal Neuron Modeling Algorithm), which models the cerebral cortex and hippocampal neurons on the working principles of the human brain; it can learn the feature vectors of face images very quickly and construct an optimized feature for each image. The system is composed of two parts: feature extraction, and learning and recognition. The feature-extraction part constructs well-separated features by applying PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) in order. In the learning part, the features of the input image data are mapped, following the order of the hippocampal neuron structure, to response patterns by statistical adjustment in the dentate gyrus region, and noise is removed through associative memory in the CA3 region. The CA1 region, receiving the information from CA3, forms the long-term memory learned by the neurons. Experiments measure the recognition rates under face changes, pose changes, and low-quality images. The experimental results allow the feature extraction and learning method proposed in this paper to be compared with other methods and confirm that the proposed method is superior to existing methods.
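The PCA-then-LDA feature-extraction stage can be sketched as below. This is a generic numpy illustration of the two steps applied in order; a two-class Fisher discriminant is assumed for simplicity and is not taken from the paper:

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T
    return Xc @ W, W

def fisher_lda_direction(X, y):
    """Fisher discriminant direction for a two-class problem."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter (sum of per-class scatter matrices)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)
```

Applying `pca` first reduces dimensionality so that the within-class scatter matrix used by the LDA step is well conditioned, which mirrors the "PCA and LDA in order" pipeline of the abstract.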

Local Prominent Directional Pattern for Gender Recognition of Facial Photographs and Sketches (Local Prominent Directional Pattern을 이용한 얼굴 사진과 스케치 영상 성별인식 방법)

  • Makhmudkhujaev, Farkhod;Chae, Oksam
    • Convergence Security Journal
    • /
    • v.19 no.2
    • /
    • pp.91-104
    • /
    • 2019
  • In this paper, we present a novel local descriptor, Local Prominent Directional Pattern (LPDP), to represent facial images for gender recognition. To achieve a clearly discriminative representation of local shape, the presented method encodes a target pixel with the prominent directional variations of its local structure, drawn from an analysis of the statistics captured in a histogram of such directional variations. The use of statistical information follows from the observation that a local neighborhood with an edge passing through it exhibits similar gradient directions; hence the prominent accumulations of those gradient directions provide a solid basis for representing the shape of that local structure. Unlike existing methods that use only the gradient direction of the target pixel itself, our coding scheme selects prominent edge directions accumulated from more samples (e.g., the surrounding neighboring pixels), which in turn minimizes the effect of noise by suppressing accumulations contributed by single or few samples. In this way, the presented encoding strategy captures the shape of local structures more discriminatively while remaining robust to subtle changes such as local noise. We conduct extensive experiments on gender recognition datasets covering a wide range of challenges, including illumination, expression, age, and pose variations as well as sketch images, and observe better performance from the LPDP descriptor than from existing local descriptors.
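The core LPDP idea, histogramming the dominant directions of the surrounding pixels and keeping the most prominent bins, can be sketched as below. The simple directional difference kernels here stand in for the paper's edge-response masks and are an assumption of this example, as is the choice of the top two bins:

```python
import numpy as np

def dominant_directions(gray):
    """Index (0..7) of the strongest of 8 directional responses per pixel,
    using plain directional differences as a stand-in for edge masks."""
    gray = gray.astype(float)
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1),
               (1, 0), (1, -1), (0, -1), (-1, -1)]
    h, w = gray.shape
    resp = np.zeros((8, h - 2, w - 2))
    centre = gray[1:-1, 1:-1]
    for d, (dy, dx) in enumerate(offsets):
        resp[d] = np.abs(gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - centre)
    return resp.argmax(axis=0)

def lpdp_code(dirs, y, x, radius=1):
    """Encode pixel (y, x): histogram the dominant directions over the
    surrounding window and set bits for the two most prominent bins."""
    win = dirs[y - radius:y + radius + 1, x - radius:x + radius + 1]
    hist = np.bincount(win.ravel(), minlength=8)
    top2 = np.argsort(hist)[-2:]
    return int((1 << top2[0]) | (1 << top2[1]))
```

Because the code is built from a neighborhood histogram rather than a single pixel's gradient, an isolated noisy pixel rarely changes which bins are the most prominent, which is the robustness argument of the abstract.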

Multi-classifier Decision-level Fusion for Face Recognition (다중 분류기의 판정단계 융합에 의한 얼굴인식)

  • Yeom, Seok-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.77-84
    • /
    • 2012
  • Face classification has wide applications in intelligent video surveillance, content retrieval, robot vision, and human-machine interfaces. Pose and expression changes and arbitrary illumination are typical problems for face recognition, and when the face is captured at a distance, image quality is often degraded by blurring and noise. This paper investigates the efficacy of multi-classifier decision-level fusion for face classification based on photon-counting linear discriminant analysis with two different cost functions: Euclidean distance and negative normalized correlation. Decision-level fusion comprises three stages: cost normalization, cost validation, and fusion rules. First, the costs are normalized into a uniform range; then candidate costs are selected during validation. Three fusion rules are employed: the minimum, average, and majority-voting rules. In the experiments, defocus and motion blur are rendered to simulate long-distance imaging conditions. The results show that the decision-level fusion scheme provides better results than a single classifier.
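The three fusion rules over normalized costs can be sketched directly. The min-max normalization below is an assumption standing in for the paper's normalization stage, and the validation stage is omitted for brevity:

```python
import numpy as np

def normalize_costs(costs):
    """Min-max normalize each classifier's row of costs into [0, 1]."""
    c = np.asarray(costs, dtype=float)
    span = c.max(axis=1, keepdims=True) - c.min(axis=1, keepdims=True)
    span[span == 0] = 1.0  # guard against a constant row
    return (c - c.min(axis=1, keepdims=True)) / span

def fuse(costs, rule="average"):
    """Fuse a (classifiers x classes) cost matrix; return the winning class."""
    c = normalize_costs(costs)
    if rule == "minimum":
        return int(c.min(axis=0).argmin())
    if rule == "average":
        return int(c.mean(axis=0).argmin())
    if rule == "majority":
        votes = c.argmin(axis=1)  # each classifier votes for its cheapest class
        return int(np.bincount(votes).argmax())
    raise ValueError(rule)
```

Each row of `costs` would hold one classifier's per-class costs (e.g., Euclidean distance or negative normalized correlation), and the three rules differ only in how the normalized rows are collapsed before the final argmin.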

A Method for Body Keypoint Localization based on Object Detection using the RGB-D information (RGB-D 정보를 이용한 객체 탐지 기반의 신체 키포인트 검출 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.85-92
    • /
    • 2017
  • Recently, in the field of video surveillance, deep-learning-based methods have been applied to detecting a moving person in video and analyzing the detected person's behavior. Human activity recognition, one of the fields of this intelligent image-analysis technology, detects the object and then localizes the body keypoints needed to recognize the detected object's behavior. In this paper, we propose a method for body keypoint localization based on object detection using RGB-D information. First, the moving object is segmented and detected from the background using the color and depth information generated by the two cameras. The input image, generated by rescaling the detected object region using the RGB-D information, is passed to Convolutional Pose Machines (CPM) for single-person pose estimation. CPM generates belief maps for 14 body parts per person, and the body keypoints are detected from these belief maps. This method provides an accurate object region for keypoint detection and can be extended from single-person to multi-person body keypoint localization by integrating the individual results. In the future, the detected keypoints can be used to build a model for human pose estimation and contribute to the field of human activity recognition.
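Keypoint detection from the CPM belief maps reduces to a per-part argmax. A minimal numpy sketch follows; the 14-part layout comes from the abstract, while the confidence threshold is an assumption of this example:

```python
import numpy as np

def keypoints_from_belief_maps(belief, min_confidence=0.1):
    """Extract one (y, x) keypoint per body part as the argmax of its
    belief map; parts whose peak falls below min_confidence yield None."""
    points = []
    for part in range(belief.shape[0]):
        bmap = belief[part]
        y, x = np.unravel_index(bmap.argmax(), bmap.shape)
        points.append((int(y), int(x)) if bmap[y, x] >= min_confidence else None)
    return points
```

Because the object region is cropped and rescaled before CPM runs, the returned coordinates would still need to be mapped back into the original frame using the detection box.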

A Study on the RFID Biometrics System Based on Hippocampal Learning Algorithm Using NMF and LDA Mixture Feature Extraction (NMF와 LDA 혼합 특징추출을 이용한 해마 학습기반 RFID 생체 인증 시스템에 관한 연구)

  • Oh, Sun-Moon;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.46-54
    • /
    • 2006
  • Recently, the importance of personal identification is increasing with the expansion of on-line commercial transactions and personal ID cards. Although personal ID cards with embedded RFID (Radio Frequency Identification) tags are gradually spreading, methods for verifying the cardholder's identity remain insufficient, so automatic methods are needed. Because an RFID tag has very small memory capacity, an effective feature-extraction method is needed to store personal biometric information, together with a new recognition method to compare the features. In this paper, we study a face verification system using a hippocampal neuron modeling algorithm, which models hippocampal neurons on the working principles of the human brain; it can learn the feature vectors of face images very quickly and construct an optimized feature for each image. The system is composed of two main parts: feature extraction using a mixture of the NMF (Non-negative Matrix Factorization) and LDA (Linear Discriminant Analysis) algorithms, and hippocampal neuron modeling and recognition. Simulation experiments measure the recognition rates under face changes, pose changes, and low-quality images. The experimental results allow the feature extraction and learning method proposed in this paper to be compared with other methods and confirm that the proposed method is superior to the existing methods.
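The NMF half of the feature-extraction mixture can be sketched with the standard Lee-Seung multiplicative updates, which keep both factors nonnegative. The rank, iteration count, and random initialization here are illustrative choices, not the paper's settings:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Factor a nonnegative matrix V (features x samples) into W @ H
    using Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        # ratios of nonnegative terms keep W and H nonnegative throughout
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The columns of `W` play the role of parts-based basis images, and the compact columns of `H` are the kind of low-dimensional codes that could fit in an RFID tag's limited memory.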

Development of the Hippocampal Learning Algorithm Using Associate Memory and Modulator of Neural Weight (연상기억과 뉴런 연결강도 모듈레이터를 이용한 해마 학습 알고리즘 개발)

  • Oh, Sun-Moon;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.37-45
    • /
    • 2006
  • In this paper, we propose MHLA (Modulatory Hippocampus Learning Algorithm), which models the working principles of the hippocampus. The hippocampus handles auto-associative memory and controls the strengthening of long-term and short-term memory. We organize an auto-associative memory as a three-stage system (DG, CA3, CA1) and improve the learning speed by adding a modulator to the long-term memory learning. In this hippocampal system, following the three stages in order, information is given statistical deviation in the dentate gyrus (DG) region and labeled as a response pattern. In the CA3 region, the pattern is reorganized by auto-associative memory. In the CA1 region, the connection weights used for long-term memory converge quickly, learned by a neural network to which the modulator is applied. To measure the performance of MHLA, PCA (Principal Component Analysis) is applied to face images classified by pose, expression, and picture quality; we then compute the feature vectors, learn them with MHLA, and measure the recognition rate. The experimental results allow the proposed method to be compared with other methods and confirm that the proposed method is superior to the existing methods.
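The CA3-style auto-associative reorganization can be illustrated with a classical Hopfield-type associative memory, used here as a generic stand-in for the paper's model rather than its actual algorithm:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian weight matrix storing +/-1 patterns (zero self-connections)."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=10):
    """Iterated thresholding pulls a corrupted probe back toward the
    nearest stored pattern (pattern completion / noise removal)."""
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s
```

Feeding a pattern with a few flipped bits through `recall` restores the stored pattern, which is the "noise removal through associative memory" role the abstract assigns to CA3.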

Collaborative Local Active Appearance Models for Illuminated Face Images (조명얼굴 영상을 위한 협력적 지역 능동표현 모델)

  • Yang, Jun-Young;Ko, Jae-Pil;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.816-824
    • /
    • 2009
  • In the face space, face images under illumination and pose variations have a nonlinear distribution. Active Appearance Models (AAM), being based on a linear model, are limited in handling this nonlinear distribution of face images. In this paper, we assume that a few clusters of face images are given; we build a local AAM for each cluster and select the appropriate model during the fitting phase. To solve the problem of updating the fitting parameters between models when the model changes, we propose building, in advance, the relationships among the clusters in parameter space from the training images. In addition, we suggest gradual model changing to reduce improper model selections caused by serious fitting failures. In our experiments, we apply the proposed model to the Yale Face Database B and compare it with the previous method. The proposed method produced successful fitting results even on strongly illuminated face images with deep shadows.

Design of Dual Mode Amplifying Block Using Frequency Doubler (주파수 체배기를 이용한 이중 모우드 증폭부 설계)

  • Kang, Sung-Min;Choi, Jae-Hong;Koo, Kyung-Heon
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.1 s.343
    • /
    • pp.127-132
    • /
    • 2006
  • This paper presents a dual-mode amplifier that operates as an amplifier or a frequency multiplier according to the input frequency, satisfying the 802.11a/b/g frequency bands of the wireless LAN standard. A conventional dual-band wireless LAN transmitter consists of separate power amplifiers operating in each frequency band, whereas the proposed dual-mode amplifier operates as an amplifier for the 802.11b/g signal and as a frequency multiplier for the 802.11a signal according to the bias condition. The amplifier mode shows a gain of 13 dB, a P1dB of 17 dBm, and second-harmonic suppression below -37 dBc, and the frequency-doubler mode shows a gain of 3.3 dB, an output power of 7.3 dBm, and third-harmonic suppression below -50 dBc.

Facial Contour Extraction in PC Camera Images using Active Contour Models (동적 윤곽선 모델을 이용한 PC 카메라 영상에서의 얼굴 윤곽선 추출)

  • Kim, Young-Won;Jun, Byung-Hwan
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2005.11a
    • /
    • pp.633-638
    • /
    • 2005
  • The extraction of a face is a very important part of human interfaces, biometrics, and security. In this paper, we apply a DCM (Dilation of Color and Motion) filter and Active Contour Models to extract the facial outline. First, the DCM filter is built by applying morphological dilation to the combination of the facial color image and the differential image, each dilated beforehand. This filter is used to remove complex backgrounds and to detect the facial outline. Because Active Contour Models are strongly affected by their initial curves, we compute the rotation angle using the geometric ratios of the face, eyes, and mouth. We use edgeness and intensity as the image energy in order to extract the outline in areas with weak edges. We acquired various head-pose images showing both eyes from five people in an indoor space with a complex background. In experiments with a total of 125 images, 25 per person, the average facial outline extraction rate is 98.1% and the average processing time is 0.2 s.
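The DCM filter's use of dilation over combined color and motion cues can be sketched as below. The 3x3 structuring element and the exact way the two masks are combined are assumptions of this example, not the paper's formulation:

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 structuring element via array shifts."""
    m = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(m, 1)
        m = np.zeros_like(m)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                m |= padded[1 + dy:padded.shape[0] - 1 + dy,
                            1 + dx:padded.shape[1] - 1 + dx]
    return m

def dcm_filter(skin_mask, motion_mask, iterations=2):
    """Combine the dilated color mask with the dilated motion (differential)
    mask, then dilate the intersection once more to close small gaps."""
    combined = dilate(skin_mask, iterations) & dilate(motion_mask, iterations)
    return dilate(combined, 1)
```

Intersecting the two dilated masks keeps only regions that are both skin-colored and moving, which is how the filter suppresses a complex static background before the contour model is fitted.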
