• Title/Summary/Keyword: Facial Feature Extraction

160 search results

Local Context based Feature Extraction for Efficient Face Detection (효율적인 얼굴 검출을 위한 지역적 켄텍스트 기반의 특징 추출)

  • Rhee, Phill-Kyu;Xu, Yong Zhe;Shin, Hak-Chul;Shen, Yan
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.1
    • /
    • pp.185-191
    • /
    • 2011
  • Recently, surveillance systems have been receiving considerable attention. Technologies that detect an object in an image and then determine whether the object is a person are in widespread use. Accordingly, this paper proposes a local context-based facial feature detection algorithm for this kind of object detection. Features are detected using a Gabor bunch, and a Bayesian detection method is applied to refine the located feature points. The entire system searches for the object area in the image and then applies context-based face detection and feature extraction methods to improve overall performance.
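The Gabor-bunch feature step described above can be illustrated with a minimal sketch: a bank of Gabor kernels over several orientations and wavelengths is correlated with the image at a candidate feature point, and the stacked responses form the feature vector (a "jet"). This is a generic illustration, not the paper's implementation; the kernel size and scales are arbitrary choices here.

```python
import math

def gabor_kernel(size, theta, wavelength, sigma):
    """Real part of a 2-D Gabor kernel (size x size), as nested lists."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            gauss = math.exp(-(x * x + y * y) / (2.0 * sigma ** 2))
            row.append(gauss * math.cos(2.0 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

def gabor_response(image, cx, cy, kernel):
    """Correlate one kernel with a grayscale image patch centred at (cx, cy)."""
    half = len(kernel) // 2
    acc = 0.0
    for ky in range(len(kernel)):
        for kx in range(len(kernel)):
            acc += kernel[ky][kx] * image[cy + ky - half][cx + kx - half]
    return acc

def gabor_jet(image, cx, cy, orientations=4, wavelengths=(4.0, 8.0)):
    """Stack responses over orientations and scales into one feature vector."""
    jet = []
    for k in range(orientations):
        theta = math.pi * k / orientations
        for lam in wavelengths:
            kern = gabor_kernel(9, theta, lam, sigma=0.5 * lam)
            jet.append(gabor_response(image, cx, cy, kern))
    return jet
```

A Bayesian refinement step, as in the paper, would then score each candidate point by comparing its jet against learned distributions for true feature locations.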

An Image Analysis System Design Using Arduino Sensors and a Feature Point Extraction Algorithm to Prevent Intrusion

  • LIM, Myung-Jae;JUNG, Dong-Kun;KWON, Young-Man
    • Korean Journal of Artificial Intelligence
    • /
    • v.9 no.2
    • /
    • pp.23-28
    • /
    • 2021
  • In this paper, we studied a system that efficiently provides security management for single-person households using an Arduino, an ESP32-CAM, and PIR sensors, and proposed an Android app with an Internet connection. The ESP32-CAM is an Arduino-compatible board with an ESP32-based processor that supports Wi-Fi, Bluetooth, and a camera. Its on-board PCB antenna may be used on its own, and sensitivity may be improved by connecting an external antenna. The system implements an Arduino-based unauthorized-intrusion alarm that can significantly help prevent crimes against single-person households by combining PIR sensors, Arduino devices, and smartphones; we show how the Arduino Uno, the ESP32-CAM, and the smartphone application are connected. With daily quarantine measures now in place around us and a need to verify the identity of visitors, applying this system to facial recognition and access restriction is expected to help maintain a safety net. Such technology is widely used to verify that the people in two images entered into the system are the same, or to determine whom a person in an image most resembles among those previously stored in an internal database. An advantage is that it may be implemented in a low-power, low-cost environment through image recognition, feature point extraction, and comparison.
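The PIR-trigger → camera-capture → smartphone-alert pipeline described above can be sketched as a small control loop. The callbacks (`read_pir`, `capture_image`, `send_alert`) are hypothetical placeholders for the Arduino/ESP32-CAM and app integration, not an API from the paper; a cooldown prevents a single intruder from flooding the phone with alerts.

```python
import time

class IntrusionMonitor:
    """Minimal sketch of a PIR -> camera -> notify pipeline (hypothetical API)."""

    def __init__(self, read_pir, capture_image, send_alert, cooldown=10.0):
        self.read_pir = read_pir            # returns True while motion is sensed
        self.capture_image = capture_image  # returns image bytes from the camera
        self.send_alert = send_alert        # pushes the image to the phone app
        self.cooldown = cooldown            # seconds between successive alerts
        self._last_alert = -cooldown

    def poll(self, now=None):
        """Check the PIR once; capture and alert if motion and not in cooldown."""
        now = time.monotonic() if now is None else now
        if self.read_pir() and now - self._last_alert >= self.cooldown:
            self.send_alert(self.capture_image())
            self._last_alert = now
            return True
        return False
```

On the real hardware the same structure would live in the Arduino loop, with `send_alert` posting the JPEG over Wi-Fi to the Android app.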

A Design on Face Recognition System Based on pRBFNNs by Obtaining Real Time Image (실시간 이미지 획득을 통한 pRBFNNs 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Seok, Jin-Wook;Kim, Ki-Sang;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.12
    • /
    • pp.1150-1158
    • /
    • 2010
  • In this study, Polynomial-based Radial Basis Function Neural Networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system consisting of a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented as a solution to this high-dimensional pattern recognition problem. First, in the preprocessing part, a CCD camera obtains picture frames in real time. Histogram equalization partially enhances images distorted by natural as well as artificial illumination. The AdaBoost algorithm proposed by Viola and Jones is exploited to separate the facial image area from the non-facial image area. The PCA method is then used as the feature extraction algorithm to reduce the dimensionality of the facial image area, which carries high-dimensional information. Second, the pRBFNNs identify each person's ID by recognizing his or her unique pattern. The proposed pRBFNNs architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, expressed as fuzzy rules in 'if-then' format. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as three kinds of polynomials: constant, linear, and quadratic. The coefficients of the connection weights are identified by back-propagation using the gradient descent method. The output of the pRBFNNs model is obtained by fuzzy inference in the inference part. The essential design parameters of the networks (including learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Particle Swarm Optimization. The proposed pRBFNNs are applied to a real-time face recognition system and evaluated in terms of output performance and recognition rate.
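The PCA dimension-reduction step used in the preprocessing part above can be sketched directly with an SVD: centre the data, take the leading right-singular vectors as principal directions, and project onto them. This shows only the PCA stage, not the pRBFNNs recognizer itself.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on X (n_samples x n_features); return (mean, components)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data; the rows of Vt are the principal directions,
    # ordered by decreasing explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project data onto the leading principal components."""
    return (X - mean) @ components.T
```

For face images, each row of `X` would be a flattened (vectorized) face region, and `n_components` is chosen far smaller than the pixel count before the reduced vectors are fed to the classifier.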

A Study on Automatic Detection of The Face and Facial Features for Face Recognition System in Real Time (실시간 얼굴인식 시스템을 위한 얼굴의 위치 및 각 부위 자동 검출에 관한 연구)

  • 구자일;홍준표
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.39 no.4
    • /
    • pp.379-388
    • /
    • 2002
  • In this paper, a real-time algorithm is proposed for automatic detection of the face and facial features. Within the face region, we extract the eyes, nose, mouth, and so forth. Two methods are used to extract them: one uses their location information, and the other uses Gaussian second-derivative filters. The system achieves high speed and accuracy because facial feature extraction is performed only on the detected face region rather than on the whole image. The experimental results for the proposed algorithm are favorable: a high face detection rate of 95%, a high speed of under 1 second, reduced sensitivity to illumination, and compensation for face tilt.
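The Gaussian second-derivative filter mentioned above responds strongly to dark band-like structures such as eyes and mouths. A minimal 1-D sketch (kernel size and sigma are arbitrary illustrative choices, not the paper's values):

```python
import math

def gaussian_second_derivative(size, sigma):
    """1-D second-derivative-of-Gaussian kernel; fires on dark/bright bands."""
    half = size // 2
    kernel = []
    for x in range(-half, half + 1):
        g = math.exp(-x * x / (2.0 * sigma ** 2))
        kernel.append((x * x / sigma ** 4 - 1.0 / sigma ** 2) * g)
    # Force zero mean so flat image regions give no response.
    mean = sum(kernel) / len(kernel)
    return [v - mean for v in kernel]

def filter_rows(image, kernel):
    """Correlate each image row with the kernel (valid region only)."""
    half = len(kernel) // 2
    out = []
    for row in image:
        out.append([sum(kernel[k] * row[x + k - half] for k in range(len(kernel)))
                    for x in range(half, len(row) - half)])
    return out
```

A dark facial feature crossing a row produces a strong positive response at its position, which can then be combined with the location priors described in the abstract.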

Eye Location Algorithm For Natural Video-Conferencing (화상 회의 인터페이스를 위한 눈 위치 검출)

  • Lee, Jae-Jun;Choi, Jung-Il;Lee, Phill-Kyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3211-3218
    • /
    • 1997
  • This paper addresses an eye location algorithm that is an essential part of a human face tracking system for natural video-conferencing. In current video-conferencing systems, the user's facial movements are restricted by a fixed camera, which is inconvenient for users. We propose an eye location algorithm for automatic face tracking: once the eyes are located, the locations of the other facial features can be estimated, and the scale of the face in the image can be calculated from the inter-ocular distance. Most previous feature extraction methods for face recognition assume that the approximate face region or the location of each facial feature is known. The algorithm proposed in this paper uses no prior information about the given image and is not sensitive to backgrounds or lighting conditions. It uses the valley representation as the major information source for locating the eyes. Experiments performed on 213 frames of 17 people show very encouraging results.
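A common way to build such a valley representation is a morphological closing (dilation then erosion) minus the original image, which highlights small dark pits such as eyes. A self-contained sketch (the window radius is an illustrative assumption, not the paper's parameter):

```python
def grey_dilate(img, r):
    """Max filter over a (2r+1) x (2r+1) window, clamped at the borders."""
    h, w = len(img), len(img[0])
    return [[max(img[yy][xx]
                 for yy in range(max(0, y - r), min(h, y + r + 1))
                 for xx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def grey_erode(img, r):
    """Min filter over a (2r+1) x (2r+1) window, clamped at the borders."""
    h, w = len(img), len(img[0])
    return [[min(img[yy][xx]
                 for yy in range(max(0, y - r), min(h, y + r + 1))
                 for xx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def valley_map(img, r=1):
    """Closing minus image: large where the image has small dark pits."""
    closed = grey_erode(grey_dilate(img, r), r)
    return [[closed[y][x] - img[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```

Peaks in the valley map then become eye candidates, which the paper verifies using the inter-ocular geometry.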


Head Pose Estimation with Accumulated Histogram and Random Forest (누적 히스토그램과 랜덤 포레스트를 이용한 머리방향 추정)

  • Mun, Sung Hee;Lee, Chil woo
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.38-43
    • /
    • 2016
  • As smart environments spread through our living spaces, the need for approaches related to Human-Computer Interaction (HCI) increases. One of them is head pose estimation, which is related to gaze direction estimation, since the head and eyes are closely linked by body structure. It is a key factor in identifying a person's intention or target of interest, and hence an essential research topic in HCI. In this paper, we propose an approach that estimates head pose over several pre-defined directions with a random forest classifier. To extract rotation information, we apply the Canny edge detector to the difference image between the input image and an averaged frontal facial image. From the resulting binary edge image we build two accumulated histograms, obtained by counting the number of non-zero pixels along each of the two axes. These two accumulated histograms serve as the feature of the facial image. We use the CAS-PEAL-R1 dataset for training and testing the random forest classifier and obtain 80.6% accuracy.
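The accumulated-histogram feature described above reduces a binary edge image to two 1-D profiles: non-zero counts per row and per column, concatenated into one vector. A minimal sketch of just that feature step (the edge detection and random forest stages are omitted):

```python
def accumulated_histograms(edge):
    """Concatenate per-row and per-column non-zero counts of a binary edge
    image into a single feature vector."""
    rows = [sum(1 for v in row if v != 0) for row in edge]
    cols = [sum(1 for row in edge if row[x] != 0) for x in range(len(edge[0]))]
    return rows + cols
```

As the head rotates, the edge mass in the difference image shifts, so these two profiles change shape, which is what the random forest classifier learns to discriminate.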

Face Detection System Based on Candidate Extraction through Segmentation of Skin Area and Partial Face Classifier (피부색 영역의 분할을 통한 후보 검출과 부분 얼굴 분류기에 기반을 둔 얼굴 검출 시스템)

  • Kim, Sung-Hoon;Lee, Hyon-Soo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.2
    • /
    • pp.11-20
    • /
    • 2010
  • In this paper we propose a face detection system that consists of a face candidate extraction method using skin color and a face verification method using facial structure features. First, the proposed candidate extraction method applies image segmentation and merging algorithms to skin-color regions and their neighboring regions; these two algorithms make it possible to select face candidates from the variety of faces in images with complicated backgrounds. Second, the proposed face validation method uses a partial face classifier to verify facial structure features and classify face versus non-face. This classifier uses only face images in the learning process and does not require non-face images, so fewer training images are needed. In the experiments, the proposed face candidate extraction method finds on average 9.55% more faces as candidates than other methods. In the face/non-face classification experiment, the proposed face validation method achieves a face classification rate on average 4.97% higher than other face/non-face classifiers at a non-face classification rate of about 99%.
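Skin-color candidate extraction of this kind is often done with fixed thresholds in the Cb/Cr chrominance plane. The sketch below uses commonly cited literature thresholds (77 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 173), which are an assumption, not the paper's exact rule:

```python
def is_skin(r, g, b):
    """Classify one RGB pixel as skin via fixed Cb/Cr thresholds
    (common literature values, not this paper's exact rule)."""
    # Standard RGB -> YCbCr chrominance conversion (JPEG convention).
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77.0 <= cb <= 127.0 and 133.0 <= cr <= 173.0

def skin_mask(image):
    """Binary mask over an RGB image given as rows of (r, g, b) tuples."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]
```

Connected regions of the resulting mask (after the segmentation and merging steps the abstract describes) become the face candidates handed to the partial face classifier.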

Development of Emotion Recognition System Using Facial Image (얼굴 영상을 이용한 감정 인식 시스템 개발)

  • Kim, M.H.;Joo, Y.H.;Park, J.B.;Lee, J.;Cho, Y.J.
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.191-196
    • /
    • 2005
  • Although emotion recognition technology is important and in demand in various fields, it still poses unsolved problems, and demand for emotion recognition based on facial images in particular is growing. A facial-image-based emotion recognition system is a complex system comprising various technologies: facial image analysis, feature vector extraction, pattern recognition techniques, and so on are all needed to develop it. In this paper, we propose a new emotion recognition system based on a previously studied facial image analysis technique. The proposed system recognizes emotion using a fuzzy classifier. A facial image database is built, and the performance of the proposed system is verified on it.
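A fuzzy classifier of the kind mentioned above assigns each class a membership function over the extracted features and picks the class whose membership fires strongest. A deliberately minimal 1-D sketch with triangular memberships (the labels and triangle parameters are invented for illustration, not taken from the paper):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_classify(feature, class_params):
    """Return the label whose membership function fires strongest.
    class_params maps label -> (a, b, c) triangle over a 1-D feature."""
    scores = {label: triangular(feature, *p)
              for label, p in class_params.items()}
    return max(scores, key=scores.get)
```

A real system would use multi-dimensional feature vectors from the facial image analysis stage and a rule base over several such memberships.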

Development of a Web-based Presentation Attitude Correction Program Centered on Analyzing Facial Features of Videos through Coordinate Calculation (좌표계산을 통해 동영상의 안면 특징점 분석을 중심으로 한 웹 기반 발표 태도 교정 프로그램 개발)

  • Kwon, Kihyeon;An, Suho;Park, Chan Jung
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.2
    • /
    • pp.10-21
    • /
    • 2022
  • There are few automated ways, other than observation by colleagues or professors, to improve formal presentation attitudes such as those used in job interviews or project presentations at a company. Previous studies report that a speaker's stable speech and gaze processing affect delivery during a presentation, and other studies show that proper feedback on one's presentation increases the presenter's ability to present. In this paper, considering these positive aspects of correction, we developed a program that intelligently corrects the poor presentation habits and attitudes of college students through facial analysis of videos, and we analyzed the proposed program's performance. The program was developed as a web-based tool that checks for filler words, performs facial recognition, and transcribes the presentation contents. To this end, an artificial intelligence classification model was developed; after extracting the video object, facial feature points are recognized based on their coordinates. Then, using 4,000 facial data samples, the performance of our algorithm was compared with facial recognition using a Teachable Machine. The program helps presenters by correcting their presentation attitude.
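One simple coordinate calculation over facial feature points, of the kind the title refers to, is estimating head tilt from the two eye landmarks and flagging frames where it exceeds a tolerance. The threshold and function names below are hypothetical illustrations, not the paper's actual rules:

```python
import math

def head_tilt_degrees(left_eye, right_eye):
    """Angle of the line through the two eye landmarks, in degrees.
    0 means the head is level (image coordinates, (x, y) tuples)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def is_posture_ok(left_eye, right_eye, threshold_deg=10.0):
    """Flag a frame when the tilt exceeds a (hypothetical) tolerance."""
    return abs(head_tilt_degrees(left_eye, right_eye)) <= threshold_deg
```

Run per frame over the landmark stream from the video, such checks accumulate into the attitude feedback the program gives the presenter.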

Preprocessing and Facial Feature Robust to Illumination Variations (조명변화에 강인한 전처리 및 얼굴특징)

  • Kim, Dong-Ju;Lee, Sang-Heon;Kim, Hyun-Duk
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.7
    • /
    • pp.503-506
    • /
    • 2013
  • In this paper, we propose a face recognition method that combines the ECSP preprocessing technique, a modified version of the earlier CS-LBP, with the illumination-robust D2D-PCA feature. The performance of the proposed method was evaluated against various binary pattern operators and feature extraction algorithms, such as the well-known PCA and 2D-PCA, on the Yale B database. As a result, the proposed method showed the best recognition accuracy among the compared approaches, confirming that it is robust to illumination variation.
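The base CS-LBP operator that ECSP modifies compares the four center-symmetric (opposite) neighbour pairs of a 3x3 window, yielding a 4-bit code per pixel. A sketch of the standard operator follows (this is plain CS-LBP, not the paper's ECSP variant):

```python
def cs_lbp(img, y, x, t=0.0):
    """Center-symmetric LBP code at (y, x): compare the 4 opposite
    neighbour pairs of the 3x3 window; returns a 4-bit code in [0, 15]."""
    # The 8 neighbours, clockwise from the top-left corner.
    n = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1], img[y][x + 1],
         img[y + 1][x + 1], img[y + 1][x], img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for i in range(4):
        # Set bit i when neighbour i exceeds its diametric opposite by > t.
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code
```

Because the code depends only on intensity differences across the window, not absolute intensities, it is largely invariant to monotonic illumination changes, which is why such operators suit this preprocessing role.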