• Title/Summary/Keyword: facial recognition technology

170 search results

Using a Multi-Faced Technique SPFACS Video Object Design Analysis of The AAM Algorithm Applies Smile Detection (다면기법 SPFACS 영상객체를 이용한 AAM 알고리즘 적용 미소검출 설계 분석)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.11 no.3
    • /
    • pp.99-112
    • /
    • 2015
  • Digital imaging technology has advanced beyond the limits of the multimedia industry through IT convergence and is developing into a complex industry; in the field of object recognition in particular, smart-phone face-recognition applications are being actively researched. Face recognition technology is evolving into intelligent object recognition through image recognition and detection techniques, and 3D image object recognition applied to IP cameras has been actively studied. In this paper, we first examine the essential human factors, technical factors, and trends in human object recognition, and then design and analyze a smile-detection method based on SPFACS (Smile Progress Facial Action Coding System) for multi-faceted object recognition. The study proceeds in three steps: 1) a 3D object imaging system is designed to analyze the cognitive skills required of a human observer; 2) face-detection parameters are identified and an optimal measurement method is proposed using the AAM algorithm; and 3) the result is applied to face recognition, where the effect of extracting feature points is demonstrated by detecting the person's teeth area for expression recognition.
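
The feature-point step in the abstract above can be illustrated with a toy smile score computed from mouth landmarks. This is a hedged sketch, not the paper's AAM pipeline: the landmark names and the width/opening ratio are illustrative assumptions.

```python
import numpy as np

def smile_score(landmarks):
    """Score a smile from four mouth landmarks.

    `landmarks` maps names (illustrative, not from the paper) to (x, y)
    points: the left/right mouth corners and the top/bottom of the
    inner lips. A wide mouth relative to its opening scores higher.
    """
    left = np.asarray(landmarks["mouth_left"], dtype=float)
    right = np.asarray(landmarks["mouth_right"], dtype=float)
    top = np.asarray(landmarks["lip_top"], dtype=float)
    bottom = np.asarray(landmarks["lip_bottom"], dtype=float)
    width = np.linalg.norm(right - left)    # mouth width
    opening = np.linalg.norm(bottom - top)  # inner-lip opening
    # Normalized to (0, 1): larger width relative to the opening
    # suggests a smile (stretched lips, exposed teeth region).
    return width / (width + opening + 1e-9)

neutral = {"mouth_left": (40, 60), "mouth_right": (60, 60),
           "lip_top": (50, 55), "lip_bottom": (50, 70)}
smiling = {"mouth_left": (30, 60), "mouth_right": (70, 60),
           "lip_top": (50, 57), "lip_bottom": (50, 66)}
assert smile_score(smiling) > smile_score(neutral)
```

In the paper's setting the landmarks would come from AAM fitting rather than being given by hand.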

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.754-771
    • /
    • 2021
  • In the task of continuous-dimension emotion recognition, the parts that highlight emotional expression are not the same in each mode, and the influences of different modes on the emotional state are also different. Therefore, this paper studies the fusion of the two most important modes in emotion recognition (voice and visual expression) and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio signal and the video signal, the first step is to use prior knowledge to extract audio features. Then, facial expression features are extracted by the improved AlexNet network. Finally, a multimodal attention mechanism is used to fuse the facial expression features and audio features, and an improved loss function is used to mitigate the modality-missing problem, improving the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the two dimensions of arousal and valence were 0.729 and 0.718, respectively, which are superior to several comparative algorithms.
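
The attention-weighted fusion step described above can be sketched as follows. This is a minimal stand-in, not the paper's network: the scoring vectors are random for illustration (in the real model they would be learned), and plain numpy replaces a deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_fuse(audio_feat, visual_feat):
    """Fuse audio and visual feature vectors with scalar attention.

    Each modality gets a score from an illustrative "learned"
    projection (random here); a softmax over the two scores weights
    the feature vectors before concatenation.
    """
    d = audio_feat.shape[0]
    w_a, w_v = rng.standard_normal(d), rng.standard_normal(d)
    scores = np.array([w_a @ audio_feat, w_v @ visual_feat])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax attention weights
    return np.concatenate([weights[0] * audio_feat,
                           weights[1] * visual_feat]), weights

audio = rng.standard_normal(8)
visual = rng.standard_normal(8)
fused, w = attention_fuse(audio, visual)
assert fused.shape == (16,)
```

The weights let the model lean on whichever modality is more informative for a given frame, which is also what makes the modality-missing case tractable.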

Face Recognition Under Ubiquitous Environments (유비쿼터스 환경을 이용한 얼굴인식)

  • Go, Hyoun-Joo;Kim, Hyung-Bae;Yang, Dong-Hwa;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.431-437
    • /
    • 2004
  • This paper proposes a facial recognition method based on ubiquitous computing, one of the next-generation intelligent technology fields. The facial images are acquired by a mobile device, namely a cellular-phone camera. We consider mobile security using facial feature extraction and a recognition process. Facial recognition is performed by the PCA and fuzzy LDA algorithms. Applying the discrete wavelet transform based on multi-resolution analysis, we compress the image data for the mobile system environment. The Euclidean metric is applied to measure the similarity among acquired features, from which the recognition rate is obtained. Finally, we use the mobile equipment to show the efficiency of the method. From various experiments, we find that our proposed method shows better results, even though the resolution of a mobile camera is lower than that of a conventional camera.
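
The PCA projection and Euclidean nearest-neighbor matching described above can be sketched on toy data. This covers only the PCA + Euclidean stage; the paper's fuzzy LDA and wavelet compression steps are omitted, and the gallery here is random data, not faces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "gallery": 5 identities, 4 flattened 8x8 images each.
gallery = rng.standard_normal((20, 64))
labels = np.repeat(np.arange(5), 4)

# PCA via SVD of the mean-centered data: the top right singular
# vectors are the principal axes ("eigenfaces").
mean = gallery.mean(axis=0)
centered = gallery - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                 # keep top 10 components
train_proj = centered @ eigenfaces.T

def identify(probe):
    """Nearest-neighbor match in eigenface space (Euclidean metric)."""
    proj = (probe - mean) @ eigenfaces.T
    dists = np.linalg.norm(train_proj - proj, axis=1)
    return labels[np.argmin(dists)]

# A probe close to a gallery image should recover its identity.
probe = gallery[6] + 0.05 * rng.standard_normal(64)
assert identify(probe) == labels[6]
```

Projecting before measuring distance is what makes the matching cheap enough for a low-resolution mobile setting.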

Automatic Face Identification System Using Adaptive Face Region Detection and Facial Feature Vector Classification

  • Kim, Jung-Hoon;Do, Kyeong-Hoon;Lee, Eung-Joo
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1252-1255
    • /
    • 2002
  • In this paper, a face recognition algorithm is proposed that uses skin-color information in the HSI color space collected from face images, an elliptical mask, facial features including the eyes, nose, and mouth, and geometrical feature vectors of the face and facial angles. The proposed algorithm improves face-region extraction by using HSI information, which is relatively similar to the human visual system, together with color-tone information about facial skin colors, the elliptical mask, and intensity information. Moreover, it improves face recognition by using feature information of the eyes, nose, and mouth, and Θ1 (ACRED), Θ2 (AMRED), and Θ3 (ANRED), which are geometrical face angles. The proposed algorithm enables exact face reading by using color-tone information, the elliptical mask, brightness information, and structural characteristic angles together, unlike existing algorithms that use only brightness information. Moreover, it uses structurally related characteristic values and certain vectors together for the recognition stage.
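
The HSI skin-color test at the core of the face-region step can be sketched per pixel. The RGB-to-HSI conversion uses the standard formulas, but the skin thresholds below are illustrative guesses, not the paper's tuned values.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert normalized RGB (floats in [0, 1]) to HSI.

    Returns hue in degrees [0, 360), saturation and intensity in [0, 1].
    """
    r, g, b = rgb
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta
    return h, s, i

def is_skin(rgb):
    """Rough skin test: reddish hue, moderate saturation, bright enough.

    Thresholds are illustrative, not from the paper.
    """
    h, s, i = rgb_to_hsi(rgb)
    return (h < 50 or h > 340) and 0.1 < s < 0.6 and i > 0.3

assert is_skin((0.8, 0.6, 0.5))      # a light skin tone
assert not is_skin((0.2, 0.4, 0.9))  # a blue background pixel
```

Masking non-skin pixels first, then applying the elliptical mask and intensity checks, is what narrows the search to candidate face regions.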


Facial Image Analysis Algorithm for Emotion Recognition (감정 인식을 위한 얼굴 영상 분석 알고리즘)

  • Joo, Y.H.;Jeong, K.H.;Kim, M.H.;Park, J.B.;Lee, J.;Cho, Y.J.
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.7
    • /
    • pp.801-806
    • /
    • 2004
  • Although emotion recognition is an important technology demanded in various fields, it remains an unsolved problem. In particular, algorithms based on human facial images need to be developed. In this paper, we propose a facial image analysis algorithm for emotion recognition. The proposed algorithm is composed of a facial image extraction algorithm and a facial component extraction algorithm. In order to achieve robust performance under various illumination conditions, a fuzzy color filter is proposed for the facial image extraction algorithm. In the facial component extraction algorithm, a virtual face model is used to provide information for high-accuracy analysis. Finally, simulations are presented to evaluate the performance.
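
The fuzzy color filter idea, graded rather than hard skin-color thresholds for illumination robustness, can be sketched with triangular membership functions. The breakpoints below are illustrative assumptions, not the paper's trained membership functions.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def skin_membership(hue, sat):
    """Fuzzy skin degree in [0, 1] from hue/saturation memberships.

    Breakpoints are illustrative guesses; min() is the fuzzy AND.
    """
    mu_h = triangular(hue, -10.0, 15.0, 50.0)  # reddish hues
    mu_s = triangular(sat, 0.05, 0.3, 0.7)     # moderate saturation
    return min(mu_h, mu_s)

assert skin_membership(15.0, 0.3) == 1.0   # prototypical skin color
assert skin_membership(200.0, 0.3) == 0.0  # blue: not skin
assert 0.0 < skin_membership(30.0, 0.2) < 1.0  # borderline pixel
```

Borderline pixels get an intermediate degree instead of a hard yes/no, which is what lets a downstream threshold adapt to lighting.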

Case Study of Short Animation with Facial Capture Technology Using Mobile

  • Jie, Gu;Hwang, Juwon;Choi, Chulyoung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.3
    • /
    • pp.56-63
    • /
    • 2020
  • The Avengers films produced by Marvel show visual effects that were impossible to produce in the past. Companies that produce film special effects were initially equipped with large staffs and equipment, but the technology is gradually becoming feasible for smaller companies that do not have high-priced equipment and a large workforce. This development of hardware and software is becoming increasingly available to the general public as well as to experts. As the game industry developed, high-performance computers quickly became widespread, bringing equipment and software that were previously difficult for individuals to purchase within reach. The development of the cloud has been the driving force behind falling software costs. As the augmented reality (AR) performance of mobile devices improves, advanced technologies such as motion tracking and face recognition no longer require expensive equipment. Under these circumstances, after applying mobile-based facial capture technology in animation projects, we identify the pros and cons and suggest solutions to the problems encountered.

Greedy Learning of Sparse Eigenfaces for Face Recognition and Tracking

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.3
    • /
    • pp.162-170
    • /
    • 2014
  • Appearance-based subspace models such as eigenfaces have been widely recognized as one of the most successful approaches to face recognition and tracking. The success of eigenfaces mainly has its origins in the benefits offered by principal component analysis (PCA): the representational power of the underlying generative process for high-dimensional noisy facial image data. The sparse extension of PCA (SPCA) has recently received significant attention in the research community. SPCA works by imposing sparseness constraints on the eigenvectors, a technique that has been shown to yield more robust solutions in many applications. However, when SPCA is applied to facial images, the time and space complexity of PCA learning becomes a critical issue (e.g., for real-time tracking). In this paper, we propose a very fast and scalable greedy forward selection algorithm for SPCA. Unlike a recent semidefinite-programming relaxation method that suffers from complex optimization, our approach can process several thousand data dimensions in reasonable time with little accuracy loss. The effectiveness of our proposed method was demonstrated on real-world face recognition and tracking datasets.
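
The greedy forward selection idea can be sketched as follows: grow the support of a sparse first principal component one coordinate at a time, each time adding the coordinate that most increases the top eigenvalue of the covariance submatrix. This is a simplified sketch of the generic greedy SPCA scheme, not the paper's exact algorithm.

```python
import numpy as np

def greedy_sparse_pc(cov, k):
    """Greedy forward selection of k variables for a sparse first PC.

    At each step, add the coordinate whose inclusion maximizes the
    top eigenvalue of the selected covariance submatrix. O(k * d)
    small eigenproblems instead of one semidefinite program.
    """
    selected = []
    remaining = list(range(cov.shape[0]))
    best_val = -np.inf
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in remaining:
            idx = selected + [j]
            sub = cov[np.ix_(idx, idx)]
            val = np.linalg.eigvalsh(sub)[-1]  # top eigenvalue
            if val > best_val:
                best, best_val = j, val
        selected.append(best)
        remaining.remove(best)
    return sorted(selected), best_val

# Covariance with correlated high-variance coordinates 0 and 2.
cov = np.array([[5.0, 0.0, 2.0, 0.0],
                [0.0, 0.1, 0.0, 0.0],
                [2.0, 0.0, 4.0, 0.0],
                [0.0, 0.0, 0.0, 0.2]])
support, var = greedy_sparse_pc(cov, 2)
assert support == [0, 2]
```

Each step only solves an eigenproblem of the current (small) support size, which is what makes the approach scale to thousands of pixel dimensions.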

Design of an IOT System based on Face Recognition Technology using ESP32-CAM

  • Mahmoud, Ines;Saidi, Imen;Bouzazi, Chadi
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.8
    • /
    • pp.1-6
    • /
    • 2022
  • In this paper, we present the realization of a facial recognition system using the ESP32-CAM board controlled by an Arduino board. The goal is to monitor a remote location in real time via a camera integrated into the ESP32 IoT board. The acquired images are recorded on a memory card and at the same time transmitted to a PC (a web server). The aim of this remote monitoring system is to combine security with the reception and transmission of information so that action can be taken accordingly. The simulation results of our proposed facial recognition application are efficient and satisfactory in real time.

Implementation of Multi Channel Network Platform based Augmented Reality Facial Emotion Sticker using Deep Learning (딥러닝을 이용한 증강현실 얼굴감정스티커 기반의 다중채널네트워크 플랫폼 구현)

  • Kim, Dae-Jin
    • Journal of Digital Contents Society
    • /
    • v.19 no.7
    • /
    • pp.1349-1355
    • /
    • 2018
  • Recently, a variety of content services over the internet have become popular, among which MCN (Multi Channel Network) platform services have spread with the generalization of smartphones. The MCN platform is based on streaming, and various factors are added to improve the service. Among them, augmented-reality sticker services using face recognition are widely used. In this paper, we implemented an MCN platform that overlays augmented-reality stickers on the face through facial emotion recognition in order to further increase user interest. We analyzed seven facial emotions using deep learning for facial emotion recognition, and applied the corresponding emotion sticker to the face. To implement the proposed MCN platform, emotion stickers were applied on the client side, and various servers capable of streaming were designed.
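
The compositing step, overlaying a sticker on the detected face region, can be sketched as per-pixel alpha blending. This is a minimal sketch of only the overlay; detection and emotion classification are assumed to have already produced the face position and the sticker choice.

```python
import numpy as np

def overlay_sticker(frame, sticker, alpha, top_left):
    """Alpha-blend an emotion sticker onto a face region of a frame.

    `frame` is HxWx3, `sticker` is hxwx3, `alpha` is hxw in [0, 1],
    and `top_left` is the (row, col) of the detected face box.
    """
    out = frame.astype(float).copy()
    r, c = top_left
    h, w = alpha.shape
    a = alpha[..., None]  # broadcast the mask over the 3 channels
    out[r:r+h, c:c+w] = a * sticker + (1 - a) * out[r:r+h, c:c+w]
    return out.astype(frame.dtype)

frame = np.zeros((10, 10, 3), dtype=np.uint8)
sticker = np.full((4, 4, 3), 200, dtype=np.uint8)
alpha = np.ones((4, 4))
result = overlay_sticker(frame, sticker, alpha, (2, 3))
assert result[3, 4].tolist() == [200, 200, 200]  # inside the sticker
assert result[0, 0].tolist() == [0, 0, 0]        # untouched background
```

A soft (non-binary) alpha mask around the sticker edges avoids hard seams in the streamed video.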

Korean Facial Expression Emotion Recognition based on Image Meta Information (이미지 메타 정보 기반 한국인 표정 감정 인식)

  • Hyeong Ju Moon;Myung Jin Lim;Eun Hee Kim;Ju Hyun Shin
    • Smart Media Journal
    • /
    • v.13 no.3
    • /
    • pp.9-17
    • /
    • 2024
  • Due to the recent pandemic and the development of ICT technology, the use of non-face-to-face and unmanned systems is expanding, and understanding emotions in non-face-to-face communication has become very important. Since emotion recognition methods for various facial expressions are required to understand emotions, artificial-intelligence-based research is being conducted to improve facial-expression emotion recognition on image data. However, existing research on facial-expression emotion recognition requires high computing power and long training times because it uses large amounts of data to improve accuracy. To address these limitations, this paper proposes a method of recognizing facial expressions from even a small amount of data by using age and gender, which are image meta information. For facial-expression emotion recognition, faces were detected in the original image data using the Yolo Face model, age and gender were classified with a VGG model based on the image meta information, and seven emotions were then recognized with an EfficientNet model. Comparing the meta-information-based data classification model with a model trained on all the data, the proposed model achieved higher accuracy.
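
The three-stage pipeline described above can be sketched as a routing skeleton: detect a face, classify its meta information, then dispatch to an emotion model for that demographic slice. The stubs below stand in for the Yolo Face / VGG / EfficientNet models; the routing-by-slice structure is an interpretation of the abstract, not the paper's exact code.

```python
def detect_face(image):
    """Stub face detector (Yolo Face in the paper): return a crop.

    Here it simply passes the input through.
    """
    return image

def classify_meta(face):
    """Stub meta classifier (VGG in the paper): (age_group, gender)."""
    return "adult", "female"

def recognize_emotion(face, meta, models):
    """Route the face to the emotion model for its demographic slice.

    `models` maps (age_group, gender) to a classifier (EfficientNet
    in the paper); keys and labels here are hypothetical.
    """
    model = models[meta]
    return model(face)

# One toy "model" per slice; each returns a fixed label here.
models = {("adult", "female"): lambda f: "happy",
          ("adult", "male"): lambda f: "neutral"}

face = detect_face("raw-image")
label = recognize_emotion(face, classify_meta(face), models)
assert label == "happy"
```

Splitting the training data by age and gender lets each per-slice model learn from far fewer images, which is the small-data advantage the abstract claims.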