Title/Summary/Keyword: Voice and Face Recognition


Multi-Emotion Recognition Model with Text and Speech Ensemble (텍스트와 음성의 앙상블을 통한 다중 감정인식 모델)

  • Yi, Moung Ho;Lim, Myoung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.11 no.8
    • /
    • pp.65-72
    • /
    • 2022
  • Due to COVID-19, counseling has shifted from face-to-face to non-face-to-face methods, and the importance of non-face-to-face counseling is increasing. Its advantage is that it can be conducted online anytime, anywhere, and is safe from COVID-19; however, because non-verbal expressions are hard to convey, it is difficult to understand the client's state of mind. It is therefore important to recognize emotions by accurately analyzing text and voice during non-face-to-face counseling. In this paper, text data is vectorized using FastText after separating consonants, and voice data is vectorized by extracting features using Log Mel Spectrogram and MFCC, respectively. We propose a multi-emotion recognition model that recognizes five emotions from the vectorized data using an LSTM model, with multi-emotion recognition evaluated by RMSE. In experiments, the proposed model achieved an RMSE of 0.2174, the lowest error compared with models using text or voice data alone.
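The abstract scores multi-emotion recognition by RMSE over predicted emotion intensities. A minimal sketch of that metric (the five emotion labels and the intensity values below are hypothetical, not taken from the paper):

```python
import math

# Hypothetical intensity targets and model predictions for five emotions.
EMOTIONS = ["happiness", "sadness", "anger", "fear", "neutral"]
target    = [0.1, 0.7, 0.1, 0.0, 0.1]
predicted = [0.2, 0.5, 0.2, 0.1, 0.0]

def rmse(y_true, y_pred):
    """Root-mean-square error between two equal-length intensity vectors."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

print(f"RMSE: {rmse(target, predicted):.4f}")  # → RMSE: 0.1265
```

A lower RMSE means the predicted intensity profile across all five emotions is closer to the annotated one, which is why a single fused text-plus-speech model can beat either modality alone.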

A Study on the Recognition of Face Based on CNN Algorithms (CNN 알고리즘을 기반한 얼굴인식에 관한 연구)

  • Son, Da-Yeon;Lee, Kwang-Keun
    • Korean Journal of Artificial Intelligence
    • /
    • v.5 no.2
    • /
    • pp.15-25
    • /
    • 2017
  • Recently, technologies are being developed to recognize and authenticate users using biometric information to solve information security issues. Biometric information includes the face, fingerprint, iris, voice, and veins; among these, face recognition technology occupies a large part and is applied in various fields. For example, it can be used for identity verification, such as personal identification cards, passports, credit cards, security systems, and personnel records, and for security applications including criminal suspect search, monitoring of unsafe zones, and vehicle tracking. In this thesis, we conducted a study to recognize faces by detecting face regions through a computer webcam, with the aim of improving the accuracy of face recognition based on CNN algorithms. We built a face recognition model using data files provided on GitHub and trained it with a CNN algorithm, which is widely used for image recognition, on a variety of photographs. The study found that the accuracy of face recognition based on the CNN algorithm was 77%. Based on these results, we also carried out face recognition at varying distances. The findings may be useful wherever face recognition is required, and follow-up research is expected to further improve the accuracy of face recognition.
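The abstract does not describe the CNN architecture used. The building block of any such model is the 2-D convolution, which can be sketched in NumPy as follows (the kernel and "image" are toy values, not the paper's data):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel applied to a toy 4x4 "image".
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1, 1],
                 [-1, 1]], dtype=float)
print(conv2d(img, edge))  # strongest response along the 0 -> 1 edge
```

Stacking many such learned kernels, interleaved with pooling and nonlinearities, gives the face-feature extractor that a CNN-based recognizer trains end to end.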

Speaker Detection and Recognition for a Welfare Robot

  • Sugisaka, Masanori;Fan, Xinjian
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.835-838
    • /
    • 2003
  • Computer vision and natural-language dialogue play an important role in friendly human-machine interfaces for service robots. In this paper we describe an integrated face detection and face recognition system for a welfare robot, combined with the robot's speech interface. Our approach to face detection combines a neural network (NN) and a genetic algorithm (GA): the NN serves as a face filter while the GA is used to search the image efficiently. When a face is detected, an embedded Hidden Markov Model (EHMM) is used to determine its identity. A real-time system has been created by combining the face detection and recognition techniques. When triggered by the speaker's voice commands, it takes an image from the camera, finds the face inside the image, and recognizes it. Experiments in an indoor environment with complex backgrounds showed that a recognition rate of more than 88% can be achieved.
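The NN-plus-GA idea is that the genetic algorithm proposes candidate window positions and the neural face filter scores them, so the whole image need not be scanned exhaustively. A minimal sketch of that search loop, with a placeholder scoring function standing in for the trained NN filter (the image size, window size, and "true" face location are all assumed toy values):

```python
import random

random.seed(0)
W, H, WIN = 64, 64, 16          # image size and sliding-window size (assumed)
TRUE_FACE = (30, 22)            # stand-in for the NN filter's peak response

def face_filter_score(x, y):
    """Placeholder for the neural-network face filter: higher when the
    candidate window is closer to the (hypothetical) true face location."""
    return -abs(x - TRUE_FACE[0]) - abs(y - TRUE_FACE[1])

def evolve(generations=40, pop_size=20):
    """GA over window positions: select the best half, then produce
    children by averaging two parents (crossover) and jittering (mutation)."""
    pop = [(random.randrange(W - WIN), random.randrange(H - WIN))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: face_filter_score(*p), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            (x1, y1), (x2, y2) = random.sample(elite, 2)
            x, y = (x1 + x2) // 2, (y1 + y2) // 2              # crossover
            x = min(max(x + random.randint(-2, 2), 0), W - WIN)  # mutation
            y = min(max(y + random.randint(-2, 2), 0), H - WIN)
            children.append((x, y))
        pop = elite + children
    return max(pop, key=lambda p: face_filter_score(*p))

print(evolve())  # converges near TRUE_FACE
```

The real system would replace `face_filter_score` with a forward pass of the trained face filter over the pixel window; the GA machinery around it is unchanged.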


Multi-Modal Biometrics System for Ubiquitous Sensor Network Environment (유비쿼터스 센서 네트워크 환경을 위한 다중 생체인식 시스템)

  • Noh, Jin-Soo;Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.4 s.316
    • /
    • pp.36-44
    • /
    • 2007
  • In this paper, we implement a speech and face recognition system to support various ubiquitous sensor network application services, such as switch control and authentication, over a wireless audio and image interface. The proposed system consists of hardware with audio and image sensors and software comprising a speech recognition algorithm using a psychoacoustic model and a face recognition algorithm using PCA (Principal Components Analysis) and LDPC (Low Density Parity Check). The speech and face recognition systems are hosted on a PC to use the sensors' energy effectively. To improve recognition accuracy, we implement an FEC (Forward Error Correction) system, and we optimize the simulation coefficients and test environment to effectively remove wireless channel noise and correct wireless channel errors. As a result, when the distance between the audio sensor and the voice source is less than 1.5 m, the FAR and FRR are 0.126% and 7.5%, respectively. With the face recognition algorithm limited to two attempts, the GAR and FAR are 98.5% and 0.036%.
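The abstract names PCA as the face-recognition algorithm. The classic eigenfaces recipe (mean-centre the training images, keep the top principal components, match probes in the projected space) can be sketched as follows, with random arrays standing in for real face images:

```python
import numpy as np

rng = np.random.default_rng(42)
faces = rng.normal(size=(20, 64))    # 20 flattened "face images" (toy data)

# 1. Centre the data and obtain principal components via SVD.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]                  # keep the top 5 components

# 2. Project every gallery face into the eigenface space.
gallery = centred @ eigenfaces.T

def identify(probe):
    """Return the index of the gallery face nearest to the probe
    in the low-dimensional eigenface space."""
    coords = (probe - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(gallery - coords, axis=1)))

# A slightly noisy copy of face 7 should still match face 7.
probe = faces[7] + rng.normal(scale=0.05, size=64)
print(identify(probe))  # matches gallery face 7 despite the noise
```

In the paper's pipeline, the FAR/GAR trade-off would come from thresholding that nearest-neighbour distance rather than always accepting the closest match.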

Implementation of Human and Computer Interface for Detecting Human Emotion Using Neural Network (인간의 감정 인식을 위한 신경회로망 기반의 휴먼과 컴퓨터 인터페이스 구현)

  • Cho, Ki-Ho;Choi, Ho-Jin;Jung, Seul
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.9
    • /
    • pp.825-831
    • /
    • 2007
  • In this paper, an interface between a human and a computer is presented. The human-computer interface (HCI) serves as another area of human-machine interfaces. The methods used for the HCI are voice recognition and image recognition for detecting the human operator's emotional state. The idea is that the computer recognizes the operator's present emotional state and amuses him or her in various ways, such as playing music, searching the web, and talking. For the image recognition process, the human face is captured, and the eyes and mouth are selected from the facial image for recognition. To train images of the mouth, we use the Hopfield net. The results show 88% to 92% recognition of emotion from images; for vocal recognition, the neural network achieves 80% to 98% recognition.
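The mouth-image classifier here is a Hopfield net, an associative memory that stores reference patterns and recalls the nearest one from a noisy input. A minimal sketch with toy 8-pixel ±1 patterns standing in for the trained mouth images:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Synchronous updates until the state settles on a stored pattern."""
    for _ in range(steps):
        new = np.where(w @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two toy 8-pixel "mouth shapes" stored as +/-1 patterns.
smile = np.array([ 1,  1, -1, -1, -1, -1,  1,  1])
frown = np.array([-1,  1,  1, -1,  1,  1,  1, -1])
w = train_hopfield(np.stack([smile, frown]))

noisy = smile.copy()
noisy[0] = -1                       # flip one pixel
print(recall(w, noisy))             # recovers the stored smile pattern
```

Recognition then amounts to checking which stored mouth pattern the net settles on, which is what makes the approach tolerant of small capture errors.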

Voice Coding Using Only the Features of the Face Image

  • Cho, Youn-Soo;Jang, Jong-Whan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.3E
    • /
    • pp.26-29
    • /
    • 1999
  • In this paper, we propose a new voice coding method that uses only features of the face image, such as mouth height (H), width (W), ratio (R = W/H), area (S), and an ellipse feature (P). It provides high security and is not affected by acoustic noise, because only facial image features are used to represent speech. In our tests, the mean recognition rate for the vowels ranges approximately between 70% and 96%.
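The geometric features named in the abstract (H, W, R = W/H, S) can be read directly off a segmented binary mouth region. A minimal sketch, using a hand-made mask in place of a real segmentation (the ellipse feature P is omitted, as the abstract does not define it):

```python
import numpy as np

def mouth_features(mask):
    """Geometric features of a binary mouth region: height H, width W,
    aspect ratio R = W/H, and area S (pixel count)."""
    rows, cols = np.nonzero(mask)
    H = rows.max() - rows.min() + 1
    W = cols.max() - cols.min() + 1
    S = mask.sum()
    return {"H": int(H), "W": int(W), "R": W / H, "S": int(S)}

# Toy 5x7 mask standing in for a segmented mouth region.
mask = np.zeros((5, 7), dtype=int)
mask[1:4, 1:6] = 1
print(mouth_features(mask))  # H=3, W=5, R=5/3, S=15
```

A vowel classifier then only needs these few scalars per frame, which is what makes the coding immune to acoustic noise.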


2D Face Image Recognition and Authentication Based on Data Fusion (데이터 퓨전을 이용한 얼굴영상 인식 및 인증에 관한 연구)

  • 박성원;권지웅;최진영
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.4
    • /
    • pp.302-306
    • /
    • 2001
  • Because face images have many variations (expression, illumination, orientation of the face, etc.), no method with a high recognition rate has become established. To address this difficulty, data fusion, which combines various sources of information, has been studied. Previous research on data fusion, however, fused additional biometric information (fingerprint, voice, etc.) with the face image. In this paper, cooperative results from several face image recognition modules are fused without using additional biometric information. To fuse the results from the individual modules, we use a re-defined mass function based on Dempster-Shafer fusion theory. Experimental results from fusing several face recognition modules are presented to show that the proposed fusion model performs better than a single face recognition module, without using additional biometric information.
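The paper re-defines its mass functions, but the underlying combination step is Dempster's rule: multiply the masses of every pair of focal elements, keep the intersections, and renormalize away the conflicting mass. A sketch of the standard rule with two hypothetical recognition modules assigning mass over candidate identities (the identities and mass values are illustrative only):

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflicting mass K is normalised away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    scale = 1.0 - conflict
    return {k: v / scale for k, v in combined.items()}

# Two hypothetical face-recognition modules assigning mass to identities.
A, B = frozenset({"alice"}), frozenset({"bob"})
AB = A | B                                  # ignorance: either identity
m1 = {A: 0.6, AB: 0.4}
m2 = {A: 0.5, B: 0.2, AB: 0.3}
print(combine(m1, m2))  # mass on "alice" rises above either module's alone
```

Because each module can place mass on the ignorance set AB, an unsure module dilutes rather than vetoes the other, which is the appeal of this fusion scheme over simple score averaging.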


A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 김동수;남기환;한준희;배철수;나상동
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 1998.11a
    • /
    • pp.181-185
    • /
    • 1998
  • Recently, communication systems have been developed that use both voice data and face images during speech, to provide a higher recognition rate than voice data alone. We therefore present a method of lipreading in speech image sequences using a 3-D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the speaking face image sequence. To obtain feature information, we compute the variance quantities from the fitted 3-D shape model over the image sequence and use them as recognition parameters. We use intensity-gradient values obtained from the variance of the 3-D feature points to separate recognition units in the sequential images. We then apply a discrete HMM algorithm in the recognition process, based on multiple observation sequences that fully account for the variance of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.
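In a discrete-HMM recognizer like the one described, each unit (vowel or consonant) gets its own model, and a symbol sequence is classified by which model assigns it the highest likelihood, computed with the forward algorithm. A minimal sketch with two toy 2-state unit models (all probabilities below are assumed values, not from the paper):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | discrete HMM).
    pi: initial state probs, A: state transitions, B: emission probs."""
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_like += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_like

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
# A "vowel" model that mostly emits symbol 0, a "plosive" model symbol 1.
B_vowel   = np.array([[0.8, 0.2],
                      [0.6, 0.4]])
B_plosive = np.array([[0.2, 0.8],
                      [0.4, 0.6]])

obs = [0, 0, 0, 1, 0]   # toy mouth-shape symbol sequence
scores = {"vowel": forward_log_likelihood(obs, pi, A, B_vowel),
          "plosive": forward_log_likelihood(obs, pi, A, B_plosive)}
print(max(scores, key=scores.get))  # the vowel model scores higher
```

The paper's observations would be quantized 3-D lip-feature variances rather than raw symbols, but the per-unit likelihood comparison is the same.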


A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 남기환;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.5
    • /
    • pp.783-788
    • /
    • 2002
  • Recently, communication systems have been developed that use both voice data and face images during speech, to provide a higher recognition rate than voice data alone. We therefore present a method of lipreading in speech image sequences using a 3-D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the speaking face image sequence. To obtain feature information, we compute the variance quantities from the fitted 3-D shape model over the image sequence and use them as recognition parameters. We use intensity-gradient values obtained from the variance of the 3-D feature points to separate recognition units in the sequential images. We then apply a discrete HMM algorithm in the recognition process, based on multiple observation sequences that fully account for the variance of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.

Design of a Visitor Identification System for the Front Door of an Apartment Using Deep Learning (딥러닝 기반 이용한 공동주택현관문의 출입자 식별 시스템 설계)

  • Lee, Min-Hye;Mun, Hyung-Jin
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.4
    • /
    • pp.45-51
    • /
    • 2022
  • Fear of physical contact persists due to measures to prevent the spread of infectious diseases such as COVID-19. When using the common entrance door of an apartment building, access is possible only if a resident enters a password or grants permission, which brings the inconvenience of manually entering numbers and passwords at the door; COVID-19 also calls for contactless entry. With the development of ICT, users can be identified easily through face recognition and voice recognition technology. The proposed method detects a visitor's face through a CCTV or camera attached to the common entrance door, recognizes the face, and identifies the person as a registered resident. Based on the resident's registered information, the system can then operate contactlessly by interworking with the elevator through the server. In particular, if face recognition fails because of a hat or mask, the visitor is identified by voice, or additional authentication of the visitor is performed based on a voice message. This enables contactless entry to the front door of an apartment building, blocking the spread of infection without leaving fingerprint information and without the inconvenience of manual access.
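The access-control logic the abstract describes is a face-first decision with a voice fallback. A minimal sketch of that flow (the recognizer outputs and resident identifiers below are hypothetical stand-ins for the deep-learning modules):

```python
def authenticate(face_id, voice_id, residents):
    """Access decision sketched from the abstract: try face recognition
    first; if it fails (e.g. the visitor wears a hat or mask), fall back
    to voice. face_id / voice_id are recognizer outputs, or None on failure."""
    if face_id in residents:
        return ("open", face_id)
    if voice_id in residents:
        return ("open", voice_id)
    return ("deny", None)

residents = {"resident_101", "resident_202"}
print(authenticate("resident_101", None, residents))  # → ('open', 'resident_101')
print(authenticate(None, "resident_202", residents))  # masked face -> voice fallback
print(authenticate(None, None, residents))            # → ('deny', None)
```

In the deployed system the "open" branch would additionally notify the server to call the elevator for the identified resident, keeping the whole sequence contactless.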