• Title/Summary/Keyword: Model recognition

Search Result 3,389, Processing Time 0.032 seconds

Low Resolution Rate Face Recognition Based on Multi-scale CNN

  • Wang, Ji-Yuan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1467-1472 / 2018
  • To address the problem that faces in surveillance video cannot be accurately identified because of their low resolution, this paper proposes a low-resolution face recognition method based on a convolutional neural network (CNN) model with multi-scale input. The multi-scale-input CNN model improves on the existing "two-step method," in which low-resolution images are first up-sampled with simple bicubic interpolation; the up-sampled images are then mixed with high-resolution images as training samples. The CNN model learns a common feature space for high- and low-resolution images, measures feature similarity by cosine distance, and outputs the recognition result. Experiments on the CMU PIE and Extended Yale B datasets show that the model's accuracy is better than that of the comparison methods; compared with CMDA_BGE, the comparison algorithm with the highest recognition rate, accuracy improves by 2.5%~9.9%.
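
The matching step described above, embedding both images in a shared feature space and comparing them by cosine distance, can be sketched as follows. This is a minimal NumPy illustration; the random 128-D vectors are stand-ins for the trained CNN's embeddings, which the abstract does not specify in detail.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings from the shared CNN feature space.
rng = np.random.default_rng(0)
gallery_feat = rng.normal(size=128)                      # high-resolution gallery face
probe_feat = gallery_feat + 0.1 * rng.normal(size=128)   # up-sampled low-resolution probe

score = cosine_similarity(gallery_feat, probe_feat)
# A probe is matched to the gallery identity with the highest score
# (equivalently, the smallest cosine distance 1 - score).
```
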

Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.18 no.1 / pp.97-101 / 2019
  • Understanding and classifying human emotion is an important task in human-machine communication systems. This paper proposes an emotion recognition method that extracts facial keypoints with an Active Appearance Model and classifies the emotion using a proposed classification model of the facial features. The appearance model represents expression variations, which the proposed classification model evaluates according to changes in the facial expression. The method classifies four basic emotions (neutral, happy, sad, and angry). To evaluate performance, we measure the success ratio on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs emotion recognition effectively compared with existing schemes.
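
The abstract does not give the details of the classification model, but the general idea of classifying an expression from keypoint changes can be sketched with a hypothetical nearest-centroid rule over keypoint-displacement vectors:

```python
import numpy as np

# Hypothetical setup: each face is reduced to a vector of keypoint
# displacements from the neutral expression, and each emotion is
# represented by the centroid of its training vectors.
EMOTIONS = ["neutral", "happy", "sad", "angry"]

def classify(displacement, centroids):
    """Return the emotion whose centroid is nearest to the displacement vector."""
    dists = [np.linalg.norm(displacement - c) for c in centroids]
    return EMOTIONS[int(np.argmin(dists))]

rng = np.random.default_rng(1)
centroids = [rng.normal(loc=i, scale=0.1, size=10) for i in range(4)]
probe = centroids[1] + 0.05 * rng.normal(size=10)   # a probe near the "happy" centroid
label = classify(probe, centroids)
```
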

Error Correction for Korean Speech Recognition using a LSTM-based Sequence-to-Sequence Model

  • Jin, Hye-won;Lee, A-Hyeon;Chae, Ye-Jin;Park, Su-Hyun;Kang, Yu-Jin;Lee, Soowon
    • Journal of the Korea Society of Computer and Information / v.26 no.10 / pp.1-7 / 2021
  • Most recent research on correcting speech recognition errors is based on English, so research on Korean speech recognition remains insufficient. Compared with English, however, Korean speech recognition produces many errors due to linguistic characteristics of Korean such as fortis (tensification) and liaison, so research targeting Korean is needed. Furthermore, earlier works focused primarily on edit-distance algorithms and syllable restoration rules, which makes it difficult to correct the fortis and liaison error types. In this paper, we propose a context-sensitive post-processing model for speech recognition that uses an LSTM-based sequence-to-sequence model with the Bahdanau attention mechanism to correct Korean speech recognition errors caused by pronunciation. Experiments show that the model improves recognition performance from 64% to 77% for fortis, from 74% to 90% for liaison, and from 69% to 84% on average. Based on these results, the proposed model appears applicable to real-world applications based on speech recognition.

Speech Recognition using MSHMM based on Fuzzy Concept

  • Ann, Tae-Ock
    • The Journal of the Acoustical Society of Korea / v.16 no.2E / pp.55-61 / 1997
  • This paper proposes a Multi-Section Hidden Markov Model (MSHMM) recognition method based on fuzzy concepts for speaker-independent speech recognition. In this method, the training data are divided into several sections, and for each section multi-observation sequences are obtained by assigning, via a fuzzy rule, appropriate probabilities to codewords in order of their distance from the MSVQ codebook. An HMM is then generated per section from these multi-observation sequences, and at recognition time the word with the highest probability is selected as the recognized word. For comparison, experiments with various conventional recognition methods (DP, MSVQ, DMS, and a general HMM) were run on the same data. The overall results show that the proposed fuzzy-concept MSHMM is superior to the DP, MSVQ, DMS, and general HMM methods in recognition rate and computation time, and that its recognition rate does not decrease (92.91%) despite an increase in the number of speakers.
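
The fuzzy observation step, giving several candidate codewords probabilities according to their distance rank, can be sketched as follows. The membership rule here (weights inversely proportional to distance over the k nearest codewords) is a hypothetical stand-in; the paper's exact fuzzy rule is not reproduced in the abstract.

```python
import numpy as np

def fuzzy_observations(frame, codebook, k=3):
    """Return the k nearest codeword indices with fuzzy membership
    probabilities inversely proportional to their distances."""
    dists = np.linalg.norm(codebook - frame, axis=1)
    nearest = np.argsort(dists)[:k]
    inv = 1.0 / (dists[nearest] + 1e-12)   # closer codeword -> larger weight
    probs = inv / inv.sum()
    return list(zip(nearest.tolist(), probs.tolist()))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 12))               # 16 codewords, 12-D features
frame = codebook[5] + 0.01 * rng.normal(size=12)   # a frame near codeword 5
obs = fuzzy_observations(frame, codebook)
```

Each frame thus contributes several weighted observations to HMM training instead of a single hard vector-quantization label.
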

Vocabulary Recognition Performance Improvement using a convergence of Bayesian Method for Parameter Estimation and Bhattacharyya Algorithm Model (모수 추정을 위한 베이시안 기법과 바타차랴 알고리즘을 융합한 어휘 인식 성능 향상)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence / v.13 no.10 / pp.353-358 / 2015
  • A vocabulary recognition system built around a standard vocabulary shows a decline in recognition for words outside, or merely similar to, that standard vocabulary. In this case, reconstructing the system to add or extend the vocabulary range is one way to solve the problem. This paper proposes a speech recognition learning model that combines Bayesian parameter estimation, which gives the model configuration scalability, with the Bhattacharyya algorithm. A corrected standard model is recognized based on phoneme characteristics, using Bayesian methods for parameter estimation from the phoneme data and the Bhattacharyya algorithm for similar models. The recognition model configured with the Bhattacharyya algorithm is then evaluated for recognition performance. Applying the proposed method yields a recognition rate of 97.3% and a learning time of 1.2 seconds.
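
The Bhattacharyya distance used to compare similar models has a closed form for Gaussians; the univariate case below is a minimal sketch (the abstract does not state the dimensionality of the phoneme models):

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians:
    D_B = (mu1 - mu2)^2 / (4 (var1 + var2))
          + 0.5 * ln((var1 + var2) / (2 sqrt(var1 * var2)))"""
    return ((mu1 - mu2) ** 2) / (4.0 * (var1 + var2)) \
        + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))

d_same = bhattacharyya_gaussian(0.0, 1.0, 0.0, 1.0)   # identical models: 0
d_near = bhattacharyya_gaussian(0.0, 1.0, 0.5, 1.2)   # similar phoneme models: small
d_far  = bhattacharyya_gaussian(0.0, 1.0, 3.0, 1.0)   # dissimilar models: large
```

A small distance flags two phoneme models as similar, which is the signal such a method can use when deciding how to correct or merge models.
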

Implementation of Face Recognition Pipeline Model using Caffe (Caffe를 이용한 얼굴 인식 파이프라인 모델 구현)

  • Park, Jin-Hwan;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology / v.24 no.5 / pp.430-437 / 2020
  • The proposed model improves the face prediction rate and recognition rate by training an artificial neural network on the output of face detection, landmark, and face recognition algorithms. After landmarking the face images of a specific person, the proposed model uses a pre-trained Caffe model for face detection and to extract a 128-D embedding vector. Classifiers are then trained with machine learning algorithms such as a support vector machine (SVM) and a deep neural network (DNN). Face recognition is tested with face images of the learned person that differ from the training images. In the experiments, training with a DNN showed a better prediction rate and recognition rate than with an SVM. However, when the number of hidden layers of the DNN is increased, the prediction rate rises but the recognition rate falls; we judge this to be overfitting caused by the small number of subjects to be recognized. Adding clear face images to the proposed model confirmed that a high prediction rate and recognition rate can be obtained. This research should enable better recognition and prediction rates through effective deep learning setups that use more face image data.
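
The final classification stage of such a pipeline can be sketched as follows. The 128-D vectors here are synthetic stand-ins for the Caffe model's embeddings, and a linear softmax classifier trained by gradient descent is a minimal stand-in for the SVM/DNN stage; sizes and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ids, dim = 3, 128

# Synthetic 128-D embeddings: each identity clusters tightly around a center.
centers = rng.normal(size=(n_ids, dim))
X = np.vstack([c + 0.1 * rng.normal(size=(20, dim)) for c in centers])
y = np.repeat(np.arange(n_ids), 20)

# Train a linear softmax classifier on the embeddings by gradient descent.
W = np.zeros((dim, n_ids))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p
    grad[np.arange(len(y)), y] -= 1.0      # d(cross-entropy)/d(logits)
    W -= 0.1 * X.T @ grad / len(y)

train_acc = float((np.argmax(X @ W, axis=1) == y).mean())
```
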

A Variable Parameter Model based on SSMS for an On-line Speech and Character Combined Recognition System (음성 문자 공용인식기를 위한 SSMS 기반 가변 파라미터 모델)

  • 석수영;정호열;정현열
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.528-538 / 2003
  • An SCCRS (Speech and Character Combined Recognition System) is developed to run on mobile devices such as PDAs (Personal Digital Assistants). In the SCCRS, feature extraction is carried out separately for speech and for hand-written characters, but recognition is performed in a common engine. The recognition engine is essentially a CHMM (Continuous Hidden Markov Model) with a variable-parameter topology that minimizes the number of model parameters and reduces recognition time. To generate the context-independent variable-parameter model, we propose SSMS (Successive State and Mixture Splitting), which determines appropriate numbers of mixtures and states by splitting in the mixture domain and in the time domain. The recognition results show that the proposed SSMS method can reduce the total number of GOPDDs (Gaussian Output Probability Density Distributions) by up to 40.0% compared with the conventional fixed-parameter model, at the same recognition performance in the speech recognition system.
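
The mixture-domain half of such splitting can be sketched as below: one Gaussian component is replaced by two, with means perturbed in opposite directions and the weight halved. This is a generic split operation; the SSMS criterion for choosing which state or mixture to split is not reproduced in the abstract.

```python
import numpy as np

def split_mixture(weights, means, variances, idx, eps=0.2):
    """Split mixture component `idx` into two: perturb the mean in opposite
    directions by eps standard deviations and halve the weight."""
    w, m, v = weights[idx], means[idx], variances[idx]
    offset = eps * np.sqrt(v)
    weights = np.concatenate([np.delete(weights, idx), [w / 2, w / 2]])
    means = np.concatenate([np.delete(means, idx), [m - offset, m + offset]])
    variances = np.concatenate([np.delete(variances, idx), [v, v]])
    return weights, means, variances

w = np.array([0.4, 0.6])
m = np.array([0.0, 3.0])
v = np.array([1.0, 2.0])
w2, m2, v2 = split_mixture(w, m, v, idx=1)   # grow the 2-mixture model to 3
```
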

Vector Quantization based Speech Recognition Performance Improvement using Maximum Log Likelihood in Gaussian Distribution (가우시안 분포에서 Maximum Log Likelihood를 이용한 벡터 양자화 기반 음성 인식 성능 향상)

  • Chung, Kyungyong;Oh, SangYeob
    • Journal of Digital Convergence / v.16 no.11 / pp.335-340 / 2018
  • Commercialized speech recognition systems with accurate recognition rates use a learning model built from speaker-dependent isolated data. However, their speech recognition performance decreases with the quantity of data in noisy environments. In this paper, we propose a vector quantization based speech recognition performance improvement using the maximum log-likelihood under a Gaussian distribution. The proposed method configures the best learning model for increasing the accuracy of speech recognition of similar speech, using vector quantization and the maximum log-likelihood together with a speech feature extraction method based on the hidden Markov model. By correcting inaccurate speech models produced by the existing system, the proposed system can constitute a robust model for speech recognition. The proposed method shows improved recognition accuracy in a speech recognition system.
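
The core selection rule, choosing the codeword whose Gaussian assigns the frame the maximum log-likelihood, can be sketched as follows (diagonal-covariance Gaussians and 13-D MFCC-like features are illustrative assumptions):

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    """Log-likelihood of x under a diagonal-covariance Gaussian N(mean, var)."""
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def best_codeword(x, means, variances):
    """Pick the codeword whose Gaussian gives x the maximum log-likelihood."""
    return int(np.argmax([gaussian_loglik(x, m, v)
                          for m, v in zip(means, variances)]))

rng = np.random.default_rng(0)
means = rng.normal(size=(8, 13))            # 8 codewords, 13-D features
variances = np.full((8, 13), 0.5)
x = means[2] + 0.1 * rng.normal(size=13)    # a frame generated near codeword 2
choice = best_codeword(x, means, variances)
```

With equal variances this reduces to nearest-neighbor quantization; unequal variances let the likelihood weight each feature dimension by its reliability.
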

Noise Robust Speech Recognition Based on Parallel Model Combination Adaptation Using Frequency-Variant (주파수 변이를 이용한 Parallel Model Combination 모델 적응에 기반한 잡음에 강한 음성인식)

  • Choi, Sook-Nam;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.32 no.3 / pp.252-261 / 2013
  • A common speech recognition system shows high recognition performance in a quiet environment, while its performance declines sharply in real environments with noise. To implement a speech recognizer that is robust in different settings, this study proposes Parallel Model Combination adaptation using frequency variants based on environment awareness (FV-PMC), which uses frequency variants to acquire environmental data for speech recognition, applies the data to updating the speech recognition model, and thereby enhances its performance. FV-PMC performs recognition with a model generated as follows: i) the average frequency variant among pre-classified noise groups is calculated in advance and set as a threshold; ii) when speech with unknown noise is input, the frequency variant among the noise groups is recalculated; iii) speech above the threshold of a given group is regarded as containing that group's noise; and iv) the model for that noise group is used. When noises were classified with the proposed FV-PMC, the average classification accuracy was 56%, and the speech recognition experiments showed average recognition rates of 79.05% for Set A, 79.43% for Set B, and 83.37% for Set C. The grand mean recognition rate was 80.62%, a 5.69% improvement over the 74.93% of the existing Parallel Model Combination with a clean model, demonstrating that the proposed method is effective.
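
Steps i)-iii) amount to thresholding a frequency-domain statistic against per-group values. The sketch below uses a hypothetical statistic (mean per-band variance of a spectrogram-like array) and invented group names; the paper's exact frequency-variant definition is not given in the abstract.

```python
import numpy as np

def frequency_variant(frames):
    """A hypothetical frequency-variant statistic: mean per-band variance
    of a spectrogram-like array (frames x frequency bands)."""
    return float(np.var(frames, axis=0).mean())

def assign_noise_group(frames, thresholds):
    """Assign the input to the noise group whose pre-computed threshold its
    frequency variant exceeds by the largest margin, or -1 if none match."""
    fv = frequency_variant(frames)
    margins = {g: fv - t for g, t in thresholds.items() if fv >= t}
    return max(margins, key=margins.get) if margins else -1

rng = np.random.default_rng(0)
thresholds = {"babble": 0.8, "car": 2.5}        # illustrative per-group thresholds
noisy = rng.normal(scale=1.2, size=(100, 32))    # 100 frames, 32 frequency bands
group = assign_noise_group(noisy, thresholds)
```
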

Covariance-based Recognition Using Machine Learning Model

  • Osman, Hassab Elgawi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.223-228 / 2009
  • We propose an on-line machine learning approach to object recognition in which new images are continuously added and the recognition decision is made without delay. The random forest (RF) classifier has been used extensively as a generative model for classification and regression. We extend this technique to build an incremental component-based detector. First we employ an object descriptor model based on a bag of covariance matrices to represent an object region, then run our on-line RF learner to select object descriptors and to learn an object classifier. Object recognition experiments verify the effectiveness of the proposed approach; the results demonstrate that the proposed model yields object recognition performance comparable to the benchmark standard RF, AdaBoost, and SVM classifiers.
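
A region covariance descriptor of the kind bagged above is the covariance matrix of per-pixel features over an image region. A minimal sketch, with an illustrative five-feature set (pixel coordinates, intensity, and gradient magnitudes); the paper's exact feature set is not stated in the abstract:

```python
import numpy as np

def region_covariance(region):
    """Covariance descriptor of an image region: stack per-pixel features
    (x, y, intensity, |dI/dx|, |dI/dy|) and return their covariance matrix."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(region.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                      np.abs(dx).ravel(), np.abs(dy).ravel()])
    return np.cov(feats)

rng = np.random.default_rng(0)
patch = rng.random((16, 16))        # a 16x16 grayscale patch
C = region_covariance(patch)        # 5x5 symmetric descriptor
```

The descriptor's size depends only on the number of features, not the region size, which is what makes it convenient for comparing regions of different shapes.
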