• Title/Abstract/Keywords: recognition

Search results: 21,198 items (processing time: 0.048 seconds)

남편의 가정내 역할인지와 역할수행에 관한 연구 (Husband's Role Recognition and Role Performance in Family)

  • 이정우
    • 가정과삶의질연구
    • /
    • Vol.17 No.4
    • /
    • pp.1-14
    • /
    • 1999
  • The purpose of this study was to probe the role recognition and role performance of husbands in the family and to find out the variables which influence role recognition and role performance. The sample was selected from 503 husbands with one or more children living in Seoul. The major findings were: 1) Husbands' role recognition was relatively higher than their role performance. 2) Influential variables on the husbands' role recognition were the degree of home-orientedness, communication satisfaction, and stage of the life cycle. 3) Husbands' role performance was affected by communication satisfaction, degree of social support, job satisfaction, sex-role attitudes, flexibility of job, and husbands' role recognition.

  • PDF

A Study on the Isolated word Recognition Using One-Stage DMS/DP for the Implementation of Voice Dialing System

  • Seong-Kwon Lee
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1994 FIFTH WESTERN PACIFIC REGIONAL ACOUSTICS CONFERENCE SEOUL KOREA
    • /
    • pp.1039-1045
    • /
    • 1994
  • Speech recognition systems using VQ usually suffer from a decreased recognition rate because MSVQ assigns dissimilar vectors to the same segment. In this paper, by applying the One-Stage DMS/DP algorithm in recognition experiments, we alleviate this problem to some degree. The recognition experiment was performed on Korean DDD area names with a DMS model of 20 sections and word-unit templates. We carried out the experiment in both speaker-dependent and speaker-independent modes, and obtained recognition rates of 97.7% and 81.7%, respectively.

  • PDF
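The abstract above rests on dynamic-programming template matching. As a rough, simplified sketch of that idea (plain DTW against word templates, not the One-Stage DMS/DP algorithm itself), an isolated-word matcher could look like this; all names and data are illustrative:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming alignment cost between two feature sequences.

    a, b: 2-D arrays of shape (frames, dims). Returns the cumulative
    distance of the best monotonic alignment (smaller = more similar).
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(query, templates):
    """Pick the template word whose DTW distance to the query is smallest."""
    return min(templates, key=lambda w: dtw_distance(query, templates[w]))
```

A word-unit template system of this kind simply stores one reference feature sequence per vocabulary word and returns the nearest one.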

Pose-normalized 3D Face Modeling for Face Recognition

  • Yu, Sun-Jin;Lee, Sang-Youn
    • 한국통신학회논문지
    • /
    • Vol.35 No.12C
    • /
    • pp.984-994
    • /
    • 2010
  • Pose variation is a critical problem in face recognition. Three-dimensional (3D) face recognition techniques have been proposed, as 3D data contains depth information that may allow problems of pose variation to be handled more effectively than with 2D face recognition methods. This paper proposes a pose-normalized 3D face modeling method that translates and rotates any pose angle to a frontal pose using a plane fitting method based on Singular Value Decomposition (SVD). First, we reconstruct 3D face data with a stereo vision method. Second, the nose peak point is estimated from depth information, and the pose angle is then estimated by a facial plane fitting algorithm using four facial features. Next, using the estimated pose angle, the 3D face is translated and rotated to a frontal pose. To demonstrate the effectiveness of the proposed method, we designed 2D and 3D face recognition experiments. The experimental results show that the performance of the normalized 3D face recognition method is superior to that of an un-normalized 3D face recognition method in overcoming the problems of pose variation.
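The SVD plane-fitting step described above can be sketched as follows. This is a minimal illustration under the assumption that a few 3D facial feature points are already available; the function names are invented for the example:

```python
import numpy as np

def fit_facial_plane(points):
    """Fit a plane to 3D facial feature points via SVD.

    points: (N, 3) array (e.g. both eye corners, nose bridge, mouth).
    Returns (centroid, unit normal): the right-singular vector with the
    smallest singular value is the direction of least variance across the
    points, i.e. the plane normal.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    if normal[2] < 0:            # orient the normal toward the camera (+z)
        normal = -normal
    return centroid, normal

def pose_angle_deg(normal):
    """Angle between the facial plane normal and the frontal view axis."""
    frontal = np.array([0.0, 0.0, 1.0])
    cos = np.clip(normal @ frontal, -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```

Given the estimated angle, normalization then amounts to translating the centroid to the origin and rotating the cloud so the normal aligns with the frontal axis.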

잔향 환경 음성인식을 위한 다중 해상도 DenseNet 기반 음향 모델 (Multi-resolution DenseNet based acoustic models for reverberant speech recognition)

  • 박순찬;정용원;김형순
    • 말소리와 음성과학
    • /
    • Vol.10 No.1
    • /
    • pp.33-38
    • /
    • 2018
  • Although deep neural network-based acoustic models have greatly improved the performance of automatic speech recognition (ASR), reverberation still degrades the performance of distant speech recognition in indoor environments. In this paper, we adopt the DenseNet, which has shown strong results in image classification tasks, to improve the performance of reverberant speech recognition. The DenseNet enables a deep convolutional neural network (CNN) to be effectively trained by concatenating the feature maps of each convolutional layer. In addition, we extend the concept of the multi-resolution CNN to a multi-resolution DenseNet for robust speech recognition in reverberant environments. We evaluate the performance of reverberant speech recognition on the single-channel ASR task of the REVERB (REverberant Voice Enhancement and Recognition Benchmark) challenge 2014. According to the experimental results, the DenseNet-based acoustic models show better performance than the conventional CNN-based ones, and the multi-resolution DenseNet provides an additional performance improvement.
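The core DenseNet idea the abstract relies on, each layer consuming the concatenation of all preceding feature maps, can be sketched with a toy NumPy "dense block". Random 1x1 projections stand in for trained convolutions here; everything in this snippet is illustrative, not the paper's model:

```python
import numpy as np

def conv_layer(x, out_channels, rng):
    """Stand-in for a convolutional layer: a random 1x1 projection + ReLU.

    x: (channels, time, freq) feature map.
    """
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.maximum(np.einsum('oc,cij->oij', w, x), 0.0)

def dense_block(x, growth_rate, num_layers, rng):
    """DenseNet-style block: each layer sees the concatenation of all
    preceding feature maps, so the channel count grows by `growth_rate`
    per layer instead of being replaced.
    """
    features = [x]
    for _ in range(num_layers):
        concat = np.concatenate(features, axis=0)  # concat along channels
        features.append(conv_layer(concat, growth_rate, rng))
    return np.concatenate(features, axis=0)
```

The multi-resolution extension would feed feature maps computed at different time-frequency resolutions into such blocks in parallel.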

잡음음성인식을 위한 음성개선 방식들의 성능 비교 (Performance Comparison of the Speech Enhancement Methods for Noisy Speech Recognition)

  • 정용주
    • 말소리와 음성과학
    • /
    • Vol.1 No.2
    • /
    • pp.9-14
    • /
    • 2009
  • Speech enhancement methods can generally be classified into a few categories, and they have usually been compared with each other in terms of speech quality. For the successful use of speech enhancement methods in speech recognition systems, performance comparisons in terms of speech recognition accuracy are necessary. In this paper, we compared the speech recognition performance of some representative speech enhancement algorithms that are widely cited in the literature and widely used. We also compared the performance of the speech enhancement methods with that of other noise-robust speech recognition methods, such as PMC, to verify the usefulness of speech enhancement approaches in noise-robust speech recognition systems.

  • PDF
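As one concrete example of the kind of enhancement method such a comparison covers, a basic magnitude spectral-subtraction front end (a representative technique, though the abstract does not name the specific algorithms compared) might look like this:

```python
import numpy as np

def spectral_subtraction(frames, noise_frames, floor=0.01):
    """Basic magnitude spectral subtraction.

    frames:       (num_frames, frame_len) windowed noisy-speech frames
    noise_frames: frames assumed to contain noise only (e.g. leading
                  silence), used to estimate the noise magnitude spectrum
    floor:        spectral floor to avoid negative magnitudes
    """
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Subtract the average noise magnitude, keep the noisy phase.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase),
                        n=frames.shape[1], axis=1)
```

In a recognition pipeline, the enhanced frames would then be passed to the usual feature extraction (e.g. MFCC) before decoding.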

음성신호를 이용한 감성인식에서의 패턴인식 방법 (The Pattern Recognition Methods for Emotion Recognition with Speech Signal)

  • 박창현;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol.12 No.3
    • /
    • pp.284-288
    • /
    • 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we try are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is then presented in the experimental results section. Emotion recognition technology is not yet mature: the selection of emotion features and of a suitable classification method are still open questions. We therefore hope this paper can serve as a reference for these discussions.
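Two of the simpler classifiers such a comparison might include, a minimum-distance (nearest class mean) classifier and Gaussian naive Bayes, can be sketched on toy pitch/energy features. The data and feature choices below are illustrative, not the paper's databases:

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Minimum-distance classifier: store one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(means, x):
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

def naive_bayes_fit(X, y):
    """Gaussian naive Bayes: per-class mean and diagonal variance."""
    return {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0) + 1e-6)
            for c in np.unique(y)}

def naive_bayes_predict(model, x):
    def log_lik(c):
        m, v = model[c]
        return -0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)
    return max(model, key=log_lik)
```

Comparing such baselines against a neural network on the same features is one way to make the "performance gap" in the abstract concrete.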

Speech Recognition using MSHMM based on Fuzzy Concept

  • Ann, Tae-Ock
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol.16 No.2E
    • /
    • pp.55-61
    • /
    • 1997
  • This paper proposes an MSHMM (Multi-Section Hidden Markov Model) recognition method based on fuzzy concepts for speaker-independent speech recognition. In this method, the training data are divided into several sections, and multi-observation sequences are obtained for each section by assigning proper probabilities, via a fuzzy rule, according to the rank order of distance from the MSVQ codebook. An HMM is then generated for each section using these multi-observation sequences, and at recognition time the word with the highest probability is selected as the recognized word. For comparison, experiments with various conventional recognition methods (DP, MSVQ, DMS, and a general HMM) were carried out on the same data. The overall experimental results show that the proposed fuzzy-concept MSHMM is superior to the DP, MSVQ, DMS, and general HMM methods in recognition rate and computation time, and maintains a recognition rate of 92.91% despite an increase in the number of speakers.

  • PDF
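The fuzzy rule described above, weighting several nearby codewords by distance rank instead of picking only the single nearest one, can be sketched with an FCM-style membership weighting. This is an assumption-laden illustration of the general idea, not the paper's exact rule:

```python
import numpy as np

def fuzzy_observations(frame, codebook, k=3, m=2.0):
    """Fuzzy-VQ style multi-observation for one feature frame.

    Instead of the single nearest codeword, return the k nearest codeword
    indices together with membership weights computed by the standard
    fuzzy-c-means rule (fuzzifier m > 1); the weights sum to 1.
    """
    d = np.linalg.norm(codebook - frame, axis=1)   # distance to each codeword
    idx = np.argsort(d)[:k]                        # k nearest by rank
    dk = np.maximum(d[idx], 1e-12)                 # guard against zero distance
    # u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
    w = 1.0 / np.sum((dk[:, None] / dk[None, :]) ** (2.0 / (m - 1.0)), axis=1)
    return idx, w
```

Each frame thus yields several weighted observation symbols, which is what allows one HMM per section to be trained on "multi-observation sequences".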

음성인식용 인터페이스의 사용편의성 평가 방법론 (A Usability Evaluation Method for Speech Recognition Interfaces)

  • 한성호;김범수
    • 대한인간공학회지
    • /
    • Vol.18 No.3
    • /
    • pp.105-125
    • /
    • 1999
  • As speech is the human being's most natural communication medium, using it offers many advantages. Currently, most computer user interfaces use a mouse/keyboard type, but interfaces using speech recognition are expected to replace them, or at least to serve as a supporting tool. Despite the advantages, speech recognition interfaces are not yet popular because of technical difficulties such as limited recognition accuracy and slow response time. Nevertheless, it is important to optimize human-computer system performance by improving usability. This paper presents a set of guidelines for designing speech recognition interfaces and provides a method for evaluating their usability. A total of 113 guidelines are suggested to improve the usability of speech recognition interfaces. The evaluation method consists of four major procedures: user interface evaluation, function evaluation, vocabulary estimation, and recognition speed/accuracy evaluation. Each procedure is described along with proper techniques for efficient evaluation.

  • PDF

Applying Mobile Agent for Internet-based Distributed Speech Recognition

  • Saaim, Emrul Hamide Md;Alias, Mohamad Ashari;Ahmad, Abdul Manan;Ahmad, Jamal Nasir
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2005 ICCAS
    • /
    • pp.134-138
    • /
    • 2005
  • Several applications have been developed for internet-based speech recognition. Internet-based speech recognition is a distributed application, and various techniques and methods have been used for this purpose. Currently, the client-server paradigm is one of the popular techniques used for client-server communication in web applications. However, there is a new paradigm serving the same purpose: mobile agent technology. Mobile agent technology has several advantages for distributed internet-based systems. This paper presents the application of mobile agent technology to internet-based speech recognition, based on a client-server processing architecture.

  • PDF

음성신호를 이용한 감성인식에서의 패턴인식 방법 (The Pattern Recognition Methods for Emotion Recognition with Speech Signal)

  • 박창현;심귀보
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 2006 Spring Conference Proceedings Vol.16 No.1
    • /
    • pp.347-350
    • /
    • 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we try are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is then presented in the experimental results section. Emotion recognition technology is not yet mature: the selection of emotion features and of a suitable classification method are still open questions. We therefore hope this paper can serve as a reference for these discussions.

  • PDF