• Title/Summary/Keyword: Speech Processing


Design of new CNN structure with internal FC layer (내부 FC층을 갖는 새로운 CNN 구조의 설계)

  • Park, Hee-mun;Park, Sung-chan;Hwang, Kwang-bok;Choi, Young-kiu;Park, Jin-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.466-467 / 2018
  • Recently, artificial intelligence has been applied to fields such as image recognition, speech recognition, and natural language processing, and interest in deep learning is increasing. The Convolutional Neural Network (CNN), one of the most representative deep learning algorithms, has strong advantages in image recognition and classification and is widely used in many fields. In this paper, we propose a new network structure that modifies the general CNN structure. A typical CNN consists of convolution layers, ReLU layers, and pooling layers. In this paper, we construct a new network by adding a fully connected (FC) layer inside the general CNN structure. This modification is intended to improve learning and accuracy on the convolved feature maps by incorporating the generalization ability that is an advantage of fully connected neural networks.
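
As a rough illustration of the idea, the sketch below inserts a fully connected layer between two convolution stages of an otherwise ordinary CNN. It is a minimal PyTorch sketch under assumed layer sizes and a 28x28 single-channel input; the paper's actual architecture and hyperparameters are not specified here.

```python
# Hypothetical sketch of the paper's idea: a CNN whose convolution
# stages are interleaved with an internal fully connected (FC) layer.
# Layer sizes and the 28x28 input are illustrative assumptions.
import torch
import torch.nn as nn

class CNNWithInternalFC(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),           # 28x28 -> 14x14
        )
        # Internal FC layer between the convolution stages:
        # flatten, mix features globally, reshape back to a map.
        self.internal_fc = nn.Linear(16 * 14 * 14, 16 * 14 * 14)
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),           # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.conv1(x)
        b = x.size(0)
        x = self.internal_fc(x.flatten(1)).relu().view(b, 16, 14, 14)
        x = self.conv2(x)
        return self.classifier(x.flatten(1))

model = CNNWithInternalFC()
print(model(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 10])
```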


Audio Segmentation and Classification Using Support Vector Machine and Fuzzy C-Means Clustering Techniques (서포트 벡터 머신과 퍼지 클러스터링 기법을 이용한 오디오 분할 및 분류)

  • Nguyen, Ngoc;Kang, Myeong-Su;Kim, Cheol-Hong;Kim, Jong-Myon
    • The KIPS Transactions:PartB / v.19B no.1 / pp.19-26 / 2012
  • The rapid increase of information imposes new demands on content management. The purpose of automatic audio segmentation and classification is to meet this rising need for efficient content management. For this reason, this paper proposes a high-accuracy algorithm that segments audio signals and classifies them into classes such as speech, music, silence, and environmental sounds. The proposed algorithm uses a support vector machine (SVM) to detect audio-cuts, the boundaries between different kinds of sounds, from a parameter sequence. We then extract feature vectors composed of statistical data and use them as the input of a fuzzy c-means (FCM) classifier to partition the audio segments into the different classes. To evaluate the proposed SVM-FCM based algorithm, we measure precision and recall for segmentation and accuracy for classification. Furthermore, we compare the proposed algorithm with other methods, including binary and FCM classifiers, in terms of segmentation performance. Experimental results show that the proposed algorithm outperforms the other methods in both precision and recall.
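
A schematic sketch of this two-stage pipeline is given below, with random numbers standing in for the audio features: the SVM cut detector uses scikit-learn, and the fuzzy c-means step is implemented directly from the standard update equations. The class count and fuzzifier m are assumptions, not values from the paper.

```python
# Illustrative SVM + fuzzy c-means (FCM) pipeline with stand-in data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# --- Stage 1: SVM audio-cut detector (binary: cut / no-cut) ---
X_train = rng.normal(size=(200, 12))    # stand-in frame features
y_train = rng.integers(0, 2, size=200)  # stand-in cut labels
cut_detector = SVC(kernel="rbf").fit(X_train, y_train)
print(cut_detector.predict(X_train[:5]))

# --- Stage 2: fuzzy c-means over per-segment statistics ---
def fcm(X, c=4, m=2.0, iters=100, eps=1e-5):
    """Standard FCM updates: returns (centers, membership matrix U)."""
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U

segments = rng.normal(size=(50, 12))  # stand-in segment statistics
centers, U = fcm(segments)
labels = U.argmax(axis=1)             # hard class per segment
print(labels[:10])
```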

Development of an Embedded System for Ship's Steering Gear using Voice Recognition Module (음성인식모듈을 이용한 선박조타용 임베디드 시스템 개발)

  • Park, Gyei-Kark;Seo, Ki-Yeol;Hong, Tae-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.5 / pp.604-609 / 2004
  • Recently, various studies have been made on automatic control systems for small ships in order to improve maneuvering and to reduce labor on board. Automation techniques for the efficient operation of small ships have developed rapidly, but ship operation has become more complicated because of the need to handle various gauges and instruments. To address this, speech information processing, one of the human interface methods, has been applied to ship operation systems, but no complete system has yet been implemented. The purpose of this paper is therefore to implement a ship steering control system using a voice recognition module.
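
Purely to make the interface concrete, here is a hypothetical sketch of how recognized utterances might be mapped to rudder commands; the command vocabulary, angle step, and function names are all invented for illustration and are not from the paper.

```python
# Hypothetical command interpreter for voice-controlled steering.
# The recognizer's output is assumed to arrive as plain text.
RUDDER_SIGN = {"port": -1, "starboard": +1, "midships": 0}

def interpret(utterance, step=5.0):
    """Map recognized text to a rudder angle in degrees (None if unknown)."""
    text = utterance.lower()
    for word, sign in RUDDER_SIGN.items():
        if word in text:
            return sign * step  # "midships" yields 0.0, i.e. center the rudder
    return None  # unrecognized: leave the helm unchanged

print(interpret("starboard ten", step=10.0))  # 10.0
print(interpret("midships"))                  # 0.0
```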

Difficulty in Facial Emotion Recognition in Children with ADHD (주의력결핍 과잉행동장애의 이환 여부에 따른 얼굴표정 정서 인식의 차이)

  • An, Na Young;Lee, Ju Young;Cho, Sun Mi;Chung, Young Ki;Shin, Yun Mi
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.24 no.2 / pp.83-89 / 2013
  • Objectives : Children with attention-deficit hyperactivity disorder (ADHD) are known to experience significant difficulty in recognizing facial emotion, which involves processing emotional facial expressions rather than speech, compared to children without ADHD. The objective of this study is to investigate differences in facial emotion recognition between children with ADHD and normal controls. Methods : Participants were recruited from the Suwon Project, a cohort comprising a non-random convenience sample of 117 nine-year-old ethnic Koreans. The parents of the participants completed questionnaires including the Korean version of the Child Behavior Checklist, the ADHD Rating Scale, and the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version. The Facial Expression Recognition Test of the Emotion Recognition Test was used to evaluate facial emotion recognition, and the ADHD Rating Scale was used to assess ADHD. Results : Children with ADHD (N=10) showed impaired Emotional Differentiation and Contextual Understanding compared with normal controls (N=24). We found no statistically significant difference between the groups in the recognition of positive facial emotions (happiness and surprise) or negative facial emotions (anger, sadness, disgust, and fear). Conclusion : The results suggest that facial emotion recognition may be closely associated with ADHD, after controlling for covariates, although more research is needed.

Pitch Estimation Method in an Integrated Time and Frequency Domain by Applying Linear Interpolation (선형 보간법을 이용한 시간과 주파수 조합영역에서의 피치 추정 방법)

  • Kim, Ki-Chul;Park, Sung-Joo;Lee, Seok-Pil;Kim, Moo-Young
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.100-108 / 2010
  • Autocorrelation is widely used in pitch estimation. Autocorrelation in the time domain and in the frequency domain have different characteristics: their peaks correspond to the pitch period and the fundamental frequency, respectively. We utilize an integrated autocorrelation method over both domains, which removes pitch-doubling and pitch-halving errors. In the two domains, the pitch period and the fundamental frequency are reciprocals of each other. In particular, fundamental-frequency estimation suffers from errors caused by the limited resolution of the FFT. To reduce these artifacts, interpolation is applied in the integrated autocorrelation domain, which decreases pitch errors. Moreover, frequency-domain autocorrelation values are computed only for the pitch candidates found in the time domain, reducing computational complexity. Using linear interpolation, the required number of FFT coefficients can be reduced by a factor of eight; compared to conventional methods, overall computational complexity is reduced by a factor of 9.5.
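
For orientation, the sketch below shows a bare-bones autocorrelation pitch estimator with sub-sample refinement around the peak. The parabolic fit here stands in for the paper's interpolation scheme, and the frame length and search range are assumptions.

```python
# Minimal autocorrelation pitch estimator with sub-sample refinement.
import numpy as np

def estimate_pitch(frame, fs, fmin=60.0, fmax=400.0):
    """Return an interpolated pitch estimate (Hz) for one frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    # Refine the integer lag with a parabolic fit through its neighbors,
    # reducing the quantization error the abstract attributes to finite
    # resolution.
    y0, y1, y2 = ac[lag - 1], ac[lag], ac[lag + 1]
    denom = y0 - 2 * y1 + y2
    delta = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return fs / (lag + delta)

fs = 16000
t = np.arange(1024) / fs
print(estimate_pitch(np.sin(2 * np.pi * 120 * t), fs))  # ~120 Hz
```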

Development of a Lipsync Algorithm Based on Audio-visual Corpus (시청각 코퍼스 기반의 립싱크 알고리듬 개발)

  • 김진영;하영민;이화숙
    • The Journal of the Acoustical Society of Korea / v.20 no.3 / pp.63-69 / 2001
  • A corpus-based lip-sync algorithm for synthesizing natural face animation is proposed in this paper. To obtain the lip parameters, markers were attached to the speaker's face, and their positions were extracted with image processing methods. The spoken utterances were labeled with HTK, and prosodic information (duration, pitch, and intensity) was analyzed. An audio-visual corpus was constructed by combining the speech and image information; the basic unit of our approach is the syllable. Based on this audio-visual corpus, lip information represented by the marker positions is synthesized: the best syllable units are selected from the corpus, and the visual information of the selected units is concatenated. Obtaining the best units involves two steps: selecting the N-best candidates for each syllable, and then selecting the smoothest unit sequence by a Viterbi decoding algorithm. For this, two distance measures between syllable units are proposed: a phonetic-environment distance and a prosodic distance. Computer simulation results showed that the proposed algorithm performs well; in particular, pitch and intensity information proved as important as duration information for lip sync.
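
The selection step can be pictured as a shortest-path search: one target cost per candidate unit plus a join cost between consecutive units, minimized by Viterbi decoding. The toy sketch below uses random stand-in costs rather than the paper's phonetic-environment and prosody distances.

```python
# Toy unit-selection by Viterbi search: pick one candidate per
# syllable minimizing target cost + join (smoothness) cost.
import numpy as np

def viterbi_select(target_costs, join_cost):
    """target_costs: list of arrays, one per syllable (N-best costs).
    join_cost(a, b): cost of concatenating candidate a then b."""
    prev = target_costs[0]
    back = []
    for costs in target_costs[1:]:
        trans = np.array([[join_cost(i, j) for j in range(len(costs))]
                          for i in range(len(prev))])
        total = prev[:, None] + trans + costs[None, :]
        back.append(total.argmin(axis=0))  # best predecessor per candidate
        prev = total.min(axis=0)
    path = [int(prev.argmin())]            # best final candidate
    for bp in reversed(back):              # trace predecessors backwards
        path.append(int(bp[path[-1]]))
    return path[::-1]

rng = np.random.default_rng(1)
candidates = [rng.random(3) for _ in range(4)]  # 4 syllables, 3-best each
print(viterbi_select(candidates, lambda a, b: abs(a - b) * 0.1))
```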


Development of medical/electrical convergence software for classification between normal and pathological voices (장애 음성 판별을 위한 의료/전자 융복합 소프트웨어 개발)

  • Moon, Ji-Hye;Lee, JiYeoun
    • Journal of Digital Convergence / v.13 no.12 / pp.187-192 / 2015
  • Software that can analyze disordered speech has high potential for application across converged fields. This paper implements a user-friendly program based on CART (classification and regression trees) analysis to distinguish between normal and pathological voices using a combination of acoustic and HOS (higher-order statistics) parameters, a convergence of medical information and signal processing. The acoustic parameters are jitter (%) and shimmer (%). The proposed HOS parameters are the means and variances of skewness (MOS and VOS) and kurtosis (MOK and VOK). The database consists of 53 normal and 173 pathological voices distributed by Kay Elemetrics. When the acoustic and proposed parameters are used together to generate the decision tree, the average accuracy is 83.11%. Finally, we developed a program with a more user-friendly interface and framework.
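
A compact sketch of this classifier is shown below using scikit-learn's CART-style decision tree. The per-frame HOS statistics follow the MOS/VOS/MOK/VOK definitions above, while the jitter/shimmer step and the Kay Elemetrics corpus are replaced by placeholders.

```python
# CART-style decision tree over HOS features (stand-in data).
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.tree import DecisionTreeClassifier  # CART implementation

def hos_features(signal, frame=400):
    """Mean/variance of per-frame skewness (MOS/VOS) and kurtosis (MOK/VOK)."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    s, k = skew(frames, axis=1), kurtosis(frames, axis=1)
    return np.array([s.mean(), s.var(), k.mean(), k.var()])

rng = np.random.default_rng(0)
X = np.array([hos_features(rng.normal(size=8000)) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = normal, 1 = pathological (dummy)
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.score(X, y))
```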

Some Characteristics of Hanmal and Hangul from the viewpoint of Processing Hangul Information on Computers

  • Kim, Kyong-Sok
    • Proceedings of the KSPS conference / 1996.10a / pp.456-463 / 1996
  • In this paper, we discuss three cases that show the effects of the characteristics of the Hangul writing system. In applications such as computer Hangul shorthand for ordinary people and pushbuttons engraved with Hangul characters, we find considerable advantages in using Hangul; in the case of Hangul transliteration, we discuss problems related to the characteristics of the writing system. Shorthand uses 3-set keyboards in England, America, and Korea. We show how ordinary people can do computer Hangul shorthand, whereas in other countries only experts can. Specifically, the facts that 1) Hangul characters are grouped into syllables (syllabic blocks) and 2) a 3-set Hangul keyboard for ordinary people already exists allow ordinary people to do computer Hangul shorthand without the special training English shorthand requires. This study was done by the author under the codename 'Sejong 89'. In contrast, a 2-set Hangul keyboard, like QWERTY or DVORAK, cannot be used for shorthand. In the case of English pushbuttons, one digit is associated with only one character; however, by engraving only syllable-initial characters on phone pushbuttons, we can associate one Hangul syllable with one digit. Therefore, for a given number of digits, we can encode longer or more meaningful words in Hangul than in English (see the sketch below). We also discuss the problems of the Hangul transliteration system proposed by South Korea and suggest solutions where available: 1) the framework of transcription is incorrectly used for transliteration; the author suggests a) including all complex characters in the transliteration table and b) specifying syllable-initial and syllable-final characters separately in the table; 2) the proposed system cannot represent independent characters and incomplete syllables; and 3) the proposed system cannot distinguish between syllable-initial and syllable-final characters.
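
To make the pushbutton point concrete, the sketch below extracts the syllable-initial consonant of each precomposed Hangul syllable via the standard Unicode decomposition and maps it to a digit; the particular digit layout is invented for the example.

```python
# Map each Hangul syllable to a digit via its syllable-initial
# consonant, using the standard Unicode decomposition for
# precomposed syllables (U+AC00..U+D7A3).
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"  # 19 initial consonants

# Hypothetical keypad: spread the 19 initials over digits 2-9.
DIGIT_OF = {c: str(2 + i * 8 // 19) for i, c in enumerate(CHOSEONG)}

def initial(ch):
    """Return the syllable-initial consonant of a precomposed syllable."""
    idx = ord(ch) - 0xAC00
    if not 0 <= idx < 11172:
        raise ValueError("not a Hangul syllable")
    return CHOSEONG[idx // 588]  # 588 = 21 medials x 28 finals

word = "세종"  # 'Sejong'
print("".join(DIGIT_OF[initial(ch)] for ch in word))
```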


The Analysis and Recognition of Korean Speech Signal using the Phoneme (음소에 의한 한국어 음성의 분석과 인식)

  • Kim, Yeong-Il;Lee, Geon-Gi;Lee, Mun-Su
    • The Journal of the Acoustical Society of Korea / v.6 no.2 / pp.38-47 / 1987
  • Since Korean can be classified phonemically according to the characteristics and structure of its pronunciation, Korean syllables can be divided into phonemes, namely consonants and vowels. The divided phonemes are analyzed by the method of partial autocorrelation, with a coefficient order of 15. The analysis shows that the characteristics of the same initial consonants, vowels, and final consonants are similar across syllables. The experiments are carried out by dividing 675 syllables into initial consonants, vowels, and final consonants. The recognition rates of initial consonants, vowels, final consonants, and syllables are 85.0%, 90.7%, 85.5%, and 72.1%, respectively. In conclusion, Korean syllables divided into phonemes can be analyzed and recognized with minimal data and short processing time, and Korean syllables, words, and sentences can be recognized in the same way.
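
The sketch below computes partial autocorrelation (PARCOR, i.e. reflection) coefficients with the Levinson-Durbin recursion at order 15, matching the order quoted above; the synthetic test frame is an assumption standing in for segmented phoneme data.

```python
# Levinson-Durbin recursion returning PARCOR (reflection) coefficients.
import numpy as np

def parcor(frame, order=15):
    """Return `order` reflection coefficients for one speech frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / err          # m-th reflection coefficient
        ks.append(k)
        prev = a.copy()
        for i in range(1, m):   # update the predictor polynomial
            a[i] = prev[i] + k * prev[m - i]
        a[m] = k
        err *= 1.0 - k * k      # shrink the prediction error
    return np.array(ks)

rng = np.random.default_rng(0)
t = np.arange(400) / 8000.0
frame = np.sin(2 * np.pi * 150 * t) + 0.01 * rng.normal(size=400)
print(parcor(frame)[:4])
```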


HMM-based Korean Named Entity Recognition (HMM에 기반한 한국어 개체명 인식)

  • Hwang, Yi-Gyu;Yun, Bo-Hyun
    • The KIPS Transactions:PartB / v.10B no.2 / pp.229-236 / 2003
  • Named entity recognition (NER) is indispensable to question answering and information extraction systems. This paper presents an HMM-based named entity (NE) recognition method that uses the construction principles of compound words. In Korean, many named entities decompose into more than one word; moreover, there are contextual relationships among the nouns within an NE, and between an NE and its surrounding words. We therefore classify words as an NE in itself, a word inside an NE, or a word adjacent to an NE, and train an HMM on these NE-related word types together with parts of speech. The proposed NER system uses a trigram HMM to handle the variable length of NEs. However, the trigram model suffers from a serious data-sparseness problem; to solve it, we use multi-level back-off. Experimental results show that our NER system achieves an F-measure of 87.6% on economic news articles.
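
The back-off idea can be sketched directly: fall from trigram to bigram to unigram statistics whenever a higher-order count is missing. The tag set and tiny corpus below are invented for illustration and only mimic the multi-level back-off described in the paper.

```python
# Multi-level back-off over tag n-gram counts: use the trigram
# estimate when its counts exist, else the bigram, else the unigram.
from collections import Counter

tags = ["O", "B-ORG", "I-ORG", "O", "B-ORG", "I-ORG", "O"]
uni = Counter(tags)
bi = Counter(zip(tags, tags[1:]))
tri = Counter(zip(tags, tags[1:], tags[2:]))

def backoff_prob(t1, t2, t3):
    """P(t3 | t1, t2), backing off when higher-order counts are absent."""
    if tri[(t1, t2, t3)] and bi[(t1, t2)]:
        return tri[(t1, t2, t3)] / bi[(t1, t2)]
    if bi[(t2, t3)] and uni[t2]:
        return bi[(t2, t3)] / uni[t2]
    return uni[t3] / len(tags)

print(backoff_prob("O", "B-ORG", "I-ORG"))  # trigram level
print(backoff_prob("I-ORG", "I-ORG", "O"))  # falls back to bigram level
```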