• Title/Summary/Keyword: Parts of Speech

A Real-time Implementation of G.729.1 Codec on an ARM Processor for the Improvement of VoWiFi Voice Quality (VoWiFi 음질 향상을 위한 G.729.1 광대역 코덱의 ARM 프로세서에의 실시간 구현)

  • Park, Nam-In;Kang, Jin-Ah;Kim, Hong-Kook
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2008.02a
    • /
    • pp.230-235
    • /
    • 2008
  • This paper addresses issues associated with the real-time implementation of a wideband speech codec such as ITU-T G.729.1 on an ARM processor in order to improve the voice quality of a VoWiFi service. The real-time implementation involves optimizing the C source code of G.729.1 and replacing several parts of the codec algorithm with faster ones. The performance of the implementation is measured by the CPU time spent by G.729.1 on the ARM926EJ processor used in a VoWiFi phone. The experiments show that the G.729.1 codec works in real time with better voice quality than the G.729 codec conventionally used in VoIP or VoWiFi phones.

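The real-time claim above rests on measuring per-frame CPU time against the codec's frame period. The sketch below is a rough illustration only (not the paper's code) of such a check in C: g7291_encode_frame() is a hypothetical placeholder for the ported encoder doing dummy work, the 20 ms frame budget is assumed, and clock() stands in for the cycle counter a real ARM926EJ port would read.

```c
/* Minimal timing sketch (illustrative only, not the paper's code). */
#include <stdio.h>
#include <time.h>

#define FRAME_SAMPLES   320    /* 20 ms at 16 kHz wideband input (assumed) */
#define FRAME_BUDGET_MS 20.0   /* one frame must finish within its period  */

/* Placeholder standing in for the ported encoder; does dummy work only. */
static int g7291_encode_frame(const short *pcm, unsigned char *bs)
{
    long acc = 0;
    for (int i = 0; i < FRAME_SAMPLES; i++) acc += pcm[i];  /* fake load */
    bs[0] = (unsigned char)acc;
    return 0;
}

/* Time one encode call and return the elapsed time in milliseconds. */
static double encode_and_time(const short *pcm, unsigned char *bs)
{
    clock_t t0 = clock();
    g7291_encode_frame(pcm, bs);
    clock_t t1 = clock();
    return 1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    short pcm[FRAME_SAMPLES] = {0};   /* silence stands in for real speech */
    unsigned char bs[80];             /* 32 kbps x 20 ms = 80 bytes max    */
    double ms = encode_and_time(pcm, bs);
    printf("encode: %.3f ms (budget %.1f ms) -> %s\n",
           ms, FRAME_BUDGET_MS, ms < FRAME_BUDGET_MS ? "real-time" : "too slow");
    return 0;
}
```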

Transcoding Algorithm for SMV and G.723.1 Vocoders via Direct Parameter Transformation (SMV와 G.723.1 음성부호화기를 위한 파라미터 직접 변환 방식의 상호부호화 알고리듬)

  • 서성호;장달원;이선일;유창동
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.61-70
    • /
    • 2003
  • In this paper, a transcoding algorithm for the Selectable Mode Vocoder (SMV) and the G.723.1 speech coder via direct parameter transformation is proposed. In contrast to the conventional tandem transcoding algorithm, the proposed algorithm converts the parameters of one coder to the other without going through the decoding and encoding process. The proposed algorithm is composed of the following parts: parameter decoding, line spectral pair (LSP) conversion, pitch period conversion, excitation conversion, and rate selection. The evaluation results show that the proposed algorithm achieves speech quality equivalent to that of tandem transcoding with reduced computational complexity and delay.

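The abstract above names the stages of the parameter-domain transcoder: parameter decoding, LSP conversion, pitch period conversion, excitation conversion, and rate selection. The C skeleton below is a minimal structural sketch of that flow under simplified, assumed types; every struct and function name is an illustrative placeholder, not the paper's implementation.

```c
/* Structural sketch of direct parameter-domain transcoding (illustrative
 * only): source-codec parameters are mapped to target-codec parameters
 * without a full decode/encode cycle. */
#include <stdio.h>

typedef struct {
    float lsp[10];          /* spectral envelope as line spectral pairs */
    float pitch;            /* pitch period                             */
    float excitation[160];  /* excitation signal, one frame             */
    int   rate;             /* selected target bit rate                 */
} CodecParams;

/* Stubs standing in for the conversion stages listed in the abstract. */
static void decode_params(const unsigned char *bits, CodecParams *p) { (void)bits; (void)p; }
static void convert_lsp(const CodecParams *s, CodecParams *d) { for (int i = 0; i < 10; i++) d->lsp[i] = s->lsp[i]; }
static void convert_pitch(const CodecParams *s, CodecParams *d) { d->pitch = s->pitch; }
static void convert_excitation(const CodecParams *s, CodecParams *d) { for (int i = 0; i < 160; i++) d->excitation[i] = s->excitation[i]; }
static int  select_rate(const CodecParams *d) { (void)d; return 0; /* e.g. choose among G.723.1 rates */ }
static void encode_params(const CodecParams *d, unsigned char *bits) { (void)d; bits[0] = 0; }

/* One frame of SMV -> G.723.1 transcoding, stage by stage. */
void transcode_frame(const unsigned char *smv_bits, unsigned char *g7231_bits)
{
    CodecParams src = {0}, dst = {0};
    decode_params(smv_bits, &src);     /* parameter decoding            */
    convert_lsp(&src, &dst);           /* LSP conversion                */
    convert_pitch(&src, &dst);         /* pitch period conversion       */
    convert_excitation(&src, &dst);    /* excitation conversion         */
    dst.rate = select_rate(&dst);      /* rate selection                */
    encode_params(&dst, g7231_bits);   /* repack into target bitstream  */
}

int main(void)
{
    unsigned char in[24] = {0}, out[24] = {0};
    transcode_frame(in, out);
    printf("transcoded one frame (skeleton only)\n");
    return 0;
}
```
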
A Study of Segmental and Syllabic Intervals of Canonical Babbling and Early Speech

  • Chen, Xiaoxiang;Xiao, Yunnan
    • Cross-Cultural Studies
    • /
    • v.28
    • /
    • pp.115-139
    • /
    • 2012
  • Interval or duration of segments, syllables, words and phrases is an important acoustic feature which influences the naturalness of speech. A number of cross-sectional studies on the acoustic characteristics of children's speech development found that the intervals of segments, syllables, words and phrases tend to change with growing age. One hypothesis assumed that decreases in intervals would be greater when children were younger and smaller when they were older (Thelen, 1991); this has been supported by a number of cross-sectional studies (Tingley & Allen, 1975; Kent & Forner, 1980; Chermak & Schneiderman, 1986). The other hypothesis predicted that decreases in intervals would be smaller when children were younger and greater when they were older (Smith, Kenney & Hussain, 1996). Researchers have thus come up with conflicting postulations and inconsistent results about the trends in the intervals of segments, syllables, words and phrases, leaving the issue unresolved. Most acoustic investigations of children's speech production have been conducted with cross-sectional designs, which involve studying several groups of children; so far there are only a few longitudinal studies. The issue needs more longitudinal investigation; moreover, acoustic measures of the intervals of child speech are hardly available. All former studies focus on word stages and exclude the babbling stages, especially the canonical babbling stage, yet we need to find out when concrete changes in intervals begin to occur and what causes them. We therefore conducted an acoustic study of the interval characteristics of segments and words in canonical babble (CB) and early speech in an infant acquiring Mandarin Chinese, aged from 0;9 to 2;4. The current research addresses two questions: 1. Are decreases in interval greater when children are younger and smaller when they are older, or vice versa? 2. Does child speech, in terms of the acoustic feature of interval, drift in the direction of the language the child is exposed to? The female infant, whose L1 was Southern Mandarin and who lived in Changsha, was audio- and video-taped at her home for about one hour on an almost weekly basis from 0;9 to 2;4 under natural observation by the investigators. The recordings were digitized and parts of the digitized material were labeled; all repetitions were excluded. The utterances were extracted from 44 sessions ranging from 30 minutes to one hour and were divided into segments as well as syllable-sized units. The age stages are 0;9-1;0, 1;1-1;5, 1;6-2;0, and 2;1-2;4. The subject was a normal monolingual child of well-educated parents. Segments and syllables from the 44 sessions spanning the transition from babble to speech were transcribed in narrow IPA and coded for analysis: babble was coded from 0;9 to 1;0 and words from 1;0 to 2;4, and the data were checked by two professionally trained persons who majored in phonetics. The present investigation is thus a longitudinal analysis of some temporal characteristics of child speech during the age periods 0;9-1;0, 1;1-1;5, 1;6-2;0, and 2;1-2;4. The answer to Research Question 1 is that our results agree with neither of the hypotheses. On the whole, there is a tendency for segmental and syllabic duration to decrease with growing age, but the changes are not drastic or abrupt. For example, /a/ after /k/ in Table 1 shows a greater decrease during 1;1-1;5, while /a/ after /p/, /t/ and /w/ shows a greater decrease during 2;1-2;4; /ka/ shows a greater decrease during 1;1-1;5, while /ta/ and /na/ show greater decreases during 2;1-2;4. Across the age periods, interval change fluctuates considerably. The answer to Research Question 2 is yes. The babbling stage is a period in which the acoustic features of the intervals of segments, syllables, words and phrases shift in the direction of the language to be learned; babbling and the emergence of children's speech are greatly influenced by the ambient language. The phonetic changes in duration continue until as late as 10-12 years of age before reaching adult-like levels. With increasing exposure to the ambient language, the variation becomes smaller and smaller until the child attains adult-like competence. Analysis in SPSS 15.0 shows that the decrease of segmental and syllabic intervals across the four age periods is not statistically significant (p>0.05), which means that the change of segmental and syllabic intervals is continuous and that the process of child speech development is gradual and cumulative.

Development of Differential Diagnosis Scale Items for Adductor Spasmodic Dysphonia and Evaluation of Clinical Availability (내전형 연축성 발성장애 감별진단 문항 개발과 임상적 유용성 평가)

  • Cho, Jae Kyung;Choi, Seong Hee;Lee, Sang Hyuk;Jin, Sung Min
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.30 no.2
    • /
    • pp.112-117
    • /
    • 2019
  • Background and Objectives The purpose of this study was to develop a differential diagnosis scale containing items distinguishing adductor spasmodic dysphonia (ADSD) from muscle tension dysphonia (MTD) and to determine the clinical utility of the newly developed items. Materials and Method Four parts, pitch, redirected phonation, automatic speech and voiced sound, were selected for analyzing the characteristics of ADSD in the literature, and one part, tense voiceless sound, was developed according to the Korean manner of articulation. Content validity was evaluated on a 5-point scale (1-5) by 30 experts. One hundred patients (50 ADSD and 50 MTD) were recorded reading a sentence and producing sustained phonation, and two speech-language pathologists evaluated the recorded voices in a blind test on a 4-point scale (0-3) for the newly developed items. Results In the expert verification of content validity, the differential items were found valid with a score of 4.2 out of 5. In the differential diagnosis between the two groups according to the items, the correlation between sub-domains and total scores was higher than 0.710. The reliability of each diagnosis domain was 0.840-0.893, showing that the internal consistency of the items was good. Scores on the five newly developed parts were significantly higher for ADSD than for MTD, with strong correlation (p<0.01). Inter-rater reliability was also high at 0.892. Conclusion In this study, the differential diagnosis scale for ADSD was shown to have validity and reliability; it should be useful for differentiating ADSD from MTD in the clinical field.

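The internal-consistency figures quoted above (0.840-0.893 per domain) are not tied to a named statistic in the abstract; internal consistency of a multi-item scale is most commonly reported as Cronbach's alpha, whose standard form is given below for reference (an assumption about the statistic used, not a statement from the paper):

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right),
\]

where k is the number of items in a domain, \(\sigma^2_{Y_i}\) is the variance of scores on item i, and \(\sigma^2_X\) is the variance of the total domain score.
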
Electromyographic evidence for a gestural-overlap analysis of vowel devoicing in Korean

  • Jun, Sun-A;Beckman, M.;Niimi, Seiji;Tiede, Mark
    • Speech Sciences
    • /
    • v.1
    • /
    • pp.153-200
    • /
    • 1997
  • In languages such as Japanese, it is very common to observe that short peripheral vowels are completely voiceless when surrounded by voiceless consonants. This phenomenon has also been reported in Montreal French, Shanghai Chinese, Greek, and Korean. Traditionally the phenomenon has been described as a phonological rule that either categorically deletes the vowel or changes the [+voice] feature of the vowel to [-voice]. This analysis was supported by the observation of Sawashima (1971) and Hirose (1971) that there are two distinct EMG patterns for voiced and devoiced vowels in Japanese. Close examination of the phonetic evidence based on acoustic data, however, shows that these phonological characterizations are not tenable (Jun & Beckman 1993, 1994). In this paper, we examined the vowel devoicing phenomenon in Korean using EMG, fiberscopic, and acoustic recordings of 100 sentences produced by one Korean speaker. The results show that there is variability in the 'degree of devoicing' in both the acoustic and EMG signals, and in the patterns of glottal closing and opening across different devoiced tokens. There seems to be no categorical difference between devoiced and voiced tokens, for either EMG activity events or glottal patterns. All of these observations support the notion that vowel devoicing in Korean cannot be described as the result of the application of a phonological rule. Rather, devoicing seems to be a highly variable 'phonetic' process, a more or less subtle variation in the specification of such phonetic metrics as the degree and timing of glottal opening, or of the associated subglottal pressure or intra-oral airflow arising from concurrent tone and stricture specifications. Some of the token-pair comparisons are amenable to an explanation in terms of gestural overlap and undershoot. However, the effect of gestural timing on vocal fold state seems to be a highly nonlinear function of the interaction among specifications for the relative timing of glottal adduction and abduction gestures, the amplitudes of the overlapped gestures, the aerodynamic conditions created by concurrent oral and tonal gestures, and so on. In summary, to understand devoicing, it will be necessary to examine its effect on the phonetic representation of events in many parts of the vocal tract, and at many stages of the speech chain between the motor intent and the acoustic signal that reaches the hearer's ear.

A Study on the reestablishment of English Part of Speech and Sentence Structural Elements (영어 품사 및 문장요소 용어 재확립에 대한 고찰)

  • Yi, Jae-Il
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.2
    • /
    • pp.43-48
    • /
    • 2019
  • This study examines the problem of the incorrect usage of grammatical terms that is quite common in the English grammar teaching process and suggests ways to revise and correct the errors. Parts of speech and sentence elements are indispensable for any grammatical explanation. These grammatical terms are a core part of grammar, but they are frequently used interchangeably, without proper verification and with no distinction. The terms refer to different things, and when they are used interchangeably they cause confusion in the establishment of grammar cognition. As a result, there is a crucial need to discuss and improve the definitions of the grammatical terms used in the English teaching process in order to make English education more effective.

Double-attention mechanism of sequence-to-sequence deep neural networks for automatic speech recognition (음성 인식을 위한 sequence-to-sequence 심층 신경망의 이중 attention 기법)

  • Yook, Dongsuk;Lim, Dan;Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.5
    • /
    • pp.476-482
    • /
    • 2020
  • Sequence-to-sequence deep neural networks with attention mechanisms have shown superior performance across various domains, where the sizes of the input and the output sequences may differ. However, if the input sequences are much longer than the output sequences, and the characteristic of the input sequence changes within a single output token, the conventional attention mechanisms are inappropriate, because only a single context vector is used for each output token. In this paper, we propose a double-attention mechanism to handle this problem by using two context vectors that cover the left and the right parts of the input focus separately. The effectiveness of the proposed method is evaluated using speech recognition experiments on the TIMIT corpus.

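For reference, conventional attention forms a single context vector per output step, which is what the abstract argues is insufficient when one output token spans a long, changing stretch of input. In standard notation (a generic sketch, not the paper's exact formulation):

\[
c_i = \sum_{j=1}^{T} \alpha_{ij} h_j, \qquad \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T} \exp(e_{ik})},
\]

where \(h_j\) are the encoder states and \(e_{ij}\) are alignment scores. The double-attention idea described above would instead compute two context vectors for output step i, e.g.

\[
c_i^{L} = \sum_{j \le f_i} \alpha^{L}_{ij} h_j, \qquad c_i^{R} = \sum_{j > f_i} \alpha^{R}_{ij} h_j,
\]

one over encoder frames to the left and one over frames to the right of a focus position \(f_i\) (a hypothetical notation; the paper's exact definition may differ), with both context vectors fed to the decoder for that output token.
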
A Study of Facial Deformity in the Patient with Bilateral Cleft Lip before the Primary Cheiolplasty (양측성 구순열 환자의 안모 변형에 대한 연구)

  • Yoon Bo-Keun;Soh Byung-Soo;Baik Jin-Ah;Shin Hyo-Keun
    • Korean Journal of Cleft Lip And Palate
    • /
    • v.4 no.2
    • /
    • pp.51-68
    • /
    • 2001
  • Midfacial hypoplasia in patients with clefts of the lip and palate is considered to be the result of congenital dysmorphogenesis. Cleft lip and palate, the most frequent hereditary deformity in the maxillofacial region, leads to facial deformity, jaw abnormality, and speech problems, and is characterized by midface deformity showing maxillary anterior nasal septal deviation and deformity. Our study describes congenital correlates of midfacial hypoplasia by examining the displacement of a normal complement of parts, a triangular tissue deficiency low on the lip border on the columellar side, and a linear deficiency and displacement in the line of the bilateral cleft lip. Impressions were taken from 15 patients with bilateral cleft lip and palate before operation; patients who had other abnormalities or complications were excluded. The average age was 3.4 months, and the patients were classified into both-complete, both-incomplete, and complete & incomplete groups. The obtained results were as follows. 1. There were no differences in intercanthal width and canthal width between the groups. 2. The both-complete group had a longer lateral ala length than the both-incomplete group, but there were no differences between the both-complete group and the complete side of the com. & incom. group, or between the both-incomplete group and the incomplete side of the com. & incom. group. 3. Columella length was greater in the both-incomplete group than in the both-complete group, but there was no difference between the both-complete group and the complete side of the com. & incom. group, or between the both-incomplete group and the incomplete side of the com. & incom. group. 4. The both-complete group had greater ala width and ala base width than the both-incomplete group, but there were no differences between the both-complete group and the complete side of the com. & incom. group, or between the both-incomplete group and the incomplete side of the com. & incom. group. 5. There were no differences between the groups in upper lip length, but the nose/mouth width ratio was greater in the both-complete group than in the both-incomplete group. 6. The pronasale (pm), subnasale (sn), labrale superioris (ls), and stomion (sto) points were located around the central vertical line of the face but deviated to the incomplete side in the com. & incom. group. 7. Nasal tip protrusion was greater in the both-incomplete group and the com. & incom. group than in the both-complete group, but there was no difference between the both-incomplete group and the com. & incom. group.

Real-Time Implementation of the G.729.1 Using ARM926EJ-S Processor Core (ARM926EJ-S 프로세서 코어를 이용한 G.729.1의 실시간 구현)

  • So, Woon-Seob;Kim, Dae-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.8C
    • /
    • pp.575-582
    • /
    • 2008
  • In this paper, we describe the process and results of a real-time implementation of the G.729.1 wideband speech codec, which is standardized in SG15 of ITU-T. To run the codec on the ARM926EJ-S(R) processor core, we transformed some parts of the codec C program, including basic operations and arithmetic functions, into assembly language so that the codec operates in real time. G.729.1 is the ITU-T standard wideband speech codec with variable bit rates of 8~32 kbps; it takes quantized 16-bit PCM input at a sampling rate of 8 kHz or 16 kHz. The codec is interoperable with G.729 and G.729A and is a bandwidth-extended wideband (50~7,000 Hz) version of the existing narrowband (300~3,400 Hz) codec, enhancing voice quality. The implemented G.729.1 wideband speech codec has a complexity of 31.2 MCPS for the encoder and 22.8 MCPS for the decoder, and its execution time on the target is 11.5 ms in total, 6.75 ms for encoding and 4.76 ms for decoding. The codec was also tested for bit-exactness against the full set of test vectors provided by ITU-T and passed all of them. In addition, the codec operated well in real time on an Internet phone.

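The reported timings can be related to the real-time requirement with simple arithmetic; assuming the standard G.729.1 frame length of 20 ms (not restated in the abstract), encoding and decoding together must finish within one frame period:

\[
t_{\text{enc}} + t_{\text{dec}} = 6.75\ \text{ms} + 4.76\ \text{ms} \approx 11.5\ \text{ms} < 20\ \text{ms},
\]

a real-time factor of roughly 11.5/20 ≈ 0.58, leaving about 40% of each frame period as headroom; the combined complexity for full-duplex operation is likewise 31.2 + 22.8 = 54.0 MCPS.
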
Study on the Recognition of Spoken Korean Continuous Digits Using Phone Network (음성망을 이용한 한국어 연속 숫자음 인식에 관한 연구)

  • Lee, G.S.;Lee, H.J.;Byun, Y.G.;Kim, S.H.
    • Proceedings of the KIEE Conference
    • /
    • 1988.07a
    • /
    • pp.624-627
    • /
    • 1988
  • This paper describes the implementation of speaker-dependent recognition of Korean spoken continuous digits. The recognition system can be divided into two parts, an acoustic-phonetic processor and a lexical decoder. The acoustic-phonetic processor calculates feature vectors from the input speech signal and then performs frame labelling and phone labelling. Frame labelling is performed by a Bayesian classification method, and phone labelling is performed using the labelled frames and posterior probabilities. The lexical decoder accepts segments (phones) from the acoustic-phonetic processor and decodes their lexical structure through a phone network constructed from the phonetic representations of the ten digits. The experiment was carried out with two sets of 4-digit continuous utterances, each set composed of 35 patterns. An evaluation of the system yielded a pattern accuracy of about 80 percent, resulting from a word accuracy of about 95 percent.

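The Bayesian frame classification mentioned above follows the usual maximum a posteriori decision rule (the generic form; the paper's specific models are not given in the abstract):

\[
\hat{c} = \arg\max_{c} P(c \mid \mathbf{x}) = \arg\max_{c} \, p(\mathbf{x} \mid c)\, P(c),
\]

where \(\mathbf{x}\) is the feature vector of a frame, \(p(\mathbf{x} \mid c)\) is the class-conditional likelihood of phone class c, and \(P(c)\) is its prior; the frame labels and their posteriors then drive phone labelling, and the lexical decoder matches the resulting phone sequence against the ten-digit phone network.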