• Title/Summary/Keyword: Voice, Sound

Search Results: 336

A Study on the Experimental Sound Transmission Loss of Wall-Body in Song Practice Room (노래연습실 벽체의 투과손실에 관한 실험적 연구)

  • Yun, Jae-Hyun; Ju, Duck-Hoon; Kim, Jae-Soo
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2007.11a / pp.221-225 / 2007
  • Recently, many people visit song practice rooms (karaoke rooms) for stress relief and as part of a healthier drinking culture. Because such rooms use loud accompaniment music and voices amplified through microphones, they generate very high noise levels that affect adjacent spaces and neighboring booths. Effective soundproofing and sound-insulation measures should therefore be planned from the design stage. However, since most song practice room walls, whether already built or still under construction, have been designed with only the interior finish in mind, their soundproofing and sound-insulation performance is poor. Reflecting this, this study measured the sound transmission loss of recently built song practice room walls and, by comparing and analyzing the measured data with various evaluation methods, intends to provide fundamental material for establishing effective sound-insulation measures.
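
The abstract does not spell out its evaluation formulas, but field sound transmission loss is commonly estimated from the level difference between the source and receiving rooms, corrected for the wall area and the receiving-room absorption. The sketch below only illustrates that standard two-room relation; the function name and the 500 Hz band values are hypothetical, not taken from the study.

```python
import math

def transmission_loss(l_source_db, l_receive_db, wall_area_m2, absorption_m2):
    """Two-room field estimate of sound transmission loss:
    TL = L1 - L2 + 10*log10(S / A), where L1/L2 are the average sound
    pressure levels in the source and receiving rooms, S is the test wall
    area and A is the equivalent absorption area of the receiving room."""
    return l_source_db - l_receive_db + 10.0 * math.log10(wall_area_m2 / absorption_m2)

# Hypothetical 500 Hz band values for a karaoke-booth partition (not from the paper).
print(f"TL = {transmission_loss(95.0, 62.0, 12.0, 8.0):.1f} dB")
```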


Study on Listening Diagnosis to Vocal Sound and Speech (문진(聞診) 중 성음(聲音).언어(言語)에 대한 연구)

  • Kim, Yong-Chan; Kang, Jung-Soo
    • Journal of Physiology & Pathology in Korean Medicine / v.20 no.2 / pp.320-327 / 2006
  • This study was written to help readers understand listening diagnosis of vocal sound and speech. The purpose of listening diagnosis is to know the states of essence(精), Qi(氣), and spirit(神). Vocal sound and speech are produced by Qi and spirit: vocal sound originates from the center of the abdominal region(丹田) and comes out through the vocal organs, such as the lungs, larynx, nose, tongue, teeth, and lips, while speech is expressed through vocal sound and spirit. Both are controlled by the Five Vital Organs(五臟). The various changes of vocal sound and speech follow the rules of yin and yang. For example, whether the patient is inclined to talk or not indicates whether the illness is of heat or cold; whether the patient speaks loudly or quietly indicates whether the illness is severe or mild; whether the speech is clear or thick indicates whether the illness is exterior or interior; whether the voice is damp or dry indicates whether the illness belongs to yin or yang; and a change of voice indicates whether the illness is new or old. Symptoms involving changes of the five voices and five sounds, dumbness, and huskiness are due to abnormal vocal sound, while symptoms such as mad talk, mumbling, and sleep talking are due to abnormal speech.

Voice Driven Sound Sketch for Animation Authoring Tools (애니메이션 저작도구를 위한 음성 기반 음향 스케치)

  • Kwon, Soon-Il
    • The Journal of the Korea Contents Association / v.10 no.4 / pp.1-9 / 2010
  • Authoring tools for sketching the motion of characters to be animated have been studied, but natural interfaces for sound editing have not received sufficient attention. In this paper, I present a novel method in which a sound sample is selected by speaking sound-imitation words (onomatopoeia). An experiment with the method, based on the statistical models generally used for pattern recognition, showed up to 97% recognition accuracy. In addition, to address the difficulty of collecting data for newly enrolled sound samples, a GLR test based on only one sample of each sound-imitation word showed almost the same accuracy as the previous method.
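
The abstract does not give the exact statistical models used, but a GLR (generalized likelihood ratio) comparison between a single enrolled template and a query utterance can be sketched with simple Gaussian frame models. The sketch below is only an illustration of that idea; the feature dimension, function names, and random "MFCC-like" frames are hypothetical, not the paper's implementation.

```python
import numpy as np

def gaussian_loglik(frames):
    """Log-likelihood of feature frames under one diagonal-covariance
    Gaussian fitted to those same frames (maximum-likelihood fit)."""
    mu, var = frames.mean(axis=0), frames.var(axis=0) + 1e-6
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)))

def glr_distance(enrolled, query):
    """Generalized likelihood ratio distance between two feature sequences.
    A small value means one shared Gaussian explains both sequences almost
    as well as separate models, i.e. the query likely matches the template."""
    pooled = np.vstack([enrolled, query])
    return gaussian_loglik(enrolled) + gaussian_loglik(query) - gaussian_loglik(pooled)

# Hypothetical MFCC-like frames: one enrolled onomatopoeia template and one query.
rng = np.random.default_rng(0)
enrolled = rng.normal(0.0, 1.0, size=(40, 13))
query = rng.normal(0.1, 1.0, size=(35, 13))
print(f"GLR distance: {glr_distance(enrolled, query):.1f}")
```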

A study imitating human auditory system for tracking the position of sound source (인간의 청각 시스템을 응용한 음원위치 추정에 관한 연구)

  • Bae, Jeen-Man; Cho, Sun-Ho; Park, Chong-Kuk
    • Proceedings of the KIEE Conference / 2003.11c / pp.878-881 / 2003
  • To acquire a clear voice signal from a designated speaker through a surveillance camera, video-conference system, or hands-free microphone while eliminating interference noise, the speaker's position must first be estimated automatically. The basic algorithm for estimating a sound source position measures the TDOA (Time Difference Of Arrival) of the same signal arriving at two microphones. This work uses the ADF (Adaptive Delay Filter) [4] and the CPS (Cross Power Spectrum) [5], two of the most important TDOA analysis methods, and from them proposes real-time sound source localization together with an improved model, NI-ADF, that can estimate the source position in both directions. NI-ADF starts from the observation that the human auditory system accepts a sound through activated nerves once it exceeds a certain level at a certain frequency, and it exploits the level difference between microphones caused by diffraction when they are mounted on a system, yielding a practicable algorithm for real-time localization in both directions. Whereas existing bidirectional adaptive-filter algorithms require more than twice the computation of one-way measurement when locating a sound source, the proposed algorithm overcomes this weakness and estimates the position in both directions in real time.
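
As an illustration of the TDOA idea the abstract builds on (not of the ADF or NI-ADF models themselves), the sketch below estimates the inter-microphone delay from the cross power spectrum. The sampling rate, signal lengths, and the 20-sample delay are hypothetical.

```python
import numpy as np

def tdoa_cross_power(x1, x2, fs):
    """Estimate the time difference of arrival between two microphone
    signals from the cross power spectrum (inverse FFT of X2 * conj(X1)).
    A positive result means the signal reaches microphone 2 later."""
    n = len(x1) + len(x2) - 1
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cc = np.fft.fftshift(np.fft.irfft(X2 * np.conj(X1), n))
    return (int(np.argmax(cc)) - n // 2) / fs

# Hypothetical demo: the same noise burst reaches microphone 2 twenty samples later.
fs = 16000
rng = np.random.default_rng(1)
src = rng.normal(size=1024)
mic1 = src
mic2 = np.concatenate((np.zeros(20), src))[:1024]
print(f"estimated delay: {tdoa_cross_power(mic1, mic2, fs) * 1000:.3f} ms")
```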


Comparison of Sound Pressure Level and Speech Intelligibility of Emergency Broadcasting System at T-junction Corridor Space (T자형 복도 공간의 비상 방송용 확성기 배치별 음압 레벨과 음성 명료도 비교)

  • Jeong, Jeong-Ho; Lee, Sung-Chan
    • Fire Science and Engineering / v.33 no.1 / pp.105-112 / 2019
  • In this study, an architectural acoustics simulation was conducted to examine whether emergency broadcasting sound is transmitted clearly and uniformly in a T-junction corridor space. The sound absorption performance of the corridor and the location and spacing of the emergency broadcasting loudspeakers were varied, and the resulting distributions of sound pressure level and of speech transmission indices (STI, RASTI) were compared. The simulation showed that, for clear voice transmission, the emergency broadcasting loudspeaker should be installed approximately 10 m from the center of the T-junction corridor connection. It also showed that narrowing the 25 m installation interval specified in the NFSC delivers emergency broadcast sound even more clearly and evenly, and at sufficient volume.
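
The study's architectural acoustics simulation is not reproduced here; as a rough intuition for why loudspeaker spacing matters, the sketch below only applies the free-field direct-sound relation (about -6 dB per doubling of distance). The 90 dB-at-1 m source level and the evaluation distances are hypothetical, and a real corridor adds reverberant energy that this ignores.

```python
import math

def direct_field_spl(spl_at_1m_db, distance_m):
    """Free-field direct-sound level at a distance: about -6 dB per doubling."""
    return spl_at_1m_db - 20.0 * math.log10(distance_m)

# Hypothetical loudspeaker producing 90 dB at 1 m, evaluated along a corridor.
for d in (1.0, 5.0, 10.0, 12.5, 25.0):
    print(f"{d:5.1f} m : {direct_field_spl(90.0, d):5.1f} dB")
```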

How to Use EVT Figures for Actor Voice Training II (배우 음성 훈련을 위한 EVT 구조연습 활용방안 II)

  • Lee, Young-Su
    • The Journal of the Korea Contents Association / v.22 no.2 / pp.647-664 / 2022
  • This study explores how the Figures of the Estill Voice Training (EVT) model, which is grounded in speech science, can contribute to expanding the vocal expertise of actors who create characters in the acting arts, and examines how they can be used. A training model based on the fluidity and structural functionality of the voice-production organs differs from existing voice training, which focuses only on the resulting sound and therefore remains ambiguously abstract. Developing voluntary coordination of laryngeal and vocal-tract structures such as the false vocal folds, cricoid cartilage, velum, AES, and anchoring offers a scientifically efficient way to produce the intended artistic sounds, and it can serve as a methodology with which actors can creatively overcome the functional limitations they face. The EVT Figures, as training in the principles of harmony and coordination among the elements of voice production, have practical value as an alternative training model for actor voice education in Korea, where imagery and abstraction are still the mainstream.

L1-L2 Transfer in VOT and f0 Production by Korean English Learners: L1 Sound Change and L2 Stop Production

  • Kim, Mi-Ryoung
    • Phonetics and Speech Sciences / v.4 no.3 / pp.31-41 / 2012
  • Recent studies have shown that the stop system of Korean is undergoing a sound change in terms of two acoustic parameters, voice onset time (VOT) and fundamental frequency (f0). Because of a VOT merger of a consonantal opposition and the onset-f0 interaction, the relative importance of the two parameters has been changing in Korean, where f0 is now a primary cue and VOT a secondary cue for distinguishing lax from aspirated stops in both production and perception. In English, by contrast, VOT is the primary cue and f0 the secondary cue for contrasting voiced and voiceless stops. This study examines how Korean learners of English use the two L1 acoustic parameters in producing L2 English stops and whether the sound change in the L1 parameters affects L2 speech production. The data were collected from six adult Korean learners of English. The results show that they use not only VOT but also f0 to contrast L2 voiced and voiceless stops. Unlike the VOT variation among speakers, however, the effect of onset consonants on f0 in L2 English was steady and robust, indicating that f0 also plays an important role in cueing the [voice] contrast in L2 English. The results suggest that the important role of f0 in contrasting lax and aspirated stops in L1 Korean is transferred to the contrast of voiced and voiceless stops in L2 English, and they imply that, for Korean learners of English, f0 rather than VOT will serve as an important perceptual cue for contrasting voiced and voiceless stops in L2 English.
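
The study measures VOT and onset f0 from recorded stop-vowel tokens; as a hedged illustration of how an onset f0 value might be extracted, the sketch below picks the autocorrelation peak of a single voiced frame. The sampling rate, search range, and synthetic 180 Hz "vowel" are hypothetical and are not the study's measurement procedure.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=400.0):
    """Rough f0 estimate for a single voiced frame from its autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

# Hypothetical vowel-onset frame: a synthetic 180 Hz voiced signal, 40 ms long.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
frame = np.sin(2 * np.pi * 180 * t) + 0.3 * np.sin(2 * np.pi * 360 * t)
print(f"estimated onset f0: {estimate_f0(frame, fs):.1f} Hz")
```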

Efficient Implementation of IFFT and FFT for PHAT Weighting Speech Source Localization System (PHAT 가중 방식 음성신호방향 추정시스템의 FFT 및 IFFT의 효율적인 구현)

  • Kim, Yong-Eun; Hong, Sun-Ah; Chung, Jin-Gyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.1 / pp.71-78 / 2009
  • Sound source localization systems in service robot applications estimate the direction of a human voice. Time delay information obtained from a few separate microphones is widely used to estimate the sound direction: the correlation between two signals is computed to obtain their time delay, and the PHAT weighting function can be applied to significantly improve the accuracy of the estimation. However, the FFT and IFFT operations in the PHAT weighting function occupy more than half of the area of the sound source localization system, so efficient FFT and IFFT designs are essential for an IP implementation of the system. In this paper, we propose an efficient FFT/IFFT design method based on the characteristics of the human voice.
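
The paper's contribution is an efficient FFT/IFFT hardware design, which is not shown here; the sketch below only illustrates the GCC-PHAT computation that those FFT and IFFT blocks serve, with the PHAT weighting normalizing the cross power spectrum to unit magnitude before the inverse transform. Signal lengths, the 8-sample delay, and the sampling rate are hypothetical.

```python
import numpy as np

def gcc_phat(x1, x2, fs):
    """GCC-PHAT delay estimate: the cross power spectrum is divided by its
    magnitude (PHAT weighting) before the inverse FFT, which keeps only the
    phase and sharpens the correlation peak used for direction estimation."""
    n = len(x1) + len(x2) - 1
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    R = X2 * np.conj(X1)
    R /= np.abs(R) + 1e-12            # PHAT weighting
    cc = np.fft.fftshift(np.fft.irfft(R, n))
    return (int(np.argmax(cc)) - n // 2) / fs

# Hypothetical two-microphone frame with a 0.5 ms (8-sample) inter-channel delay.
fs = 16000
rng = np.random.default_rng(2)
voice = rng.normal(size=2048)
mic1, mic2 = voice, np.concatenate((np.zeros(8), voice))[:2048]
print(f"GCC-PHAT delay: {gcc_phat(mic1, mic2, fs) * 1000:.3f} ms")
```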

Case Study of a Dog Vocalizing Human's Words (사람의 말을 발성하는 개의 사례 연구)

  • Kyon, Doo-Heon; Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.31 no.4 / pp.235-243 / 2012
  • This paper studies the characteristics and causes of the sounds produced in cases of dogs vocalizing human words, distinguishing passive from active vocalization. In previously reported cases, the dog understood the characteristics of its owner's voice and imitated the sound with its own vocal organs; this is passive vocalization, a temporary voice imitation with no communicative function. In contrast, the recently reported case of a dog vocalizing words such as "Um-ma" and "Nu-na-ya" shows a vocalization pattern clearly distinguished from the prior cases: the dog actively and repeatedly vocalizes the relevant words according to the circumstances and carries out basic communication and interaction with its owner. The reasons the dog can actively vocalize human words appear to be its high intelligence, its intimacy with its owner, the active reactions of people to its pronunciations, and so forth. These results can be used in studies that investigate animal sounds for the possibility of vocalization and language learning.

A survey on noise generation and conversation interruption in cafes (카페 공간의 소음과 대화 방해에 대한 설문조사)

  • Jeong, Jeong-Ho
    • The Journal of the Acoustical Society of Korea / v.40 no.6 / pp.660-670 / 2021
  • As people use cafes for various purposes, it can be difficult to hear conversations with one's companions because of the noise and background music from other people nearby, and improvements related to cafe noise and sound, such as making companions' conversations easier to hear, are needed. A total of 212 adult men and women participated in a questionnaire survey on cafe acoustics and noise conditions. About two-thirds of the respondents said that they did not prefer noisy cafes and that cafe noise had a negative effect on them. The major source of noise in cafes is the sound of other people around the user, and more than 40 % of the respondents said either that they could not hear the conversation with their companions well because of surrounding sounds or that they were concerned about their own conversation being transmitted to those around them. The survey results indicate that improvements are needed to secure the voice privacy as well as the voice intelligibility of cafe users.