• Title/Summary/Keyword: Sounds Effects

134 search results (processing time: 0.024 s)

선행음 및 후속음이 음원의 방향지각에 미치는 영향에 관한 연구 (A study on the effect of leading sound and following sound on sound localization)

  • 이채봉
    • 융합신호처리학회논문지
    • /
    • Vol. 16, No. 2
    • /
    • pp.40-43
    • /
    • 2015
  • This study examined, through listening experiments, how a sound preceding the reference sound (leading sound) and a sound following it (following sound), producing an effect similar to continuous masking, influence sound image localization as the level difference and the inter-stimulus interval (ISI) were varied, and reviewed the results. The reference sound was presented for 2 ms, and the leading and following sounds for 10 ms. The stimulus was a 1 kHz sine wave, and the interaural time difference (ITD) between the two ears was set to 0.5 ms. Level differences of 0, -10, -15, and -20 dB were presented. The experimental results showed that the leading sound influences sound image localization more strongly than the following sound, and that the influence of the leading sound depends on the ISI value, with differences in its effect on localization appearing when the ISI value is small.
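The stimulus configuration described in this abstract (a 1 kHz sine burst with a 0.5 ms interaural time difference and level differences down to -20 dB) can be sketched in Python. The 48 kHz sample rate and all function names below are assumptions for illustration, not details from the paper:

```python
import numpy as np

FS = 48_000  # sample rate in Hz; an assumption, not stated in the abstract

def tone_burst(freq_hz, dur_ms, level_db=0.0, fs=FS):
    """A sine-wave burst of the given duration and relative level (dB)."""
    n = int(fs * dur_ms / 1000)
    t = np.arange(n) / fs
    return 10 ** (level_db / 20) * np.sin(2 * np.pi * freq_hz * t)

def binaural(burst, itd_ms, fs=FS):
    """Return a (left, right) pair with the right channel delayed by the ITD."""
    delay = int(fs * itd_ms / 1000)
    left = np.concatenate([burst, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), burst])
    return left, right

# Parameters from the abstract: 1 kHz sine, 2 ms reference burst,
# 10 ms leading/following bursts, ITD 0.5 ms, levels 0 to -20 dB.
reference = binaural(tone_burst(1000, 2), itd_ms=0.5)
leading = [binaural(tone_burst(1000, 10, level), 0.5)
           for level in (0, -10, -15, -20)]
```

In an actual experiment the leading burst, the ISI gap, and the reference burst would be concatenated per trial; the sketch stops at generating the binaural bursts themselves.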

산모의 심장소리가 미숙아의 체중, 생리적 반응 및 행동상태에 미치는 효과 (The Effects of Maternal Heart Sound on the Weight, Physiologic Responses and Behavioral States of Premature Infants)

  • 염미경;안영미;서화숙;전용훈
    • Child Health Nursing Research
    • /
    • Vol. 16, No. 3
    • /
    • pp.211-219
    • /
    • 2010
  • Purpose: The study was done to measure the effects of maternal heart sounds on the body weight, physiologic responses (heart rate [HR] and cortisol), and behavioral states of preterm infants. Methods: Thirty-five preterm infants were recruited from a neonatal intensive care unit at a university hospital. Institutional Review Board approval and informed consent were obtained. The infants were assigned either to an experimental group (n=18), which received auditory stimulation for 7 days of life in the form of continuous maternal heart sounds delivered by an MP3 player attached inside the incubator, or to a control group (n=17) without any auditory stimulation. The outcome variables, daily variations in weight, HR, and behavioral states, and differences in cortisol, were analyzed. Results: There were differences between the groups in variations of daily weight (F=3.431, p=.011) and in cortisol (t=3.184, p=.006), but no differences in variations of daily HR (F=0.331, p=.933) or behavioral states (F=1.842, p=.323). Conclusion: The findings support the safety of continuous maternal heart sounds, as no changes in HR or behavioral states occurred, and their efficacy, as weight increased and cortisol decreased. This auditory stimulation may lead to more efficient utilization of energy in preterm infants by consistently providing familiar sounds from intrauterine life and blocking noxious sounds from the NICU environment.

동화를 이용한 음운인식활동이 저소득층 초등 방과후 교실 1, 2 학년 아동의 읽기, 학습동기 및 자아개념에 미치는 영향 (Phonological Awareness Activities Using Story Books : Effects on Reading, Self-Concept, and Learning Motivation in an After-School Program for 1st and 2nd Grade Low Income Children)

  • 이지현;김유정;이정아
    • 아동학회지
    • /
    • Vol. 27, No. 5
    • /
    • pp.123-141
    • /
    • 2006
  • The phonemic awareness program consisted of 45 activities emphasizing various sounds in speech and letter names using storybooks. The subjects were thirty 1st and 2nd grade low-income children (15 experimental and 15 control) attending an after-school program in Seoul. Pre- and post-tests assessed the children's reading, self-concept, and learning motivation. The experimental group children had rich opportunities to deal with and discuss sounds, syllables, phonemes, and the names of Korean alphabet letters during storybook reading, games, and play over a 12-week period, while the control group children were provided with worksheets, subject tutoring, and homework guidance. Results showed that the phonemic activities were an effective and useful way to enhance children's reading ability, self-concept, and learning motivation.


단어빈도와 단어규칙성 효과에 기초한 합성음 평가 (The text-to-speech system assessment based on word frequency and word regularity effects)

  • 남기춘;최원일;이동훈;구민모;김종진
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 November 2002 Conference Proceedings
    • /
    • pp.105-108
    • /
    • 2002
  • In the present study, the intelligibility of synthesized speech sounds was evaluated using psycholinguistic and fMRI techniques. In order to see the difference in recognizing words between natural and synthesized speech sounds, word regularity and word frequency were varied. The results of Experiments 1 and 2 showed that the intelligibility difference of the synthesized speech comes from word regularity: there was smaller activation of the auditory areas in the brain and slower recognition times for the regular words.


공공장소의 음풍경 재현을 위한 가상음장현장재현시스템 개발 (Virtual Acoustics Field Simulation System for the Soundscape Reproduction in Public)

  • 송혁;국찬;장길수
    • 한국소음진동공학회논문집
    • /
    • Vol. 14, No. 4
    • /
    • pp.319-326
    • /
    • 2004
  • The soundscape is a novel attempt to offer comfortable sound environments in urban public spaces by adding pleasant sounds and removing disagreeable ones. The most important factors to be considered are determining what kinds of sounds to offer and how to adjust them to changing circumstances. However, installing, maintaining, and adjusting a soundscape system directly in the field entails numerous problems as well as high costs. Thus, it is essential to devise a test method to analyze and verify the outcome before the actual installation in the field takes place. This study aims at devising an instrument system that makes it possible to control numerous variables with great ease, reproduce the most agreeable sound sources, and verify the effects in virtual simulated settings; it is named the Virtual Acoustic Field Simulation System (VAFSS).

단어빈도와 단어규칙성 효과에 기초한 합성음 평가 (The Text-to-Speech System Assessment Based on Word Frequency and Word Regularity Effects)

  • 남기춘;최원일;김충명;최양규;김종진
    • 대한음성학회지:말소리
    • /
    • No. 53
    • /
    • pp.61-74
    • /
    • 2005
  • In the present study, the intelligibility of synthesized speech sounds was evaluated using psycholinguistic and fMRI techniques. In order to see the difference in recognizing words between natural and synthesized speech sounds, word regularity and word frequency were varied. The results of Experiments 1 and 2 showed that the intelligibility difference of the synthesized speech comes from word regularity. In the case of synthesized speech, regular words were recognized more slowly than irregular words, and there was smaller activation of the auditory areas in the brain for the regular words than for the irregular words.
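The regularity effect this abstract reports can be illustrated with a small aggregation sketch over a 2x2 (speech type x regularity) design. The trial data and field names below are hypothetical; the abstract reports only the direction of the effect (regular words recognized more slowly than irregular words under synthesis), not raw recognition times:

```python
from statistics import mean

# Hypothetical lexical-decision recognition times (ms), for illustration only.
trials = [
    {"speech": "synthesized", "regularity": "regular",   "rt_ms": 720},
    {"speech": "synthesized", "regularity": "regular",   "rt_ms": 740},
    {"speech": "synthesized", "regularity": "irregular", "rt_ms": 650},
    {"speech": "synthesized", "regularity": "irregular", "rt_ms": 660},
    {"speech": "natural",     "regularity": "regular",   "rt_ms": 600},
    {"speech": "natural",     "regularity": "irregular", "rt_ms": 605},
]

def cell_mean(speech, regularity):
    """Mean RT for one cell of the 2x2 design."""
    return mean(t["rt_ms"] for t in trials
                if t["speech"] == speech and t["regularity"] == regularity)

# Regularity effect = mean regular RT minus mean irregular RT, per speech type;
# a positive value means regular words were recognized more slowly.
effect = {s: cell_mean(s, "regular") - cell_mean(s, "irregular")
          for s in ("synthesized", "natural")}
```

With data following the reported pattern, the effect is large for synthesized speech and near zero for natural speech.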


도로변 방음 수림대 유형별 시뮬레이션 모형개발 및 평가 (Model Development and Appraisal by Visual Simulation about Soundproof Grove Types of Street Side)

  • 김성균;정태영
    • 농촌계획
    • /
    • Vol. 11, No. 2
    • /
    • pp.59-69
    • /
    • 2005
  • Because of the increasing number of cars, many highways are being constructed, and the noise of passing cars affects the surrounding areas. In consideration of this, some alternatives and research on soundproof facilities are proceeding, but the aesthetic approach has not been considered. Therefore, this research focuses on the soundproofing effects of each type, effective simulation methods, and visual assessment and comparison of the landscape before and after simulation. Soundproof facilities are divided largely into the soundproof barrier, the soundproof mounding, and the soundproof grove. The soundproof grove has three main functions. First, leaves and branches absorb sound vibrations. Second, leaves absorb sound, and branches obstruct it. Third, by means of the sounds of shaking leaves, the forest can offset noise. This research proceeded by classifying soundproof grove types and investigating visual simulation methods. We made a visual simulation for each type and assessed the landscape for each type.

Teaching English Pronunciation and Listening Skills

  • Choi, Jae-Oh
    • 영어어문교육
    • /
    • Vol. 13, No. 2
    • /
    • pp.1-23
    • /
    • 2007
  • The purpose of this research is to explore the effects of systematically teaching English pronunciation and listening. Focusing on phonemes and on words in pairs and sentences, the sound systems of the English and Korean languages are dealt with in conjunction with the test data. This paper first discusses the systemic, or primary, interference and the habitual, or secondary, interference that hinder comprehension of certain English sounds. Second, the analysis of input and output test data on the contrasting vowels and consonants shows statistical significance in terms of the probability (p value) of the t-test. Third, the comparative data, expressed as percentages of correct answers on contrasting vowel and consonant sounds, expound the different sound systems of the English and Korean languages. With these data, problems in pronouncing and listening to English, and the factors that may cause these problems, are analyzed so that they can be used as a guideline for a systematic approach to teaching English learners, thus leading to more satisfactory performance.
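The input/output comparison this abstract describes can be sketched as a paired-samples t test on pre-/post-test minimal-pair scores. The scores below are hypothetical placeholders, since the paper's data are not reproduced here:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-learner scores on contrasting vowel/consonant minimal pairs,
# before and after instruction; for illustration only.
pre  = [12, 14, 10, 15, 11, 13, 12, 14]
post = [16, 17, 13, 18, 14, 16, 15, 17]

def paired_t(before, after):
    """Paired-samples t statistic: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

t_stat = paired_t(pre, post)
```

The resulting t statistic would then be compared against the t distribution with n-1 degrees of freedom to obtain the p value the paper reports.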


Immediate Side Shift가 Pantographic Reproducibility Index에 끼치는 영향에 관한 연구 (A Study on the Effects of Immediate Side Shift to the Pantographic Reproducibility Index)

  • 남천우;한경수
    • Journal of Oral Medicine and Pain
    • /
    • Vol. 12, No. 1
    • /
    • pp.75-83
    • /
    • 1987
  • This study was designed to investigate the effects of TMJ incoordination on condylar movements, especially the ISS. Sounds are one of the symptoms of TMJ incoordinated disorder and may cause changes in the mandibular movement trajectory. 19 students with only TMJ sounds and 16 students with no TMJ problems participated in this study. The subjects performed right lateral, left lateral, and protrusive movements, repeating each movement 3 times. A Pantronic was used to record the measures of condylar movement paths. The obtained results were as follows: 1. The mean values of RISS and LISS in the control group were 0.29 mm and 0.36 mm respectively, and those in the experimental group were 0.49 mm and 0.41 mm respectively. The mean value of RISS was higher in the experimental group than in the control group. 2. Correlation coefficients between PRI and RISS/LISS were slightly higher in the experimental group than in the control group; therefore, PRI was more likely to be affected by ISS in the experimental group. 3. In the control group, PRI was correlated with RISS, LORB, RPRO, and LPRO, but in the experimental group PRI was not correlated with those items. From this study, the author found that condylar movement was stable in the control group.


A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Seok, MooHyun;Kim, HyungGi
    • International Journal of Contents
    • /
    • Vol. 16, No. 4
    • /
    • pp.68-77
    • /
    • 2020
  • VR (Virtual Reality) contents make the audience perceive a virtual space as real through the virtual Z axis, which creates a space that could not be created in 2D because of the distance between the eyes of the audience. This visual change has led to the need for technological changes to the sound and sound sources inserted into VR contents. However, studies on increasing immersion in VR contents are still more focused on the scientific and visual fields. This is because composing and producing VR sound requires professional views in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering has difficulty reflecting changes in user interaction or time and space, because the sound effects, script sound, and background music are directed according to the storyboard organized by the director. However, it has the advantage of producing the sound effects, script sound, and background music in one track without having to go through a coding phase. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music in separate files. It can increase immersion by reflecting user interaction or time and space, but it can also suffer from noise cancelling and sound collisions. Therefore, in this study, the following methods were devised and utilized to produce sound for the VR contents "A Midsummer Night" so as to take advantage of each sound-making technology. First, the storyboard is analyzed according to the user's interaction, to determine the sound effects, script sound, and background music required for each user interaction. Second, the sounds are classified and analyzed as 'simultaneous sound' and 'individual sound'. Third, interaction coding is done for the sound effects, script sound, and background music produced in the simultaneous and individual sound categories. Then, the contents are completed by applying the sound to the video. Through this process, sound quality inhibitors such as noise cancelling can be removed while allowing sound production that fits user interaction and time and space.
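A minimal sketch of the 'simultaneous sound' vs. 'individual sound' classification described in this abstract: simultaneous sounds always play together, while individual sounds are triggered by a specific user interaction. The sound names, trigger events, and dictionary layout below are hypothetical, not taken from the production:

```python
# Hypothetical sound bank: each entry is either always-on ('simultaneous')
# or bound to one user-interaction event ('individual').
SOUND_BANK = {
    "background_music": {"mode": "simultaneous"},
    "narration_intro":  {"mode": "simultaneous"},
    "door_creak":       {"mode": "individual", "trigger": "open_door"},
    "fairy_chime":      {"mode": "individual", "trigger": "touch_fairy"},
}

def sounds_to_play(event=None):
    """All simultaneous sounds, plus any individual sound tied to the event."""
    playing = [name for name, s in SOUND_BANK.items()
               if s["mode"] == "simultaneous"]
    if event is not None:
        playing += [name for name, s in SOUND_BANK.items()
                    if s.get("trigger") == event]
    return playing
```

Keeping the two categories in separate files, as the study describes, lets the interaction code mix an individual sound into the always-playing simultaneous track only when its event fires.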