• Title/Summary/Keyword: Auditory Analysis

A Study on the Communication System Design for Auditory Disabled (청각장애인을 위한 의사소통 시스템 디자인 연구)

  • Yang, Sung-Ho;Song, Ji-Won
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.1172-1175
    • /
    • 2009
  • This study aims to develop communication devices and interfaces that address the communication needs of the hearing impaired. Through three focus group interviews (FGIs) with deaf persons and sign-language interpreters, we studied the communication methods, devices, and needs of the deaf. On the basis of our analysis, we propose a communication framework to improve their means of communication with hearing or other deaf persons. We then designed a communication system based on the proposed framework: it provides functions for remote sign-language interpretation services for close-range conversations and conversations over the phone. Details are presented on the design of interfaces for video calls, text messages, and digital memos that address the conversation patterns of the deaf. The system also includes hardware form factors for video phones that facilitate sign-language conversations and mitigate other auditory problems in daily life, such as hearing doorbells. The design concept was verified through a test with six deaf users.

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.8
    • /
    • pp.92-101
    • /
    • 2018
  • As interactive AI speakers have become popular, voice recognition is regarded as an important vehicle-driver interaction method for autonomous driving situations. The purpose of this study is to examine whether multimodal interaction, in which feedback is delivered both auditorily and through a visual AI character on screen, optimizes the user experience better than an auditory-only mode. Participants performed music selection and adjustment tasks through the AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors, nor on continuance intention. Rather, the auditory-only mode was more effective than the multimodal mode for the information quality factor. In the semi-autonomous driving stage, which demands the driver's cognitive effort, multimodal interaction is therefore not effective in optimizing the user experience compared to single-mode interaction.

Effects of Rhythmic Auditory Stimulation Using Music on Gait With Stroke Patients

  • Oh, Yong-seop;Kim, Hee-soo;Woo, Young-keun
    • Physical Therapy Korea
    • /
    • v.22 no.3
    • /
    • pp.81-90
    • /
    • 2015
  • This study aimed to determine the effects of rhythmic auditory stimulation (RAS) using music and a metronome on the gait of stroke patients. Thirteen female and 15 male volunteers were randomly allocated to two groups: a group receiving RAS using music and a metronome (the experimental group; $n_1=14$) and a group receiving RAS using a metronome only (the control group; $n_2=14$). The affected side was the left side in 15 subjects and the right side in 13 subjects. The mean age of the subjects was 56.6 years, and the mean time since stroke onset was 8.6 months. The intervention was applied for 30 minutes per session, once a day, 5 times a week, for 4 weeks. To measure gait improvement, we measured gait velocity, cadence, stride length, and double limb support using GAITRite and body center sway angle using an accelerometer, and conducted the Timed Up-and-Go test and Functional Gait Assessment, all before and after the experiment. The paired t-test was used for within-group comparisons before and after the intervention, and analysis of covariance was used for between-group comparisons after the intervention, with statistical significance set at ${\alpha}=.05$. Within each of the two groups, significant differences in all of the dependent variables were observed between before and after the experiment (p<.05). In the comparison between the two groups, the experimental group showed significantly greater improvements in all dependent variables than the control group (p<.05). Our results suggest that when applying RAS to stroke patients, the combination of music and a metronome is more effective than a metronome alone in improving gait.
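
As a rough illustration of the statistics named in this abstract, the sketch below runs the within-group paired t-test and the between-group ANCOVA (post-test adjusted for baseline) in Python. The file name and column names are hypothetical, and gait velocity stands in for the full set of GAITRite outcomes.

```python
# Minimal sketch, assuming a hypothetical CSV with columns: group, pre_velocity, post_velocity.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("gait_outcomes.csv")

# Within-group pre vs. post comparison, as in the reported paired t-test
for group, sub in df.groupby("group"):
    t, p = stats.ttest_rel(sub["pre_velocity"], sub["post_velocity"])
    print(f"{group}: paired t = {t:.2f}, p = {p:.4f}")

# Between-group comparison after the intervention, adjusted for baseline (ANCOVA)
model = smf.ols("post_velocity ~ C(group) + pre_velocity", data=df).fit()
print(model.summary())
```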

The Effects of Habituation and Sensitization on Psychophysiological Differentiation of Responses to Auditory Stimulation with Automobile Horns

  • Estate M. Sokhadze;Sohn, Jin-Hun
    • Science of Emotion and Sensibility
    • /
    • v.3 no.2
    • /
    • pp.17-28
    • /
    • 2000
  • The psychoacoustic characteristics of automobile horns play a significant role in the resulting subjective evaluations and psychophysiological reactions. However, comparing and differentiating physiological responses to commercially available horns is a complicated task due to the small contrast in the technical features of horns and the influence of processes such as habituation on physiological outcomes as the number of auditory stimulation trials increases. In a study of 10 college students, we performed a comparative analysis of the reactivity of physiological responses mediated by the central and autonomic nervous systems in order to identify the role of habituation in the decrement of psychophysiological responsivity and to assess the ability to differentiate the subjectively most and least preferred, as well as the most and least appropriate, horns from their physiological manifestations. EEG and autonomic responses to 7 automobile horns were analyzed over 3 blocks of trials, with the order of stimuli varied and the acoustic parameters of the horns changed in each block; responses were thus analyzed for a total of 21 trials of auditory stimulation. Electrodermal and cardiovascular responses showed different reactivity patterns to repeated stimulation: skin conductance measures habituated, cardiac reactivity showed no signs of habituation, and the vascular response demonstrated sensitization. The temporal EEG exhibited marked habituation of fast beta band power, while the alpha-blocking effect did not habituate during the course of the experiment. Differentiation of physiological responses to the most and least preferred and appropriate horns was possible in our study; however, some cardiovascular reactivity measures differentiated throughout the entire course of the experiment, while EEG and electrodermal parameters showed significant differences only during the first block of trials and were later affected by habituation.
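
One simple way to quantify the habituation and sensitization patterns described above is to regress response amplitude on trial number and read the sign of the slope. The sketch below does this for a synthetic 21-trial series; it illustrates the general approach, not the authors' actual analysis.

```python
# Illustrative only: synthetic skin conductance response amplitudes across 21 trials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = np.arange(1, 22)                      # 3 blocks x 7 horns = 21 stimulation trials
scr = 2.0 * np.exp(-0.1 * trials) + rng.normal(0, 0.1, trials.size)  # decaying (habituating) series

slope, intercept, r, p, se = stats.linregress(trials, scr)
trend = "habituation" if slope < 0 else "sensitization"
print(f"slope = {slope:.3f} per trial (p = {p:.3f}) -> suggests {trend}")
```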

Korean Native Speakers' Auditory Cognitive Reactions to Chinese Korean-learners' Pronunciation: Centered on the Utterance of Consonants in the Korean Language (중국인 학습자의 한국어 발음에 대한 한국인 모어 화자의 청각 인지 반응 -중국인 학습자의 자음 발음을 중심으로-)

  • Kim, Ji-hyung
    • Journal of Korean language education
    • /
    • v.28 no.2
    • /
    • pp.37-60
    • /
    • 2017
  • This research focuses on how Korean native speakers perceive Chinese Korean-learners' pronunciation. The objective of the study is to lay the cornerstone for establishing effective teaching-learning strategies for the education of the Korean phonetic system. We present the results of an experiment showing how native speakers of Korean identify Chinese Korean-learners' pronunciation of consonants. First, stimulus tones were created from the original utterances of Chinese Korean-learners, and seven scripts were prepared using the Praat program. The subjects were then asked to choose what the phonetic materials sounded like. The results are represented as the ratio of the frequency of Korean native speakers' responses to each utterance to the total frequency. In addition, a paired t-test was conducted to explore any relationship with changes in learners' proficiency in the Korean phonetic system, ranging from beginners to advanced learners. The outcome shows that the mistakes Chinese Korean-learners make in pronouncing Korean consonants are relatively well reflected in Korean native speakers' auditory cognitive reactions. Concretely, listeners have some difficulty differentiating lax consonants from aspirates in the cases of plosives and affricates, but relatively little trouble with fortes; in detailed aspects, however, there is also a slight difference related to articulatory position. To provide an effective teaching method for the Korean phonetic system, it is essential to understand learners' phonetic mistakes through the precise analysis of data in terms of 'production.' A more meticulous observation of the 'phenomena' must also be made through verification from the perspective of 'reception,' as attempted in this study. A more thorough diagnosis using both methodologies makes it possible to lay the foundation for developing effective teaching-learning strategies for the instruction of the Korean phonetic system, and this study is significant in making such an attempt.
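
The response ratios reported in this abstract amount to a row-normalized confusion table of intended versus perceived consonants. A minimal sketch, using made-up responses rather than the paper's data:

```python
# Hypothetical listener responses; each row pairs the consonant a learner intended
# with the consonant a native listener reported hearing.
import pandas as pd

responses = pd.DataFrame({
    "intended":  ["ㄷ", "ㄷ", "ㄷ", "ㅌ", "ㅌ", "ㄸ"],
    "perceived": ["ㅌ", "ㄷ", "ㅌ", "ㅌ", "ㄷ", "ㄸ"],
})

# Row-wise proportions: for each intended consonant, the share of each perceived response
ratios = pd.crosstab(responses["intended"], responses["perceived"], normalize="index")
print(ratios)
```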

Prognostic Factors Affecting Surgical Outcomes in Squamous Cell Carcinoma of External Auditory Canal

  • Nam, Gi-Sung;Moon, In Seok;Kim, Ji Hyung;Kim, Sung Huhn;Choi, Jae Young;Son, Eun Jin
    • Clinical and Experimental Otorhinolaryngology
    • /
    • v.11 no.4
    • /
    • pp.259-266
    • /
    • 2018
  • Objectives. Carcinomas of the external auditory canal (EAC) are rare, and their management remains challenging. Previous studies seeking prognostic factors for EAC cancers included cancers other than carcinomas. In this study, we analyzed the treatment outcomes of, prognostic factors for, and survival rates associated specifically with squamous cell carcinoma (SCC) of the EAC. Methods. A retrospective review of 26 consecutive patients diagnosed with SCC of the EAC over a 10-year period was performed in terms of clinical presentation, stage, choice of surgical procedure, and adjunct therapy. Overall survival (OS) and recurrence-free survival (RFS) were calculated, and univariate analysis of prognostic factors was performed. Results. The median age of the 26 patients was 63 years (range, 40 to 72 years); 16 males and 10 females were included. According to the modified University of Pittsburgh staging system, the T stages were T1 in 11, T2 in six, T3 in four, and T4 in five cases. The surgical procedures employed were wide excision in three cases, lateral temporal bone resection (LTBR) in 17, extended LTBR in four, and subtotal temporal bone resection in two. Two patients underwent neoadjuvant chemotherapy, and two underwent adjuvant chemotherapy. One patient received preoperative radiation therapy, and eleven received postoperative radiation therapy. Of the possible prognostic factors examined, advanced preoperative T stage and advanced overall stage were significant predictors of RFS, but not of OS. Conclusion. Advanced T stage and overall stage were associated with decreased survival after surgical treatment in patients with SCC of the EAC, highlighting the importance of clinical vigilance and early detection.
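
Survival analyses of this kind (OS/RFS estimation with univariate testing of prognostic factors) are commonly implemented with Kaplan-Meier curves and a log-rank test. A hedged sketch using the lifelines package, with a hypothetical cohort file and column names:

```python
# Assumed columns: rfs_months (follow-up time), recurred (1 = event), t_stage ("T1".."T4").
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("eac_scc_cohort.csv")

kmf = KaplanMeierFitter()
kmf.fit(df["rfs_months"], event_observed=df["recurred"], label="all patients")
print("median RFS:", kmf.median_survival_time_)

# Univariate comparison of early vs. advanced T stage
early = df[df["t_stage"].isin(["T1", "T2"])]
advanced = df[df["t_stage"].isin(["T3", "T4"])]
result = logrank_test(early["rfs_months"], advanced["rfs_months"],
                      event_observed_A=early["recurred"], event_observed_B=advanced["recurred"])
print(f"log-rank p = {result.p_value:.4f}")
```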

Comparison of McGurk Effect across Three Consonant-Vowel Combinations in Kannada

  • Devaraju, Dhatri S;U, Ajith Kumar;Maruthy, Santosh
    • Journal of Audiology & Otology
    • /
    • v.23 no.1
    • /
    • pp.39-48
    • /
    • 2019
  • Background and Objectives: The influence of the visual stimulus on the auditory component in the perception of auditory-visual (AV) consonant-vowel syllables has been demonstrated in different languages. Inherent properties of the unimodal stimuli are known to modulate AV integration. The present study investigated how the amount of McGurk effect (an outcome of AV integration) varies across three different consonant combinations in the Kannada language, and how unimodal syllable identification bears on the amount of McGurk effect. Subjects and Methods: Twenty-eight individuals performed an AV identification task with ba/ga, pa/ka, and ma/ṇa consonant combinations in AV congruent, AV incongruent (McGurk combination), audio-alone, and visual-alone conditions. Cluster analysis was performed on the identification scores for the incongruent stimuli to classify the individuals into two groups: one with high and the other with low McGurk scores. Differences in the audio-alone and visual-alone scores between these groups were then compared. Results: The results showed significantly higher McGurk scores for ma/ṇa compared to the ba/ga and pa/ka combinations in both the high and low McGurk score groups. No significant difference was noted between the ba/ga and pa/ka combinations in either group. Identification of /ṇa/ presented in the visual-alone condition correlated negatively with higher McGurk scores. Conclusions: The results suggest that the final percept following AV integration is not exclusively explained by unimodal identification of the syllables; other factors may also contribute to the final percept.
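
The grouping step described above, splitting participants into high and low McGurk scorers by cluster analysis, can be sketched as follows. The paper does not specify the clustering algorithm, so k-means is used purely for illustration, and the scores are synthetic.

```python
# Synthetic scores for 28 participants; not the study's data.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
mcgurk = np.concatenate([rng.uniform(0.6, 1.0, 14), rng.uniform(0.0, 0.4, 14)])  # incongruent-trial scores
visual_alone = rng.uniform(0.3, 1.0, 28)                                         # e.g., visual-alone /ṇa/ scores

# Two-cluster split on the McGurk scores
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mcgurk.reshape(-1, 1))
high_label = labels[np.argmax(mcgurk)]

# Compare a unimodal score between the high- and low-McGurk groups
t, p = stats.ttest_ind(visual_alone[labels == high_label], visual_alone[labels != high_label])
print(f"visual-alone score, high vs. low McGurk group: t = {t:.2f}, p = {p:.3f}")
```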

The Effect of Auditory Condition on Voice Parameter of Teacher (청각 환경이 교사의 음성 파라미터에 미치는 영향)

  • Lee Ju-Young;Baek Kwang-Hyun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.5
    • /
    • pp.207-212
    • /
    • 2006
  • The purpose of this study was to compare voice parameters under different auditory conditions (normal/noise/music) between a teacher group and a control group. Statistical analysis showed that the teacher group had higher jitter (%) and shimmer (%) values than the control group, indicating that the teachers had larger variations in the pitch and loudness of their voices. In the teacher group, the voice under the noisy condition showed a higher fundamental frequency than under the normal condition, though the fundamental frequency did not differ significantly between the noisy and musical conditions. In the control group, although the voice under the noisy condition also showed a higher fundamental frequency than under the normal condition, the fundamental frequency did differ significantly between the noisy and musical conditions.
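
For reference, local jitter and shimmer, the two parameters compared in this study, are the mean absolute cycle-to-cycle change in pitch period and in peak amplitude, each expressed relative to its mean. A minimal numpy sketch with placeholder values (real values would come from a pitch extraction tool such as Praat):

```python
# Placeholder cycle measurements; not data from the study.
import numpy as np

periods = np.array([0.0081, 0.0079, 0.0082, 0.0080, 0.0083])   # pitch periods in seconds
amplitudes = np.array([0.52, 0.55, 0.50, 0.54, 0.53])          # cycle peak amplitudes

jitter_percent = 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)
shimmer_percent = 100 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
f0_hz = 1.0 / np.mean(periods)                                  # fundamental frequency

print(f"jitter = {jitter_percent:.2f}%, shimmer = {shimmer_percent:.2f}%, F0 ≈ {f0_hz:.1f} Hz")
```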

Comparative Study of Functional Magnetic Resonance Imaging by Global Scaling Analysis (Global Scaling 분석방법에 따른 기능적 자기공명영상의 비교 연구)

  • Yoo, Dong-Soo
    • Investigative Magnetic Resonance Imaging
    • /
    • v.10 no.1
    • /
    • pp.26-31
    • /
    • 2006
  • Purpose: To evaluate the effect of global scaling analysis on brain activation in sensory and motor functional MR imaging studies. Materials and Methods: Four normal subjects without an abnormal neurological history were included. Arm extension-flexion movement was used for the motor task, and 1 kHz pure-tone stimulation was used for the auditory task. Functional magnetic resonance imaging was performed on a 3T MRI scanner (GE, Milwaukee, USA) using the BOLD-EPI technique, and SPM2 was employed for data analysis. Brain activation images were obtained with and without global scaling while fixing the other processing parameters, such as motion correction and realignment. Results: The difference in brain activation between no scaling and global scaling was not large in the case of right upper extremity movement (p<0.000001). For the auditory test, the analysis with global scaling showed larger activation than the analysis without global scaling (p<0.05). Conclusion: Caution must be taken when analyzing functional imaging data with global scaling, especially in functional studies of small local BOLD signal changes.
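
Global scaling, the preprocessing choice examined here, proportionally rescales each volume so that its global mean equals a common target before statistical analysis; SPM performs this internally when the option is enabled. A simplified sketch with nibabel and numpy, using a hypothetical file name and a crude in-brain mask:

```python
# Proportional global scaling of a 4D BOLD series to a global mean of 100 (illustrative only).
import numpy as np
import nibabel as nib

img = nib.load("bold_4d.nii.gz")                 # hypothetical 4D series (x, y, z, t)
data = img.get_fdata()

mask = data.mean(axis=3) > 0.2 * data.mean()     # crude in-brain mask from the mean image
global_means = data[mask].mean(axis=0)           # one in-brain mean per volume, shape (t,)
scaled = data / global_means * 100.0             # each volume now has an in-brain mean of 100

nib.save(nib.Nifti1Image(scaled, img.affine), "bold_4d_gscaled.nii.gz")
```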
