• Title/Summary/Keyword: auditory stimuli

Search results: 138

A Study on Development of Disney Animation's Box-office Prediction AI Model Based on Brain Science (뇌과학 기반의 디즈니 애니메이션 흥행 예측 AI 모형 개발 연구)

  • Lee, Jong-Eun;Yang, Eun-Young
    • Journal of Digital Convergence / v.16 no.9 / pp.405-412 / 2018
  • The moment a film company decides whether to invest in a scenario is the appropriate time to predict box-office success. In response to market demand, AI-based scenario analysis services have been launched, yet their algorithms are by no means perfect. The purpose of this study is to present a model for predicting a movie scenario's box-office success based on the human brain's processing mechanisms. To derive patterns of visual, auditory, and cognitive stimuli on the time spectrum of hit animations, this study applied Weber's law and brain mechanisms. The results are as follows. First, the frequency of brain stimulation in the biggest box-office hits was 1.79 times greater than in the failures. Second, in the box-office successes the cognitive stimulus codes were spread evenly, whereas in the failures they were concentrated in a few intervals. Third, in the successful movies, cognitive stimuli carrying a heavy cognitive load appeared alone, whereas visual and auditory stimuli carrying little cognitive load appeared simultaneously.

Different Types of Encoding and Processing in Auditory Sensory Memory according to Stimulus Modality (자극양식에 따른 청감각기억에서의 여러가지 부호화방식과 처리방식)

  • Kim, Jeong-Hwan;Lee, Man-Young
    • The Journal of the Acoustical Society of Korea / v.9 no.4 / pp.77-85 / 1990
  • This study investigated Greene and Crowder's (1984) modified PAS model, according to which the recency and suffix effects found in the auditory and visual conditions of a short-term memory recall task are mediated by the same mechanisms. It also investigated whether auditory information and mouthed information are encoded by the same codes. Through experimental manipulation of phonological properties, the presence of a differential recall effect for consonant- and vowel-varied stimuli in the auditory and mouthing conditions, which is supposed to interact with the recency and suffix effects, was investigated. The results show that the differential recall effect between consonants and vowels exists only in the auditory condition, not in the mouthing condition. Thus, this result supported Turner.


Correlation of acoustic features and electrophysiological outcomes of stimuli at the level of auditory brainstem (자극음의 음향적 특성과 청각 뇌간에서의 전기생리학적 반응의 상관성)

  • Chun, Hyungi;Han, Woojae
    • The Journal of the Acoustical Society of Korea / v.35 no.1 / pp.63-73 / 2016
  • It is widely acknowledged that the human auditory system is organized tonotopically and that people generally perceive sounds as a function of frequency distribution through the auditory system. However, it is still unclear how the acoustic features of speech sounds are represented in the human brain in terms of speech perception. Thus, the purpose of this study was to investigate whether two sounds with similar high-frequency characteristics in acoustic analysis show similar results at the level of the auditory brainstem. Thirty-three young adults with normal hearing participated in the study. As stimuli, two Korean monosyllables (i.e., /ja/ and /cha/) and four toneburst frequencies (i.e., 500, 1000, 2000, and 4000 Hz) were used to elicit the auditory brainstem response (ABR). Measures for the monosyllables and tonebursts were highly replicable, and wave V of the waveform was detectable in all subjects. In the Pearson correlation analysis, the /ja/ syllable correlated highly with the 4000 Hz toneburst, meaning that its acoustic characteristics (i.e., 3671~5384 Hz) were reflected in the brainstem response. However, the /cha/ syllable correlated highly with the 1000 and 2000 Hz tonebursts, although its acoustic energy is distributed over 3362~5412 Hz. We concluded that acoustic features and physiological outcomes disagree at the auditory brainstem level. This finding suggests that an acoustical-perceptual mapping study is needed to scrutinize human speech perception.
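The correlation step in the abstract above can be sketched with NumPy. The latency values below are hypothetical stand-ins, not the paper's measurements; only the analysis shape (Pearson correlation of two ABR measures across subjects) follows the description.

```python
import numpy as np

# Hypothetical wave-V latencies (ms) for 10 subjects: syllable /ja/ vs. a
# 4000 Hz toneburst. Values are illustrative, not the paper's data.
ja_latency = np.array([5.8, 6.1, 5.9, 6.3, 5.7, 6.0, 6.2, 5.9, 6.1, 5.8])
tb4k_latency = ja_latency + np.array(
    [0.1, -0.05, 0.0, 0.1, -0.1, 0.05, 0.0, 0.1, -0.05, 0.0])

# Pearson correlation between the two stimulus types across subjects
r = np.corrcoef(ja_latency, tb4k_latency)[0, 1]
print(round(r, 3))
```

A high `r` here would indicate that the two stimuli pattern together across subjects, which is the sense in which the study reports /ja/ "correlating" with the 4000 Hz toneburst.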

Analysis of Nonlinear Time Series by Bispectrum Methods and its Applications (바이스펙트럼에 의한 비선형 시계열 신호 해석과 그 응용)

  • Kim, Eung-Su;Lee, Yu-Jeong
    • The Transactions of the Korea Information Processing Society / v.6 no.5 / pp.1312-1322 / 1999
  • Linearity, which is regular, predictable, and independent of time order, accounts for only a small part of natural phenomena. In fact, the signals we encounter in nature show only slight linearity, so it is very difficult to understand and analyze natural phenomena using only predictable, regular linear systems. For this reason, research on nonlinear signals, whose analysis was once set aside as mere noise, is being actively carried out. The countless signals generated by nonlinear systems carry information about those systems, and analyzing these signals to extract that information can be effective in many fields. Hence, this paper uses a higher-order spectrum, specifically the bispectrum. We first demonstrate the validity of the method by applying the bispectrum to the logistic map, a typical chaotic signal. We then apply it to the analysis of actual EEG signals recorded under auditory stimulation, showing that higher-order spectra are a very useful tool for nonlinear signal analysis, and report the results of the EEG analysis.
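A minimal sketch of a direct (FFT-based) bispectrum estimate with segment averaging, validated on the logistic map as in the abstract above. The segmentation scheme and parameter choices are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def bispectrum(x, nfft=128, nseg=16):
    """Direct bispectrum estimate, averaged over nseg segments:
    B(f1, f2) = E[ X(f1) * X(f2) * conj(X(f1 + f2)) ]."""
    x = np.asarray(x, dtype=float)
    seg_len = len(x) // nseg
    half = nfft // 2
    f1, f2 = np.meshgrid(np.arange(half), np.arange(half))
    B = np.zeros((half, half), dtype=complex)
    for k in range(nseg):
        seg = x[k * seg_len:(k + 1) * seg_len]
        X = np.fft.fft(seg - seg.mean(), nfft)
        # triple product over the (f1, f2) bifrequency plane
        B += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
    return np.abs(B) / nseg

def logistic_map(n, r=4.0, x0=0.3):
    """Logistic map x[i+1] = r*x[i]*(1-x[i]); chaotic for r = 4."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

B = bispectrum(logistic_map(2048))
print(B.shape)  # (64, 64)
```

Unlike the ordinary power spectrum, the bispectrum retains phase relations between frequency components, which is what makes it sensitive to the quadratic nonlinearity of signals like the logistic map.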


Consistency between Individuals of Affective Responses for Multiple Modalities based on Behavioral and Physiological Data (행동 및 생리측정기반 개인 간 다중 감각정서 반응일치성)

  • Junhyuk Jang;Jongwan Kim
    • Science of Emotion and Sensibility / v.26 no.1 / pp.43-54 / 2023
  • In this study, we assessed how participants represent various sensory experiences through behavioral ratings and physiological measurements. Using intersubject correlation (ISC) analysis, we evaluated whether individuals' affective responses of dominance, arousal, and valence differed when stimuli from three modality conditions (auditory, visual, and haptic) were presented. ISC analysis measures the similarity between one participant's responses and those of all the others: to calculate it, we divided the dataset into one subject versus all remaining subjects and correlated the two across all possible stimulus pairs. The results revealed that for dominance, ISCs in the visual modality condition were greater than in the auditory condition, whereas for arousal, the auditory condition was greater than the visual. Lastly, negative valence conditions showed greater consistency across participants than positive conditions in each sensory modality. Comparing modalities, greater ISCs were observed in the haptic condition than in the visual and auditory conditions, regardless of affective category. We discuss three core affective representations across multiple modalities and propose ISC analysis as a tool for examining differences in individuals' affective representations.
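The one-subject-versus-the-rest scheme described above can be sketched as a leave-one-out intersubject correlation. The toy ratings below are synthetic, and the exact correlation scheme (mean of the remaining subjects) is an assumption about the analysis, not the authors' code.

```python
import numpy as np

def intersubject_correlation(ratings):
    """Leave-one-out ISC: correlate each subject's ratings across stimuli
    with the mean ratings of all remaining subjects.

    ratings: (n_subjects, n_stimuli) array of, e.g., arousal ratings.
    Returns one ISC value per subject.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.shape[0]
    iscs = np.empty(n)
    for s in range(n):
        others = np.delete(ratings, s, axis=0).mean(axis=0)
        iscs[s] = np.corrcoef(ratings[s], others)[0, 1]
    return iscs

# Toy data: 5 subjects rating 8 stimuli (shared signal plus individual noise)
rng = np.random.default_rng(0)
true_signal = rng.normal(size=8)
ratings = true_signal + 0.3 * rng.normal(size=(5, 8))
iscs = intersubject_correlation(ratings)
print(iscs)
```

High ISC values then indicate that a participant's affective responses track the group's, which is how the study compares consistency across modality conditions.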

Consistency of Responses to Affective Stimuli Across Individuals using Intersubject Representational Similarity Analysis based on Behavioral and Physiological Data (참가자 간 표상 유사성 분석을 이용한 정서 자극 반응 일치성 비교: 행동 및 생리 데이터를 기반으로)

  • Junhyuk Jang;Hyeonjung Kim;Jongwan Kim
    • Science of Emotion and Sensibility / v.26 no.3 / pp.3-14 / 2023
  • This study used intersubject representational similarity analysis (IS-RSA) to identify participant-response consistency patterns in previously published data. Additionally, analysis of variance (ANOVA) was used to detect variations across the conditions of each experiment. The experiments employed ASMR stimulation, visual and auditory stimuli, and time-series emotional video stimulation, and emotional ratings and physiological measurements were collected under the respective experimental conditions. As part of the IS-RSA, every pair of participants' measurements for each stimulus in each experiment was correlated using the Pearson correlation coefficient. The results revealed a consistent response pattern among participants exposed to ASMR, visual, and auditory stimuli, in contrast to those exposed to time-series emotional video stimulation. Notably, the ASMR experiment demonstrated high response consistency among participants in positive conditions. Furthermore, both the auditory and visual experiments exhibited remarkable consistency in participants' responses, especially under high arousal and visual stimulation. These findings confirm that IS-RSA serves as a valuable tool for summarizing and presenting multidimensional data, effectively capturing comprehensive information about the participants.

Hearing Ability of Bambooleaf wrasse Pseudolabrus japonicus caught in the coast of Jeju (제주 연안에서 어획된 황놀래기의 청각 능력)

  • Choi, Chan-Moon;Park, Yong-Seok;Lee, Chang-Heon
    • Journal of Fisheries and Marine Sciences Education / v.25 no.6 / pp.1381-1388 / 2013
  • To improve the usefulness of underwater sound by providing fundamental data on hearing ability, the auditory thresholds of the bambooleaf wrasse Pseudolabrus japonicus were determined at 80 Hz, 100 Hz, 200 Hz, 300 Hz, 500 Hz, and 800 Hz by the heartbeat conditioning method, using pure tones coupled with a delayed electric shock. The audible range of the bambooleaf wrasse extended from 80 Hz to 800 Hz, with the best sensitivity around 100 Hz and 200 Hz; auditory thresholds above 300 Hz increased rapidly. The mean auditory thresholds at the test frequencies 80 Hz, 100 Hz, 200 Hz, 300 Hz, 500 Hz, and 800 Hz were 100 dB, 95.1 dB, 94.8 dB, 109 dB, 121 dB, and 125 dB, respectively. Auditory critical ratios were measured using masking stimuli with spectrum levels of about 70, 74, and 78 dB (0 dB re 1 μPa/√Hz). As the white-noise level rose, the auditory thresholds increased compared with thresholds against a quiet background. Auditory masking by the white noise started at a spectrum level of about 60 dB within 80~300 Hz. Critical ratios measured at frequencies from 80 Hz to 300 Hz ranged from a minimum of 33 dB to a maximum of 39 dB.

Hearing Ability of Redlip croaker Pseudosciaena polyactis cultured in the Coastal Sea of Jeju (제주 연안에서 양식된 참조기의 청각 능력)

  • AHN, Jang-Young;KIM, Seok-Jong;CHOI, Chan-Moon;PARK, Young-Seok;LEE, Chang-Heon
    • Journal of Fisheries and Marine Sciences Education / v.28 no.2 / pp.384-390 / 2016
  • The purpose of this paper is to improve the usefulness of underwater sound by providing fundamental data on the hearing ability of the redlip croaker Pseudosciaena polyactis, which has recently been cultured using established cultivation technology. The auditory thresholds of the redlip croaker were determined at six frequencies from 80 Hz to 800 Hz by the heartbeat conditioning method, using pure tones coupled with a delayed electric shock. The audible range extended from 80 Hz to 800 Hz, with the most sensitive range, showing little difference in hearing ability, spanning 80 Hz to 500 Hz; auditory thresholds over 800 Hz increased rapidly. The mean auditory thresholds at the test frequencies from 80 Hz to 800 Hz were 90.7 dB, 93.4 dB, 92.9 dB, 94.4 dB, 95.5 dB, and 108 dB, respectively. Auditory masking was measured using masking stimuli with spectrum levels of about 66, 71, and 75 dB (0 dB re 1 μPa/√Hz). As the white-noise level rose, the auditory thresholds increased compared with thresholds against a quiet background. Auditory masking by the white noise started at a spectrum level of about 70 dB within 80~500 Hz. Critical ratios ranged from a minimum of 20.7 dB to a maximum of 25.5 dB at the test frequencies of 80 Hz~500 Hz.
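The critical-ratio measure used in both fish-hearing studies above is, in its standard form, the masked threshold minus the masker's spectrum level, both in decibels. The example values below are chosen to fall within the ranges the abstracts report and are illustrative, not the studies' actual data points.

```python
def critical_ratio(masked_threshold_db, noise_spectrum_level_db):
    """Critical ratio (dB) = masked threshold (dB re 1 uPa)
    minus white-noise spectrum level (dB re 1 uPa/sqrt(Hz))."""
    return masked_threshold_db - noise_spectrum_level_db

# e.g. a hypothetical threshold of 95.5 dB measured against a masker
# with a 71 dB spectrum level
cr = critical_ratio(95.5, 71.0)
print(cr)  # 24.5, within the reported 20.7~25.5 dB range
```

Because both quantities are levels on the same dB reference, the subtraction directly yields the signal-to-noise ratio per hertz the fish needs to detect the tone.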

Evaluation of Human Factors on Autostereoscopic 3D Viewing by Using Auditory Stimuli (청각자극을 이용한 무안경방식 3D 영상의 휴먼팩터 평가)

  • Mun, Sungchul;Cho, Sungjin;Park, Min-Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.11 / pp.1000-1009 / 2013
  • This study investigated changes in behavioral performance before and after watching multi-view 3D content, using auditory stimuli based on selective attention theory, in order to quantitatively evaluate 3D visual fatigue. Twenty-one undergraduates were asked to report their current visual and physical condition both before and after the experiment. A selective attention task was conducted before and after mobile 3D viewing to compare changes in performance. After a Wilcoxon matched-pairs signed-ranks test on the subjective ratings of 3D visual fatigue, participants were categorized into two groups, unfatigued and fatigued, by a definite criterion. For the unfatigued group, no significant fatigue effects were found in behavioral response times or accuracies to specific auditory targets. In sharp contrast, the fatigued group showed significantly delayed response times and lower response accuracies. However, no significant changes in accuracy on a working memory task were observed in either group.
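The group-splitting step above relies on a Wilcoxon matched-pairs signed-ranks test, which is available in SciPy. The pre/post fatigue ratings below are hypothetical, invented only to show the call; the study's criterion for splitting the groups is not reproduced here.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical pre- and post-viewing subjective fatigue ratings for
# 10 participants (illustrative values, not the study's data)
pre = np.array([2, 1, 3, 2, 2, 1, 3, 2, 1, 2], dtype=float)
post = np.array([3, 2, 3, 4, 2, 2, 4, 3, 2, 3], dtype=float)

# Paired, non-parametric test: appropriate for ordinal rating scales
stat, p = wilcoxon(pre, post)
print(stat, p)
```

A small p-value here would indicate a reliable pre-to-post shift in subjective fatigue, the basis on which participants could then be assigned to the fatigued or unfatigued group.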

Comparison of McGurk Effect across Three Consonant-Vowel Combinations in Kannada

  • Devaraju, Dhatri S;U, Ajith Kumar;Maruthy, Santosh
    • Journal of Audiology & Otology / v.23 no.1 / pp.39-48 / 2019
  • Background and Objectives: The influence of the visual stimulus on the auditory component in the perception of auditory-visual (AV) consonant-vowel syllables has been demonstrated in different languages. Inherent properties of the unimodal stimuli are known to modulate AV integration. The present study investigated how the magnitude of the McGurk effect (an outcome of AV integration) varies across three consonant combinations in the Kannada language, and examined the role of unimodal syllable identification in the magnitude of the McGurk effect. Subjects and Methods: Twenty-eight individuals performed an AV identification task with ba/ga, pa/ka, and ma/ṇa consonant combinations in AV congruent, AV incongruent (McGurk combination), audio-alone, and visual-alone conditions. Cluster analysis of the identification scores for the incongruent stimuli was used to classify individuals into two groups, one with high and the other with low McGurk scores, and the audio-alone and visual-alone scores of these groups were compared. Results: McGurk scores were significantly higher for ma/ṇa than for the ba/ga and pa/ka combinations in both the high and low McGurk score groups; no significant difference was noted between ba/ga and pa/ka in either group. Identification of /ṇa/ in the visual-alone condition correlated negatively with higher McGurk scores. Conclusions: The results suggest that the final percept following AV integration is not exclusively explained by unimodal identification of the syllables; other factors may also contribute to the final percept.