• Title/Abstract/Keywords: Auditory and Visual Stimuli

청각자극에 의해 유발된 정서 및 주의반응의 생리적 지표 (PHYSIOLOGICAL INDICATORS OF EMOTION AND ATTENTION PROCESSES DURING AFFECTIVE AND ORIENTING AUDITORY STIMULATION)

  • Estate M. Sokhadze
    • 한국음향학회 학술대회논문집 / Proceedings of the 1998 Annual Conference, Vol.17 No.1 / pp.291-296 / 1998
  • In an experiment carried out on 20 college students, frontal, temporal and occipital EEG, skin conductance response, skin conductance level, heart rate and respiration rate were recorded while the subjects listened to two music fragments with different affective valences and to white noise, administered immediately after negative visual stimulation. Analysis of the physiological patterns observed during the experiment suggests that affective auditory stimulation with music can selectively modulate the autonomic and cortical activity evoked by preceding aversive visual stimulation and restore initial baseline levels. On the other hand, white noise, which possesses no emotion-eliciting capability, evoked responses typical of an orienting reaction at stimulus onset, rapidly followed by habituation. The observed responses to white noise were similar to those specific to attention alone and showed no evidence of any emotion-related processes. The data are interpreted in terms of the emotional and orienting significance of the stimuli, the dependence of the effects on the background level of physiological activation, and the time courses of attention and emotion processes. The physiological parameters are summarized with regard to their potential utility in differentiating the psychological processes induced by auditory stimuli.

참여형 멀티미디어 시스템 사용자 감성평가를 위한 다차원 심물리학적 척도 체계 (Development of Multiple-modality Psychophysical Scaling System for Evaluating Subjective User Perception of the Participatory Multimedia System)

  • 나종관;박민용
    • 대한인간공학회지 / Vol.23 No.3 / pp.89-99 / 2004
  • A comprehensive psychophysical scaling system, the multiple-modality magnitude estimation system (MMES), has been designed to measure subjective multidimensional human perception. Unlike paper-based magnitude estimation systems, the MMES provides an additional auditory peripheral cue that varies with the corresponding visual magnitude. As the simplest, purely psychological case, bimodal divided-attention conditions were simulated to establish the superiority of the MMES. Subjects were given brief presentations of pairs of simultaneous stimuli consisting of visual line lengths and auditory white-noise levels. In the visual and auditory focused-attention conditions, subjects reported only the perceived line lengths or only the noise levels, respectively; in the divided-attention conditions they reported both. There were no significant differences among the attention conditions. Performance was better when the magnitudes in a stimulus pair were presented in identical proportion. The additional auditory cues in the MMES improved the correlations between the magnitudes of the stimuli and the MMES values in the divided-attention conditions.
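The evaluation criterion described above, the correlation between presented and reported magnitudes per attention condition, can be sketched as follows. This is a minimal illustration; the function name and the toy data in the usage note are assumptions, not the paper's materials or procedure.

```python
import numpy as np

def estimation_validity(presented, reported):
    """Pearson correlation between the magnitudes presented in a
    condition and the magnitudes the subject reported; values closer
    to 1.0 indicate more veridical magnitude estimation."""
    return float(np.corrcoef(presented, reported)[0, 1])
```

A subject whose reports are exactly proportional to the line lengths scores r = 1.0; noisy or compressed reports lower the value, which is how conditions can be compared.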

정신분열병 환자의 인지적/행동적 특성평가를 위한 가상현실시스템 구현 (A Virtual Reality System for the Cognitive and Behavioral Assessment of Schizophrenia)

  • Cho, Won-Geun;Kim, Ho-Sung;Ku, Jung-Hun;Kim, Jae-Hun;Kim, Byoung-Nyun;Lee, Jang-Han;Kim, Sun I.
    • 한국감성과학회 학술대회논문집 / Proceedings of the 2003 Spring Conference / pp.94-100 / 2003
  • Patients with schizophrenia have thinking disorders such as delusions or hallucinations because they have a deficit in the ability to systematize and integrate information; they cannot integrate or systematize visual, auditory and tactile stimuli. In this study we propose a virtual reality system for assessing the cognitive ability of schizophrenia patients, based on the brain multimodal integration model. The virtual reality system presents multimodal stimuli, such as visual and auditory stimuli, to the patient, and can evaluate the patient's multimodal integration and working-memory integration abilities by making the patient interpret and react to multimodal stimuli that must be remembered for a given period of time. A clinical study showed that the results of the virtual reality program are comparable to those of the WCST and the SPM.

청각 주변 자극의 효과를 고려한 효율적 차량-운전자 상호 연동 모델 구현 방법론 (Implementation of the Perception Process in Human-Vehicle Interactive Models (HVIMs) Considering the Effects of Auditory Peripheral Cues)

  • 나종관;박민용
    • 대한인간공학회지 / Vol.25 No.3 / pp.67-75 / 2006
  • HVIMs consist of simulated driver models, implemented as a series of mathematical functions, coupled with computerized vehicle dynamics models. To model the perception process effectively as part of the driver model, psychophysical nonlinearity should be considered not only for single-modality stimuli but also for stimuli of multiple modalities and the interactions among them. A series of human factors experiments using the primary sensory modalities of vision and audition was conducted to determine the effects of auditory cues in visual velocity-estimation tasks. Variations in the auditory cues were found to enhance or reduce the perceived intensity of velocity as their level changed. These results indicate that conventional psychophysical power functions cannot be applied directly to the perception process of HVIMs with multi-modal stimuli. 'Ruled surfaces' in a 3-D coordinate system (with the intensities of the two stimuli and the enhancement ratio on the respective coordinates) were suggested to model the perception process of multi-modal HVIMs realistically.
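The psychophysical nonlinearity mentioned above is conventionally modeled with Stevens' power law, and the proposed ruled surface can be sketched as a power-law estimate scaled by an enhancement ratio that varies linearly along the auditory-cue axis. The exponent, ratio bounds and cue range below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def stevens(intensity, k=1.0, exponent=0.5):
    """Stevens' power law: perceived magnitude = k * I ** exponent."""
    return k * np.asarray(intensity, dtype=float) ** exponent

def ruled_surface(visual, auditory, exponent=0.5,
                  ratio_lo=0.8, ratio_hi=1.2, aud_max=10.0):
    """Perceived visual velocity on a ruled surface: the single-modal
    power-law estimate is scaled by an enhancement ratio that varies
    linearly with the auditory-cue level (the straight 'rulings' along
    the auditory axis generate the surface)."""
    t = np.clip(np.asarray(auditory, dtype=float) / aud_max, 0.0, 1.0)
    ratio = ratio_lo + (ratio_hi - ratio_lo) * t
    return ratio * stevens(visual, exponent=exponent)
```

With these placeholder parameters, a louder auditory cue enhances the perceived velocity (ratio above 1) and a quiet one reduces it (ratio below 1), which a single-variable power function cannot express.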

Improved Feature Extraction of Hand Movement EEG Signals based on Independent Component Analysis and Spatial Filter

  • 응웬탄하;박승민;고광은;심귀보
    • 한국지능시스템학회논문지 / Vol.22 No.4 / pp.515-520 / 2012
  • In a brain-computer interface (BCI) system, the most important part is the classification of human thoughts so that they can be translated into commands: the more accurate the classification, the more effective the BCI system. To increase the quality of the BCI system, we proposed reducing noise and artifacts in the recorded data before analysis. We used auditory stimuli instead of visual ones to eliminate eye movements, unwanted visual activation and gaze control. We applied the independent component analysis (ICA) algorithm to purify the sources that construct the raw signals. One of the best-known spatial filters in the BCI context is common spatial patterns (CSP), which uses covariance matrices to maximize the variance of one class while minimizing that of the other. ICA and CSP thus act as a coarse filter and a refinement, respectively, and together they improve the classification results of linear discriminant analysis (LDA).
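The covariance-based CSP step named above can be sketched as follows. This is a generic textbook formulation under assumed toy dimensions, not the paper's pipeline: the ICA preprocessing is omitted, and the synthetic four-channel data in the test are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns: spatial filters that maximize the
    variance of class A while minimizing that of class B.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns W of shape (2 * n_pairs, n_channels); the last rows favor
    class A, the first rows favor class B.
    """
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized symmetric eigenproblem ca w = lambda (ca + cb) w;
    # eigenvalues are returned in ascending order.
    _, vecs = eigh(ca, ca + cb)
    return np.concatenate([vecs[:, :n_pairs], vecs[:, -n_pairs:]], axis=1).T

def log_var_features(W, trial):
    """Log normalized variance of each CSP component, the usual
    feature vector fed to a linear classifier such as LDA."""
    z = W @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())
```

The log-variance features from the extreme eigenvectors are what make the two classes linearly separable for LDA.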

Comparison of McGurk Effect across Three Consonant-Vowel Combinations in Kannada

  • Devaraju, Dhatri S;U, Ajith Kumar;Maruthy, Santosh
    • Journal of Audiology &amp; Otology / Vol.23 No.1 / pp.39-48 / 2019
  • Background and Objectives: The influence of the visual stimulus on the auditory component in the perception of auditory-visual (AV) consonant-vowel syllables has been demonstrated in different languages, and inherent properties of the unimodal stimuli are known to modulate AV integration. The present study investigated how the amount of McGurk effect (an outcome of AV integration) varies across three different consonant combinations in the Kannada language, and examined the contribution of unimodal syllable identification to the amount of McGurk effect. Subjects and Methods: Twenty-eight individuals performed an AV identification task with ba/ga, pa/ka and ma/ṇa consonant combinations in AV congruent, AV incongruent (McGurk combination), audio-alone and visual-alone conditions. Cluster analysis was performed on the identification scores for the incongruent stimuli to classify the individuals into two groups, one with high and the other with low McGurk scores, and the audio-alone and visual-alone scores of the two groups were compared. Results: McGurk scores were significantly higher for ma/ṇa than for the ba/ga and pa/ka combinations in both the high and low McGurk-score groups; no significant difference was noted between ba/ga and pa/ka in either group. Identification of /ṇa/ presented in the visual-alone condition correlated negatively with higher McGurk scores. Conclusions: The results suggest that the final percept following AV integration is not exclusively explained by unimodal identification of the syllables; other factors may also contribute to the final percept.
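The cluster-analysis step described above, splitting participants into high- and low-McGurk groups by their incongruent-stimulus identification scores, can be sketched with a one-dimensional k-means (k=2). The abstract does not specify the clustering algorithm, so k-means here is an assumption, and the scores in the test are invented, not the study's data.

```python
import numpy as np

def split_two_clusters(scores, n_iter=100):
    """1-D k-means with k=2: partition identification scores into a
    low group (label 0) and a high group (label 1). Returns
    (labels, centers) with centers[0] < centers[1]."""
    s = np.asarray(scores, dtype=float)
    centers = np.array([s.min(), s.max()])  # init at the extremes
    for _ in range(n_iter):
        # assign each score to its nearest center
        labels = (np.abs(s - centers[0]) > np.abs(s - centers[1])).astype(int)
        new = np.array([s[labels == k].mean() if np.any(labels == k)
                        else centers[k] for k in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Group-wise comparisons (for example of the audio-alone and visual-alone scores) can then be made between the label-0 and label-1 participants.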

시청각 복합자극에 대한 인간감성의 변화 (Effects of Multimodal Stimuli on Human Sensibility)

  • 이구형;김병주;정일석
    • 감성과학 / Vol.4 No.1 / pp.43-51 / 2001
  • When a consumer evaluates a product, every aspect of the product affects the evaluation, and humans use at least five sensory organs for it. Multi-sensory or multi-modal design tries to add auditory and olfactory factors to traditional visual-centered design. For multi-modal design it is essential to understand the relationships between combined sensory stimuli and human sensibility, and information on the sensibility elicited by each simple sensory stimulus is a prerequisite for combining multi-modal stimuli. This study investigated human sensibility in response to 8 colors and 30 sounds presented independently. Combined color-sound stimuli were then constructed based on the sensibility elicited by each stimulus, and the sensibility generated by the combined stimuli was investigated with 20 female subjects. For combined stimuli whose components elicited the same kind of sensibility, the generated sensibility was the same but its strength was diminished. For combined stimuli whose components elicited different sensibilities, subjects showed a neutral sensibility or no particular sensibility. Sensibility to the same stimuli also differed depending on the subjects' personal backgrounds.

청각자극을 이용한 무안경방식 3D 영상의 휴먼팩터 평가 (Evaluation of Human Factors on Autostereoscopic 3D Viewing by Using Auditory Stimuli)

  • 문성철;조성진;박민철
    • 한국통신학회논문지 / Vol.38C No.11 / pp.1000-1009 / 2013
  • To evaluate visual fatigue from stereoscopic viewing quantitatively, this study compared task performance before and after viewing, using an auditory-stimulus experimental paradigm based on selective-attention theory. Twenty-one college students each performed a selective-attention task with auditory stimuli before and after viewing about 80 minutes of mobile 3D content. The participants' subjective fatigue ratings were analyzed with Wilcoxon signed-rank tests under Bonferroni alpha correction, and the participants were divided into fatigued and non-fatigued groups for comparison of task performance. The non-fatigued group showed no significant difference in the task-performance indices before and after viewing, whereas in the fatigued group reaction time to targets increased significantly and response accuracy decreased significantly after viewing the mobile 3D content. Accuracy on the working-memory task showed no significant difference in either group.
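The statistical procedure described above, paired Wilcoxon signed-rank tests with a Bonferroni-corrected significance threshold, can be sketched as below. The reaction-time values in the test are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

def bonferroni_wilcoxon(pre, post, n_comparisons, alpha=0.05):
    """Paired Wilcoxon signed-rank test comparing pre- and
    post-viewing measurements, judged against the Bonferroni-adjusted
    threshold alpha / n_comparisons.
    Returns (p_value, significant_after_correction)."""
    _, p = wilcoxon(pre, post)
    return float(p), bool(p < alpha / n_comparisons)
```

Dividing alpha by the number of comparisons keeps the family-wise error rate at the nominal level when several measures (reaction time, accuracy, subjective ratings) are tested on the same participants.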

정신분열병 환자의 인지적/행동적 특성평가를 위한 가상현실시스템 구현 (A Virtual Reality System for the Cognitive and Behavioral Assessment of Schizophrenia)

  • Lee, Jang-Han;Cho, Won-Geun;Kim, Ho-Sung;Ku, Jung-Hun;Kim, Jae-Hun;Kim, Byoung-Nyun;Kim, Sun-I.
    • 감성과학 / Vol.6 No.3 / pp.55-62 / 2003
  • Schizophrenia is a thought disorder whose representative symptoms are positive ones such as delusions and hallucinations and negative ones such as flattened affect; patients are severely impaired in integrating and systematically processing incoming information. That is, schizophrenia patients cannot perceive visual, auditory and tactile stimuli in a combined, integrated way. In this study we propose a virtual reality system, based on the Brain Multimodal Integration Model, for measuring the cognitive abilities of schizophrenia patients. The system, designed to measure perceptual, cognitive and motor abilities, presents multimodal visual and auditory stimuli, which the patient must remember and process for a given period of time in order to perform the assigned task. From the performance results, the patient's multimodal integration, working-memory integration and navigation abilities are evaluated. In a clinical study, the developed virtual reality system was validated against existing tests such as the WCST: the parameters measured in virtual reality correlated significantly with the WCST parameters and SPM scores, confirming the usefulness of the system.