• Title/Summary/Keyword: Crossmodal

Extraction and Analysis of Vision-Language Crossmodal Association Information using Hypernetwork Models (하이퍼네트워크 모델을 이용한 비전-언어 크로스모달 연관정보 추출)

  • Heo, Min-Oh; Ha, Jung-Woo; Zhang, Byoung-Tak
    • Proceedings of the Korean HCI Society Conference / 2009.02a / pp.278-284 / 2009
  • Multimodal data, in which a single piece of content combines several modalities such as video, images, sound, and text, is increasing. Because this kind of data has an ill-defined format, it is not easy to represent its crossmodal information explicitly. We therefore propose a new method to extract and analyze vision-language crossmodal association information, using video data from nature documentaries. We collected image-caption pairs from three documentary genres (jungle, ocean, and universe) and extracted a set of visual words and a set of text words from them. The analysis shows that the two modalities carry semantic associations in the extracted crossmodal association information (a simplified association-scoring sketch follows below).

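The following is a minimal sketch, not the paper's hypernetwork model: it scores vision-language associations by pointwise mutual information (PMI) over co-occurrences of visual words and text words in image-caption pairs. All pair data and word names are hypothetical stand-ins for the documentary corpus.

```python
import math
from collections import Counter
from itertools import product

# Each pair: (visual words from the image, text words from the caption).
# Entirely hypothetical data standing in for the documentary corpus.
pairs = [
    ({"v_water", "v_fish"}, {"ocean", "swim"}),
    ({"v_tree", "v_monkey"}, {"jungle", "climb"}),
    ({"v_star", "v_dark"}, {"universe", "light"}),
    ({"v_water", "v_coral"}, {"ocean", "reef"}),
]

v_count, t_count, joint = Counter(), Counter(), Counter()
for vis, txt in pairs:
    v_count.update(vis)
    t_count.update(txt)
    joint.update(product(vis, txt))  # visual-word / text-word co-occurrences

n = len(pairs)

def pmi(v, t):
    """Pointwise mutual information of a (visual word, text word) pair."""
    return math.log2((joint[(v, t)] / n) / ((v_count[v] / n) * (t_count[t] / n)))

# Report the strongest crossmodal associations first.
scores = sorted(((pmi(v, t), v, t) for v, t in joint), reverse=True)
for s, v, t in scores[:5]:
    print(f"{v:9s} <-> {t:9s} PMI = {s:.2f}")
```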

Attentional Effects of Crossmodal Spatial Display using HRTF in Target Detection Tasks (항공 목표물 탐지과제 수행에서 머리전달함수(HRTF)를 이용한 이중감각적 공간 디스플레이의 주의효과)

  • Lee, Ju-Hwan
    • Journal of Advanced Navigation Technology / v.14 no.4 / pp.571-577 / 2010
  • Flying an aircraft requires extremely complicated and detailed information processing, and pilots perform their tasks by selecting the information relevant to them. In this processing, spatial information presented simultaneously through a crossmodal link is advantageous over information provided in a single sensory modality. This paper empirically investigates the possibility of providing auditory spatial information alongside visual spatial information in an aircraft enemy-tracking system. The results show that auditory spatial information, virtually created through a head-related transfer function (HRTF), is advantageous over visual spatial information alone in attentional processing. The findings suggest that auditory spatial information can be presented alongside visual information through a crossmodal link by utilizing stereophonic sound such as HRTF-rendered audio, which is feasible on existing simple stereo systems (a spatialization sketch follows below).
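As a rough illustration of the HRTF idea (not the paper's implementation), the sketch below spatializes a mono alert tone by convolving it with a left/right impulse-response pair. Real systems use measured head-related impulse responses (HRIRs); the crude delay-and-attenuate filters here only mimic interaural time and level differences.

```python
import numpy as np

fs = 44100
t = np.arange(int(0.2 * fs)) / fs
cue = np.sin(2 * np.pi * 1000 * t)           # 1 kHz alert tone, 200 ms

# Placeholder HRIRs for a source on the right: the right ear receives the
# sound earlier and louder than the left (interaural time + level cues).
itd_samples = int(0.0006 * fs)               # ~0.6 ms interaural delay
hrir_r = np.zeros(64); hrir_r[0] = 1.0       # direct path, full level
hrir_l = np.zeros(64); hrir_l[itd_samples] = 0.5  # delayed, attenuated

left = np.convolve(cue, hrir_l)
right = np.convolve(cue, hrir_r)
stereo = np.stack([left, right], axis=1)     # playable on any stereo output
print(stereo.shape)
```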

Crossmodal Perception of Mismatched Emotional Expressions by Embodied Agents (에이전트의 표정과 목소리 정서의 교차양상지각)

  • Cho, Yu-Suk; Suk, Ji-He; Han, Kwang-Hee
    • Science of Emotion and Sensibility / v.12 no.3 / pp.267-278 / 2009
  • Embodied agents currently generate a large amount of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that people can recognize and distinguish various emotions expressed by an embodied agent, and many studies have found that people respond to simulated emotions in a similar way to human emotions. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g., a happy face with a sad voice): whether audio-visual channel integration occurs or one channel dominates when participants judge the emotion. The study employed a 4 (visual: happy, sad, warm, cold) × 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design (sketched below). The results suggest that people perceive emotions based not on just one channel but on both. Additionally, facial expression (happy vs. sad) changes the relative influence of the two channels: the audio channel has more influence on the interpreted emotion when the facial expression is happy. Participants could also perceive emotions from mismatched expressions that neither the face nor the voice conveyed, suggesting that varied and delicate emotions could be expressed with an embodied agent using only a few basic emotions.

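A minimal sketch of the 4 × 4 within-subjects design described above: every participant receives all 16 face-voice emotion pairings in a per-participant random order. The trial-generation details are assumptions, not taken from the paper.

```python
import random
from itertools import product

EMOTIONS = ["happy", "sad", "warm", "cold"]

def trial_list(seed):
    """All 16 face x voice pairings, shuffled per participant."""
    trials = list(product(EMOTIONS, EMOTIONS))  # (face, voice) conditions
    rng = random.Random(seed)                   # per-participant order
    rng.shuffle(trials)
    return trials

for face, voice in trial_list(seed=1)[:4]:
    match = "matched" if face == voice else "mismatched"
    print(f"face={face:5s} voice={voice:5s} ({match})")
```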

Development of Multiple-modality Psychophysical Scaling System for Evaluating Subjective User Perception of the Participatory Multimedia System (참여형 멀티미디어 시스템 사용자 감성평가를 위한 다차원 심물리학적 척도 체계)

  • Na, Jong-Gwan; Park, Min-Yong
    • Journal of the Ergonomics Society of Korea / v.23 no.3 / pp.89-99 / 2004
  • A comprehensive psychophysical scaling system, the multiple-modality magnitude estimation system (MMES), was designed to measure subjective multidimensional human perception. Unlike paper-based magnitude estimation systems, the MMES adds an auditory peripheral cue that varies with the corresponding visual magnitude. As the simplest, purely psychological case, bimodal divided-attention conditions were simulated to establish the superiority of the MMES. Subjects were given brief presentations of pairs of simultaneous stimuli consisting of visual line lengths and auditory white-noise levels. In the visual or auditory focused-attention conditions, subjects reported only the perceived line lengths or noise levels, respectively; in the divided-attention conditions, they reported both. There were no significant differences among the attention conditions. Performance was better when the magnitudes in a stimulus pair were presented in identical proportions. The additional auditory cues in the MMES improved the correlations between stimulus magnitudes and MMES values in the divided-attention conditions (an analysis sketch follows below).
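The paper reports correlations between stimulus magnitudes and MMES values; the sketch below shows one standard way such magnitude-estimation data are analyzed, fitting Stevens' power law by log-log regression. The numbers are illustrative placeholders, not data from the study.

```python
import numpy as np

# Hypothetical magnitude-estimation data: presented stimulus magnitudes
# (e.g. line lengths) and a subject's numeric estimates of them.
stimulus = np.array([10, 20, 40, 80, 160], dtype=float)
reports = np.array([12, 19, 45, 78, 170], dtype=float)

# Stevens' power law R = k * S^a becomes linear in log-log space:
# log R = a * log S + log k.
a, log_k = np.polyfit(np.log(stimulus), np.log(reports), 1)
r = np.corrcoef(np.log(stimulus), np.log(reports))[0, 1]
print(f"exponent a = {a:.2f}, k = {np.exp(log_k):.2f}, r = {r:.3f}")
```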

A Study on a Media Player Transferring Vibrotactile Stimulation from Digital Sound (디지털 음원의 촉각 자극 전이를 위한 미디어 플레이어에 대한 연구)

  • Lim, Young-Hoon; Lee, Su-Jin; Jung, Jong-Hwan; Ha, Ji-Min; Whang, Min-Cheol; Park, Jun-Seok
    • Proceedings of the Korean HCI Society Conference / 2007.02a / pp.881-886 / 2007
  • This study developed a vibrotactile display system driven by the digital audio signal of Windows Media Player. The WMPlayer10SDK, a plug-in tool for Microsoft Windows Media Player, provided access to the player's video and audio signal information, and the audio signal was converted into a vibrotactile display. The audio signal was handled in four sample formats (8-bit, 16-bit, 24-bit, and 32-bit); for each, the frequency and vibration scale were computed, and the data were transferred at 38400 bps to a serial port (COM1) to drive the vibration. With this system it was possible to develop a music suit that presents the tactile feel of music beyond sound (a sketch of the mapping follows below). It may therefore provide a crossmodal technique for fusing human senses.

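A minimal sketch of the audio-to-vibration pipeline described above, under stated assumptions: each audio frame's dominant frequency is estimated with an FFT, the frame's RMS energy is mapped to a vibration level, and the level would be written to a serial port (COM1) at 38400 bps as in the paper. The WMPlayer10SDK plug-in hookup is not reproduced, and the frame data here is synthetic.

```python
import numpy as np

FS = 44100     # sample rate (Hz)
FRAME = 1024   # samples per analysis frame

def vibration_level(frame):
    """Map one audio frame to a 0-255 vibration intensity and its peak frequency."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    peak_hz = np.argmax(spectrum) * FS / len(frame)   # dominant frequency
    rms = float(np.sqrt(np.mean(frame ** 2)))         # frame energy
    return min(int(rms * 255), 255), peak_hz

# Synthetic 440 Hz frame standing in for audio captured from the player.
frame = np.sin(2 * np.pi * 440 * np.arange(FRAME) / FS)
level, peak = vibration_level(frame)
print(f"peak = {peak:.0f} Hz, vibration level = {level}")

# With an actuator attached (hypothetical setup, requires pyserial):
# import serial
# port = serial.Serial("COM1", baudrate=38400)
# port.write(bytes([level]))
# port.close()
```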