• Title/Summary/Keyword: Multimodal Contents


Multimodal Discourse: A Visual Design Analysis of Two Advertising Images

  • Ly, Tan Hai;Jung, Chae Kwan
    • International Journal of Contents
    • /
    • v.11 no.2
    • /
    • pp.50-56
    • /
    • 2015
  • The area of discourse analysis has long neglected the value of images as a semiotic resource in communication. This paper suggests that, like language, images are rich in meaning potential and are governed by visual grammar structures which can be utilized to decode their meanings. Employing a theoretical framework from visual communication, two digital images are examined for their representational and interactive dimensions and for those dimensions' relation to the magazine advertisement genre. The results show that the framework identified narrative and conceptual processes, relations between participants and viewers, and symbolic attributes of the images, all of which contribute to the sociological interpretations of the images. The identities of, and relationships between, viewers and participants suggested in the images signify desirable qualities that may be associated with the advertiser's product. The findings support the theory of visual grammar and highlight the potential of images to convey multi-layered meanings.

Layout Based Multimodal Contents Authoring Tool for Digilog Book (디지로그 북을 위한 레이아웃 기반 다감각 콘텐츠 저작 도구)

  • Park, Jong-Hee;Woo, Woon-Tack
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2009.02a
    • /
    • pp.512-515
    • /
    • 2009
  • In this paper, we propose a layout-based multimodal content authoring tool for Digilog Book. In the authoring step, users repeatedly create virtual areas with a mouse or a pen-type device and assign a property to each area. When authoring is finished, the system recognizes the printed page number and generates a page layout containing the areas and their property information. The page layout is represented as a scene graph and stored in XML format. The Digilog Book viewer loads the stored page layout, analyzes the properties, and then augments virtual contents or executes functions for each area. Users can easily author visual and auditory contents through a hybrid interface. In the AR environment, the system provides area templates to help users create areas. In addition, the proposed authoring tool separates the page recognition module from the page tracking module, so many pages can be authored with only a single marker. Experiments show that the proposed authoring tool achieves reasonable performance time in the AR environment. We expect the proposed authoring tool to be applicable to many fields such as education and publishing.
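
  The abstract describes page layouts that bundle area geometry with property information and are stored as XML. As a rough illustration only, the following Python sketch builds such a layout with the standard library; the element and attribute names (page, area, property) are assumptions, not the paper's actual schema.

      # Assumed sketch of a page layout with property-tagged areas, serialized to XML.
      import xml.etree.ElementTree as ET

      def build_page_layout(page_number, areas):
          """areas: list of dicts like {"x": .., "y": .., "w": .., "h": .., "property": ..}."""
          page = ET.Element("page", attrib={"number": str(page_number)})
          for a in areas:
              area = ET.SubElement(page, "area", attrib={
                  "x": str(a["x"]), "y": str(a["y"]),
                  "w": str(a["w"]), "h": str(a["h"]),
              })
              # The property decides what the viewer augments or executes in this area.
              ET.SubElement(area, "property").text = a["property"]
          return page

      layout = build_page_layout(12, [
          {"x": 40, "y": 60,  "w": 200, "h": 150, "property": "video"},
          {"x": 40, "y": 260, "w": 200, "h": 40,  "property": "sound"},
      ])
      print(ET.tostring(layout, encoding="unicode"))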


Multimodal Interface Control Module for Immersive Virtual Education (몰입형 가상교육을 위한 멀티모달 인터페이스 제어모듈)

  • Lee, Jaehyub;Im, SungMin
    • The Journal of Korean Institute for Practical Engineering Education
    • /
    • v.5 no.1
    • /
    • pp.40-44
    • /
    • 2013
  • This paper suggests a multimodal interface control module that allows a student to interact naturally with educational contents in a virtual environment. The module recognizes the user's motion during interaction and conveys it to the virtual environment via wireless communication. Furthermore, a haptic actuator is incorporated into the module to generate haptic feedback. With the proposed module, users can haptically sense a virtual object as if it existed in the real world.
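
  A minimal sketch of the control loop this abstract implies is given below: read a motion sample, send it to the virtual environment over the network, and drive a haptic actuator from the environment's contact reply. The address, the message format, and the read_motion_sensor/set_actuator_level callbacks are illustrative assumptions, not the authors' implementation.

      # Assumed sketch: motion capture -> wireless transmission -> haptic actuation.
      import json
      import socket

      VE_ADDR = ("192.168.0.10", 9000)                          # assumed virtual-environment host
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP for low-latency motion updates

      def control_step(read_motion_sensor, set_actuator_level):
          motion = read_motion_sensor()                    # e.g. {"ax": ..., "ay": ..., "az": ...}
          sock.sendto(json.dumps(motion).encode(), VE_ADDR)
          reply, _ = sock.recvfrom(1024)                   # environment replies with contact information
          contact = json.loads(reply)
          # Drive the haptic actuator in proportion to the reported contact force.
          set_actuator_level(min(1.0, contact.get("force", 0.0)))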


Multimodal approach for blocking obscene and violent contents (멀티미디어 유해 콘텐츠 차단을 위한 다중 기법)

  • Baek, Jin-heon;Lee, Da-kyeong;Hong, Chae-yeon;Ahn, Byeong-tae
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.6
    • /
    • pp.113-121
    • /
    • 2017
  • With the development of IT technology, harmful multimedia content is spreading, and obscene and violent content has a negative impact on children. Therefore, in this paper we propose a multimodal approach for blocking obscene and violent video content. The approach contains two modules, one detecting obsceneness and the other violence. The obsceneness module detects obsceneness based on adult and racy scores. The violence module contains two models: a blood-detection model based on the RGB color region, and a motion-extraction model that exploits the observation that violent actions show larger changes in motion magnitude and direction. Based on the results of these three models, the approach judges whether the content is harmful, which can help block obscene and violent content that is distributed indiscriminately.
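
  The final judgement step can be pictured as a simple combination rule over the three models named in the abstract. The sketch below is only an assumed illustration; the thresholds and normalization are not from the paper.

      # Assumed sketch of the harmfulness decision over the three detector outputs.
      def is_harmful(adult_score, racy_score, blood_ratio, motion_change):
          """All inputs are assumed to be normalized to [0, 1]."""
          obscene = adult_score > 0.7 or racy_score > 0.8
          # Violence is flagged when a large blood-colored region is found or the
          # motion magnitude/direction change is unusually large.
          violent = blood_ratio > 0.15 or motion_change > 0.6
          return obscene or violent

      print(is_harmful(adult_score=0.82, racy_score=0.4, blood_ratio=0.02, motion_change=0.1))  # True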

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2012
  • A virtual human used for HCI in digital contents expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal modalities in emotion perception. To implement an emotional virtual human, computational engine models must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper analyzes the impact of nonverbal multimodality in the design of an emotion-expressing virtual human. First, the relative impacts of the different modalities are analyzed by exploring emotion recognition for the virtual human. Then, an experiment evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as of the valence and activation dimensions. Measurements are also carried out on the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. The results show that congruence of the virtual human's facial and postural expressions facilitates the perception of emotion categories, that categorical recognition is influenced mainly by the facial expression modality, and that the postural modality is preferred for judging the level of the activation dimension. These results will be used in implementing an animation engine and behavior synchronization for emotion-expressing virtual humans.
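
  The reported findings (facial modality dominates the emotion category, postural modality dominates the activation judgement) suggest a fusion rule along the following lines. This is an assumed illustration, not the authors' engine; the weights and score format are hypothetical.

      # Assumed sketch of facial/postural fusion for perceived emotion.
      def fuse_emotion(facial_scores, postural_scores, postural_activation):
          """facial_scores / postural_scores: dicts of category -> confidence in [0, 1]."""
          # Category: weight the facial modality more heavily.
          categories = set(facial_scores) | set(postural_scores)
          fused = {c: 0.7 * facial_scores.get(c, 0.0) + 0.3 * postural_scores.get(c, 0.0)
                   for c in categories}
          category = max(fused, key=fused.get)
          # Activation: rely mainly on posture (e.g. expansive vs. closed postures).
          return category, postural_activation

      print(fuse_emotion({"joy": 0.8, "anger": 0.1}, {"joy": 0.5, "anger": 0.3}, postural_activation=0.9))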

A Bridge Technique of Heterogeneous Smart Platform supporting Social Immersive Game (소셜 실감 게임을 위한 이기종 스마트 플랫폼 브릿지 기술)

  • Jang, S.E.;Tang, J.M.;Kim, Sangwook
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.8
    • /
    • pp.1033-1040
    • /
    • 2014
  • Recently, the concept of mobile content services has shifted from unilaterally providing content to a single device to providing the same content to multiple devices. Such a service should be able to deliver diverse contents to multiple devices regardless of their platforms and specifications. In this study, we propose a bridge technique for heterogeneous smart platforms supporting social immersive games, which allows a social immersive game to be accessed through a multi-platform bridge. To achieve this, we describe device connection and data transmission between heterogeneous devices using a server-client structure and UPnP. The result is an immersive multi-user game environment that can be played in public places using a big screen.
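
  The server-client part of such a bridge can be sketched as one device relaying JSON events to all connected devices, as below. This is a generic illustration under an assumed host/port and message format; the paper's UPnP-based discovery is not shown.

      # Assumed sketch of a bridge server relaying events between heterogeneous devices.
      import json
      import socket
      import threading

      def run_server(host="0.0.0.0", port=7777):
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.bind((host, port))
          srv.listen()
          clients = []

          def handle(conn):
              clients.append(conn)
              for line in conn.makefile():
                  msg = json.loads(line)          # e.g. {"device": "phone", "event": "tap"}
                  for c in clients:               # relay the event to every other connected device
                      if c is not conn:
                          c.sendall((json.dumps(msg) + "\n").encode())

          while True:
              conn, _ = srv.accept()
              threading.Thread(target=handle, args=(conn,), daemon=True).start()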

Haptically Enhanced Movie System (몰입감 있는 촉감영화 시스템)

  • Kim, Yeong-Mi;Ryu, Je-Ha
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2008.02c
    • /
    • pp.6-11
    • /
    • 2008
  • As multimedia technologies develop and multimodal interactions are proposed, people increasingly expect immersive interactions. This paper presents a haptically enhanced movie system that provides viewers with passive haptic sensations synchronized with audiovisual media. We also discuss potential haptic contents in a movie system and the characteristics of an effective authoring tool for generating various haptic contents for various scenes. Furthermore, an example of an enhanced haptic movie system is discussed and the first version of our haptic authoring tool for creating the haptic contents of a movie system is introduced.
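
  One plausible way to synchronize passive haptic sensations with playback is to store timestamped haptic events and fire them when the movie clock reaches them. The sketch below assumes an illustrative event format and a play_haptic_effect driver; it is not the authors' system.

      # Assumed sketch of a haptic track synchronized with movie playback time.
      import time

      haptic_track = [
          {"t": 1.5, "effect": "vibration", "intensity": 0.4},   # seconds into the movie
          {"t": 3.0, "effect": "impact",    "intensity": 0.9},
      ]

      def play_with_haptics(track, play_haptic_effect, duration=5.0):
          start = time.monotonic()
          pending = sorted(track, key=lambda e: e["t"])
          while pending and time.monotonic() - start < duration:
              now = time.monotonic() - start          # current movie time
              while pending and pending[0]["t"] <= now:
                  e = pending.pop(0)
                  play_haptic_effect(e["effect"], e["intensity"])
              time.sleep(0.01)                        # coarse 10 ms scheduling tick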


Extraction Analysis for Crossmodal Association Information using Hypernetwork Models (하이퍼네트워크 모델을 이용한 비전-언어 크로스모달 연관정보 추출)

  • Heo, Min-Oh;Ha, Jung-Woo;Zhang, Byoung-Tak
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2009.02a
    • /
    • pp.278-284
    • /
    • 2009
  • Multimodal data, in which one content item combines several modalities such as video, images, sound, and text, is increasing. Since this type of data has an ill-defined format, it is not easy to represent its crossmodal information explicitly. We therefore propose a new method to extract and analyze vision-language crossmodal association information using nature documentary video data. We collected pairs of images and captions from three documentary genres (jungle, ocean, and universe) and extracted a set of visual words and a set of text words from them. The analysis shows that the two modalities carry semantically associated crossmodal information.
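
  A much-simplified version of the idea, counting how often visual words and caption words co-occur, is sketched below. The paper itself uses hypernetwork models; the toy data and word names here are assumptions.

      # Simplified co-occurrence sketch of vision-language association (not the hypernetwork model).
      from collections import Counter
      from itertools import product

      pairs = [  # (visual words of an image, words of its caption) - toy data
          (["blue_region", "wave_texture"], ["ocean", "wave"]),
          (["green_region", "leaf_texture"], ["jungle", "tree"]),
          (["blue_region", "wave_texture"], ["ocean", "fish"]),
      ]

      cooc = Counter()
      for visual_words, text_words in pairs:
          cooc.update(product(visual_words, text_words))

      # Strongest vision-language associations found in the toy data.
      for (v, t), n in cooc.most_common(3):
          print(f"{v} <-> {t}: {n}")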


A study of effective contents construction for AR based English learning (AR기반 영어학습을 위한 효과적 콘텐츠 구성 방향에 대한 연구)

  • Kim, Young-Seop;Jeon, Soo-Jin;Lim, Sang-Min
    • Journal of The Institute of Information and Telecommunication Facilities Engineering
    • /
    • v.10 no.4
    • /
    • pp.143-147
    • /
    • 2011
  • Systems using augmented reality can save time and cost, and the technology's potential has been verified in various fields because it resolves the unrealistic feeling of purely virtual spaces. Augmented reality therefore has wide potential for use. Generally, multimodal feedback such as visual/auditory/tactile feedback is well known as a method for enhancing immersion when interacting with virtual objects. By adopting a tangible object, we can provide a touch sensation to users; a 3D model of the same scale overlays the whole area of the tangible object, so the marker area is invisible. This contributes to presenting immersive and natural images to users. Multimodal feedback further improves immersion, and in this paper sound feedback is considered. With this improved immersion, augmented-reality learning content for children at the initial learning step is presented. Augmented reality lies in the intermediate stage between the virtual world and the real world, and its adaptability is estimated to be greater than that of virtual reality.
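
  The multimodal feedback described here can be pictured as a per-frame update that overlays the full-scale 3D model when the tangible object's marker is detected and plays the matching sound once. The sketch below is purely illustrative; detect_marker, render_overlay, and play_sound stand in for an actual AR toolkit and audio backend.

      # Assumed sketch of multimodal (visual + sound) feedback on marker detection.
      def update_frame(frame, detect_marker, render_overlay, play_sound, state):
          pose = detect_marker(frame)                  # None if the marker is not visible
          if pose is not None:
              # The model covers the whole tangible object, hiding the marker itself.
              render_overlay(model="word_card_3d", pose=pose, scale=1.0)
              if not state.get("sound_played"):
                  play_sound("pronunciation_apple.wav")   # auditory feedback for the word
                  state["sound_played"] = True
          else:
              state["sound_played"] = False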


Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun;Lee, Jongwoo;Lim, Soon-Bum
    • Journal of Multimedia Information System
    • /
    • v.7 no.4
    • /
    • pp.249-256
    • /
    • 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning by people with disabilities. Pointer control is the most important consideration for them when controlling a device and the contents of an existing graphical user interface (GUI) environment; however, difficulties in using a pointer depend on the disability type. Although there are individual differences among blindness, low vision, and upper-limb disability, problems commonly arise in the accuracy of object selection and execution. A multimodal interface pilot solution is presented that enables people with various disability types to control web interactions more easily. First, we classify web interaction types on digital devices and derive the essential web interactions among them. Second, to solve the problems that occur when performing these web interactions, the necessary technology is presented according to the characteristics of each disability type. Finally, a pilot solution for a multimodal interface for each disability type is proposed. We identified three disability types and developed a solution for each: a remote-control voice interface for blind people, a voice output interface applying a selective focusing technique for low-vision people, and a gaze-tracking and voice-command interface for GUI operations for people with upper-limb disability.
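
  The per-disability mapping described in this abstract can be pictured as a small profile table pairing each disability type with its input and output channels, as in the assumed sketch below; the option names are illustrative, not the paper's.

      # Assumed sketch of disability-type -> interface-modality profiles.
      INTERFACE_PROFILES = {
          "blind":      {"input": ["voice_command", "remote_control"], "output": ["voice"]},
          "low_vision": {"input": ["voice_command"],                   "output": ["voice_selective_focus", "screen"]},
          "upper_limb": {"input": ["gaze_tracking", "voice_command"],  "output": ["screen"]},
      }

      def configure_interface(disability_type):
          profile = INTERFACE_PROFILES.get(disability_type)
          if profile is None:
              raise ValueError(f"unknown disability type: {disability_type}")
          return profile

      print(configure_interface("upper_limb"))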