• Title/Summary/Keyword: Visual Language

The Interactive Use of Microcomputer for Distance Learning

  • Hong, Sung-Ryong
    • Journal of Digital Contents Society / v.8 no.2 / pp.121-127 / 2007
  • For human beings, language is the most important means of communication. Bloom and Lahey see successful language development as an interaction between form, content, and use. Language knowledge is a social phenomenon produced in a socio-cultural environment through interaction. Teachers have traditionally concentrated on the structure of their students' writing rather than on its message. If writing is to be seen as an interactive social process between humans, it is the content that should be responded to. Language acquisition can be a major problem for hearing-impaired children, and their acquisition of written language is characteristically problematic. This study explores the use of microcomputers in written conversational methods, which let hearing-impaired students see their conversations in a visual form and usefully extend their written language learning opportunities.

Modularity and Modality in ‘Second’ Language Learning: The Case of a Polyglot Savant

  • Smith, Neil
    • Korean Journal of English Language and Linguistics / v.3 no.3 / pp.411-426 / 2003
  • I report on the case of a polyglot ‘savant’ (C), who is mildly autistic, severely apraxic, and of limited intellectual ability; yet who can read, write, speak and understand about twenty languages. I outline his abilities, both verbal and non-verbal, noting the asymmetry between his linguistic ability and his general intellectual inability and, within the former, between his unlimited morphological and lexical prowess as opposed to his limited syntax. I then spell out the implications of these findings for modularity. C's unique profile suggested a further project in which we taught him British Sign Language. I report on this work, paying particular attention to the learning and use of classifiers, and discuss its relevance to the issue of modality: whether the human language faculty is preferentially tied to the oral domain, or is ‘modality-neutral’ as between the spoken and the visual modes.

Multimodal Discourse: A Visual Design Analysis of Two Advertising Images

  • Ly, Tan Hai; Jung, Chae Kwan
    • International Journal of Contents / v.11 no.2 / pp.50-56 / 2015
  • The area of discourse analysis has long neglected the value of images as a semiotic resource in communication. This paper suggests that, like language, images are rich in meaning potential and are governed by visual grammar structures which can be used to decode their meanings. Employing a theoretical framework in visual communication, two digital images are examined for their representational and interactive dimensions and for those dimensions' relation to the magazine advertisement genre. The results show that the framework identified narrative and conceptual processes, relations between participants and viewers, and symbolic attributes of the images, all of which contribute to the sociological interpretations of the images. The identities and relationships between viewers and participants suggested in the images signify desirable qualities that may be associated with the advertiser's product. The findings support the theory of visual grammar and highlight the potential of images to convey multi-layered meanings.

The Effect of Audio and Visual Cues on Korean and Japanese EFL Learners' Perception of English Liquids

  • Chung, Hyun-Song
    • English Language & Literature Teaching / v.11 no.2 / pp.135-148 / 2005
  • This paper investigated the effect of audio and visual cues on Korean and Japanese EFL learners' perception of the lateral/retroflex contrast in English. In a perception experiment, the two English consonants /l/ and /r/ were embedded in initial and medial position in nonsense words in the context of the vowels /i, a, u/. Singletons and clusters were included in the speech material. Audio and video recordings were made using a total of 108 items. The items were presented to Korean and Japanese learners of English in three conditions: audio-alone (A), visual-alone (V) and audio-visual presentation (AV). The results showed that there was no evidence of AV benefit for the perception of the /l/-/r/ contrast for either Korean or Japanese learners of English. Korean listeners showed much better identification rates of the /l/-/r/ contrast than Japanese listeners when presented in audio or audio-visual conditions.

Development of an Evaluation Criterion for Educational Programming Language Contents (프로그래밍 언어 교육용 콘텐츠의 평가준거 개발)

  • Kim, Yong-Dae; Lee, Jong-Yun
    • The KIPS Transactions:PartA / v.17A no.6 / pp.289-296 / 2010
  • So far, previous work on evaluating educational contents has concentrated on general-purpose evaluation of educational content; few evaluation methods, however, target educational programming language contents specifically. Therefore, we propose new evaluation criteria for educational programming language contents. The detailed research contents can be summarized as follows. First, we analyze existing work and propose novel evaluation criteria for educational programming language contents. Second, the new evaluation criteria are verified through questionnaires completed by teachers who use Visual Basic educational contents, and a sample programming content is assessed against the criteria. Finally, we expect the proposed evaluation criteria to be used both to evaluate newly developed educational programming language contents and to design evaluation plans for them.

Korean consumers' perceptions of health/functional food claims according to the strength of scientific evidence

  • Kim, Ji-Yeon; Kang, Eun-Jin; Kwon, O-Ran; Kim, Gun-Hee
    • Nutrition Research and Practice / v.4 no.5 / pp.428-432 / 2010
  • In this study, we investigated whether consumers could differentiate between levels of claims and clarified how a visual aid influences consumer understanding of the different claim levels. We interviewed 2,000 consumers in 13 shopping malls on their perception of and confidence in different levels of health claims, using seven-point scales. The average confidence scores given by participants were 4.17 for the probable level and 4.07 for the possible level; the score for the probable level was significantly higher than that for the possible level (P < 0.05). Scores for confidence in claims after reading labels with and without a visual aid were 5.27 and 4.43, respectively; the score for labeling with a visual aid was significantly higher than for labeling without one (P < 0.01). Our results provide compelling evidence that accompanying health claims with qualifying language that differentiates levels of scientific evidence can help consumers understand the strength of the evidence behind those claims. Moreover, when a visual aid was included, consumers perceived the scientific levels more clearly and had greater confidence in their meanings than when it was not. Although this result suggests that consumers react differently to different claim levels, it is not yet clear whether consumers understand the variations in the degree of scientific support.
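
A minimal sketch of the kind of comparison the abstract reports: mean confidence scores on a seven-point scale contrasted between two label conditions. The scores below are hypothetical and the paired t-test is an assumption for illustration; the paper does not publish raw data or name its exact statistical test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # hypothetical subsample of respondents

# Hypothetical 7-point confidence scores for labels with and without a visual aid,
# centred on the means reported in the abstract (5.27 vs. 4.43).
with_aid = np.clip(rng.normal(5.27, 1.0, n), 1, 7)
without_aid = np.clip(rng.normal(4.43, 1.0, n), 1, 7)

t, p = stats.ttest_rel(with_aid, without_aid)
print(f"mean with aid = {with_aid.mean():.2f}, mean without = {without_aid.mean():.2f}")
print(f"paired t = {t:.2f}, p = {p:.3g}")
```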

Comparison of McGurk Effect across Three Consonant-Vowel Combinations in Kannada

  • Devaraju, Dhatri S; U, Ajith Kumar; Maruthy, Santosh
    • Journal of Audiology & Otology / v.23 no.1 / pp.39-48 / 2019
  • Background and Objectives: The influence of the visual stimulus on the auditory component in the perception of auditory-visual (AV) consonant-vowel syllables has been demonstrated in different languages. Inherent properties of the unimodal stimuli are known to modulate AV integration. The present study investigated how the magnitude of the McGurk effect (an outcome of AV integration) varies across three consonant combinations in the Kannada language, and also examined the role of unimodal syllable identification in the magnitude of the McGurk effect. Subjects and Methods: Twenty-eight individuals performed an AV identification task with ba/ga, pa/ka and ma/ṇa consonant combinations in AV congruent, AV incongruent (McGurk combination), audio-alone and visual-alone conditions. Cluster analysis was performed on the identification scores for the incongruent stimuli to classify the individuals into two groups: one with high and the other with low McGurk scores. The differences in the audio-alone and visual-alone scores between these groups were compared. Results: The results showed significantly higher McGurk scores for ma/ṇa than for the ba/ga and pa/ka combinations in both the high and low McGurk score groups. No significant difference was noted between the ba/ga and pa/ka combinations in either group. Identification of /ṇa/ presented in the visual-alone condition correlated negatively with higher McGurk scores. Conclusions: The results suggest that the final percept following AV integration is not exclusively explained by unimodal identification of the syllables; other factors also contribute to the final percept.
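
A minimal sketch of the grouping step described above: participants are clustered into high- and low-McGurk groups from their identification scores on the incongruent stimuli. The abstract does not name the clustering algorithm, so two-cluster k-means and the scores below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical McGurk identification scores (proportion of fused responses)
# for 28 participants, one row per participant.
mcgurk_scores = rng.uniform(0.0, 1.0, size=(28, 1))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mcgurk_scores)

# The cluster with the higher mean score is taken as the high-McGurk group.
high = int(mcgurk_scores[labels == 1].mean() > mcgurk_scores[labels == 0].mean())
print("high-McGurk group size:", int((labels == high).sum()))
print("low-McGurk group size:", int((labels != high).sum()))
```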

Functional MRI of Language: Difference of its Activated Areas and Lateralization according to the Input Modality (언어의 기능적 자기공명영상: 자극방법에 따른 활성화와 편재화의 차이)

  • Ryoo, Jae-Wook; Cho, Jae-Min; Choi, Ho-Chul; Park, Mi-Jung; Choi, Hye-Young; Kim, Ji-Eun; Han, Heon; Kim, Sam-Soo; Jeon, Yong-Hwan; Khang, Hyun-Soo
    • Investigative Magnetic Resonance Imaging / v.15 no.2 / pp.130-138 / 2011
  • Purpose: To compare fMRI of visual and auditory word generation tasks, and to evaluate differences in activated areas and lateralization according to the mode of stimulation. Materials and Methods: Eight right-handed male normal volunteers were included. Functional maps were obtained during auditory and visual word generation tasks in all subjects. Normalized group analyses were performed for each task, with the threshold for significance set at p < 0.05. Activated areas in each task were compared visually and statistically. Results: Both tasks showed left-dominant activations, which were more lateralized in the visual task. Both frontal lobes (Broca's area, premotor area, and SMA) and the left posterior middle temporal gyrus were activated in both tasks. Extensive bilateral temporal activations were noted in the auditory task, while occipital and parietal activations were demonstrated in the visual task. Conclusion: Modality-independent areas can be interpreted as core areas of language function, while modality-specific areas may be associated with processing of the stimuli. The visual task induced more lateralized activation and may be more useful than the auditory task for language studies.
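
A minimal sketch of one common way such lateralization is quantified: a laterality index LI = (L - R) / (L + R), computed from counts of supra-threshold voxels in homologous left and right regions. The paper does not spell out its lateralization measure, and the voxel counts below are hypothetical, so this is an illustration of the general approach rather than the authors' method.

```python
def laterality_index(left_voxels: int, right_voxels: int) -> float:
    """Return LI in [-1, 1]; positive values indicate left-hemisphere dominance."""
    total = left_voxels + right_voxels
    if total == 0:
        return 0.0
    return (left_voxels - right_voxels) / total

# Hypothetical supra-threshold voxel counts (p < 0.05) for the two tasks.
print("visual task LI:  ", round(laterality_index(820, 190), 2))   # more left-lateralized
print("auditory task LI:", round(laterality_index(760, 430), 2))
```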