• Title/Summary/Keyword: audio-visual learning

Search Results: 55

Analysis of learning effects using audio-visual manual of SWAT (SWAT의 시청각 매뉴얼을 통한 학습 효과 분석)

  • Lee, Ju-Yeong;Kim, Tea-Ho;Ryu, Ji-Chul;Kang, Hyun-Woo;Kum, Dong-Hyuk;Woo, Won-Hee;Jang, Chun-Hwa;Choi, Jong-Dae;Lim, Kyoung-Jae
    • Korean Journal of Agricultural Science / v.38 no.4 / pp.731-737 / 2011
  • In modern society, GIS-based decision support systems have been used to evaluate environmental issues and changes, owing to the spatial and temporal analysis capabilities of GIS. Without a proper manual for these systems, however, their desired goals cannot be achieved. In this study, an audio-visual SWAT tutorial system was developed to evaluate its effectiveness in learning the SWAT model. Learning effects were analyzed through an in-class demonstration and a survey. The survey was conducted with 3rd-grade students with and without the audio-visual materials, using 30 questionnaires composed of 3 items on respondent characteristics, 5 items on the effects of audio-visual materials, and 12 items on the effect of learning the model with or without the manual. The group without the audio-visual manual scored 2.98 out of 5, while the group with the audio-visual manual scored 4.05 out of 5, indicating better content delivery with audio-visual learning. As this study shows, audio-visual learning materials should be developed and used for various computer-based modeling systems.

Changes of the Prefrontal EEG(Electroencephalogram) Activities according to the Repetition of Audio-Visual Learning (시청각 학습의 반복 수행에 따른 전두부의 뇌파 활성도 변화)

  • Kim, Yong-Jin;Chang, Nam-Kee
    • Journal of The Korean Association For Science Education / v.21 no.3 / pp.516-528 / 2001
  • In educational research, measuring the EEG (brain waves) can be a useful method for studying the functional state of the brain during learning behaviour. This study investigated changes in neuronal response over four repetitions of audio-visual learning. EEG data at the prefrontal sites (Fp1, Fp2) were obtained from twenty 8th-grade subjects and analysed quantitatively using an FFT (fast Fourier transform) program. The results were as follows: 1) In the first audio-visual learning session, the activities of the β2 (20-30 Hz) and β1 (14-19 Hz) waves increased strongly, while the activities of the θ (4-7 Hz) and α (8-13 Hz) waves decreased compared with the baselines. 2) With repeated audio-visual learning, the activities of the β2 and β1 waves decreased gradually after the 1st repetition, the β2 wave changing more than the β1 wave. 3) The activity of the α wave decreased smoothly with repeated audio-visual learning, and the activity of the θ wave decreased sharply after the second repetition. 4) The β and θ waves together showed high activity in the 2nd audio-visual learning session (one repetition), and learning achievement increased markedly after the 2nd session. 5) The right prefrontal site (Fp2) showed higher activation than the left (Fp1) in the first audio-visual learning session, but there were no significant differences between the right and left prefrontal EEG activities in the repeated sessions. Based on these findings, we conclude that habituation of the neuronal response appears in repetitive audio-visual learning and that brain hemisphericity can be changed by learning experience. In addition, it is suggested that a single repetition of audio-visual learning is effective for improving learning achievement and activating brain function.
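
The band-power analysis described in this abstract can be sketched as follows. This is a minimal illustration, not the study's actual FFT program; the sampling rate (256 Hz) and the use of relative power within the 4-30 Hz range are assumptions for the example.

```python
import numpy as np

def band_powers(signal, fs=256.0):
    """Relative spectral power in the EEG bands named in the study.

    `signal` is a 1-D array of raw EEG samples; `fs` is the sampling
    rate in Hz (256 Hz is an assumed value, not stated in the abstract).
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = {"theta": (4, 7), "alpha": (8, 13),
             "beta1": (14, 19), "beta2": (20, 30)}
    total = power[(freqs >= 4) & (freqs <= 30)].sum()
    # Fraction of 4-30 Hz power falling in each named band.
    return {name: power[(freqs >= lo) & (freqs <= hi)].sum() / total
            for name, (lo, hi) in bands.items()}
```

Comparing these band fractions between the baseline and each repetition, as the study does, would then reveal the reported habituation of the β waves.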

A STUDY ON CAI AUDIO SYSTEM CONTROL BY PERSONAL COMPUTER (CAI 음성 관리매체의 퍼스날 컴퓨터 제어에 관한 연구)

  • Kho, Dae-Ghon;Park, Sang-Hee
    • Proceedings of the KIEE Conference / 1989.07a / pp.486-490 / 1989
  • In this paper, a program that controls an auto-audio medium (a cassette deck) with a 16-bit personal computer is studied in order to carry out audio-visual learning in CAI. The results of this study are as follows. 1. Audio-visual learning is executed efficiently in CAI. 2. The access rate of voice information relative to text/image information is about 98% in "play" and 60% in "fast forward". 3. In "fast forward", the quality of the cassette tape affects the voice-information access rate in proportion to the motor driving speed. 4. The synchronizing signal may be misread because of defects in the tape itself.

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.87-96 / 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods. This has led to massive volumes of web-based lecture videos, so indexing and retrieving a lecture video, or a topic within one, has proved to be an exceptionally challenging problem. Many techniques in the literature are either visual- or audio-based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work focuses on both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text on the slides is extracted using the proposed Merged Bounding Box (MBB) text detector, and the text of the audio component is extracted using Google Speech Recognition (GSR) technology. This hybrid approach generates indexing keywords from the merged transcripts of both the video and audio extractors. Search within the indexed documents is optimized using Naïve Bayes (NB) classification and K-Means clustering: the optimized search retrieves results by searching only the relevant document cluster in the predefined categories, not the whole lecture-video corpus. The work is carried out on a dataset generated by assigning categories to lecture-video transcripts gathered from e-learning portals. Search performance is assessed by accuracy and time taken, and the improved accuracy of the proposed indexing technique is compared with the accepted chain-indexing technique.
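
The cluster-restricted search idea in this abstract can be sketched with scikit-learn. This is a simplified stand-in for the paper's pipeline (it uses K-Means plus TF-IDF cosine ranking and omits the Naïve Bayes category step); the class name and toy documents are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

class ClusteredSearch:
    """Rank documents only within the cluster nearest to the query,
    instead of scanning the whole corpus."""

    def __init__(self, docs, n_clusters=2):
        self.docs = docs
        self.vec = TfidfVectorizer()
        self.X = self.vec.fit_transform(docs)
        self.km = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=0).fit(self.X)

    def search(self, query, top_k=3):
        q = self.vec.transform([query])
        cluster = self.km.predict(q)[0]          # pick the relevant cluster
        idx = [i for i, c in enumerate(self.km.labels_) if c == cluster]
        sims = cosine_similarity(q, self.X[idx]).ravel()
        ranked = sorted(zip(idx, sims), key=lambda p: -p[1])
        return [self.docs[i] for i, _ in ranked[:top_k]]
```

Because ranking touches only one cluster's documents, query time scales with the cluster size rather than the corpus size, which is the speed-up the abstract claims.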

Design of Music Learning Assistant Based on Audio Music and Music Score Recognition

  • Mulyadi, Ahmad Wisnu;Machbub, Carmadi;Prihatmanto, Ary S.;Sin, Bong-Kee
    • Journal of Korea Multimedia Society / v.19 no.5 / pp.826-836 / 2016
  • Mastering a musical instrument is not an easy task for an unskilled beginner. It requires playing every note correctly and maintaining the tempo accurately. Any piece of music comes in two forms: a music score and its rendition as audio. The proposed method assists beginning players in both aspects by employing two popular pattern-recognition methods for audio-visual analysis: a support vector machine (SVM) for music-score recognition and a hidden Markov model (HMM) for tracking the audio performance. With proper synchronization of the two results, the proposed music learning assistant system can give useful feedback to self-training beginners.

The use of audio-visual aids and hyper-pronunciation method in teaching English consonants to Japanese college students

  • Todaka, Yuichi
    • Proceedings of the KSPS conference / 1996.10a / pp.149-154 / 1996
  • Since the 1980s, a number of professionals in the ESL/EFL field have investigated the role of pronunciation in the ESL/EFL curriculum. Applying insights gained from second-language acquisition research, these efforts have focused on integrating pronunciation teaching and learning into the communicative curriculum, with a shift towards overall intelligibility as the primary goal. The present study reports on the efficacy of audio-visual aids and the hyper-pronunciation training method in teaching the production of English consonants to Japanese college students. The talk focuses on the implications of the present study, and the presenter makes suggestions for teaching pronunciation to Japanese learners.

Learners' Perceptions toward Non-speech Sounds Designed in e-Learning Contents (이러닝 콘텐츠에서 비음성 사운드에 대한 학습자 인식 분석)

  • Kim, Tae-Hyun;Rha, Il-Ju
    • The Journal of the Korea Contents Association / v.10 no.7 / pp.470-480 / 2010
  • Although e-learning contents contain audio materials as well as visual materials, research on their design has concentrated on visual design. Given that non-speech sounds, one type of audio material, can promptly provide feedback on learners' responses and guide the learning process, systematic design of non-speech sounds is needed. The purpose of this study is therefore to investigate learners' perceptions of the non-speech sounds in e-learning contents using multidimensional scaling. Eleven non-speech sounds were selected from those designed for Korea Open Courseware. Sixty-six juniors at university A rated the degree of similarity among the 11 sounds, and the learners' perceptions were represented in a multidimensional space. The result shows that learners perceive non-speech sounds distinctly according to their length and according to whether their atmosphere is positive or negative.
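
The multidimensional-scaling step in this study can be sketched with scikit-learn. The dissimilarity matrix below is hypothetical (the study used 11 sounds and 66 raters); the sketch only shows how pairwise similarity judgments become points in a 2-D perceptual space.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical mean dissimilarity matrix for 4 sounds: sounds 0-1 and
# 2-3 were judged similar to each other, the two pairs dissimilar.
D = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.8, 0.9],
              [0.9, 0.8, 0.0, 0.3],
              [0.8, 0.9, 0.3, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # one 2-D point per sound
```

Axes of the resulting space are then interpreted post hoc, which is how the study arrived at its "length" and "positive/negative atmosphere" dimensions.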

Emotion Recognition of Low Resource (Sindhi) Language Using Machine Learning

  • Ahmed, Tanveer;Memon, Sajjad Ali;Hussain, Saqib;Tanwani, Amer;Sadat, Ahmed
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.369-376 / 2021
  • One of the most active areas of research in affective computing and signal processing is emotion recognition. This paper proposes emotion recognition for the low-resource Sindhi language. The uniqueness of this work is that it examines the emotions of a language for which there is currently no publicly accessible dataset. The proposed effort provides a dataset named MAVDESS (Mehran Audio-Visual Database of Emotional Speech in Sindhi) for the academic community of the Sindhi language, which is mainly spoken in Pakistan; almost no generic data for such languages is available for machine learning. Furthermore, the various emotions of Sindhi in MAVDESS are analysed and annotated using paralinguistic features such as pitch, volume, and base, with toolkits such as openSMILE and scikit-learn, and classification schemes such as LR, SVC, DT, and KNN, implemented in Python for training. The dataset can be accessed via https://doi.org/10.5281/zenodo.5213073.
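
The classifier comparison named in this abstract (LR, SVC, DT, KNN over acoustic features) can be sketched with scikit-learn. The features below are random placeholders standing in for openSMILE output; only the comparison pattern reflects the abstract, not its data or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for openSMILE output (pitch, energy, ...)
# for two hypothetical emotion classes, 40 clips each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(3, 1, (40, 3))])
y = np.array([0] * 40 + [1] * 40)

models = {"LR": LogisticRegression(),
          "SVC": SVC(),
          "DT": DecisionTreeClassifier(random_state=0),
          "KNN": KNeighborsClassifier()}
# Mean 5-fold cross-validated accuracy per classification scheme.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

Swapping in real openSMILE feature vectors and the MAVDESS emotion labels would reproduce the kind of scheme-by-scheme comparison the paper describes.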

An Audio-Visual Teaching Aid (AVTA) with Scrolling Display and Speech to Text over the Internet

  • Davood Khalili;Chung, Wan-Young
    • Proceedings of the IEEK Conference / 2003.07c / pp.2649-2652 / 2003
  • In this paper, an Audio-Visual Teaching Aid (AVTA) for use in a classroom and over the Internet is presented. The system, which was designed and tested, consists of a wireless microphone system, speech-to-text conversion software, a noise-filtering circuit, and a computer. An IBM-compatible PC with a sound card, a network interface card, a web browser, and a voice-and-text messenger service was used to provide slightly delayed text and voice over the Internet for remote learning, while providing scrolling text from a real-time lecture in a classroom. The motivation for designing this system was to aid Korean students who may have difficulty with listening comprehension while having fairly good reading ability. The application of this system is twofold: it helps students in a class to view and listen to a lecture, and it serves as a vehicle for remote access (audio and text) to a classroom lecture. The project provides a simple and low-cost solution to remote learning, allows a student to access the classroom in emergencies when the student cannot attend a class, and lets students capture a teacher's lecture in audio and text form without being present in class or taking many notes. This system will therefore help students in many ways.

Development of Processing System for Audio-vision System Based on Auditory Input (청각을 이용한 시각 재현 시스템의 개발)

  • Kim, Jung-Hun;Kim, Deok-Kyu;Won, Chul-Ho;Lee, Jong-Min;Lee, Hee-Jung;Lee, Na-Hee;Yoon, Su-Young
    • Journal of Biomedical Engineering Research / v.33 no.1 / pp.25-31 / 2012
  • An audio-vision system was developed for visually impaired people and its usability was verified. Ten normal volunteers were included in the subject group; their mean age was 28.8 years and the male-to-female ratio was 7:3. Usability was verified as follows. First, volunteers learned to judge the distance of obstacles and to discriminate up from down. After training on the audio-vision system, indoor and outdoor walking tests were performed. The tests were scored on up-down and lateral discrimination, distance recognition, and walking without collision, each parameter on a 1-to-5 scale. The overall result was 93.5 ± SD (range, 86 to 100) out of 100. In this study, we converted visual information to auditory information with the audio-vision system and verified the possibility of applying it to the daily life of visually impaired people.