• Title/Abstract/Keyword: audio-visual learning

Search results: 55 (processing time: 0.02 s)

SWAT의 시청각 매뉴얼을 통한 학습 효과 분석 (Analysis of learning effects using audio-visual manual of SWAT)

  • 이주영;김태호;류지철;강현우;금동혁;우원희;장춘화;최중대;임경재
    • 농업과학연구
    • /
    • Vol. 38, No. 4
    • /
    • pp.731-737
    • /
    • 2011
  • In modern society, GIS-based decision support systems have been used to evaluate environmental issues and changes, thanks to the spatial and temporal analysis capabilities of GIS. However, without a proper manual for these systems, their intended goals cannot be achieved. In this study, an audio-visual SWAT tutorial system was developed to evaluate its effectiveness in learning the SWAT model. Learning effects were analyzed after an in-class demonstration and a survey. The survey was administered to 3rd-grade students with and without the audio-visual materials using 30 questionnaires, composed of 3 items on respondent background, 5 items on the effects of audio-visual materials, and 12 items on the effects of learning the model with or without the manual. The group without the audio-visual manual scored 2.98 out of 5, while the group with it scored 4.05 out of 5, indicating better content delivery with audio-visual learning. As shown in this study, audio-visual learning materials should be developed and used for various computer-based modeling systems.

시청각 학습의 반복 수행에 따른 전두부의 뇌파 활성도 변화 (Changes of the Prefrontal EEG(Electroencephalogram) Activities according to the Repetition of Audio-Visual Learning)

  • 김용진;장남기
    • 한국과학교육학회지
    • /
    • Vol. 21, No. 3
    • /
    • pp.516-528
    • /
    • 2001
  • EEG measurement during learning behavior is a useful method for studying brain function in real time, and the prefrontal lobe plays an important role in orienting responses to novelty and in thinking activity. In this study, new audio-visual learning material was presented to twenty 2nd-year middle school students, and the prefrontal (Fp1, Fp2) EEG was recorded over four repetitions of the learning task and analyzed quantitatively using the fast Fourier transform (FFT). The results were as follows. 1) During the first audio-visual learning of new content, the activity of the fast rhythms β2 (20-30 Hz) and β1 (13-20 Hz) increased relative to baseline, while the activity of the slow rhythms θ (4-7 Hz) and α (8-13 Hz) decreased. 2) β2 and β1 activity gradually decreased after one repetition, with β2 changing more than β1 across repetitions. 3) With repeated audio-visual learning, α activity declined gently below its baseline level, and θ activity decreased after two repetitions. 4) The greatest improvement in academic achievement appeared in the second audio-visual session (the first repetition), when both β and θ activity were high. 5) During the first session the right prefrontal lobe (Fp2) was more active than the left (Fp1), but with repeated sessions no hemispheric dominance was distinguishable. These results suggest that habituation of neural responses occurs in mental activity related to school learning and that hemispheric dominance may change with learning experience. They also suggest that a single repetition is appropriate for improving achievement through efficient use of brain function in audio-visual learning.
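  The band-power analysis described in this abstract (FFT of the prefrontal EEG, then comparison of θ, α, β1, and β2 activity) can be sketched as follows; the sampling rate, band edges, and synthetic test signal are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def band_powers(signal, fs, bands=None):
        """Mean spectral power of an EEG trace in standard frequency bands."""
        if bands is None:
            bands = {"theta": (4, 7), "alpha": (8, 13),
                     "beta1": (13, 20), "beta2": (20, 30)}
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        return {name: power[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in bands.items()}

    # Synthetic 10 s trace sampled at 256 Hz: a strong 10 Hz (alpha) component,
    # weaker 25 Hz (beta2) activity, and a little noise.
    fs = 256
    t = np.arange(0, 10, 1.0 / fs)
    rng = np.random.default_rng(0)
    eeg = (np.sin(2 * np.pi * 10 * t)
           + 0.3 * np.sin(2 * np.pi * 25 * t)
           + 0.1 * rng.standard_normal(t.size))
    bp = band_powers(eeg, fs)
    ```

  Comparing `bp` across repeated sessions, as the study does, would show which bands rise or fall with habituation.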

  • PDF

CAI 음성 관리매체의 퍼스날 컴퓨터 제어에 관한 연구 (A STUDY ON CAI AUDIO SYSTEM CONTROL BY PERSONAL COMPUTER)

  • 고대곤;박상희
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1989년도 하계종합학술대회 논문집
    • /
    • pp.486-490
    • /
    • 1989
  • In this paper, a program that controls an auto-audio medium (a cassette deck) with a 16-bit personal computer is studied in order to deliver audio-visual learning in CAI. The results of this study are as follows. 1. Audio-visual learning is executed efficiently in CAI. 2. The access rate of voice information to text/image information is about 98% in "play" and about 60% in "fast forward". 3. In "fast forward", the quality of the cassette tape affects the voice-information access rate in proportion to the motor driving speed. 4. The synchronizing signal may be misread because of defects in the tape itself.

  • PDF

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 8
    • /
    • pp.87-96
    • /
    • 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods. This has led to massive volumes of web-based lecture videos. Indexing and retrieval of a lecture video or a lecture video topic has thus proved to be an exceptionally challenging problem. Many techniques reported in the literature are either visual- or audio-based, but not both. Since the effects of both the visual and audio components are equally important for content-based indexing and retrieval, the current work focuses on both components. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text from the slides is extracted using the proposed Merged Bounding Box (MBB) text detector. The audio-component text extraction is done using Google Speech Recognition (GSR) technology. This hybrid approach generates the indexing keywords from the merged transcripts of both the video and audio component extractors. The search within the indexed documents is optimized based on Naïve Bayes (NB) classification and K-Means clustering models. This optimized search retrieves results by searching only the relevant document cluster in the predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset generated by assigning categories to lecture video transcripts gathered from e-learning portals. The performance of the search is assessed by accuracy and time taken. Further, the improved accuracy of the proposed indexing technique is compared with the established chain indexing technique.
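  The optimized search this abstract describes (cluster the transcript corpus, route a query to one cluster with a Naïve Bayes classifier, then rank only within that cluster) might be sketched as follows; the toy transcripts and two-cluster setup are illustrative assumptions, not the paper's data or parameters.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy stand-in for merged lecture transcripts (slide text + speech text).
    docs = [
        "gradient descent optimizes neural network weights",
        "backpropagation computes the gradient of neural network loss",
        "sql joins combine relational database tables",
        "a database index speeds up query execution over tables",
    ]

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)

    # Cluster the corpus so a query is matched against one cluster, not all docs.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # Train NB to route queries to a cluster (labels come from the clustering).
    nb = MultinomialNB().fit(X, km.labels_)

    def search(query, top_k=1):
        q = vec.transform([query])
        cluster = nb.predict(q)[0]                   # route to one cluster
        idx = [i for i, c in enumerate(km.labels_) if c == cluster]
        sims = cosine_similarity(q, X[idx]).ravel()  # rank within cluster only
        ranked = sorted(zip(idx, sims), key=lambda p: -p[1])
        return [docs[i] for i, _ in ranked[:top_k]]

    result = search("how does backpropagation compute the gradient")
    ```

  Because similarity is computed only inside the routed cluster, retrieval time scales with cluster size rather than corpus size, which is the optimization the paper claims.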

Design of Music Learning Assistant Based on Audio Music and Music Score Recognition

  • Mulyadi, Ahmad Wisnu;Machbub, Carmadi;Prihatmanto, Ary S.;Sin, Bong-Kee
    • 한국멀티미디어학회논문지
    • /
    • Vol. 19, No. 5
    • /
    • pp.826-836
    • /
    • 2016
  • Mastering a musical instrument is not an easy task for an unskilled beginner. It requires playing every note correctly and maintaining the tempo accurately. Any piece of music comes in two forms: a music score and its rendition as audio. The proposed method of assisting beginning players in both aspects employs two popular pattern recognition methods for audio-visual analysis: the support vector machine (SVM) for music score recognition and the hidden Markov model (HMM) for tracking an audio music performance. With proper synchronization of the two results, the proposed music learning assistant system can give useful feedback to self-training beginners.
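  One way HMM-based performance tracking of this kind can work is to model score positions as a left-to-right HMM and align audio frames to it with the Viterbi algorithm; everything below (the pitches, probabilities, and toy emission model) is an illustrative assumption rather than the paper's actual design.

    ```python
    import numpy as np

    # Score positions as a left-to-right HMM: state i = i-th note in the score.
    score = [60, 62, 64, 65]          # hypothetical score as MIDI pitches
    n = len(score)

    # Transition: stay on the same note or advance to the next one.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 0.5
        if i + 1 < n:
            A[i, i + 1] = 0.5
    A[-1, -1] = 1.0

    def emission(obs_pitch):
        """Likelihood of hearing obs_pitch in each score state (noisy pitch)."""
        return np.array([0.9 if p == obs_pitch else 0.1 / (n - 1) for p in score])

    def viterbi(observations):
        """Most likely score position for each audio frame."""
        logA = np.log(A + 1e-12)
        delta = np.log(emission(observations[0])) + np.log([1.0] + [1e-12] * (n - 1))
        back = []
        for obs in observations[1:]:
            cand = delta[:, None] + logA
            back.append(cand.argmax(axis=0))
            delta = cand.max(axis=0) + np.log(emission(obs))
        path = [int(delta.argmax())]
        for b in reversed(back):
            path.append(int(b[path[-1]]))
        return path[::-1]

    # A performance where the player dwells on each note (one wrong pitch mid-way).
    frames = [60, 60, 62, 61, 64, 64, 65]
    positions = viterbi(frames)
    ```

  The decoded positions advance monotonically through the score even across the wrong pitch, which is what lets such a tracker flag errors without losing its place.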

The use of audio-visual aids and hyper-pronunciation method in teaching English consonants to Japanese college students

  • Todaka, Yuichi
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 1996년도 10월 학술대회지
    • /
    • pp.149-154
    • /
    • 1996
  • Since the 1980s, a number of professionals in the ESL/EFL field have investigated the role of pronunciation in the ESL/EFL curriculum. Applying insights gained from second language acquisition research, these efforts have focused on integrating pronunciation teaching and learning into the communicative curriculum, with a shift toward overall intelligibility as the primary goal of pronunciation teaching and learning. The present study reports on the efficacy of audio-visual aids and the hyper-pronunciation training method in teaching the production of English consonants to Japanese college students. The talk focuses on the implications of the present study, and the presenter makes suggestions for teaching pronunciation to Japanese learners.

  • PDF

이러닝 콘텐츠에서 비음성 사운드에 대한 학습자 인식 분석 (Learners' Perceptions toward Non-speech Sounds Designed in e-Learning Contents)

  • 김태현;나일주
    • 한국콘텐츠학회논문지
    • /
    • Vol. 10, No. 7
    • /
    • pp.470-480
    • /
    • 2010
  • Although e-learning contents include various auditory materials along with visual materials, research on the design of auditory information in learning materials has been extremely limited. Given that non-speech sounds, one type of auditory information, can provide learners with immediate feedback and prompts for action, their systematic design is required. This study therefore used multidimensional scaling (MDS) to explore empirically how learners perceive the non-speech sounds designed into e-learning contents. Eleven representative non-speech sounds were selected from the e-learning contents provided by the Korea Education and Research Information Service. Sixty-six third-year students at University A rated the degree of similarity between the eleven sounds, and the results were mapped into a multidimensional space. The analysis showed that learners distinguished the sounds by their length and by the positive or negative mood they conveyed.
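  The perceptual-mapping step (similarity ratings embedded in a low-dimensional space via multidimensional scaling) can be sketched as below; the 4×4 dissimilarity matrix is a made-up stand-in for the study's 11-sound ratings.

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical mean dissimilarity ratings among 4 sounds (0 = identical):
    # sounds 0 and 1 are rated alike, sounds 2 and 3 are rated alike.
    D = np.array([
        [0.0, 1.0, 5.0, 6.0],
        [1.0, 0.0, 6.0, 5.0],
        [5.0, 6.0, 0.0, 1.5],
        [6.0, 5.0, 1.5, 0.0],
    ])

    # Embed the rated dissimilarities in 2-D, as in the study's perceptual map.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)

    # Sounds rated as similar should land close together in the map.
    d01 = np.linalg.norm(coords[0] - coords[1])
    d02 = np.linalg.norm(coords[0] - coords[2])
    ```

  Interpreting the axes of such a map (e.g. as sound length and mood, as the study reports) is done by inspecting which stimuli cluster along each dimension.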

Emotion Recognition of Low Resource (Sindhi) Language Using Machine Learning

  • Ahmed, Tanveer;Memon, Sajjad Ali;Hussain, Saqib;Tanwani, Amer;Sadat, Ahmed
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 8
    • /
    • pp.369-376
    • /
    • 2021
  • One of the most active areas of research in affective computing and signal processing is emotion recognition. This paper proposes emotion recognition for the low-resource Sindhi language. The uniqueness of this work is that it examines the emotions of a language for which no publicly accessible dataset currently exists. The proposed effort provides a dataset named MAVDESS (Mehran Audio-Visual Database of Emotional Speech in Sindhi) for the academic community of the Sindhi language, which is mainly spoken in Pakistan but for which little generic machine learning data is accessible. Furthermore, various emotions of the Sindhi language in MAVDESS were analyzed and annotated using features such as pitch, volume, and base, with toolkits such as openSMILE and scikit-learn, and classification schemes such as LR, SVC, DT, and KNN were trained and evaluated in Python. The dataset can be accessed via https://doi.org/10.5281/zenodo.5213073.
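  A minimal version of the classifier comparison (LR, SVC, DT, KNN on per-utterance acoustic features) might look like the following; the synthetic pitch/energy features stand in for openSMILE output and are not the MAVDESS data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for per-utterance features (mean pitch in Hz, energy);
    # real work would extract such features with openSMILE from the recordings.
    rng = np.random.default_rng(0)
    n = 200
    happy = rng.normal([220.0, 0.8], 0.2 * np.array([220.0, 0.8]), size=(n, 2))
    sad = rng.normal([150.0, 0.4], 0.2 * np.array([150.0, 0.4]), size=(n, 2))
    X = np.vstack([happy, sad])
    y = np.array([1] * n + [0] * n)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, test_size=0.25)

    # The four classifier families named in the abstract.
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "SVC": SVC(),
        "DT": DecisionTreeClassifier(random_state=0),
        "KNN": KNeighborsClassifier(),
    }
    scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
              for name, m in models.items()}
    ```

  Comparing held-out accuracy across the four families, as above, is the standard way to report which scheme suits a new emotional-speech corpus.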

An Audio-Visual Teaching Aid (AVTA) with Scrolling Display and Speech to Text over the Internet

  • Davood Khalili;Chung, Wan-Young
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 V
    • /
    • pp.2649-2652
    • /
    • 2003
  • In this paper, an Audio-Visual Teaching Aid (AVTA) for use in a classroom and over the Internet is presented. The system, which was designed and tested, consists of a wireless microphone system, speech-to-text conversion software, a noise-filtering circuit, and a computer. An IBM-compatible PC with a sound card, a network interface card, a web browser, and a voice and text messenger service was used to provide slightly delayed text and voice over the Internet for remote learning, while providing scrolling text from a real-time lecture in a classroom. The motivation for designing this system was to aid Korean students who may have difficulty with listening comprehension while having fairly good reading ability. The application of this system is twofold: it helps students in a class to view and listen to a lecture, and it serves as a vehicle for remote access (audio and text) to a classroom lecture. The project provides a simple and low-cost solution for remote learning and also gives a student access to the classroom in emergency situations when the student cannot attend class. In addition, such a system allows students to capture a teacher's lecture in audio and text form without needing to be present in class or take many notes. This system will therefore help students in many ways.

  • PDF

청각을 이용한 시각 재현 시스템의 개발 (Development of Processing System for Audio-vision System Based on Auditory Input)

  • 김정훈;김덕규;원철호;이종민;이희중;이나희;윤수영
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 33, No. 1
    • /
    • pp.25-31
    • /
    • 2012
  • The audio-vision system was developed for visually impaired people, and its usability was verified. Ten normal volunteers (mean age 28.8 years; male-to-female ratio 7:3) took part as subjects. The usability of the audio-vision system was verified as follows. First, the volunteers learned to judge the distance of obstacles and to discriminate up from down. After learning the audio-vision system, indoor and outdoor walking tests were performed. The tests were scored on the ability to discriminate up-down and lateral directions, recognize distance, and walk without collision, with each parameter scored from 1 to 5. The results averaged 93.5 out of 100 (range, 86 to 100). In this study, we converted visual information into auditory information with the audio-vision system and verified the possibility of applying it to the daily life of visually impaired people.
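  One common way such sensory-substitution systems convert sight to sound is to sweep the image column by column, mapping pixel row to frequency and brightness to loudness; the sketch below uses assumed parameters and is not necessarily the paper's actual encoding.

    ```python
    import numpy as np

    def image_to_audio(image, fs=8000, col_dur=0.05, f_lo=200.0, f_hi=2000.0):
        """Scan image columns left to right; each pixel row maps to a sine
        frequency (top = high pitch), each pixel's brightness to amplitude."""
        rows, cols = image.shape
        freqs = np.linspace(f_hi, f_lo, rows)          # top row -> highest pitch
        t = np.arange(int(fs * col_dur)) / fs
        chunks = []
        for c in range(cols):
            tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
            chunks.append(tones.sum(axis=0))
        audio = np.concatenate(chunks)
        peak = np.abs(audio).max()
        return audio / peak if peak > 0 else audio     # normalize to [-1, 1]

    # A 4x4 image with one bright pixel at top-left: sound appears only in the
    # first column's time slice, at the highest mapped frequency.
    img = np.zeros((4, 4))
    img[0, 0] = 1.0
    wave = image_to_audio(img)
    ```

  A trained listener can decode such a sweep back into spatial layout, which is the skill the walking tests in the study evaluate.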