• Title/Summary/Keyword: Video Learning


SOA-based Video Service Platform Model Design for Military e-Learning Service (군 원격교육체계를 위한 SOA기반 동영상서비스 플랫폼모델 설계)

  • Kim, Kyung-Rog;Moon, Nam-Mee
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.5 / pp.24-32 / 2011
  • As the convergence of defense and information technology accelerates, there is a need for innovative change in the military e-Learning service system. In other words, the need has grown for system integration based on standards and interoperability in order to evolve into network-centric information and knowledge services. This study introduces a direction for integrating the military e-Learning service system around an SOA-based video content service, in terms of both an operating model and an operating system. SOA has advantages in integrating and extending units of work as processes. Using SOA, we define the video service platform architecture and a business model based on the Imprimatur model. On this basis, we define the roles of the actors in the video content service at each step of the operating model, namely the production model, the brokerage model, and the consumption model. For the operating system, we define the functions and data needed to control and handle the video content service based on the operating model.
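
The operating model above splits the video content service into production, brokerage, and consumption roles. Purely as an illustration, and not taken from the paper, the following Python sketch shows how such roles might be expressed as SOA-style service interfaces; every class, method, and field name here is hypothetical.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class VideoContent:
    """Minimal metadata record exchanged between the services (hypothetical fields)."""
    content_id: str
    title: str
    uri: str


class ProductionService(ABC):
    """Production role: registers newly produced lecture videos."""
    @abstractmethod
    def register_content(self, content: VideoContent) -> str: ...


class BrokerageService(ABC):
    """Brokerage role: catalogs registered content and resolves content requests."""
    @abstractmethod
    def list_content(self) -> list[VideoContent]: ...

    @abstractmethod
    def find_content(self, content_id: str) -> VideoContent: ...


class ConsumptionService(ABC):
    """Consumption role: delivers a playable stream location to the learner."""
    @abstractmethod
    def get_stream_uri(self, content_id: str) -> str: ...
```

In an actual SOA deployment each interface would sit behind a standard service contract (for example SOAP/WSDL or REST), which is what enables the standards-based integration and interoperability the abstract calls for.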

Collaborative Recommendation of Online Video Lectures in e-Learning System (이러닝 시스템에서 온라인 비디오 강좌의 협업적 추천 방법)

  • Ha, In-Ay;Song, Gyu-Sik;Kim, Heung-Nam;Jo, Geun-Sik
    • Journal of the Korea Society of Computer and Information / v.14 no.9 / pp.85-94 / 2009
  • It is becoming increasingly difficult for learners to find the lectures they are looking for. In turn, the ability to find a particular lecture accurately and promptly has become an important issue in e-Learning. To deal with this issue, in this paper we present a collaborative approach that provides personalized recommendations of online video lectures. The proposed approach first identifies candidate video lectures that will be of interest to a certain user. Partitioned collaborative filtering is employed to generate neighbor learners and predict learners' preferences for the lectures. Thereafter, attribute-based filtering is employed to recommend a final list of video lectures that the target user will like the most.
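
The approach combines partitioned collaborative filtering with attribute-based filtering. The sketch below is only a simplified approximation, not the authors' implementation: it uses plain user-based collaborative filtering on a toy rating matrix for the prediction step and reduces the attribute filter to a tag match; all data and names are invented for illustration.

```python
import numpy as np

# Rows = learners, columns = video lectures; 0 means "not rated" (toy data).
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two rating vectors (0 when either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(user: int, item: int, k: int = 2) -> float:
    """Predict a rating from the k most similar learners who rated the item."""
    sims = [(cosine_sim(ratings[user], ratings[u]), u)
            for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    num = sum(s * ratings[u, item] for s, u in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else 0.0

# Candidate lectures = unrated items; rank them by predicted preference,
# then keep only those whose attributes match the learner's interests.
lecture_tags = {0: {"java"}, 1: {"python"}, 2: {"python", "ml"}, 3: {"networks"}}
interests = {"python", "ml"}

user = 1
candidates = [i for i in range(ratings.shape[1]) if ratings[user, i] == 0]
scored = [(predict(user, i), i) for i in candidates if lecture_tags[i] & interests]
print(sorted(scored, reverse=True))  # final recommendation list for the learner
```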

A Study in the Preference of e-Learning Contents Delivery Types on Web Information Search Literacy in the case of Agricultural High School (농업계 고등학교 학생들의 정보검색 능력에 따른 이러닝 콘텐츠 유형 선호도 연구)

  • Yu, Byeong-Min;Kim, Su-Wook;Park, Sung-Youl;Choi, Jun-Sik
    • Journal of Agricultural Extension & Community Development / v.16 no.2 / pp.463-486 / 2009
  • The purpose of this study was to identify differences in the preference for e-Learning content delivery types according to the information search abilities of agricultural high school students. The content delivery types were limited to three kinds: HTML, video, and text. The results of this study are summarized as follows. Preference for e-Learning content delivery types differed with information search ability: the group with high information search ability mostly preferred the text delivery type, whereas the group with low information search ability preferred the video delivery type. The results support our belief that preferences for e-Learning delivery types can differ with students' information search abilities. We suggest that the delivery types used in e-Learning should be chosen based on the students, not on the designers and developers.


NCS Course Design and Result Analysis of Class Application

  • Lee, Soonmi;Park, Hea-Sook
    • Journal of the Korea Society of Computer and Information / v.21 no.9 / pp.157-163 / 2016
  • In this paper, we focused on an NCS (National Competency Standards) course within the development process of an NCS-based curriculum and designed and developed the 'Digital Color Correction' course. The course emerged from the development of an NCS-based curriculum in a university department that aims to educate the video-broadcasting experts who will lead the advanced digital age. The course developed in this paper follows the NCS criteria and is designed in steps: course profile, instruction, and evaluation. We also analyzed the learning effects after applying it in class and compared the NCS class with a non-NCS class. As a result of the comparison, the NCS class outperformed the non-NCS class in terms of students' learning attitudes and the evaluation method.

Effects of Distance Education via Synchronous Video Conferencing on Attitude Changes of Korean and Japanese Students

  • LEE, Sangsoo
    • Educational Technology International / v.10 no.2 / pp.107-125 / 2009
  • This study seeks to prove three points. The first is to examine changes in international attitudes arising from actual experience with synchronous international distance learning. The second is to examine the effectiveness of a synchronous international distance learning system. The final point is to compare international attitudes among middle school and undergraduate students in Korea and Japan. The study used DVTS as the audio and video communication tool and an automatically translating chat as the text communication tool. This combination of communication tools was very effective in enabling students from both countries to communicate during international collaborative learning activities. The study found several interesting patterns of attitude change. In the whole-category analysis, there were positive changes in four categories of international attitudes: consciousness of foreign countries, consideration of others' viewpoints, motivation for international education, and recognition of the counterpart country. However, there was no change in the nationality category.

An Explainable Deep Learning Algorithm based on Video Classification (비디오 분류에 기반 해석가능한 딥러닝 알고리즘)

  • Jin Zewei;Inwhee Joe
    • Annual Conference of KIPS / 2023.11a / pp.449-452 / 2023
  • The rapid development of the Internet has led to a significant increase in multimedia content on social networks, and better analysis and improvement of video classification models has therefore become an important task. Deep learning models have typical "black box" characteristics, so such models require explainable analysis. This article uses two classification models, ConvLSTM and VGG16+LSTM, and combines them with the explainable method LRP (Layer-wise Relevance Propagation) to generate visualized explanations. In the experiments, the classification accuracy is 75.94% for ConvLSTM and 92.50% for VGG16+LSTM. We conducted an explainability analysis of the VGG16+LSTM model with the LRP method and found that the VGG16+LSTM classifier tends to rely on frames from the latter half of the video, and the last frame in particular, as the basis for classification.
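
For reference, here is a minimal Keras sketch of the VGG16+LSTM architecture type named in the abstract: a frame-wise VGG16 feature extractor followed by an LSTM over time. The frame count, input size, and layer widths are arbitrary assumptions and are not claimed to match the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, NUM_CLASSES = 16, 10          # assumed values, not from the paper

# Frame-level feature extractor: VGG16 without its classifier head.
vgg16 = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                    pooling="avg", input_shape=(224, 224, 3))
vgg16.trainable = False                   # common choice: freeze the CNN backbone

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, 224, 224, 3)),
    layers.TimeDistributed(vgg16),        # apply VGG16 to every frame
    layers.LSTM(256),                     # aggregate the frame features over time
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

An LRP analysis such as the one in the paper would then attribute the class score back through these layers to individual frames; libraries such as iNNvestigate implement LRP for Keras models, though those details are beyond this sketch.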

Effects of Lecturer Appearance and Students' Behavioral Patterns on Learning Flow and Teaching Presence of Chinese University Students' Video Lectures (중국 대학생의 동영상 학습에서 교수자 출연이 학습자 행동유형에 따라 학습몰입과 교수실재감에 미치는 효과)

  • Tai, Xiao-Xia;Zhu, Hui-Qin;Kim, Bo-Kyeong
    • Journal of the Korea Convergence Society / v.12 no.3 / pp.107-114 / 2021
  • The purpose of this study is to investigate whether the lecturer's appearance and students' behavioral patterns make a difference in learning flow and teaching presence in video learning. For this experiment, 183 freshmen from Xingtai University in China were selected as subjects. After being classified according to DISC, students were assigned to study either lecture videos in which the lecturer appeared or videos without the lecturer. After testing their levels of learning flow and teaching presence, the differences between groups were analyzed. First, the learning flow and teaching presence of the groups who learned from videos in which the lecturer appeared were significantly higher than those of the groups who learned from videos without the lecturer. Second, the effect of the lecturer's appearance on learning flow was significant, but neither the effect of DISC nor the interaction between DISC and lecturer appearance had a significant effect on learning flow. Third, the effect of the lecturer's appearance on teaching presence was significant; the effect of DISC on teaching presence was not significant, but the interaction between lecturer appearance and DISC was significant. These findings suggest that lecture videos in which the lecturer appears generally have a better effect. In particular, to enhance teaching presence, it is effective to decide whether the lecturer should appear by considering the interaction with learners' DISC types.

Analysis of Research Trends in Deep Learning-Based Video Captioning (딥러닝 기반 비디오 캡셔닝의 연구동향 분석)

  • Lyu Zhi;Eunju Lee;Youngsoo Kim
    • KIPS Transactions on Software and Data Engineering / v.13 no.1 / pp.35-49 / 2024
  • Video captioning technology, as a significant outcome of the integration of computer vision and natural language processing, has emerged as a key research direction in artificial intelligence. It aims at automatic understanding and linguistic expression of video content, enabling computers to transform the visual information in videos into text. This paper analyzes the research trends in deep learning-based video captioning and categorizes them into four main groups: CNN-RNN-based models, RNN-RNN-based models, multimodal-based models, and Transformer-based models, explaining the concept of each type of model and discussing its features, strengths, and weaknesses. The paper also lists the datasets and performance evaluation methods commonly used in the video captioning field. The datasets cover diverse domains and scenarios, offering extensive resources for training and validating video captioning models, while the discussion of evaluation methods covers the major evaluation indicators and provides practical reference points for assessing model performance from various angles. Finally, as future research tasks for video captioning, the paper identifies major challenges that require continued improvement, such as maintaining temporal consistency and accurately describing dynamic scenes, which add complexity in real-world applications, as well as new tasks to be studied, such as temporal relationship modeling and multimodal data integration.
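
As a concrete reference for the first category (CNN-RNN-based models), the sketch below outlines a minimal encoder-decoder captioner in Keras: precomputed CNN frame features are encoded by an LSTM, whose final state initializes an LSTM decoder that emits caption tokens. The vocabulary size, feature dimension, and layer sizes are arbitrary assumptions rather than values from any surveyed model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_FRAMES, FEAT_DIM = 20, 2048   # assumed: per-frame CNN features (e.g. pooled VGG/ResNet)
VOCAB_SIZE, MAX_LEN = 5000, 30    # assumed caption vocabulary and length
EMBED_DIM, HIDDEN = 256, 512

# Encoder: run an LSTM over the sequence of frame features and keep its final state.
frame_feats = layers.Input(shape=(NUM_FRAMES, FEAT_DIM), name="frame_features")
_, enc_h, enc_c = layers.LSTM(HIDDEN, return_state=True)(frame_feats)

# Decoder: generate the caption token by token, conditioned on the encoder state.
caption_in = layers.Input(shape=(MAX_LEN,), name="caption_tokens")
embedded = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(caption_in)
dec_out = layers.LSTM(HIDDEN, return_sequences=True)(embedded,
                                                     initial_state=[enc_h, enc_c])
logits = layers.Dense(VOCAB_SIZE, activation="softmax")(dec_out)

captioner = Model([frame_feats, caption_in], logits)
captioner.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
captioner.summary()
```

The Transformer-based and multimodal models surveyed in the paper replace the recurrent encoder and decoder with attention blocks and add further input streams (for example audio), but the overall feature-to-text structure remains the same.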

Development of a Prototype Authoring System for ADPM-Based Practical Classes (ADPM 기반의 실기 수업을 위한 저작 시스템의 프로토타입 개발)

  • 구정모;한병래
    • Journal of the Korea Computer Industry Society / v.5 no.2 / pp.301-310 / 2004
  • The current 7th national curriculum for computer education emphasizes practice-oriented, student-oriented classes. This is very hard to achieve, however, because of large class sizes, poor environments, and the lack of suitable teaching models, and the ADPM can help here. An ADPM-based practical class using an e-book synchronized with video files reduces the time students wait for answers, raises learning efficiency, encourages voluntary learning habits, and supports individualized learning. This study developed a prototype authoring system to support the ADPM. The prototype makes up for the weak points of existing authoring systems by providing a wizard-style program, video file capturing, and synchronization with video files, and it is expected to improve practical classes.
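
The abstract does not specify how the e-book and the video files are synchronized, so the following Python sketch is only a guess at the general shape: a table that maps e-book sections to time ranges in a video file, which is enough to jump the player to the demonstration matching the page a student is reading. All names and values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SyncEntry:
    """One link between an e-book section and a segment of the lecture video."""
    section_id: str      # e-book section or page identifier
    video_file: str      # path or URL of the captured video
    start_sec: float     # segment start within the video
    end_sec: float       # segment end within the video


# Hypothetical synchronization table an authoring tool could emit.
sync_table = [
    SyncEntry("ch1-step1", "practice01.mp4", 0.0, 42.5),
    SyncEntry("ch1-step2", "practice01.mp4", 42.5, 118.0),
]


def segment_for_section(section_id: str) -> Optional[SyncEntry]:
    """Return the video segment to play for the section a student is viewing."""
    return next((e for e in sync_table if e.section_id == section_id), None)


entry = segment_for_section("ch1-step2")
if entry:
    print(f"Play {entry.video_file} from {entry.start_sec}s to {entry.end_sec}s")
```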


Video Classification System Based on Similarity Representation Among Sequential Data (순차 데이터간의 유사도 표현에 의한 동영상 분류)

  • Lee, Hosuk;Yang, Jihoon
    • KIPS Transactions on Computer and Communication Systems / v.7 no.1 / pp.1-8 / 2018
  • It is not easy to learn simple representations of video data, since videos contain noise and a great deal of information besides the temporal information. In this study, we propose a similarity representation method between sequential data, together with a deep learning method, that can express such video data more abstractly and simply. The aim is to learn a function that retains the maximum amount of information when interpreting the degree of similarity between the image feature vectors that constitute a video. Experiments on real data confirm that the proposed method shows better classification performance than existing video classification methods.
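
As a rough illustration of the general idea, and not the authors' method, the sketch below computes a pairwise cosine-similarity matrix between the frame feature vectors of a video and uses that fixed-size, more abstract representation as input for a downstream classifier; the paper instead learns the similarity function itself with a deep network. Dimensions and data are placeholders.

```python
import numpy as np

def similarity_representation(frames: np.ndarray) -> np.ndarray:
    """frames: (num_frames, feat_dim) array of per-frame feature vectors.
    Returns the (num_frames, num_frames) cosine-similarity matrix."""
    norms = np.linalg.norm(frames, axis=1, keepdims=True)
    unit = frames / np.clip(norms, 1e-12, None)
    return unit @ unit.T

# Toy example: two random "videos" of 8 frames with 128-dim features each.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(8, 128)) for _ in range(2)]
reps = [similarity_representation(v).flatten() for v in videos]

# Any classifier can consume the flattened similarity matrices; learning the
# similarity function itself (as the paper does) is omitted from this sketch.
print(np.array(reps).shape)   # (2, 64) feature vectors ready for classification
```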