• Title/Summary/Keyword: Video-based Learning

Predicting Learning Achievements with Indicators of Perceived Affordances Based on Different Levels of Content Complexity in Video-based Learning

  • Dasom KIM;Gyeoun JEONG
    • Educational Technology International, v.25 no.1, pp.27-65, 2024
  • The purpose of this study was to identify differences in learning patterns according to content complexity in video-based learning environments and to derive variables that have an important effect on learning achievement within particular learning contexts. To achieve these aims, we observed and collected data on learners' cognitive processes through perceived affordances, using behavioral logs and eye movements as indicators. These two types of reaction data were collected from 67 male and female university students who watched, through the video learning player, two learning videos classified according to task complexity. The results showed that when content complexity was low, learners tended to navigate using other learners' digital logs, but when it was high, students tended to control the learning process and directly generate their own logs. In addition, using prediction models derived for each level of content complexity, we identified the important variables influencing learning achievement: in the low-complexity group these related to video playback and annotation, whereas in the high-complexity group they related to active navigation of the learning video. This study sought not only to apply novel variables in the field of educational technology, but also to provide qualitative observations on the learning process grounded in a quantitative approach.
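
A minimal sketch of how such complexity-specific prediction models might be derived from behavioral-log features. The CSV layout and every feature name below are illustrative assumptions, not the paper's data or code:

```python
# Hypothetical sketch: rank behavioral-log variables by importance for
# predicting achievement, separately per content-complexity group.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

logs = pd.read_csv("behavior_logs.csv")  # assumed: one row per learner
features = ["replay_count", "pause_count", "annotation_count",
            "seek_forward", "seek_backward"]

for level, group in logs.groupby("complexity"):  # e.g., "low" / "high"
    model = RandomForestRegressor(n_estimators=300, random_state=42)
    model.fit(group[features], group["achievement_score"])
    ranked = sorted(zip(features, model.feature_importances_),
                    key=lambda p: -p[1])
    print(level, [f"{name}={imp:.2f}" for name, imp in ranked])
```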

Interactive Video Player for Supporting Learner Engagement in Video-Based Online Learning

  • YOON, Meehyun;ZHENG, Hua;JO, Il-Hyun
    • Educational Technology International, v.23 no.2, pp.129-155, 2022
  • This study sought to design and develop an interactive video player (IVP) capable of promoting student engagement with online video content. We designed features built upon the interactive-constructive-active-passive (ICAP) and crowd learning frameworks. In the development stage, we integrated numerous interactive features into the IVP intended to help learners shift from passive to interactive learning activities. We then explored the effectiveness and usability of the developed IVP in an experiment that compared students' exam scores after using either our IVP or a conventional video player. A total of 158 college students participated: 76 students in the treatment group used the IVP and 82 students in the control group used a conventional video player. Results indicate that participants in the treatment group achieved higher scores than participants in the control group. We further discuss the implications of this study based on an additional survey administered to assess how usable the participants perceived the IVP to be.

Exploration of Predictive Model for Learning Achievement of Behavior Log Using Machine Learning in Video-based Learning Environment

  • Lee, Jungeun;Kim, Dasom;Jo, Il-Hyun
    • The Journal of Korean Association of Computer Education, v.23 no.2, pp.53-64, 2020
  • As online learning centered on video lectures becomes more common and continues to grow, video-based learning environments that apply various educational methods are also evolving to enhance learning effectiveness. Learner log data has emerged as a way to measure educational effectiveness in online learning environments, and diverse methods of analyzing such data are important for customized learning prescriptions. To this end, this study analyzed learner behavior data in video-based learning environments and used machine learning to predict achievement. As a result, interactive behaviors such as video navigation and comment writing, together with learner-led learning behaviors, predicted achievement consistently across the models. Based on these results, the study provides implications for the design of video learning environments.
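
A hedged sketch of the kind of model comparison such a study implies: several standard classifiers cross-validated on behavior-log features. The file, column names, and binary achievement label are illustrative assumptions, not the paper's pipeline:

```python
# Hypothetical sketch: compare ML models predicting achievement from
# video behavior logs via 5-fold cross-validation.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

logs = pd.read_csv("video_behavior_logs.csv")        # assumed layout
X = logs[["navigation_events", "comments_written", "speed_changes"]]
y = logs["high_achiever"]                            # assumed 0/1 label

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "grad_boost": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```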

A study on the dental technology student's recognition for non-face-to-face classes

  • Choi, Ju young;Jung, Hyo Kyung
    • Journal of Technologic Dentistry, v.42 no.4, pp.402-408, 2020
  • Purpose: To understand dental technology students' perceptions of online classes and to provide basic data for designing online classes for the dental technology course. Methods: A survey was conducted among students of the dental technology department, and the collected data were analyzed with SPSS ver. 25.0; the t-test and analysis of variance were performed at the α=0.05 significance level. Results: Students rated video-based classes highly for both theory and experimental coursework. They viewed video-based learning positively because it allows repeated learning unconstrained by time, and they found it highly beneficial in terms of convenience, satisfaction, and learning achievement. Conclusion: Students regard video-based learning very positively. After COVID-19, the Department of Dental Technology should consider offering online classes that blend face-to-face instruction with video-based learning.

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security, v.21 no.8, pp.87-96, 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods, which has led to massive volumes of web-based lecture videos. Indexing and retrieval of a lecture video or a lecture video topic has thus proved to be an exceptionally challenging problem. Many techniques in the literature are either visual- or audio-based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work addresses both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text from the slides is extracted using the proposed Merged Bounding Box (MBB) text detector, and the audio text is extracted using Google Speech Recognition (GSR) technology. This hybrid approach generates the indexing keywords from the merged transcripts of the video and audio extractors. Search within the indexed documents is optimized using Naïve Bayes (NB) classification and K-Means clustering, retrieving results by searching only the relevant document cluster in the predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset generated by assigning categories to lecture video transcripts gathered from e-learning portals. Search performance is assessed in terms of accuracy and time taken, and the improved accuracy of the proposed indexing technique is compared with the accepted chain indexing technique.
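
A hedged sketch of the cluster-restricted search idea described above: classify a query into a category with Naïve Bayes, then rank only that category's transcripts by similarity (the K-Means stage is omitted here). The toy corpus and categories are illustrative, not the paper's dataset:

```python
# Hypothetical sketch: NB-routed search over categorized lecture transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics.pairwise import cosine_similarity

transcripts = ["fourier transform and spectra lecture",
               "sorting algorithms and complexity lecture",
               "sampling theorem and aliasing lecture",
               "graph traversal and shortest paths lecture"]
categories = ["signals", "algorithms", "signals", "algorithms"]

vec = TfidfVectorizer()
X = vec.fit_transform(transcripts)
router = MultinomialNB().fit(X, categories)

q = vec.transform(["fast fourier transform"])
cat = router.predict(q)[0]                       # search only this category
idx = [i for i, c in enumerate(categories) if c == cat]
best = idx[cosine_similarity(q, X[idx]).ravel().argmax()]
print(cat, "->", transcripts[best])
```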

Comparing Learning Outcome of e-Learning with Face-to-Face Lecture of a Food Processing Technology Course in Korean Agricultural High School

  • PARK, Sung Youl;LEE, Hyeon-ah
    • Educational Technology International, v.8 no.2, pp.53-71, 2007
  • This study identified the effectiveness of e-learning by comparing learning outcomes of a conventional face-to-face lecture with selected e-learning methods. Two e-learning contents (animation-based and video-based) were developed following the rapid prototyping model and loaded onto the learning management system (LMS) at http://www.enaged.co.kr. Fifty-four Korean agricultural high school students were randomly assigned to three groups (face-to-face lecture, animation-based e-learning, and video-based e-learning). The students in the e-learning groups logged on to the LMS in the school computer lab and completed their e-learning, and all students took a pretest and posttest before and after learning under the direction of the subject teacher. A one-way analysis of covariance was administered to verify whether face-to-face lecture and e-learning differed in students' learning outcomes after controlling for the covariate, pretest score. According to the results, no differences were identified either between animation-based and video-based e-learning or between face-to-face learning and e-learning. Findings suggest that well-designed e-learning can be worthwhile even in agricultural education, which stresses hands-on experience and lab activities, if it is used appropriately in combination with conventional learning. Further research is suggested on preferences for e-learning content types and their relationship with learning outcomes.
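
A minimal sketch of the one-way ANCOVA described above, assuming a hypothetical file with group, pretest, and posttest columns (statsmodels formula API):

```python
# Hypothetical sketch: posttest compared across the three groups with
# pretest as covariate (one-way ANCOVA).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("scores.csv")  # assumed columns: group, pretest, posttest
model = ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # group effect adjusted for pretest
```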

A Study of Video-Based Abnormal Behavior Recognition Model Using Deep Learning

  • Lee, Jiyoo;Shin, Seung-Jung
    • International Journal of Advanced Smart Convergence, v.9 no.4, pp.115-119, 2020
  • Recently, CCTV installations have been increasing rapidly in the public and private sectors to prevent various crimes. As the number of CCTVs grows, video-based abnormal behavior detection in control systems has become a key safety technology, because it is difficult for surveillance personnel who monitor multiple CCTVs to manually catch every abnormal behavior in the video. To solve this problem, research on recognizing abnormal behavior using deep learning is being actively conducted. In this paper, we propose a model for detecting abnormal behavior based on deep learning models that are currently widely used. Using the abnormal behavior video data provided by AI Hub, we performed a comparative experiment detecting anomalous behaviors such as violence and fainting in videos with 2D CNN-LSTM, 3D CNN, and I3D models. We hope that the experimental results of this abnormal behavior learning model will help in developing intelligent CCTV.
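
A schematic 2D CNN-LSTM video classifier in PyTorch, in the spirit of one of the models compared above; the layer sizes, clip shape, and three-class output are illustrative assumptions, not the paper's configuration:

```python
# Hypothetical sketch: per-frame 2D CNN features fed to an LSTM, then a
# classifier head (e.g., normal / violence / fainting).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_classes=3, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                           # (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(feats)                       # temporal modeling
        return self.head(out[:, -1])                    # last-step logits

print(CNNLSTM()(torch.randn(2, 16, 3, 112, 112)).shape)  # -> (2, 3)
```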

Exploring the Relationships Between Emotions and State Motivation in a Video-based Learning Environment

  • YU, Jihyun;SHIN, Yunmi;KIM, Dasom;JO, Il-Hyun
    • Educational Technology International, v.18 no.2, pp.101-129, 2017
  • This study attempted to collect learners' emotions and state motivation, analyze their inner states, and measure state motivation without self-report surveys. Emotions were measured for each learning segment in detailed learning situations and used to predict total state motivation, as well as to explain state motivation by learning segment. The purpose of this study was to overcome the limitations of video-based learning environments by verifying whether the emotions measured during individual learning segments can indicate the learner's state motivation. Sixty-eight students participated in a 90-minute session in which their emotions and state motivation were measured, and emotions showed statistically significant relationships with both total state motivation and motivation by learning segment. Although the result is not conclusive, since this was an exploratory study, it is meaningful in showing that emotions during different learning segments can indicate state motivation.

The Effect of Segment Size on Quality Selection in DQN-based Video Streaming Services

  • Kim, ISeul;Lim, Kyungshik
    • Journal of Korea Multimedia Society, v.21 no.10, pp.1182-1194, 2018
  • Dynamic Adaptive Streaming over HTTP (DASH) is expected to evolve to meet the growing demand for seamless video streaming services. DASH performance heavily depends on the client's adaptive quality selection algorithm, which is not included in the standard. Existing conventional algorithms are procedural and cannot easily capture and reflect all variations of dynamic network and traffic conditions across diverse network environments. To solve this problem, this paper proposes a novel quality selection mechanism based on the Deep Q-Network (DQN) model: the DQN-based DASH Adaptive Bitrate (ABR) mechanism. The proposed mechanism adopts a new reward calculation method based on five major performance metrics to reflect the current conditions of networks and devices in real time. In addition, the size of the consecutive video segments to be downloaded is considered a major learning metric to reflect a variety of video encodings. Experimental results show that the proposed mechanism quickly selects a suitable video quality even in high-error-rate environments, significantly reducing the frequency of quality changes compared to the existing algorithm while improving average video quality during playback.
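
A hedged sketch of an ABR reward in this style: a weighted combination of quality, quality switching, rebuffering, buffer health, and segment size. The specific metrics and weights below are illustrative assumptions, not the paper's formula:

```python
# Hypothetical sketch: per-segment reward for a DQN-based ABR agent.
def abr_reward(bitrate_mbps, prev_bitrate_mbps, rebuffer_s, buffer_s,
               segment_mb, w=(1.0, 0.8, 4.0, 0.1, 0.05)):
    w_q, w_sw, w_rb, w_buf, w_seg = w
    return (w_q * bitrate_mbps                              # reward quality
            - w_sw * abs(bitrate_mbps - prev_bitrate_mbps)  # penalize switches
            - w_rb * rebuffer_s                             # penalize stalls
            + w_buf * buffer_s                              # reward buffer health
            - w_seg * segment_mb)                           # reflect segment size

print(abr_reward(4.3, 2.85, 0.0, 12.0, 2.1))
```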

Improved Inference for Human Attribute Recognition using Historical Video Frames

  • Ha, Hoang Van;Lee, Jong Weon;Park, Chun-Su
    • Journal of the Semiconductor & Display Technology, v.20 no.3, pp.120-124, 2021
  • Recently, human attribute recognition (HAR) has attracted considerable attention due to its wide application in video surveillance systems. Recent deep-learning-based solutions for HAR require time-consuming training processes. In this paper, we propose a post-processing technique that utilizes historical video frames to improve prediction results without re-training or modifying existing deep-learning-based classifiers. Experimental results on a large-scale benchmark dataset show the effectiveness of the proposed method.
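
A minimal sketch of one way such history-based post-processing could work: averaging per-frame attribute probabilities over a sliding window of historical frames before thresholding. The windowing scheme is an assumption for illustration, not the authors' method:

```python
# Hypothetical sketch: smooth per-frame attribute probabilities over a
# window of historical frames, then threshold into binary decisions.
from collections import deque

def smoothed_attributes(frame_probs, window=15, threshold=0.5):
    """frame_probs: iterable of per-frame attribute probability lists."""
    history = deque(maxlen=window)
    for probs in frame_probs:
        history.append(probs)
        avg = [sum(col) / len(history) for col in zip(*history)]
        yield [p >= threshold for p in avg]

stream = [[0.4, 0.9], [0.6, 0.8], [0.7, 0.2]]   # toy classifier outputs
for decision in smoothed_attributes(stream, window=3):
    print(decision)
```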