• Title/Abstract/Keyword: Video Learning

Search results: 1,076 items (processing time: 0.028 seconds)

The Efficacy of Zoom Technology as an Educational Tool for English Reading Comprehension Achievement in EFL Classroom

  • Kim, HyeJeong
    • International Journal of Advanced Culture Technology / Vol. 8, No. 3 / pp. 198-205 / 2020
  • The purpose of this study is to investigate the effect of real-time remote video instruction using Zoom on learners' English reading achievement. The study also sought to identify the efficiency of Zoom video lectures and to consider how to supplement them by surveying learners' opinions of and satisfaction with Zoom video lectures. To this end, control and experimental groups were set up, and two achievement tests and a questionnaire were administered. The study's results demonstrated that Zoom video lectures have a positive effect on learners' English reading achievement. The questionnaire found that learners are satisfied with Zoom video lectures for the following reasons: 'increased interest in and motivation towards learning', 'self-directed learning', 'active interaction', 'ease of access', and 'ease of information retrieval'. At the same time, the questionnaire also found that some learners are dissatisfied with Zoom video lectures due to 'mechanical errors or defects', 'poor audio quality', and 'the need to add customized functions for efficient classes'. In practice, Zoom video lectures must be supplemented with automatic attendance processing, convenient data upload and download, and more efficient video screen management. Given the recent increase in online classes, we, as instructors, must develop teaching activities and/or strategies for video lectures that encourage active participation by learners.

비디오 얼굴 식별 성능개선을 위한 다중 심층합성곱신경망 결합 구조 개발 (Development of Combined Architecture of Multiple Deep Convolutional Neural Networks for Improving Video Face Identification)

  • 김경태;최재영
    • Journal of Korea Multimedia Society / Vol. 22, No. 6 / pp. 655-664 / 2019
  • In this paper, we propose a novel way of combining multiple deep convolutional neural network (DCNN) architectures for accurate video face identification by adopting a serial combination of 3D and 2D DCNNs. The proposed method first divides an input video sequence (to be recognized) into a number of sub-video sequences. The resulting sub-video sequences are used as input to the 3D DCNN to obtain class-confidence scores for the given input video sequence, considering both the temporal and spatial face feature characteristics of the input video sequence. The class-confidence scores obtained from the corresponding sub-video sequences are combined to form our proposed class-confidence matrix. The resulting class-confidence matrix is then used as input for training the 2D DCNN, which is serially linked to the 3D DCNN. Finally, the fine-tuned, serially combined DCNN framework is applied to recognize the identity present in a given test video sequence. To verify the effectiveness of our proposed method, extensive comparative experiments were conducted on the COX face database with its standard face identification protocols. Experimental results showed that our method achieves a better or comparable identification rate compared to other state-of-the-art video FR methods.
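The serial 3D-to-2D pipeline above hinges on splitting the input video into sub-sequences and stacking their per-class confidences into a matrix. The following is a minimal sketch of that data flow (not the authors' code; `toy_score_fn` is a hypothetical stand-in for the 3D DCNN):

```python
# Sketch: split a video into fixed-length sub-sequences and stack
# per-sub-sequence class-confidence scores into a matrix.
def split_into_subsequences(frames, length):
    """Split a frame list into non-overlapping sub-sequences of `length`,
    dropping any trailing remainder shorter than `length`."""
    return [frames[i:i + length]
            for i in range(0, len(frames) - length + 1, length)]

def class_confidence_matrix(subsequences, score_fn):
    """Rows = sub-sequences, columns = classes. `score_fn` stands in
    for the 3D DCNN that maps a sub-sequence to per-class confidences."""
    return [score_fn(s) for s in subsequences]

# Toy stand-in for the 3D DCNN: two-class confidence from mean frame value.
def toy_score_fn(subseq):
    m = sum(subseq) / len(subseq)
    return [round(1 - m, 2), round(m, 2)]

frames = [0.1, 0.2, 0.3, 0.9, 0.8, 0.7]
matrix = class_confidence_matrix(split_into_subsequences(frames, 3), toy_score_fn)
```

In the paper's framework, this matrix would then be fed to the serially linked 2D DCNN as its input.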

비디오활용 사례기반학습이 간호대학생의 임상의사결정능력 및 학습동기에 미치는 효과 (The Effects of Case-Based Learning Using Video on Clinical Decision Making and Learning Motivation in Undergraduate Nursing Students)

  • 유문숙;박진희;이시라
    • Journal of Korean Academy of Nursing / Vol. 40, No. 6 / pp. 863-871 / 2010
  • Purpose: The purpose of this study was to examine the effects of case-based learning (CBL) using video on clinical decision making and learning motivation. Methods: This research was conducted between June 2009 and April 2010 as a nonequivalent control group non-synchronized design. The study population was 44 third-year nursing students enrolled in a college of nursing at A University in Korea. The nursing students were divided into the CBL group and the control group. The intervention was CBL with three cases using video; the controls attended a traditional live lecture on the same topics. Objective clinical decision making, subjective clinical decision making, and learning motivation were measured with questionnaires before the intervention and 10 weeks after the intervention. Results: Significant group differences were observed in clinical decision making and learning motivation. The post-test scores for clinical decision making in the CBL group were statistically higher than those in the control group. Learning motivation was also significantly higher in the CBL group than in the control group. Conclusion: These results indicate that CBL using video is effective in enhancing clinical decision making and motivating students to learn by encouraging self-directed learning and creating more interest and curiosity in learning.

대조적 학습을 활용한 주요 프레임 검출 방법 (Key Frame Detection Using Contrastive Learning)

  • 박경태;김원준;이용;장래영;최명석
    • Journal of Broadcast Engineering / Vol. 27, No. 6 / pp. 897-905 / 2022
  • Key frame detection in video is one of the areas under continuous study in computer vision. Recent advances in deep learning have improved key frame detection performance in video, but effective training remains difficult due to the wide variety of video content and complex backgrounds. In this paper, we propose a new method for detecting key frames in a video using contrastive learning and a memory bank. The proposed method trains a feature extraction network based on the differences between an input frame and its neighboring frames within the same video, and its differences from frames in other videos. Through this contrastive learning, key frames are stored and updated in the memory bank, effectively removing redundancy within the video. Experimental results on video datasets verify the performance of the proposed method.
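A minimal sketch of the memory-bank idea described above, assuming cosine similarity between frame features and a fixed redundancy threshold (both assumptions; in the paper the features come from the contrastively trained network):

```python
# Sketch: keep a memory bank of key-frame features and add a new frame
# only when it is not too similar to any stored key frame, so redundant
# frames are discarded.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_key_frames(features, threshold=0.9):
    """`features` stands in for per-frame embeddings; the memory bank
    keeps one feature vector per detected key frame."""
    bank = []
    for f in features:
        if all(cosine(f, k) < threshold for k in bank):
            bank.append(f)
    return bank

# Two near-duplicate frames and one distinct frame -> two key frames.
feats = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
keys = select_key_frames(feats)
```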

RGB 비디오 데이터를 이용한 Slowfast 모델 기반 이상 행동 인식 최적화 (Optimization of Action Recognition based on Slowfast Deep Learning Model using RGB Video Data)

  • 정재혁;김민석
    • Journal of Korea Multimedia Society / Vol. 25, No. 8 / pp. 1049-1058 / 2022
  • Human Action Recognition (HAR), such as anomaly and object detection, has become a research trend focused on using Artificial Intelligence (AI) methods to analyze patterns of human action in crime-ridden areas, media services, and industrial facilities. Especially in real-time systems using video streaming data, HAR has become an increasingly important AI research field for application development, and many research areas using HAR are currently being developed and improved. In this paper, we propose and analyze a deep-learning-based HAR scheme that is more efficient and can be applied to media services using RGB video streaming data without feature-extraction pre-processing. For the method, we adopt SlowFast, a deep neural network (DNN) model, on open datasets (HMDB-51 and UCF101) to improve prediction accuracy.
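As a rough illustration of the SlowFast design the paper builds on, the model consumes two frame streams sampled from the same clip at different temporal rates; the strides below are illustrative defaults, not the paper's configuration:

```python
# Sketch: SlowFast-style temporal sampling. The "slow" pathway sees few
# frames at high channel capacity; the "fast" pathway sees many frames.
# Here both are modeled simply as strided frame-index sampling.
def sample_pathways(num_frames, slow_stride=8, fast_stride=2):
    """Return (slow, fast) frame indices; the fast pathway samples
    slow_stride / fast_stride times more frames than the slow one."""
    slow = list(range(0, num_frames, slow_stride))
    fast = list(range(0, num_frames, fast_stride))
    return slow, fast

slow, fast = sample_pathways(32)
```

In the real model, the two index lists select the frames fed to the slow and fast 3D-convolutional pathways, whose features are fused by lateral connections.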

A Multi-category Task for Bitrate Interval Prediction with the Target Perceptual Quality

  • Yang, Zhenwei;Shen, Liquan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 12 / pp. 4476-4491 / 2021
  • Video service providers tend to face user network problems in the process of transmitting video streams. They strive to provide users with superior video quality in a limited-bitrate environment, so it is necessary to accurately determine the target bitrate range of a video under different quality requirements. Recently, several schemes have been proposed to meet this requirement; however, they do not take the impact of visual influence into account. In this paper, we propose a new multi-category model that accurately predicts the target bitrate range for a target visual quality by machine learning. Firstly, a dataset is constructed to generate multi-category models by machine learning; quality-score ladders and the corresponding bitrate-interval categories are defined in the dataset. Secondly, several types of spatial-temporal features related to VMAF evaluation metrics and visual factors are extracted and processed statistically for classification. Finally, bitrate prediction models trained on the dataset with a random forest classifier can be used to accurately predict the target bitrate of input videos at the target video quality. The classification accuracy of the model reaches 0.705, and video compressed at the bitrate predicted by the model achieves the target perceptual quality.
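The ladder-to-interval mapping at the heart of this formulation can be sketched as follows; the score thresholds and kbps intervals are hypothetical placeholders for what the trained random forest classifier would predict from the spatial-temporal features:

```python
# Sketch: map a target perceptual-quality score (e.g. a VMAF-like value)
# to a bitrate-interval category. A trained multi-category classifier
# would replace this hand-written ladder.
QUALITY_LADDER = [  # (minimum quality score, bitrate interval in kbps)
    (90, (4500, 8000)),
    (80, (2500, 4500)),
    (70, (1200, 2500)),
    (0,  (300, 1200)),
]

def bitrate_interval(quality):
    """Return the bitrate-interval category for a target quality score."""
    for min_q, interval in QUALITY_LADDER:
        if quality >= min_q:
            return interval
    return QUALITY_LADDER[-1][1]

interval = bitrate_interval(85)
```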

비디오 녹화를 통한 자가평가 학습법이 간호술기 수행능력과 자기주도적 학습능력, 학업적 자기효능감에 미치는 영향 (Effect of a Self-Evaluation Method Using Video Recording on Competency in Nursing Skills, Self-Directed Learning Ability, and Academic Self-Efficacy)

  • 송소라;김영주
    • Journal of Korean Academy of Fundamentals of Nursing / Vol. 22, No. 4 / pp. 416-423 / 2015
  • Purpose: The purpose of this study was to evaluate the effect of a self-evaluation method using video recording on competency in nursing skills, self-directed learning ability, and academic self-efficacy in nursing students. Methods: The study design was a non-equivalent pre-post quasi-experimental design. The experimental and control groups were randomly assigned, with 35 participants in each group. The interventions for the experimental group were video recording and students' self-evaluation of what they did. The nursing skills included in the study were tube feeding, intradermal injection, subcutaneous injection, and intramuscular injection. Competency in nursing skills was measured once at the end of the study using a checklist. Self-directed learning ability and academic self-efficacy were measured 3 times (pre-, mid-, and post-intervention) over the 8 weeks. Independent t-test, chi-square test, and repeated measures ANOVA were used for data analyses. Results: There was no statistically significant difference in competency in nursing skills or self-directed learning ability over the 8 weeks of the practice session. There was a significant difference in academic self-efficacy by group over time. Conclusion: Results indicate that the self-evaluation method using video recording is an effective way to improve academic achievement in nursing students.

3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향 (Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN)

  • 정영지
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 23, No. 3 / pp. 145-151 / 2023
  • 3D-CNN is one of the deep learning techniques for learning time-series data. Such three-dimensional learning can generate many parameters, so it may require high-performance machine learning or significantly affect learning speed. In this study, when learning dynamic hand-gesture motion spatio-temporally, we analyze learning accuracy according to spatio-temporal changes in the input video data, without structural changes to the 3D-CNN model, in order to find the optimal conditions on the input video data for improving the efficiency of dynamic gesture learning with a 3D-CNN. First, the temporal ratio between gesture motions is adjusted by setting the learning interval of dynamic image frames in the dynamic hand-gesture video data. Second, the inter-frame similarity of the video data is measured and normalized through 2D cross-correlation analysis between classes, the inter-frame average is obtained, and learning accuracy is analyzed. Based on this analysis, we propose two methods for effectively selecting input video data for 3D-CNN deep learning of dynamic hand gestures. The experimental results show that the learning interval of video data frames and the inter-class frame similarity can affect the accuracy of the learning model.
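The inter-frame similarity measurement described above can be illustrated with a plain normalized cross-correlation between two frames, treated here as flat grayscale arrays (a deliberate simplification of the paper's 2D cross-correlation analysis):

```python
# Sketch: normalized cross-correlation between two frames, used to score
# inter-frame similarity. Returns 1.0 for identical patterns and -1.0
# for perfectly inverted ones.
import math

def normalized_cross_correlation(a, b):
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

same = normalized_cross_correlation([1, 2, 3, 4], [1, 2, 3, 4])
inv = normalized_cross_correlation([1, 2, 3, 4], [4, 3, 2, 1])
```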

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 10 / pp. 3668-3684 / 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, video action recognition can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models of the video separately and fuses them at the output end. The multi-segment two-stream convolutional neural network model trains on temporal and spatial information from the video to extract their features, fuses them, and then determines the category of the video action. The Google Xception model and transfer learning are adopted in this paper, with the Xception model trained on ImageNet used for the initial weights. This greatly alleviates model underfitting caused by an insufficient video behavior dataset, effectively reduces the influence of various factors in the video, and also improves accuracy while reducing training time. Furthermore, to make up for the shortage of data, the Kinetics-400 dataset was used for pre-training, which greatly improved the accuracy of the model. In this applied research, the expected goal is basically achieved, and the design of the original two-stream model is improved.
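The output-end fusion of the two streams can be sketched as a weighted average of per-class scores followed by an argmax; the equal weighting below is an assumption, not the paper's setting:

```python
# Sketch: late fusion of a two-stream model. The spatial stream scores
# come from appearance (RGB frames), the temporal stream from motion;
# the fused scores decide the action class.
def fuse_two_streams(spatial_scores, temporal_scores, w_spatial=0.5):
    """Weighted average of per-class scores; returns (label, fused)."""
    fused = [w_spatial * s + (1 - w_spatial) * t
             for s, t in zip(spatial_scores, temporal_scores)]
    return fused.index(max(fused)), fused

# Appearance favors class 0, motion favors class 1; fusion keeps class 0.
label, fused = fuse_two_streams([0.7, 0.2, 0.1], [0.3, 0.6, 0.1])
```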

Wikispaces: A Social Constructivist Approach to Flipped Learning in Higher Education Contexts

  • Ha, Myung-Jeong
    • International Journal of Contents / Vol. 12, No. 4 / pp. 62-68 / 2016
  • This paper describes an attempt to integrate flipped teaching into a language classroom by adopting Wikispaces as an online learning platform. The purpose of this study is to examine student perceptions of the effectiveness of using video lectures and Wikispaces to foster active participation and collaborative learning. Flipped learning was implemented in an English writing class over one semester. Participants were 27 low-intermediate-level Korean university students. Data collection methods included background questionnaires at the beginning of the semester, learning-experience questionnaires at the end of the semester, and semi-structured interviews with 6 focal participants. Because of the significance of video lectures in flipped teaching, oCam was used to make weekly online lectures as pre-class activities. Every week, the online lectures were posted on the school LMS (Moodle), and participants met in a computer room to perform in-class activities. Both in-class and post-class activities were managed on Wikispaces. The results indicate that the flipped classroom facilitated student learning in the writing class. More than 53% of the respondents felt that a flipped classroom was useful for developing writing skills. In particular, students felt that the video lectures prior to class helped them improve their grammar skills. However, with respect to satisfaction with collaborative work, about 44% of the participants responded positively; similarly, 44% felt that in-class group work helped them interact with the other group members. Considering these results, this paper concludes with pedagogical suggestions and implications for further research.