• Title/Summary/Keyword: Video Learning

Telepresence in Video Game Streaming: Understanding Viewers' Perception of Personal Internet Broadcasting

  • Kyubin Cho; Choong C. Lee; Haejung Yun
    • Asia Pacific Journal of Information Systems / v.32 no.3 / pp.684-705 / 2022
  • A new trend has emerged in recent years: video game live streaming has become a meeting ground for gamers as well as a marketing strategy for game developers. In line with this trend, the emergence of the "Let's Play" culture has significantly changed the manner in which people enjoy video games. To explore this new experience academically, this study seeks to answer the following research questions: (1) Does watching a video game stream offer the same feeling as playing the game? (2) If so, what factors affect viewers' feeling of telepresence? (3) How does the feeling of telepresence affect viewers' learning about the streamed game? We built and empirically tested a comprehensive research model based on telepresence and consumer learning theories. The findings revealed that the streamer's authenticity and pleasantness and the viewers' interaction positively affect telepresence, which in turn is positively associated with knowledge gained about, and a positive attitude toward, the streamed game. Based on these findings, practical implications are discussed for game developers as well as platform providers.

Case Study of Flipped-learning on a Signal Processing Class (신호처리 교과목에 대한 플립러닝 적용사례)

  • Yoo, Jae Ha
    • Journal of Practical Engineering Education / v.9 no.2 / pp.125-132 / 2017
  • This paper presents a case study of applying flipped learning, known as a teaching method that enables effective learning, to a signal processing course. The teaching-learning model used for the class and its implementation over three years are described. The in-class portion can be judged a relative success, but the organization of the video materials provided before class, and the assessment of whether students actually studied those pre-class videos, still need to be improved.

A Study on Google Classroom as a Tool for the Development of the Learning Model of College English

  • Lee, Jeong-Hwa; Cha, Kyung-Whan
    • International Journal of Contents / v.17 no.2 / pp.65-76 / 2021
  • The aim of this study was to explore the use of Google Classroom as a learning management system for College English. The study targeted 34 university students, who took part in various activities such as writing reactions to video lectures, peer-editing essays, and recording video presentations. A t-test was conducted to evaluate the students' English development, using the two essays each student wrote as the data sources. The result (t = -5.854, p < .001) indicated an improvement in their English writing proficiency. In addition, a survey was conducted to gather students' feedback on the course, covering five aspects of their experience: Google Classroom, language development, Quizlet, classroom experience, and essay-writing experience. The students responded positively to the program. The use of Google Classroom in an online learning setting accomplished two things: it helped the students develop their English proficiency, and it provided activities that students found interesting, which in turn stimulated their self-directed learning. A minimal sketch of the kind of test reported here follows below.
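
For readers who want to reproduce this kind of analysis, the comparison described is a paired t-test over each student's two essays. The sketch below uses made-up scores; the scoring scale and values are purely illustrative, not the study's data.

```python
# A minimal sketch (with made-up scores) of the paired t-test the study
# describes: each student's first and second essay scores are compared.
from scipy import stats

# Hypothetical essay scores for a handful of students (first vs. second essay).
essay1 = [62, 70, 58, 75, 66, 71]
essay2 = [68, 74, 65, 80, 70, 78]

t, p = stats.ttest_rel(essay1, essay2)
print(f"t = {t:.3f}, p = {p:.4f}")  # a negative t means the second essays scored higher
```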

Video Captioning with Visual and Semantic Features

  • Lee, Sujin; Kim, Incheol
    • Journal of Information Processing Systems / v.14 no.6 / pp.1318-1330 / 2018
  • Video captioning refers to the process of extracting features from a video and generating captions from those features. This paper introduces a deep neural network model and its learning method for effective video captioning. In this study, semantic features that effectively express the video are used alongside visual features. The visual features are extracted using convolutional neural networks such as C3D and ResNet, while the semantic features are extracted using a semantic feature extraction network proposed in this paper. Furthermore, an attention-based caption generation network is proposed to generate video captions from the extracted features. The performance and effectiveness of the proposed model are verified through experiments on two large-scale video benchmarks, Microsoft Video Description (MSVD) and Microsoft Research Video-To-Text (MSR-VTT).
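
The abstract outlines an attention-based caption generator fed by visual (C3D/ResNet) and semantic features. The PyTorch sketch below illustrates that general architecture only; it is not the authors' implementation, and all dimensions, the additive-attention form, and the fusion-by-concatenation scheme are assumptions.

```python
# A minimal sketch of an attention-based caption decoder that attends over
# per-frame visual features and conditions on a global semantic vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCaptionDecoder(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=2048, sem_dim=300, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hid_dim)
        # Additive (Bahdanau-style) attention over frame features.
        self.att_feat = nn.Linear(feat_dim, hid_dim)
        self.att_hid = nn.Linear(hid_dim, hid_dim)
        self.att_score = nn.Linear(hid_dim, 1)
        # LSTM input: previous word + attended visual context + semantic vector.
        self.lstm = nn.LSTMCell(hid_dim + feat_dim + sem_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, frame_feats, sem_feat, captions):
        # frame_feats: (B, T, feat_dim), sem_feat: (B, sem_dim),
        # captions: (B, L) token ids; teacher forcing during training.
        B, L = captions.shape
        h = frame_feats.new_zeros(B, self.lstm.hidden_size)
        c = frame_feats.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(L):
            # Attention weights over the T frames given the current hidden state.
            scores = self.att_score(torch.tanh(
                self.att_feat(frame_feats) + self.att_hid(h).unsqueeze(1)))
            alpha = F.softmax(scores, dim=1)            # (B, T, 1)
            context = (alpha * frame_feats).sum(dim=1)  # (B, feat_dim)
            x = torch.cat([self.embed(captions[:, t]), context, sem_feat], dim=1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (B, L, vocab_size)

# Toy usage with random tensors standing in for C3D/ResNet features.
model = AttentionCaptionDecoder()
feats = torch.randn(2, 20, 2048)   # 20 frames of 2048-d visual features
sem = torch.randn(2, 300)          # global semantic feature vector
caps = torch.randint(0, 1000, (2, 8))
print(model(feats, sem, caps).shape)  # torch.Size([2, 8, 1000])
```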

Analysis of Learning Effect on Multitude of Screens in Video Demonstration -On High School's Physics- (시범실험 동영상의 다중화면 학습 효과 분석 -고등학교 물리교과 중심으로-)

  • Lee, Seung-Bok; Jeon, Byeong-Ho
    • The Journal of the Korea Contents Association / v.7 no.9 / pp.240-247 / 2007
  • The purpose of this study was to analyze the learning effects of demonstration videos shown on multiple screens in a science class. An experiment from the first-year science textbook, on the relationship between voltage and current, was chosen. Three groups were used: (1) a control group taught with still photographs of the experiment in a traditional class, (2) experimental group A, which watched the demonstration video on a single screen, and (3) experimental group B, which watched it on multiple screens. A post-test on the relevant units was then administered to each group. The results showed grade improvements in all groups; experimental group B improved the most, followed by experimental group A, with the control group improving the least. In conclusion, the study showed that using demonstration videos in a science class is very useful and can be adapted in various forms. To convey meaning more efficiently and enhance the learning effect, versatile methods should be used to stimulate and heighten students' interest, mixing still photographs and demonstration videos in various screen compositions with the help of information technology.

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.3 / pp.129-135 / 2011
  • In this paper, we propose an algorithm for converting 2D video to 3D using optical flow and machine-learning-based segmentation. For segmentation that supports successful 3D conversion, we design a new energy function in which color/texture features are incorporated through a machine learning method and optical flow is introduced to focus on regions with motion. A depth map is then calculated from the optical flow of the segmented regions, and left/right images are produced for the 3D conversion. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
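
As a rough illustration of the depth-from-motion step described above, the sketch below estimates dense optical flow between two frames, treats flow magnitude as a depth proxy, and shifts pixels to synthesize a left/right pair. It deliberately omits the paper's learned segmentation and energy function; the Farneback parameters, the linear depth-to-disparity mapping, and the input file name are assumptions.

```python
# A minimal sketch (under stated assumptions, not the paper's implementation)
# of depth-from-motion: flow magnitude -> pseudo-depth -> stereo pair.
import cv2
import numpy as np

def stereo_from_motion(prev_bgr, next_bgr, max_disp=16):
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_g = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow; larger motion is assumed closer (more disparity).
    flow = cv2.calcOpticalFlowFarneback(prev_g, next_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    depth = cv2.normalize(mag, None, 0.0, 1.0, cv2.NORM_MINMAX)
    disparity = (depth * max_disp).astype(np.float32)

    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Shift each pixel left/right by half the disparity (backward warping).
    left = cv2.remap(next_bgr, xs + disparity / 2, ys, cv2.INTER_LINEAR)
    right = cv2.remap(next_bgr, xs - disparity / 2, ys, cv2.INTER_LINEAR)
    return left, right, depth

# Toy usage: two consecutive frames read from a video file (path is illustrative).
cap = cv2.VideoCapture("input.mp4")
ok1, f1 = cap.read()
ok2, f2 = cap.read()
if ok1 and ok2:
    left, right, depth = stereo_from_motion(f1, f2)
    cv2.imwrite("left.png", left)
    cv2.imwrite("right.png", right)
```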

Qualitative Exploration on Children's Interactions in Telepresence Robot Assisted Language Learning (원격로봇 보조 언어교육의 아동 상호작용 질적 탐색)

  • Shin, Kyoung Wan Cathy; Han, Jeong-Hye
    • Journal of the Korea Convergence Society / v.8 no.3 / pp.177-184 / 2017
  • The purpose of this study was to explore child-robot interaction in distance language learning environments using three video-conferencing technologies: two traditional screen-based video-conferencing systems and a telepresence robot. One American and six Korean elementary school students participated in our case study. We relied on narratives from one-on-one interviews and on observation of nonverbal cues in robot-assisted language learning. Our findings suggest that participants responded more positively to interaction via the telepresence robot than to the two screen-based video conferences, with many citing a stronger sense of immediacy during robot-mediated communication.

A Mask Wearing Detection System Based on Deep Learning

  • Yang, Shilong; Xu, Huanhuan; Yang, Zi-Yuan; Wang, Changkun
    • Journal of Multimedia Information System / v.8 no.3 / pp.159-166 / 2021
  • COVID-19 has dramatically changed people's daily lives. Wearing masks is considered a simple but effective way to curb the spread of the epidemic, so a real-time, accurate mask-wearing detection system is important. In this paper, a deep learning-based mask-wearing detection system is developed to help people defend against the epidemic. The system provides three functions: image detection, video detection, and real-time detection. To maintain a high detection rate, a deep learning-based method is adopted to detect masks. Because of the suddenness of the epidemic, mask-wearing datasets are scarce, so a mask-wearing dataset was collected for this paper. In addition, to reduce computational cost and runtime, a simple online and real-time tracking method is adopted for video detection and monitoring, and a function is implemented that calls the camera to perform mask-wearing detection in real time (see the sketch below). The results show that the developed system performs well on the mask-wearing detection task, achieving a precision, recall, mAP, and F1 of 86.6%, 96.7%, 96.2%, and 91.4%, respectively.
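
To make the real-time camera function concrete, here is a minimal OpenCV capture-and-detect loop. It is a sketch, not the paper's system: OpenCV's bundled Haar face detector stands in for the deep detector, and classify_mask() is a hypothetical placeholder for a trained mask classifier.

```python
# A minimal sketch of a real-time detection loop: capture camera frames,
# detect faces, label each face, and draw the result.
import cv2

face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_mask(face_bgr):
    # Placeholder: a real system would run a trained CNN on the face crop.
    # This toy stub just reports "unknown".
    return "unknown"

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, 1.1, 5):
        label = classify_mask(frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("mask detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```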

Educational Effects of Flipped Learning on Fashion Practical Course (패션 실기 수업에 적용한 플립드 러닝의 교육적 효과)

  • Kim, Jang-Hyeon
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.4 / pp.497-508 / 2020
  • Learner-centered classes and the introduction of flipped learning, which connects online and offline instruction, have been gaining ground. This study demonstrated the educational effect of applying flipped learning to a basic draping course within the fashion practicum, and drew implications for flipped learning from the instructor's perspective. The research methods were theoretical research and model development research for teaching basic draping through flipped learning. The results showed that learners' satisfaction with the flipped basic draping course was very high; students particularly valued the instructional videos because they compensate for the drop in teaching efficiency as the number of attendees grows and improve the quality of instruction. Suggested improvements include technical and content refinements to the video learning materials and the provision of written learning materials in addition to the videos. From the instructor's perspective, flipped learning design requires setting aside time for video shooting and editing, viewing the composition of the course from the learner's perspective, and an in-depth understanding of the curriculum.

A Study on the Alternative Method of Video Characteristics Using Captioning in Text-Video Retrieval Model (텍스트-비디오 검색 모델에서의 캡션을 활용한 비디오 특성 대체 방안 연구)

  • Lee, Dong-hun; Hur, Chan; Park, Hyeyoung; Park, Sang-hyo
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.6 / pp.347-353 / 2022
  • In this paper, we propose a method that performs text-video retrieval by replacing video features with captions. In general, existing embedding-based models require both joint embedding space construction and CNN-based video encoding, which demand heavy computation during training as well as inference. To overcome this problem, we introduce a video-captioning module that replaces the visual representation of a video with the captions it generates. Specifically, we adopt a caption generator that converts candidate videos into captions at inference time, enabling direct comparison between the query text and candidate videos without a joint embedding space. Experiments show that the proposed model successfully reduces computation and inference time by skipping visual processing and joint embedding space construction on two benchmark datasets, MSR-VTT and VATEX.
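
The retrieval scheme thus reduces to matching a text query against generated captions. The sketch below illustrates that idea with hard-coded stand-in captions and a TF-IDF similarity in place of whatever text matcher the paper uses; every video id and caption in it is illustrative.

```python
# A minimal sketch of caption-based retrieval: each candidate video is
# represented only by a generated caption, and the query is matched against
# captions with a plain text-similarity model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical captions produced offline by a captioning model, one per video.
video_captions = {
    "video_001": "a man is playing a guitar on stage",
    "video_002": "a dog runs across a grassy field",
    "video_003": "a woman is cooking pasta in a kitchen",
}

def retrieve(query, captions, top_k=2):
    ids = list(captions)
    # Fit TF-IDF on captions plus the query so all terms are in the vocabulary.
    vec = TfidfVectorizer().fit(list(captions.values()) + [query])
    cap_mat = vec.transform(captions.values())
    sims = cosine_similarity(vec.transform([query]), cap_mat)[0]
    ranked = sorted(zip(ids, sims), key=lambda p: -p[1])
    return ranked[:top_k]

print(retrieve("someone playing music", video_captions))
```

The design point the paper makes survives even in this toy form: once captions exist, ranking candidates costs only text processing, with no per-query video encoding or joint embedding space.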