• Title/Summary/Keyword: Video Learning


Improved Inference for Human Attribute Recognition using Historical Video Frames

  • Ha, Hoang Van;Lee, Jong Weon;Park, Chun-Su
    • Journal of the Semiconductor & Display Technology / v.20 no.3 / pp.120-124 / 2021
  • Recently, human attribute recognition (HAR) has attracted considerable attention due to its wide application in video surveillance systems. Recent deep-learning-based solutions for HAR require time-consuming training processes. In this paper, we propose a post-processing technique that utilizes historical video frames to improve prediction results without re-training or modifying existing deep-learning-based classifiers. Experimental results on a large-scale benchmark dataset show the effectiveness of the proposed method.
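The abstract does not publish the exact aggregation rule, but a minimal sketch of such historical-frame post-processing, assuming a simple sliding-window average of per-attribute scores (the class name and window size are illustrative assumptions), might look like:

```python
from collections import deque

class TemporalSmoother:
    """Average per-attribute scores over the last k frames.

    Hypothetical sketch of post-processing a frame classifier's outputs;
    the paper's actual aggregation rule may differ.
    """

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, scores):
        # scores: per-attribute probabilities from the underlying classifier
        self.history.append(scores)
        n = len(self.history)
        return [sum(frame[i] for frame in self.history) / n
                for i in range(len(scores))]

smoother = TemporalSmoother(window=3)
smoother.update([0.9, 0.2])
smoother.update([0.7, 0.4])
smoothed = smoother.update([0.8, 0.3])  # roughly [0.8, 0.3]
```

Because the smoother only wraps the classifier's outputs, it can be bolted onto an existing model without re-training, which matches the post-processing framing of the abstract.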

Accuracy Improvement of Pig Detection using Image Processing and Deep Learning Techniques on an Embedded Board (임베디드 보드에서 영상 처리 및 딥러닝 기법을 혼용한 돼지 탐지 정확도 개선)

  • Yu, Seunghyun;Son, Seungwook;Ahn, Hanse;Lee, Sejun;Baek, Hwapyeong;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society / v.25 no.4 / pp.583-599 / 2022
  • Although object detection accuracy on a single image has improved significantly with advances in deep learning, detection accuracy for pig monitoring is challenged by occlusion caused by the complex structure of a pig room, such as feeding facilities. These detection difficulties with a single image can be mitigated by using video data. In this research, we propose a pig detection method for a video monitoring environment with a static camera. That is, by using both image processing and deep learning techniques, we recognize the complex structure of a pig room and use this information to improve the detection accuracy of pigs in the monitored room. Furthermore, we reduce the execution-time overhead by applying a pruning technique for real-time video monitoring on an embedded board. Based on experiments with a video dataset obtained from a commercial pig farm, we confirmed that pigs can be detected more accurately in real time, even on an embedded board.
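One plausible way to combine knowledge of a static room layout with a detector, as the abstract describes, is to suppress detections that mostly overlap known fixed structures. A minimal sketch under that assumption (the function name, box format, and IoU threshold are illustrative, not the paper's method):

```python
def filter_detections(boxes, structure_boxes, iou_thresh=0.5):
    """Drop detections that heavily overlap known static structures.

    boxes / structure_boxes: (x1, y1, x2, y2) tuples.
    Hypothetical sketch of fusing scene knowledge with a detector's output.
    """
    def iou(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((ax2 - ax1) * (ay2 - ay1)
                 + (bx2 - bx1) * (by2 - by1) - inter)
        return inter / union if union else 0.0

    return [b for b in boxes
            if all(iou(b, s) < iou_thresh for s in structure_boxes)]
```

Since the camera is static, the structure boxes only need to be identified once, so the per-frame cost of this filter stays small, which matters on an embedded board.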

How Long Will Your Videos Remain Popular? Empirical Study with Deep Learning and Survival Analysis

  • Min Gyeong Choi;Jae Hong Park
    • Asia pacific journal of information systems / v.33 no.2 / pp.282-297 / 2023
  • One of the emerging trends in the marketing field is digital video marketing. Online videos offer rich content, typically containing more information than any other type of content (e.g., audio or text). Accordingly, previous researchers have examined factors influencing videos' popularity. However, few studies have examined what causes a video to remain popular: some videos achieve continuous, ongoing popularity, while others fade out quickly. For practitioners, videos in recommendation slots may serve as strong communication channels, as many potential consumers are exposed to them. This study therefore provides practitioners with advice on choosing videos that will survive as long-lasting favorites, allowing them to advertise cost-effectively. Using deep learning techniques, this study extracts text from videos and measures the videos' tones, including factual and emotional tones. Additionally, we measure an aesthetic score by analyzing the thumbnail images in the data. We then empirically show that the cognitive features of a video, such as the tone of its message and the aesthetic assessment of its thumbnail image, play an important role in determining long-term popularity. We believe this is the first study to examine new factors that help a video remain popular using both deep learning and econometric methodologies.
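The survival-analysis side of such a study can be illustrated with a Kaplan-Meier estimator over how long videos stay popular. This is a generic sketch, not the paper's model: the duration/censoring encoding is an assumption, and the paper likely uses regression-style survival models rather than this non-parametric curve.

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve.

    durations: days each video remained popular (e.g., held a recommendation slot).
    observed:  True if the drop-off was seen, False if the video was still
               popular at the end of the observation window (censored).
    Returns a list of (time, survival probability) pairs.
    """
    event_times = sorted({t for t, e in zip(durations, observed) if e})
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)
        events = sum(1 for d, e in zip(durations, observed) if d == t and e)
        s *= 1 - events / at_risk          # multiply conditional survival
        curve.append((t, s))
    return curve
```

Given durations [1, 2, 2, 3] with the second 2 censored, the curve steps down at each observed drop-off time while the censored video still counts toward the at-risk set at time 2.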

How to Search and Evaluate Video Content for Online Learning (온라인 학습을 위한 동영상 콘텐츠 검색 및 평가방법)

  • Yong, Sung-Jung;Moon, Il-Young
    • Journal of Advanced Navigation Technology / v.24 no.3 / pp.238-244 / 2020
  • Smartphones have developed and spread so rapidly that virtually the entire nation now uses them, and they have become an essential medium for consuming domestic media content; people of all genders, ages, and regions use a wide variety of content. Recently, video content for online learning has been consumed through various media outlets, indicating that learners use online video for study. Previous research studied satisfaction according to content type, but no research examined how to evaluate the learning content itself and provide it to learners, so an improved approach was needed. In this paper, we propose a system based on evaluation and review of the learning content itself, as a way to improve how learning videos are provided and to deliver high-quality learning content.

Exploring Online Learning Profiles of In-service Teachers in a Professional Development Course

  • PARK, Yujin;SUNG, Jihyun;CHO, Young Hoan
    • Educational Technology International / v.18 no.2 / pp.193-213 / 2017
  • This study aimed to explore online learning profiles of in-service teachers in South Korea, focusing on video lecture and discussion activities. A total of 269 teachers took an online professional development course for 14 days, using an online learning platform from which web log data were collected. The data showed the frequency of participation and the initial participation time, which was closely related to procrastinating behaviors. A cluster analysis revealed three online learning profiles of in-service teachers: procrastinating (n=42), passive interaction (n=136), and active learning (n=91) clusters. The active learning cluster showed high-level participation in both video lecture and discussion activities from the beginning of the online course, whereas the procrastinating cluster was seldom engaged in learning activities for the first half of the learning period. The passive interaction cluster was actively engaged in watching video lectures from the beginning of the online course but passively participated in discussion activities. As a result, the active learning cluster outperformed the passive interaction cluster in learning achievements. The findings were discussed in regard to how to improve online learning environments through considering online learning profiles of in-service teachers.
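The cluster analysis over log features such as participation frequency and initial participation time can be illustrated with a minimal k-means sketch. This is purely illustrative: the abstract does not specify the algorithm, feature scaling, or initialization the study actually used.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over 2-D feature pairs, e.g. (participation
    frequency, first-participation day). Illustrative sketch only."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        # recompute centers; keep the old center if a group went empty
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

With k=3 and standardized features, such a procedure could separate early frequent participants ("active learning") from late starters ("procrastinating"), mirroring the profiles the study reports.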

A study on the image design PBL class that can be used for e-Digital contents production

  • Ahn, In-Soo
    • Journal of the Korea Society of Computer and Information / v.23 no.2 / pp.77-82 / 2018
  • In this paper, we propose an improvement plan to increase learning effect and satisfaction through a PBL-based video design class. To prepare for the Fourth Industrial Revolution era, we must acquire diverse knowledge and skills to discover problems and solve them creatively. Therefore, various learning methods are being studied, and one of them is PBL. PBL is learner-centered education that, rather than following existing curriculum-based methods, explores problems that may arise from specific topics and finds solutions to them. In this study, two video design lectures covering video content and image content were taught as PBL classes; problems in the PBL classes were analyzed, and improvement plans were studied.

The Effect of Segment Size on Quality Selection in DQN-based Video Streaming Services (DQN 기반 비디오 스트리밍 서비스에서 세그먼트 크기가 품질 선택에 미치는 영향)

  • Kim, ISeul;Lim, Kyungshik
    • Journal of Korea Multimedia Society / v.21 no.10 / pp.1182-1194 / 2018
  • Dynamic Adaptive Streaming over HTTP (DASH) is envisioned to evolve to meet the increasing demand for seamless video streaming services in the near future. DASH performance heavily depends on the client's adaptive quality selection algorithm, which is not included in the standard. Existing algorithms are basically procedural and cannot easily capture and reflect all variations of dynamic network and traffic conditions across a variety of network environments. To solve this problem, this paper proposes a novel quality selection mechanism based on the Deep Q-Network (DQN) model: the DQN-based DASH Adaptive Bitrate (ABR) mechanism. The proposed mechanism adopts a new reward calculation method based on five major performance metrics to reflect the current conditions of networks and devices in real time. In addition, the size of the consecutive video segments to be downloaded is considered as a major learning metric to reflect a variety of video encodings. Experimental results show that the proposed mechanism quickly selects a suitable video quality even in high-error-rate environments, significantly reducing the frequency of quality changes compared to the existing algorithm while improving average video quality during playback.
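A reward over several QoE metrics, as the abstract describes, is commonly a weighted sum. The following is a hedged sketch only: the metric names, signs, and weights are assumptions for illustration, not the paper's actual five metrics or coefficients.

```python
def abr_reward(quality, rebuffer_s, quality_change, segment_size_mb, buffer_s,
               w=(1.0, 4.0, 1.0, 0.1, 0.05)):
    """Hypothetical DQN reward combining five performance signals."""
    w_q, w_r, w_c, w_s, w_b = w
    return (w_q * quality                 # reward higher selected bitrate level
            - w_r * rebuffer_s            # penalize stall (rebuffering) time
            - w_c * abs(quality_change)   # penalize quality oscillation
            - w_s * segment_size_mb       # account for segment download cost
            + w_b * buffer_s)             # reward healthy buffer occupancy
```

The rebuffering weight is typically the largest, so an agent trained against such a reward learns to avoid stalls first and to keep quality stable second, which is consistent with the reduced quality-change frequency reported above.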

Suggestions for Using Flipped Learning Videos and a Study on the Educational Value (플립러닝의 동영상 활용시 제안과 교육적 가치 고찰)

  • Yi, Eun-Seon;Lim, Heui-Seok
    • Journal of Convergence for Information Technology / v.10 no.6 / pp.87-95 / 2020
  • Previous studies were often unaware of the importance of video, showed little interest in the educational value of Flipped Learning, and included many flawed experiments because Flipped Learning itself was misunderstood. Therefore, an accurate understanding of Flipped Learning is needed. This study proposes cautions for the video-learning component of Flipped Learning and presents supporting papers and grounds for examining its educational value. A lecture video should be produced with 10 to 15 minutes of core content, and we found that the educational value of Flipped Learning lies in self-directed learning, cooperative learning, habruta, overcoming the forgetting curve, and metacognition. It is hoped that this study will provide good guidance and a direction of education for teachers who have difficulty with Flipped Learning.

Effect of text and image presenting method on Chinese college students' learning flow, learning satisfaction and learning outcome in video learning environment (중국대학생 동영상 학습에서 텍스트 제시방식과 이미지 제시방식이 학습몰입, 학습만족, 학업성취에 미치는 효과)

  • Zhang, Jing;Zhu, Hui-Qin;Kim, Bo-Kyeong
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.1 / pp.633-640 / 2021
  • This study analyzes the effects of text and image presenting methods in video lectures on students' learning flow, learning satisfaction, and learning outcomes. The text presenting methods were short sentences of two or three words versus key words only, while the image presenting methods were images containing both detailed and related information versus images containing only related information. 167 first-year students from Xingtai University participated in the experiment and were randomly assigned to one of four video types. The results are as follows. First, learning flow, learning satisfaction, and learning outcomes of the group presented with short-sentence videos were significantly higher than those of the key-word group. Second, learning flow, learning satisfaction, and learning outcomes of the group presented with only related information were significantly higher than those of the group shown both detailed and related information. That is, the mean values of the dependent variables were highest for the short-sentence and related-information-only groups, and lowest for the key-word and detailed-plus-related-information groups.

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the high performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We extend a semi-supervised learning method combined with active learning to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with the active semi-supervised learning architecture gradually improves performance: in a simple environment, using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Compared to supervised learning, our method reduces data-labeling time on the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
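The combination of self-training (adopting confident pseudo-labels) with active learning (querying a human about uncertain samples) can be sketched as a generic loop. Everything here is an assumption for illustration: the function names, the confidence threshold, the query budget, and the stopping rule are not taken from the paper.

```python
def active_semi_supervised(model_fit, model_predict, labeled, unlabeled,
                           oracle, rounds=3, confident=0.9, budget=5):
    """Sketch of one hybrid loop: confident predictions become pseudo-labels,
    and the least confident samples are sent to a human oracle.

    model_fit(labeled) -> model; model_predict(model, x) -> (label, confidence).
    """
    for _ in range(rounds):
        model = model_fit(labeled)
        scored = [(x, *model_predict(model, x)) for x in unlabeled]
        # self-training: adopt high-confidence pseudo-labels
        pseudo = [(x, y) for x, y, c in scored if c >= confident]
        # active learning: query the oracle on the least confident samples
        uncertain = sorted((s for s in scored if s[2] < confident),
                           key=lambda s: s[2])[:budget]
        queried = [(x, oracle(x)) for x, _, _ in uncertain]
        labeled = labeled + pseudo + queried
        unlabeled = [x for x, _, c in scored
                     if c < confident and all(x != q for q, _ in queried)]
    return labeled
```

Each round grows the labeled set while spending human effort only on the hardest samples, which is how such a scheme reduces data-labeling time relative to fully supervised training.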