• Title/Summary/Keyword: Video Learning


The Effect of Nursing Students Academic Achievement in the COVID-19 On-Contact Learning Environment: Focusing on Video production class and Real-time video class (COVID-19 온택 학습환경에서 간호대학생의 학업성취감에 미치는 영향요인: 동영상 제작수업과 실시간 화상수업을 중심으로)

  • Hye Kyung Yang
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.321-328 / 2023
  • This study attempted to identify factors affecting academic achievement in terms of class quality, learning immersion, level of academic achievement, and class type, comparing video production classes and real-time video classes in the on-contact learning situation created by the COVID-19 epidemic. The subjects were 122 students enrolled in the nursing departments of two universities. Class quality was rated higher in real-time video classes (t=-2.69, p=0.02), while learning immersion (t=1.14, p=0.28) and academic achievement (t=4.24, p=0.01) were higher in video production classes. By class type, learning immersion had the greatest effect on academic achievement in video production classes (β=.37, p<.001), while class quality had the greatest effect in real-time video classes (β=.29, p<.001). Based on these results, it is suggested that instructional design strategies suited to each class type need to be developed to improve academic achievement in an on-contact environment.
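The group comparisons reported above are independent-samples t-tests. As a minimal sketch with made-up ratings (not the study's data), Welch's t-statistic for two class types can be computed in plain Python:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical class-quality ratings, purely for illustration
production = [3.2, 3.5, 3.1, 3.8, 3.4]
realtime = [3.9, 4.1, 3.7, 4.0, 3.8]
t = welch_t(production, realtime)  # negative, as the production mean is lower
```

A negative t, as in the abstract's class-quality result, indicates the first group's mean is below the second's.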

Case Studies and Derivation of Course Profile in accordance with Video Graphics Job (영상그래픽 직무에 따른 교과목운영의 사례분석)

  • Park, Hea-Sook
    • The Transactions of the Korean Institute of Electrical Engineers P / v.66 no.3 / pp.135-138 / 2017
  • This study presents a case analysis of the series of processes, from a job analysis survey and results analysis to academic achievement assessment, carried out to transform the existing courses into an NCS-based video broadcasting curriculum. The study also analyzed the existing curriculum together with workforce trends and the needs of the broadcasting content industry. Through a needs analysis of industry, alumni, and students, the jobs of video graphics, video editing, and video directing were selected; this paper deals mainly with the video graphics job. Through job analysis, the competency units of modeling, animation, effects, and lighting were chosen, and from these two courses, Introduction to Graphics and Application of Graphics, were derived, with course profiles and performance criteria selected. Students trained under this NCS-based curriculum were then evaluated to investigate their learning ability with respect to the job.

Accurate Pig Detection for Video Monitoring Environment (비디오 모니터링 환경에서 정확한 돼지 탐지)

  • Ahn, Hanse;Son, Seungwook;Yu, Seunghyun;Suh, Yooil;Son, Junhyung;Lee, Sejun;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.890-902 / 2021
  • Although object detection accuracy with still images has been significantly improved with the advance of deep learning techniques, object detection with video data remains a challenging problem due to the real-time requirement and the accuracy drop caused by occlusion. In this research, we propose a pig detection method for a video monitoring environment. First, we determine motion from video data obtained from a tilted-down-view camera, based on the average size of each pig at each location in the training data, and extract key frames based on this motion information. For each key frame, we then apply YOLO, which is known to offer a superior trade-off between accuracy and execution speed among deep learning-based object detectors, to obtain the pigs' bounding boxes. Finally, we merge the bounding boxes between consecutive key frames in order to reduce false positives and false negatives. Based on experimental results with a video data set obtained from a pig farm, we confirmed that the pigs could be detected with an accuracy of 97% at a processing speed of 37 fps.
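The abstract does not spell out the exact rule for merging bounding boxes across key frames; one plausible sketch of the idea keeps only detections that overlap a detection in the previous key frame, treating unmatched boxes as likely false positives (the IoU threshold is an assumed parameter):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def keep_consistent(prev_boxes, curr_boxes, thresh=0.5):
    """Keep current-key-frame boxes that overlap some previous-key-frame box."""
    return [c for c in curr_boxes
            if any(iou(c, p) >= thresh for p in prev_boxes)]
```

A symmetric pass in the other direction could recover missed detections (false negatives) by carrying forward previous boxes with no current match.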

Deep Learning based Loss Recovery Mechanism for Video Streaming over Mobile Information-Centric Network

  • Han, Longzhe;Maksymyuk, Taras;Bao, Xuecai;Zhao, Jia;Liu, Yan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4572-4586 / 2019
  • Mobile Edge Computing (MEC) and Information-Centric Networking (ICN) are essential network architectures for the future Internet. The advantages of MEC and ICN, such as computation and storage capabilities at the edge of the network, in-network caching, and the named-data communication paradigm, can greatly improve the quality of video streaming applications. However, packet loss in wireless network environments still affects video streaming performance, and the existing loss recovery approaches in ICN do not exploit the capabilities of MEC. This paper proposes a Deep Learning based Loss Recovery Mechanism (DL-LRM) for video streaming over MEC-based ICN. Unlike existing approaches, the Forward Error Correction (FEC) packets are generated at the edge of the network, which dramatically reduces the workload of the core network and backhaul. By monitoring network states, the proposed DL-LRM controls the FEC request rate with a deep reinforcement learning algorithm. Considering the characteristics of video streaming and MEC, we also develop content caching detection and fast retransmission algorithms to effectively utilize MEC resources. Experimental results demonstrate that DL-LRM adaptively adjusts and controls the FEC request rate and achieves better video quality than existing approaches.
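DL-LRM learns its FEC-request policy with deep reinforcement learning; as a much simpler stand-in for intuition only, the sketch below picks an FEC redundancy ratio from recently observed packet-loss rates (the window size `k`, safety `margin`, and `max_rate` cap are all assumed parameters, not from the paper):

```python
def fec_request_rate(loss_history, k=10, margin=1.2, max_rate=0.5):
    """Choose an FEC redundancy ratio (repair packets per source packet)
    from the mean of the last k observed loss rates, with a safety margin.
    Simplified heuristic; the paper learns this policy with deep RL."""
    recent = loss_history[-k:]
    loss = sum(recent) / len(recent)
    return min(max_rate, loss * margin)
```

A learned policy replaces this fixed rule with one that also accounts for video characteristics and edge cache state, as the abstract describes.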

A Study on Efficient Learning Units for Behavior-Recognition of People in Video (비디오에서 동체의 행위인지를 위한 효율적 학습 단위에 관한 연구)

  • Kwon, Ick-Hwan;Hadjer, Boubenna;Lee, Dohoon
    • Journal of Korea Multimedia Society / v.20 no.2 / pp.196-204 / 2017
  • An intelligent video surveillance system recognizes behavior by analyzing the movement patterns of objects of interest in the frame information of video input from a camera. Detecting specific behaviors of objects in a crowd has become a critical problem, for example in the event of a terrorist strike, and remains an important but difficult task in computer vision. With the realization of big data utilizing machine learning and data mining techniques, the amount of video from CCTV, smartphones, and drones has increased dramatically. In this paper, we propose a multiple-sliding-window method that recognizes cumulative change as a single unit in order to improve recognition accuracy. The experimental results demonstrate that the method provides robust and efficient learning units for the classification of specific behaviors.
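A multiple-sliding-window scheme of the kind described can be sketched as a generator over a frame sequence, yielding overlapping segments at several temporal scales so that cumulative change is learned as one unit (the window sizes and stride here are illustrative assumptions):

```python
def multi_sliding_windows(frames, sizes=(4, 8, 16), stride=2):
    """Yield (window_size, start_index, segment) for several window
    lengths, producing multi-scale learning units from one frame list."""
    for size in sizes:
        for start in range(0, len(frames) - size + 1, stride):
            yield size, start, frames[start:start + size]
```

Each yielded segment would then be featurized and fed to a classifier as one training or inference unit.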

Multi-channel Video Analysis Based on Deep Learning for Video Surveillance (보안 감시를 위한 심층학습 기반 다채널 영상 분석)

  • Park, Jang-Sik;Wiranegara, Marshall;Son, Geum-Young
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.6 / pp.1263-1268 / 2018
  • In this paper, a video analysis technique is proposed to implement a video surveillance system using deep learning object detection and a probabilistic data association filter for tracking multiple objects, and its implementation on a GPU is suggested. The proposed technique performs object detection and object tracking sequentially. The deep learning network architecture uses ResNet for object detection and applies a probabilistic data association filter for multiple-object tracking. The technique can be used to detect intruders trespassing in a restricted area or to count the number of people entering a specified area. As a result of simulations and experiments, 48 channels of video can be analyzed at a speed of about 27 fps, and real-time video analysis is possible through the RTSP protocol.

Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems / v.32 no.2 / pp.87-108 / 2023
  • Purpose: The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trend. Design/methodology/approach: This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of related information (e.g. scripts; video metadata; facial expressions; scripts and video metadata; scripts and facial expressions; and scripts, video metadata, and facial expressions) were used as input for training and evaluation. The input data were analyzed using six models, such as support vector machine and deep neural network. The area under the curve (AUC) was used to evaluate the performance of the classification models. Findings: The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expressions) were the highest for the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expressions). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
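AUC, the evaluation metric used here, has a simple rank interpretation: the probability that a randomly chosen positive (fake) example is scored above a randomly chosen negative (real) one, counting ties as one half. A self-contained sketch, independent of any particular classifier:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect ranking; 0.5 means the scores carry no ranking information.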

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam;Ravichandran, Suban
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.230-240 / 2022
  • Sharing videos online is an emerging and important concept in applications such as surveillance and mobile video search. There is therefore a need for a personalized web video retrieval system that explores relevant videos and helps people searching for videos related to specific big-data content. To this end, features with reduced dimensionality are computed from videos to capture discriminative aspects of a scene, based on shape, histogram, texture, object annotation, coordinates, color, and contour data. Dimensionality reduction depends mainly on feature extraction and feature selection in multi-labeled multimedia data retrieval. Many researchers have implemented techniques to reduce dimensionality based on the visual features of video data, but each has advantages and disadvantages for video retrieval with advanced features. In this research, we present a Novel Intent based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that examines dimensionality reduction for exact and fast video retrieval based on different visual features. For dimensionality reduction, NIDRSLA learns the projection matrix by increasing the dependence between the enlarged data and the projected-space features. The proposed approach also addresses video segmentation with frame selection using low-level and high-level features, together with efficient object annotation for video representation. Experiments performed on a synthetic data set demonstrate the efficiency of the proposed approach compared with traditional state-of-the-art video retrieval methodologies.

Implementation of a Video Retrieval System Using Annotation and Comparison Area Learning of Key-Frames (키 프레임의 주석과 비교 영역 학습을 이용한 비디오 검색 시스템의 구현)

  • Lee Keun-Wang;Kim Hee-Sook;Lee Jong-Hee
    • Journal of Korea Multimedia Society / v.8 no.2 / pp.269-278 / 2005
  • To process video data effectively, the content information of the video data must be loaded into a database, and a semantics-based retrieval method must be available for various user queries. In this paper, we propose a video retrieval system that supports semantic retrieval of massive video data through user keywords and a comparison-area learning method based on an automatic agent. From the user's initial query and selection of an image as a key frame extracted from the query, the agent provides the detailed shape for annotation of the extracted key frame. The key frame selected by the user also becomes a query image, and the system searches for the most similar key frame through color histogram comparison and the proposed comparison-area learning method. In experiments, the designed and implemented system showed a precision ratio of more than 93 percent in performance assessment.
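The color histogram comparison step can be sketched in plain Python; the 8-bins-per-channel choice and the histogram-intersection similarity are illustrative assumptions, since the abstract does not specify the binning or distance used:

```python
def color_histogram(pixels, bins=8):
    """Normalized per-channel histogram of (r, g, b) pixels in 0..255."""
    hist = [0] * (bins * 3)
    for r, g, b in pixels:
        for ch, v in enumerate((r, g, b)):
            hist[ch * bins + min(v * bins // 256, bins - 1)] += 1
    total = len(pixels) * 3
    return [h / total for h in hist]

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, lower otherwise."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

The key frame whose histogram maximizes this similarity against the query image's histogram would be returned as the best match.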

Designing a Micro-Learning-Based Learning Environment and Its Impact on Website Designing Skills and Achievement Motivation Among Secondary School Students

  • Almalki, Mohammad Eidah Messfer
    • International Journal of Computer Science & Network Security / v.21 no.12 / pp.335-343 / 2021
  • The study aimed to elucidate how to design a learning environment on the premise of micro-learning (ML) and to investigate its impact on website-designing skills and achievement motivation among secondary school students. Adopting the experimental approach, data were collected through an achievement test, a product evaluation form, and a test gauging motivation for achievement. The sample was divided into two experimental groups. Results revealed statistically significant differences at the 0.05 level between the mean scores of the two groups that experienced ML, irrespective of the two modes of presenting the video, in the pre-test and post-test of website design skills, the product evaluation form, and the achievement motivation test. In addition, there were statistically significant differences at the 0.05 level between the mean scores of the first experimental group, which received ML using the segmented-video presentation style, and those of the second experimental group, which received ML using the continuous-video presentation style, in the post cognitive test of website design and management skills, in favor of the segmented-video group. Another salient finding is the absence of significant differences at the 0.05 level between the two groups in the post-application of the product scorecard of website-designing skills and the motivation test. In light of these findings, the study recommends using ML in teaching computer courses at different educational stages in Saudi Arabia, training computer and information technology teachers to harness ML in their teaching, and using ML in designing courses at all levels of education.