• Title/Summary/Keyword: Video-based Learning


A Study about the Characteristics of Teachers' Viewpoint in Analysis of an Instruction : Focused on a Centroid Teaching-Learning Case (교사들의 수업 분석 관점에 대한 연구 - 삼각형의 무게중심에 대한 수업 사례를 중심으로 -)

  • Shin, Bomi
    • Journal of Educational Research in Mathematics
    • /
    • v.26 no.3
    • /
    • pp.421-442
    • /
    • 2016
  • This study analyzed the characteristics that emerged while 38 secondary school teachers observed a video clip of a lesson on the centroid of a triangle. Based on this analysis, the study aimed to draw implications for enhancing teachers' knowledge of teaching mathematics and for designing mathematics teacher education programs and professional development initiatives. To this end, the research first reviewed previous studies on the 'Knowledge Quartet' (KQ) as a framework for analyzing teachers' knowledge in mathematics instruction. Second, it examined the teachers' observations in light of the KQ. Third, based on this analysis, it identified issues for mathematics teacher education programs in the KQ categories of 'Foundation', 'Transformation', 'Connection', and 'Contingency'. Through this analytic process, the research clarifies the features of effective teachers' knowledge in teaching mathematics and elucidates essential issues for mathematics education on the basis of the results.

Analysis of Emotions in Broadcast News Using Convolutional Neural Networks (CNN을 활용한 방송 뉴스의 감정 분석)

  • Nam, Youngja
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.8
    • /
    • pp.1064-1070
    • /
    • 2020
  • In Korea, video-based news broadcasters are primarily classified into terrestrial broadcasters, general programming cable broadcasters, and YouTube broadcasters. Recently, news broadcasters have become increasingly subjective as they target specific desired audiences. This violates the audience's normative expectations of impartiality and neutrality in journalism and may negatively affect audience perceptions of issues. This study examined whether broadcast news reporting conveys emotions and, if so, how news broadcasters differ by emotion type. Emotions were classified into neutrality, happiness, sadness, and anger using a convolutional neural network (CNN), a class of deep neural networks. Results showed that news anchors and reporters tend to express emotions during TV broadcasts regardless of the broadcast system. This study provides the first quantitative investigation of emotions in broadcast news, and the first deep learning-based approach to analyzing them.
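The classification step the abstract describes can be sketched as the forward pass of a small CNN over a face crop, ending in a softmax over the four emotion classes. This is a minimal illustrative sketch, assuming 32x32 single-channel inputs and random weights; the layer sizes and training procedure are not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

EMOTIONS = ["neutral", "happy", "sad", "angry"]

def conv2d(x, kernels):
    """Valid 2-D convolution of a single-channel image with K kernels, then ReLU."""
    k = kernels.shape[1]
    h, w = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.empty((kernels.shape[0], h, w))
    for i, ker in enumerate(kernels):
        for r in range(h):
            for c in range(w):
                out[i, r, c] = np.sum(x[r:r+k, c:c+k] * ker)
    return np.maximum(out, 0.0)

def classify(face, kernels, w_fc):
    feat = conv2d(face, kernels)
    pooled = feat.reshape(feat.shape[0], -1).mean(axis=1)  # global average pool
    logits = pooled @ w_fc
    e = np.exp(logits - logits.max())
    return e / e.sum()                                     # softmax over 4 emotions

face = rng.random((32, 32))                  # stand-in for a face crop from a frame
kernels = rng.standard_normal((8, 3, 3)) * 0.1
w_fc = rng.standard_normal((8, 4)) * 0.1

probs = classify(face, kernels, w_fc)
print(EMOTIONS[int(np.argmax(probs))], probs.round(3))
```

In the paper's setting the weights would come from training on labeled anchor/reporter frames; here they are random, so only the mechanics of the pipeline are shown.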

Deep-Learning Based Real-time Fire Detection Using Object Tracking Algorithm

  • Park, Jonghyuk;Park, Dohyun;Hyun, Donghwan;Na, Youmin;Lee, Soo-Hong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.1
    • /
    • pp.1-8
    • /
    • 2022
  • In this paper, we propose a fire detection system for CCTV images that combines the YOLOv4 model, capable of real-time object detection, with the DeepSORT object tracking algorithm. The fire detection model was trained on 10,800 images and verified on a separate test set of 1,000 images. The fire detection rate in single images and the stability of detection across frames were then improved by tracking the detected fire regions with the DeepSORT algorithm. We verified that fire in a single frame of video or in a single image can be detected in real time, within 0.1 seconds. The proposed AI fire detection system is more stable and faster than existing fire accident detection systems.
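The tracking step that keeps a fire region's identity stable even when the per-frame detector briefly misses it can be sketched as follows. This is a plain IoU-association tracker with a detector stub, not the paper's actual YOLOv4+DeepSORT pipeline (DeepSORT additionally uses appearance features and a Kalman filter); the thresholds are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class FireTracker:
    def __init__(self, iou_thresh=0.3, max_missed=5):
        self.tracks = {}            # track id -> {"box": ..., "missed": ...}
        self.next_id = 0
        self.iou_thresh = iou_thresh
        self.max_missed = max_missed

    def update(self, detections):
        """detections: boxes from the per-frame fire detector (stubbed here)."""
        unmatched = list(detections)
        for tid, tr in list(self.tracks.items()):
            best = max(unmatched, key=lambda d: iou(tr["box"], d), default=None)
            if best is not None and iou(tr["box"], best) >= self.iou_thresh:
                tr["box"], tr["missed"] = best, 0   # matched: refresh the track
                unmatched.remove(best)
            else:
                tr["missed"] += 1                   # detector missed this frame
                if tr["missed"] > self.max_missed:
                    del self.tracks[tid]            # drop stale tracks
        for det in unmatched:                       # new fire regions -> new tracks
            self.tracks[self.next_id] = {"box": det, "missed": 0}
            self.next_id += 1
        return {tid: tr["box"] for tid, tr in self.tracks.items()}

tracker = FireTracker()
print(tracker.update([(10, 10, 50, 50)]))   # frame 1: a new fire track appears
print(tracker.update([(12, 11, 52, 51)]))   # frame 2: same track, slightly moved
print(tracker.update([]))                   # frame 3: detection missed; track survives
```

Keeping the track alive through short detection gaps is what raises the "detection maintenance" performance the abstract mentions.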

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim ;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.89-89
    • /
    • 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve its value, soybean yield and seed quality must be protected from various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only reduces yields directly but also causes disorders and diseases in plant growth, and no resistant soybean resources have been reported. It is therefore necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage it causes. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye; because human vision is subjective and inconsistent, this is time-consuming, labor-intensive, and requires specialist assistance. In this study, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized as a 3D model from an artificial-intelligence perspective. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with a YOLO-series model, the movement paths of the pests showed a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z coordinates of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are also being conducted in soybean fields, and applying a pest control platform at the early growth stage should help preserve soybean yield.
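The behavioral scoring step can be sketched from the tracked (x, y, z) coordinates: if an insect's distance to the scent source grows over the clip, the movement is scored as avoidance (the "negative reaction" the abstract reports). This is a minimal sketch under assumed geometry; the source position, sampling rate, and threshold are illustrative, not the paper's setup.

```python
import numpy as np

def response(track, source, thresh=0.0):
    """track: (T, 3) array of tracked positions; source: (3,) scent location."""
    d = np.linalg.norm(track - source, axis=1)  # distance to source per frame
    drift = d[-1] - d[0]                        # net change over the clip
    return "avoidance" if drift > thresh else "attraction"

source = np.array([0.0, 0.0, 0.0])
# Synthetic trajectory that steadily recedes from the source.
moving_away = np.cumsum(np.ones((20, 3)) * 0.5, axis=0)
print(response(moving_away, source))            # -> avoidance
```

In practice the per-frame positions would come from the YOLO-based tracker; aggregating this per-insect label over all subjects yields a fraction like the 80% figure reported.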


Enhancing the Quality of Students' Argumentation and Characteristics of Students' Argumentation in Different Contexts (과학적 논의과정 활동을 통한 학생들의 논의과정 변화 및 논의상황에 따른 논의과정 특성)

  • Kwak, Kyoung-Hwa;Nam, Jeong-Hee
    • Journal of The Korean Association For Science Education
    • /
    • v.29 no.4
    • /
    • pp.400-413
    • /
    • 2009
  • The purpose of this study was to investigate middle school students' argumentation processes in science lessons and to compare their argumentation in different contexts (socioscientific versus scientific). An argumentation-based teaching-learning strategy was used to enhance the quality of students' arguments in science lessons. Data were collected from five lessons by video- and audio-recording eight groups of four students each engaging in argumentation. The quality and frequency of students' argumentation were analyzed using an assessment framework based on the work of Toulmin. The findings showed that: (a) the quality of students' argumentation improved; and (b) there were no differences across argumentation contexts in the structure of argumentation or in the percentage of explanatory and dialogic argumentation components. These results show that students' argumentation can be enhanced through strategic argumentation teaching and learning.

A Discussion on AI-based Automated Picture Creations (인공지능기반의 자동 창작 영상에 관한 논구)

  • Junghoe Kim;Joonsung Yoon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.723-730
    • /
    • 2024
  • To trace changes in the concept and understanding of automatically generated images, this study analogically compares the creative methods of photography and cinema, which represent the established image fields, with AI-based image creation in terms of 'automaticity', and discusses the understanding and possibilities of new automatic image creation. When photography and cinema were invented, they were regarded as fields of 'automatic creation' in comparison with traditional art genres such as painting. Recently, as AI has been applied to video production, the concept of 'automatic creation' has expanded, and experimental works that freely cross the boundaries of literature, art, photography, and film have flourished. Using technologies such as machine learning and deep learning, AI can perform the creative process independently. Automated creation using AI can greatly improve efficiency, but it also risks compromising the personal and subjective nature of art; the problem stems from the fact that AI cannot completely replace human creativity.

AdaBoost-based Real-Time Face Detection & Tracking System (AdaBoost 기반의 실시간 고속 얼굴검출 및 추적시스템의 개발)

  • Kim, Jeong-Hyun;Kim, Jin-Young;Hong, Young-Jin;Kwon, Jang-Woo;Kang, Dong-Joong;Lho, Tae-Jung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.11
    • /
    • pp.1074-1081
    • /
    • 2007
  • This paper presents a method for real-time face detection and tracking that combines the AdaBoost and CAMShift algorithms. AdaBoost selects important features, called weak classifiers, from many candidate image features by tuning the weight of each feature during learning. Although it extracts objects well, its computation time is high because it must search the image at multiple window scales, so direct application is impractical for real-time tasks such as multitasking operating systems, robots, and mobile environments. CAMShift, an improvement of the mean-shift algorithm for video streaming, tracks an object of interest at high speed based on the hue values of the target region, but its detection performance degrades under dynamic illumination. We propose a combination of AdaBoost and CAMShift that improves computing speed while maintaining good face detection performance. The method was validated on real image sequences containing one or more faces.
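The AdaBoost step the abstract describes can be sketched on toy data: each round picks the decision stump (a "weak classifier") with the lowest weighted error, then re-weights the samples so misclassified ones count more in the next round. The Haar-like image features of a real face detector are replaced here by simple 1-D thresholds; this is a sketch of the algorithm, not the paper's detector.

```python
import numpy as np

def adaboost(x, y, rounds=5):
    n = len(x)
    w = np.full(n, 1.0 / n)                  # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in x:                          # candidate stump thresholds
            for sign in (1, -1):
                pred = np.where(x >= t, sign, -sign)
                err = w[pred != y].sum()     # weighted error of this stump
                if best is None or err < best[0]:
                    best = (err, t, sign, pred)
        err, t, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # stump's voting weight
        w *= np.exp(-alpha * y * pred)           # boost misclassified samples
        w /= w.sum()
        ensemble.append((alpha, t, sign))
    return ensemble

def predict(ensemble, x):
    score = sum(a * np.where(x >= t, s, -s) for a, t, s in ensemble)
    return np.sign(score)

x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost(x, y)
print(predict(model, x))
```

The multi-scale window search that makes the full detector slow happens outside this loop: the learned ensemble must be evaluated at every position and scale of each frame, which is exactly the cost the CAMShift hand-off avoids.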

Blind Image Quality Assessment on Gaussian Blur Images

  • Wang, Liping;Wang, Chengyou;Zhou, Xiao
    • Journal of Information Processing Systems
    • /
    • v.13 no.3
    • /
    • pp.448-463
    • /
    • 2017
  • Multimedia such as audio, images, and video is a ubiquitous and indispensable part of daily life and learning. Objective and subjective quality evaluations play an important role in many multimedia applications. Blind image quality assessment (BIQA) indicates the perceptual quality of a distorted image without considering or using its reference image. Blur is one of the most common image distortions. In this paper, we propose a novel BIQA index for Gaussian blur distortion, based on the fact that images with different degrees of blur change differently when subjected to the same blur. We describe this discrimination from three aspects: color, edge, and structure. For color, we adopt the color histogram; for edge, we use the edge intensity map, with a saliency map as the weighting function to be consistent with the human visual system (HVS); for structure, we use the structure tensor and the structural similarity (SSIM) index. Extensive experiments on four benchmark databases show that the proposed index is highly consistent with subjective quality assessment.
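The re-blur principle the index is built on can be sketched directly: a sharp image changes a lot when blurred again, while an already-blurry image barely changes. Here the change is measured with a single gradient-energy ratio rather than the paper's full color/edge/structure features, so this is an illustrative sketch of the idea, not the proposed index.

```python
import numpy as np

def gauss_kernel(sigma, radius=3):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gauss_kernel(sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def grad_energy(img):
    gy, gx = np.gradient(img)
    return float(np.mean(gx**2 + gy**2))

def reblur_score(img, sigma=1.5):
    """Near 1: blurry input (re-blur changes little). Near 0: sharp input."""
    return grad_energy(blur(img, sigma)) / (grad_energy(img) + 1e-12)

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))     # high-frequency "sharp" test image
blurry = blur(sharp, 2.0)        # pre-blurred version of the same image
print(round(reblur_score(sharp), 3), round(reblur_score(blurry), 3))
```

Blurring attenuates high spatial frequencies, so the already-low-pass image loses proportionally less gradient energy on the second blur; the score therefore separates the two without any reference image, which is the blind aspect of BIQA.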

An Architecture for Mobile Instruction: Application to Mathematics Education through the Web

  • Kim, Steven H.;Kwon, Oh-Nam;Kim, Eun-Jung
    • Research in Mathematical Education
    • /
    • v.4 no.1
    • /
    • pp.45-55
    • /
    • 2000
  • The rapid proliferation of wireless networks provides a ubiquitous channel for delivering instructional materials at the user's convenience. By delivering content through portable devices linked to the Internet, the full spectrum of multimedia capabilities is available for engaging the user's interest, encompassing not only text but also images, video, speech generation, and voice recognition. Moreover, incorporating machine learning capabilities at the source makes it possible to tailor the material to the user's general level of expertise as well as the immediate needs of the moment: for instance, a request for information about a particular city might be answered with a leisurely presentation when made from home, but more tersely if the user happens to be driving a car. This paper presents a system architecture to support mobile instruction in conjunction with knowledge-based tutoring capabilities. For concreteness, the general concepts are examined in the context of a system for mathematics education on the Web.
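The context-tailoring rule the abstract gives as an example can be sketched as a simple dispatch on user state: the same lesson item is rendered leisurely at home but tersely for a user on the move. The context fields, verbosity levels, and example content are illustrative assumptions, not the paper's actual architecture.

```python
def render_lesson(item, context):
    """item: dict with 'summary' and 'detail'; context: dict of user state."""
    if context.get("situation") == "driving":
        return item["summary"]                      # terse, audio-friendly
    if context.get("expertise", "novice") == "expert":
        return item["summary"]                      # experts can skip the basics
    return item["summary"] + " " + item["detail"]   # leisurely presentation

item = {"summary": "The centroid divides each median 2:1.",
        "detail": "It is the point where the three medians intersect."}

print(render_lesson(item, {"situation": "home", "expertise": "novice"}))
print(render_lesson(item, {"situation": "driving"}))
```

In the architecture described, this selection logic would live at the content source, with the machine learning component updating the expertise estimate as the user interacts with the tutor.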


Human Detection in Overhead View and Near-Field View Scene

  • Jung, Sung-Hoon;Jung, Byung-Hee;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.6
    • /
    • pp.860-868
    • /
    • 2008
  • Human detection techniques for outdoor scenes have long been studied to monitor suspicious movements or to protect people from danger. However, while many human detection methods have been developed for far-field view scenes, few exist for overhead or near-field views. In this paper, we suggest a set of five features useful for human detection in overhead view scenes and another set of four features useful in near-field view scenes. Eight candidate features are first extracted by analyzing the geometrically varying characteristics of moving objects in sample video sequences. The features that contribute most to distinguishing humans from other moving objects in each view are then selected using a neural network learning technique. In experiments with hundreds of moving objects, each feature set proved very useful for human detection, with classification accuracy above 90% for both overhead and near-field view scenes. The suggested feature sets can be used effectively in a PTZ camera-based surveillance system in which both overhead and near-field view scenes appear.
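The pipeline the abstract describes can be sketched on synthetic data: geometric features of a moving blob are computed from its bounding box and fed to a small learned classifier. Only two toy features (aspect ratio, fill ratio) and a single-layer perceptron are used here; the paper's actual five- and four-feature sets and its neural network differ.

```python
import numpy as np

def features(w, h, area):
    """Toy geometric features of a moving blob: aspect ratio, fill ratio."""
    return np.array([h / w, area / (w * h)])

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Classic perceptron rule: update weights only on misclassified samples."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:
                w += lr * yi * xi
    return w

# Synthetic blobs: in near-field view, humans tend to be tall and well-filled.
human = [features(30, 90, 2000), features(28, 80, 1800)]
other = [features(90, 30, 1200), features(80, 25, 900)]
X = np.array(human + other)
y = np.array([1, 1, -1, -1])                    # +1 = human, -1 = other

w = train_perceptron(X, y)
pred = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
print(pred)
```

Feature selection in the paper works the other way around: all eight candidates are fed in, and the learning process reveals which ones carry the discriminative weight for each camera view.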
