• Title/Summary/Keyword: AI Video


A Study on Elementary Education Examples for Data Science using Entry (엔트리를 활용한 초등 데이터 과학 교육 사례 연구)

  • Hur, Kyeong
    • Journal of The Korean Association of Information Education
    • /
    • v.24 no.5
    • /
    • pp.473-481
    • /
    • 2020
  • Data science starts with small-data analysis and extends to machine learning and deep learning for big-data analysis. As a core area of artificial intelligence technology, data science should be systematically reflected in the school curriculum. For data science education, Entry provides a data-analysis tool suitable for elementary education. Big-data analysis involves extracting data samples and interpreting the results through statistical inference and judgment. In this paper, the big-data analysis area that requires statistical knowledge is excluded from the elementary scope, and data science education examples focusing on that scope are proposed. To this end, the general data science education stages are explained first, and elementary data science education stages are newly proposed. Then, following these stages, an example of comparing the values of data variables and an example of analyzing correlations between data variables are presented using the public small-data sets provided by Entry. With the Entry data-analysis examples proposed in this paper, data science convergence education can be provided in elementary school using data generated in various subjects. In addition, data science teaching materials combined with text, audio, and video recognition AI tools can be developed using Entry.
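The correlation analysis the abstract describes can be sketched outside of Entry's block environment. This is an illustrative Python analogue, not Entry code; the sample data (temperature vs. ice-cream sales) is a hypothetical stand-in for Entry's public small-data sets.

```python
# Pearson correlation between two data variables, the same quantity
# Entry's data-analysis tool reports for a correlation example.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical small data: daily temperature vs. ice-cream sales.
temps = [20, 22, 25, 27, 30]
sales = [30, 34, 40, 45, 52]
print(round(pearson_r(temps, sales), 3))  # close to +1: strong positive correlation
```

A value near +1 or -1 indicates a strong linear relationship, near 0 a weak one, which matches the interpretation step in the elementary data science stages.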

A Study on the Operating Conditions of Lecture Contents in Contactless Online Classes for University Students (대학생 대상 비대면 온라인 수업에서의 강의 콘텐츠 운영 실태 연구)

  • Lee, Jongmoon
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.32 no.4
    • /
    • pp.5-24
    • /
    • 2021
  • The purpose of this study was to investigate and analyze how lecture contents are operated in contactless online classes for university students. First, an analysis of 93 responses showed that 93.3% of respondents took real-time online lectures (47.7%) or recorded video lectures (45.6%). Second, regarding the contents used as textbooks, e-books (materials) and paper books (materials) were used together (36.6%), or e-books or electronic materials alone were used (36.6% and 37.6%, respectively), in both liberal arts (47.3%) and major subjects (39.8%). Beyond textbooks, both major subjects and liberal arts made heavy use of web materials (47.6% and 40.5%, respectively) and YouTube materials (33.3% and 48.0%, respectively) as external materials. Third, to share lecture contents, both liberal arts and major subjects used 'electronic files in the form of PPT or text organized and written by instructors' (62.9% and 58.1%, respectively), 'internet materials' (16.7% and 19%, respectively), and 'paper books or materials' (10.4% and 12.3%, respectively). Regarding the on-screen lecture contents, 93.5% of respondents were satisfied in major subjects and 90.2% in liberal arts. These results suggest developing multimedia-based lecture contents and an evaluation solution capable of real-time exam supervision, developing a task-management system capable of AI-based plagiarism detection, task guidance, and task evaluation, and institutionalizing a solution to copyright problems in digitizing lecture materials so that lectures can be given in a ubiquitous environment.

A Study on Deep Learning based Aerial Vehicle Classification for Armament Selection (무장 선택을 위한 딥러닝 기반의 비행체 식별 기법 연구)

  • Eunyoung, Cha;Jeongchang, Kim
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.936-939
    • /
    • 2022
  • As air combat system technologies have developed in recent years, the development of air defense systems is required. In the operating concept of an anti-aircraft defense system, selecting an appropriate armament for the target is one of the capabilities that lets the system respond efficiently to threats with limited anti-aircraft resources. Much of flying-threat identification relies on the operator's visual identification, but there are many limitations in visually discriminating a flying object maneuvering at high speed from a distance. In addition, as the demand for unmanned and intelligent weapon systems on the modern battlefield increases, it is essential to develop technology that automatically identifies and classifies aircraft in place of the operator's visual identification. Although some examples of weapon-system identification with deep learning-based models trained on video data of tanks and warships have been presented, aerial vehicle identification is still lacking. Therefore, in this paper, we present a model for classifying fighters, helicopters, and drones using a convolutional neural network and analyze its performance.
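The two core pieces of such a CNN classifier, convolution and a softmax classification head over the paper's three classes, can be sketched in plain Python. This is a minimal illustration of the mechanism, not the paper's model; the final-layer logits below are hypothetical values standing in for a trained network's output.

```python
import math

CLASSES = ["fighter", "helicopter", "drone"]  # the three classes in the paper

def conv2d(img, kernel):
    """Valid 2-D convolution, the core operation of a CNN feature extractor."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def softmax(logits):
    """Turn final-layer logits into class probabilities."""
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a trained network's final layer.
probs = softmax([2.1, 0.3, -0.5])
print(CLASSES[probs.index(max(probs))])  # the predicted aerial-vehicle class
```

A real implementation would stack many convolution layers with learned kernels before the softmax head, but the decision rule (argmax over class probabilities) is the same.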

Cat Behavior Pattern Analysis and Disease Prediction System of Home CCTV Images using AI (AI를 이용한 홈CCTV 영상의 반려묘 행동 패턴 분석 및 질병 예측 시스템 연구)

  • Han, Su-yeon;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.9
    • /
    • pp.1266-1271
    • /
    • 2022
  • Cats retain strong wild instincts, so they tend to hide diseases well; by the time the guardian notices that the cat has a disease, it may have already worsened. It is of great help in treating a cat's disease if the owner can recognize its polydipsia, polyuria, and frequent urination earlier. In this paper, running on an artificial intelligence device, 1) an efficient version of DeepLabCut is used for pose estimation, 2) YOLO v4 for object detection, 3) an LSTM for behavior prediction, and 4) BoT-SORT for object tracking. Using artificial intelligence technology, the system predicts the cat's polydipsia, polyuria, and urination frequency by analyzing the cat's behavior patterns from home CCTV video and the weight sensor of the water bowl. Through this behavior-pattern analysis, we propose an application that reports disease predictions and abnormal behavior to the guardian, delivering them to the guardian's mobile device and to the server system.
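The water-bowl weight sensor mentioned in the abstract lends itself to a simple rule-based sketch: total the weight drops across a day's readings to estimate intake, and flag possible polydipsia above a per-cat threshold. The 100 ml/kg/day cutoff below is a common veterinary rule of thumb, not a value from the paper, and the readings are hypothetical.

```python
def daily_intake_ml(weights_g):
    """Sum of weight drops (1 g of water ~ 1 ml) across the day's readings.
    Weight increases are refills and are ignored."""
    drops = [a - b for a, b in zip(weights_g, weights_g[1:]) if a > b]
    return sum(drops)

def flag_polydipsia(weights_g, cat_kg, threshold_ml_per_kg=100):
    """Flag intake above a per-kilogram daily threshold (assumed cutoff)."""
    return daily_intake_ml(weights_g) > threshold_ml_per_kg * cat_kg

# Bowl weight sampled every few hours; the jump to 600 g is a refill.
readings = [500, 470, 430, 430, 600, 520, 480]
print(daily_intake_ml(readings), flag_polydipsia(readings, cat_kg=4))
```

The paper's actual system fuses this sensor signal with LSTM-based behavior prediction from CCTV; this sketch isolates only the sensor-side heuristic.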

Cat Behavior Pattern Analysis and Disease Prediction System of Home CCTV Images using AI (AI를 이용한 홈CCTV 영상의 반려묘 행동 패턴 분석 및 질병 예측 시스템 연구)

  • Han, Su-yeon;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.165-167
    • /
    • 2022
  • The proportion of cats among companion animals has been increasing at an average annual rate of 25.4% since 2012. Cats retain stronger wild instincts than dogs, so they tend to hide diseases well; by the time the guardian notices that the cat has a disease, it may have already worsened. Symptoms such as anorexia (eating avoidance), vomiting, diarrhea, polydipsia, and polyuria appear in cat diseases such as diabetes, hyperthyroidism, renal failure, and panleukopenia. It is of great help in treating a cat's disease if the owner can recognize its polydipsia (drinking a lot of water), polyuria (a large amount of urine), and frequent urination earlier. In this paper, running on an artificial intelligence server, 1) an efficient version of DeepLabCut is used for posture prediction, 2) YOLO v4 for object detection, and 3) an LSTM for behavior prediction. Using artificial intelligence technology, the system predicts the cat's polydipsia, polyuria, and urination frequency by analyzing the cat's behavior patterns from home CCTV video and the weight sensor of the water bowl. Through this behavior-pattern analysis, we propose an application that reports disease predictions and abnormal behavior to the guardian, delivering them to the guardian's mobile device and to the main server system.


Detecting Vehicles That Are Illegally Driving on Road Shoulders Using Faster R-CNN (Faster R-CNN을 이용한 갓길 차로 위반 차량 검출)

  • Go, MyungJin;Park, Minju;Yeo, Jiho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.1
    • /
    • pp.105-122
    • /
    • 2022
  • Statistics on fatal crashes on expressways over the last five years show that fatalities on road shoulders were about three times as high as other expressway fatalities. This suggests that crashes on road shoulders are especially likely to be fatal, and that cracking down on vehicles intruding onto the shoulder is important for preventing such crashes. Therefore, this study proposes a method to detect vehicles that violate the shoulder lane using Faster R-CNN. Vehicles were detected with Faster R-CNN, and an additional judgment module was configured to determine whether a shoulder violation occurred. For experiments and evaluation, GTA V, a simulation game that can reproduce situations similar to the real world, was used; 1,800 training images and 800 evaluation images were processed and generated, and performance under varying threshold values was measured for ZFNet and VGG16. As a result, the detection rate of ZFNet was 99.2% at a threshold of 0.8 and that of VGG16 was 93.9% at a threshold of 0.7, and the average detection speeds were 0.0468 seconds for ZFNet and 0.16 seconds for VGG16, so ZFNet's detection rate was about 7% higher and its speed about 3.4 times faster. These results show that even a relatively simple network can detect shoulder-lane violations at high speed without pre-processing the input image, and suggest that the algorithm could also detect violations of other designated lanes if sufficient training datasets based on actual video data are obtained.
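The "additional judgment module" after the detector can be sketched as a geometric test: does the vehicle's ground-contact point fall inside a shoulder-lane region? The polygon coordinates and the bottom-centre heuristic below are assumptions for illustration; the paper does not specify this geometry.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices in image coordinates."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def is_shoulder_violation(bbox, shoulder_poly):
    x1, y1, x2, y2 = bbox            # detector output: box corners
    cx, cy = (x1 + x2) / 2, y2       # bottom centre ~ wheels on the road
    return point_in_polygon(cx, cy, shoulder_poly)

# Hypothetical shoulder-lane polygon for one fixed camera view.
shoulder = [(800, 400), (1000, 400), (1000, 700), (800, 700)]
print(is_shoulder_violation((850, 450, 950, 600), shoulder))
```

Using the box's bottom centre rather than its centroid reduces false positives from tall vehicles whose upper body overlaps the shoulder region in perspective views.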

Histogram-Based Singular Value Decomposition for Object Identification and Tracking (객체 식별 및 추적을 위한 히스토그램 기반 특이값 분해)

  • Ye-yeon Kang;Jeong-Min Park;HoonJoon Kouh;Kyungyong Chung
    • Journal of Internet Computing and Services
    • /
    • v.24 no.5
    • /
    • pp.29-35
    • /
    • 2023
  • CCTV is used for various purposes such as crime prevention, public-safety reinforcement, and traffic management. However, as camera range and resolution improve, there is a risk of exposing personal information in the video, so new technologies are needed that can identify individuals while protecting personal information in images. In this paper, we propose histogram-based singular value decomposition for object identification and tracking. The proposed method distinguishes the objects present in an image using their color information. For object recognition, YOLO and DeepSORT are used to detect and extract the people present in the image. Color values are extracted as a grayscale histogram using the location information of each detected person, and singular value decomposition is applied to retain only the meaningful information among the extracted color values. When applying singular value decomposition, the accuracy of color extraction is increased by averaging the upper singular components of the result. The color information extracted via singular value decomposition is compared with the colors present in other images to detect the same person across different images. Euclidean distance is used for the comparison, and Top-N is used for accuracy evaluation. In the evaluation, detecting the same person using the grayscale histogram and singular value decomposition achieved between 74% and 100% accuracy.
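The histogram-plus-SVD matching step can be sketched with NumPy. This is a simplified sketch, not the paper's implementation: it keeps only the leading singular vector as the person's signature, and random pixel blocks stand in for real YOLO/DeepSORT person crops.

```python
import numpy as np

def gray_histogram(crop, bins=16):
    """Normalised grayscale histogram of one person crop."""
    hist, _ = np.histogram(crop, bins=bins, range=(0, 256))
    return hist / hist.sum()

def svd_signature(hists):
    """Stack per-frame histograms and keep the dominant singular direction."""
    m = np.stack(hists)                       # frames x bins
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    v = vt[0]
    return v if v.sum() >= 0 else -v          # fix SVD's sign ambiguity

rng = np.random.default_rng(0)
person_a = [rng.integers(0, 80, (32, 16)) for _ in range(4)]     # dark clothing
person_b = [rng.integers(150, 256, (32, 16)) for _ in range(4)]  # light clothing
sig_a = svd_signature([gray_histogram(c) for c in person_a])
sig_b = svd_signature([gray_histogram(c) for c in person_b])

# A query crop from another image; nearest signature by Euclidean distance.
query = svd_signature([gray_histogram(rng.integers(0, 80, (32, 16)))])
dists = {name: float(np.linalg.norm(query - sig))
         for name, sig in [("A", sig_a), ("B", sig_b)]}
print(min(dists, key=dists.get))
```

Sorting the distances instead of taking the minimum gives the Top-N ranking used for the paper's accuracy evaluation.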

One-shot multi-speaker text-to-speech using RawNet3 speaker representation (RawNet3를 통해 추출한 화자 특성 기반 원샷 다화자 음성합성 시스템)

  • Sohee Han;Jisub Um;Hoirin Kim
    • Phonetics and Speech Sciences
    • /
    • v.16 no.1
    • /
    • pp.67-76
    • /
    • 2024
  • Recent advances in text-to-speech (TTS) technology have significantly improved the quality of synthesized speech, reaching a level where it closely imitates natural human speech. In particular, TTS models offering various voice characteristics and personalized speech are widely utilized in fields such as artificial intelligence (AI) tutors, advertising, and video dubbing. Accordingly, in this paper, we propose a one-shot multi-speaker TTS system that ensures acoustic diversity and synthesizes personalized voices by generating speech from unseen target speakers' utterances. The proposed model integrates a speaker encoder into a TTS model consisting of the FastSpeech2 acoustic model and the HiFi-GAN vocoder. The speaker encoder, based on the pre-trained RawNet3, extracts speaker-specific voice features. Furthermore, the proposed approach includes not only an English one-shot multi-speaker TTS but also a Korean one. We evaluate the naturalness and speaker similarity of the generated speech using objective and subjective metrics. In the subjective evaluation, the proposed Korean one-shot multi-speaker TTS obtained a naturalness mean opinion score (NMOS) of 3.36 and a similarity MOS (SMOS) of 3.16. The objective evaluation of the proposed English and Korean systems showed a prediction MOS (P-MOS) of 2.54 and 3.74, respectively. These results indicate that the proposed model improves over the baseline models in both naturalness and speaker similarity.
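Speaker similarity between a target utterance and synthesized audio is commonly measured with cosine similarity between speaker embeddings, which is how RawNet3-style embeddings are typically compared. This is a generic sketch of that comparison; the short vectors below are hypothetical stand-ins for real high-dimensional embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two speaker-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: the unseen target speaker vs. one extracted
# from the TTS output conditioned on that speaker.
target = [0.2, 0.9, -0.4]
synthesized = [0.25, 0.85, -0.35]
print(round(cosine_similarity(target, synthesized), 3))
```

A value near 1 means the synthesized voice closely matches the target speaker's characteristics; subjective SMOS ratings like those reported in the paper are the human-judged counterpart of this measure.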

Comparative Analysis of CNN Deep Learning Model Performance Based on Quantification Application for High-Speed Marine Object Classification (고속 해상 객체 분류를 위한 양자화 적용 기반 CNN 딥러닝 모델 성능 비교 분석)

  • Lee, Seong-Ju;Lee, Hyo-Chan;Song, Hyun-Hak;Jeon, Ho-Seok;Im, Tae-ho
    • Journal of Internet Computing and Services
    • /
    • v.22 no.2
    • /
    • pp.59-68
    • /
    • 2021
  • As artificial intelligence (AI) technologies, which have grown rapidly in recent years, began to be applied to marine environments such as ships, research on CNN-based models specialized for digital video has become active. In the e-Navigation service, which combines various technologies to detect floating objects posing collision risks, reduce human error, and prevent fires inside ships, real-time processing is critically important. Adding functions, however, demands high-performance processors, which raises prices and imposes a cost burden on shipowners. This study therefore proposes a method of processing information at a high rate while maintaining accuracy by applying quantization techniques to a deep learning model. First, videos were pre-processed for the detection of floating matter at sea to ensure efficient delivery of video data to the deep learning model's input. Second, quantization, one of the lightweighting techniques for deep learning models, was applied to reduce memory usage and increase processing speed. Finally, the proposed model, with video pre-processing and quantization applied, was deployed on various embedded boards to measure its accuracy and processing speed. The proposed method reduced memory usage by a factor of four and improved processing speed about four to five times while maintaining the original recognition accuracy.
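The fourfold memory reduction follows directly from the mechanics of int8 quantization: 32-bit floats become 8-bit integers plus a shared scale factor. A minimal sketch of symmetric post-training quantization, with made-up weight values (the paper does not publish its weights or quantization scheme in this abstract):

```python
def quantize_int8(weights):
    """Symmetric quantization: map floats to [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]

weights = [0.81, -0.43, 0.07, -1.27, 0.55]   # hypothetical layer weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))  # int8 values and the quantization error
```

Storing `q` as int8 instead of float32 gives the 4x storage saving (32 bits vs. 8 bits per weight), and integer arithmetic on embedded boards accounts for much of the speedup the paper reports.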

Ship Detection Using Background Estimation of Video and AIS Informations (영상의 배경추정기법과 AIS정보를 이용한 선박검출)

  • Kim, Hyun-Tae;Park, Jang-Sik;Yu, Yun-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.12
    • /
    • pp.2636-2641
    • /
    • 2010
  • To support ship-to-ship collision avoidance and sea search-and-rescue work, the ship automatic identification system (AIS), which can both send and receive messages between ships and VTS traffic control, has been adopted, and port control systems can manage vessel traffic services in cooperation with AIS. For a more efficient vessel traffic service, a ship recognition and display system that cooperates with AIS is required. In this paper, we propose a ship detection system, cooperating with AIS, that applies image-processing-based background estimation to sea and harbor images captured by a camera. We experimented with sea and harbor scenes extracted from the camera's real-time input. Computer simulations and real-world tests show that the proposed system is effective for ship monitoring.
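Background estimation of the kind described here is often done with a running average: maintain a slowly updated background image and flag pixels that deviate from it as foreground (a moving ship). This is a generic sketch of that idea, not the paper's algorithm; the tiny 1-D brightness rows and the thresholds are illustrative stand-ins for real camera frames.

```python
def update_background(bg, frame, alpha=0.1):
    """Exponential running average: the background adapts slowly to the scene."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    """Pixels differing from the background by more than threshold are foreground."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

# Empty-sea frames stabilise the background estimate.
frames = [[100, 100, 100, 100]] * 10
bg = frames[0]
for frame in frames[1:]:
    bg = update_background(bg, frame)

ship_frame = [100, 180, 185, 100]   # a bright ship enters mid-frame
print(foreground_mask(bg, ship_frame))
```

In the full system, a foreground blob flagged this way would then be cross-checked against AIS position reports to label the detected ship.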