• Title/Summary/Keyword: Learning Object


Development of I-HTTP for supporting Interactive Learning Object (상호작용적 학습 객체 지원을 위한 I-HTTP 개발)

  • 정영식
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.10
    • /
    • pp.713-722
    • /
    • 2003
  • The purpose of this study was to define an Interactive Learning Object (ILO) through the implementation of learning-object content standardization technology for the reuse of interactive tools between learners, and to develop I(Interactive)-HTTP so that the ILO can properly communicate with the LMS. The I-HTTP developed here maintains connection state for the entire session, improving on the stateless connection property of existing HTTP. This persistent connection makes it possible to provide users with the real-time learner-to-learner interactivity that occurs frequently in an ILO. Also, because I-HTTP is an extension of HTTP, it can serve general HTML documents as well as ILOs. In particular, the standardized launch process between the LMS and the ILO was realized by adding the INIT, GETVAL, SETVAL, COMMIT, and FINISH methods to the protocol, and the results of the interactivity between ILO learners were channeled to database storage through separately defined data models.

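
The launch handshake the abstract describes — INIT/GETVAL/SETVAL/COMMIT/FINISH-style methods exchanged over one persistent connection — can be sketched as a per-session dispatcher. This is a toy illustration: the method semantics beyond their names are assumptions, not taken from the paper.

```python
class ILOSession:
    """Holds per-learner state for the lifetime of one persistent
    I-HTTP-style connection (hypothetical semantics)."""

    def __init__(self):
        self.values = {}      # learner interaction results (key/value)
        self.committed = []   # snapshots pushed toward the LMS data store

    def handle(self, method, key=None, value=None):
        """Dispatch one extension-method request on the open session."""
        if method == "INIT":          # start of the launch process
            self.values.clear()
            return "OK"
        if method == "SETVAL":        # store an interaction result
            self.values[key] = value
            return "OK"
        if method == "GETVAL":        # read a stored value back
            return self.values.get(key, "")
        if method == "COMMIT":        # persist current state
            self.committed.append(dict(self.values))
            return "OK"
        if method == "FINISH":        # end of session
            return "BYE"
        return "ERROR: unknown method"
```

Because the connection never closes between calls, the session object can live for the whole exchange instead of being reconstructed per request, which is the point of extending stateless HTTP here.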

Fingertip Detection through Atrous Convolution and Grad-CAM (Atrous Convolution과 Grad-CAM을 통한 손 끝 탐지)

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.5
    • /
    • pp.11-20
    • /
    • 2019
  • With the development of deep learning technology, research on user-friendly interfaces suitable for virtual reality and augmented reality applications is being actively carried out. To support an interface using the user's hands, this paper proposes a deep learning-based fingertip detection method that enables tracking of fingertip coordinates to select virtual objects or to write or draw in the air. The approximate region of the fingertip is first cropped from the input image using Grad-CAM, and a convolutional neural network with atrous convolution is then applied to the cropped image to detect the fingertip location. This method is simpler and easier to implement than existing object detection algorithms and requires no preprocessing step to annotate objects. To verify the method, we implemented an air-writing application: with a recognition rate of 81% and a processing speed of 76 ms, users could write smoothly in the air without delay, making real-time use of the application possible.
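
Atrous (dilated) convolution enlarges a filter's receptive field by spacing its taps `rate` pixels apart, without adding parameters. A minimal single-channel NumPy sketch of the operation itself — not the paper's network:

```python
import numpy as np

def atrous_conv2d(image, kernel, rate):
    """Single-channel 2-D atrous (dilated) convolution, valid padding,
    stride 1. rate=1 reduces to an ordinary convolution."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * rate + 1  # effective kernel extent on the image
    eff_w = (kw - 1) * rate + 1
    h, w = image.shape
    out = np.zeros((h - eff_h + 1, w - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the image every `rate` pixels under the kernel
            patch = image[i:i + eff_h:rate, j:j + eff_w:rate]
            out[i, j] = float(np.sum(patch * kernel))
    return out
```

A 2x2 kernel with rate 2 covers a 3x3 area of the image while still using only four weights, which is why dilation is attractive when a larger context is needed at low cost.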

Development of Checking System for Emergency using Behavior-based Object Detection (행동기반 사물 감지를 통한 위급상황 확인 시스템 개발)

  • Kim, MinJe;Koh, KyuHan;Jo, JaeChoon
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.6
    • /
    • pp.140-146
    • /
    • 2020
  • Since current crime-prevention systems rely on a standard mechanism in which victims request help themselves or ask a third party nearby, it is difficult to obtain appropriate help in situations where a prompt response is not possible. In this study, we proposed and developed an automatic rescue-request model and system using deep learning and OpenCV. The study is based on the premise that immediate and precise threat detection is essential to ensure the user's safety. We validated that the system identified objects with more than 99% accuracy, and that running all necessary algorithms took only three seconds. We plan to collect various types of threats and a large amount of data to reinforce the system's capabilities so that it can recognize and deal with all dangerous situations, including various threats and unpredictable cases.

Method of extracting context from media data by using video sharing site

  • Kondoh, Satoshi;Ogawa, Takeshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.709-713
    • /
    • 2009
  • Recently, much research applying data acquired from devices such as cameras and RFID tags to context-aware services has been carried out in the fields of life-logging and sensor networks. A variety of analytical techniques have been proposed to recognize information from the raw data, because video and audio data carry a larger volume of information than other sensor data. However, since these techniques generally use supervised learning, manually re-watching a huge amount of media data has been necessary to create supervised data whenever a class is updated or a new class is added. The problem, therefore, was that applications could in most cases use only recognition functions based on fixed supervised data. We propose a method of acquiring supervised data from a video sharing site where users comment on video scenes; such sites are remarkably popular, so many comments are generated. In the first step of this method, words with a high utility value are extracted by filtering the comments on a video. Second, a set of time-series feature data is calculated by applying feature-extraction functions to the media data. Finally, our learning system calculates the correlation coefficient between these two kinds of data and stores it in the system's database. Applications can then obtain a recognition function that generates collective intelligence from Web comments by applying this correlation coefficient to new media data. In addition, flexible recognition that adjusts to new objects becomes possible by regularly acquiring and learning both media data and comments from the video sharing site, while reducing manual work. As a result, recognition of not only the name of a seen object but also indirect information, e.g. the impression of or action toward the object, was enabled.

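
At its core, the learning step described above reduces to a Pearson correlation between each comment-word time series and each media-feature time series. A toy sketch with hypothetical per-second data (the arrays are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical toy data: per-second occurrence counts of one comment
# word, and one media-feature value extracted for the same seconds.
word_counts  = np.array([0, 1, 3, 4, 2, 0, 0, 1], dtype=float)
feature_vals = np.array([0.1, 0.4, 0.9, 1.0, 0.6, 0.2, 0.1, 0.3])

# Pearson correlation coefficient between the two time series; a high
# value suggests the feature tracks the concept users commented on,
# so the pair (word, feature) is worth storing in the system's DB.
r = np.corrcoef(word_counts, feature_vals)[0, 1]
```

In the full system this coefficient would be computed for every extracted word against every feature function, and the stored coefficients applied to new media data to produce recognition labels.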

Design and Implementation of MDA-based Teaching and Learning Support System (MDA기반 교수-학습지원 시스템 설계 및 구현)

  • Kim, Haeng-Kon
    • The KIPS Transactions:PartD
    • /
    • v.13D no.7 s.110
    • /
    • pp.931-938
    • /
    • 2006
  • It is important to operate education resources that can be integrated into a system, but most existing education information systems were not developed with standardization in mind. Core education assets and reusable education services are needed to build a good education system. Consequently, Sharable Content Object Reference Model (SCORM)-based content management is needed in order to reuse educational content, and an assembly and production method based on the reusable core assets of an education system is needed to develop educational application programs. In this thesis, we study a teaching-learning support system that manages systematic education resources. The system is developed by assessing the educational domain through a Model Driven Architecture (MDA)-based development process, with core assets produced at each stage. The educational application program is developed using an MDA automation tool, through analysis and design of a content repository based on a content meta-model. By raising the reusability of education content and applying core assets throughout the development process, educational application software can be developed with low cost and high productivity.

Lightweight Convolution Module based Detection Model for Small Embedded Devices (소형 임베디드 장치를 위한 경량 컨볼루션 모듈 기반의 검출 모델)

  • Park, Chan-Soo;Lee, Sang-Hun;Han, Hyun-Ho
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.9
    • /
    • pp.28-34
    • /
    • 2021
  • Object detection using deep learning requires both accuracy and real-time performance, but it is difficult to run a deep learning model that processes a large amount of data in a resource-limited environment. To solve this problem, this paper proposes an object detection model for small embedded devices. Unlike general detection models, the model size is minimized by using a structure without a pre-trained feature extractor; the model is designed by repeatedly stacking lightweight convolution blocks. In addition, the number of region proposals is greatly reduced to cut detection overhead. The proposed model was trained and evaluated on the public PASCAL VOC dataset. For quantitative evaluation, detection performance was measured with average precision, as is standard in the detection field, and detection speed was measured on a Raspberry Pi, similar to an actual embedded device. The experiments showed improved accuracy and faster inference speed compared to the existing detection method.
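
The abstract does not give the exact block design, but depthwise-separable convolutions are the standard way lightweight detectors shrink their convolution blocks; a quick parameter-count comparison shows why such blocks suit embedded devices:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Weight count of a depthwise-separable block: a k x k depthwise
    pass (one filter per input channel) followed by a 1 x 1 pointwise
    pass that mixes channels. A common lightweight substitute."""
    return c_in * k * k + c_in * c_out

# Example: a 128 -> 128 channel layer with 3 x 3 kernels.
standard = conv_params(128, 128, 3)          # 147,456 weights
light = dw_separable_params(128, 128, 3)     # 17,536 weights
ratio = standard / light                     # roughly 8.4x smaller
```

The roughly 8x reduction per block, compounded over a stacked backbone, is what makes this family of designs feasible on hardware like a Raspberry Pi.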

Microscopic Traffic Parameters Estimation from UAV Video Using Multiple Object Tracking of Deep Learning-based (다중객체추적 알고리즘을 활용한 드론 항공영상 기반 미시적 교통데이터 추출)

  • Jung, Bokyung;Seo, Sunghyuk;Park, Boogi;Bae, Sanghoon
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.20 no.5
    • /
    • pp.83-99
    • /
    • 2021
  • With the advent of the fourth industrial revolution, studies on driving management and driving strategies for autonomous vehicles are emerging. While obtaining microscopic traffic data on individual vehicles is essential for such research, conventional traffic data collection methods cannot capture the driving behavior of individual vehicles. In this study, UAV videos were used to collect microscopic traffic data from an aerial viewpoint. To overcome the limitations of the related literature, the microscopic traffic data were estimated using deep learning-based multiple object tracking and an image registration technique. As a result, the estimated speeds showed error rates of MAE 3.49 km/h, RMSE 4.43 km/h, and MAPE 5.18%, and the traffic counts achieved a precision of 98.07% and a recall of 97.86%.
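
Once the tracker yields per-frame vehicle centroids, speed follows from pixel displacement, the metres-per-pixel ground scale, and the frame rate. A sketch under assumed values (the paper derives its scale via image registration; the numbers here are illustrative):

```python
import math

def track_speed_kmh(track, gsd_m, fps):
    """Mean speed (km/h) of one tracked vehicle.

    track: list of (x, y) pixel centroids, one per frame.
    gsd_m: ground sampling distance in metres per pixel (assumed).
    fps:   video frame rate.
    """
    # sum the ground distance covered between consecutive frames
    dist_m = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist_m += math.hypot(x1 - x0, y1 - y0) * gsd_m
    duration_s = (len(track) - 1) / fps
    return dist_m / duration_s * 3.6  # m/s -> km/h

# A vehicle moving 5 px per frame at 0.1 m/px and 30 fps:
speed = track_speed_kmh([(0, 0), (5, 0), (10, 0)], gsd_m=0.1, fps=30)
```

Errors in the assumed scale or in the tracked centroids propagate directly into the speed estimate, which is why the reported MAE/RMSE depend on both the tracker and the registration step.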

Object Detection and Optical Character Recognition for Mobile-based Air Writing (모바일 기반 Air Writing을 위한 객체 탐지 및 광학 문자 인식 방법)

  • Kim, Tae-Il;Ko, Young-Jin;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.15 no.5
    • /
    • pp.53-63
    • /
    • 2019
  • To provide a hand-gesture interface through deep learning in mobile environments, research on light-weight networks is essential to achieve high recognition rates without degrading execution speed. This paper proposes a method for real-time recognition of characters written in the air with a finger on mobile devices, based on a light-weight deep learning model. Using SSD (Single Shot Detector), an object detection model with MobileNet as its feature extractor, the method detects the index finger and generates a resulting text image by following the fingertip path. The image is then sent to a server, where the characters are recognized by a trained OCR model. To verify the method, 12 users tested 1,000 words on a GALAXY S10+; fingers were recognized with an average accuracy of 88.6%, and the recognized text was produced within 124 ms, allowing real-time use. The results of this research can be used to send simple text messages and memos and to make air signatures with a finger in mobile environments.

Analysis of Floating Population in Schools Using Open Source Hardware and Deep Learning-Based Object Detection Algorithm (오픈소스 하드웨어와 딥러닝 기반 객체 탐지 알고리즘을 활용한 교내 유동인구 분석)

  • Kim, Bo-Ram;Im, Yun-Gyo;Shin, Sil;Lee, Jin-Hyeok;Chu, Sung-Won;Kim, Na-Kyeong;Park, Mi-So;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.1
    • /
    • pp.91-98
    • /
    • 2022
  • In this study, a survey and analysis of Pukyong National University's floating population was conducted using the Raspberry Pi, an open-source hardware platform, and deep learning-based object detection algorithms. After collecting images with the Raspberry Pi, person detection on the collected images was performed using ImageAI's YOLOv3 and the YOLOv5 model, with Haar-like feature and HOG models used for comparative accuracy analysis. The analysis showed that the smallest floating population occurred on the school's anniversary. In general, the floating population at the entrance was larger than at the exit, and both the entrance and the exit were found to be strongly affected by the school's anniversary and events.

Impact Analysis of Deep Learning Super-resolution Technology for Improving the Accuracy of Ship Detection Based on Optical Satellite Imagery (광학 위성 영상 기반 선박탐지의 정확도 개선을 위한 딥러닝 초해상화 기술의 영향 분석)

  • Park, Seongwook;Kim, Yeongho;Kim, Minsik
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_1
    • /
    • pp.559-570
    • /
    • 2022
  • When a satellite image has low spatial resolution, it is difficult to detect small objects. In this research, we examine the effect of super-resolution, a software method that increases the resolution of an image, on object detection. An unpaired super-resolution network was used to improve Sentinel-2's spatial resolution from 10 m to 3.2 m. Faster R-CNN, RetinaNet, FCOS, and S2ANet were used to detect vessels in the Sentinel-2 images, and we measured the change in vessel detection performance when super-resolution was applied. As a result, the Average Precision (AP) improved by at least 12.3% and up to 33.3% for ship detection models trained on the super-resolved images, and false positives and false negatives also decreased. This implies that super-resolution can be an important pre-processing step in object detection and is expected to contribute greatly to improving the accuracy of other image-based deep learning technologies.
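
Average Precision, the metric reported in several of the detection abstracts above, summarizes a detector's precision-recall sweep in one number. A sketch of the classic 11-point interpolated variant (one common definition; papers differ on the exact interpolation):

```python
def average_precision_11pt(recalls, precisions):
    """11-point interpolated AP.

    recalls, precisions: matched lists produced by sweeping the
    detector's confidence threshold over a ranked detection list.
    """
    ap = 0.0
    for t in [i / 10 for i in range(11)]:  # t = 0.0, 0.1, ..., 1.0
        # interpolated precision: best precision at any recall >= t
        p = max((p for r, p in zip(recalls, precisions) if r >= t),
                default=0.0)
        ap += p / 11
    return ap
```

A detector that keeps precision 1.0 all the way to recall 1.0 scores AP = 1.0; one that collapses past recall 0.5 is penalized at the higher recall points, which is why AP rewards both finding objects and not hallucinating them.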