• Title/Summary/Keyword: Teachable Machine

Developing a motion recognition learning game using Teachable Machine (Teachable Machine을 활용한 모션 인식 러닝 게임 개발)

  • Ju-Han Hwang;Sung Jin Kim;Young Hyun Yoon;Jai Soon Baek
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.277-278 / 2023
  • This paper aims to develop Dino Run Game, a motion-recognition running action game, using Teachable Machine, a machine-learning training tool. The basic game framework is implemented with JavaScript, HTML, and CSS, and the image recognition model of Google's Teachable Machine is used to recognize the user's hand through a webcam. The recognized hand image is then used to control the game character, so the game can be played without a keyboard, as sketched below.
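
The abstract outlines a simple control flow: a Teachable Machine image model watches the webcam and its top prediction triggers the character's action. Below is a minimal sketch of that loop using the @teachablemachine/image browser library; the model URL, the class names "Jump"/"Idle", and the jumpDino() helper are illustrative assumptions, not the authors' code.

```javascript
// Sketch only: assumes the TensorFlow.js and @teachablemachine/image scripts
// are already loaded, and a model exported with "Jump" and "Idle" classes.
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/XXXX/"; // hypothetical model URL

let model, webcam;

async function init() {
  model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");
  webcam = new tmImage.Webcam(200, 200, true); // width, height, flip
  await webcam.setup();
  await webcam.play();
  window.requestAnimationFrame(loop);
}

async function loop() {
  webcam.update();
  const predictions = await model.predict(webcam.canvas);
  const top = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
  if (top.className === "Jump" && top.probability > 0.9) {
    jumpDino(); // hypothetical game function that makes the character jump
  }
  window.requestAnimationFrame(loop);
}

init();
```

In a real page the two script tags for TensorFlow.js and @teachablemachine/image would be loaded before this script runs, and jumpDino() would hook into the game's physics.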

Feasibility Study of Google's Teachable Machine in Diagnosis of Tooth-Marked Tongue

  • Jeong, Hyunja
    • Journal of dental hygiene science / v.20 no.4 / pp.206-212 / 2020
  • Background: Teachable Machine is a web-based machine learning tool intended for non-experts. In this paper, the feasibility of Google's Teachable Machine (ver. 2.0) was studied for diagnosing the tooth-marked tongue. Methods: For machine learning of tooth-marked tongue diagnosis, a total of 1,250 tongue images from Kaggle's website were used. Ninety percent of the images were used for the training data set, and the remaining 10% were used for the test data set. Machine learning was performed on these image sets using Google's Teachable Machine (ver. 2.0). To optimize the machine learning parameters, I measured the diagnostic accuracy for different values of epoch, batch size, and learning rate. After hyper-parameter tuning, ROC (receiver operating characteristic) analysis was used to determine the sensitivity (true positive rate, TPR) and false positive rate (FPR) of the machine learning model in diagnosing the tooth-marked tongue. Results: To evaluate the usefulness of the Teachable Machine in clinical application, I used 634 tooth-marked tongue images and 491 no-marked tongue images for machine learning. Diagnostic accuracy was best when the epoch, batch size, and learning rate were 75, 128, and 0.0001, respectively. The accuracies for the tooth-marked tongue and the no-marked tongue were 92.1% and 72.6%, respectively, and the sensitivity (TPR) and false positive rate (FPR) were 0.92 and 0.28, respectively. Conclusion: These results are more accurate than Li's experimental results obtained with a convolutional neural network. Google's Teachable Machine shows good performance with hyper-parameter tuning in the diagnosis of the tooth-marked tongue. We confirmed that the tool is useful for several clinical applications.
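
For readers unfamiliar with the terminology, the reported rates follow directly from the per-class accuracies in the abstract above; the small sketch below (illustrative only, since the abstract gives no raw confusion-matrix counts) shows the relation sensitivity = TP/(TP+FN) and FPR = FP/(FP+TN) = 1 - specificity.

```javascript
// Illustrative check: relate the abstract's per-class accuracies to TPR and FPR.
const toothMarkedAccuracy = 0.921; // fraction of tooth-marked images classified correctly
const noMarkedAccuracy = 0.726;    // fraction of no-marked images classified correctly

const sensitivity = toothMarkedAccuracy;        // TPR = TP / (TP + FN)
const falsePositiveRate = 1 - noMarkedAccuracy; // FPR = FP / (FP + TN) = 1 - specificity

console.log(sensitivity.toFixed(2), falsePositiveRate.toFixed(2));
// ≈ 0.92 and 0.27, consistent with the 0.92 / 0.28 reported after rounding
```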

The Effect of AI Experience Program Using Teachable Machine on AI Perception of Elementary School Students (Teachable machine을 활용한 인공지능 체험 프로그램이 초등학생의 인공지능 인식에 미치는 영향)

  • Lee, Seung-mee;Chun, Seok-Ju
    • Journal of The Korean Association of Information Education / v.25 no.4 / pp.611-619 / 2021
  • Artificial intelligence is at the heart of the Fourth Industrial Revolution, and education must change to develop the capabilities needed for future AI-based societies. This study developed artificial intelligence experience classes using Teachable Machine, applied them to elementary school students, and analyzed changes in the students' understanding of and interest in artificial intelligence. Of the 10 artificial intelligence classes, 4 used various artificial intelligence education platforms and 6 focused on Teachable Machine. Students' interest in and understanding of artificial intelligence were examined before and after the program, using both quantitative and qualitative methods. The results show that both interest and understanding improved after the program. Based on these findings, we propose a follow-up study for the development of artificial intelligence training programs.

The Development of Interactive Artificial Intelligence Blocks for Image Classification (이미지 분류를 위한 대화형 인공지능 블록 개발)

  • Park, Youngki;Shin, Youhyun
    • Journal of The Korean Association of Information Education / v.25 no.6 / pp.1015-1024 / 2021
  • There are various educational programming environments in which students can train artificial intelligence (AI) using block-based programming languages, such as Entry, Machine Learning for Kids, and Teachable Machine. However, these programming environments are designed so that students can train AI through a separate menu, and then use the trained model in the code editor. These approaches have the advantage that students can check the training process more intuitively, but there is also the disadvantage that both the training menu and the code editor must be used. In this paper, we present a novel artificial intelligence block that can perform both AI training and programming in the code editor. While this AI block is presented as a Scratch block, the training process is performed through a Python server. We describe the blocks in detail through the process of training a model to classify a blue pen and a red pen, and a model to classify a dental mask and a KF94 mask. Also, we experimentally show that our approach is not significantly different from Teachable Machine in terms of performance.
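
A rough sketch of how a code-editor block could talk to a separate training server, as the abstract describes; the localhost URL, the /train and /classify endpoints, and the payload shapes are hypothetical and not taken from the paper.

```javascript
// Hypothetical client-side calls an in-editor AI block could make.
// Endpoint names and payload shapes are illustrative only.
async function addExample(label, imageDataUrl) {
  await fetch("http://localhost:5000/train", { // hypothetical Python training server
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ label, image: imageDataUrl }),
  });
}

async function classify(imageDataUrl) {
  const res = await fetch("http://localhost:5000/classify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: imageDataUrl }),
  });
  return (await res.json()).label; // e.g. "blue pen" or "red pen"
}
```

The design point is simply that both calls live in the same editor, so a student never has to leave the code view to train and then use the model.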

AI-Based Cataract Detection Platform Development (인공지능 기반의 백내장 검출 플랫폼 개발)

  • Park, Doyoung;Kim, Baek-Ki
    • Journal of Platform Technology / v.10 no.1 / pp.20-28 / 2022
  • Artificial intelligence-based verification of health data has become essential, not only to support clinical research but also to develop new treatments. Since the US Food and Drug Administration (FDA) approved the marketing of a medical device that uses artificial intelligence to detect mild abnormal diabetic retinopathy in adult diabetic patients, diagnostic tests using artificial intelligence have been increasing. In this study, an image-classification-based artificial intelligence model was created with Google's Teachable Machine, and a predictive model was completed through training. This not only facilitates the early detection of cataracts, an eye disease common among patients with chronic diseases, but also serves as basic research for developing a personal digital healthcare app for eye disease prevention and eye health.

Implementation of a Classification System for Dog Behaviors using YOLO-based Object Detection and a Node.js Server (YOLO 기반 개체 검출과 Node.js 서버를 이용한 반려견 행동 분류 시스템 구현)

  • Jo, Yong-Hwa;Lee, Hyuek-Jae;Kim, Young-Hun
    • Journal of the Institute of Convergence Signal Processing / v.21 no.1 / pp.29-37 / 2020
  • This paper implements a method of extracting dog objects from real-time video and classifying dog behaviors from the extracted images. Darknet YOLO was used to detect the dog objects, and Google's Teachable Machine was used to classify behavior patterns from the extracted images. The trained Teachable Machine model is saved in Google Drive and used through ml5.js running on a Node.js server. By implementing an interactive web server with the socket.io module on the Node.js server, the classification results are transmitted to the user's smartphone or PC in real time so that they can be checked anytime, anywhere (a minimal sketch of this delivery step follows).
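
A minimal Node.js sketch of the real-time delivery step described above, using the socket.io module; the "behavior" event name, the result shape, and the broadcastBehavior() helper are assumptions for illustration, not the paper's implementation.

```javascript
// Minimal Node.js/socket.io sketch of pushing classification results to clients.
const http = require("http");
const { Server } = require("socket.io");

const server = http.createServer();
const io = new Server(server, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  console.log("client connected:", socket.id);
});

// Called whenever the ml5.js / Teachable Machine classifier produces a result.
function broadcastBehavior(label, confidence) {
  io.emit("behavior", { label, confidence, at: Date.now() });
}

server.listen(3000, () => console.log("listening on :3000"));

// Example: broadcastBehavior("sitting", 0.97);
```

A browser or smartphone client would simply listen for the "behavior" event and update its display.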

Design and Implementation of Facial Mask Wearing Monitoring System based on Open Source (오픈소스 기반 안면마스크 착용 모니터링 시스템 설계 및 구현)

  • Ku, Dong-Jin;Jang, Joon-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.4 / pp.89-96 / 2021
  • The number of confirmed COVID-19 cases is soaring around the world and the disease has caused numerous deaths, so wearing a mask is very important to prevent infection. Incidents and accidents arising from mask-wearing rules in public places such as buses and subways have emerged as a serious social problem. To address this problem, this paper proposes an open-source-based face mask wearing monitoring system built from the open-source, web-based artificial intelligence tool Teachable Machine and the open-source hardware Arduino. The system judges whether a mask is being worn and issues commands such as guidance messages and alarms. The Teachable Machine model was trained with the optimal values of 50 epochs, a batch size of 32, and a learning rate of 0.001, resulting in an accuracy of 1.0 and a training loss of 0.003. We designed and implemented the mask wearing monitoring system with Teachable Machine and Arduino to prove its validity (one way of wiring the two together is sketched below).
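
The abstract does not state how the browser-side decision reaches the Arduino. One plausible wiring, sketched below purely as an assumption, uses the Web Serial API available in Chromium-based browsers, with an assumed "NoMask" class name and a single-byte protocol that the Arduino sketch would have to match.

```javascript
// Sketch: forward the Teachable Machine "no mask" decision to an Arduino
// over the Web Serial API (Chromium browsers). Class names are assumptions.
let writer;

async function connectArduino() {
  const port = await navigator.serial.requestPort(); // user picks the Arduino port
  await port.open({ baudRate: 9600 });
  writer = port.writable.getWriter();
}

async function handlePrediction(predictions) {
  const top = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
  // Send '1' when no mask is detected so the Arduino can trigger the alarm, else '0'.
  const byte = top.className === "NoMask" ? "1" : "0";
  await writer.write(new TextEncoder().encode(byte));
}
```

On the Arduino side, a few lines reading Serial and driving a buzzer or display would complete the loop.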

Deep Learning Frameworks for Cervical Mobilization Based on Website Images

  • Choi, Wansuk;Heo, Seoyoon
    • Journal of International Academy of Physical Therapy Research / v.12 no.1 / pp.2261-2266 / 2021
  • Background: Deep learning research on website medical images has been actively conducted in the field of health care; however, articles related to the musculoskeletal system remain scarce, and deep learning-based studies on classifying orthopedic manual therapy images are only just beginning. Objectives: To create a deep learning model that categorizes cervical mobilization images and to build a web application to assess its clinical utility. Design: Research and development. Methods: Three types of cervical mobilization images (central posteroanterior (CPA) mobilization, unilateral posteroanterior (UPA) mobilization, and anteroposterior (AP) mobilization) were obtained using the 'Download All Images' function and a web crawler. Unnecessary images were filtered out with 'Auslogics Duplicate File Finder' to obtain a final set of 144 images (CPA=62, UPA=46, AP=36). A three-class model was trained in Teachable Machine. The trained model was then uploaded to a cloud integrated development environment (https://ide.goorm.io/) and the web application frame was built. The trained model was tested in three environments: Teachable Machine File Upload (TMFU), Teachable Machine Webcam (TMW), and Web Service Webcam (WSW). Results: Across the three environments (TMFU, TMW, WSW), the accuracy for CPA mobilization images was 81-96%. The accuracy for UPA mobilization images was 43-94%, with a larger deviation than CPA. The accuracy for AP mobilization images was 65-75%, with a smaller deviation than the other groups. Over the three environments, the average accuracy for CPA was 92%, and the accuracies for UPA and AP were similar at around 70%. Conclusion: This study suggests that training images of orthopedic manual therapy with open machine learning software is possible, and that web applications built from such a model can be used clinically.
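
A hedged sketch of the file-upload test path (the TMFU-style environment mentioned above): an exported Teachable Machine model classifies an image chosen with a file input. The model URL and element ID are illustrative, not from the study.

```javascript
// Sketch: classify an uploaded image with an exported Teachable Machine model.
// Assumes the @teachablemachine/image library is loaded on the page.
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/XXXX/"; // hypothetical

let model;

async function loadModel() {
  model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");
}

document.getElementById("file-input").addEventListener("change", async (event) => {
  const img = document.createElement("img");
  img.src = URL.createObjectURL(event.target.files[0]);
  await img.decode(); // wait until the image is ready for prediction
  const predictions = await model.predict(img); // e.g. CPA / UPA / AP probabilities
  const top = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
  console.log(top.className, top.probability.toFixed(2));
});

loadModel();
```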

CNN-LSTM-based Upper Extremity Rehabilitation Exercise Real-time Monitoring System (CNN-LSTM 기반의 상지 재활운동 실시간 모니터링 시스템)

  • Jae-Jung Kim;Jung-Hyun Kim;Sol Lee;Ji-Yun Seo;Do-Un Jeong
    • Journal of the Institute of Convergence Signal Processing / v.24 no.3 / pp.134-139 / 2023
  • After surgical treatment, rehabilitation patients receive outpatient treatment and perform daily rehabilitation exercises to recover physical function, with the aim of quickly returning to society. Unlike exercising in a hospital with the help of a professional therapist, performing rehabilitation exercises alone on a daily basis presents many difficulties. In this paper, we propose a CNN-LSTM-based real-time monitoring system for upper limb rehabilitation so that patients can exercise efficiently and with correct posture in daily life. The proposed system measures biological signals through shoulder-mounted hardware equipped with EMG and IMU sensors, performs preprocessing and normalization, and uses the data as a training dataset. The implemented model consists of three convolution blocks, each followed by a pooling layer, for feature extraction, and two LSTM layers for classification (a rough sketch of this architecture follows); it achieved 97.44% accuracy on the validation data. We then conducted a comparative evaluation against a Teachable Machine model: the proposed model reached 93.6% and the Teachable Machine 94.4%, showing similar classification performance.
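
A rough TensorFlow.js sketch of the kind of architecture described above (three convolution-pooling blocks for feature extraction followed by two LSTM layers for classification); the window length, channel count, filter counts, and number of classes are illustrative assumptions, not the paper's values.

```javascript
// Sketch of a CNN-LSTM classifier for windowed EMG/IMU signals (TensorFlow.js).
// Shapes and hyper-parameters below are assumptions for illustration only.
const tf = require("@tensorflow/tfjs");

const TIMESTEPS = 128;  // samples per window (assumed)
const CHANNELS = 7;     // e.g. 1 EMG + 6 IMU channels (assumed)
const NUM_CLASSES = 4;  // number of exercise postures (assumed)

const model = tf.sequential();
// Three convolution + pooling blocks for feature extraction.
model.add(tf.layers.conv1d({ inputShape: [TIMESTEPS, CHANNELS], filters: 32, kernelSize: 3, activation: "relu" }));
model.add(tf.layers.maxPooling1d({ poolSize: 2 }));
model.add(tf.layers.conv1d({ filters: 64, kernelSize: 3, activation: "relu" }));
model.add(tf.layers.maxPooling1d({ poolSize: 2 }));
model.add(tf.layers.conv1d({ filters: 64, kernelSize: 3, activation: "relu" }));
model.add(tf.layers.maxPooling1d({ poolSize: 2 }));
// Two LSTM layers for classification over the extracted feature sequence.
model.add(tf.layers.lstm({ units: 64, returnSequences: true }));
model.add(tf.layers.lstm({ units: 32 }));
model.add(tf.layers.dense({ units: NUM_CLASSES, activation: "softmax" }));

model.compile({ optimizer: "adam", loss: "categoricalCrossentropy", metrics: ["accuracy"] });
model.summary();
```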

Cat Recognition Application based on Machine Learning Techniques (머신러닝 기술을 이용한 고양이 인식 애플리케이션)

  • Hee-Young Yoon;Soo-Hyun Moon;Seong-Yong Ohm
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.663-668 / 2023
  • This paper describes a mobile application that recognizes and identifies cats living on a university campus using Google's machine learning platform, Teachable Machine. Machine learning, one of the core technologies of the Fourth Industrial Revolution, efficiently finds optimal results by learning from data. A model is therefore trained and generated on the machine-learning platform and then deployed as a smartphone application, so that cats can be identified simply and efficiently. In this application, when the user takes a picture of a cat on the spot or loads one from the gallery, the cat is identified and information about it is provided. Although this system was developed for a specific university campus, it is expected that it can be extended to other campuses and to other species of animals.