• Title/Summary/Keyword: Embedded learning

A Learning AI Algorithm for Poker with Embedded Opponent Modeling

  • Kim, Seong-Gon; Kim, Yong-Gi
    • International Journal of Fuzzy Logic and Intelligent Systems / v.10 no.3 / pp.170-177 / 2010
  • Poker is a game of imperfect information in which competing players must cope with multiple risk factors stemming from unknown information while making the best decision to win, which makes it an interesting test-bed for artificial intelligence research. This paper introduces a new learning AI algorithm with embedded opponent modeling that can be used in such situations and applies it to a poker program. The AI is based on several graphs whose nodes represent inputs; the algorithm learns the optimal decision by updating the weights of the edges connecting these nodes and returning a probability for each action the graphs represent.
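The abstract above does not spell out the exact edge-weight update, so the following is only a minimal sketch of the general idea: input nodes are connected to action nodes by weighted edges, edge weights are nudged by the observed reward, and a softmax over the summed weights yields a probability for each action. The feature names, action names, and update rule are hypothetical.

```python
# Illustrative sketch only: the abstract does not give the exact update rule,
# so this uses a simple reward-weighted update and a softmax over edge weights.
import math
import random

class ActionGraph:
    def __init__(self, inputs, actions, lr=0.1):
        self.actions = actions
        self.lr = lr
        # One weighted edge per (input node, action node) pair.
        self.w = {(i, a): 0.0 for i in inputs for a in actions}

    def action_probabilities(self, active_inputs):
        # Sum edge weights from the active input nodes into each action node,
        # then normalize with a softmax to get a probability per action.
        scores = {a: sum(self.w[(i, a)] for i in active_inputs) for a in self.actions}
        z = sum(math.exp(s) for s in scores.values())
        return {a: math.exp(s) / z for a, s in scores.items()}

    def update(self, active_inputs, action, reward):
        # Strengthen or weaken the edges that contributed to the chosen action.
        for i in active_inputs:
            self.w[(i, action)] += self.lr * reward

# Hypothetical usage with made-up poker features and actions.
graph = ActionGraph(inputs=["pair_in_hand", "opponent_raised"],
                    actions=["fold", "call", "raise"])
probs = graph.action_probabilities(["pair_in_hand"])
choice = random.choices(list(probs), weights=probs.values())[0]
graph.update(["pair_in_hand"], choice, reward=1.0)  # e.g., the hand was won
```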

Deep Learning Based TSV Hole TCD Measurement (딥러닝 기반의 TSV Hole TCD 계측 방법)

  • Jeong, Jun Hee; Gu, Chang Mo; Cho, Joong Hwee
    • Journal of the Semiconductor & Display Technology / v.20 no.2 / pp.103-108 / 2021
  • The TCD is used as one of the indicators for determining whether a TSV hole is defective. If the TCD is not of normal size, it can lead to contamination of the CMP equipment or failure to connect the upper and lower chips. We propose a deep learning model for measuring the TCD. To verify the performance of the proposed model, we compared its predictions for 2461 via holes with CD-SEM measurement data and with the predictions of the existing model. Although the number of trainable parameters in the proposed model was about one two-thousandth of that of the existing model, the results were comparable. The experiment showed that the correlation between CD-SEM and the predictions of the proposed model was 98%, the mean absolute difference was 0.051 um, the standard deviation of the absolute difference was 0.045 um, and the maximum absolute difference was 0.299 um on average.
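As a small illustration of the agreement metrics quoted above (correlation with CD-SEM and the mean, standard deviation, and maximum of the absolute difference), the sketch below computes them for placeholder arrays; the values are not the paper's data.

```python
# Placeholder arrays standing in for CD-SEM references and model predictions (um).
import numpy as np

cd_sem = np.array([4.98, 5.02, 5.10, 4.95])      # reference TCD measurements, hypothetical
predicted = np.array([5.00, 5.01, 5.05, 4.99])   # deep learning predictions, hypothetical

abs_diff = np.abs(predicted - cd_sem)
correlation = np.corrcoef(cd_sem, predicted)[0, 1]

print(f"correlation:   {correlation:.3f}")
print(f"mean abs diff: {abs_diff.mean():.3f} um")
print(f"std abs diff:  {abs_diff.std():.3f} um")
print(f"max abs diff:  {abs_diff.max():.3f} um")
```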

Recyclable Objects Detection via Bounding Box CutMix and Standardized Distance-based IoU (Bounding Box CutMix와 표준화 거리 기반의 IoU를 통한 재활용품 탐지)

  • Lee, Haejin; Jung, Heechul
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.289-296 / 2022
  • In this paper, we developed a deep learning-based recyclable object detection model. The model is built on YOLOv5, a one-stage detector, and detects recyclable objects and classifies them into seven categories: paper, carton, can, glass, PET, plastic, and vinyl. We propose two methods to solve problems that arise while training recyclable object detection models. Bounding Box CutMix addresses the problem of training images without objects produced by Mosaic, a data augmentation method used in YOLOv5. Standardized Distance-based IoU replaces DIoU with a normalization factor that is not affected by the center-point distance of the bounding boxes. The recyclable object detection model achieved a final mAP of 0.91978 with Bounding Box CutMix and 0.91149 with Standardized Distance-based IoU.
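The abstract does not give the exact normalization used by the proposed Standardized Distance-based IoU, so the sketch below implements the standard DIoU it modifies: the IoU minus the squared center distance normalized by the squared diagonal of the smallest enclosing box, which is the factor the paper replaces.

```python
# Standard DIoU for two axis-aligned boxes given as (x1, y1, x2, y2).
def diou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection over union.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0

    # Squared distance between box centers.
    d2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # Squared diagonal of the smallest enclosing box: the DIoU normalization
    # factor that the paper's standardized variant replaces.
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2

    return iou - d2 / c2 if c2 > 0 else iou

print(diou((0, 0, 10, 10), (2, 2, 12, 12)))
```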

Severity Prediction of Sleep Respiratory Disease Based on Statistical Analysis Using Machine Learning (머신러닝을 활용한 통계 분석 기반의 수면 호흡 장애 중증도 예측)

  • Jun-Su Kim; Byung-Jae Choi
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.2 / pp.59-65 / 2023
  • Currently, polysomnography is essential for diagnosing sleep-related breathing disorders. However, polysomnography has several disadvantages, such as the need for multiple sensors and a long reading time. In this paper, we propose a system for predicting the severity of sleep-related breathing disorders at home using elements measurable by a wearable device. To predict severity, the variables were refined through a three-step variable selection process, and the refined variables were used as inputs to three machine-learning models. The random forest models showed excellent prediction performance throughout the study. The best F1 scores for the three threshold criteria of 5, 15, and 30 on the AHI index were about 87.3%, 90.7%, and 90.8%, respectively, and the best F1 scores for the same three threshold criteria on the RDI index were approximately 79.8%, 90.2%, and 90.1%, respectively.
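As a rough sketch of the classification setup described above, the code below trains a random forest to predict whether a synthetic AHI value exceeds one of the 5/15/30 thresholds and reports the F1 score; the features and data are made up and do not come from the study.

```python
# Synthetic stand-in for the study's setup: wearable features -> AHI threshold class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                     # wearable-measurable features (hypothetical)
ahi = rng.gamma(shape=2.0, scale=8.0, size=500)   # synthetic AHI values

threshold = 15                                    # one of the 5/15/30 criteria in the abstract
y = (ahi >= threshold).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))
```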

Deep Learning Braille Block Recognition Method for Embedded Devices (임베디드 기기를 위한 딥러닝 점자블록 인식 방법)

  • Hee-jin Kim; Jae-hyuk Yoon; Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems / v.28 no.4 / pp.1-9 / 2023
  • In this paper, we propose a method to recognize braille blocks in real time on embedded devices through deep learning. First, a deep learning model for braille block recognition is trained on a high-performance computer, and the trained model is then converted with a model-lightweighting tool so that it can run on an embedded device. To recognize the walking information conveyed by the braille blocks, an algorithm determines the path using the distance to the braille blocks in the image. After detecting braille blocks, bollards, and crosswalks with the YOLOv8 model in video captured by the embedded device, the walking information is recognized through the braille block path discrimination algorithm. We apply the model-lightweighting tool to YOLOv8 so that braille blocks can be detected in real time: the precision of the YOLOv8 model weights is lowered from 32 bits to 8 bits, and the model is optimized with the TensorRT optimization engine. Comparing the model lightweighted by the proposed method with the original model, the path recognition accuracy is 99.05%, almost the same as the original model, while the recognition time is reduced by 59%, processing about 15 frames per second.
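The quantization and TensorRT steps described above could look roughly like the sketch below when done with the ultralytics YOLOv8 tooling; the file names and dataset config are placeholders, the exact export options are an assumption rather than the paper's procedure, and running it requires a TensorRT-capable device such as an NVIDIA Jetson.

```python
from ultralytics import YOLO

# Model trained on a high-performance computer (placeholder weight file).
model = YOLO("braille_block_yolov8.pt")

# Export to a TensorRT engine with INT8 quantization; 'data' points to a
# calibration dataset config (placeholder path, assumed export options).
model.export(format="engine", int8=True, data="braille_blocks.yaml")

# The resulting .engine file can then be loaded on the embedded device.
trt_model = YOLO("braille_block_yolov8.engine")
results = trt_model("sidewalk_frame.jpg")  # detect braille blocks, bollards, crosswalks
```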

Deep Learning System based on Morphological Neural Network (몰포러지 신경망 기반 딥러닝 시스템)

  • Choi, Jong-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.1 / pp.92-98 / 2019
  • In this paper, we propose a deep learning system based on a morphological neural network (MNN). The deep learning layers are a morphological operation layer, a pooling layer, a ReLU layer, and a fully connected layer. The operations used in the morphological layer include erosion, dilation, and edge detection. Unlike a CNN, the MNN limits the number of hidden layers and the number of kernels applied to each layer. Because of the reduced processing time and its suitability for VLSI chip design, the MNN can be applied to various mobile embedded systems. The MNN performs edge and shape detection with a limited number of kernels. Experiments on database images confirm that the MNN can be used as a deep learning system and demonstrate its performance.
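To make the morphological-layer idea concrete, here is an illustrative sketch of grayscale dilation and erosion of the kind an MNN stacks in place of convolutions, plus a morphological gradient as a simple edge detector; the structuring element is fixed here, whereas a trainable MNN would learn it.

```python
import numpy as np

def dilate(image, se):
    # Grayscale dilation: max of (neighborhood + structuring element).
    kh, kw = se.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), constant_values=-np.inf)
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.max(padded[i:i + kh, j:j + kw] + se)
    return out

def erode(image, se):
    # Grayscale erosion: min of (neighborhood - structuring element).
    kh, kw = se.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), constant_values=np.inf)
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.min(padded[i:i + kh, j:j + kw] - se)
    return out

img = np.random.rand(8, 8)
se = np.zeros((3, 3))                     # flat 3x3 structuring element
edges = dilate(img, se) - erode(img, se)  # morphological gradient as edge detection
```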

Implementation of Deep Learning-based Label Inspection System Applicable to Edge Computing Environments (엣지 컴퓨팅 환경에서 적용 가능한 딥러닝 기반 라벨 검사 시스템 구현)

  • Bae, Ju-Won; Han, Byung-Gil
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.2 / pp.77-83 / 2022
  • In this paper, a two-stage object detection approach is proposed to implement a deep learning-based label inspection system in edge computing environments. Since the label printed on products during the production process contains important product information, it is important to check that the label information is correct. The proposed system uses a lightweight deep learning model that can run on low-performance edge computing devices, and the two-stage object detection approach is applied to compensate for its relatively low accuracy. The proposed two-stage approach consists of two object detection networks, a Label Area Detection Network and a Character Detection Network. The Label Area Detection Network finds the label area in the product image, and the Character Detection Network detects the words within that label area. Using this approach, characters can be detected precisely even with lightweight deep learning models. The SF-YOLO model applied in the proposed system is a YOLO-based lightweight object detection network designed for edge computing devices. This model showed up to two times faster processing and a considerable improvement in accuracy compared to other YOLO-based lightweight models such as YOLOv3-tiny and YOLOv4-tiny. Also, since its amount of computation is low, it can easily be applied in edge computing environments.
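A schematic of the two-stage flow described above might look like the sketch below: a first detector finds label areas, the image is cropped to each area, and a second detector finds characters inside the crop, with boxes mapped back to full-image coordinates. The detector callables are placeholders standing in for the paper's SF-YOLO-based networks.

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)

def inspect_label(image,
                  label_area_detector: Callable[[object], List[Box]],
                  character_detector: Callable[[object], List[Box]]) -> List[Box]:
    characters: List[Box] = []
    # Stage 1: find label regions in the full product image.
    for (x1, y1, x2, y2) in label_area_detector(image):
        crop = image[y1:y2, x1:x2]
        # Stage 2: detect characters inside the cropped label area only,
        # then map their boxes back to full-image coordinates.
        for (cx1, cy1, cx2, cy2) in character_detector(crop):
            characters.append((x1 + cx1, y1 + cy1, x1 + cx2, y1 + cy2))
    return characters
```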

Development of a Steel Plate Surface Defect Detection System Based on Small Data Deep Learning (소량 데이터 딥러닝 기반 강판 표면 결함 검출 시스템 개발)

  • Gaybulayev, Abdulaziz; Lee, Na-Hyeon; Lee, Ki-Hwan; Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.3 / pp.129-138 / 2022
  • Collecting and labeling sufficient training data, which is essential for deep learning-based visual inspection, is difficult for manufacturers because it is very expensive. This paper presents a steel plate surface defect detection system with industrial-grade detection performance, trained on a small set of steel plate surface images consisting of labeled and unlabeled data. To overcome the lack of training data, we propose two data augmentation techniques: program-based augmentation, which generates defect images geometrically, and generative model-based augmentation, which learns the distribution of the labeled data. We also propose a 4-step semi-supervised learning procedure using pseudo labels and consistency training with fixed-size augmentation in order to utilize unlabeled data for training. The proposed technique achieved about 99% defect detection performance for four defect types using 100 real images including labeled and unlabeled data.
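The pseudo-labeling part of the semi-supervised step could be sketched as below: predictions on unlabeled images are kept as training labels only when the model is confident enough. The confidence threshold and the data are illustrative choices, not the paper's settings.

```python
import numpy as np

def make_pseudo_labels(model_probs: np.ndarray, threshold: float = 0.95):
    """model_probs: (num_unlabeled, num_classes) softmax outputs."""
    confidences = model_probs.max(axis=1)
    labels = model_probs.argmax(axis=1)
    keep = confidences >= threshold
    # Indices of unlabeled images that receive pseudo labels, and the labels themselves.
    return np.flatnonzero(keep), labels[keep]

probs = np.array([[0.97, 0.02, 0.01],   # confident -> pseudo-labeled as class 0
                  [0.50, 0.30, 0.20]])  # uncertain -> discarded
idx, pseudo = make_pseudo_labels(probs)
print(idx, pseudo)                       # [0] [0]
```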

PartitionTuner: An operator scheduler for deep-learning compilers supporting multiple heterogeneous processing units

  • Misun Yu; Yongin Kwon; Jemin Lee; Jeman Park; Junmo Park; Taeho Kim
    • ETRI Journal / v.45 no.2 / pp.318-328 / 2023
  • Recently, embedded systems, such as mobile platforms, have multiple processing units that can operate in parallel, such as central processing units (CPUs) and neural processing units (NPUs). We can use deep-learning compilers to generate machine code optimized for these embedded systems from a deep neural network (DNN). However, the deep-learning compilers proposed so far generate code that either executes DNN operators sequentially on a single processing unit or runs in parallel only on graphics processing units (GPUs). In this study, we propose PartitionTuner, an operator scheduler for deep-learning compilers that supports multiple heterogeneous PUs, including CPUs and NPUs. PartitionTuner can generate an operator-scheduling plan that uses all available PUs simultaneously to minimize overall DNN inference time. Operator scheduling is based on an analysis of the DNN architecture and on performance profiles of individual and grouped operators measured on the heterogeneous processing units. In experiments with seven DNNs, PartitionTuner generates scheduling plans that perform 5.03% better than a static type-based operator-scheduling technique for SqueezeNet. In addition, PartitionTuner outperforms recent profiling-based operator-scheduling techniques for ResNet50, ResNet18, and SqueezeNet by 7.18%, 5.36%, and 2.73%, respectively.
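As a toy illustration of profile-based operator scheduling, the sketch below assigns each operator of a sequential chain to the processing unit with the lowest profiled latency; PartitionTuner's actual planner additionally exploits parallel branches, operator groups, and simultaneous use of all PUs, none of which are modeled here.

```python
profiles = {            # profiled latency (ms) per operator per PU, hypothetical values
    "conv1": {"CPU": 4.0, "NPU": 1.0},
    "conv2": {"CPU": 4.5, "NPU": 1.2},
    "fc":    {"CPU": 0.8, "NPU": 2.0},
}

current_time = 0.0
plan = []
for op, latencies in profiles.items():        # operators of a sequential chain, in order
    pu = min(latencies, key=latencies.get)    # fastest PU for this operator
    plan.append((op, pu, current_time, current_time + latencies[pu]))
    current_time += latencies[pu]

for op, pu, start, end in plan:
    print(f"{op}: {pu} [{start:.1f} ms - {end:.1f} ms]")
```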

Obstacle Avoidance System for Autonomous CTVs in Offshore Wind Farms Based on Deep Reinforcement Learning (심층 강화학습 기반 자율운항 CTV의 해상풍력발전단지 내 장애물 회피 시스템)

  • Jingyun Kim; Haemyung Chon; Jackyou Noh
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.3 / pp.131-139 / 2024
  • Crew Transfer Vessels (CTVs) are primarily used for the maintenance of offshore wind farms. Although they are manually operated by professional captains and crews, collisions with other ships and marine structures still occur. To prevent this, autonomous navigation systems need to be introduced to CTVs. In this study, we investigated the obstacle avoidance component of an autonomous navigation system for CTVs. In particular, we carried out obstacle avoidance simulations for CTVs using deep reinforcement learning, taking into account the currents and wind loads present in offshore wind farms. For this purpose, 3-degree-of-freedom ship maneuvering modeling of the CTV considering these currents and wind loads was performed, and a simulation environment for offshore wind farms was implemented to train and test the deep reinforcement learning agent. Specifically, obstacle avoidance maneuvers were learned with the MATD3 deep reinforcement learning algorithm, and the model trained over 10,000 episodes was confirmed to successfully avoid both static and moving obstacles. This supports the conclusion that the proposed methods can facilitate obstacle avoidance for autonomous CTVs within offshore wind farms.
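A minimal sketch of the 3-degree-of-freedom kinematic update used in this kind of simulation is shown below, with the current added as an earth-fixed drift term; the hydrodynamic coefficients, the wind load model, and the MATD3 agent itself are omitted, and all numeric values are hypothetical.

```python
import math

def step_3dof(x, y, psi, u, v, r, dt, current_x=0.0, current_y=0.0):
    """Integrate position (x, y) and heading psi over one time step.

    u: surge speed, v: sway speed (body frame), r: yaw rate,
    current_x/current_y: current-induced drift in the earth-fixed frame.
    """
    x_dot = u * math.cos(psi) - v * math.sin(psi) + current_x
    y_dot = u * math.sin(psi) + v * math.cos(psi) + current_y
    psi_dot = r
    return x + x_dot * dt, y + y_dot * dt, psi + psi_dot * dt

# One simulated second at 10 Hz with a small lateral current (hypothetical values).
state = (0.0, 0.0, 0.0)
for _ in range(10):
    state = step_3dof(*state, u=2.0, v=0.0, r=0.05, dt=0.1, current_y=0.3)
print(state)
```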