• Title/Summary/Keyword: Deep Learning System

Anomaly Sewing Pattern Detection for AIoT System using Deep Learning and Decision Tree

  • Nguyen Quoc Toan;Seongwon Cho
    • Smart Media Journal
    • /
    • v.13 no.2
    • /
    • pp.85-94
    • /
    • 2024
  • Artificial Intelligence of Things (AIoT), which combines AI and the Internet of Things (IoT), has recently gained popularity. Deep neural networks (DNNs) have achieved great success in many applications. Nevertheless, deploying complex AI models on embedded boards may be challenging due to computational limitations or model complexity. This paper focuses on an AIoT-based system for smart sewing automation using edge devices. Our technique includes a detection model and a decision tree for a sufficient testing scenario. Our defective sewing stitch detection model is built on YOLOv5 to detect anomalies and classify the sewing patterns. According to the experimental testing, the proposed approach achieved a perfect score, with an accuracy and F1-score of 1.0, a False Positive Rate (FPR) and False Negative Rate (FNR) of 0, and a speed of 0.07 seconds with a file size of 2.43 MB.
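
The following is a minimal sketch of the kind of pipeline this abstract describes: a YOLOv5 detector flags defective stitches and a simple decision rule turns detections into a pass/fail verdict. The weight file name, class labels, and confidence threshold are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: YOLOv5 detection + a decision-tree-style rule for sewing QC.
import torch

# Load a custom-trained YOLOv5 model through the public torch.hub entry point.
# "stitch_yolov5s.pt" is a placeholder for the trained weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="stitch_yolov5s.pt")

def inspect(image_path: str, conf_threshold: float = 0.5) -> str:
    """Return a verdict for one sewing image based on detected defect boxes."""
    results = model(image_path)
    detections = results.pandas().xyxy[0]               # one row per detected box
    defects = detections[detections["confidence"] >= conf_threshold]

    # Decision rule: any confident defect detection fails the pattern.
    if len(defects) == 0:
        return "normal"
    return "defective: " + ", ".join(sorted(defects["name"].unique()))

print(inspect("sample_stitch.jpg"))
```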

Technical Trends of Medical AI Hubs (의료 AI 중추 기술 동향)

  • Choi, J.H.;Park, S.J.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.1
    • /
    • pp.81-88
    • /
    • 2021
  • Post COVID-19, the medical legacy system will be transformed for utilizing medical resources efficiently, minimizing medical service imbalance, activating remote medical care, and strengthening private-public medical cooperation. This can be realized by achieving an entire medical paradigm shift and not simply via the application of advanced technologies such as AI. We propose a medical system configuration named "Medical AI Hub" that can realize the shift of the existing paradigm. The development stage of this configuration is categorized into "AI Cooperation Hospital," "AI Base Hospital," and "AI Hub Hospital." In the "AI Hub Hospital" stage, the medical intelligence in charge of individual patients cooperates and communicates autonomously with various medical intelligences, thereby achieving synchronous evolution. Thus, this medical intelligence supports doctors in optimally treating patients. The core technologies required during configuration development and their current R&D trends are described in this paper. The realization of the central configuration of medical AI through the development of these core technologies will induce a paradigm shift in the new medical system by innovating all medical fields with influences at the individual, society, industry, and public levels and by making the existing medical system more efficient and intelligent.

Meta learning-based open-set identification system for specific emitter identification in non-cooperative scenarios

  • Xie, Cunxiang;Zhang, Limin;Zhong, Zhaogen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.5
    • /
    • pp.1755-1777
    • /
    • 2022
  • The development of wireless communication technology has led to the underutilization of radio spectra. To address this limitation, an intelligent cognitive radio network was developed. Specific emitter identification (SEI) is a key technology in this network. However, in realistic non-cooperative scenarios, the system may detect signal classes beyond those in the training database, and only a few labeled signal samples are available for network training, both of which deteriorate identification performance. To overcome these challenges, a meta-learning-based open-set identification system is proposed for SEI. First, the received signals were pre-processed using bi-spectral analysis and a Radon transform to obtain signal representation vectors, which were then fed into an open-set SEI network. This network consisted of a deep feature extractor and an intrinsic feature memorizer that can detect signals of unknown classes and classify signals of different known classes. The training loss functions and the procedures of the open-set SEI network were then designed for parameter optimization. Considering the few-shot problems of open-set SEI, meta-training loss functions and meta-training procedures that require only a few labeled signal samples were further developed for open-set SEI network training. The experimental results demonstrate that this approach outperforms other state-of-the-art SEI methods in open-set scenarios. In addition, excellent open-set SEI performance was achieved using at least 50 training signal samples, and effective operation in low signal-to-noise ratio (SNR) environments was demonstrated.
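
As a hedged illustration of the open-set idea described above (the paper's own network pairs a deep feature extractor with an intrinsic feature memorizer), the sketch below uses class prototypes and a distance threshold to reject unknown emitters. Input dimensions, the prototype values, and the threshold are assumptions for illustration only.

```python
# Prototype/distance-based open-set classification sketch (PyTorch).
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Maps a signal representation vector (e.g. bispectrum + Radon features) to an embedding."""
    def __init__(self, in_dim: int = 256, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def open_set_predict(extractor, prototypes, x, threshold=5.0):
    """Assign each sample to the nearest known-class prototype, or -1 (unknown) if too far."""
    feats = extractor(x)                     # (batch, feat_dim)
    dists = torch.cdist(feats, prototypes)   # (batch, n_known_classes)
    min_dist, pred = dists.min(dim=1)
    pred[min_dist > threshold] = -1          # reject samples far from every prototype
    return pred

# Toy usage: 3 known emitter classes, 256-dim signal representation vectors.
extractor = FeatureExtractor()
prototypes = torch.randn(3, 64)              # placeholder per-class mean features
signals = torch.randn(8, 256)
print(open_set_predict(extractor, prototypes, signals))
```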

Construction of an Analysis System Using Digital Breeding Technology for the Selection of Capsicum annuum

  • Donghyun Jeon;Sehyun Choi;Yuna Kang;Changsoo Kim
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.233-233
    • /
    • 2022
  • As the world's population grows and food needs diversify, the demand for horticultural crops with beneficial traits is increasing. In order to meet this demand, it is necessary to develop suitable cultivars and breeding methods accordingly. Breeding methods have changed over time. With the recent development of sequencing technology, the concept of genomic selection (GS) has emerged, as large-scale genome information can now be used. GS shows good predictive ability even for quantitative traits by using various markers, breaking away from the limitations of Marker Assisted Selection (MAS). Moreover, GS using machine learning (ML) and deep learning (DL) has been studied recently. In this study, we aim to build a system that selects phenotype-related markers using the genomic information of the pepper population and trains a genomic selection model to select individuals from the validation population. We plan to establish an optimal genome-wide association analysis model by comparing and analyzing five models. We will then validate molecular markers by applying linkage markers discovered through genome-wide association analysis to breeding populations. Finally, we plan to establish an optimal genomic selection model by comparing and analyzing 12 genomic selection models. We will then use the genomic selection model trained on the learning population in the breeding population to verify the prediction accuracy and derive a prediction model.
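
The abstract compares many genomic selection models; as a hedged sketch of one common baseline (ridge regression over SNP markers, similar in spirit to rrBLUP), the example below predicts a quantitative phenotype from a synthetic genotype matrix. The marker matrix, phenotype, and hyperparameters are stand-ins, not the study's data or chosen model.

```python
# Genomic selection baseline sketch: ridge regression on a 0/1/2 genotype matrix.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_individuals, n_markers = 200, 1000
X = rng.integers(0, 3, size=(n_individuals, n_markers)).astype(float)   # SNP genotypes
true_effects = rng.normal(0, 0.1, size=n_markers)
y = X @ true_effects + rng.normal(0, 1.0, size=n_individuals)           # synthetic phenotype

# Cross-validated ridge regression as the genomic selection model.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```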

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis to grasp the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In the sentiment analysis of English texts by deep learning, natural language sentences included in training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence by space characters. There are several ways to derive word vectors, one of which is Word2Vec, used for producing the 300-dimensional Google word vectors from about 100 billion words of Google News data. They have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, which is a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt morphemes as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into constituent morphemes. Several questions then arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which primarily relies on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize these issues as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we achieve a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the research questions and then compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model taking in the morpheme vectors. As for training and test datasets, Naver Shopping's 17,260 cosmetics product reviews are used. To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria. First, they come from two types of data source: Naver News with high grammatical correctness and Naver Shopping's cosmetics product reviews with low grammatical correctness. Second, they differ in the degree of data preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes alone, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the consideration range of POS tags, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It seems that utilizing same-domain text even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags including the incomprehensible category lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency standard for morpheme inclusion do not seem to have any definite influence on the classification accuracy.
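
Below is a hedged sketch of the pipeline this abstract describes: morpheme-tokenized sentences (assumed to come from a Korean analyzer such as KoNLPy) feed a CBOW Word2Vec model with the stated window of 5 and dimension of 300, and the resulting vectors initialize a small non-static (fine-tunable) 1D-convolutional classifier. The toy corpus and network sizes are illustrative, not the study's actual setup.

```python
# Morpheme CBOW vectors (gensim) + a minimal non-static text CNN (PyTorch).
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Toy morpheme-tokenized reviews standing in for the real corpus.
corpus = [["이", "제품", "정말", "예쁘", "고", "좋", "다"],
          ["배송", "이", "너무", "느리", "다"]]

w2v = Word2Vec(corpus, vector_size=300, window=5, sg=0, min_count=1)   # sg=0 -> CBOW

class MorphemeCNN(nn.Module):
    """Embeddings initialized from CBOW vectors and fine-tuned (non-static), then a 1D conv."""
    def __init__(self, weights: torch.Tensor, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(weights, freeze=False)
        self.conv = nn.Conv1d(weights.size(1), 100, kernel_size=3, padding=1)
        self.fc = nn.Linear(100, n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids).transpose(1, 2)          # (batch, dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values     # max-over-time pooling
        return self.fc(x)

weights = torch.tensor(w2v.wv.vectors)
model = MorphemeCNN(weights)
ids = torch.tensor([[w2v.wv.key_to_index[m] for m in corpus[0]]])
print(model(ids).shape)                                    # sentiment logits: (1, 2)
```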

Automatic Object Extraction from Electronic Documents Using Deep Neural Network (심층 신경망을 활용한 전자문서 내 객체의 자동 추출 방법 연구)

  • Jang, Heejin;Chae, Yeonghun;Lee, Sangwon;Jo, Jinyong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.11
    • /
    • pp.411-418
    • /
    • 2018
  • With the proliferation of artificial intelligence technology, it is becoming important to obtain, store, and utilize scientific data in research and science sectors. A number of methods for extracting meaningful objects such as graphs and tables from research articles have been proposed to eventually obtain scientific data. Existing extraction methods using heuristic approaches are hardly applicable to electronic documents having heterogeneous manuscript formats because they are designed to work properly for some targeted manuscripts. This paper proposes a prototype of an object extraction system which exploits a recent deep-learning technology so as to overcome the inflexibility of the heuristic approaches. We implemented our trained model, based on the Faster R-CNN algorithm, using the Google TensorFlow Object Detection API and also composed an annotated data set from 100 research articles for training and evaluation. Finally, a performance evaluation shows that the proposed system outperforms a comparator adopting heuristic approaches by 5.2%.
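
The paper itself used the Google TensorFlow Object Detection API; purely as an illustrative alternative, the sketch below shows the same fine-tuning idea with torchvision's Faster R-CNN, swapping in a box predictor head for a hypothetical "figure"/"table" label set. All class names and thresholds are assumptions.

```python
# Faster R-CNN fine-tuning sketch with torchvision (not the paper's TF OD API setup).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background, figure, table (hypothetical label set)

# Start from a COCO-pretrained Faster R-CNN and swap in a new box predictor head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
model.eval()   # after training on annotated page images, run inference

# Inference on one rendered page image (3 x H x W tensor scaled to [0, 1]).
page = torch.rand(3, 800, 600)
with torch.no_grad():
    output = model([page])[0]

keep = output["scores"] > 0.7
print("detected objects:", int(keep.sum()))
```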

Large orchard apple classification system (대형 과수원 사과 분류 시스템)

  • Kim, Weol-Youg;Shin, Seung Seung-Jung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.4 no.4
    • /
    • pp.393-399
    • /
    • 2018
  • The development of unmanned AI systems continues, aiming to perform work that has traditionally been handled by manpower in sectors such as industry and welfare more efficiently, accurately, and quickly. AI-based unmanned technology is evolving in many fields, and it is time for many industries and factories to switch to unmanned systems. Taking this into consideration, we apply deep learning, one of the core technologies of artificial intelligence (AI), to the fruit that pours onto the rails at once in a large orchard, rather than relying on manpower. We study an unmanned fruit sorting machine, operated under a manager's supervision, that classifies fruit by type, grade, and country of origin without manual sorting. This unmanned automated classification system aims to reduce labor costs by minimizing manpower and to improve sorting efficiency.

Comparison of Anomaly Detection Performance Based on GRU Model Applying Various Data Preprocessing Techniques and Data Oversampling (다양한 데이터 전처리 기법과 데이터 오버샘플링을 적용한 GRU 모델 기반 이상 탐지 성능 비교)

  • Yoo, Seung-Tae;Kim, Kangseok
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.2
    • /
    • pp.201-211
    • /
    • 2022
  • With the recent change in the cybersecurity paradigm, research on anomaly detection methods using machine learning and deep learning techniques, which are AI implementation technologies, is increasing. In this study, a comparative study was conducted on data preprocessing techniques that can improve the anomaly detection performance of a GRU (Gated Recurrent Unit) neural network-based intrusion detection model using NGIDS-DS (Next Generation IDS Dataset), an open dataset. In addition, in order to solve the class imbalance problem caused by the ratio of normal data to attack data, the detection performance according to the oversampling ratio was compared and analyzed using an oversampling technique based on DCGAN (Deep Convolutional Generative Adversarial Networks). As a result of the experiment, the method preprocessed with the Doc2Vec algorithm for the system call and process execution path features showed good performance, and DCGAN-based oversampling yielded improved detection performance.
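
A hedged sketch of the feature pipeline described above: Doc2Vec embeds each event's system call and execution path tokens into a fixed-length vector, and a GRU classifies a window of consecutive event vectors as normal or attack. The toy corpus, window length, and dimensions are illustrative assumptions rather than the study's settings.

```python
# Doc2Vec event embeddings (gensim) + a GRU sequence classifier (PyTorch).
import torch
import torch.nn as nn
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy "system call + execution path" token sequences, one list per logged event.
events = [["open", "read", "/usr/bin/python"], ["fork", "execve", "/bin/sh"]]
tagged = [TaggedDocument(words=e, tags=[i]) for i, e in enumerate(events)]
d2v = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=20)

class GRUDetector(nn.Module):
    def __init__(self, in_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)          # normal vs. attack logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)                      # h: (1, batch, hidden)
        return self.fc(h.squeeze(0))

# One window of 10 consecutive event vectors (batch size 1).
window = torch.stack([torch.tensor(d2v.infer_vector(e)) for e in events * 5]).unsqueeze(0)
print(GRUDetector()(window).shape)              # (1, 2)
```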

A Generalized Adaptive Deep Latent Factor Recommendation Model (일반화 적응 심층 잠재요인 추천모형)

  • Kim, Jeongha;Lee, Jipyeong;Jang, Seonghyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.249-263
    • /
    • 2023
  • Collaborative Filtering, a representative recommendation system methodology, consists of two approaches: neighbor methods and latent factor models. Among these, the latent factor model using matrix factorization decomposes the user-item interaction matrix into two lower-dimensional rectangular matrices, predicting the item's rating through the product of these matrices. Due to the factor vectors inferred from rating patterns capturing user and item characteristics, this method is superior in scalability, accuracy, and flexibility compared to neighbor-based methods. However, it has a fundamental drawback: the need to reflect the diversity of preferences of different individuals for items with no ratings. This limitation leads to repetitive and inaccurate recommendations. The Adaptive Deep Latent Factor Model (ADLFM) was developed to address this issue. This model adaptively learns the preferences for each item by using the item description, which provides a detailed summary and explanation of the item. ADLFM takes in item description as input, calculates latent vectors of the user and item, and presents a method that can reflect personal diversity using an attention score. However, due to the requirement of a dataset that includes item descriptions, the domain that can apply ADLFM is limited, resulting in generalization limitations. This study proposes a Generalized Adaptive Deep Latent Factor Recommendation Model, G-ADLFRM, to improve the limitations of ADLFM. Firstly, we use item ID, commonly used in recommendation systems, as input instead of the item description. Additionally, we apply improved deep learning model structures such as Self-Attention, Multi-head Attention, and Multi-Conv1D. We conducted experiments on various datasets with input and model structure changes. The results showed that when only the input was changed, MAE increased slightly compared to ADLFM due to accompanying information loss, resulting in decreased recommendation performance. However, the average learning speed per epoch significantly improved as the amount of information to be processed decreased. When both the input and the model structure were changed, the best-performing Multi-Conv1d structure showed similar performance to ADLFM, sufficiently counteracting the information loss caused by the input change. We conclude that G-ADLFRM is a new, lightweight, and generalizable model that maintains the performance of the existing ADLFM while enabling fast learning and inference.
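
As a rough, hedged sketch of the general idea (not the paper's exact G-ADLFRM architecture), the model below embeds user and item IDs, refines the item embedding with a small Conv1D block standing in for the Multi-Conv1D variant, and predicts a rating from their interaction. Dimensions and layer choices are assumptions.

```python
# Latent factor recommendation sketch with an item-ID embedding and a Conv1D refinement.
import torch
import torch.nn as nn

class LatentFactorModel(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.item_conv = nn.Conv1d(1, 1, kernel_size=3, padding=1)   # refine item vector
        self.out = nn.Linear(dim, 1)

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        u = self.user_emb(user_ids)                       # (batch, dim)
        v = self.item_emb(item_ids).unsqueeze(1)          # (batch, 1, dim)
        v = torch.relu(self.item_conv(v)).squeeze(1)      # (batch, dim)
        return self.out(u * v).squeeze(-1)                # predicted rating

model = LatentFactorModel(n_users=1000, n_items=500)
ratings = model(torch.tensor([3, 7]), torch.tensor([42, 10]))
print(ratings.shape)                                      # (2,)
```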

Machine Learning Approaches to Corn Yield Estimation Using Satellite Images and Climate Data: A Case of Iowa State

  • Kim, Nari;Lee, Yang-Won
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.4
    • /
    • pp.383-390
    • /
    • 2016
  • Remote sensing data has been widely used in the estimation of crop yields by employing statistical methods such as regression models. Machine learning, which is an efficient empirical method for classification and prediction, is another approach to crop yield estimation. This paper describes corn yield estimation in Iowa State using four machine learning approaches: SVM (Support Vector Machine), RF (Random Forest), ERT (Extremely Randomized Trees), and DL (Deep Learning). Comparisons of the validation statistics among them are also presented. To examine the seasonal sensitivities of the corn yields, three period groups were set up: (1) MJJAS (May to September), (2) JA (July and August), and (3) OC (optimal combination of months). Overall, the DL method showed the highest accuracies in terms of the correlation coefficient for the three period groups. The accuracies were relatively favorable in the OC group, which indicates the optimal combination of months can be significant in statistical modeling of crop yields. The differences between our predictions and USDA (United States Department of Agriculture) statistics were about 6-8%, which shows the machine learning approaches can be a viable option for crop yield modeling. In particular, DL showed more stable results by overcoming the overfitting problem of generic machine learning methods.
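
For illustration, the sketch below compares three of the four approaches named above (SVM, RF, ERT) by cross-validated correlation, using synthetic stand-ins for the satellite and climate predictors; the deep learning model and the real yield data are omitted, so the numbers are not the paper's results.

```python
# Yield-estimation model comparison sketch with scikit-learn and synthetic features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))                              # e.g. monthly NDVI + climate features
y = X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=300)  # synthetic county-level corn yield

models = {
    "SVM": SVR(),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ERT": ExtraTreesRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)
    r, _ = pearsonr(y, pred)
    print(f"{name}: r = {r:.3f}")
```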