• Title/Summary/Keyword: Learning Machine System


Learning of Adaptive Behavior of Artificial Ant Using Classifier System (분류자 시스템을 이용한 인공개미의 적응행동의 학습)

  • 정치선;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.10a
    • /
    • pp.361-367
    • /
    • 1998
  • The two main applications of genetic algorithms (GA) are optimization and machine learning. Machine learning has two objectives: to make a complex system learn its environment and to produce the proper output of the system. Machine learning that uses genetic algorithms is called GA machine learning or genetics-based machine learning (GBML). Machine learning differs from optimization in that it searches for a rule set. In optimization problems, the GA population should converge to the best individual, because the objective is to produce an individual near the optimal solution; machine learning systems, on the contrary, need to find a set of cooperative rules. There are two approaches in GBML, the Michigan method and the Pittsburgh method: in the former each rule is expressed as a string, while in the latter the whole rule set is coded into a single string. Holland's classifier system is the representative model of the Michigan method. A classifier system adjusts the strengths of the classifiers in the classifier list using the message list, so real-time processing and on-line learning are possible because the rule set is adjusted on-line. A classifier system has three major components: a performance system, a credit apportionment system, and a rule discovery system. In this paper, we solve the food-search problem through the learning and evolution of an artificial ant using a learning classifier system (see the sketch after this entry).

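For readers unfamiliar with the Michigan-style loop described above (performance system, credit apportionment, rule discovery), the following Python sketch illustrates the idea under simplifying assumptions; the ternary rule encoding, the bid and payoff constants, and the toy food-sensing environment are illustrative inventions, not the authors' implementation.

```python
import random

class Classifier:
    """A rule: ternary condition ('0', '1', '#'), an action, and a strength."""
    def __init__(self, condition, action, strength=10.0):
        self.condition = condition
        self.action = action
        self.strength = strength

    def matches(self, message):
        # '#' is a wildcard that matches either bit of the environment message.
        return all(c == '#' or c == m for c, m in zip(self.condition, message))

def random_classifier(cond_len, n_actions):
    cond = ''.join(random.choice('01#') for _ in range(cond_len))
    return Classifier(cond, random.randrange(n_actions))

def step(classifiers, message, reward_fn, bid_ratio=0.1):
    # Performance system: build the match set and let the strongest classifier act.
    match_set = [c for c in classifiers if c.matches(message)]
    if not match_set:
        return None
    winner = max(match_set, key=lambda c: c.strength)
    winner.strength -= bid_ratio * winner.strength   # pay a bid
    winner.strength += reward_fn(winner.action)      # simplified credit apportionment
    return winner.action

def rule_discovery(classifiers, mutation_rate=0.05):
    # Rule discovery: replace the weakest classifier with a mutated copy of the strongest.
    classifiers.sort(key=lambda c: c.strength)
    best = classifiers[-1]
    new_cond = ''.join(random.choice('01#') if random.random() < mutation_rate else c
                       for c in best.condition)
    classifiers[0] = Classifier(new_cond, best.action, best.strength * 0.5)

if __name__ == "__main__":
    random.seed(0)
    # Toy environment: the ant senses 4 bits; action 1 pays off when the first bit
    # (hypothetically "food ahead") is set.
    population = [random_classifier(4, 2) for _ in range(20)]
    for t in range(200):
        message = ''.join(random.choice('01') for _ in range(4))
        reward_fn = lambda a, m=message: 1.0 if (m[0] == '1' and a == 1) else 0.0
        step(population, message, reward_fn)
        if t % 20 == 0:
            rule_discovery(population)
    print("strongest rule:", max(population, key=lambda c: c.strength).condition)
```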

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology
    • /
    • v.30 no.2
    • /
    • pp.76-81
    • /
    • 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without additional training over the various light intensity ranges. If the proposed machine vision system fails to recognize object features, it operates in a multiple-exposure sensing mode and detects target objects obscured in near-dark or overly bright regions. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, yielding image information with a wide dynamic range. Even though the deep learning process was given object recognition resources covering a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition over a light intensity range of up to 96 dB.
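The synthesis of short- and long-exposure frames described above can be pictured with a minimal sketch like the one below, assuming two 8-bit grayscale frames and a per-pixel well-exposedness weighting; the weighting scheme and the synthetic test frames are illustrative assumptions, not the paper's sensing pipeline.

```python
import numpy as np

def fuse_exposures(short_img, long_img):
    """Blend a short- and a long-exposure frame into a wider-dynamic-range image.

    Both inputs are assumed to be 8-bit grayscale arrays of the same shape.
    Pixels saturated in the long exposure lean on the short exposure, and
    pixels underexposed in the short exposure lean on the long one.
    """
    short_f = short_img.astype(np.float32) / 255.0
    long_f = long_img.astype(np.float32) / 255.0

    # Weight each pixel by how far it is from the extremes (well-exposedness).
    w_short = 1.0 - np.abs(short_f - 0.5) * 2.0
    w_long = 1.0 - np.abs(long_f - 0.5) * 2.0
    total = w_short + w_long + 1e-6

    fused = (w_short * short_f + w_long * long_f) / total
    return (fused * 255.0).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark_scene = rng.integers(0, 40, size=(8, 8), dtype=np.uint8)           # underexposed frame
    bright_scene = np.clip(dark_scene.astype(int) * 8, 0, 255).astype(np.uint8)  # overexposed frame
    print(fuse_exposures(dark_scene, bright_scene))
```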

Sensor Data Collection & Refining System for Machine Learning-Based Cloud (기계학습 기반의 클라우드를 위한 센서 데이터 수집 및 정제 시스템)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.2
    • /
    • pp.165-170
    • /
    • 2021
  • Machine learning has recently been applied to research in most areas. This is because the results of machine learning are not fixed in advance; learning from the input data builds the objective function, which then makes it possible to make determinations about new data. In addition, the growth of accumulated data affects the accuracy of machine learning results, so the data collected here is an important factor in machine learning. The proposed system is a convergence of a cloud system and local fog systems for service delivery: the cloud system provides machine learning and the infrastructure for services, while the fog system sits between the cloud and the user to collect and refine data. The data for this application is based on the sensing data generated by smart devices. The machine learning techniques applied to this system use the SVM algorithm for classification and an RNN algorithm for status recognition.
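As a hedged illustration of the classification side of such a system (an SVM trained on refined sensor features), the sketch below uses scikit-learn on synthetic windowed features; the feature layout, the 'idle'/'active' labels, and the model parameters are assumptions made for illustration, not the paper's data schema.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "refined" sensor features: [mean, variance, peak] per window.
rng = np.random.default_rng(42)
idle = rng.normal(loc=[0.1, 0.05, 0.2], scale=0.05, size=(100, 3))
active = rng.normal(loc=[0.8, 0.4, 1.5], scale=0.1, size=(100, 3))
X = np.vstack([idle, active])
y = np.array([0] * 100 + [1] * 100)  # 0 = idle, 1 = active (labels invented)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale the features, then fit an SVM classifier as the cloud-side model.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```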

Trends in image processing techniques applied to corrosion detection and analysis (부식 검출과 분석에 적용한 영상 처리 기술 동향)

  • Beomsoo Kim;Jaesung Kwon;Jeonghyeon Yang
    • Journal of the Korean institute of surface engineering
    • /
    • v.56 no.6
    • /
    • pp.353-370
    • /
    • 2023
  • Corrosion detection and analysis is a very important topic for reducing costs and preventing disasters. Recently, image processing techniques have been widely applied to corrosion identification and analysis. In this work, we briefly introduce traditional image processing techniques and machine learning algorithms applied to detecting or analyzing corrosion in various fields. Machine learning, especially CNN-based algorithms, has recently been widely applied to corrosion detection, and research on applying machine learning to region segmentation is also very active. Corrosion is reddish-brown in color and has a very irregular shape, so combinations of color- and texture-aware techniques, various mathematical methods, and machine learning algorithms are used to detect and analyze it. We present examples of the application of traditional image processing techniques and machine learning to corrosion detection and analysis.
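The color-based side of corrosion detection mentioned above can be sketched with a simple HSV threshold, as below; the hue/saturation/value bounds, the morphological clean-up, and the synthetic test image are illustrative assumptions rather than any specific method surveyed in the paper.

```python
import numpy as np
import cv2

def corrosion_mask(bgr_image):
    """Return a binary mask of reddish-brown pixels, a rough proxy for rust.

    The HSV bounds below are illustrative assumptions; real corrosion detection
    combines color, texture, and learned models.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Reddish-brown: low hue, moderate-to-high saturation, not too dark.
    lower = np.array([0, 60, 40], dtype=np.uint8)
    upper = np.array([25, 255, 220], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up small speckles with a morphological opening.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    # Synthetic test image: gray background with a rust-like brown patch.
    img = np.full((64, 64, 3), 128, dtype=np.uint8)
    img[20:40, 20:40] = (30, 60, 140)  # BGR for a brownish red
    mask = corrosion_mask(img)
    print("corroded pixel ratio:", mask.mean() / 255.0)
```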

Underwater Acoustic Research Trends with Machine Learning: General Background

  • Yang, Haesang;Lee, Keunhwa;Choo, Youngmin;Kim, Kookhyun
    • Journal of Ocean Engineering and Technology
    • /
    • v.34 no.2
    • /
    • pp.147-154
    • /
    • 2020
  • Underwater acoustics, the study of underwater wave propagation and its interaction with boundaries, has mainly been applied to underwater communication, target detection, marine resources, the marine environment, and underwater sound sources. Based on the scientific and engineering understanding of acoustic signals and data, recent studies combining traditional methods with data-driven machine learning have shown continuous progress. Machine learning, represented by deep learning, has shown unprecedented success in a variety of fields owing to big data, graphics processing unit (GPU) computing, and advances in algorithms. Although machine learning has not yet been implemented in every field of underwater acoustics, it is expected to be used more actively in the future, in line with the method's ongoing development and achievements. To help readers understand the research trends of machine learning applications in underwater acoustics, this paper introduces the general theoretical background of several related machine learning techniques.

Study on Automatic Bug Triage using Deep Learning (딥 러닝을 이용한 버그 담당자 자동 배정 연구)

  • Lee, Sun-Ro;Kim, Hye-Min;Lee, Chan-Gun;Lee, Ki-Seong
    • Journal of KIISE
    • /
    • v.44 no.11
    • /
    • pp.1156-1164
    • /
    • 2017
  • Existing studies on automatic bug triage have mostly designed the prediction system around a machine learning algorithm, so applying a high-performance machine learning model is central to the performance of an automatic bug triage system. Related research mainly uses machine learning models with high performance, such as SVM and Naïve Bayes. In this paper, we apply deep learning, which has recently shown good performance in the field of machine learning, to automatic bug triage and evaluate its performance. Experimental results show that the deep learning-based bug triage system achieves 48% accuracy in the active-developer experiments, an improvement of up to 69% over conventional machine learning techniques.
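A rough sketch of text-based bug triage follows, using TF-IDF features and a small multilayer perceptron from scikit-learn as a simplified stand-in for the paper's deep model; the toy bug reports and developer labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy bug reports and the developers they were assigned to (all invented).
reports = [
    "crash when opening settings dialog on startup",
    "null pointer exception in settings parser",
    "UI freezes while rendering large tables",
    "table view scrolling is slow and stutters",
    "login fails with expired OAuth token",
    "token refresh endpoint returns 500",
]
developers = ["alice", "alice", "bob", "bob", "carol", "carol"]

# TF-IDF features feeding a small multilayer perceptron as a stand-in
# for a deeper network.
triage = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
triage.fit(reports, developers)

# Assign a developer to a new, unseen bug report.
print(triage.predict(["settings dialog crashes immediately"]))
```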

A Study on the Prediction Model for Imported Vehicle Purchase Cancellation Using Machine Learning: Case of H Imported Vehicle Dealers (머신러닝을 이용한 국내 수입 자동차 구매 해약 예측 모델 연구: H 수입차 딜러사 대상으로)

  • Jung, Dong Kun;Lee, Jong Hwa;Lee, Hyun Kyu
    • The Journal of Information Systems
    • /
    • v.30 no.2
    • /
    • pp.105-126
    • /
    • 2021
  • Purpose: The purpose of this study is to implement an optimal machine learning model for predicting purchase cancellations in the car sales business, by applying the contract, cancellation, and sales information accumulated in the sales support system (SFA), which imported-car dealers commonly use for sales, customer, and inventory management, to several machine learning models and measuring their cancellation prediction performance. Design/methodology/approach: This study extracts 29,073 contract, cancellation, and sales records from 2015 to 2020 accumulated in the sales support system (SFA) of imported-car dealers and uses Python in a Jupyter Notebook to perform data pre-processing, verification, and modeling, that is, applying the data to and training the machine learning models; the final result is then predicted using new data. Findings: This study confirmed that cancellation prediction is possible by applying car purchase contract information to machine learning models, and it demonstrated the possibility of developing and utilizing a generalized predictive model with data from an imported-car sales system and machine learning technology. By predicting the cancellation probability of individual customers, dealers can intensively care for potentially lost customers, reducing and preventing failed sales and increasing sales revenue.
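A minimal sketch of this kind of cancellation prediction pipeline is shown below, assuming pandas and scikit-learn; the contract feature names, the tiny sample records, and the choice of a random forest classifier are illustrative assumptions, not the fields or models used in the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical contract records; column names are invented for illustration.
data = pd.DataFrame({
    "down_payment_ratio": [0.1, 0.3, 0.05, 0.5, 0.2, 0.15, 0.4, 0.05],
    "days_to_delivery":   [60, 14, 90, 7, 30, 75, 10, 120],
    "prior_purchases":    [0, 2, 0, 3, 1, 0, 2, 0],
    "cancelled":          [1, 0, 1, 0, 0, 1, 0, 1],
})
X = data.drop(columns="cancelled")
y = data["cancelled"]

# Hold out a small test set, preserving the cancelled/kept ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```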

Selecting Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification (자연어 처리 기반 『상한론(傷寒論)』 변병진단체계(辨病診斷體系) 분류를 위한 기계학습 모델 선정)

  • Young-Nam Kim
    • 대한상한금궤의학회지
    • /
    • v.14 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • Objective: The purpose of this study is to explore the most suitable machine learning model algorithm for Shanghanlun diagnostic system classification using natural language processing (NLP). Methods: A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' were excluded to prevent oversampling or undersampling. Data were preprocessed using the Twitter Korean tokenizer and trained with logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms, and the accuracy of the models was compared. Results: Ridge regression and the naive Bayes classifier showed an accuracy of 0.843, logistic regression and random forest showed an accuracy of 0.804, decision tree showed an accuracy of 0.745, and lasso regression showed an accuracy of 0.608. Conclusions: Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification (a hedged sketch of this kind of model comparison follows this entry).

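The model comparison referenced in the entry above might look roughly like the sketch below, which cross-validates a ridge classifier and a naive Bayes classifier over TF-IDF features; the placeholder clause texts, English keywords, and labels are invented, and the study's Twitter (Okt) Korean tokenizer step is omitted to keep the example self-contained.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder clause texts and diagnostic labels; the real study tokenized
# Korean/Hanja passages, which is not reproduced here.
texts = [
    "fever aversion to cold floating pulse headache",
    "fever no sweating stiff neck floating tight pulse",
    "alternating chills and fever bitter mouth dry throat",
    "chest fullness irritability wiry pulse",
    "abdominal fullness constipation tidal fever",
    "delirious speech dry stool deep excess pulse",
]
labels = ["Taeyang", "Taeyang", "Soyang", "Soyang", "Yangmyeong", "Yangmyeong"]

# Compare two of the study's candidate models with 2-fold cross-validation.
for name, clf in [("ridge", RidgeClassifier()), ("naive bayes", MultinomialNB())]:
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipeline, texts, labels, cv=2)
    print(name, "mean accuracy:", scores.mean())
```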

Research Trends Analysis of Machine Learning and Deep Learning: Focused on the Topic Modeling (머신러닝 및 딥러닝 연구동향 분석: 토픽모델링을 중심으로)

  • Kim, Chang-Sik;Kim, Namgyu;Kwahk, Kee-Young
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.15 no.2
    • /
    • pp.19-28
    • /
    • 2019
  • The purpose of this study is to examine the trends in machine learning and deep learning research in journals indexed in the Web of Science database. To achieve this purpose, we used the abstracts of 20,664 articles published between 1990 and 2017 that include the words 'machine learning', 'deep learning', or 'artificial neural network' in their titles. Twenty major research topics were identified from topic modeling analysis: classification accuracy, machine learning, optimization problem, time series model, temperature flow, engine variable, neuron layer, spectrum sample, image feature, strength property, extreme machine learning, control system, energy power, cancer patient, descriptor compound, fault diagnosis, soil map, concentration removal, protein gene, and job problem. Time-series linear regression analysis showed that all identified topics in machine learning research were 'hot' ones.
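As a hedged sketch of the topic modeling step described above, the code below fits a small LDA model over a handful of placeholder abstracts with scikit-learn; the abstracts, the three-topic setting, and the printed top terms are illustrative only (the study itself identified twenty topics from 20,664 Web of Science abstracts).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder abstracts standing in for the mined Web of Science corpus.
abstracts = [
    "deep neural network improves image classification accuracy",
    "convolutional layers extract image features for recognition",
    "support vector machine optimization for fault diagnosis",
    "fault diagnosis of rotating machinery with machine learning",
    "time series regression model for energy power forecasting",
    "temperature and energy consumption prediction with neural models",
]

# Build a document-term matrix, then fit a small LDA topic model.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Print the top terms per topic as a crude topic label.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {idx}: {', '.join(top_terms)}")
```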

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.1
    • /
    • pp.25-32
    • /
    • 2020
  • In the context-aware system, rule-based AI technology has been used in the abstraction process for getting context information. However, the rules are complicated by the diversification of user requirements for the service and also data usage is increased. Therefore, there are some technical limitations to maintain rule-based models and to process unstructured data. To overcome these limitations, many studies have applied machine learning techniques to Context-aware systems. In order to utilize this machine learning-based model in the context-aware system, a management process of periodically injecting training data is required. In the previous study on the machine learning based context awareness system, a series of management processes such as the generation and provision of learning data for operating several machine learning models were considered, but the method was limited to the applied system. In this paper, we propose a training data generating method of a machine learning model to extend the machine learning based context-aware system. The proposed method define the training data generating model that can reflect the requirements of the machine learning models and generate the training data for each machine learning model. In the experiment, the training data generating model is defined based on the training data generating schema of the cardiac status analysis model for older in health status notification service, and the training data is generated by applying the model defined in the real environment of the software. In addition, it shows the process of comparing the accuracy by learning the training data generated in the machine learning model, and applied to verify the validity of the generated learning data.