• Title/Summary/Keyword: Conventional machine learning

284 search results

Development of Firewall System for Automated Policy Rule Generation based on Machine learning (머신러닝 기반의 자동 정책 생성 방화벽 시스템 개발)

  • Han, Kyung-Hyun; Hwang, Seong-Oun
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.2, pp.29-37, 2020
  • Conventional firewalls cannot cope with attacks immediately, because security professionals or administrators must first analyze the attacks and enter the relevant policies into the firewall. In addition, those policies may often block even normal accesses. Moreover, many attacks cause denial of service through the inflow of a large volume of packets that are individually normal. In this paper, we propose a method that blocks attacks such as flooding, spoofing, and scanning while allowing normal accesses, based on whitelist policies automatically generated by learning normal access patterns.
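The whitelist idea in this abstract can be sketched in a few lines. This is a toy illustration, not the authors' system: the log format, the `min_count` and `flood_limit` thresholds, and all function names are assumptions.

```python
from collections import Counter

def learn_whitelist(normal_logs, min_count=2):
    """Build a whitelist of (src_ip, dst_port) pairs seen repeatedly in normal traffic."""
    counts = Counter((ip, port) for ip, port in normal_logs)
    return {pair for pair, n in counts.items() if n >= min_count}

def filter_packet(whitelist, packet, flood_counter, flood_limit=100):
    """Allow a packet only if it matches the learned whitelist and stays under a flood limit."""
    src, port = packet
    if (src, port) not in whitelist:
        return "block"              # source/port never seen in normal traffic (spoofing, scanning)
    flood_counter[src] += 1
    if flood_counter[src] > flood_limit:
        return "block"              # individually normal packets arriving in bulk (flooding)
    return "allow"
```

The point of the sketch is that blocking is decided entirely by learned normal patterns plus a volume cap, so no manual policy entry is needed.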

Decision support system for underground coal pillar stability using unsupervised and supervised machine learning approaches

  • Kamran, Muhammad; Shahani, Niaz Muhammad; Armaghani, Danial Jahed
    • Geomechanics and Engineering, v.30 no.2, pp.107-121, 2022
  • Coal pillar assessment is of broad importance to underground engineering structure, as the pillar failure can lead to enormous disasters. Because of the highly non-linear correlation between the pillar failure and its influential attributes, conventional forecasting techniques cannot generate accurate outcomes. To approximate the complex behavior of coal pillar, this paper elucidates a new idea to forecast the underground coal pillar stability using combined unsupervised-supervised learning. In order to build a database of the study, a total of 90 patterns of pillar cases were collected from authentic engineering structures. A state-of-the art feature depletion method, t-distribution symmetric neighbor embedding (t-SNE) has been employed to reduce significance of actual data features. Consequently, an unsupervised machine learning technique K-mean clustering was followed to reassign the t-SNE dimensionality reduced data in order to compute the relative class of coal pillar cases. Following that, the reassign dataset was divided into two parts: 70 percent for training dataset and 30 percent for testing dataset, respectively. The accuracy of the predicted data was then examined using support vector classifier (SVC) model performance measures such as precision, recall, and f1-score. As a result, the proposed model can be employed for properly predicting the pillar failure class in a variety of underground rock engineering projects.
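The 70/30 split and the precision/recall/F1 measures used in this evaluation can be written out in plain Python. This is a minimal sketch of the metrics only (the paper presumably uses standard library implementations, and the function names here are assumptions):

```python
def split_70_30(data):
    """Split an ordered dataset 70% for training, 30% for testing."""
    cut = int(len(data) * 0.7)
    return data[:cut], data[cut:]

def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary precision, recall, and F1 for one class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```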

Machine Learning-based Concrete Crack Detection Framework for Facility Maintenance (시설물의 유지관리를 위한 기계학습 기반 콘크리트 균열 감지 프레임워크)

  • Ji, Bongjun
    • Journal of the Korean GEO-environmental Society, v.22 no.10, pp.5-12, 2021
  • The deterioration of facilities is an unavoidable phenomenon. For the management of aging facilities, cracks can be detected and tracked, and the condition of the facilities can be indirectly inferred; crack detection therefore plays a crucial role in managing aged facilities. Conventional maintenance is conducted using crack detection results; for example, maintenance activities to prevent further deterioration can be performed. Currently, however, most crack detection relies solely on human judgment, so when the facility area is large, excessive cost and time are required, and judgment results may differ depending on the expert's competence, which causes reliability problems. This paper proposes a machine learning-based concrete crack detection framework to overcome these limitations. The proposed framework enables fully automated concrete crack detection with a high accuracy of 96%. It is expected that effective and efficient facility management will be possible through the proposed framework.
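For intuition about what automated crack detection must do, here is a deliberately simple classical baseline: flag a grayscale patch when it contains enough dark pixels (cracks show up as thin dark regions). This is emphatically not the paper's learned model, just a reference point; thresholds and names are invented.

```python
def crack_score(patch, dark_threshold=80):
    """Fraction of dark pixels in a grayscale patch (0-255 values)."""
    flat = [px for row in patch for px in row]
    return sum(1 for px in flat if px < dark_threshold) / len(flat)

def detect_crack(patch, score_threshold=0.05):
    """Label a patch as cracked when enough dark pixels are present."""
    return crack_score(patch) > score_threshold
```

A learned model replaces these hand-set thresholds with features fitted to labeled crack images, which is what lets the framework in the paper reach 96% accuracy without per-site tuning.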

Autism Spectrum Disorder Detection in Children using the Efficacy of Machine Learning Approaches

  • Tariq Rafiq; Zafar Iqbal; Tahreem Saeed; Yawar Abbas Abid; Muneeb Tariq; Urooj Majeed; Akasha
    • International Journal of Computer Science & Network Security, v.23 no.4, pp.179-186, 2023
  • The sound growth of children is essential for the future prosperity of any society. Autism Spectrum Disorder (ASD) is a neurobehavioral disorder that impairs the social interaction of an autistic child and undesirably affects learning, speaking, and responding skills. These children also show over- or under-sensitivity to touch, smell, and hearing. Symptoms usually appear in children aged 4 to 11 years, but parents often do not notice them and fail to detect the disorder at an early stage. Current diagnosis relies on clinical sessions, which are very time-consuming and expensive. To complement the conventional method, machine learning techniques are being used, improving both the time required for diagnosis and its precision. We applied a TFLite model to an image-based dataset to predict autism from children's facial features. Afterwards, various machine learning techniques, including Logistic Regression, KNN, Gaussian Naïve Bayes, Random Forest, and Multi-Layer Perceptron, were trained on the Autism Spectrum Quotient (AQ) dataset to improve the accuracy of ASD detection. On the image-based dataset, the TFLite model shows 80% accuracy, and on the AQ dataset we achieved 100% accuracy with the Logistic Regression and MLP models.
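Logistic regression on binary questionnaire answers, as used here on the AQ dataset, can be sketched from scratch. This is a generic implementation on toy data, not the paper's pipeline; the learning rate, epoch count, and toy features are assumptions.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=200):
    """Minimal logistic regression trained with stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid probability of class 1
            err = p - yi                            # gradient of the log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Threshold the sigmoid output at 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0
```

On questionnaire-style data that is linearly separable (which a 100% accuracy result implies for the AQ dataset), this procedure classifies every case correctly.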

Effective Policy Search Method for Robot Reinforcement Learning with Noisy Reward (노이즈 환경에서 효과적인 로봇 강화 학습의 정책 탐색 방법)

  • Yang, Young-Ha; Lee, Cheol-Soo
    • The Journal of Korea Robotics Society, v.17 no.1, pp.1-7, 2022
  • Robots are widely used in industry and services. Traditional robots have been used to perform repetitive tasks in fixed environments, and problems involving complicated physical interaction with the surrounding environment or other objects are very difficult to solve with existing control methods. Reinforcement learning has been actively studied as a machine learning approach to such problems, and it provides answers to problems that robots could not solve in the conventional way. Learning on any physical robot is affected by noise: control errors of the robot, performance limitations of measurement equipment, and the complexity of physical interactions with surrounding environments and objects can all degrade learning. A learning method that works well in a virtual environment may not be very effective on a real robot. Therefore, this paper proposes a weighted sum method and a linear regression method as effective and accurate learning methods in a noisy environment. In addition, bottle flipping was trained on a robot, and by comparison with the existing learning method, the validity of the proposed methods was verified.
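Both proposed ingredients can be illustrated generically: a weighted sum combines repeated noisy reward measurements, and a least-squares line fit smooths rewards across trials. This is a sketch of the standard techniques named in the abstract, not the authors' exact formulation.

```python
def weighted_reward(rewards, weights):
    """Weighted sum of repeated noisy reward measurements (weights should sum to 1)."""
    return sum(w * r for w, r in zip(weights, rewards))

def linreg_fit(xs, ys):
    """Closed-form least-squares line y = a*x + b, smoothing noisy rewards over trials."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

The fitted line gives a denoised reward estimate for any trial, which is what keeps a policy search from chasing individual noisy measurements.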

Prediction of Power Consumptions Based on Gated Recurrent Unit for Internet of Energy (에너지 인터넷을 위한 GRU기반 전력사용량 예측)

  • Lee, Dong-gu; Sun, Young-Ghyu; Sim, Is-sac; Hwang, Yu-Min; Kim, Sooh-wan; Kim, Jin-Young
    • Journal of IKEEE, v.23 no.1, pp.120-126, 2019
  • Recently, accurate prediction of power consumption based on machine learning techniques in the Internet of Energy (IoE) has been actively studied using the large amount of electricity data acquired from advanced metering infrastructure (AMI). In this paper, we propose a deep learning model based on the Gated Recurrent Unit (GRU) as an artificial intelligence (AI) network that can effectively perform pattern recognition on time series data such as power consumption, and we analyze prediction performance on real household power usage data. The performance analysis compares the proposed GRU-based learning model with the conventional Long Short-Term Memory (LSTM) learning model. In the simulation results, mean squared error (MSE), mean absolute error (MAE), forecast skill score, normalized root mean square error (RMSE), and normalized mean bias error (NMBE) are used as performance evaluation indexes, and we confirm that the prediction performance of the proposed GRU-based learning model is greatly improved.
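The GRU recurrence that makes this architecture cheaper than LSTM (two gates instead of three, no separate cell state) can be written for a single scalar unit. This shows the standard GRU equations, not the paper's trained network; the weight values are arbitrary.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, p):
    """One scalar GRU step. p = (wz, uz, wr, ur, wh, uh) are the unit's weights."""
    wz, uz, wr, ur, wh, uh = p
    z = sigmoid(wz * x + uz * h)                 # update gate: how much to overwrite
    r = sigmoid(wr * x + ur * h)                 # reset gate: how much history to use
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_tilde             # interpolate old and candidate state
```

Run over a power-consumption sequence one reading at a time, the hidden state `h` accumulates the usage pattern that the output layer then maps to a forecast.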

Machine Diagnosis and Maintenance Policy Generation Using Adaptive Decision Tree and Shortest Path Problem (적응형 의사결정 트리와 최단 경로법을 이용한 기계 진단 및 보전 정책 수립)

  • 백준걸
    • Journal of the Korean Operations Research and Management Science Society, v.27 no.2, pp.33-49, 2002
  • Condition-based maintenance (CBM) has increasingly drawn attention in industry because of its many benefits. The CBM problem is characterized as a state-dependent scheduling model that demands simultaneous maintenance actions, one for each attribute that influences machine condition. This problem is very hard to solve within the conventional Markov decision process framework. In this paper, we present an intelligent machine maintenance scheduler, for which a new incremental decision tree learning method is developed as an evolutionary system identification model and a shortest path problem is used as the schedule generation model. Although our approach does not guarantee an optimal scheduling policy from a mathematical viewpoint, we verified through simulation-based experiments that the intelligent scheduler is capable of providing good scheduling policies that can be used in practice.
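Generating a schedule as a shortest path means treating machine states as graph nodes and maintenance actions as weighted edges, then running a standard algorithm such as Dijkstra's. The sketch below uses invented state names and costs; the paper's actual state graph comes from its learned decision tree.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a state graph; edge weights are maintenance-action costs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [goal]                             # reconstruct the action sequence
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[goal]
```

The returned node sequence is the maintenance plan and the returned distance is its total cost.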

A Novel Image Classification Method for Content-based Image Retrieval via a Hybrid Genetic Algorithm and Support Vector Machine Approach

  • Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology, v.10 no.3, pp.75-81, 2011
  • This paper presents a novel method for image classification based on a hybrid genetic algorithm (GA) and support vector machine (SVM) approach, which can significantly improve classification performance for content-based image retrieval (CBIR). Although SVM has been widely applied to CBIR, it has problems such as kernel parameter setting and feature subset selection, which affect classification accuracy during learning. This study aims at simultaneously optimizing the SVM parameters and the feature subset, without degrading the classification accuracy of the SVM, using a GA for CBIR. Using the hybrid GA-SVM model, we can classify more images in the database effectively. Experiments were carried out on a large image database, and the results show that the classification accuracy of a conventional SVM can be improved significantly with the proposed model. We also found that the proposed model outperformed all the other models tested, including neural network and typical SVM models.
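A GA searching over SVM kernel parameters can be sketched with a generic real-coded loop. The fitness function below is a toy stand-in for cross-validation accuracy (peaking at an assumed optimum of C=1.0, gamma=0.5); the GA operators, population size, and bounds are all illustrative choices, not the paper's configuration.

```python
import random

def ga_optimize(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Simple real-coded GA: keep the best half, breed children by blend
    crossover plus Gaussian mutation, clipped to the parameter bounds."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # blend crossover
            child = [min(max(g + rng.gauss(0, 0.1), lo), hi)  # mutate and clip
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness standing in for SVM cross-validation accuracy over (C, gamma).
best = ga_optimize(lambda ind: -((ind[0] - 1.0) ** 2 + (ind[1] - 0.5) ** 2),
                   bounds=[(0.01, 10.0), (0.001, 1.0)])
```

In the paper's setting, the individual would also carry a binary feature-selection mask, and `fitness` would train and validate an SVM per candidate.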

Research Trends in Quantum Error Decoders for Fault-Tolerant Quantum Computing (결함허용 양자 컴퓨팅을 위한 양자 오류 복호기 연구 동향)

  • E.Y. Cho; J.H. On; C.Y. Kim; G. Cha
    • Electronics and Telecommunications Trends, v.38 no.5, pp.34-50, 2023
  • Quantum error correction is a key technology for achieving fault-tolerant quantum computation. Finding the best decoding solution to a single error syndrome pattern counteracting multiple errors is an NP-hard problem. Consequently, error decoding is one of the most expensive processes to protect the information in a logical qubit. Recent research on quantum error decoding has been focused on developing conventional and neural-network-based decoding algorithms to satisfy accuracy, speed, and scalability requirements. Although conventional decoding methods have notably improved accuracy in short codes, they face many challenges regarding speed and scalability in long codes. To overcome such problems, machine learning has been extensively applied to neural-network-based error decoding with meaningful results. Nevertheless, when using neural-network-based decoders alone, the learning cost grows exponentially with the code size. To prevent this problem, hierarchical error decoding has been devised by combining conventional and neural-network-based decoders. In addition, research on quantum error decoding is aimed at reducing the spacetime decoding cost and solving the backlog problem caused by decoding delays when using hardware-implemented decoders in cryogenic environments. We review the latest research trends in decoders for quantum error correction with high accuracy, neural-network-based quantum error decoders with high speed and scalability, and hardware-based quantum error decoders implemented in real qubit operating environments.
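The decoding task described here (mapping a measured syndrome to the most likely correction) is easiest to see in the smallest example: a lookup-table decoder for the 3-qubit bit-flip repetition code. The codes surveyed in the article are far larger, which is exactly why lookup tables stop scaling and conventional, neural-network-based, and hybrid decoders become necessary.

```python
def syndrome(bits):
    """Parity checks of the 3-qubit bit-flip repetition code: (q0^q1, q1^q2)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Lookup-table decoder: each syndrome maps to the most likely single bit flip.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def decode(bits):
    """Apply the correction indicated by the measured syndrome."""
    flip = CORRECTION[syndrome(bits)]
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return out
```

Any single bit flip is corrected; two or more flips defeat this code, mirroring the general problem that one syndrome pattern can correspond to multiple error sets.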

Study of Improved CNN Algorithm for Object Classification Machine Learning of Simple High Resolution Image (고해상도 단순 이미지의 객체 분류 학습모델 구현을 위한 개선된 CNN 알고리즘 연구)

  • Hyeopgeon Lee; Young-Woon Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.16 no.1, pp.41-49, 2023
  • A convolutional neural network (CNN) is a representative algorithm for implementing artificial neural networks. CNNs mitigate the rapid increase in computation and the low object classification rates associated with conventional multi-layered fully-connected neural networks (FNNs). However, because of the rapid development of IT devices, the maximum resolution of images captured by current smartphone and tablet cameras has reached 108 megapixels (MP). A traditional CNN algorithm therefore requires significant cost and time to learn and process simple, high-resolution images. This study proposes an improved CNN algorithm for implementing an object classification learning model for such images. The proposed method alters the adjacency matrix value of the max pooling operation in the CNN's pooling layer to reduce the creation time of the high-resolution image learning model. A learning model capable of processing 4, 8, and 12 MP high-resolution images was implemented for each altered matrix value. The performance evaluation showed that the creation time of the learning model implemented with the proposed algorithm decreased by 36.26% for 12 MP images. Compared to the conventional model, the difference in the proposed learning model's object recognition accuracy and loss rate was less than 1%, which is within the acceptable error range. Practical verification through future studies is necessary, using more varied image types and a larger amount of image data than in this study.
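The lever the paper pulls is visible in plain max pooling: widening the pooling window and stride shrinks the output feature map, and with it the downstream computation. The exact "adjacency matrix value" modification is not specified in this abstract, so the sketch below shows only the general window/stride mechanism.

```python
def max_pool(image, window=2, stride=2):
    """2D max pooling over a matrix of pixel values. A larger window/stride
    shrinks the output more aggressively, reducing later-layer computation."""
    h = len(image)
    w = len(image[0])
    out = []
    for i in range(0, h - window + 1, stride):
        row = []
        for j in range(0, w - window + 1, stride):
            row.append(max(image[i + di][j + dj]
                           for di in range(window) for dj in range(window)))
        out.append(row)
    return out
```

For a simple image, most of the discarded resolution carried no class information, which is why the abstract reports under 1% change in accuracy despite the 36.26% drop in training time.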