• Title/Summary/Keyword: Machine Learning and Artificial Intelligence


Performance Comparison of Machine Learning in the Various Kind of Prediction (다양한 종류의 예측에서 머신러닝 성능 비교)

  • Park, Gwi-Man;Bae, Young-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.1 / pp.169-178 / 2019
  • Nowadays we can perform various kinds of prediction by applying machine learning, a field of artificial intelligence; finding the best algorithm for a given task, however, remains a problem. This paper predicts monthly power trading volume, monthly power trading value, the monthly index of production extension, final energy consumption, and automotive diesel consumption using supervised machine learning algorithms, and identifies the best-fitting algorithm for each case. To do this, we show each algorithm's accuracy in predicting the value of each of the five targets, then average the predicted values. Finally, we confirm which algorithm performs best overall.
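The kind of head-to-head comparison this abstract describes can be sketched as below; the synthetic monthly features, targets, and the particular regressors are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of comparing supervised regressors on monthly series,
# in the spirit of the paper; data, features, and model list are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))          # 120 months, 4 lagged/exogenous features
y = X @ np.array([1.5, -0.7, 0.3, 0.0]) + rng.normal(scale=0.1, size=120)

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "svr": SVR(kernel="rbf"),
}
for name, model in models.items():
    # Negative MSE via 5-fold cross-validation; lower MSE means a better fit.
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"{name}: mean MSE = {-scores.mean():.4f}")
```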

A Network Packet Analysis Method to Discover Malicious Activities

  • Kwon, Taewoong;Myung, Joonwoo;Lee, Jun;Kim, Kyu-il;Song, Jungsuk
    • Journal of Information Science Theory and Practice / v.10 no.spc / pp.143-153 / 2022
  • With the development of networks and the increase in the number of network devices, the number of cyber attacks targeting them is also increasing. Since these attacks aim to steal important information and destroy systems, it is necessary to minimize social and economic damage through early detection and rapid response. Many studies using machine learning (ML) and artificial intelligence (AI) have been conducted, among which payload learning is one of the most intuitive and effective ways to detect malicious behavior. In this study, we propose a preprocessing method that maximizes model performance when learning the payload in term units. The proposed method constructs a high-quality training data set by eliminating unnecessary noise (stopwords) while preserving important features, taking into account both the machine-language and natural-language characteristics of the packet payload. Our method consists of three steps: preserving significant special characters, generating a stopword list, and refining class labels. By processing packets of varied and complex structure through these three steps, it is possible to build high-quality training data for high-performance ML/AI security-monitoring models. We demonstrate the effectiveness of the proposed method by comparing the performance of an AI model with and without it applied. Furthermore, by evaluating the model in a real-world Security Operations Center (SOC) environment with live network traffic, we demonstrate the applicability of our method to real environments.
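A minimal sketch of term-level payload preprocessing along these lines is given below; the set of preserved characters and the stopword list are toy placeholders, not the paper's actual lists.

```python
# Sketch of term-level payload preprocessing: keep protocol-relevant special
# characters as their own tokens, then drop high-frequency noise terms.
# The character set and stopword list are illustrative placeholders.
import re

PRESERVED = r"[/=?&:.\-]"                           # significant special characters (assumed)
STOPWORDS = {"http", "host", "gzip", "keep-alive"}  # assumed noise terms

def tokenize_payload(payload: str) -> list[str]:
    # Split into alphanumeric terms, but keep preserved characters as separate
    # tokens so structural features like "/admin" or "id=1" survive.
    tokens = re.findall(rf"{PRESERVED}|[A-Za-z0-9_]+", payload)
    return [t.lower() for t in tokens if t.lower() not in STOPWORDS]

print(tokenize_payload("GET /admin?id=1 HTTP/1.1 Host: victim.example"))
```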

Humming: Image Based Automatic Music Composition Using DeepJ Architecture (허밍: DeepJ 구조를 이용한 이미지 기반 자동 작곡 기법 연구)

  • Kim, Taehun;Jung, Keechul;Lee, Insung
    • Journal of Korea Multimedia Society / v.25 no.5 / pp.748-756 / 2022
  • Thanks to the match between AlphaGo and Lee Sedol, machine learning has received worldwide attention and huge investment. Improvements in computing devices have greatly contributed to big-data processing and the development of neural networks. Artificial intelligence not only imitates human beings in many fields but sometimes appears to exceed human capabilities. Although human creativity is still considered superior, several artificial intelligences continue to challenge it, and the quality of some AI-generated creative work is as good as that produced by human beings; sometimes the two are indistinguishable, because a neural network can learn the common features contained in big data and reproduce them. To examine whether artificial intelligence can express the inherent characteristics of different arts, this paper proposes a new neural network model called Humming. It is an experimental model that combines VGG16, which extracts image features, with DeepJ's architecture, which excels at generating music in various genres. A dataset produced by our experiment shows meaningful and valid results. Different results, however, appear when the amount of data is increased: the network produced similar musical patterns even for different classes of images, which was not what we were aiming for. Nevertheless, these attempts may have significance as a starting point for feature transfer, which will be studied further.
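One way to sketch the image-to-music conditioning idea is shown below. The VGG16 feature extractor follows the abstract, while `MusicDecoder` is a hypothetical stand-in for DeepJ's generator, which is not reproduced here.

```python
# Sketch of image-conditioned composition: a VGG16 feature vector from an
# image conditions a sequence generator. `MusicDecoder` is a placeholder.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=None)  # pretrained weights would be used in practice
vgg.eval()
feature_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())

class MusicDecoder(nn.Module):
    """Placeholder conditioner: maps image features to note-event logits."""
    def __init__(self, feat_dim=512 * 7 * 7, hidden=256, vocab=128):
        super().__init__()
        self.project = nn.Linear(feat_dim, hidden)   # image feature -> initial state
        self.rnn = nn.GRU(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, image_feat, steps=32):
        h = self.project(image_feat).unsqueeze(0)    # (1, batch, hidden)
        x = torch.zeros(image_feat.size(0), steps, h.size(-1))
        out, _ = self.rnn(x, h)
        return self.head(out)                        # (batch, steps, vocab)

with torch.no_grad():
    img = torch.randn(1, 3, 224, 224)                # stand-in image tensor
    logits = MusicDecoder()(feature_extractor(img))
    print(logits.shape)                              # torch.Size([1, 32, 128])
```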

Data-Driven Approach for Lithium-Ion Battery Remaining Useful Life Prediction: A Literature Review

  • Luon Tran Van;Lam Tran Ha;Deokjai Choi
    • Smart Media Journal / v.11 no.11 / pp.63-74 / 2022
  • Lithium-ion batteries have become increasingly popular around the world, and knowing when a battery reaches its end of life (EOL) is crucial. Accurately predicting the remaining useful life (RUL) of lithium-ion batteries is needed for battery health management systems and to avoid unexpected accidents: it tells us the battery's status and when it should be replaced. With the rapid growth of machine learning and deep learning, data-driven approaches have been proposed to address this problem. Extracting aging information from battery charge/discharge records, including voltage, current, and temperature, can determine the battery state and predict its RUL. In this work, we first outline the charging and discharging processes of lithium-ion batteries. We then summarize the proposed techniques and achievements across published data-driven RUL prediction studies. From that, we discuss the accomplishments, the remaining work, and the corresponding challenges, in order to provide a direction for further research in this area.
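The data-driven RUL formulation this review surveys can be sketched as a regression from per-cycle health features to remaining cycles; the synthetic capacity-fade data and the gradient-boosted model below are illustrative assumptions, not any surveyed paper's exact setup.

```python
# Sketch of a data-driven RUL regressor: per-cycle health features (here
# synthetic stand-ins for voltage/current/temperature statistics) map to the
# number of cycles remaining before end of life (EOL).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n_cycles = 500
capacity = 1.0 - 0.0012 * np.arange(n_cycles) + rng.normal(0, 0.002, n_cycles)
eol_cycle = int(np.argmax(capacity < 0.8))         # EOL at 80% of rated capacity
X = np.column_stack([capacity,                     # proxy aging features
                     np.gradient(capacity),
                     rng.normal(25, 0.5, n_cycles)])    # mean cycle temperature
y = np.maximum(eol_cycle - np.arange(n_cycles), 0)      # RUL label per cycle

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE (cycles):", mean_absolute_error(y_te, model.predict(X_te)))
```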

Extracting characteristics of underachievers learning using artificial intelligence and researching a prediction model (인공지능을 이용한 학습부진 특성 추출 및 예측 모델 연구)

  • Yang, Ja-Young;Moon, Kyong-Hi;Park, Seong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.510-518 / 2022
  • Diagnostic evaluation conducted at the national level is very important for the early detection of underachievers in school. This study used artificial intelligence methods to find the characteristics of underachievement that affect the learning development of middle school students. An artificial intelligence model was constructed and analyzed on the Busan Education Longitudinal Data, predicting 2020 outcomes from data collected in the first year of middle school in 2019. A predictive model was developed with machine learning algorithms to predict basic academic ability in middle school Korean, English, and mathematics; its accuracy in predicting the next school year was confirmed to be 78%, 82%, and 83%, respectively. In addition, by drawing an achievement-prediction decision tree for each subject, we analyzed the prediction process and examined which characteristics affect achievement prediction.
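A minimal sketch of the decision-tree part of this setup follows; the student features and data are invented for illustration, not the Busan longitudinal variables.

```python
# Sketch of achievement prediction with a decision tree, printed afterwards
# to inspect the prediction process. Feature names and data are invented.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=400, n_features=5, n_informative=3,
                           random_state=0)
features = ["attendance", "homework", "reading_time", "self_efficacy", "absences"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", tree.score(X, y))
print(export_text(tree, feature_names=features))   # inspect the decision rules
```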

Residual Blocks-Based Convolutional Neural Network for Age, Gender, and Race Classification (연령, 성별, 인종 구분을 위한 잔차블록 기반 컨볼루션 신경망)

  • Khasanova Nodira Gayrat Kizi;Bong-Kee Sin
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.568-570 / 2023
  • The problem of classifying age, gender, and race from images still poses challenges. Despite strides in deep learning and machine learning, convolutional neural networks (CNNs) remain pivotal in addressing these issues. This paper introduces a CNN-based approach for accurate and efficient age, gender, and race classification. By leveraging CNNs with residual blocks, our method enhances learning while minimizing computational complexity. The model effectively captures both low-level and high-level features, yielding improved classification accuracy. Evaluation on the diverse FairFace dataset shows our model achieving 56.3%, 94.6%, and 58.4% accuracy for age, gender, and race, respectively.
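The residual block the title refers to can be sketched as follows; the channel sizes are illustrative, and the paper's three classification heads are not reproduced.

```python
# Sketch of a residual block: two conv layers whose output is added back to
# the (possibly projected) input, so gradients flow through the skip path.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch))
        # 1x1 projection when the shape changes, so the skip path still adds up.
        self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch else
                     nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))

x = torch.randn(2, 32, 56, 56)
print(ResidualBlock(32, 64, stride=2)(x).shape)  # torch.Size([2, 64, 28, 28])
```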

Towards Effective Analysis and Tracking of Mozilla and Eclipse Defects using Machine Learning Models based on Bugs Data

  • Hassan, Zohaib;Iqbal, Naeem;Zaman, Abnash
    • Soft Computing and Machine Intelligence / v.1 no.1 / pp.1-10 / 2021
  • The analysis and tracking of bug reports is a challenging field in mining software repositories. It is one of the fundamental ways to explore the large amounts of data acquired from defect tracking systems and to discover patterns and valuable knowledge about the bug-triaging process. Bug data is publicly accessible from defect tracking systems such as Bugzilla and JIRA, and with robust machine learning (ML) techniques it is quite possible to process and analyze massive amounts of data to extract underlying patterns, knowledge, and insights. It is therefore an interesting area in which to propose innovative and robust solutions for analyzing and tracking bug reports originating from different open-source projects, including Mozilla and Eclipse. This research study presents an ML-based classification model to analyze and track bug defects and thereby enhance software engineering management (SEM) processes. In this work, Artificial Neural Network (ANN) and Naive Bayes (NB) classifiers are implemented on open-source bug datasets from Mozilla and Eclipse, different evaluation measures are employed to analyze the experimental results, and a comparative analysis of ANN and NB is given. The experimental results indicate that the ANN achieves higher accuracy than the NB. The proposed study will enhance SEM processes and contribute to the body of knowledge in the data mining field.
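The ANN-versus-NB comparison can be sketched with scikit-learn as below; the toy bug reports and labels stand in for the Mozilla and Eclipse datasets, and the model settings are illustrative.

```python
# Sketch of comparing an ANN (MLP) and Naive Bayes on bug-report text via
# TF-IDF features. The example reports and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

reports = ["crash on startup when profile is missing",
           "toolbar icon misaligned after theme change",
           "segfault in renderer process on large pages",
           "wrong tooltip text on save button"]
labels = ["crash", "ui", "crash", "ui"]

for clf in (MultinomialNB(),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(reports, labels)
    print(type(clf).__name__,
          model.predict(["browser crashes when loading pdf"]))
```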

A DDoS attack Mitigation in IoT Communications Using Machine Learning

  • Hailye Tekleselase
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.170-178 / 2024
  • With the growth of fifth-generation networks and artificial intelligence technologies, new threats and challenges have emerged for wireless communication systems, especially in cybersecurity, and IoT networks have become increasingly attractive targets for DDoS attacks due to their inherently weaker security and the resource-constrained nature of IoT devices. This paper focuses on detecting DDoS attacks in wireless networks by classifying incoming network packets on the transport layer as either "abnormal" or "normal", using machine learning algorithms integrated with a knowledge-based system. Deep learning algorithms and a CNN were independently trained to mitigate DDoS attacks, with emphasis on misuse-based DDoS attacks comprising TCP SYN flood and ICMP flood. The CICIDS2017 and NSL-KDD datasets were used to train and test the models during the experimentation phase, and accuracy was used to measure the classification performance of the four algorithms. The results show that an accuracy of 99.93% was recorded.
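The packet-classification step can be sketched as a small 1-D CNN over raw packet bytes, as below; the architecture and sizes are assumptions for illustration, not the paper's exact model.

```python
# Sketch of a binary packet classifier: a small 1-D CNN over fixed-length
# byte sequences, emitting "normal" vs "abnormal" logits.
import torch
import torch.nn as nn

class PacketCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 16)       # one vector per byte value
        self.conv = nn.Conv1d(16, 32, kernel_size=5, padding=2)
        self.head = nn.Linear(32, 2)             # "normal" vs "abnormal" logits

    def forward(self, packet_bytes):             # (batch, seq_len) int64
        x = self.embed(packet_bytes).transpose(1, 2)   # (batch, 16, seq_len)
        x = torch.relu(self.conv(x)).amax(dim=2)       # global max pooling
        return self.head(x)

pkts = torch.randint(0, 256, (8, 1500))          # 8 fake packets, 1500 bytes each
print(PacketCNN()(pkts).shape)                   # torch.Size([8, 2])
```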

A Study on Efficient Memory Management Using Machine Learning Algorithm

  • Park, Beom-Joo;Kang, Min-Soo;Lee, Minho;Jung, Yong Gyu
    • International journal of advanced smart convergence / v.6 no.1 / pp.39-43 / 2017
  • As industry grows, the amount of data grows exponentially, and analysis of this data serves as a predictive solution. As data sizes and processing speeds have increased, big data analysis has begun to be combined with artificial intelligence technology and applied to new fields. In this paper, we propose a method to speed up a machine learning based algorithm through efficient resource allocation: the proposed algorithm learns the distinct values of each attribute and allocates the right amount of memory per attribute. To evaluate its performance, we compared the proposed algorithm with the existing K-means algorithm; measuring execution time showed an improvement in speed.
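The allocate-by-attribute idea can be illustrated as below: choose the narrowest type that fits each attribute's distinct values rather than one uniform wide type. This illustrates the principle only, not the paper's algorithm.

```python
# Sketch of per-attribute memory allocation: downcast each integer column to
# the smallest dtype that holds its values. Data are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "flag": [0, 1, 0, 1],                       # 2 distinct values -> 1 byte
    "category": [3, 7, 3, 9],                   # small integer codes
    "reading": [1.25e6, 3.1e6, 2.2e6, 9.9e6],   # stays float
})

before = df.memory_usage(deep=True).sum()
for col in df.select_dtypes("integer"):
    # Downcast each integer attribute to the narrowest unsigned type that fits.
    df[col] = pd.to_numeric(df[col], downcast="unsigned")
after = df.memory_usage(deep=True).sum()
print(f"bytes before={before}, after={after}")
```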

Study on Quantized Learning for Machine Learning Equation in an Embedded System (임베디드 시스템에서의 양자화 기계학습을 위한 양자화 오차보상에 관한 연구)

  • Seok, Jinwuk;Kim, Jeong-Si
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.110-113 / 2019
  • This paper proposes a methodology for effectively compensating the quantization error that arises when quantized machine learning is performed on an embedded system. In machine learning and nonlinear signal-processing algorithms that use gradients, quantization error causes early vanishing gradients and degrades the overall performance of the algorithm. To compensate for this, we derive a compensation search vector orthogonal to the largest component of the gradient, offsetting the performance loss caused by quantization error. In addition, instead of the conventional fixed learning rate, we propose an adaptive learning-rate algorithm based on nonlinear optimization without an inner loop. Experimental results confirm that applying the proposed algorithm to nonlinear optimization problems minimizes the performance degradation caused by quantization error.
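One plausible reading of the compensation step is sketched with NumPy below; the uniform quantizer, the step sizes, and the construction of the orthogonal compensation vector are assumptions for illustration, not the authors' exact derivation.

```python
# Sketch of the compensation idea: after quantizing the gradient, add a
# correction from the quantization residual that is orthogonal to the
# dominant gradient component. Quantizer and step sizes are illustrative.
import numpy as np

def quantize(v, step=0.25):
    return np.round(v / step) * step           # uniform quantizer (assumed)

def compensated_step(w, grad, lr=0.1):
    q = quantize(grad)
    residual = grad - q                        # quantization error
    comp = residual.copy()
    comp[np.argmax(np.abs(q))] = 0.0           # orthogonal to dominant component
    return w - lr * (q + comp)

w = np.array([1.0, -2.0, 0.5])
grad = np.array([0.9, -0.33, 0.18])
print(compensated_step(w, grad))
```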
