• Title/Summary/Keyword: Machine Learning and Artificial Intelligence

Search results: 747 items (processing time: 0.03 s)

다양한 종류의 예측에서 머신러닝 성능 비교 (Performance Comparison of Machine Learning in the Various Kind of Prediction)

  • 박귀만;배영철
    • 한국전자통신학회논문지
    • /
    • Vol. 14, No. 1
    • /
    • pp.169-178
    • /
    • 2019
  • Machine learning, a branch of artificial intelligence, is now applied to many kinds of prediction, but which type of algorithm is the best choice in a given real-world setting remains a constant question. This paper uses several supervised machine-learning algorithms to predict monthly electricity trading volume, electricity trading value, the monthly production diffusion index, final energy consumption, and automotive diesel consumption, and determines which algorithm fits each case best. To do this, using the monthly series published by Statistics Korea for these five indicators, we show the probabilities of the values predicted by each machine-learning method, average the individual predictions, and identify which technique performs best.
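The comparison workflow the abstract describes — fit several predictors to each monthly series, then rank them by error — can be sketched with pure-Python stand-ins. The series and the two baseline "models" below are invented for illustration; the paper's actual ML algorithms and Statistics Korea data are not reproduced here.

```python
# Compare one-step-ahead forecasters on a monthly series and pick the
# best by mean absolute percentage error (MAPE).

def mape(actual, predicted):
    """Mean absolute percentage error; lower is better."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def last_value(history):
    return history[-1]

def moving_average(history, window=3):
    return sum(history[-window:]) / window

def evaluate(series, models, warmup=3):
    """Score each model by its one-step-ahead forecasts over the series."""
    scores = {}
    for name, model in models.items():
        preds = [model(series[:t]) for t in range(warmup, len(series))]
        scores[name] = mape(series[warmup:], preds)
    return scores

# Toy monthly power-trading volumes (hypothetical numbers).
volumes = [41.2, 43.5, 40.1, 39.8, 42.0, 44.3, 45.1, 43.9, 42.7, 44.8]
scores = evaluate(volumes, {"last": last_value, "ma3": moving_average})
best = min(scores, key=scores.get)
```

In a real comparison, each entry in the `models` dict would be a trained regressor rather than a fixed rule, but the ranking logic is the same.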

A Network Packet Analysis Method to Discover Malicious Activities

  • Kwon, Taewoong;Myung, Joonwoo;Lee, Jun;Kim, Kyu-il;Song, Jungsuk
    • Journal of Information Science Theory and Practice
    • /
    • Vol. 10, Special Issue
    • /
    • pp.143-153
    • /
    • 2022
  • With the development of networks and the increase in the number of network devices, the number of cyber attacks targeting them is also increasing. Since these attacks aim to steal important information and destroy systems, it is necessary to minimize social and economic damage through early detection and rapid response. Many studies using machine learning (ML) and artificial intelligence (AI) have been conducted; among them, payload learning is one of the most intuitive and effective ways to detect malicious behavior. In this study, we propose a preprocessing method that maximizes model performance when learning the payload in term units. The proposed method constructs a high-quality training data set by eliminating unnecessary noise (stopwords) and preserving important features, taking into account both the machine-language and natural-language characteristics of packet payloads. Our method consists of three steps: preserving significant special characters, generating a stopword list, and refining class labels. By processing packets of various and complex structures through these three steps, it is possible to build high-quality training data that helps build high-performance ML/AI models for security monitoring. We demonstrate the effectiveness of the proposed method by comparing the performance of AI models trained with and without it. Furthermore, by evaluating a model trained with the proposed method on live network traffic in a real-world Security Operation Center (SOC) environment, we demonstrate its applicability to real environments.
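A minimal sketch of the kind of term-level payload preprocessing the abstract outlines: space out significant special characters so they survive tokenization, then filter a stopword list. The character set, stopword list, and payload below are illustrative assumptions, not the authors' actual choices.

```python
# Term-level payload preprocessing: keep meaningful special characters
# as their own tokens, drop protocol-boilerplate stopwords.

SIGNIFICANT = {"/", "=", "?", "&", "%"}   # chars often meaningful in payloads
STOPWORDS = {"http", "1.1", "host", "accept", "connection"}

def tokenize(payload):
    # Pad significant special characters with spaces so they become tokens
    # instead of being glued to neighboring terms.
    for ch in SIGNIFICANT:
        payload = payload.replace(ch, f" {ch} ")
    return [t.lower() for t in payload.split() if t]

def preprocess(payload):
    # Stopword filtering after tokenization.
    return [t for t in tokenize(payload) if t not in STOPWORDS]

tokens = preprocess("GET /admin.php?cmd=whoami HTTP/1.1 Host: victim")
```

Attack-relevant terms such as `whoami` and the structural `=` survive, while generic HTTP boilerplate is dropped, which is the kind of signal/noise separation the abstract aims for.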

허밍: DeepJ 구조를 이용한 이미지 기반 자동 작곡 기법 연구 (Humming: Image Based Automatic Music Composition Using DeepJ Architecture)

  • 김태헌;정기철;이인성
    • 한국멀티미디어학회논문지
    • /
    • Vol. 25, No. 5
    • /
    • pp.748-756
    • /
    • 2022
  • Thanks to the match between AlphaGo and Lee Sedol, machine learning has received worldwide attention and huge investments. Improvements in computing devices have contributed greatly to big-data processing and the development of neural networks. Artificial intelligence not only imitates human beings in many fields but sometimes appears to exceed human capabilities. Although human creation is still considered superior, several artificial intelligences continue to challenge human creativity, and the quality of some AI-generated works is as good as those produced by human beings; sometimes the two are indistinguishable, because a neural network can learn the common features contained in big data and reproduce them. To examine whether artificial intelligence can express the inherent characteristics of different arts, this paper proposes a new neural network model called Humming. It is an experimental model that combines vgg16, which extracts image features, with DeepJ's architecture, which excels at creating music in various genres. A dataset produced by our experiment shows meaningful and valid results. Different results, however, are produced when the amount of data is increased: the network produced similar patterns of music even for different classes of images, which was not what we were aiming for. Nevertheless, these attempts may have significance as a starting point for feature transfer, which we will study further.

Data-Driven Approach for Lithium-Ion Battery Remaining Useful Life Prediction: A Literature Review

  • Luon Tran Van;Lam Tran Ha;Deokjai Choi
    • 스마트미디어저널
    • /
    • Vol. 11, No. 11
    • /
    • pp.63-74
    • /
    • 2022
  • Lithium-ion batteries have become popular around the world, and knowing when a battery reaches its end of life (EOL) is crucial. Accurately predicting the remaining useful life (RUL) of lithium-ion batteries is needed for battery health management systems and to avoid unexpected accidents: it tells us the battery's status and when it should be replaced. With the rapid growth of machine learning and deep learning, data-driven approaches have been proposed to address this problem. Extracting aging information from battery charge/discharge records, including voltage, current, and temperature, can determine the battery state and predict its RUL. In this work, we first outline the charging and discharging processes of lithium-ion batteries. We then summarize the techniques and achievements proposed in published data-driven RUL prediction studies. From that, we discuss the accomplishments, the remaining work, and the corresponding challenges, in order to provide a direction for further research in this area.
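The simplest data-driven RUL idea this kind of review covers can be illustrated as capacity-fade extrapolation: fit a trend to measured capacity per cycle and extrapolate to an EOL threshold (80% of nominal capacity is a common convention). The numbers below are synthetic; published methods use far richer features and models.

```python
# Fit a line to capacity-vs-cycle data and extrapolate to the EOL
# threshold to estimate remaining useful life (RUL).

def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

cycles = list(range(0, 100, 10))
capacity = [2.00 - 0.0008 * c for c in cycles]   # Ah, synthetic linear fade

slope, intercept = linear_fit(cycles, capacity)
eol_capacity = 0.8 * 2.00                         # 80%-of-nominal threshold
eol_cycle = (eol_capacity - intercept) / slope    # cycle where EOL is reached
rul = eol_cycle - cycles[-1]                      # cycles remaining from now
```

Real degradation is nonlinear (the "knee" effect), which is precisely why the surveyed work moves from such baselines to ML and deep-learning models.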

인공지능을 이용한 학습부진 특성 추출 및 예측 모델 연구 (Extracting characteristics of underachievers learning using artificial intelligence and researching a prediction model)

  • 양자영;문경희;박성호
    • 한국정보통신학회논문지
    • /
    • Vol. 26, No. 4
    • /
    • pp.510-518
    • /
    • 2022
  • For diagnostic assessments administered at the national level, early identification of underachieving students in schools is very important. This study built and analyzed an artificial-intelligence model that takes 2019 data on first-year middle-school students from the Busan Education Longitudinal Study as input and predicts achievement in 2020. Using machine-learning algorithms, we developed models predicting basic academic competence in middle-school Korean, English, and mathematics, and confirmed accuracies of 78%, 82%, and 83%, respectively, when predicting the following year's results. We also drew decision trees for achievement prediction in each subject and analyzed them to examine which features influence the prediction of achievement.
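The per-subject decision trees the abstract mentions can be illustrated with a one-level tree (a decision stump) that picks the feature and threshold best separating achieving from underachieving students. The feature names and data below are invented for the sketch; the study's actual model and longitudinal data are not reproduced.

```python
# One-level decision tree: try every (feature, threshold) split and keep
# the one with the highest training accuracy.

def stump_fit(rows, labels):
    """rows: list of dicts of numeric features; labels: 0/1 achievement."""
    best = None
    for feat in rows[0]:
        for row in rows:
            thr = row[feat]
            preds = [1 if r[feat] >= thr else 0 for r in rows]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or acc > best[2]:
                best = (feat, thr, acc)
    return best

# Hypothetical students: prior-year score and weekly study hours.
students = [
    {"prior_score": 82, "study_hours": 6},
    {"prior_score": 45, "study_hours": 2},
    {"prior_score": 71, "study_hours": 5},
    {"prior_score": 38, "study_hours": 1},
]
achieved = [1, 0, 1, 0]

feat, thr, acc = stump_fit(students, achieved)
```

A full decision tree recurses this split selection; inspecting which features end up near the root is the "which features influence achievement" analysis the abstract describes.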

연령, 성별, 인종 구분을 위한 잔차블록 기반 컨볼루션 신경망 (Residual Blocks-Based Convolutional Neural Network for Age, Gender, and Race Classification)

  • 하사노바 노디라;신봉기
    • 한국정보처리학회:학술대회논문집
    • /
    • Korea Information Processing Society 2023 Fall Conference
    • /
    • pp.568-570
    • /
    • 2023
  • Classifying age, gender, and race from images still poses challenges. Despite strides in deep learning and machine learning, convolutional neural networks (CNNs) remain pivotal in addressing these problems. This paper introduces a novel CNN-based approach for accurate and efficient age, gender, and race classification. By leveraging CNNs with residual blocks, our method enhances learning while minimizing computational complexity. The model effectively captures both low-level and high-level features, yielding improved classification accuracy. Evaluation on the diverse 'fair face' dataset shows our model achieving 56.3%, 94.6%, and 58.4% accuracy for age, gender, and race, respectively.
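The residual block at the heart of such a model can be reduced to its essential computation: the layers learn a correction f(x) and the skip connection adds the input back. The sketch below uses plain fully connected layers on small vectors as stand-ins for the paper's convolutional layers, with zero weights so the skip path's identity behavior is visible.

```python
# A residual block in miniature: out = ReLU(f(x) + x), where f is two
# "layers" (here simple matrix-vector products standing in for convs).

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w, b):
    # One fully connected layer: w is a list of rows, b a bias vector.
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

def residual_block(x, w1, b1, w2, b2):
    out = relu(linear(x, w1, b1))                    # first layer + activation
    out = linear(out, w2, b2)                        # second layer
    return relu([o + xi for o, xi in zip(out, x)])   # skip connection + ReLU

# With zero weights, f(x) = 0, so the block passes ReLU(x) straight through:
zeros = [[0.0] * 3 for _ in range(3)]
bias = [0.0] * 3
y = residual_block([1.0, -2.0, 3.0], zeros, bias, zeros, bias)
```

This identity-friendly default is why residual blocks ease training of deeper networks: the layers only have to learn the deviation from the identity mapping.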

Towards Effective Analysis and Tracking of Mozilla and Eclipse Defects using Machine Learning Models based on Bugs Data

  • Hassan, Zohaib;Iqbal, Naeem;Zaman, Abnash
    • Soft Computing and Machine Intelligence
    • /
    • Vol. 1, No. 1
    • /
    • pp.1-10
    • /
    • 2021
  • Analysis and tracking of bug reports is a challenging field in software repository mining. It is one of the fundamental ways to explore the large amounts of data acquired from defect tracking systems and to discover patterns and valuable knowledge about the bug-triaging process. Furthermore, bug data is publicly accessible from systems such as Bugzilla and JIRA, and with robust machine learning (ML) techniques it is possible to process and analyze massive amounts of data to extract underlying patterns, knowledge, and insights. It is therefore an interesting area in which to propose innovative and robust solutions for analyzing and tracking bug reports from different open-source projects, including Mozilla and Eclipse. This research study presents an ML-based classification model to analyze and track bug defects and thereby enhance software engineering management (SEM) processes. In this work, Artificial Neural Network (ANN) and Naive Bayesian (NB) classifiers are implemented on open-source bug datasets from Mozilla and Eclipse. Different evaluation measures are employed to analyze the experimental results, and a comparative analysis of ANN and NB is given. The experimental results indicate that the ANN achieves higher accuracy than the NB. The proposed study will enhance SEM processes and contribute to the body of knowledge in the data mining field.
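Of the two classifiers compared, the Naive Bayesian one is simple enough to sketch end to end: a multinomial NB over bug-report words with Laplace smoothing. The reports and labels below are invented; a real pipeline would train on Bugzilla/JIRA exports and add the ANN for comparison.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Count words per class; return (word_counts, class_counts, vocab)."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    vocab = set()
    for doc, y in zip(docs, labels):
        words = doc.lower().split()
        word_counts[y].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict_nb(doc, model):
    """Pick the class maximizing log P(class) + sum log P(word|class)."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for y, cy in class_counts.items():
        lp = math.log(cy / total)
        denom = sum(word_counts[y].values()) + len(vocab)
        for w in doc.lower().split():
            lp += math.log((word_counts[y][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best

reports = ["crash on startup null pointer", "ui button misaligned color",
           "segfault crash memory corruption", "font color wrong in ui"]
labels = ["critical", "minor", "critical", "minor"]

model = train_nb(reports, labels)
pred = predict_nb("crash with null pointer", model)
```

The ANN side of the comparison would feed the same bag-of-words counts into a small feed-forward network; NB's appeal is that it needs no tuning beyond the smoothing constant.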

A DDoS attack Mitigation in IoT Communications Using Machine Learning

  • Hailye Tekleselase
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 24, No. 4
    • /
    • pp.170-178
    • /
    • 2024
  • With the growth of fifth-generation networks and artificial intelligence technologies, new threats and challenges have appeared for wireless communication systems, especially in cybersecurity, and IoT networks have gradually become attractive targets for DDoS attacks due to the inherently weaker security and resource-constrained nature of IoT devices. This paper focuses on detecting DDoS attacks in wireless networks by categorizing incoming network packets at the transport layer as either "abnormal" or "normal" using machine learning algorithms integrated with a knowledge-based system. Deep learning algorithms and a CNN were independently trained to mitigate DDoS attacks. The paper concentrates on misuse-based DDoS attacks, comprising TCP SYN flood and ICMP flood. The CICIDS2017 and NSL-KDD datasets were used for training and testing the models during the experimentation phase, and accuracy was used to measure classification performance. The results show that an accuracy of 99.93% is recorded.
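The "abnormal" vs. "normal" packet labeling the paper performs with trained models can be illustrated, for the TCP SYN-flood case, with a simple rate-threshold rule. This rule-based sketch only mirrors the labeling step; the paper itself trains deep models on CICIDS2017/NSL-KDD, and the addresses and limit below are invented.

```python
from collections import Counter

def label_sources(packets, syn_limit=5):
    """Label each source 'abnormal' if its SYN count exceeds the limit.

    packets: list of (source_ip, tcp_flags) tuples observed in one window.
    """
    syns = Counter(src for src, flags in packets if "SYN" in flags)
    return {src: ("abnormal" if syns[src] > syn_limit else "normal")
            for src, _ in packets}

# One flooding source and one normal handshake in the same window.
packets = ([("10.0.0.9", "SYN")] * 8
           + [("10.0.0.2", "SYN"), ("10.0.0.2", "ACK")])
labels = label_sources(packets)
```

An ML detector replaces the hand-set `syn_limit` with a decision boundary learned from labeled flows, which is what lets it generalize beyond a single attack signature.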

A Study on Efficient Memory Management Using Machine Learning Algorithm

  • Park, Beom-Joo;Kang, Min-Soo;Lee, Minho;Jung, Yong Gyu
    • International journal of advanced smart convergence
    • /
    • Vol. 6, No. 1
    • /
    • pp.39-43
    • /
    • 2017
  • As industry grows, the amount of data grows exponentially, and analysis of this data serves as a tool for prediction. As data sizes and processing speeds have increased, big-data analysis has begun to be applied to new fields in combination with artificial intelligence technology. In this paper, we propose a method for quickly applying a machine-learning-based algorithm through efficient resource allocation. The proposed algorithm allocates memory per attribute: it learns the distinct values of each attribute and allocates the appropriate amount of memory. To evaluate the proposed algorithm, we compared it with the existing K-means algorithm; measurements of execution time showed that the speed was improved.

임베디드 시스템에서의 양자화 기계학습을 위한 양자화 오차보상에 관한 연구 (Study on Quantized Learning for Machine Learning Equation in an Embedded System)

  • 석진욱;김정시
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • Korean Institute of Broadcast and Media Engineers 2019 Fall Conference
    • /
    • pp.110-113
    • /
    • 2019
  • This paper proposes a method to effectively compensate for the quantization error that arises when quantized machine learning is performed on an embedded system. In machine-learning and nonlinear signal-processing algorithms that use gradients, quantization error causes early vanishing of the gradient, degrading the performance of the overall algorithm. To compensate for this, we derive a compensating search vector orthogonal to the largest component of the gradient, offsetting the performance loss caused by quantization error. In addition, instead of a fixed learning rate, we propose an adaptive learning-rate algorithm based on nonlinear optimization without an inner loop. Experimental results confirm that applying the proposed algorithm to nonlinear optimization problems minimizes the performance degradation caused by quantization error.
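The failure mode this abstract starts from — quantization error causing early vanishing of the gradient — is easy to reproduce in a few lines: with a coarse fixed-point step, small gradient components round to zero and descent stalls away from the optimum, while a finer step does not. The compensation-vector scheme itself is not reproduced here; this only demonstrates the problem it addresses.

```python
# Gradient descent on f(x) = x^2 with a quantized gradient. A coarse
# quantization step makes small gradients round to zero ("early
# vanishing"), so descent stalls before reaching the minimum at 0.

def quantize(x, step):
    return round(x / step) * step

def descend(grad_fn, x, lr, step, iters):
    for _ in range(iters):
        x -= lr * quantize(grad_fn(x), step)
    return x

grad = lambda x: 2 * x  # gradient of f(x) = x^2

coarse = descend(grad, 0.4, lr=0.1, step=1.0, iters=50)   # stalls at x = 0.2
fine = descend(grad, 0.4, lr=0.1, step=0.01, iters=50)    # gets close to 0
```

With `step=1.0`, the gradient 2·0.2 = 0.4 rounds to 0 and every later update is zero; that stall is the performance drop the paper's orthogonal compensation vector is designed to escape.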
