• Title/Abstract/Keywords: machine-learning

Search Results: 5,428

Machine Learning Based Malware Detection Using API Call Time Interval (API Call Time Interval을 활용한 머신러닝 기반의 악성코드 탐지)

  • Cho, Young Min;Kwon, Hun Yeong
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.1 / pp.51-58 / 2020
  • Malware remains a primary attack method in cyber threats and will continue to be one even as IT technology advances, so research on detecting malicious code is continually attempted in various ways. Recently, with the development of AI-related technology, many machine learning studies have been conducted to detect malware. In this paper, we propose a method to detect malware using machine learning. For detection, we build features around the intervals between API calls (i.e., the Time Interval) observed in dynamic analysis data, and then apply the result to machine learning techniques.
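
As a rough illustration of the Time Interval idea, the sketch below derives simple gap statistics from API-call timestamps and trains an off-the-shelf classifier; the feature set and the random-forest model are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of the time-interval feature idea: given per-sample
# sequences of API-call timestamps from dynamic analysis, summarize the
# gaps between consecutive calls and feed them to a classifier.
# The statistics chosen and the RandomForest model are illustrative
# assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def interval_features(timestamps):
    """Summary statistics of inter-call time gaps for one sample."""
    gaps = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    if gaps.size == 0:
        return np.zeros(4)
    return np.array([gaps.mean(), gaps.std(), gaps.min(), gaps.max()])

# Toy data: two benign-ish and two malware-ish call traces (hypothetical).
traces = [[0.0, 0.5, 1.1, 1.6], [0.0, 0.4, 0.9, 1.5],
          [0.0, 0.01, 0.02, 0.03], [0.0, 0.02, 0.03, 0.05]]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

X = np.vstack([interval_features(t) for t in traces])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(interval_features([0.0, 0.015, 0.03, 0.04]).reshape(1, -1)))
```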

A Study on Ontology Generation by Machine Learning in Big Data (빅 데이터에서 기계학습을 통한 온톨로지 생성에 관한 연구)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.645-646 / 2018
  • Recently, machine learning has been introduced as a decision-making method driven by data processing; it uses the results of training on existing data as a basis for decisions. The data generated by advancing technology is vast and is called big data, and extracting the necessary data from it is important. In this paper, we propose a method for extracting related data for constructing an ontology through machine learning. The results of machine learning can be given relationships from a semantic perspective and can be added to the ontology to support relationships, depending on the needs of the application.
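
The abstract leaves the extraction step abstract; one plausible reading, sketched below under loose assumptions, is to score term co-occurrence in the data and promote strongly related pairs to candidate ontology relations. Everything here (the similarity measure, the threshold, the "relatedTo" triple format) is a hypothetical illustration, not the authors' method.

```python
# Hypothetical sketch: propose candidate ontology relations by scoring
# co-occurrence similarity between terms in a document collection.
from itertools import combinations
from collections import defaultdict

docs = [{"sensor", "temperature", "iot"},
        {"sensor", "humidity", "iot"},
        {"ontology", "semantic", "relation"}]

cooc = defaultdict(int)
count = defaultdict(int)
for doc in docs:
    for term in doc:
        count[term] += 1
    for a, b in combinations(sorted(doc), 2):
        cooc[(a, b)] += 1

# Jaccard-style score; pairs above the threshold become candidate triples.
THRESHOLD = 0.5
for (a, b), n in cooc.items():
    score = n / (count[a] + count[b] - n)
    if score >= THRESHOLD:
        print((a, "relatedTo", b, round(score, 2)))
```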


Early Diagnosis of Anxiety Disorder Using Artificial Intelligence

  • Choi DongOun;Huan-Meng;Yun-Jeong, Kang
    • International Journal of Advanced Culture Technology / v.12 no.1 / pp.242-248 / 2024
  • Contemporary societal and environmental transformations coincide with the emergence of novel mental health challenges. Anxiety disorder, a chronic and highly debilitating illness, presents with diverse clinical manifestations. Epidemiological investigations indicate a global prevalence of 5%, with an additional 10% exhibiting subclinical symptoms; notably, 9% of adolescents demonstrate clinical features. Untreated, anxiety disorder exerts profound detrimental effects on individuals, families, and the broader community. It is therefore meaningful to predict anxiety disorder with machine learning analysis models, and the main content of this paper is an analysis of such prediction models. Machine learning algorithms use computers to simulate human learning activities: a computer locates existing knowledge, acquires new knowledge, and continuously improves its performance through learning. This article analyzes the relevant theories and characteristics of machine learning algorithms and integrates them into anxiety disorder prediction analysis. The final results show that the artificial neural network model has the largest AUC, 0.8255, indicating better prediction accuracy than the other two models. In terms of running time, all three models run in under 1 second, which is within the acceptable range.
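
For readers who want to reproduce the comparison format, the sketch below trains a small neural network and reports AUC with scikit-learn; the synthetic data and the MLP configuration are assumptions, not the study's dataset or exact model.

```python
# Minimal sketch: fit a small neural network and report AUC, the metric
# the study uses to compare models. The data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.4f}")  # the paper reports 0.8255 for its ANN
```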

A novel visual tracking system with adaptive incremental extreme learning machine

  • Wang, Zhihui;Yoon, Sook;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.451-465 / 2017
  • This paper presents a novel discriminative visual tracking algorithm with an adaptive incremental extreme learning machine. The parameters of the adaptive incremental extreme learning machine are initialized at the first frame with a manually assigned target. At each frame, training samples are collected and random Haar-like features are extracted. The proposed tracker updates the overall output weights at each frame, and the updated tracker is used to estimate the new location of the target in the next frame. The adaptive learning rate for the update of the overall output weights is estimated using the confidence of the predicted target location at the current frame. Our experimental results indicate that the proposed tracker can handle a variety of difficulties and achieves better performance than other state-of-the-art trackers.
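
A hedged sketch of the incremental update the abstract describes: per frame, solve for ELM output weights on the new samples and blend them into the running weights with a confidence-scaled learning rate. The blending rule, feature stand-ins, and fixed confidence below are assumed simplifications, not the paper's exact equations.

```python
# Incremental ELM-style tracker update (simplified). Random input weights
# and biases stay fixed; only the output weights beta are updated per frame.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_feat = 50, 100
W = rng.standard_normal((n_feat, n_hidden))   # fixed random input weights
b = rng.standard_normal(n_hidden)             # fixed random hidden biases

def hidden(X):
    return np.tanh(X @ W + b)                 # hidden-layer activations

def frame_weights(X, T):
    """Least-squares output weights for this frame's samples."""
    return np.linalg.pinv(hidden(X)) @ T

beta = None
for frame in range(5):
    X = rng.standard_normal((20, n_feat))     # stand-in for Haar-like features
    T = rng.integers(0, 2, size=(20, 1)).astype(float)  # target/background labels
    confidence = 0.8                          # stand-in for tracker confidence
    lr = 0.5 * confidence                     # adaptive rate from confidence
    beta_f = frame_weights(X, T)
    beta = beta_f if beta is None else (1 - lr) * beta + lr * beta_f
```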

Machine Learning-based Bedsore Stage Classification Algorithm (머신러닝 기반 욕창 단계 분류 알고리즘)

  • Cho, Young-bok;Yoo, Ha-na
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.326-327 / 2022
  • This study presents a machine learning algorithm for clinical decision-making that classifies pressure sore (bedsore) stages, intended for a system that helps nursing staff prevent pressure sores when caring for patients who lie down for long periods. The trained algorithm reached a learning accuracy of 82.14% and a test accuracy of 82.58%.
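
The abstract reports only training and test accuracy; as a generic, hedged illustration of that evaluation pattern (the model choice and the synthetic four-stage data are assumptions, not the study's), a short sketch:

```python
# Generic sketch of the reported evaluation: fit a classifier, then
# compare training accuracy with held-out test accuracy. The model and
# synthetic data are assumptions; the paper gives only the two accuracies.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=12, n_classes=4,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("learning accuracy:", clf.score(X_tr, y_tr))  # paper: 82.14%
print("test accuracy:   ", clf.score(X_te, y_te))   # paper: 82.58%
```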


Sentiment Orientation Using Deep Learning Sequential and Bidirectional Models

  • Alyamani, Hasan J.
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.23-30 / 2021
  • Sentiment analysis has become a very important field of research because posting reviews has become a trend. Much work has been done to mine this data with supervised, unsupervised, and semi-supervised machine learning methods, but feature engineering remains a complex and technical part of machine learning. Deep learning is a newer trend in which this laborious work can be done automatically, and many researchers have worked on deep learning Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, which require high processing speed and memory. Here the author suggests two deep learning models, a simple sequential one and a bidirectional one, which can work on text data at normal processing speed. When the two models are compared, the bidirectional model proves best: the simple model achieves 50% accuracy, while the bidirectional model achieves 99% accuracy on training data and 78% accuracy on test data. These results are based on 10 epochs and a batch size of 40, and accuracy may be increased further by experimenting with different epochs and batch sizes.
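
A hedged Keras sketch of the bidirectional variant follows; only the 10-epoch, batch-size-40 setting is taken from the abstract, while the vocabulary size, sequence length, and layer widths are illustrative guesses.

```python
# Sketch of a bidirectional LSTM sentiment classifier in Keras, using the
# 10-epoch / 40-batch setting mentioned in the abstract. Vocabulary size,
# sequence length, and layer widths are illustrative assumptions.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

VOCAB, MAXLEN = 10000, 100
model = Sequential([
    Embedding(VOCAB, 64),
    Bidirectional(LSTM(32)),       # drop Bidirectional(...) for the simple model
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in for tokenized, padded review data.
X = np.random.randint(0, VOCAB, size=(200, MAXLEN))
y = np.random.randint(0, 2, size=(200,))
model.fit(X, y, epochs=10, batch_size=40, validation_split=0.2)
```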

Extraction of the OLED Device Parameter based on Randomly Generated Monte Carlo Simulation with Deep Learning (무작위 생성 심층신경망 기반 유기발광다이오드 흑점 성장가속 전산모사를 통한 소자 변수 추출)

  • You, Seung Yeol;Park, Il-Hoo;Kim, Gyu-Tae
    • Journal of the Semiconductor & Display Technology / v.20 no.3 / pp.131-135 / 2021
  • The number of studies on optimizing the design of organic light-emitting diodes (OLED) through machine learning is increasing. We propose a generative method that produces images for assessing device performance in combination with a machine learning technique. The principal parameters governing the dark-spot growth mechanism of an OLED can be key factors in determining its long-term performance. Captured images from actual devices, together with randomly generated images for specific times and initial pinhole states, are fed into a deep neural network. The simulation, reinforced by the machine learning technique, can predict the device parameters accurately and quickly. Similarly, inverse design using a multilayer perceptron (MLP) can infer the initial degradation factors at manufacturing from given device parameters, feeding back into the design of the manufacturing process.
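
A hedged sketch of the parameter-extraction direction: map (flattened) dark-spot images to device parameters with an MLP regressor. The image size, parameter count, and synthetic data below are illustrative assumptions, not the paper's network or dataset.

```python
# Hypothetical sketch: regress device parameters from flattened dark-spot
# images with a multilayer perceptron. Data here is random stand-in noise.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, img_side, n_params = 300, 16, 2
images = rng.random((n_samples, img_side * img_side))   # stand-in dark-spot images
params = rng.random((n_samples, n_params))              # stand-in device parameters

reg = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
reg.fit(images, params)
print(reg.predict(images[:1]))  # predicted parameters for one image
```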

COMPARATIVE STUDY OF THE PERFORMANCE OF SUPPORT VECTOR MACHINES WITH VARIOUS KERNELS

  • Nam, Seong-Uk;Kim, Sangil;Kim, HyunMin;Yu, YongBin
    • East Asian mathematical journal / v.37 no.3 / pp.333-354 / 2021
  • A support vector machine (SVM) is a state-of-the-art machine learning model rooted in structural risk minimization. SVM is underestimated with regard to its application to real-world problems because of the difficulties associated with its use. We aim to show that the performance of SVM depends strongly on which kernel function is used. To achieve this, after providing a summary of support vector machines and kernel functions, we construct experiments with various benchmark datasets to compare the performance of various kernels. For evaluating the performance of SVM, the F1-score and its standard deviation under 10-fold cross-validation are used. Furthermore, we use Taylor diagrams to reveal the differences between kernels. Finally, we provide Python code for all our experiments to enable re-implementation.
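
The comparison protocol the abstract describes fits in a few lines of scikit-learn; the sketch below uses a single built-in dataset as a stand-in for the paper's multiple benchmarks.

```python
# Sketch of the kernel comparison: F1-score mean and standard deviation
# under 10-fold cross-validation for several SVC kernels.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
    print(f"{kernel:8s} F1 = {scores.mean():.3f} +/- {scores.std():.3f}")
```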

Parameter Optimization of Extreme Learning Machine Using Bacterial Foraging Algorithm (Bacterial Foraging Algorithm을 이용한 Extreme Learning Machine의 파라미터 최적화)

  • Cho, Jae-Hoon;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.6 / pp.807-812 / 2007
  • Recently, the extreme learning machine (ELM), a novel learning algorithm much faster than conventional gradient-based learning algorithms, was proposed for single-hidden-layer feedforward neural networks. The initial input weights and hidden biases of an ELM are usually chosen randomly, and the output weights are determined analytically using the Moore-Penrose (MP) generalized inverse; however, choosing good initial input weights and hidden biases is difficult. In this paper, an advanced method that uses the bacterial foraging algorithm to adjust the input weights and hidden biases is proposed. Experimental results show that this method achieves better performance than others on higher-dimensional problems.
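
The ELM core the abstract describes (random hidden layer, analytic output weights via the MP pseudo-inverse) fits in a few lines; the sketch below shows that baseline only, with the paper's bacterial-foraging search over input weights and biases left out.

```python
# Baseline ELM: random input weights/biases, hidden-layer activations H,
# and output weights solved analytically with the Moore-Penrose
# pseudo-inverse. The bacterial foraging search the paper adds on top of
# this (to pick W and b) is omitted here.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))            # training inputs
T = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary targets

n_hidden = 40
W = rng.standard_normal((10, n_hidden))       # random input weights
b = rng.standard_normal(n_hidden)             # random hidden biases

H = np.tanh(X @ W + b)                        # hidden-layer output matrix
beta = np.linalg.pinv(H) @ T                  # analytic output weights

preds = (np.tanh(X @ W + b) @ beta > 0.5) * 1.0
print("train accuracy:", (preds == T).mean())
```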

Comparative characteristic of ensemble machine learning and deep learning models for turbidity prediction in a river (딥러닝과 앙상블 머신러닝 모형의 하천 탁도 예측 특성 비교 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater / v.35 no.1 / pp.83-91 / 2021
  • The increased turbidity in rivers during flood events has various effects on water environmental management, including drinking water supply systems; thus, prediction of turbid water is essential. Recently, various advanced machine learning algorithms have been increasingly used in water environmental management. Ensemble algorithms such as random forest (RF) and gradient boosting decision tree (GBDT) are among the most popular, along with deep learning algorithms such as recurrent neural networks. In this study, GBDT, an ensemble machine learning algorithm, and the gated recurrent unit (GRU), a recurrent neural network algorithm, are used to develop models that predict turbidity in a river. The observation frequencies of the input data were 2, 4, 8, 24, 48, 120, and 168 h. The root-mean-square error-observations standard deviation ratio (RSR) of GRU and GBDT ranges between 0.182~0.766 and 0.400~0.683, respectively. Both models show similar prediction accuracy, with an RSR of 0.682 for GRU and 0.683 for GBDT. GRU shows better prediction accuracy when the observation frequency is relatively short (i.e., 2, 4, and 8 h), whereas GBDT shows better accuracy when it is relatively long (i.e., 48, 120, and 168 h). The results suggest that the characteristics of the input data should be considered to develop an appropriate turbidity prediction model.
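
The RSR metric reported here is simply RMSE normalized by the standard deviation of the observations; a small sketch with made-up turbidity numbers:

```python
# RSR = RMSE / standard deviation of observations, the evaluation metric
# reported in the abstract (lower is better). Values here are made up.
import numpy as np

def rsr(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / observed.std()

obs = [12.0, 35.0, 80.0, 22.0, 15.0]   # hypothetical turbidity values (NTU)
pred = [14.0, 30.0, 75.0, 25.0, 16.0]
print(f"RSR = {rsr(obs, pred):.3f}")
```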