• Title/Summary/Keyword: Regularization


Malware Detection Using Deep Recurrent Neural Networks with no Random Initialization

  • Amir Namavar Jahromi;Sattar Hashemi
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.177-189 / 2023
  • Malware detection is an increasingly important operational focus in cyber security, particularly given the fast pace of such threats (e.g., new malware variants introduced every day). There has been great interest in exploring the use of machine learning techniques in automating and enhancing the effectiveness of malware detection and analysis. In this paper, we present a deep recurrent neural network solution as a stacked Long Short-Term Memory (LSTM) with pre-training as a regularization method to avoid random network initialization. In our proposal, we use both global and short-term dependencies of the inputs. With pre-training, we avoid random initialization and are able to improve the accuracy and robustness of malware threat hunting. The proposed method speeds up convergence (in comparison to a stacked LSTM) by reducing the length of the malware OpCode or bytecode sequences, thereby reducing the complexity of our final method. This leads to better accuracy and higher Matthews Correlation Coefficient (MCC) and Area Under the Curve (AUC) values in comparison to a standard LSTM with similar detection time. Our proposed method can be applied to real-time malware threat hunting, particularly for safety-critical systems such as eHealth or the Internet of Military Things, where poor convergence of the model could lead to catastrophic consequences. We evaluate the effectiveness of our proposed method on Windows, ransomware, Internet of Things (IoT), and Android malware datasets using both static and dynamic analysis. For IoT malware detection, we also present a comparative summary of the performance of our proposed method and the standard stacked LSTM method on an IoT-specific dataset. More specifically, our proposed method achieves an accuracy of 99.1% in detecting IoT malware samples, with an AUC of 0.985 and an MCC of 0.95, thus outperforming standard LSTM-based methods in these key metrics.
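
A minimal sketch of the stacked LSTM classifier the abstract describes, in Keras; the vocabulary size, sequence length, and layer widths are illustrative assumptions, and the pre-training step that replaces random initialization is only indicated by a placeholder comment.

```python
# Sketch only: OpCode vocabulary size, sequence length, and layer sizes are assumed.
from tensorflow.keras import layers, models

VOCAB_SIZE = 512   # number of distinct OpCodes (hypothetical)
SEQ_LEN = 1000     # reduced OpCode/bytecode sequence length (hypothetical)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),          # OpCode tokens -> dense vectors
    layers.LSTM(128, return_sequences=True),   # first layer: short-term dependencies
    layers.LSTM(64),                           # second layer: global dependencies
    layers.Dense(1, activation="sigmoid"),     # benign vs. malware
])

# The paper avoids random initialization via pre-training; a real implementation
# would load pre-trained weights here before supervised fine-tuning, e.g.:
# model.load_weights("pretrained_stacked_lstm.weights.h5")  # placeholder path
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```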

Estimation of bubble size distribution using deep ensemble physics-informed neural network (딥앙상블 물리 정보 신경망을 이용한 기포 크기 분포 추정)

  • Sunyoung Ko;Geunhwan Kim;Jaehyuk Lee;Hongju Gu;Kwangho Moon;Youngmin Choo
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.305-312 / 2023
  • A Physics-Informed Neural Network (PINN) is used to invert bubble size distributions from attenuation losses. By considering a linear system for the bubble population inversion, the Adaptive Learned Iterative Shrinkage Thresholding Algorithm (Ada-LISTA), which has been used to solve linear systems in image processing, is adopted as the neural network architecture of the PINN. Furthermore, a regularization term based on the linear system is added to the loss function of the PINN, which improves generalization by driving the network toward solutions that satisfy the bubble physics. To evaluate the uncertainty of the bubble estimation, a deep ensemble is adopted: 20 Ada-LISTAs with different initial values are trained on the same training dataset. During testing with attenuation losses different from those in the training dataset, the bubble size distribution and its uncertainty are given by the mean and variance of the 20 estimates, respectively. The deep ensemble Ada-LISTA demonstrates superior performance in inverting bubble size distributions compared to the conventional convex optimization solver CVX.
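
A minimal sketch of the deep-ensemble uncertainty estimate: 20 independently initialized networks are trained on the same data, and the mean and variance of their outputs give the prediction and its uncertainty. The trainer and estimator below are placeholders, not the paper's Ada-LISTA.

```python
import numpy as np

N_MEMBERS = 20  # as in the paper: 20 Ada-LISTAs with different initial values

def train_member(seed):
    """Placeholder for training one network from its own random initialization."""
    rng = np.random.default_rng(seed)
    return lambda attenuation_loss: rng.normal(size=32)  # hypothetical estimator

members = [train_member(seed) for seed in range(N_MEMBERS)]

def predict_with_uncertainty(attenuation_loss):
    # Stack the 20 estimates of the bubble size distribution ...
    estimates = np.stack([m(attenuation_loss) for m in members])
    # ... and report their mean as the prediction and variance as the uncertainty.
    return estimates.mean(axis=0), estimates.var(axis=0)
```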

Feature selection for text data via sparse principal component analysis (희소주성분분석을 이용한 텍스트데이터의 단어선택)

  • Won Son
    • The Korean Journal of Applied Statistics / v.36 no.6 / pp.501-514 / 2023
  • When analyzing high-dimensional data such as text data, using all of the variables as explanatory variables may cause statistical learning procedures to suffer from over-fitting, and computational efficiency can deteriorate with a large number of variables. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. Sparse principal component analysis (SPCA) is a regularized least squares method that employs an elastic net-type objective function. The SPCA can be used to remove insignificant principal components and identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on the SPCA. Applying the proposed procedure to real data, we find that the reduced feature set maintains sufficient information in the text data while its size is reduced by removing redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for classifiers such as the k-nearest neighbors algorithm.
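
A minimal sketch of the word-selection idea using scikit-learn's SparsePCA, whose L1-penalized loadings play the role of the elastic net-type objective described above; the toy corpus, component count, and penalty strength are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["regularization reduces overfitting", "sparse models select few features",
        "overfitting hurts test accuracy", "the cat sat on the mat"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()  # document-term matrix

# alpha controls the L1 penalty on the loadings (value chosen for illustration).
spca = SparsePCA(n_components=2, alpha=0.5, random_state=0).fit(X)

# Keep only the words with a nonzero loading in at least one sparse component.
selected = np.any(spca.components_ != 0, axis=0)
print(np.array(vec.get_feature_names_out())[selected])
```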

Design and Implementation of a Lightweight On-Device AI-Based Real-time Fault Diagnosis System using Continual Learning (연속학습을 활용한 경량 온-디바이스 AI 기반 실시간 기계 결함 진단 시스템 설계 및 구현)

  • Youngjun Kim;Taewan Kim;Suhyun Kim;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.3 / pp.151-158 / 2024
  • Although on-device artificial intelligence (AI) has gained attention for diagnosing machine faults in real time, most previous studies did not consider the model retraining and redeployment processes that must be performed in real-world industrial environments. Our study addresses this challenge by proposing an on-device AI-based real-time machine fault diagnosis system that utilizes continual learning. The proposed system consists of a lightweight convolutional neural network (CNN) model, a continual learning algorithm, and a real-time monitoring service. First, we developed a lightweight 1D CNN model to reduce the cost of model deployment and enable real-time inference on a target edge device with limited computing resources. We then compared the performance of five continual learning algorithms on three public bearing fault datasets and selected the most effective algorithm for our system. Finally, we implemented a real-time monitoring service using an open-source data visualization framework. In the comparison between continual learning algorithms, we found that the replay-based algorithms outperformed the regularization-based algorithms, and that the experience replay (ER) algorithm achieved the best diagnostic accuracy. We further tuned the number and length of the data samples kept in the ER algorithm's memory buffer to maximize its performance, and confirmed that performance improves when longer data samples are used. Consequently, the proposed system achieved an accuracy of 98.7% while storing only 16.5% of the previous data in the memory buffer. Our lightweight CNN model was also able to diagnose the fault type of a single data sample within 3.76 ms on a Raspberry Pi 4B device.
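
A minimal sketch of the experience replay (ER) idea that the comparison favored: a bounded memory buffer retains a subset of past samples (the paper keeps 16.5% of previous data) and mixes them into each new training batch. The reservoir-sampling policy and sizes here are illustrative assumptions, not the paper's exact buffer design.

```python
import random

class ReplayBuffer:
    """Bounded buffer that keeps a uniform random subset of all samples seen."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling: every sample ever seen has equal probability
        # of residing in the buffer, without storing the full history.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

# During continual training, each gradient step would use a mixed batch:
# batch = new_task_samples + buffer.sample(len(new_task_samples))
```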

Management strategy through analysis of habitat suitability for otter (Lutra lutra) in Hwangguji Stream (황구지천 내 수달(Lutra lutra) 서식지 적합성 분석을 통한 관리 전략 제안)

  • Song, Won-Kyong
    • Journal of the Korean Society of Environmental Restoration Technology / v.27 no.4 / pp.1-14 / 2024
  • Otters, designated as Class I endangered wildlife due to population declines caused by urban development and stream burial, have appeared more frequently in freshwater environments since the nationwide ban on stream filling in 2020 and the implementation of urban stream restoration projects. There is a pressing need for scientific and strategic conservation measures for otters, an umbrella and vulnerable species in aquatic ecosystems. This study therefore predicts potential otter habitats using the species distribution model MaxEnt, focusing on Hwangguji Stream in Suwon, and proposes conservation strategies. Otter signs were surveyed with citizen scientists over three years, from 2019 to 2021, and served as the presence data for the model. The model was refined by using the 'river nature map' as the analysis boundary. To control model complexity and prevent overfitting, we compared the performance of 60 combinations of MaxEnt feature classes and regularization multipliers. Additionally, unmanned sensor cameras recorded otter density for model validation, confirming a correlation with the species distribution model results. The 'LQ-5.0' parameter combination showed the highest explanatory power, with an AUC of 0.853. In the model, the 'adjacent land use' variable accounted for 31.5% of the explanation, with a preference for areas around cultivated land. Otters were found to prefer shelter rates of 10-30% in riparian forests within 2 km of bridges, and higher otter densities observed by the unmanned sensors corresponded to higher model values. Based on these results, the study suggests three conservation strategies: establishing stable buffer zones to enhance ecological connectivity, improving water quality by controlling non-point source pollution, and raising public awareness. The study provides a scientific basis for managing potential otter habitats and for effective conservation through governance linking local governments, civil organizations, and sustainable biodiversity goals.
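
The 60-combination tuning mentioned above pairs MaxEnt feature classes (L = linear, Q = quadratic, H = hinge, P = product, T = threshold) with regularization multipliers. A sketch of such a grid search follows; the candidate sets, multiplier values, and the evaluation function are placeholders, not the paper's exact grid.

```python
from itertools import product

# Illustrative candidates; the paper tested 60 combinations and selected
# 'LQ-5.0' (feature classes LQ, regularization multiplier 5.0, AUC = 0.853).
feature_classes = ["L", "LQ", "LQH", "LQHP", "LQHPT"]
reg_multipliers = [0.5, 1.0, 2.0, 5.0, 10.0]

def evaluate(fc, rm):
    """Placeholder: fit MaxEnt with these settings and return its test AUC."""
    raise NotImplementedError

grid = list(product(feature_classes, reg_multipliers))
# best_fc, best_rm = max(grid, key=lambda p: evaluate(*p))  # e.g., ('LQ', 5.0)
```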

Evaluating the Impact of Training Conditions on the Performance of GPT-2-Small Based Korean-English Bilingual Models

  • Euhee Kim;Keonwoo Koo
    • Journal of the Korea Society of Computer and Information / v.29 no.9 / pp.69-77 / 2024
  • This study evaluates the performance of second language acquisition models learning Korean and English using the GPT-2-Small model, analyzing the impact of various training conditions on performance. Four training conditions were used: monolingual learning, sequential learning, sequential-interleaved learning, and sequential-EWC learning. The model was trained on Korean data from the National Institute of Korean Language and English data from the BabyLM Challenge, with performance measured by the PPL and BLiMP metrics. Results showed that monolingual learning performed best, with a PPL of 16.2 and a BLiMP accuracy of 73.7%. In contrast, sequential-EWC learning had the highest PPL of 41.9 and the lowest BLiMP accuracy of 66.3% (p < 0.05). Monolingual learning proved most effective for optimizing model performance, while the EWC regularization in sequential-EWC learning degraded performance by limiting weight updates and thereby hindering new language learning. This research improves our understanding of language modeling and contributes to the study of cognitive similarity in AI language learning.
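
The EWC regularization referred to above (Kirkpatrick et al., 2017) anchors the weights that matter for the first language with a quadratic penalty. In its usual formulation, with $F_i$ the Fisher information of parameter $i$ estimated after task $A$ and $\mathcal{L}_B$ the loss on the new task $B$:

```latex
\mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \sum_i \frac{\lambda}{2}\, F_i \left(\theta_i - \theta^{*}_{A,i}\right)^2
```

Large $F_i$ values effectively freeze the corresponding weights, which is consistent with the abstract's finding that EWC limited weight updates and hindered learning of the second language.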

Prediction of the Following BCI Performance by Means of Spectral EEG Characteristics in the Prior Resting State (뇌신호 주파수 특성을 이용한 CNN 기반 BCI 성능 예측)

  • Kang, Jae-Hwan;Kim, Sung-Hee;Youn, Joosang;Kim, Junsuk
    • KIPS Transactions on Computer and Communication Systems / v.9 no.11 / pp.265-272 / 2020
  • In brain-computer interface (BCI) research, one major problem is how to deal with the so-called BCI-illiterate group, users who cannot control the BCI system. To approach this problem efficiently, we investigated spectral EEG characteristics in the prior resting state in association with performance in the following BCI tasks. First, spectral powers of the EEG signals in the resting state were extracted under both eyes-open and eyes-closed conditions. Second, a convolutional neural network (CNN)-based binary classifier discriminated binary motor imagery intention in the BCI task. Both linear correlation and binary prediction methods confirmed that the spectral EEG characteristics in the prior resting state were highly related to BCI performance in the following task. Linear regression analysis demonstrated that the ratio of spectral power below and above 13 Hz in the resting state, under the eyes-open (but not the eyes-closed) condition, was significantly correlated with the quantified BCI performance metric (r=0.544). A binary classifier based on linear regression with L1 regularization was able to discriminate the high-performance and low-performance groups in the following BCI task using the spectral EEG features from the preceding resting state (AUC=0.817). These results strongly support the use of spectral EEG characteristics in the frontal regions during the eyes-open resting state as a predictor of subsequent BCI task performance.
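
A minimal sketch of the final classification step described above: an L1-regularized linear classifier separating the high- and low-performance groups from resting-state spectral features. The feature matrix and labels are synthetic stand-ins for the paper's EEG data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))  # resting-state spectral EEG features (synthetic)
y = (X[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)  # high/low BCI group

# The L1 penalty performs embedded feature selection over the spectral features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
print("AUC:", roc_auc_score(y, clf.decision_function(X)))  # paper reports 0.817
```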

Network Anomaly Detection Technologies Using Unsupervised Learning AutoEncoders (비지도학습 오토 엔코더를 활용한 네트워크 이상 검출 기술)

  • Kang, Koohong
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.4 / pp.617-629 / 2020
  • To overcome the limitations that rule-based intrusion detection systems face due to changing Internet computing environments, the emergence of new services, and the creativity of attackers, network anomaly detection (NAD) using machine learning and deep learning technologies has received much attention. Most existing machine learning and deep learning approaches to NAD use supervised learning on training data labeled 'normal' and 'attack'. This paper demonstrates the feasibility of applying an unsupervised AutoEncoder (AE) to NAD using data sets collected from secured network traffic without labeled responses. To verify the performance of the proposed AE model, we present experimental results in terms of accuracy, precision, recall, F1-score, and ROC AUC on the NSL-KDD training and test data sets. In particular, we derive a reference AE through a deep analysis of diverse AEs, varying hyper-parameters such as the number of layers and considering regularization and denoising effects. The reference model achieves binary classification F1-scores of 90.4% and 89% on the KDDTest+ and KDDTest-21 test data sets, respectively, using a threshold set at the 82nd percentile of the AE reconstruction error on the training data set.
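
A minimal sketch of the thresholding rule described above: an autoencoder is trained on unlabeled traffic, and a test record is flagged as an attack when its reconstruction error exceeds the 82nd percentile of the training errors. The tiny network and synthetic features are placeholders for the paper's NSL-KDD setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))  # traffic features (synthetic)
X_test = np.vstack([rng.normal(size=(50, 20)),
                    rng.normal(loc=4.0, size=(5, 20))])  # last 5 rows: anomalies

# An MLP trained to reproduce its input acts as a simple under-complete autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

train_err = ((ae.predict(X_train) - X_train) ** 2).mean(axis=1)
threshold = np.percentile(train_err, 82)  # 82nd percentile, as in the paper

test_err = ((ae.predict(X_test) - X_test) ** 2).mean(axis=1)
print("flagged as attack:", np.where(test_err > threshold)[0])
```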

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.21 no.6 / pp.957-964 / 2016
  • Acquisition of reliable depth maps is a critical requirement in many applications, such as 3D video and free-viewpoint TV. Depth information can be obtained directly from the object using physical sensors, such as infrared ray (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the Kinect depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information of objects in real time, their outputs are noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information, but these methods often cause texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling, and reduce the texture copying problem by using an edge weighting term chosen according to the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods.
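
The higher-order regularizer above is most likely the second-order Total Generalized Variation of Bredies et al.; in its standard form, with $u$ the upsampled depth map and $w$ an auxiliary vector field:

```latex
\mathrm{TGV}^{2}_{\alpha}(u) = \min_{w} \; \alpha_1 \int_{\Omega} \lvert \nabla u - w \rvert \, dx + \alpha_0 \int_{\Omega} \lvert \mathcal{E}(w) \rvert \, dx
```

Here $\mathcal{E}(w)$ denotes the symmetrized gradient of $w$. The first term favors piecewise-affine depth (avoiding the staircasing of plain total variation), and the edge weighting mentioned in the abstract would typically scale the first integrand by edge strength taken from the aligned color image.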

Research Trend analysis for Seismic Data Interpolation Methods using Machine Learning (머신러닝을 사용한 탄성파 자료 보간법 기술 연구 동향 분석)

  • Bae, Wooram;Kwon, Yeji;Ha, Wansoo
    • Geophysics and Geophysical Exploration / v.23 no.3 / pp.192-207 / 2020
  • Seismic data are acquired with regularly or irregularly missing traces due to economic, environmental, and mechanical problems. Since these missing data adversely affect the results of seismic data processing and analysis, they need to be reconstructed before subsequent processing; however, re-acquiring them through additional field exploration imposes economic and temporal burdens. Many researchers have therefore studied interpolation methods to accurately reconstruct missing data. Recently, various machine learning technologies, such as support vector regression, autoencoders, U-Net, ResNet, and generative adversarial networks (GAN), have been applied to seismic data interpolation. By reviewing these studies, we found that not only neural network models but also support vector regression models with relatively simple structures can interpolate missing parts of seismic data effectively. We expect that future research can improve the interpolation performance of these machine learning models by using open-source field data, data augmentation, transfer learning, and regularization based on conventional interpolation technologies.