• Title/Abstract/Keyword: Lasso

Search results: 169 items

A study on variable selection and classification in dynamic analysis data for ransomware detection

  • 이승환;황진수 / 응용통계연구 / Vol. 31, No. 4 / pp. 497-505 / 2018
  • Recently, ransomware has been evolving to find weaknesses in existing security systems, for example by intruding into companies and government agencies, which maintain relatively higher levels of security than ordinary PC users, and causing considerable damage. To detect such continuously changing ransomware, research on static and dynamic analysis for characterizing ransomware has been actively conducted. In this study, 582 ransomware samples and 942 benign sample programs were executed in a Cuckoo Sandbox virtual environment, and the resulting dynamic analysis data, which record whether each of 30,967 behaviors occurred on the PC, were used to apply several variable selection methods for finding variables significant for ransomware classification and to build machine learning models for classifying ransomware. As variable selection methods, we applied LASSO, chi-square test-based selection, which exploits the fact that the data are high-dimensional and consist only of binary variables, and mutual information-based selection, a method used in previous studies; as machine learning models, ridge logistic regression, support vector machines, random forests, and XGBoost were used. As a result, behaviors characteristic of ransomware programs, distinguishing them from benign programs, were identified, and among the combinations of variable selection methods and classification models, the combination of chi-square test-based variable selection and the random forest model showed the highest detection rate and correct classification rate on the given data.
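As a rough illustration of the best-performing combination reported above (chi-square variable selection feeding a random forest), the scikit-learn sketch below runs on randomly generated placeholder data; the matrix sizes, the number of selected features (k), and the hyperparameters are assumptions for demonstration only, not the authors' settings.

```python
# Illustrative sketch: chi-square variable selection followed by a random forest
# classifier. Data shapes and labels are dummy stand-ins, not the paper's dataset.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Stand-in for the dynamic-analysis matrix: rows are programs, columns are
# binary indicators of observed behaviors (the paper records 30,967 of them).
X = rng.integers(0, 2, size=(1524, 2000))
y = rng.integers(0, 2, size=1524)          # 1 = ransomware, 0 = benign (dummy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The chi-square test scores each binary feature against the class label;
# keep the top-k most associated behaviors.
selector = SelectKBest(chi2, k=200).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr_sel, y_tr)
print(classification_report(y_te, clf.predict(X_te_sel)))
```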

Case study: Selection of the weather variables influencing the number of pneumonia patients in Daegu Fatima Hospital

  • 최소현;이학래;박천건;이경은 / Journal of the Korean Data and Information Science Society / Vol. 28, No. 1 / pp. 131-142 / 2017
  • The number of pneumonia inpatients increases every year, and pneumonia has the highest hospitalization rate among diseases in Korea. Pneumonia, mainly caused by bacteria and viruses, is also affected by the weather. In this study, a total of 135 weather variables were considered: humidity, sunshine duration, diurnal temperature range, mean temperature, and fine dust concentration, each lagged from 1 to 27 days. Year, holiday, and seasonal effects were additionally considered as risk factors potentially influencing the number of admissions. A penalized generalized linear model was used to select the variables related to the number of pneumonia inpatients.
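The abstract does not specify the exact penalized GLM used; the sketch below shows one plausible reading, an L1-penalized Poisson GLM fitted with statsmodels on simulated lagged-weather columns. The variable names, lag structure, and penalty weight are illustrative assumptions.

```python
# Hedged sketch: lasso-type variable selection in a Poisson GLM for a daily
# count outcome. The design matrix mimics 5 weather variables at lags 1..27
# (135 columns); the data are random placeholders, not the hospital records.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_days = 730
cols = [f"{v}_lag{lag}" for v in ("humidity", "sunshine", "temp_range", "mean_temp", "pm10")
        for lag in range(1, 28)]
X = pd.DataFrame(rng.normal(size=(n_days, len(cols))), columns=cols)
y = rng.poisson(lam=5, size=n_days)        # daily pneumonia admission counts (dummy)

# L1-penalized Poisson regression; nonzero coefficients are the selected lags.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
fit = model.fit_regularized(method="elastic_net", alpha=0.05, L1_wt=1.0)
params = pd.Series(np.asarray(fit.params), index=["const"] + cols)
print(params[params != 0].index.tolist())
```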

Hierarchically penalized sparse principal component analysis

  • 강종경;박재신;방성완 / 응용통계연구 / Vol. 30, No. 1 / pp. 135-145 / 2017
  • Principal component analysis (PCA) is a representative technique for reducing the dimension of correlated multivariate data and is widely used in multivariate analysis. However, because each principal component is a linear combination of all variables, its interpretation is difficult. Sparse PCA (SPCA) uses an elastic net type penalty to produce modified principal components with sparser loadings, but it cannot exploit a group structure among the variables. In this study, we improve on the existing SPCA and propose a new principal component analysis method that, when the variables are grouped, selects the significant groups and simultaneously removes unnecessary variables within each group. To use the group and within-group variable structure in model fitting, a hierarchical penalty is considered in place of the elastic net penalty of sparse PCA. The performance and usefulness of the proposed method are demonstrated through the analysis of real data.
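For orientation, the snippet below fits ordinary sparse PCA (the elastic-net-penalized baseline that the paper extends) with scikit-learn's SparsePCA on random placeholder data; the proposed hierarchical group penalty itself is not implemented here.

```python
# Baseline sparse PCA sketch: L1-penalized loadings, no group structure.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30))             # 100 observations, 30 variables (dummy data)

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)  # alpha controls loading sparsity
scores = spca.fit_transform(X)
loadings = spca.components_                # many entries are shrunk exactly to zero
print("fraction of zero loadings:", (loadings == 0).mean())
```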

Development of a Machine-Learning Predictive Model for First-Grade Children at Risk for ADHD

  • 이동미;장혜인;김호정;배진;박주희 / 한국보육지원학회지 / Vol. 17, No. 5 / pp. 83-103 / 2021
  • Objective: This study aimed to develop a longitudinal predictive model that identifies first-grade children at risk for ADHD and to investigate the factors that predict the probability of belonging to the at-risk group, using machine learning. Methods: Data on 1,445 first-grade children from the 1st, 3rd, 6th, 7th, and 8th waves of the Korean Children's Panel were analyzed. The output variable was membership in the at-risk or non-risk group for ADHD, determined by the CBCL DSM-ADHD scale. Prenatal factors as well as developmental factors during infancy and early childhood were used as input variables. Results: The model that best classified the at-risk and non-risk groups for ADHD was the LASSO model. The input factors that increased the probability of being in the at-risk group were negative emotionality temperament, communication abilities, gross motor skills, social competence, and academic readiness. Conclusion/Implications: The findings indicate that children who show specific risk indicators during infancy and early childhood are likely to be classified as at risk for ADHD when they enter elementary school. The results may enable parents and clinicians to identify children with ADHD early by observing early signs and thus to intervene as early as possible.
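A LASSO classifier of the kind reported as the best model here corresponds to L1-penalized logistic regression; the hedged sketch below shows such a model on dummy data, with placeholder predictors standing in for the panel variables and an assumed regularization strength.

```python
# Hedged sketch: L1-penalized (lasso) logistic regression for a binary risk label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(1445, 20))            # early-childhood predictors (dummy)
y = rng.integers(0, 2, size=1445)          # 1 = at risk for ADHD, 0 = not at risk (dummy)

lasso_clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
print(cross_val_score(lasso_clf, X, y, cv=5, scoring="roc_auc").mean())
```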

Joint penalization of components and predictors in mixture of regressions

  • 박종선;모은비 / 응용통계연구 / Vol. 32, No. 2 / pp. 199-211 / 2019
  • When fitting a finite mixture of regressions to given regression data, it is very important to choose an appropriate number of components, to select a meaningful set of predictors in each selected regression model, and at the same time to obtain coefficient estimates with small bias and variance. In this study, we propose a method that applies penalty functions to the number of components and to the regression coefficients of a mixture of linear regression models, so that the appropriate number of components and the predictors needed in each component's regression model are selected simultaneously. For the components, an SCAD penalty is applied to the log of the component proportions, and for the regression coefficients, SCAD, MCP, and adaptive LASSO penalties are used; results on simulated and real data are compared. The SCAD-SCAD and SCAD-MCP combinations resolved the overfitting problem of the earlier method of Luo et al. (2008) while effectively selecting the number of components and the regression coefficients, and the bias of the coefficient estimates was not large. The contribution of this study is that it proposes a method that, for regression data with an unknown number of components, simultaneously selects the appropriate number of components and the predictors needed in each component's regression model.
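To make the base model concrete, the sketch below runs a plain EM algorithm for a two-component mixture of linear regressions on simulated data. It omits the SCAD/MCP/adaptive-LASSO penalties on components and coefficients that are the paper's actual contribution and only illustrates the unpenalized structure being penalized.

```python
# Minimal EM sketch for a two-component mixture of linear regressions (no penalties).
import numpy as np

rng = np.random.default_rng(4)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
z = rng.integers(0, 2, size=n)                          # latent component labels
true_betas = np.array([[1.0, 2.0], [-1.0, -2.0]])
y = (X @ true_betas.T)[np.arange(n), z] + rng.normal(scale=0.5, size=n)

K = 2
pi = np.full(K, 1 / K)
beta = rng.normal(size=(K, X.shape[1]))
sigma2 = np.ones(K)

for _ in range(200):
    # E-step: posterior responsibility of each component for each observation
    resid = y[:, None] - X @ beta.T                     # shape (n, K)
    dens = np.exp(-0.5 * resid**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: mixing proportions and weighted least squares per component
    pi = r.mean(axis=0)
    for k in range(K):
        w = r[:, k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        sigma2[k] = (w * (y - X @ beta[k]) ** 2).sum() / w.sum()

print(np.round(beta, 2), np.round(pi, 2))
```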

Quality Prediction Model for Manufacturing Process of Free-Machining 303-series Stainless Steel Small Rolling Wire Rods

  • 서석준;김흥섭 / 산업경영시스템학회지 / Vol. 44, No. 4 / pp. 12-22 / 2021
  • This article proposes a machine learning model, i.e., a classifier, for predicting the production quality of free-machining 303-series stainless steel (STS303) small rolling wire rods according to the operating conditions of the manufacturing process. For the development of the classifier, manufacturing data for 37 operating variables were collected from the manufacturing execution system (MES) of Company S, and 12 derived variables were generated based on a literature review and interviews with field experts. The research comprised data preprocessing, exploratory data analysis, feature selection, machine learning modeling, and the evaluation of alternative models. In the preprocessing stage, missing values and outliers were removed, and SMOTE (synthetic minority oversampling technique) was applied to resolve the class imbalance. Features were selected by the variable importances of LASSO (least absolute shrinkage and selection operator) regression, extreme gradient boosting (XGBoost), and random forest models. Finally, logistic regression, support vector machine (SVM), random forest, and XGBoost classifiers were developed to predict whether products under new operating conditions would be adequate or defective. The optimal hyperparameters for each model were investigated by grid search and random search based on k-fold cross-validation. As a result of the experiments, XGBoost showed relatively high predictive performance compared to the other models, with an accuracy of 0.9929, specificity of 0.9372, F1-score of 0.9963, and logarithmic loss of 0.0209. The classifier developed in this study is expected to improve productivity by enabling effective management of the manufacturing process for STS303 small rolling wire rods.
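The following sketch strings together the main steps described above (SMOTE oversampling, an XGBoost classifier, and grid search over k-fold cross-validation) on synthetic data; the feature counts, class ratio, and parameter grid are placeholders rather than the authors' MES data or tuning ranges.

```python
# Hedged sketch of the pipeline: SMOTE for class imbalance, then XGBoost tuned by grid search.
import numpy as np
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(3000, 49))            # 37 operating + 12 derived variables (dummy)
y = (rng.random(3000) < 0.05).astype(int)  # 1 = defective; deliberately imbalanced

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minority class

grid = GridSearchCV(
    XGBClassifier(random_state=0),
    param_grid={"n_estimators": [200, 500], "max_depth": [3, 6], "learning_rate": [0.05, 0.1]},
    cv=5, scoring="f1",
)
grid.fit(X_res, y_res)
print(grid.best_params_, grid.score(X_te, y_te))
```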

Assessment of Classification Accuracy of fNIRS-Based Brain-computer Interface Dataset Employing Elastic Net-Based Feature Selection

  • 신재영 / 대한의용생체공학회:의공학회지 / Vol. 42, No. 6 / pp. 268-276 / 2021
  • Functional near-infrared spectroscopy-based brain-computer interfaces (fNIRS-based BCI) have been receiving much attention. However, the inherent hemodynamic delay makes it practically difficult to acquire a large amount of fNIRS data. For this reason, when machine learning techniques are employed, problems caused by a high-dimensional feature vector, such as deteriorated classification accuracy, may be encountered. In this study, we employ elastic net-based feature selection, one of the embedded methods, and demonstrate its utility by analyzing the results. Using an fNIRS dataset obtained from 18 participants for classifying brain activation induced by mental arithmetic versus the idle state, we calculated classification accuracies after performing feature selection while changing the parameter α (the weight of lasso vs. ridge regularization). The grand averages of classification accuracy were 80.0 ± 9.4%, 79.3 ± 9.6%, 79.0 ± 9.2%, 79.7 ± 10.1%, 77.6 ± 10.3%, 79.2 ± 8.9%, and 80.0 ± 7.8% for α = 0.001, 0.005, 0.01, 0.05, 0.1, 0.2, and 0.5, respectively, and were not statistically different from the grand average of classification accuracy estimated with all features (80.1 ± 9.5%). Thus, no difference in classification accuracy was found for any of the considered values of α. In particular, for α = 0.5, a statistically equivalent level of classification accuracy was achieved with only 16.4% of the total features. Since elastic net-based feature selection can easily be applied to other cases without complicated initialization or parameter fine-tuning, we expect it to be actively applied to fNIRS data.
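One way to reproduce the general procedure (embedded elastic net feature selection followed by a classifier, swept over the lasso-vs-ridge weight) is sketched below with scikit-learn; the feature matrix, the LDA back-end classifier, and the regularization strength are assumptions, since the abstract does not specify them.

```python
# Hedged sketch: elastic net-based embedded feature selection, then LDA classification,
# evaluated for several lasso-vs-ridge weights (l1_ratio) via cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(360, 500))            # trials x high-dimensional fNIRS features (dummy)
y = rng.integers(0, 2, size=360)           # mental arithmetic vs. idle state (dummy labels)

for alpha in (0.001, 0.01, 0.1, 0.5):      # weight of the lasso term vs. the ridge term
    enet = LogisticRegression(penalty="elasticnet", solver="saga",
                              l1_ratio=alpha, C=1.0, max_iter=5000)
    pipe = make_pipeline(StandardScaler(), SelectFromModel(enet), LinearDiscriminantAnalysis())
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"l1_ratio={alpha}: CV accuracy {acc:.3f}")
```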

Genome-Based Insights into the Thermotolerant Adaptations of Neobacillus endophyticus BRMEA1T

  • Lingmin Jiang;Ho Le Han;Yuxin Peng;Doeun Jeon;Donghyun Cho;Cha Young Kim;Jiyoung Lee / 식물병연구 / Vol. 29, No. 3 / pp. 321-329 / 2023
  • The bacterium Neobacillus endophyticus BRMEA1T, isolated from the medicinal plant Selaginella involvens, is thermotolerant and can grow at 50℃. To explore the genetic basis of its heat tolerance response and its potential for producing valuable natural compounds, the genomes of two thermotolerant and four mesophilic strains in the genus Neobacillus were analyzed using a bioinformatic software platform. The whole genome was annotated using RAST SEED and OrthoVenn2, with a focus on identifying potential heat-tolerance-related genes. N. endophyticus BRMEA1T was found to possess more stress response genes than the mesophilic members of the genus, and it was the only strain carrying genes for the synthesis of osmoregulated periplasmic glucans. Through whole-genome sequencing and comparative analysis, this study sheds light on the potential value of N. endophyticus BRMEA1T by revealing its mechanism of heat resistance and the possible applications of the secondary metabolites it produces.

The Prediction Ability of Genomic Selection in the Wheat Core Collection

  • Yuna Kang;Changsoo Kim / 한국작물학회:학술대회논문집 / 한국작물학회 2022년도 추계학술대회 (2022 Fall Conference) / pp. 235-235 / 2022
  • Genomic selection is a promising tool for plant and animal breeding that uses genome-wide molecular marker data to capture quantitative trait loci of large and small effect and to predict the genetic value of selection candidates. Genomic selection has previously been shown to achieve higher prediction accuracy than conventional marker-assisted selection (MAS) for quantitative traits. In this study, the prediction accuracy of 10 agricultural traits was compared in a wheat core collection of 567 accessions. We used a cross-validation approach to train and validate prediction accuracy and to evaluate the effects of training population size and training model. Regarding prediction accuracy by model, accuracies of 0.4 or higher were obtained for almost all traits with the six models used (GBLUP, LASSO, BayesA, RKHS, SVM, RF), except for the SVM model. For traits such as days to heading and days to maturity, prediction accuracy was very high, over 0.8. Regarding prediction accuracy by training population, accuracy increased with the size of the training population for all traits, and it was confirmed that prediction accuracy differed with the genetic composition of the training population regardless of its size. All training models were verified through 5-fold cross-validation. To verify the predictive ability of the training population of the wheat core collection, we compared actual phenotypes and genomic estimated breeding values using a breeding population of 35 individuals. Of the 10 individuals with the earliest days to heading, 5 were selected through genomic selection, and of the 10 individuals with the latest days to heading, 6 were selected through genomic selection. Therefore, we confirmed the possibility of selecting individuals for a trait within a shorter period of time using only the genotype through genomic selection.
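A minimal genomic-prediction sketch in the spirit of the LASSO model above is given below: marker genotypes are simulated, and prediction accuracy is reported as the correlation between observed phenotypes and cross-validated predictions (genomic estimated breeding values). Marker counts, effect sizes, and noise levels are invented for illustration, not taken from the wheat core collection.

```python
# Hedged sketch: LASSO-based genomic prediction with 5-fold cross-validation;
# accuracy is the correlation between observed phenotypes and predicted values.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
n_lines, n_markers = 567, 2000
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # SNP genotypes coded 0/1/2
effects = np.zeros(n_markers)
effects[:50] = rng.normal(size=50)                                 # a few causal markers
y = X @ effects + rng.normal(scale=5, size=n_lines)                # phenotype, e.g. days to heading

accs = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LassoCV(cv=5).fit(X[tr], y[tr])
    gebv = model.predict(X[te])
    accs.append(np.corrcoef(y[te], gebv)[0, 1])
print(f"mean prediction accuracy (r): {np.mean(accs):.2f}")
```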

A Hybrid Multi-Level Feature Selection Framework for prediction of Chronic Disease

  • G.S. Raghavendra;Shanthi Mahesh;M.V.P. Chandrasekhara Rao / International Journal of Computer Science & Network Security / Vol. 23, No. 12 / pp. 101-106 / 2023
  • Chronic illnesses are among the most common serious problems affecting human health. Early diagnosis of chronic diseases can help avoid or mitigate their consequences, potentially decreasing mortality rates. Using machine learning algorithms to identify risk factors is a promising strategy. The issue with existing feature selection approaches is that each method provides a distinct set of features that affect model accuracy, and present methods do not perform well on huge multidimensional datasets. We introduce a novel model containing a feature selection approach that selects optimal features from big multidimensional datasets to provide reliable predictions of chronic illnesses without sacrificing data uniqueness [1]. To ensure the success of the proposed model, we balanced the classes by applying hybrid balanced-class sampling methods to the original dataset, along with data preprocessing and data transformation, to provide credible data for the training model. We ran and assessed our model on datasets with binary and multi-valued classes, using multiple datasets (Parkinson's, arrhythmia, breast cancer, kidney disease, diabetes). Suitable features are selected by a hybrid feature model consisting of LassoCV, decision tree, random forest, gradient boosting, AdaBoost, and stochastic gradient descent, with a vote taken over the attributes that are common outputs of these methods. The accuracy on the original dataset before applying the framework is recorded and evaluated against the accuracy on the reduced attribute set, and the results are shown separately to provide a comparison. Based on the result analysis, we conclude that the proposed model produced higher accuracy on multi-valued class datasets than on binary class datasets [1].
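The voting idea described above can be sketched as follows: each embedded or importance-based selector nominates features, and only features nominated by a majority are retained. The selectors roughly match those listed (LassoCV, decision tree, random forest, gradient boosting, AdaBoost, SGD), but the vote threshold, dataset, and settings are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch: majority voting over features nominated by several
# embedded/importance-based selectors.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LassoCV, SGDClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_breast_cancer(return_X_y=True)   # stand-in for one of the chronic-disease datasets

selectors = [
    SelectFromModel(LassoCV(max_iter=10000)),
    SelectFromModel(DecisionTreeClassifier(random_state=0)),
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SelectFromModel(GradientBoostingClassifier(random_state=0)),
    SelectFromModel(AdaBoostClassifier(random_state=0)),
    SelectFromModel(SGDClassifier(penalty="l1", random_state=0)),
]

votes = np.zeros(X.shape[1], dtype=int)
for sel in selectors:
    votes += sel.fit(X, y).get_support().astype(int)   # 1 vote per selector that keeps the feature

selected = np.where(votes >= 4)[0]          # keep features chosen by most selectors
print(f"{len(selected)} of {X.shape[1]} features kept:", selected)
```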