• Title/Summary/Keyword: Lasso 모형 (Lasso model)


Prediction of golf scores on the PGA tour using statistical models (PGA 투어의 골프 스코어 예측 및 분석)

  • Lim, Jungeun;Lim, Youngin;Song, Jongwoo
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.41-55 / 2017
  • This study predicts the average scores of the top 150 PGA golf players across 132 PGA Tour tournaments (2013-2015) using data mining techniques and statistical analysis. It also aims to predict the Top 10 and Top 25 players in 4 different playoffs. Linear and nonlinear regression methods were used to predict average scores. Stepwise regression, best subset selection, LASSO, ridge regression, and principal component regression were used as linear regression methods; tree, bagging, gradient boosting, neural network, random forest, and KNN were used as nonlinear regression methods. We found that the average score increases as fairway firmness, green height, or average maximum wind speed increases, and decreases as the number of one-putts, the scrambling variable, or the longest driving distance increases. All 11 models have low prediction error when predicting the average scores of the 2015 PGA tournaments, which were not included in the training set. However, the bagging and random forest models perform best among all models and have the highest prediction accuracy when predicting the Top 10 and Top 25 players in the 4 playoffs.
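The model comparison described in this abstract can be sketched with scikit-learn. The snippet below is an illustrative toy, using synthetic regression data in place of the PGA predictors, and covers only a subset of the 11 models:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# synthetic stand-in for tournament-level predictors of average score
X, y = make_regression(n_samples=400, n_features=20, n_informative=8,
                       noise=5.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

results = {}
for name, model in [("lasso", Lasso(alpha=1.0)), ("ridge", Ridge()),
                    ("bagging", BaggingRegressor(random_state=0)),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    results[name] = mean_squared_error(yte, model.fit(Xtr, ytr).predict(Xte))
    print(f"{name}: test MSE {results[name]:.1f}")
```

On linear synthetic data like this the penalized linear models win; on the paper's real data the abstract reports the opposite ranking, with bagging and random forest on top.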

Youtube Mukbang and Online Delivery Orders: Analysis of Impacts and Predictive Model (유튜브 먹방과 온라인 배달 주문: 영향력 분석과 예측 모형)

  • Choi, Sarah;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.119-133 / 2022
  • One of the most notable current features of the food industry is the growth of food delivery services. Another notable food-related culture, with the advent of Youtube, is the popularity of Mukbang, content that records eating. Against this background, this study focuses on two things. First, we examine the impact of Youtube Mukbang and the sentiment of Mukbang comments on the number of related food deliveries. Second, we build a predictive model of chicken delivery orders with machine learning methods. We used Youtube Mukbang comment data as well as weather-related data as the main independent variables; the dependent variable is the number of fried chicken delivery orders. The data cover June 3, 2015 to September 30, 2019, for a total of 1,580 observations. For the predictive modeling, we used machine learning methods such as linear regression, ridge, lasso, random forest, and gradient boosting. We found that the sentiment of Youtube Mukbang comments has an impact on the number of delivery orders, and the prediction model with Mukbang data outperformed existing models without it. We also suggest managerial implications for the food delivery service industry.

Value at Risk calculation using sparse vine copula models (성근 바인 코풀라 모형을 이용한 고차원 금융 자료의 VaR 추정)

  • An, Kwangjoon;Baek, Changryong
    • The Korean Journal of Applied Statistics / v.34 no.6 / pp.875-887 / 2021
  • Value at Risk (VaR) is the most popular measure of market risk. In this paper, we consider VaR estimation for a portfolio consisting of a variety of assets, based on the multivariate copula model known as the vine copula. In particular, we consider the sparse vine copula, which penalizes an excess of parameters. We show in a simulation study that sparsity indeed improves out-of-sample forecasting of VaR. An empirical analysis of 60 KOSPI stocks over the last five years also demonstrates that the sparse vine copula outperforms the regular vine copula model.
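The quantity being forecast here, VaR, is simply a lower quantile of the portfolio return distribution. A minimal sketch, using Gaussian returns as a stand-in for returns that would be simulated from a fitted vine copula (the copula fitting itself needs a dedicated package such as pyvinecopulib; all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# 5 assets, 1000 days of correlated returns -- a Gaussian stand-in for
# returns simulated from a fitted (sparse) vine copula model
cov = 0.0001 * (0.5 * np.eye(5) + 0.5 * np.ones((5, 5)))
returns = rng.multivariate_normal(np.zeros(5), cov, size=1000)
port = returns @ np.full(5, 0.2)       # equally weighted portfolio
var_95 = -np.quantile(port, 0.05)      # one-day 95% VaR, reported as a positive loss
print(f"95% VaR: {var_95:.4f}")
```

The copula's role in the paper is to model the dependence structure feeding this quantile, which is where sparsity helps out-of-sample.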

Joint penalization of components and predictors in mixture of regressions (혼합회귀모형에서 콤포넌트 및 설명변수에 대한 벌점함수의 적용)

  • Park, Chongsun;Mo, Eun Bi
    • The Korean Journal of Applied Statistics / v.32 no.2 / pp.199-211 / 2019
  • This paper is concerned with issues in finite mixture of regressions modeling, namely the simultaneous selection of the number of mixture components and of relevant predictors. We propose a penalized likelihood method for both mixture components and regression coefficients that enables the simultaneous identification of significant variables and the determination of important mixture components in mixture of regression models. To avoid over-fitting and bias problems, we applied the smoothly clipped absolute deviation (SCAD) penalty on the logarithm of the component probabilities, as suggested by Huang et al. (Statistica Sinica, 27, 147-169, 2013), as well as several well-known penalty functions for the regression coefficients. Simulation studies reveal that our method performs satisfactorily with well-known penalties such as SCAD, MCP, and the adaptive lasso.
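For reference, the three penalties the simulations compare can be written down directly. The formulas below follow the standard definitions (SCAD with the conventional a = 3.7, MCP with γ = 3; these defaults are conventions, not the paper's settings):

```python
import numpy as np

def lasso_pen(t, lam):
    # lasso: lam * |t|, linear everywhere (constant bias for large effects)
    return lam * np.abs(t)

def scad_pen(t, lam, a=3.7):
    # SCAD (Fan and Li, 2001): behaves like lasso near zero, flattens beyond a*lam
    t = np.abs(t)
    return np.where(t <= lam, lam * t,
           np.where(t <= a * lam,
                    (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))

def mcp_pen(t, lam, gamma=3.0):
    # MCP: penalty tapers to the constant gamma*lam^2/2 beyond gamma*lam
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t**2 / (2 * gamma),
                    gamma * lam**2 / 2)

grid = np.array([0.5, 1.0, 2.0, 10.0])
for f in (lasso_pen, scad_pen, mcp_pen):
    print(f.__name__, np.round(f(grid, 1.0), 3))
```

The key difference: lasso keeps penalizing large coefficients, while SCAD and MCP level off, which is what reduces estimation bias for strong effects.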

An empirical evidence of inconsistency of the ℓ1 trend filtering in change point detection (ℓ1 추세필터의 변화점 식별에 있어서의 비일치성)

  • Yu, Donghyeon;Lim, Johan;Son, Won
    • The Korean Journal of Applied Statistics / v.35 no.3 / pp.371-384 / 2022
  • The fused LASSO signal approximator (FLSA) can be applied to find change points in data with a piecewise constant mean structure. It is well known that the FLSA is inconsistent in change point detection; this inconsistency is due to its total-variation denoising penalty. The ℓ1 trend filter, a popular tool for recovering an underlying trend from data, can be used to identify change points in piecewise linear trends. Since the ℓ1 trend filter penalizes the sum of the absolute values of slope differences, it can be inconsistent in change point recovery, just as the FLSA is. However, there are few studies on the inconsistency of ℓ1 trend filtering. In this paper, we demonstrate the inconsistency of ℓ1 trend filtering with a numerical study.

Cox Model Improvement Using Residual Blocks in Neural Networks: A Study on the Predictive Model of Cervical Cancer Mortality (신경망 내 잔여 블록을 활용한 콕스 모델 개선: 자궁경부암 사망률 예측모형 연구)

  • Nang Kyeong Lee;Joo Young Kim;Ji Soo Tak;Hyeong Rok Lee;Hyun Ji Jeon;Jee Myung Yang;Seung Won Lee
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.260-268 / 2024
  • Cervical cancer is the fourth most common cancer in women worldwide; more than 604,000 new cases were reported in 2020 alone, resulting in approximately 341,831 deaths. The Cox regression model is widely adopted in cancer research, but because it assumes linearity, it faces limitations when nonlinear associations exist. To address this problem, this paper proposes ResSurvNet, a new model that improves the accuracy of cervical cancer mortality prediction using ResNet's residual learning framework. ResSurvNet outperformed the DNN, CPH, CoxLasso, Cox Gradient Boost, and RSF models compared in this study. This excellent predictive performance demonstrates great value for early diagnosis and for establishing treatment strategies in the management of cervical cancer patients, and represents significant progress in the field of survival analysis.
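The CPH baseline mentioned in this abstract is fit by maximizing the Cox partial likelihood. A minimal sketch of the no-ties (Breslow-form) log partial likelihood follows, illustrating the linear predictor that ResSurvNet replaces with a residual network; this is a generic textbook formula, not the paper's implementation:

```python
import numpy as np

def cox_partial_loglik(beta, times, events, X):
    """Cox log partial likelihood (no ties): sum over observed events of
    x_i.beta - log(sum over the risk set of exp(x_j.beta))."""
    eta = X @ beta
    ll = 0.0
    for i in np.where(events == 1)[0]:
        at_risk = times >= times[i]   # subjects still under observation at t_i
        ll += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return ll

# tiny illustrative data set: three subjects, all with observed events
times = np.array([1.0, 2.0, 3.0])
events = np.array([1, 1, 1])
X = np.array([[1.0], [0.0], [0.0]])
print(cox_partial_loglik(np.array([0.0]), times, events, X))  # equals -log(6)
```

At beta = 0 each event contributes minus the log of its risk-set size (here 3, 2, 1), so the value is -log 6; deep survival models keep this loss but learn eta nonlinearly.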

A study on variable selection and classification in dynamic analysis data for ransomware detection (랜섬웨어 탐지를 위한 동적 분석 자료에서의 변수 선택 및 분류에 관한 연구)

  • Lee, Seunghwan;Hwang, Jinsoo
    • The Korean Journal of Applied Statistics / v.31 no.4 / pp.497-505 / 2018
  • Attacks on computer systems using ransomware are very common all over the world. Since antivirus and detection methods are constantly improved in order to detect and mitigate ransomware, ransomware itself evolves just as quickly to avoid detection. Several new methods are implemented and tested in order to optimize protection against ransomware. In our work, 582 ransomware samples and 942 normal-software samples, along with 30,967 dynamic action-sequence variables, are used to detect ransomware efficiently. Several variable selection techniques combined with various machine-learning-based classification techniques are tried in order to protect systems from ransomware. Among the various combinations, chi-square variable selection with a random forest classifier gives the best detection rate and accuracy.
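The winning combination reported here, chi-square variable selection followed by a random forest, can be sketched with scikit-learn. The snippet uses synthetic binary "action occurred" features as a stand-in for the 30,967 action-sequence variables; all sizes and parameters are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for binary dynamic-analysis action features
X, y = make_classification(n_samples=600, n_features=200, n_informative=10,
                           random_state=0)
X = (X > 0).astype(int)            # chi2 requires non-negative features
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

sel = SelectKBest(chi2, k=20).fit(Xtr, ytr)    # chi-square screening on train data
clf = RandomForestClassifier(random_state=0).fit(sel.transform(Xtr), ytr)
acc = accuracy_score(yte, clf.predict(sel.transform(Xte)))
print(f"test accuracy: {acc:.3f}")
```

Note that the selector is fit on the training split only, so the screening step does not leak test labels.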

Case study: Selection of the weather variables influencing the number of pneumonia patients in Daegu Fatima Hospital (사례연구: 대구 파티마 병원 폐렴 입원 환자 수에 영향을 미치는 날씨 변수 선택)

  • Choi, Sohyun;Lee, Hag Lae;Park, Chungun;Lee, Kyeong Eun
    • Journal of the Korean Data and Information Science Society / v.28 no.1 / pp.131-142 / 2017
  • The number of hospital admissions for pneumonia tends to increase annually; moreover, pneumonia, the fifth leading cause of death among older adults, is one of the top diseases in terms of hospitalization rate. Although pneumonia is mainly caused by bacteria and viruses, the weather is also related to its occurrence. The candidate weather variables are humidity, amount of sunshine, diurnal temperature range, daily mean temperature, and particulate matter density. Because the onset of pneumonia can be delayed, lagged weather variables are also considered, along with year, holiday, and seasonal effects. We select the variables that influence the occurrence of pneumonia using penalized generalized linear models.
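A hedged sketch of a penalized count-data GLM with a lagged weather covariate: scikit-learn's PoissonRegressor applies an ℓ2 (ridge) penalty, whereas selection-oriented penalized GLMs typically use lasso-type penalties, and every variable below is a synthetic stand-in, not the hospital data:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
n = 500
humidity = rng.uniform(30, 90, n)
temp = rng.normal(10, 8, n)
temp_lag7 = np.roll(temp, 7)       # 7-day lagged copy (wrapped rows dropped below)
X = np.column_stack([humidity, temp, temp_lag7])[7:]
# simulated daily admission counts: humidity raises them, lagged temperature lowers them
mu = np.exp(1.0 + 0.02 * X[:, 0] - 0.03 * X[:, 2])
y = rng.poisson(mu)

model = PoissonRegressor(alpha=1.0).fit(X, y)  # alpha sets the ridge penalty strength
print(np.round(model.coef_, 3))
```

The fitted coefficients recover the simulated signs (positive humidity effect, negative lagged-temperature effect) while the same-day temperature, which has no true effect, is shrunk toward zero.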

Hierarchically penalized sparse principal component analysis (계층적 벌점함수를 이용한 주성분분석)

  • Kang, Jongkyeong;Park, Jaeshin;Bang, Sungwan
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.135-145 / 2017
  • Principal component analysis (PCA) describes the variation of multivariate data in terms of a set of uncorrelated variables. Since each principal component is a linear combination of all variables and the loadings are typically non-zero, it is difficult to interpret the derived principal components. Sparse principal component analysis (SPCA) is a specialized technique using the elastic net penalty function to produce sparse loadings in principal component analysis. When data are structured by groups of variables, it is desirable to select variables in a grouped manner. In this paper, we propose a new PCA method to improve variable selection performance when variables are grouped, which not only selects important groups but also removes unimportant variables within identified groups. To incorporate group information into model fitting, we consider a hierarchical lasso penalty instead of the elastic net penalty in SPCA. Real data analyses demonstrate the performance and usefulness of the proposed method.
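The contrast between ordinary and sparse PCA loadings can be illustrated with scikit-learn's SparsePCA, which uses an ℓ1 penalty rather than this paper's hierarchical lasso; the grouped synthetic data below is an assumption for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
# two latent factors, each driving its own group of five variables
f = rng.normal(size=(300, 2))
X = np.hstack([np.outer(f[:, 0], np.ones(5)), np.outer(f[:, 1], np.ones(5))])
X += 0.1 * rng.normal(size=X.shape)

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)
n_zero_pca = int(np.sum(np.abs(pca.components_) < 1e-8))
n_zero_spca = int(np.sum(np.abs(spca.components_) < 1e-8))
print(f"exact-zero loadings -- PCA: {n_zero_pca}, SparsePCA: {n_zero_spca}")
```

Ordinary PCA loadings are dense, so every variable enters every component; the sparse variant zeroes loadings on the irrelevant group, which is the interpretability gain the paper extends to group-level selection.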

Development of a Machine-Learning Predictive Model for First-Grade Children at Risk for ADHD (머신러닝 분석을 활용한 초등학교 1학년 ADHD 위험군 아동 종단 예측모형 개발)

  • Lee, Dongmee;Jang, Hye In;Kim, Ho Jung;Bae, Jin;Park, Ju Hee
    • Korean Journal of Childcare and Education / v.17 no.5 / pp.83-103 / 2021
  • Objective: This study aimed to develop a longitudinal predictive model that identifies first-grade children at risk for ADHD and to investigate the factors that predict the probability of belonging to the at-risk group, using machine learning. Methods: Data on 1,445 first-grade children from the 1st, 3rd, 6th, 7th, and 8th waves of the Korean Children's Panel were analyzed. The outcome was membership in the at-risk or non-risk group for ADHD, as classified by the CBCL DSM-ADHD scale; prenatal factors as well as developmental factors during infancy and early childhood were used as inputs. Results: The model that best classified the at-risk and non-risk groups was the LASSO model. The input factors that increased the probability of being in the at-risk group were a temperament of negative emotionality, communication abilities, gross motor skills, social competence, and academic readiness. Conclusion/Implications: The outcomes indicate that children who show specific risk indicators during infancy and early childhood are likely to be classified as at risk for ADHD when entering elementary school. The results may enable parents and clinicians to identify children with ADHD early by observing early signs and thus to provide interventions as early as possible.
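A LASSO classifier of the kind selected in this study can be approximated by ℓ1-penalized logistic regression. The sketch below uses synthetic features in place of the panel-survey predictors, with an illustrative penalty strength:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic stand-in for early-development predictors of the ADHD risk group
X, y = make_classification(n_samples=1445, n_features=30, n_informative=5,
                           random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
n_selected = int(np.sum(clf.coef_ != 0))
print(f"{n_selected} of {X.shape[1]} predictors kept by the l1 penalty")
```

The ℓ1 penalty drives most coefficients exactly to zero, so the surviving predictors form the kind of short, interpretable risk-factor list the abstract reports.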