• Title/Summary/Keyword: predictive power


A study on solar radiation prediction using medium-range weather forecasts (중기예보를 이용한 태양광 일사량 예측 연구)

  • Sujin Park; Hyojeoung Kim; Sahm Kim
    • The Korean Journal of Applied Statistics / v.36 no.1 / pp.49-62 / 2023
  • Solar energy accounts for a rapidly growing share of power generation and is the subject of continuing development and investment. As renewable-energy policies such as the Green New Deal and the installation of home solar panels expand, the supply of solar power in Korea is gradually increasing, and research on accurately predicting power generation is actively underway. Because solar radiation is the factor that most strongly influences power generation forecasts, accurate solar radiation prediction is essential. The main contribution of this study is that it predicts solar radiation using medium-range forecast weather data, which previous studies have not used. In this paper, we combine multiple linear regression, KNN, random forest, and SVR models with K-means clustering, and predict hourly solar radiation by calculating a probability density function for each cluster. Before applying the medium-range forecast data, mean absolute error (MAE) and root mean squared error (RMSE) were used as indicators to compare the models' predictions. The data from March 1, 2017 to February 28, 2022 were converted into daily values to match the medium-range forecast format. Comparing predictive performance, the best method was to predict daily solar radiation with a random forest, group dates with similar climate factors, and compute a probability density function of solar radiation for each cluster. When the fitted model was then applied to the medium-range forecast data, the prediction error increased with the forecast date, which appears to be caused by prediction errors in the medium-range weather forecasts themselves. Future studies should add exogenous variables such as precipitation among the weather factors available in the medium-range forecast data, or apply time-series clustering techniques.
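A minimal sketch of this cluster-then-predict idea (synthetic data, assumed feature names, and an assumed cluster count; not the authors' code): a random forest predicts the daily radiation total, K-means groups days with similar climate factors, and an empirical hourly profile per cluster stands in for the fitted probability density function.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_days = 365
features = ["temp", "humidity", "cloud"]

# Synthetic daily weather factors and hourly radiation (stand-ins for real data).
daily = pd.DataFrame({
    "date": np.arange(n_days),
    "temp": rng.normal(15, 8, n_days),
    "humidity": rng.uniform(20, 90, n_days),
    "cloud": rng.uniform(0, 1, n_days),
})
hours = np.tile(np.arange(24), n_days)
dates = np.repeat(daily["date"].to_numpy(), 24)
clouds = np.repeat(daily["cloud"].to_numpy(), 24)
hourly = pd.DataFrame({
    "date": dates,
    "hour": hours,
    "radiation": np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None) * (1 - 0.8 * clouds),
})
daily["daily_radiation"] = hourly.groupby("date")["radiation"].sum().to_numpy()

# 1) Random forest predicts the daily radiation total from weather factors.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    daily[features], daily["daily_radiation"])

# 2) K-means groups days with similar climate factors.
km = KMeans(n_clusters=4, n_init=10, random_state=0)
daily["cluster"] = km.fit_predict(daily[features])

# 3) Per-cluster hourly share of the daily total (an empirical stand-in for
#    the per-cluster probability density function of radiation over the day).
hourly = hourly.merge(daily[["date", "cluster"]], on="date")
hourly["share"] = hourly["radiation"] / hourly.groupby("date")["radiation"].transform("sum")
profile = hourly.groupby(["cluster", "hour"])["share"].mean()

# Hourly forecast = predicted daily total x the matching cluster's hourly profile.
new_day = daily[features].iloc[[-1]]
hourly_pred = rf.predict(new_day)[0] * profile.loc[int(km.predict(new_day)[0])]
print(hourly_pred.round(3))
```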

Moderating effects of perceived behavioral control on the relationships among exhibition sales promotions and purchase intention (전시회 판매촉진 활동이 지각된 행동통제의 조절효과와 구매의도에 미치는 영향연구)

  • Kim, Hyun Su; Kim, Mi So; Kim, Chul Won
    • Korea Science and Art Forum / v.31 / pp.105-118 / 2017
  • The purpose of this study is to examine the effectiveness of exhibition sales promotions on the purchase intention of rational visitors. Perceived behavioral control, which moderates the relationship between sales promotions and purchase intention, is used as a predictor of unexpected impulse purchases or of negative purchase intention contrary to the exhibitor's intent. A total of 315 visitors who experienced the sales promotions of G-Star 2016 in Busan responded to the questionnaire, and 259 forms were used in the analysis. The main results of this study were as follows. First, except for value-added promotion, all sales promotions had a positive impact on visitors' purchase intention. Second, analyzing the moderating effects of perceived behavioral control, which consists of control belief and perceived power, showed that control belief moderated the effect of promotions such as price-off and education on purchase intention, while perceived power moderated the effect of promotions such as escape and entertainment on purchase intention. In short, the degree of perceived behavioral control critically affects the effectiveness of exhibition sales promotions. These results yield new insights into developing sales promotion strategies tailored to different types of visitors.

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki; Shin, Taeksoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.111-124 / 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction fall into model-design studies, chiefly for cardiovascular disease, and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease with data mining techniques were predominant; domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia is as important a chronic disease as hypertension and diabetes, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known for their excellent predictive power. To achieve this, we used the 2012 Korea Health Panel data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalized, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, giving 2,176 subjects in total. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except for length of smoking) were expressed as dummy variables relative to a reference group, and six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, the C4.5 decision tree algorithm was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: the input variables selected for SVM were age, marital status, education level, economic activity, smoking period, and physical activity status, and those selected for the artificial neural network were age, marital status, and education level. Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, measuring classification performance with the TP rate and precision. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected through the stepwise method was slightly higher than that of models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as inputs. For classification models based on the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%.
Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as input variables for an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms, which are known for high accuracy. The classification accuracy of stacking as a meta-learner was higher than that of the other meta-learning algorithms; however, the predictive performance of the proposed meta-learning algorithm equals that of the best-performing single model, the SVM (88.6%). The limitations of this study are as follows. Although various variable selection methods were tried, most variables used in the study were categorical dummy variables; with many categorical variables, models such as decision trees may fit better than models such as neural networks, so the results might differ if continuous variables were used. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in improving model accuracy by applying various variable selection techniques. We also expect the proposed model to be effective for the prevention and management of hyperlipidemia.
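A minimal sketch of the stacking architecture described in this abstract (synthetic data standing in for the Korea Health Panel variables; not the authors' pipeline): SVM and MLP base learners whose out-of-fold predictions feed an SVM meta-classifier.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the selected input variables (age, BMI, etc.).
X, y = make_classification(n_samples=2176, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=SVC(),   # SVM as the meta-classifier
    cv=5,                    # out-of-fold base predictions feed the meta level
)
stack.fit(X_tr, y_tr)
print("stacking accuracy:", stack.score(X_te, y_te))
```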

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults affect not only stakeholders such as the managers, employees, creditors, and investors of the failed company but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models; as a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even afterwards, analyses of corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to reflect diverse interests and to avoid a sudden, total collapse such as the Lehman Brothers case of the global financial crisis. The key variables driving corporate defaults change over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have shifted, and Grice (2001) likewise examined the changing importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models and mostly do not consider changes that occur over time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated by a time series algorithm that reflects dynamic change. Centering on the global financial crisis, which had a significant impact on Korea, this study uses ten years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering seven, two, and one year, respectively. To construct a bankruptcy model that is consistent over time, we first train a time series deep learning model using data from before the financial crisis (2000-2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008); the resulting model shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then re-estimated on the combined training and validation data (2000-2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model trained over these nine years is evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three variable bundles. The definition of bankruptcy follows Lee (2015), and the independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data pose difficulties of nonlinear variables, multicollinearity, and lack of data: the logit model addresses nonlinearity, the Lasso regression model mitigates multicollinearity, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although research on corporate default prediction with time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and is more effective in predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, we hope this work serves as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
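A bare-bones sketch of the kind of sequence model described here: an LSTM over multi-year financial-ratio sequences, using synthetic data and assumed dimensions rather than the authors' architecture or variable set.

```python
import numpy as np
import tensorflow as tf

# Assumed panel dimensions: firms x years x financial ratios (synthetic data).
n_firms, n_years, n_ratios = 1000, 7, 20
X = np.random.rand(n_firms, n_years, n_ratios)     # financial-ratio sequences
y = np.random.randint(0, 2, size=n_firms)          # 1 = default, 0 = non-default

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_years, n_ratios)),
    tf.keras.layers.LSTM(32),                      # summarizes the time series
    tf.keras.layers.Dense(1, activation="sigmoid") # probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2, verbose=0)
```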

Effects of Nutritional Status, Activities Daily Living, Instruments Activities Daily Living, and Social Network on the Life Satisfaction of the Elderly in Home (재가노인의 영양상태, 일상생활 수행능력, 도구적 일상생활 수행능력 및 사회적 연결망이 삶의 만족도에 미치는 영향)

  • Yang, Kyoung Mi
    • Journal of the Korean Applied Science and Technology / v.36 no.4 / pp.1472-1484 / 2019
  • This study aimed to verify the effects of nutritional status, K-ADL, K-IADL, and social network on the life satisfaction of the elderly living at home. A total of 213 subjects participated, with an average age of 71.38±5.59 years. Using SPSS 21.0, this study examined differences between variables according to general characteristics and then verified the correlations among the independent variables of nutritional status, K-ADL, K-IADL, social network (family and friends networks), and life satisfaction. Stepwise multiple regression analysis was conducted to identify the factors affecting the life satisfaction of the elderly at home. Among the general characteristics, life satisfaction showed statistically significant differences according to education (F=5.280, p=.002), economic condition (F=22.407, p<.001), monthly income (F=3.181, p=.015), and subjective health status (F=14.933, p<.001). In the correlation analysis, life satisfaction was positively correlated with family networks (r=.268, p<.001) and friends networks (r=.286, p<.001), and negatively correlated with nutritional status (r=-.222, p=.001), K-IADL (r=-.235, p=.001), and interdependent social support (r=-.283, p<.001). The predictors of life satisfaction were economic condition (β=.358, p<.001), subjective health status (β=.245, p<.001), interdependent social support (β=-.158, p=.009), and K-IADL (β=-.153, p=.012), with an explanatory power of 30.1%; the regression model was statistically significant (F=23.778, p<.001). Based on these results, programs that maintain and improve the health of the elderly and financial support for the elderly in economic hardship are needed to improve the life satisfaction of the elderly at home, together with concrete measures to vitalize community-connected activities for interdependent social support.

Online news-based stock price forecasting considering homogeneity in the industrial sector (산업군 내 동질성을 고려한 온라인 뉴스 기반 주가예측)

  • Seong, Nohyoon; Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.1-19 / 2018
  • Since forecasting stock movements is an important issue both academically and practically, studies on stock price prediction have been actively conducted. Stock price forecasting research is classified by structured and unstructured data, and in detail into technical analysis, fundamental analysis, and media-effect analysis. In the big data era, research that combines big data with stock price prediction is actively underway, relying mainly on machine learning techniques. Methods that incorporate media effects have recently attracted attention, and among them, studies that analyze online news to forecast stock prices are becoming mainstream. Previous studies predicting stock prices from online news mostly perform sentiment analysis, building a separate corpus for each company and constructing a dictionary that predicts stock prices by recording responses to past price movements. Such studies therefore examine the impact of online news on individual companies; for example, stock movements of Samsung Electronics are predicted using only online news about Samsung Electronics. More recently, methods that consider influences among highly related companies have also been studied; for example, stock movements of Samsung Electronics are predicted using news about both Samsung Electronics and a closely related company such as LG Electronics. These studies examine the effect of news from an industrial sector assumed to be homogeneous on an individual company, with homogeneous industries defined by the Global Industry Classification Standard; in other words, the analyses assume that sectors defined by this standard are homogeneous. However, existing studies do not account for influential, highly related companies or for heterogeneity within the same Global Industry Classification Standard sectors. Our examination of various sectors shows that some of them are not homogeneous groups. To overcome these limitations, our study proposes a methodology that reflects the heterogeneous effects of the industrial sector on the stock price by applying k-means clustering. Multiple Kernel Learning is commonly used to integrate data with different characteristics: it combines several kernels, each of which receives and learns from different data. To incorporate the effects of the target firm and its related firms simultaneously, we used Multiple Kernel Learning, assigning each kernel the financial-news variables of an industrial group obtained by dividing the target firm's sector with k-means cluster analysis. To show that the proposed methodology is appropriate, experiments were conducted on three years of online news and stock prices. The results are as follows. (1) We confirmed that information from the industrial sectors related to the target company contains meaningful information for predicting its stock movements, and that the machine learning algorithm has better predictive power when the news of related companies is considered together with the target company's news.
(2) It is important to vary the number of clusters according to the level of homogeneity in the industrial sector: when stock prices within a sector are homogeneous, the relational effect should be used at the level of the industry group, without clustering or with a small number of clusters; when they are heterogeneous, the firms should be clustered into groups. This study contributes by showing that firms classified under the Global Industry Classification Standard exhibit heterogeneity, by suggesting that relevance should be defined through machine learning and statistical analysis rather than by the classification standard alone, and by demonstrating the efficiency of a prediction model that reflects this heterogeneity.
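A rough sketch of combining target-firm and peer-group news features through separate kernels, on synthetic data. The kernel weights are fixed here rather than learned, so this only approximates full Multiple Kernel Learning, and the peer group would in practice come from k-means clustering of the firms.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_days = 500
target_news = rng.normal(size=(n_days, 20))   # news features of the target firm
peer_news = rng.normal(size=(n_days, 20))     # aggregated news of its cluster peers
y = rng.integers(0, 2, size=n_days)           # next-day up/down label

# One kernel per news source; combining them mimics the multi-kernel idea.
K_target = rbf_kernel(target_news)
K_peer = rbf_kernel(peer_news)
K = 0.6 * K_target + 0.4 * K_peer             # fixed kernel weights (assumption)

clf = SVC(kernel="precomputed")
split = 400                                   # simple chronological split
clf.fit(K[:split, :split], y[:split])
print("directional accuracy:", clf.score(K[split:, :split], y[split:]))
```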

A Validation Study for the Practical Use of Screening Scale for Potential Drug-use Adolescents(SPDA) (청소년 약물사용 잠재군 선별척도(SPDA) 활용을 위한 타당화 연구)

  • Lee, Ki-Young; Kim, Young-Mi; Im, Hyuk; Park, Mi-Jin; Park, Sun-Hee
    • Korean Journal of Social Welfare / v.57 no.3 / pp.305-335 / 2005
  • This paper presents the results of a validation study of the SPDA (Screening Scale for Potential Drug-use Adolescents), first created in 2003 and further developed during 2004. The SPDA aims to screen adolescents in the early stage of drug use and to help practitioners take a preventive approach. A total of 4,307 junior and senior high school students were selected as the primary research subjects by stratified quota sampling, and 305 adolescents on probation were selected as a comparison group and asked to answer the same questionnaire. The reliability of the SPDA was 0.914, an improvement on the previous year's 0.898. Exploratory and confirmatory factor analyses testing construct validity showed that the SPDA can be divided into seven factors and that each factor structure forms a proper measurement model with a high level of fit and high factor loadings. Discriminant analysis testing predictive validity confirmed that the SPDA classifies adolescents well by frequency of drug use, with a hit ratio of 86.6 percent (78.8% and 87.4% for junior and senior high school students, respectively). For the concurrent validity test, the Hare Home Self-Esteem Scale, the Hare School Self-Esteem Scale, and the Zuckerman-Kuhlman Sensation-Seeking Scale were used, and all three had significant Pearson correlations with the SPDA. The known-groups validity test indicated that the SPDA adequately distinguishes adolescents on probation from those in school, with a hit ratio of 71.8 percent. The cut-off point for detecting adolescents at high risk of substance use was 77, corresponding approximately to a T score of 55 (0.5 SD) and satisfying the sensitivity, specificity, and efficiency criteria.


Impact of Sulfur Dioxide Impurity on Process Design of $CO_2$ Offshore Geological Storage: Evaluation of Physical Property Models and Optimization of Binary Parameter (이산화황 불순물이 이산화탄소 해양 지중저장 공정설계에 미치는 영향 평가: 상태량 모델의 비교 분석 및 이성분 매개변수 최적화)

  • Huh, Cheol; Kang, Seong-Gil; Cho, Mang-Ik
    • Journal of the Korean Society for Marine Environment & Energy / v.13 no.3 / pp.187-197 / 2010
  • Carbon dioxide Capture and Storage (CCS) is regarded as one of the most promising options for responding to climate change. CCS is a three-stage process consisting of capturing carbon dioxide ($CO_2$), transporting the $CO_2$ to a storage location, and isolating it from the atmosphere over the long term to mitigate carbon emissions. Up to now, process design for $CO_2$ marine geological storage has mainly assumed pure $CO_2$. Unfortunately, the $CO_2$ mixtures captured from power plants and steel mills contain many impurities, such as $N_2$, $O_2$, Ar, $H_2O$, $SO_2$, and $H_2S$. Even a small amount of impurities can change the thermodynamic properties and thereby significantly affect the compression, purification, transport, and injection processes. To design a reliable $CO_2$ marine geological storage system, the impact of these impurities on the whole CCS process must be analyzed at the initial design stage. The purpose of the present paper is to compare and analyze the relevant physical property models, including the BWRS, PR, PRBM, RKS, and SRK equations of state and the NRTL-RK model, which are crucial tools for numerical process simulation. To evaluate the predictive accuracy of the equations of state for the $CO_2$-$SO_2$ mixture, we compared numerical calculations with reference experimental data. In addition, an optimum binary parameter accounting for the interaction of $CO_2$ and $SO_2$ molecules was suggested based on the mean absolute percent error. In conclusion, we propose the most reliable physical property model, with an optimized binary parameter, for designing the $CO_2$-$SO_2$ mixture marine geological storage process.
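A conceptual sketch of tuning a binary interaction parameter by minimizing the mean absolute percent error against experimental data. The bubble-pressure function below is a dummy stand-in (a real implementation would solve the chosen cubic equation of state with its mixing rule), and the data values are purely illustrative, not measurements.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative (not measured) CO2-SO2 data: CO2 mole fraction and pressure [bar].
x_co2 = np.array([0.80, 0.85, 0.90, 0.95])
P_exp = np.array([45.0, 48.0, 52.0, 56.0])

def predict_bubble_pressure(x, kij):
    # Dummy stand-in for an equation-of-state bubble-point calculation
    # (e.g. a cubic EOS with a k_ij mixing rule); replace with the real model.
    return 60.0 * x + 40.0 * kij * x * (1.0 - x)

def mape(kij):
    # Mean absolute percent error between calculated and experimental pressures.
    P_calc = predict_bubble_pressure(x_co2, kij)
    return float(np.mean(np.abs((P_calc - P_exp) / P_exp)) * 100.0)

# Choose the binary parameter that minimizes the MAPE over a plausible range.
result = minimize_scalar(mape, bounds=(-0.3, 0.3), method="bounded")
print("optimized k_ij:", round(result.x, 4), "MAPE (%):", round(result.fun, 2))
```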

Classification Algorithm-based Prediction Performance of Order Imbalance Information on Short-Term Stock Price (분류 알고리즘 기반 주문 불균형 정보의 단기 주가 예측 성과)

  • Kim, S.W.
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.157-177 / 2022
  • Investors trade stocks while watching, in real time, the order information submitted by domestic and foreign investors through the Limit Order Book (the so-called "price current" screen) provided by securities firms. Is the order information released in the Limit Order Book useful for stock price prediction? This study analyzes whether order imbalances, which appear when investors' buy and sell orders are concentrated on one side during intraday trading, are significant predictors of future stock price movement. Using classification algorithms, this study improves the accuracy with which order imbalance information predicts the short-term price trend, namely whether the day's closing price moves up or down. Day trading strategies based on the price trends predicted by the classification algorithms are proposed, and their trading performance is analyzed empirically. Five-minute KOSPI200 Index Futures data were analyzed over 4,564 days from January 19, 2004 to June 30, 2022. The results of the empirical analysis are as follows. First, order imbalance information has a significant impact on current stock prices. Second, the order imbalance information observed in the early morning has significant forecasting power for the price trend from the early morning to the market close. Third, the Support Vector Machines algorithm showed the highest prediction accuracy for the day's closing price trend using order imbalance information, at 54.1%. Fourth, order imbalance information measured earlier in the day had higher prediction accuracy than order imbalance information measured later in the day. Fifth, the day trading strategies based on the classification algorithms' predicted price trends outperformed the benchmark trading strategy. Sixth, except for the K-Nearest Neighbor algorithm, all strategies using the classification algorithms showed higher average total profits than the benchmark strategy. Seventh, the strategies based on the Logistic Regression, Random Forest, Support Vector Machines, and XGBoost predictions also outperformed the benchmark in the Sharpe ratio, which evaluates both profitability and risk. This study differs academically from existing work in that it documents the economic value of the total buy and sell order volume information within the Limit Order Book, and its empirical results are valuable to market participants from a trading perspective. Future studies should improve trading strategy performance with more accurate price predictions by extending the analysis to deep learning models, which are being actively studied for stock price prediction.
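A minimal sketch of the classification setup described here, on synthetic data; the actual KOSPI200 futures data, order-imbalance definition, and feature set are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_days = 4564
# Assumed feature: morning order imbalance =
#   (buy volume - sell volume) / (buy volume + sell volume)
imbalance = rng.uniform(-1.0, 1.0, size=(n_days, 1))
# Synthetic label with only a weak link to the imbalance, mimicking the
# modest (~54%) directional accuracy reported above.
close_up = (imbalance[:, 0] + rng.normal(scale=3.0, size=n_days) > 0).astype(int)

# Chronological split: train on earlier days, test on later days.
X_tr, X_te, y_tr, y_te = train_test_split(imbalance, close_up,
                                          test_size=0.2, shuffle=False)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("directional accuracy:", clf.score(X_te, y_te))
```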

A Longitudinal Validation Study of the Korean Version of PCL-5(Post-traumatic Stress Disorder Checklist for DSM-5) (PCL-5(DSM-5 기준 외상 후 스트레스 장애 체크리스트) 한국판 종단 타당화 연구)

  • Lee, DongHun; Lee, DeokHee; Kim, SungHyun; Jung, DaSong
    • Korean Journal of Culture and Social Issue / v.28 no.2 / pp.187-217 / 2022
  • The aim of this study is to examine the psychometric properties of the Korean version of the Post-traumatic Stress Disorder Checklist for DSM-5 (PCL-5). For this purpose, online surveys were conducted twice at a one-year interval, with data from 1,077 Korean adults at time 1 and 563 Korean adults at time 2. First, confirmatory factor analysis comparing the fit of the 1-, 4-, 6-, and 7-factor models showed that the 4-, 6-, and 7-factor models had acceptable fit, with the best fit in the order of the 7-, 6-, and 4-factor models. Second, the internal consistency, omega coefficient, construct validity, average variance extracted, and test-retest reliability results were all satisfactory. Third, a correlation analysis with the K-PC-PTSD-5 and the sub-factors of the BSI-18 was conducted to check the validity of the Korean version of the PCL-5; positive correlations were found with both. Fourth, hierarchical multiple regression was performed to examine whether the Korean version of the PCL-5 predicts future PTSD, depression, anxiety, and somatization; the PCL-5 measured at time 1 significantly predicted PTSD, depression, anxiety, and somatization symptoms at time 2. Fifth, an ROC curve analysis confirmed the discriminant power of the PCL-5 for screening PTSD symptom groups, and the best cut-off score was suggested. This longitudinal validation shows that the Korean version of the PCL-5 is a reliable and valid measure for Korean adults. The predictive validity results indicate that it predicts not only PTSD symptoms but also related symptoms such as depression, anxiety, and somatization. This study also differs from previous validation studies of PTSD measures in that it suggests a cut-off score to help identify PTSD symptom groups.
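A small sketch of how a screening cut-off can be derived from an ROC curve, as in the fifth analysis above. The scores are synthetic, and Youden's J is used here as one common criterion, which may differ from the authors' exact rule.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
# Synthetic PCL-5 total scores (0-80) for non-PTSD and PTSD groups.
scores = np.concatenate([rng.normal(20, 10, 400), rng.normal(45, 12, 100)])
labels = np.concatenate([np.zeros(400), np.ones(100)])
scores = np.clip(scores, 0, 80)

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                  # maximize Youden's J = TPR - FPR
print("suggested cut-off:", thresholds[best])
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```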