• Title/Summary/Keyword: Robust Optimization

Central Sarcopenia, Frailty and Comorbidity as Predictor of Surgical Outcome in Elderly Patients with Degenerative Spine Disease

  • Kim, Dong Uk;Park, Hyung Ki;Lee, Gyeoung Hae;Chang, Jae Chil;Park, Hye Ran;Park, Sukh Que;Cho, Sung Jin
    • Journal of Korean Neurosurgical Society / v.64 no.6 / pp.995-1003 / 2021
  • Objective : People are living longer and the elderly population continues to increase. The incidence of degenerative spinal diseases (DSDs) in the elderly population is quite high. Therefore, we are facing more cases of DSD and offering more surgical solutions to geriatric patients. Understanding the significance of frailty and central sarcopenia as risk factors for spinal surgery in elderly patients will be helpful in improving surgical outcomes. We conducted a retrospective cohort analysis of prospectively collected data to assess the impact of preoperative central sarcopenia, frailty, and comorbidity on surgical outcome in elderly patients with DSD. Methods : We conducted a retrospective analysis of patients who underwent elective spinal surgery from January 1, 2019 to September 30, 2020 at our hospital. We included patients aged 65 and over who underwent surgery on the thoracic or lumbar spine and were diagnosed with DSD. Central sarcopenia was assessed using the psoas:L4 vertebral index (PLVI), calculated from the cross-sectional area of the psoas muscle and dichotomized at the 50th percentile. We used the Korean version of the fatigue, resistance, ambulation, illnesses, and loss of weight (K-FRAIL) scale to measure frailty. Comorbidity was confirmed and scored using the Charlson Comorbidity Index (CCI). As tools for measuring surgical outcome, we used the Clavien-Dindo (CD) classification for postoperative complications and the length of stay (LOS). Results : This study included 85 patients (35 males and 50 females). The mean age was 74.05±6.47 years. On the K-FRAIL scale, four patients were scored as robust, 44 as pre-frail, and 37 as frail. The mean PLVI was 0.61±0.19. According to the CD classification, 50 patients were classified as grade 1, 19 as grade 2, and four as grade 4. The mean LOS was 12.35±8.17 days. Multivariate stepwise regression analysis showed that postoperative complications were significantly associated with surgical invasiveness and the K-FRAIL scale. LOS was significantly associated with surgical invasiveness and CCI. The K-FRAIL scale showed a significant correlation with CCI and PLVI. Conclusion : The present study demonstrates that frailty, comorbidity, and surgical invasiveness are important risk factors for postoperative complications and LOS in elderly patients with DSD. Preoperative recognition of these factors may be useful for perioperative optimization, risk stratification, and patient counseling.
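The abstract above turns on two quantitative ingredients: the PLVI ratio used to grade central sarcopenia and a multivariate stepwise regression relating preoperative factors to outcomes such as LOS. The Python sketch below illustrates both on synthetic data; the variable names, the fabricated cohort values, and the single OLS fit (standing in for the full stepwise procedure) are illustrative assumptions, not the study's actual data or code.

```python
# Illustrative sketch only: PLVI computation and a multivariate linear model
# for length of stay (LOS). The cohort below is synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def plvi(psoas_area_mm2, l4_body_area_mm2):
    """Psoas:L4 vertebral index = psoas cross-sectional area / L4 body area."""
    return psoas_area_mm2 / l4_body_area_mm2

rng = np.random.default_rng(0)
n = 85                                       # cohort size from the abstract; values are synthetic
psoas_area = rng.normal(1100, 250, n)        # mm^2, synthetic
l4_area = rng.normal(1800, 200, n)           # mm^2, synthetic
df = pd.DataFrame({
    "invasiveness": rng.integers(2, 12, n),  # surgical invasiveness index
    "cci": rng.integers(0, 6, n),            # Charlson Comorbidity Index
    "k_frail": rng.integers(0, 6, n),        # K-FRAIL score (0-5)
    "plvi": plvi(psoas_area, l4_area),       # psoas:L4 vertebral index
})
df["los"] = 5 + 0.8 * df["invasiveness"] + 1.2 * df["cci"] + rng.normal(0, 3, n)

# One multivariate OLS fit; a stepwise procedure would add or drop predictors
# iteratively by p-value or AIC.
X = sm.add_constant(df[["invasiveness", "cci", "k_frail", "plvi"]])
print(sm.OLS(df["los"], X).fit().summary())
```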

Intelligent Transportation System (ITS) research optimized for autonomous driving using edge computing (엣지 컴퓨팅을 이용하여 자율주행에 최적화된 지능형 교통 시스템 연구(ITS))

  • Sunghyuck Hong
    • Advanced Industrial SCIence / v.3 no.1 / pp.23-29 / 2024
  • This study examines the transformative potential of edge computing for enhancing Intelligent Transportation Systems (ITS) in support of autonomous driving. The ability of edge computing to process large volumes of data locally and in real time is identified as essential for meeting the demanding requirements of autonomous vehicles, including rapid decision-making and stronger safety guarantees. The study explores the synergy between edge computing and existing ITS infrastructure, showing how localized data processing can substantially reduce latency and thereby improve the responsiveness of autonomous vehicles. It further examines the deployment of edge servers, sensor arrays, and Vehicle-to-Everything (V2X) communication technologies as components of a robust framework for real-time traffic management, collision avoidance, and dynamic route optimization. Finally, the research addresses the principal challenges of incorporating edge computing into ITS, including security, data integration, and system scalability, and offers insights into viable solutions and directions for future research.
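To make the latency argument concrete, the sketch below shows one edge-side processing cycle that evaluates V2X-style position messages locally (for a simple collision warning) rather than round-tripping them to a remote cloud. The message format, the time-to-collision heuristic, and the 3-second threshold are assumptions for illustration; the paper does not specify an implementation.

```python
# Minimal sketch of edge-side V2X processing: the edge node flags vehicle pairs
# on a collision course within one local processing cycle. Message fields and
# thresholds are illustrative assumptions.
from __future__ import annotations
import math
import time
from dataclasses import dataclass

@dataclass
class V2XMessage:
    vehicle_id: str
    x: float    # metres, local map frame
    y: float
    vx: float   # metres/second
    vy: float

def time_to_collision(a: V2XMessage, b: V2XMessage) -> float:
    """Rough closing-time estimate between two vehicles (inf if diverging)."""
    rx, ry = b.x - a.x, b.y - a.y
    rvx, rvy = b.vx - a.vx, b.vy - a.vy
    closing = -(rx * rvx + ry * rvy)
    if closing <= 0:
        return math.inf
    return (rx * rx + ry * ry) / closing

def edge_step(messages: list[V2XMessage], ttc_threshold: float = 3.0) -> list[tuple[str, str]]:
    """One processing cycle on the edge server: flag vehicle pairs at risk."""
    warnings = []
    for i, a in enumerate(messages):
        for b in messages[i + 1:]:
            if time_to_collision(a, b) < ttc_threshold:
                warnings.append((a.vehicle_id, b.vehicle_id))
    return warnings

if __name__ == "__main__":
    t0 = time.perf_counter()
    msgs = [V2XMessage("car-1", 0, 0, 15, 0), V2XMessage("car-2", 60, 0, -10, 0)]
    print(edge_step(msgs), f"{(time.perf_counter() - t0) * 1e3:.2f} ms")
```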

The Optimization of Ensembles for Bankruptcy Prediction (기업부도 예측 앙상블 모형의 최적화)

  • Myoung Jong Kim;Woo Seob Yun
    • Information Systems Review / v.24 no.1 / pp.39-57 / 2022
  • This paper proposes the GMOPTBoost algorithm to improve the performance of the AdaBoost algorithm for bankruptcy prediction, a task in which the class imbalance problem is inherent. The AdaBoost algorithm has the advantage of providing a robust learning opportunity for misclassified samples. However, it is limited in addressing the class imbalance problem because the concept of arithmetic mean accuracy is embedded in the algorithm. GMOPTBoost optimizes geometric mean accuracy and can effectively solve the class imbalance problem by applying Gaussian gradient descent. The experimental datasets are constructed in two phases. First, five class-imbalanced datasets are constructed to verify the effect of the class imbalance problem on the performance of the prediction model and the performance improvement achieved by GMOPTBoost. Second, class-balanced data are constructed through data sampling techniques to verify the performance improvement achieved by GMOPTBoost. The main results of 30 repetitions of cross-validation are as follows. First, the class imbalance problem degrades the performance of ensembles. Second, GMOPTBoost improves the performance of AdaBoost ensembles trained on imbalanced datasets. Third, data sampling techniques have a positive impact on performance. Finally, GMOPTBoost contributes to significant performance improvement of AdaBoost ensembles trained on balanced datasets.
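As a rough illustration of the ideas above, the sketch below computes the geometric-mean accuracy criterion and runs a plain AdaBoost-style reweighting loop that such a criterion can be evaluated against. The Gaussian gradient descent update that defines GMOPTBoost is not reproduced here; this is only the surrounding skeleton, written against scikit-learn, and the data matrices in the usage comment are hypothetical.

```python
# Geometric-mean accuracy plus a standard AdaBoost.M1 reweighting loop
# (binary labels in {-1, +1}). GMOPTBoost's own update rule is not shown.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls; rewards balanced performance."""
    accs = []
    for c in np.unique(y_true):
        mask = y_true == c
        accs.append((y_pred[mask] == c).mean())
    return float(np.prod(accs) ** (1.0 / len(accs)))

def adaboost_skeleton(X, y, n_rounds=10):
    """Sequentially train stumps, upweighting misclassified observations."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()
        if err == 0 or err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # increase weight on mistakes
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, np.array(alphas)

def ensemble_predict(learners, alphas, X):
    votes = sum(a * m.predict(X) for m, a in zip(learners, alphas))
    return np.sign(votes)

# Usage with a hypothetical feature matrix X and labels y in {-1, +1}:
# learners, alphas = adaboost_skeleton(X, y)
# print(geometric_mean_accuracy(y, ensemble_predict(learners, alphas, X)))
```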

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rely on strict assumptions, including linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the decision boundary, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class problems as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can degrade classification performance. Third, multi-class prediction is complicated by the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such datasets often produce a default classifier with a skewed decision boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights of misclassified observations across iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Thus, boosting attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can carry out the learning process while considering geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire dataset is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets. That is, cross-validated folds are tested independently for each algorithm. Through these steps, results are obtained for each classifier over 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). In terms of geometric mean-based prediction accuracy, MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%). A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly. The results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results suggest that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
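The evaluation protocol described above (per-class geometric-mean accuracy, 10-fold cross-validation repeated three times with different seeds) can be sketched with scikit-learn as follows. MGM-Boost itself is not implemented here; AdaBoost and SVM stand in as the baseline classifiers, and the feature matrix `X` and rating labels `y` in the usage comment are hypothetical placeholders.

```python
# Sketch of the evaluation protocol: arithmetic and geometric-mean accuracy
# under 10-fold cross-validation repeated 3 times with different seeds.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.metrics import recall_score

def multiclass_geometric_mean(y_true, y_pred):
    """Geometric mean of per-class recalls (0 if any class is fully missed)."""
    recalls = recall_score(y_true, y_pred, average=None, zero_division=0)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

def evaluate(clf, X, y, n_splits=10, n_repeats=3, seed=0):
    """Return mean arithmetic and geometric-mean accuracy over all folds."""
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    arith, geo = [], []
    for train_idx, test_idx in cv.split(X, y):
        model = clf.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        arith.append((pred == y[test_idx]).mean())
        geo.append(multiclass_geometric_mean(y[test_idx], pred))
    return np.mean(arith), np.mean(geo)

# Usage with a hypothetical feature matrix X and rating labels y:
# print(evaluate(AdaBoostClassifier(), X, y))
# print(evaluate(SVC(kernel="rbf"), X, y))
```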

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require considerable time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field, and gyroscope sensors are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) was proposed. Accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanied by an acquaintance at close distance and whether the user is actively conversing with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, z axis value of the sensor data, and sequence data were generated with a sliding window. The sequence data then served as the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and omitted pooling layers in order to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. Dropout was applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow models trained on the training data to be transferred to evaluation data drawn from a different distribution. This is expected to yield a model with robust recognition performance against changes in the data that are not considered during training.
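The abstract pins down most of the architecture and training hyperparameters (three convolutional layers without pooling, a two-layer LSTM with 128 cells, a softmax/cross-entropy head, N(0, 0.1) weight initialization, ADAM with learning rate 0.001 decayed by 0.99 per epoch, batch size 128, and dropout on the LSTM input). The PyTorch sketch below wires these together; the channel counts, kernel sizes, dropout rate, and window length are assumptions the abstract does not specify.

```python
# Hedged sketch of the CNN + LSTM classifier described above (PyTorch).
# Only the quantities stated in the abstract are taken as given; the conv
# widths, kernel sizes, 9-channel input (3 sensors x 3 axes), and dropout
# rate of 0.5 are illustrative assumptions.
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    def __init__(self, in_channels=9, n_classes=2):
        super().__init__()
        # Three convolutional layers, no pooling, so temporal length is preserved.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(p=0.5)              # dropout on the LSTM input
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=2,
                            batch_first=True)         # two layers, 128 cells each
        self.head = nn.Linear(128, n_classes)         # softmax folded into the loss
        for p in self.parameters():                   # N(0, 0.1) initialization
            if p.dim() > 1:
                nn.init.normal_(p, mean=0.0, std=0.1)

    def forward(self, x):                             # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)           # -> (batch, time, 64)
        out, _ = self.lstm(self.dropout(feats))
        return self.head(out[:, -1])                  # last time step -> class logits

model = AccompanyNet()
criterion = nn.CrossEntropyLoss()                     # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
# Training loop outline: for each epoch, iterate mini-batches of size 128,
# compute criterion(model(x), y), call loss.backward() and optimizer.step(),
# then scheduler.step() at the end of the epoch.
```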