• Title/Summary/Keyword: moment methods

Search Results: 984

Comparison of Treatment Planning System (TPS) Calculations and Actual Surface Dose Measurements in Electron Beam Therapy with Bolus

  • Kim, Byeong Soo;Park, Ju Young;Park, Byoung Suk;Song, Yong Min;Park, Byung Soo;Song, Ki Weon
    • The Journal of Korean Society for Radiation Therapy, v.26 no.2, pp.163-170, 2014
  • Purpose : When an electron beam, chosen for superficial oncotherapy, is applied with a bolus, the resulting drastic change in surface dose can be an important factor in the treatment outcome. Hence, the Treatment Planning System (TPS) calculated values and the actually measured values of surface dose were compared and analyzed in this paper for four variables that influence surface dose when a bolus is used in electron therapy. Materials and Methods : Four variables which frequently occur during actual treatments (A: bolus thickness - 3, 5, 10 mm; B: field size - $6{\times}6$, $10{\times}10$, $15{\times}15\,cm^2$; C: energy - 6, 9, 12 MeV; D: gantry angle - $0^{\circ}$, $15^{\circ}$) were set to compare the actual measured values with TPS (Pinnacle 9.2, Philips, USA). Computed tomography (LightSpeed Ultra 16, General Electric, USA) was performed on a 16 cm-thick solid water phantom without bolus, and a total of 54 beams combining A, B, C, and D were planned after creating 3, 5 and 10 mm boluses in the TPS. At SSD 100 cm, 300 MU was delivered and measured twice with EBT3 film (International Specialty Products, NJ, USA) placed at the iso-center, in order to compare the measured values with TPS. The measured films were analyzed in terms of mean and standard deviation using a digital flat-bed scanner (Expression 10000XL, EPSON, USA) and a dose density analyzing system (Complete Version 6.1, RIT, USA). Results : According to bolus thickness, the measured values for 3, 5 and 10 mm were 101.41%, 99.58% and 101.28% of the TPS calculated values, with standard deviations of 0.0219, 0.0115 and 0.0190, respectively. According to field size, the measured values for $6{\times}6$, $10{\times}10$ and $15{\times}15\,cm^2$ were 99.63%, 101.40% and 101.24% of the calculated values, with standard deviations of 0.0138, 0.0176 and 0.0220. According to energy, the measured values for 6, 9 and 12 MeV were 99.72%, 100.60% and 101.96% of the calculated values, with standard deviations of 0.0200, 0.0160 and 0.0164. According to beam angle, the measured values were 100.45% and 101.07% of the calculated values at $0^{\circ}$ and $15^{\circ}$, with standard deviations of 0.0199 and 0.0190, i.e. 0.62% higher at $15^{\circ}$ than at $0^{\circ}$. Conclusion : Across the variables analyzed in this paper, the TPS calculated values for a 5 mm bolus, a $6{\times}6\,cm^2$ field size and low-energy electrons at a $0^{\circ}$ gantry angle were closest to the measured values, although the differences were modest and within an error bound of at most 2%. Beyond the range of variables selected in this paper, when electrons and a bolus are used simultaneously, the actual measured value may differ from TPS depending on each variable, so QA for accurate surface dose should be performed.
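As a rough illustration of the comparison workflow described in the abstract above (this is not the authors' code; the TPS dose and the film readings below are purely hypothetical placeholders), a measured surface dose can be expressed as a fraction of the TPS calculated dose and summarized over repeated film readings:

```python
# Minimal sketch: express EBT3 film readings as a percentage of the TPS calculated
# surface dose and report the spread over repeated measurements.
# All numbers are hypothetical, not values from the paper.
import numpy as np

tps_dose = 300.0                          # assumed TPS calculated surface dose (cGy)
film_doses = np.array([303.5, 304.9])     # hypothetical repeated EBT3 film readings (cGy)

ratio = film_doses / tps_dose             # measured value relative to the TPS value
print(f"measured = {ratio.mean() * 100:.2f}% of TPS, std = {ratio.std(ddof=1):.4f}")
```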

Corporate Default Prediction Model Using Deep Learning Time Series Algorithms, RNN and LSTM

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.1-32, 2018
  • Corporate defaults affect not only stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies, but also have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it again focused only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables associated with corporate defaults vary over time. This is confirmed by comparing Deakin's (1972) study with the analyses of Beaver (1967, 1968) and Altman (1968), which shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, the shifting importance of predictive variables was also found for Zmijewski's (1984) and Ohlson's (1980) models. However, studies carried out in the past used static models, and most of them did not consider the changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. In order to construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and shows excellent prediction power. After that, each bankruptcy prediction model is rebuilt by integrating the training data and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model, trained over nine years, is evaluated and compared using the test data (2009). In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression analysis to the existing variable selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
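To make the Lasso variable selection step concrete, a rough scikit-learn sketch is shown below; it is not the authors' implementation, and the DataFrame of financial ratios, the scaling step, and the cross-validation settings are illustrative assumptions:

```python
# Minimal sketch of Lasso-based variable selection over financial ratios.
# Assumes X is a pandas DataFrame of candidate ratios and y a 0/1 default label.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def select_variables(X, y):
    """Return the names of the features kept (non-zero coefficient) by Lasso."""
    Xs = StandardScaler().fit_transform(X)            # Lasso is sensitive to feature scale
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)  # regularization strength chosen by CV
    kept = np.asarray(X.columns)[lasso.coef_ != 0]
    return list(kept)
```

The selected variable group would then feed whichever prediction model is being compared.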
The predictive performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data pose the limitations of 'nonlinear variables', 'multi-collinearity' among variables, and a 'lack of data'. The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, together with a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally towards intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and also offers greater predictive power. Through the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet research on deep learning time series methods for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
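To make the 'deep learning time series' idea concrete, here is a minimal, hypothetical sketch of an LSTM classifier over yearly sequences of financial ratios; the sequence length, feature count, layer size, and training settings are assumptions rather than details taken from the paper:

```python
# Minimal sketch: an LSTM over yearly financial-ratio sequences predicting default.
# Shapes and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_YEARS = 7     # assumed number of yearly observations per firm
N_RATIOS = 20   # assumed number of financial-ratio features per year

model = models.Sequential([
    layers.Input(shape=(N_YEARS, N_RATIOS)),
    layers.LSTM(32),                        # learns dependencies across years
    layers.Dense(1, activation="sigmoid"),  # probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30, batch_size=64)
```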

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.163-177, 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognition of the simple body movements of an individual user to recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model, using only multimodal physical sensor data from the accelerometer, magnetic field, and gyroscope sensors, was proposed. Accompanying status was defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation was proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. We applied nearest-neighbor interpolation to synchronize the timestamps of the data collected from different sensors. Normalization was performed on each x, y, and z axis value of the sensor data, and sequence data were generated with the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, so as to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001, and it decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to be transferred to evaluation data that follows a different distribution. A model that exhibits robust recognition performance against changes in data not considered in the model learning stage is expected to be obtained.
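The abstract above specifies much of the architecture (three convolutional layers without pooling, two LSTM layers of 128 cells, a softmax output, dropout on the LSTM input, N(0, 0.1) weight initialization, Adam with an initial learning rate of 0.001 decayed by 0.99 per epoch, and a mini-batch size of 128). A hedged Keras sketch consistent with that description might look like the following; the window length, channel count, convolution filter counts, kernel size, and dropout rate are assumptions not given in the abstract:

```python
# Sketch of a CNN-LSTM accompanying-status classifier in the spirit of the abstract.
# Details not stated there (window length, channel count, filters, kernel size,
# dropout rate) are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, callbacks, initializers

WINDOW = 128    # assumed sliding-window length (samples)
CHANNELS = 9    # accelerometer, magnetic field, gyroscope; x/y/z each

init = initializers.RandomNormal(mean=0.0, stddev=0.1)   # N(0, 0.1) weight initialization

inputs = layers.Input(shape=(WINDOW, CHANNELS))
x = inputs
for filters in (64, 64, 64):                              # three conv layers, no pooling
    x = layers.Conv1D(filters, kernel_size=5, padding="same",
                      activation="relu", kernel_initializer=init)(x)
x = layers.Dropout(0.5)(x)                                # dropout on the LSTM input
x = layers.LSTM(128, return_sequences=True, kernel_initializer=init)(x)
x = layers.LSTM(128, kernel_initializer=init)(x)
outputs = layers.Dense(2, activation="softmax", kernel_initializer=init)(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),   # initial LR 0.001
              loss="sparse_categorical_crossentropy",          # cross-entropy loss
              metrics=["accuracy"])

# Exponential decay: multiply the learning rate by 0.99 after every epoch.
lr_decay = callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.99)
# model.fit(X_train, y_train, batch_size=128, epochs=50, callbacks=[lr_decay])
```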

Radioimmunoassay Reagent Survey and Evaluation

  • Kim, Ji-Na;An, Jae-seok;Jeon, Young-woo;Yoon, Sang-hyuk;Kim, Yoon-cheol
    • The Korean Journal of Nuclear Medicine Technology, v.25 no.1, pp.34-40, 2021
  • Purpose If a new test is introduced or reagents are changed in the laboratory of a medical institution, the characteristics of the test should be analyzed according to the established procedure and the reagents should be assessed. However, several conditions must be met to perform all of the required comparative evaluations: first, enough samples must be prepared for each test, and second, the various reagents applicable to the comparative evaluations must be supplied. Even when enough comparative evaluations have been done, the data variation observed for the new reagent only partially represents the variation across the overall patient data, which places a burden on the laboratory when changing reagents. Because of these difficulties, reagent changes in the laboratory are limited. In order to introduce competitive bidding, our institution conducted a full investigation of the radioimmunoassay (RIA) reagents available for each test and established the range of reagents usable in the laboratory through comparative evaluations. We wanted to share this process. Materials and Methods Twenty tests are conducted in our laboratory, excluding consignment tests. For each test, the usable RIA reagents were fully investigated with reference to external quality control reports, and the manual for each reagent was obtained. Each manual was checked for the test method, incubation time, and sample volume required for the test. After that, the primary selection was made according to whether the reagent could be used in our laboratory. For the reagents passing primary selection, two kits (on a 100-test basis) were supplied, and data correlation tests, sensitivity measurements, recovery rate measurements, and dilution tests were conducted. The secondary selection was made according to the results of this comparative evaluation. The reagents that passed the primary and secondary selections were submitted to the competitive bidding list. Where only a single reagent was designated for a test, we submitted an explanatory statement together with the data obtained during the primary and secondary selection processes. Results Reagents were excluded from the primary selection when turnaround time (TAT) was expected to be delayed or when the large reagent volume used during the test made the reagent impossible to apply to our equipment. In the primary selection, only one reagent was available for five items: squamous cell carcinoma antigen (SCC Ag), β-human chorionic gonadotropin (β-HCG), vitamin B12, folate, and free testosterone. Two reagents were available for CA19-9, CA125, CA72-4, ferritin, thyroglobulin antibody (TG Ab), microsomal antibody (Mic Ab), thyroid stimulating hormone receptor antibody (TSH-R-Ab), and calcitonin; three reagents were available for triiodothyronine (T3), free T3, free T4, TSH, and intact parathyroid hormone (intact PTH); and four reagents were available for carcinoembryonic antigen (CEA) and TG. In the secondary selection, only one reagent remained for eight items: ferritin, TG, CA19-9, SCC Ag, β-HCG, vitamin B12, folate, and free testosterone. Two reagents remained for TG Ab, Mic Ab, TSH-R-Ab, CA125, CA72-4, intact PTH, and calcitonin, and three reagents remained for T3, free T3, free T4, TSH, and CEA. Reagents were excluded from the secondary selection because of a lack of reagent supply for the comparative evaluations, problems with data reproducibility, or unacceptable data variation. The most problematic part of the comparative evaluations was sample collection.
Collection was not a problem when the number of samples requested was large but the volume needed per test was small. However, it was difficult to collect samples at various concentrations for tests performed infrequently (100 cases per month or fewer), and it was difficult to conduct a recovery rate test when a relatively large sample volume was required for a single test (more than 100 µL). In addition, the lack of a dilution solution or a standard zero material for the sensitivity measurements or dilution tests was one of the problems. Conclusion Comparative evaluations for changing test reagents require adequate preparation time to collect diverse and sufficient samples. In addition, setting the total sample volume and the range of reagent volumes required for the comparative evaluations, based on the sample volume and reagent volume required for a single test, will reduce the burden of sample collection and planning for each comparative evaluation.
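For readers unfamiliar with the recovery rate and dilution tests mentioned in the abstract above, the following is a minimal illustration of how such checks are typically computed; the concentrations are hypothetical and the linearity criterion is a generic convention, not one stated by the authors:

```python
# Minimal sketch: recovery rate of a spiked sample and linearity of a dilution series.
# All concentration values are hypothetical.
import numpy as np

def recovery_rate(measured, expected):
    """Recovery (%) = measured concentration / expected concentration x 100."""
    return measured / expected * 100.0

expected = np.array([100.0, 50.0, 25.0, 12.5])   # expected concentrations after dilution
measured = np.array([98.2, 50.9, 24.1, 12.9])    # hypothetical assay results

print([f"{recovery_rate(m, e):.1f}%" for m, e in zip(measured, expected)])
# Dilution linearity: a slope near 1 and an intercept near 0 suggest acceptable linearity.
slope, intercept = np.polyfit(expected, measured, 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")
```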