• Title/Summary/Keyword: Linear System

Experimental investigation of the photoneutron production out of the high-energy photon fields at linear accelerator (고에너지 방사선치료 시 치료변수에 따른 광중성자 선량 변화 연구)

  • Kim, Yeon Su;Yoon, In Ha;Bae, Sun Myeong;Kang, Tae Young;Baek, Geum Mun;Kim, Sung Hwan;Nam, Uk Won;Lee, Jae Jin;Park, Yeong Sik
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.257-264
    • /
    • 2014
  • Purpose: Photoneutron dose in high-energy photon radiotherapy with a linear accelerator increases the risk of secondary cancer. The purpose of this investigation is to evaluate the variation of the photoneutron dose with different treatment methods, flattening filter conditions, dose rates, and gantry angles in radiation therapy with a high-energy photon beam (E ≥ 8 MeV). Materials and Methods: A TrueBeam STx™ (Ver. 1.5, Varian, USA) and a Korea Tissue Equivalent Proportional Counter (KTEPC) were used to detect the photoneutron dose outside the high-energy photon field. Complex patient plans created with the Eclipse planning system (Version 10.0, Varian, USA) were used to compare treatment techniques (IMRT, VMAT), flattening filter conditions, and three different dose rates. The scattered photoneutron dose was measured at eight different gantry angles with an open field (field size: 5 × 5 cm). Results: The mean values of the detected photoneutron dose from IMRT and VMAT were 449.7 μSv and 2940.7 μSv, respectively. The mean values with the Flattening Filter (FF) and Flattening Filter Free (FFF) modes were measured as 2940.7 μSv and 232.0 μSv. The mean values of the photoneutron dose for each test plan (case 1, case 2, and case 3) with FFF at the three different dose rates (400, 1200, 2400 MU/min) were 3242.5 μSv, 3189.4 μSv, and 3191.2 μSv for case 1; 3493.2 μSv, 3482.6 μSv, and 3477.2 μSv for case 2; and 4592.2 μSv, 4580.0 μSv, and 4542.3 μSv for case 3, respectively. The mean values of the photoneutron dose at the eight gantry angles (0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°) were measured as 3.2, 4.3, 5.3, 11.3, 14.7, 11.2, 3.7, and 3.0 μSv at 10 MV, and as 373.7, 369.6, 384.4, 423.6, 447.1, 448.0, 384.5, and 377.3 μSv at 15 MV. Conclusion: The results show that it is possible to reduce the photoneutron dose using the FFF mode and the VMAT method with the TrueBeam STx™. The risk of secondary cancer for patients will be decreased with continuous evaluation of the photoneutron dose.

A Review Examining the Dating, Analysis of the Painting Style, Identification of the Painter, and Investigation of the Documentary Records of Samsaebulhoedo at Yongjusa Temple (용주사(龍珠寺) <삼세불회도(三世佛會圖)> 연구의 연대 추정과 양식 분석, 작가 비정, 문헌 해석의 검토)

  • Kang, Kwanshik
    • MISULJARYO - National Museum of Korea Art Journal
    • /
    • v.97
    • /
    • pp.14-54
    • /
    • 2020
  • The overall study of Samsaebulhoedo (painting of the Assembly of Buddhas of Three Ages) at Yongjusa Temple has focused on dating it, analyzing the painting style, identifying its painter, and scrutinizing the related documents. However, greater coherence could be achieved through additional support from empirical evidence and logical consistency. Recent studies on Samsaebulhoedo at Yongjusa Temple that postulate that the painting could have been produced by a monk-painter in the late nineteenth century and that an original version produced in 1790 could have been retouched by a painter in the 1920s using a Western painting style lack such empirical proof and logic. Although King Jeongjo's son was not yet installed as crown prince, the Samsaebulhoedo at Yongjusa Temple contained a conventional written prayer wishing for a long life for the king, queen, and crown prince: "May his majesty the King live long / May her majesty the Queen live long / May his highness the Crown Prince live long" (主上殿下壽萬歲, 王妃殿下壽萬歲, 世子邸下壽萬歲). Later, this phrase was erased using cinnabar and revised to include unusual content in an exceptional order: "May his majesty the King live long / May his highness the King's Affectionate Mother (Jagung) live long / May her majesty the Queen live long / May his highness the Crown Prince live long" (主上殿下壽萬歲, 慈宮邸下壽萬歲, 王妃殿下壽萬歲, 世子邸下壽萬歲). A comprehensive comparison of the formats and contents of written prayers found on late Joseon Buddhist paintings and a careful analysis of royal liturgy during the reign of King Jeongjo reveal Samsaebulhoedo at Yongjusa Temple to be an original version produced at the time of the founding of Yongjusa Temple in 1790. According to a comparative analysis of the formats, iconography, styles, aesthetic sensibilities, and techniques found in Buddhist paintings and paintings by Joseon court painters from the eighteenth and nineteenth centuries, Samsaebulhoedo at Yongjusa Temple bears features characteristic of paintings produced around 1790, which corresponds to the result of the analysis of the written prayer. Buddhist paintings created up to the early eighteenth century show deities with their sizes determined by their religious status and a two-dimensional conceptual composition based on the traditional perspective of depicting close objects in the lower section and distant objects above. This Samsaebulhoedo, however, systematically places the Buddhist deities within a three-dimensional space constructed by applying a linear perspective. Through the extensive employment of chiaroscuro as found in Western painting, it expresses white highlights and shadows, evoking a feeling that the magnificent world of the Buddhas of the Three Ages actually unfolds in front of viewers. Since the inner order of a linear perspective and the outer illusion of chiaroscuro shading are intimately related to each other, it is difficult to believe that the white highlights were a later addition. Moreover, the creative convergence of highly developed Western painting style and techniques on display in this Samsaebulhoedo could only have been achieved by late-Joseon court painters working during the reign of King Jeongjo, including Kim Hongdo, Yi Myeong-gi, and Kim Deuksin. Deungun, the head monk of Yongjusa Temple, wrote Yongjusa sajeok (History of Yongjusa Temple) by compiling the historical records on the temple that had been transmitted since its founding. In Yongjusa sajeok, Deungun recorded as historical fact that Kim Hongdo painted Samsaebulhoedo.
The Joseon royal court's official records, Ilseongnok (Daily Records of the Royal Court and Important Officials) and Suwonbu jiryeong deungnok (Suwon Construction Records), indicate that Kim Hongdo, Yi Myeong-gi, and Kim Deuksin all served as supervisors (gamdong) for the production of Buddhist paintings. Since within Joseon's hierarchical administrative system it was considered improper to allow court painters holding government positions to create Buddhist paintings, which had previously been produced by monk-painters, they were appointed as gamdong in name only to avoid political liability; in reality, the court painters were ordered to create the Buddhist paintings. During their reigns, King Yeongjo and King Jeongjo summoned the literati painters Jo Yeongseok and Kang Sehwang to serve as gamdong for the production of royal portraits and requested that they paint these portraits as well. Thus, the boundary between the concept of supervision and that of painting occasionally blurred: supervision did not completely preclude painting, and a gamdong could also serve as a painter. In this light, the historical records in Yongjusa sajeok are not inconsistent with those in Ilseongnok, Suwonbu jiryeong deungnok, and a prayer written by Hwang Deok-sun, which was found inside the canopy in Daeungjeon Hall at Yongjusa Temple. These records provided the same content in different forms, as required by their purposes and contexts. This approach to the Samsaebulhoedo at Yongjusa Temple will lead to a more coherent explanation of dating the painting, analyzing its style, identifying its painter, and interpreting the relevant documents based on empirical grounds and logical consistency.

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of Software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio of highly qualified SW developers reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of IT industries. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Due to this effort, it has become easier to find young beginning-level SW developers. However, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming an SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new SW developers. Therefore, this study surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014, targeting 130 SW developers working in IT industries in South Korea. We gathered demographic information and characteristics of the respondents, the work environment of the SW industry, and the social position of SW developers. Afterward, a regression analysis and a decision tree method were performed to analyze the data. These two methods are widely used data mining techniques that have explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' to work as an SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is the structural problem of IT industries in South Korea, which requires SW developers to shift from development work to management as they are promoted. The 'motivation' to become an SW developer and the 'personality (introverted tendency)' of an SW developer are also highly important factors associated with the job continuity intention. Next, the decision tree method was performed to extract the characteristics of highly motivated developers and less motivated ones. We used the well-known C4.5 algorithm for the decision tree analysis. The results showed that 'motivation', 'personality', and 'expected age' were also important factors influencing the job continuity intention, similar to the results of the regression analysis. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity. In other words, a person with a high ability to learn new technology tends to work as an SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospect' of the SW industry were minor factors influencing the job continuity intention. On the other hand, 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service providing company)' did not affect the job continuity intention in either method.
In this research, we examined the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts. They can also help in building SW developer fostering policies and in solving the problem of unfilled recruitment of SW developers in South Korea.
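
As a rough illustration of the two-step analysis described above, the sketch below runs a linear regression to rank candidate factors and then fits a decision tree to extract rules. It is a minimal sketch only: the file name and column names are hypothetical, and scikit-learn's CART-style DecisionTreeClassifier stands in for the C4.5 algorithm used in the paper.

```python
# Hypothetical sketch: regression to rank factors, then a tree for rules.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("sw_developer_survey.csv")  # hypothetical survey file
features = ["expected_age", "motivation", "introversion", "learning_ability",
            "social_position", "industry_prospect"]  # illustrative column names

# Step 1: linear regression on the continuity-intention score
reg = LinearRegression().fit(df[features], df["continuity_intention"])
for name, coef in sorted(zip(features, reg.coef_), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:+.3f}")

# Step 2: decision tree (scikit-learn's CART stands in for C4.5 here)
labels = df["continuity_intention"] >= df["continuity_intention"].median()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(df[features], labels)
print(export_text(tree, feature_names=features))
```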

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 × 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices in order to express the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets. We used 80% of the total dataset as the training dataset and the remaining 20% as the validation dataset. The CNN classifiers are then trained using the images of the training dataset in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layers. In the pooling layers, a 2 × 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions for the convolution layers and the hidden layers were set to ReLU (Rectified Linear Unit), and the one for the output layer was set to the softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI 200 for 2,026 days over eight years (from 2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
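
To make the described architecture concrete, here is a minimal PyTorch sketch of a CNN-FG-style classifier. The layer sizes follow the abstract (5 × 5 convolutions with 6 and 9 filters, 2 × 2 max pooling, hidden layers of 900 and 32 nodes, a 2-way output with softmax); the exact wiring between the listed layers is an assumption, since the abstract gives only the sizes.

```python
import torch
import torch.nn as nn

class CNNFG(nn.Module):
    """Sketch of the CNN-FG binary classifier for 40x40 RGB graph images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # 3x40x40 -> 6x36x36
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 6x18x18
            nn.Conv2d(6, 9, kernel_size=5),   # -> 9x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 9x7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(9 * 7 * 7, 900), nn.ReLU(),
            nn.Linear(900, 32), nn.ReLU(),
            nn.Linear(32, 2),                 # logits: upward vs. downward
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CNNFG()
logits = model(torch.randn(8, 3, 40, 40))  # a batch of 8 fluctuation graphs
probs = torch.softmax(logits, dim=1)       # softmax output layer
```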

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market, we cannot make a trading rule that consistently outperforms the average stock market return. This study aims to suggest a machine learning algorithm that improves the trading performance of an intraday short volatility strategy exploiting the asymmetric volatility spillover effect, and to analyze the resulting performance improvement. Generally, stock market volatility has a negative relation with stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200. We also documented a strong volatility spillover effect from the VIX to the V-KOSPI 200. Interestingly, the spillover was indeed asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected in the opening volatility, and its influence lasts until the Korean market close. If the stock market were efficient, the asymmetric volatility spillover effect would not exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous volatility spillover pattern, we analyzed an intraday short volatility strategy (SVS). This strategy sells the Korean volatility market short in the morning after the US stock market volatility closes down and takes no position in the volatility market after the VIX closes up. It produced a profit every year between 2008 and 2018, and the percentage of profitable trades was 68%. The strategy showed a higher average annual return of 129% relative to the benchmark average annual return of 33%. The maximum drawdown (MDD) was -41%, compared with -101% for the benchmark. The Sharpe ratio of the SVS strategy, 0.32, is much greater than that of the benchmark strategy, 0.08. The Sharpe ratio simultaneously considers return and risk and is calculated as return divided by risk; a high Sharpe ratio therefore indicates high performance when comparing strategies with different risk and return structures. Real-world trading incurs trading costs, including brokerage and slippage; when these costs are considered, the performance difference between average annual returns of 76% and -10% becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the V-KOSPI 200 open return at day t; the output is the up/down classification of the V-KOSPI 200 open-to-close return at day t. The training period is from 2008 to 2014 and the testing period is from 2015 to 2018. The kernel functions are the linear, radial basis, and polynomial functions. We suggest a modified short volatility (m-SVS) strategy that sells the V-KOSPI 200 in the morning when the SVM output is Down and takes no position when the SVM output is Up. The trading performance was remarkably improved: over the 5-year testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy.
The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the risk factor, MDD, also improved significantly, from -41% to -29%.
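
The following scikit-learn sketch illustrates the SVM filter described above. The three input features and the up/down target follow the abstract; the file name, column names, and preprocessing details are illustrative assumptions.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("vix_vkospi_daily.csv", parse_dates=["date"])  # hypothetical file

# Input variables from the abstract (day t-1 VIX returns, day t VK open return)
df["vix_cc_ret_lag1"] = df["vix_close"].pct_change().shift(1)
df["vix_oc_ret_lag1"] = (df["vix_close"] / df["vix_open"] - 1).shift(1)
df["vk_open_ret"] = df["vk_open"] / df["vk_close"].shift(1) - 1
df["vk_up"] = (df["vk_close"] > df["vk_open"]).astype(int)  # open-to-close up/down
df = df.dropna()

X = df[["vix_cc_ret_lag1", "vix_oc_ret_lag1", "vk_open_ret"]]
y = df["vk_up"]
train = df["date"] < "2015-01-01"  # 2008-2014 training, 2015-2018 testing

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # also 'linear', 'poly'
model.fit(X[train], y[train])

# m-SVS rule: sell volatility short only on test days the SVM classifies as Down
signals = model.predict(X[~train])
```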

An Analysis on Factors Affecting Local Control and Survival in Nasopharyngeal Carcinoma (비인두암의 국소 종양 치유와 생존율에 관한 예후 인자 분석)

  • Chung Woong-Ki;Cho Jae-Shik;Park Seung Jin;Lee Jae-Hong;Ahn Sung Ja;Nam Taek Keun;Choi Chan;Noh Young Hee;Nah Byung Sik
    • Radiation Oncology Journal
    • /
    • v.17 no.2
    • /
    • pp.91-99
    • /
    • 1999
  • Purpose: This study was performed to identify the prognostic factors affecting the local control, survival, and disease-free survival rates in nasopharyngeal carcinoma treated with chemotherapy and radiation therapy. Materials and Methods: We retrospectively analyzed 47 patients with nasopharyngeal carcinoma, histologically confirmed and treated at Chonnam University Hospital between July 1986 and June 1996. Patients' ages ranged from 16 to 80 years (median, 52 years). Thirty-three patients (70%) were male. Histological types comprised 3 (6%) keratinizing and 30 (64%) nonkeratinizing squamous cell carcinomas and 13 (28%) undifferentiated carcinomas; the histological type was unknown in 1 patient (2%). We restaged the patients according to the 1997 American Joint Committee on Cancer staging system. The 47 patients were recorded as follows: T1, 11 (23%); T2a, 6 (13%); T2b, 9 (19%); T3, 7 (15%); T4, 14 (30%); and N0, 7 (15%); N1, 14 (30%); N2, 21 (45%); N3, 5 (10%). Clinical stages were grouped as follows: Stage I, 2 (4%); IIA, 2 (4%); IIB, 10 (21%); III, 14 (30%); IVA, 14 (30%); and IVB, 5 (11%). Radiation therapy was delivered using 6 MV and 10 MV X-rays from a linear accelerator. An electron beam was used for the lymph nodes of the posterior neck after 4500 cGy. The total radiation dose delivered to the primary tumor ranged from 6120 to 7920 cGy (median, 7020 cGy). Neoadjuvant chemotherapy was performed with cisplatin + 5-fluorouracil (25 patients) or cisplatin + pepleomycin (17 patients) for one to three cycles. Five patients did not receive chemotherapy. Local control, survival, and disease-free survival rates were calculated by the Kaplan-Meier method. The generalized Wilcoxon test was used to evaluate differences in survival rates between groups, and multivariate analysis using the Cox proportional hazards model was performed to find prognostic factors. Results: The 5-year local control rate was 81%. The 5-year survival rate was 60% (median survival, 100 months). We included age, sex, cranial nerve deficit, histologic type, stage group, chemotherapy, elapsed days between chemotherapy and radiotherapy, total radiation dose, and period of radiotherapy as potential prognostic factors in the multivariate analysis. As a result, cranial nerve deficit (P=0.004) had statistical significance for the local control rate. Stage group and total radiation dose were significant prognostic factors for survival (P=0.000, P=0.012) and disease-free survival (P=0.003, P=0.008), respectively. Common complications were xerostomia and tooth and ear problems. Hypothyroidism developed in 2 patients. Conclusion: In our study, cranial nerve deficit was a significant prognostic factor for the local control rate, and stage group and total radiation dose were significant factors for both survival and disease-free survival of nasopharyngeal carcinoma. We conclude that the chemotherapy and radiotherapy used in our patients were effective without any serious complications.
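
A hedged Python sketch of the survival analysis described above (Kaplan-Meier estimates plus a Cox proportional hazards model) follows, using the lifelines package; the file name and the numerically coded covariate columns are illustrative, not the paper's dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("npc_patients.csv")  # hypothetical: one row per patient

# Kaplan-Meier estimate of overall survival (durations in months)
kmf = KaplanMeierFitter()
kmf.fit(durations=df["months"], event_observed=df["death"], label="overall survival")
print(kmf.survival_function_at_times([60]))  # 5-year survival estimate

# Cox proportional hazards model over candidate prognostic factors
covariates = ["months", "death", "age", "stage_group",
              "total_dose", "cranial_nerve_deficit"]  # numerically coded
cph = CoxPHFitter()
cph.fit(df[covariates], duration_col="months", event_col="death")
cph.print_summary()  # hazard ratios and p-values
```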

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. A facial expression, like an artistic painting, contains a wealth of information and can be described in a thousand words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that enable the perception of human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies like Jung and Kim (2012) have used ANN as an alternative and reported that ANN generates more accurate predictions than statistical methods like MRA. However, ANN has also been criticized due to overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase the prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions while providing appropriate visual stimulating contents and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search technique to find the optimal values of the parameters C, d, σ², and ε. In the case of ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who wish to build models for recognizing human emotions.
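
As a concrete illustration of the SVR setup described above, the following scikit-learn sketch grid-searches C, ε, and the RBF kernel width (for the RBF kernel, gamma plays the role of 1/(2σ²)). The file and column names are illustrative stand-ins for the extracted facial features.

```python
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

df = pd.read_csv("facial_features.csv")  # hypothetical: 297 cases
X, y = df.drop(columns=["arousal"]), df["arousal"]  # or "valence"

param_grid = {
    "C": [0.1, 1, 10, 100],
    "epsilon": [0.01, 0.1, 0.5],  # epsilon-insensitive loss threshold
    "gamma": [1e-3, 1e-2, 1e-1],  # RBF width, gamma = 1 / (2 * sigma^2)
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X, y)

pred = search.best_estimator_.predict(X)
print(search.best_params_, "MAE:", mean_absolute_error(y, pred))
```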

Prediction of Salvaged Myocardium in Patients with Acute Myocardial Infarction after Primary Percutaneous Coronary Angioplasty using early Thallium-201 Redistribution Myocardial Perfusion Imaging (급성심근경색증의 일차적 관동맥성형술 후 조기 Tl-201 재분포영상을 이용한 구조심근 예측)

  • Choi, Joon-Young;Yang, You-Jung;Choi, Seung-Jin;Yeo, Jeong-Seok;Park, Seong-Wook;Song, Jae-Kwan;Moon, Dae-Hyuk
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.4
    • /
    • pp.219-228
    • /
    • 2003
  • Purpose: The amount of salvaged myocardium is an important prognostic factor in patients with acute myocardial infarction (MI). We investigated whether early Tl-201 SPECT imaging could be used to predict salvaged myocardium and functional recovery in acute MI after primary PTCA. Materials and Methods: In 36 patients with a first acute MI treated with primary PTCA, serial echocardiography and Tl-201 SPECT imaging (5.8 ± 2.1 days after PTCA) were performed. Regional wall motion and perfusion were quantified on a 16-segment myocardial model with 5-point and 4-point scoring systems, respectively. Results: Wall motion was improved in 78 of the 212 dyssynergic segments on 1-month follow-up echocardiography and in 97 on 7-month follow-up echocardiography; these segments were considered salvaged myocardium. The areas under the receiver operating characteristic curves of the Tl-201 perfusion score for detecting salvaged myocardial segments were 0.79 for the 1-month follow-up and 0.83 for the 7-month follow-up. With an optimal cutoff of 40% of peak thallium activity, the sensitivity and specificity of Tl-201 redistribution images for detecting salvaged myocardium were 84.6% and 55.2% at the 1-month follow-up, and 87.6% and 64.3% at the 7-month follow-up, respectively. There was a linear relationship between the percentage of peak thallium activity on early redistribution imaging and the likelihood of segmental functional improvement 7 months after reperfusion. Conclusion: Tl-201 myocardial perfusion SPECT imaging performed early, within 10 days after reperfusion, can be used to predict salvaged myocardium and functional recovery with high sensitivity during the 7 months following primary PTCA in patients with acute MI.
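
For readers who want to reproduce this kind of ROC analysis, here is a minimal scikit-learn sketch: per-segment peak thallium activity scored against later functional recovery, with sensitivity and specificity at the 40%-of-peak cutoff. The input files and their one-value-per-line format are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

activity = np.loadtxt("segment_activity.txt")   # hypothetical % of peak Tl-201 uptake
recovered = np.loadtxt("segment_recovery.txt")  # hypothetical 1 = wall motion improved

print("AUC:", roc_auc_score(recovered, activity))

# Sensitivity / specificity at the 40%-of-peak-activity cutoff used in the paper
pred = activity >= 40
sens = (pred & (recovered == 1)).sum() / (recovered == 1).sum()
spec = (~pred & (recovered == 0)).sum() / (recovered == 0).sum()
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```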

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model; it is simple but quite intuitive. For example, assets are allocated so as to minimize portfolio risk while maximizing the expected portfolio return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocating to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results. Incorrect views combined with implied equilibrium returns may therefore produce very poor portfolio output for users of the Black-Litterman model. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastic %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probabilities are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios serve as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018.
Our suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. The out-of-sample portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value, and its maximum drawdown of -20.8% was the lowest value. Its Sharpe ratio, which measures the return-to-risk ratio, was also the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
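
A minimal numpy sketch of the Black-Litterman step described above follows: implied equilibrium returns are recovered by reverse optimization, then combined with a P/Q view matrix (in the paper, these views come from the SVM model). All numbers here are illustrative assumptions for a two-asset example.

```python
import numpy as np

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])  # asset covariance (illustrative)
w_mkt = np.array([0.6, 0.4])      # market-cap weights
delta, tau = 2.5, 0.05            # risk aversion, uncertainty scale

pi = delta * Sigma @ w_mkt        # implied equilibrium returns (reverse optimization)

P = np.array([[1.0, 0.0]])        # one view on asset 1 alone
Q = np.array([0.03])              # the view's expected return (from the views model)
Omega = P @ (tau * Sigma) @ P.T   # view uncertainty (a common default choice)

# Black-Litterman posterior expected returns
inv = np.linalg.inv
mu_bl = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (
    inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)

# Feed the posterior returns into an unconstrained mean-variance step
w_opt = inv(delta * Sigma) @ mu_bl
print("posterior returns:", mu_bl, "weights:", w_opt / w_opt.sum())
```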

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used in various business industries. ABSA analyzes sentiment by aspect for the multiple aspects that a text has, and it is being studied in various forms depending on the purpose, such as analyzing all targets or just aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the text span that causes the sentiment. For example, for restaurant reviews, you could set the aspects to food taste, food price, quality of service, mood of the restaurant, and so on. Also, if there is a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, are the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate, for instance when aspects or sentiments are divided or when sentiment exists without a target. Consider a sentence like "Pizza and the salad were good, but the steak was disappointing": although the aspect of this sentence is limited to "food," conflicting sentiments coexist. In addition, in the case of a sentence such as "Shrimp was delicious, but the price was extravagant," although the target is "shrimp," opposite sentiments coexist depending on the aspect. Finally, in a sentence like "The food arrived too late and is cold now," there is no target (NULL), but it conveys a negative sentiment toward the aspect "service." Failure to consider both aspects and targets in such cases, when sentiment or aspect is divided or when sentiment exists without a target, creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified the limitations of existing research in the field of TASD: local contexts are not fully captured, and small numbers of epochs and small batch sizes dramatically lower the F1-score. The existing model excels at spotting the overall context and the relations between words, but it struggles with phrases in the local context and is relatively slow to learn. Therefore, this study tries to improve the model's performance. To achieve this objective, we additionally used an auxiliary loss in aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers parallel to the existing models. Whereas existing models analyze aspect-sentiment through BERT encoding, Pooler, and Linear layers, this research added CNN layers with adaptive average pooling to the existing models, and training proceeded by adding an additional aspect-sentiment loss to the existing loss. In other words, during training, the auxiliary loss computed through the CNN layers allowed the local context to be captured more closely. After training, the model performs aspect-sentiment analysis through the existing method. To evaluate the performance of this model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to the existing models. When the batch size was 8 and the number of epochs was 5, the difference between the F1-scores was largest: 29 for the existing models versus 45 for this study.
Even when the batch size and number of epochs were adjusted, the F1-scores were higher than those of the existing models. In other words, even with small batch and epoch numbers, the model can be trained effectively compared to the existing models, so it can be useful in situations where resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately. Through various business uses, such as development or establishing marketing strategies, both consumers and sellers will be able to make efficient decisions. In addition, it is believed that the model can be fully trained and utilized by small businesses that do not have much data, given that it uses a pre-trained model and recorded a relatively high F1-score even with limited resources.
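
To make the auxiliary-loss design concrete, here is a hedged PyTorch sketch: a CNN branch with adaptive average pooling runs parallel to the usual BERT pooler/linear head, and its aspect-sentiment loss is added to the main loss during training. The layer sizes and the 0.5 loss weight are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TASDWithAuxCNN(nn.Module):
    """Sketch: BERT pooler head plus a parallel CNN auxiliary head."""
    def __init__(self, encoder, hidden=768, n_classes=3):
        super().__init__()
        self.encoder = encoder                      # e.g. a Hugging Face BERT model
        self.main_head = nn.Linear(hidden, n_classes)
        self.aux_cnn = nn.Sequential(               # captures local (phrase) context
            nn.Conv1d(hidden, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.aux_head = nn.Linear(128, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids, attention_mask=attention_mask)
        main_logits = self.main_head(out.pooler_output)
        tokens = out.last_hidden_state.transpose(1, 2)  # (batch, hidden, seq_len)
        aux_logits = self.aux_head(self.aux_cnn(tokens).squeeze(-1))
        return main_logits, aux_logits

# Training step (illustrative): auxiliary loss added to the main loss
# main_logits, aux_logits = model(input_ids, attention_mask)
# loss = ce(main_logits, labels) + 0.5 * ce(aux_logits, labels)  # weight assumed
```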