• Title/Summary/Keyword: Order system


Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services are being popularized among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In social network services, each relation between users is represented by a graph consisting of nodes and links. As the users of online social network services increase rapidly, SNS are actively utilized in enterprise marketing, the analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection, and classification of groups. Ever since Social Networking Services (SNS) drew the attention of millions of users, numerous studies have been conducted to analyze their user relationships and messages. There are three representative SNA methods: degree centrality, betweenness centrality, and closeness centrality. In degree centrality analysis, the shortest path between nodes is not considered; however, it is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, the computation time was not too expensive since the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, and this makes it difficult to apply them to the ever-increasing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; if the number of nodes is 10,000, the maximum number of links is 49,995,000, which makes the network too expensive to analyze. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Through this shortest-path-finding method, we show how efficient our proposed approach can be by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first-search step and a preprocessing step to reduce computation time and rapidly find shortest paths in a huge online social network. The best-first-search method finds the shortest path heuristically, generalizing human experience. As a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with multiple connections functions as a hub. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, gives a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between two different nodes.
As the heuristic evaluation function is used, the worst case can occur when the target node is situated at the bottom of a skewed tree; the preprocessing step is conducted to handle such target nodes. Next, we find the shortest path between two nodes in the social network efficiently and then analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network, then compared our method with the previous methods, plain best-first-search and breadth-first-search, in terms of search and analysis time. The suggested method takes 240 seconds to search the nodes, whereas the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis than the conventional methods. The proposed method thus shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena. (A code sketch of the degree-heuristic search follows below.)
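
The following is a minimal, self-contained sketch of the kind of degree-based best-first search the abstract describes, not the authors' implementation: node degree serves as the heuristic evaluation function, so high-degree "hub" users are expanded first. The graph and node names are illustrative, and like any greedy best-first search it returns a heuristically found path rather than a guaranteed-shortest one.

```python
import heapq

def best_first_path(graph, start, goal):
    """Greedy best-first search over an undirected SNS graph.

    `graph` is an adjacency dict {node: set(neighbors)}; the node
    degree len(graph[n]) is the heuristic, so hub users are expanded
    first. Returns a heuristically found path (not guaranteed shortest).
    """
    # Priority queue ordered by negative degree (expand hubs first).
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(
                    frontier,
                    (-len(graph[neighbor]), neighbor, path + [neighbor]),
                )
    return None  # no path between start and goal

# Toy example: 'carol' is a hub, so the search reaches her quickly.
g = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob", "dave", "erin"},
    "dave": {"carol"},
    "erin": {"carol"},
}
print(best_first_path(g, "alice", "erin"))  # ['alice', 'carol', 'erin']
```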

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the predictive models from two perspectives. The first is the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two, or three years later. In total, six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is a widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed of these, which yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, meaning that companies in different industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider variety of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy across other data mining techniques such as neural networks, logistic regression, and SVM. Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues. (A stand-in decision tree sketch follows below.)
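
Since C5.0 inside PASW Modeler is proprietary, a minimal stand-in sketch using scikit-learn's CART-style decision tree may clarify the workflow described above; the file name, column names, and tree depth are assumptions, not the authors' configuration.

```python
# Stand-in sketch: scikit-learn's CART-style DecisionTreeClassifier
# replaces the proprietary C5.0/PASW Modeler setup described above.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical layout: 84 financial-index columns plus a binary label
# 'rights_issue' (issued or not issued), as in the abstract.
df = pd.read_csv("financial_indices.csv")          # assumed file name
X, y = df.drop(columns=["rights_issue"]), df["rights_issue"]

# 60% of the data for model building, 40% for testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=42
)

tree = DecisionTreeClassifier(max_depth=6, random_state=42)  # assumed depth
tree.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```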

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, the CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate a way to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images; thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 pixels × 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. CNN classifiers are then trained on the images of the training dataset in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer. In the pooling layer, a 2×2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation function for the convolution layer and the hidden layers was set to ReLU (Rectified Linear Unit), and that for the output layer to the softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving. (A hedged Keras sketch of the architecture follows below.)
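
A hedged Keras sketch of a CNN-FG-style binary classifier: the layer sizes follow the abstract (5×5 convolutions with 6 and 9 filters, 2×2 max pooling, hidden layers of 900 and 32 nodes, a 2-node softmax output), but the exact ordering of convolution and pooling layers in the original model is an assumption.

```python
# Sketch of a CNN-FG-style classifier; layer sizes follow the abstract,
# the conv/pool ordering is an assumption.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(40, 40, 3)),            # 40x40 RGB fluctuation graph
    layers.Conv2D(6, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(9, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),       # first hidden layer
    layers.Dense(32, activation="relu"),        # second hidden layer
    layers.Dense(2, activation="softmax"),      # upward vs. downward
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```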

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and the product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. The product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product group clustering method could also be replaced by another type of unsupervised machine learning algorithm. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study. (A minimal Word2Vec sketch follows below.)
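
A minimal sketch of the embedding-and-extraction step using gensim's Word2Vec, assuming product names arrive pre-tokenized one per line; the file name, the query word, and the 0.6 similarity cut-off are illustrative assumptions, while the vector dimension and window size come from the abstract.

```python
# Sketch of the Word2Vec embedding and cosine-similarity extraction step.
from gensim.models import Word2Vec

# Each "sentence" is a tokenized product name, e.g. ["stainless", "bolt"].
sentences = [line.split() for line in open("product_names.txt", encoding="utf-8")]

# Parameters from the abstract: 300-dimensional vectors, window size 15.
model = Word2Vec(sentences, vector_size=300, window=15, min_count=1, workers=4)

# Derive a product group: names whose cosine similarity to a KSIC index
# word exceeds a chosen threshold (0.6 here is an assumed cut-off).
group = [w for w, sim in model.wv.most_similar("bolt", topn=100) if sim > 0.6]
print(group)  # sum sales over these products to estimate the group's market size
```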

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model design studies, mostly for predicting cardiovascular disease, and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease with data mining techniques were predominant; domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected it as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used the dataset of the 2012 Korea Health Panel. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalization, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 people. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression. Among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables defined against a reference group, and these variables were analyzed; six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at a significance level of 0.1. Second, C4.5, a decision tree algorithm, was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: for SVM, the input variables selected by the genetic algorithm consisted of six variables (age, marital status, education level, economic activity, smoking period, and physical activity status), and for the artificial neural network, of three variables (age, marital status, and education level). Based on the selected input variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, and compared the classification performances using TP rate and precision. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and the accuracy of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected through the stepwise method was slightly higher than that of models using all variables. Third, the precision of the artificial neural network was higher than that of SVM when only the three variables selected by the decision tree were used as inputs. For classification models based on the input variables selected through the genetic algorithm, the classification accuracy of SVM was 88.5% and that of the artificial neural network was 87.9%.
Finally, this study showed that stacking, the meta-learning algorithm proposed in this study, performs best when it uses the predicted outputs of SVM and MLP as the input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms, which are known to have high accuracy. As a result, the classification accuracy for hyperlipidemia with stacking as a meta-learner was higher than with other meta-learning algorithms; however, the predictive performance of the proposed meta-learning algorithm is the same as that of the best-performing single model, SVM (88.6%). The limitations of this study are as follows. First, although various variable selection methods were tried, most variables used in the study were categorical dummy variables. With a large number of categorical variables, the results may differ if continuous variables are used, because models such as decision trees are better suited to categorical variables than general models such as neural networks. Despite these limitations, this study has significance in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and the improvement of model accuracy through various variable selection techniques is meaningful. In addition, we expect that our proposed model will be effective for the prevention and management of hyperlipidemia. (A hedged stacking sketch follows below.)
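
A hedged sketch of the stacking setup described above, using scikit-learn: SVM and MLP base learners whose predictions feed an SVM meta-classifier. The data loading, column names, and hyperparameters are assumptions, not the authors' exact configuration.

```python
# Stacking sketch: SVM + MLP base learners, SVM as the meta-classifier.
import pandas as pd
from sklearn.ensemble import StackingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("health_panel_2012.csv")          # hypothetical file
X, y = df.drop(columns=["hyperlipidemia"]), df["hyperlipidemia"]

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),            # base learner 1
        ("mlp", MLPClassifier(max_iter=1000)),     # base learner 2
    ],
    final_estimator=SVC(),                         # SVM as meta-classifier
)
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```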

An Analysis of the Specialist's Preference for the Model of Park-Based Mixed-Use Districts in Securing Urban Parks and Green Spaces Via Private Development (민간개발 주도형 도시공원.녹지 확보를 위한 공원복합용도지구 모형에 대한 전문가 선호도 분석)

  • Lee, Jeung-Eun;Cho, Se-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.39 no.6
    • /
    • pp.1-11
    • /
    • 2011
  • The research aimed to verify the feasibility of a model of Park-Based Mixed-Use Districts (PBMUD) around large urban parks to secure privately developed urban parks through a revision of the urban zoning system. The PBMUD is a type of urban zoning district in which park-oriented land use is mixed with residential, commercial, business, cultural, educational, and research land uses. The PBMUD, delineated from and based on the new paradigm of landscape urbanism, is a new urban strategy to secure urban parks and to cultivate urban regeneration around parks and green spaces, enhancing the quality of the urban landscape and ameliorating urban environmental disasters like climate change. This study performed a questionnaire survey and analysis after a review of the literature related to the PBMUD. The study asked specialists in the fields of urban planning and landscape architecture, such as officials, researchers, and engineers, to respond to the questionnaire, which asked about their degree of preference. The conclusions of this study are as follows. First, specialists preferred the PBMUD at a ratio of 79.3% for to 20.7% against, indicating the feasibility of the model. Second, the most cited reasons for preferring the model were the possibility of securing park space around urban parks and green spaces, assuring access to parks and communication with each area. Third, the main reasons for not preferring the model were a lack of understanding of the PBMUD, together with unfavorable laws and regulations related to urban planning and development; these findings suggest a revision of the related laws and regulations, such as the laws for the planning and use of national land and the laws for architecture. Fourth, the most preferred type of PBMUD in every mix of land uses was cultural use mixed with park use; the degree of preference was lower, in order, for commercial, residential, business, and educational (research) uses mixed with park use. The number of mixed-use amenities within the park was found to be an indicator determining preference: the greater the number, the lower the preference frequency, especially for research and business uses. Fifth, more than 70% of the respondents preferred a mixed-use ratio of 60% park use to 40% other urban uses. These research results will help to launch future research subjects on the revision of zoning regulations in the laws for the planning and use of national land and architectural law, as well as on criteria and indicators of subdivision planning related to a PBMUD model.

The Relationship between Expression of EGFR, MMP-9, and C-erbB-2 and Survival Time in Resected Non-Small Cell Lung Cancer (수술을 시행한 비소세포 폐암 환자에서 EGFR, MMP-9 및 C-erbB-2의 발현과 환자 생존율과의 관계)

  • Lee, Seung Heon;Jung, Jin Yong;Lee, Kyoung Ju;Lee, Seung Hyeun;Kim, Se Joong;Ha, Eun Sil;Kim, Jeong-Ha;Lee, Eun Joo;Hur, Gyu Young;Jung, Ki Hwan;Jung, Hye Cheol;Lee, Sung Yong;Lee, Sang Yeub;Kim, Je Hyeong;Shin, Chol;Shim, Jae Jeong;In, Kwang Ho;Kang, Kyung Ho;Yoo, Se Hwa;Kim, Chul Hwan
    • Tuberculosis and Respiratory Diseases
    • /
    • v.59 no.3
    • /
    • pp.286-297
    • /
    • 2005
  • Background: Non-small cell lung cancer (NSCLC) is a common cause of cancer-related death in North America and Korea, with an overall 5-year survival rate of between 4 and 14%. The TNM staging system is the best prognostic index for operable NSCLC. However, epidermal growth factor receptor (EGFR), matrix metalloproteinase-9 (MMP-9), and C-erbB-2 have all been implicated in the pathogenesis of NSCLC and might provide prognostic information. Methods: Immunohistochemical staining of 81 specimens from resected primary non-small cell lung cancers was evaluated in order to determine the role of these biological markers in NSCLC. Immunohistochemical staining for EGFR, MMP-9, and C-erbB-2 was performed on paraffin-embedded tissue sections to observe the expression pattern according to pathologic type and surgical stage, and the correlation between the expression of each biological marker and survival time was determined. Results: When positive immunohistochemical staining was defined as a stained area >20% (more than Grade 2), the positive rates for EGFR, MMP-9, and C-erbB-2 staining were 71.6%, 44.3%, and 24.1% of the 81 patients, respectively. The positive rates of EGFR and MMP-9 staining according to surgical stages I, II, and IIIa were 75.0% and 41.7%, 66.7% and 47.6%, and 76.9% and 46.2%, respectively. The median survival time of the EGFR(-) group, 71.8 months, was significantly longer than that of the EGFR(+) group, 33.5 months (p=0.018, Kaplan-Meier method, log-rank test). The MMP-9(+) group had a shorter median survival time than the MMP-9(-) group, 35.0 versus 65.3 months (p=0.2). The co-expression of EGFR and MMP-9 was associated with a worse prognosis, with a median survival time of 26.9 months compared with 77 months for the group negative for both markers (p=0.0023). There were no significant differences between the C-erbB-2(+) and C-erbB-2(-) groups. Conclusion: In NSCLC, the expression of EGFR might be a prognostic factor, and the co-expression of EGFR and MMP-9 was found to be associated with a poor prognosis. However, C-erbB-2 expression had no prognostic significance. (A survival-analysis sketch follows below.)
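
To illustrate the kind of Kaplan-Meier and log-rank analysis reported here (this is not the authors' code), a minimal sketch using the lifelines package follows; the file and column names are hypothetical stand-ins.

```python
# Kaplan-Meier estimate and log-rank test for EGFR(+) vs. EGFR(-) groups.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("nsclc_cohort.csv")   # columns: months, death, egfr_positive
pos = df[df["egfr_positive"] == 1]
neg = df[df["egfr_positive"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(pos["months"], event_observed=pos["death"], label="EGFR(+)")
print("EGFR(+) median survival:", kmf.median_survival_time_)

# Log-rank test between the EGFR(+) and EGFR(-) survival curves.
result = logrank_test(pos["months"], neg["months"],
                      event_observed_A=pos["death"],
                      event_observed_B=neg["death"])
print("log-rank p-value:", result.p_value)
```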

Efficacy and Accuracy of Patient Specific Customize Bolus Using a 3-Dimensional Printer for Electron Beam Therapy (전자선 빔 치료 시 삼차원프린터를 이용하여 제작한 환자맞춤형 볼루스의 유용성 및 선량 정확도 평가)

  • Choi, Woo Keun;Chun, Jun Chul;Ju, Sang Gyu;Min, Byung Jun;Park, Su Yeon;Nam, Hee Rim;Hong, Chae-Seon;Kim, MinKyu;Koo, Bum Yong;Lim, Do Hoon
    • Progress in Medical Physics
    • /
    • v.27 no.2
    • /
    • pp.64-71
    • /
    • 2016
  • We develop a manufacturing procedure for the production of a patient-specific customized bolus (PSCB) using a 3D printer (3DP), and evaluate the dosimetric accuracy of the 3D-PSCB for electron beam therapy. In order to cover the required planning target volume (PTV), we select the proper electron beam energy and field size through an initial dose calculation using a treatment planning system. The PSCB is delineated based on the initial dose distribution, and the dose calculation is repeated after applying the PSCB. We iteratively fine-tune the PSCB shape until the plan quality is sufficient to meet the required clinical criteria. The contour data of the PSCB is then transferred to in-house conversion software through the DICOM-RT protocol, converted into the 3DP data format (the STereoLithography, STL, format), and printed on a 3DP. Two virtual patients, having concave and convex shapes, were generated with a virtual PTV and an organ at risk (OAR). Two corresponding electron treatment plans, with and without a PSCB, were then generated to evaluate the dosimetric effect of the PSCB, and the dosimetric characteristics and dose-volume histograms for the PTV and OAR were compared between the plans. Film dosimetry was performed to verify the dosimetric accuracy of the 3D-PSCB: the calculated planar dose distribution was compared to that measured by film dosimetry along the beam's central axis, using the percent depth dose curve and gamma analysis (3% dose difference, 3 mm distance to agreement). No significant difference in the PTV dose was observed in the plan with the PSCB compared to that without it. The maximum, minimum, and mean doses of the OAR in the plan with the PSCB were significantly reduced by 9.7%, 36.6%, and 28.3%, respectively, compared to those in the plan without the PSCB. By applying the PSCB, the OAR volumes receiving 90% and 80% of the prescribed dose were reduced from 14.40 cm³ to 0.1 cm³ and from 42.6 cm³ to 3.7 cm³, respectively. The gamma pass rates of the concave and convex plans were 95% and 98%, respectively. A new procedure for the fabrication of a PSCB has thus been developed using a 3DP, and we confirm the usefulness and dosimetric accuracy of the 3D-PSCB for clinical use. Rapidly advancing 3DP technology should ease and expand clinical implementation of the PSCB. (A simplified gamma-analysis sketch follows below.)
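
A deliberately simplified, illustrative 1D gamma analysis (3%/3 mm, global normalization) in NumPy may clarify what the reported gamma pass rates measure; real film-dosimetry QA software is considerably more sophisticated, and the synthetic depth-dose curves below are assumptions.

```python
# Simplified 1D gamma analysis (3% dose difference, 3 mm DTA, global).
import numpy as np

def gamma_pass_rate(x, dose_ref, dose_eval, dose_tol=0.03, dta_mm=3.0):
    """Fraction of points with gamma <= 1 on a shared 1D grid `x` (mm)."""
    dd = dose_tol * dose_ref.max()            # global dose criterion
    pass_count = 0
    for xi, di in zip(x, dose_ref):
        # Gamma at this reference point: minimum combined
        # dose/distance metric over all evaluation points.
        gamma_sq = ((x - xi) / dta_mm) ** 2 + ((dose_eval - di) / dd) ** 2
        if gamma_sq.min() <= 1.0:
            pass_count += 1
    return pass_count / len(x)

# Toy depth-dose curves: evaluation curve shifted 1 mm and scaled by 1%.
x = np.linspace(0, 50, 501)                   # depth in mm
ref = np.exp(-((x - 15) ** 2) / 80)           # synthetic PDD-like curve
ev = 1.01 * np.exp(-((x - 16) ** 2) / 80)     # small shift + dose error
print(f"gamma pass rate: {gamma_pass_rate(x, ref, ev):.1%}")
```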

The Expression of Vascular Endothelial Growth Factor (VEGF) is a Highly Significant Prognostic Factor in Stage IB Carcinoma of the Cervix (병기 IB 자궁경부암에서 혈관내피세포성장인자(VEGF)의 발현이 예후에 미치는 영향)

  • Lee Ik Jae;Park Kyung Ran;Lee Jong Young;Lee Kang Kyoo;Song Ji Sun;Lee Kwang Gil;Cha Dong Soo;Choi Hyun Il
    • Radiation Oncology Journal
    • /
    • v.19 no.4
    • /
    • pp.335-344
    • /
    • 2001
  • Purpose: The aim of this study was to clarify the role of VEGF expression as an independent prognostic factor and to identify the patients at high risk of a poor prognosis in stage IB cervical cancer. Materials and methods: A total of 118 patients with stage IB cervical cancer who had undergone radical hysterectomy and pelvic lymph node dissection were included in the study. All known high-risk factors were pathologically confirmed from the surgical specimens. Some of the 118 patients were treated with postoperative radiotherapy and/or chemotherapy. VEGF expression was examined using immunohistochemistry in formalin-fixed, paraffin-embedded specimens of post-hysterectomy surgical material. A semiquantitative analysis was made using a scoring system of 0, +, ++, and +++ for increasing intensity of stain; we classified patients with scores from 0 to ++ as low VEGF expression and patients with a score of +++ as high VEGF expression. Results: Of the 118 patients, 35 (29.7%) showed high VEGF expression. Strong correlations were found between high VEGF expression and both deep stromal invasion (p=0.01) and positive pelvic nodes (p=0.03). The 5-year overall and disease-free survival rates for all 118 patients were 95.5% and 93.8%, respectively. The 5-year overall (p=0.03) and disease-free survival (p<0.001) rates were 98.5% and 100% for low VEGF expression (0, +, and ++) and 85.5% and 79.7% for high VEGF expression, respectively. Pelvic and distant failure rates for low versus high VEGF expression were 1.2% versus 17.1% (p=0.001) and 0% versus 14.3% (p<0.001), respectively. In a Cox multivariate analysis of survival, high VEGF expression (p=0.02) and bulky mass (p=0.02) were significant prognostic factors for overall survival; high VEGF expression (p=0.002) and bulky mass (p=0.01) were also significant prognostic indicators for disease-free survival. Conclusion: These results show that VEGF expression was a highly significant predictor of pelvic and distant failure and the most significant prognostic factor for overall and disease-free survival in patients with stage IB cervical cancer treated with radical surgery. We strongly suggest that immunohistochemistry for VEGF expression be performed in a routine clinical setting in order to identify the patients at high risk of a poor prognosis in early-stage cervical cancer. Furthermore, postoperative radiotherapy and/or chemotherapy did not reduce pelvic failure and distant metastasis; to improve the cure rate for patients with high VEGF expression in stage IB cervical cancer, antiangiogenic therapy, including anti-VEGF antibodies, may be a new treatment option. (A Cox-regression sketch follows below.)
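
A hedged sketch of the Cox multivariate analysis reported above, using lifelines' CoxPHFitter; the file name and column names are illustrative placeholders, not the authors' data.

```python
# Cox proportional-hazards regression with VEGF and bulky mass as covariates.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: follow-up months, event flag, and binary covariates
# for high VEGF expression and bulky mass.
df = pd.read_csv("cervix_ib_cohort.csv")

cph = CoxPHFitter()
cph.fit(df[["months", "event", "vegf_high", "bulky_mass"]],
        duration_col="months", event_col="event")
cph.print_summary()   # hazard ratios and p-values per covariate
```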


A Study on the Influence of IT Education Service Quality on Educational Satisfaction, Work Application Intention, and Recommendation Intention: Focusing on the Moderating Effects of Learner Position and Participation Motivation (IT교육 서비스품질이 교육만족도, 현업적용의도 및 추천의도에 미치는 영향에 관한 연구: 학습자 직위 및 참여동기의 조절효과를 중심으로)

  • Kang, Ryeo-Eun;Yang, Sung-Byung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.169-196
    • /
    • 2017
  • The fourth industrial revolution represents a revolutionary change in the business environment and its ecosystem through the fusion of Information Technology (IT) with other industries. In line with these recent changes, the Ministry of Employment and Labor of South Korea announced 'the Fourth Industrial Revolution Leader Training Program,' which includes five key support areas: (1) smart manufacturing, (2) the Internet of Things (IoT), (3) big data including Artificial Intelligence (AI), (4) information security, and (5) bio innovation. This program gives a glimpse of the South Korean government's efforts and willingness to develop leading human resources with advanced IT knowledge in various convergence technology-related and newly emerging industries. To nurture excellent IT manpower in preparation for the fourth industrial revolution, the role of educational institutions capable of providing high-quality IT education services is of utmost importance. These days, however, most IT educational institutions have difficulty providing customized IT education services that meet the needs of consumers (i.e., learners), without breaking away from the traditional framework of supplier-oriented education services. Previous studies have found that the provision of customized, learner-centered education services leads to high learner satisfaction, and that higher satisfaction increases not only task performance and the possibility of business application but also learners' recommendation intention. However, since research has not yet considered both the antecedent and consequent factors of learner satisfaction in a comprehensive way, more empirical research is highly desirable. With the advent of the fourth industrial revolution, rising interest in various convergence technologies utilizing IT has brought a growing realization of the important role played by IT-related education services. Nevertheless, research on IT education service quality in the context of IT education is relatively scarce, despite the fact that research on general education service quality and satisfaction has been actively conducted in various contexts. In this study, therefore, five dimensions of IT education service quality (i.e., tangibles, reliability, responsiveness, assurance, and empathy) are derived for the context of IT education, based on the SERVPERF model and related previous studies. In addition, the effects of these IT education service quality factors on learners' educational satisfaction and their work application and recommendation intentions are examined. Furthermore, the moderating roles of learner position (i.e., practitioner group vs. manager group) and participation motivation (i.e., voluntary vs. involuntary participation) in the relationships between IT education service quality factors and learners' educational satisfaction, work application intention, and recommendation intention are also investigated. In an analysis using the structural equation model (SEM) technique, based on a questionnaire given to 203 participants in IT education programs at an 'M' IT educational institution in Seoul, South Korea, tangibles, reliability, and assurance were found to have a significant effect on educational satisfaction.
This educational satisfaction was found to have a significant effect on both work application intention and recommendation intention. Moreover, learner position and participation motivation were found to have a partial moderating impact on the relationship between IT education service quality factors and educational satisfaction. This study holds academic implications in that it is one of the first studies to apply the SERVPERF model (rather than the SERVQUAL model, which has been widely adopted by prior studies) to demonstrate the influence of IT education service quality on learners' educational satisfaction, work application intention, and recommendation intention in an IT education environment. The results of this study are expected to provide practical guidance for IT education service providers who wish to enhance learners' educational satisfaction and service management efficiency. (A hedged SEM sketch follows below.)
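
A hedged sketch of the kind of SEM analysis described above, using the semopy package with lavaan-style syntax; the variable names are illustrative, and the measurement model is simplified to single-indicator constructs rather than the study's full latent structure.

```python
# Simplified structural model: five SERVPERF-style quality dimensions ->
# satisfaction -> work application / recommendation intentions.
import pandas as pd
import semopy

spec = """
satisfaction ~ tangibles + reliability + responsiveness + assurance + empathy
work_application ~ satisfaction
recommendation ~ satisfaction
"""

df = pd.read_csv("it_education_survey.csv")   # hypothetical survey scores
model = semopy.Model(spec)
model.fit(df)
print(model.inspect())   # path coefficients and significance estimates
```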