• Title/Summary/Keyword: System Improvement (시스템 개선)


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, a Go (Baduk) artificial intelligence program developed by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible moves from a single position exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, a traditional artificial neural network architecture. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions under which dropout was applied. The F1 score was used to evaluate the models because it shows how well a model classifies the class of interest, rather than overall accuracy. The details of how each deep learning technique was applied are as follows. The CNN algorithm extracts features by reading values adjacent to a given value, but in business data the distance between fields usually does not matter because the fields are generally independent of one another. In this experiment, we therefore set the filter size of the CNN to the number of fields so that the whole record is learned at once, and added a hidden layer so that decisions are made on the basis of the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. The experiment yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs performed well not only in the fields where their effectiveness has already been proven but also in a binary classification problem to which they have rarely been applied. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
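To make the CNN setup described above concrete, here is a minimal sketch assuming a hypothetical 16-field record and illustrative layer sizes: a Conv1D filter spanning all input fields at once, an extra dense hidden layer, dropout with probability 0.5, and F1-score evaluation. The data are simulated stand-ins for the Portuguese bank telemarketing data, and none of the hyperparameters are claimed to be the paper's exact settings.

```python
# Hedged sketch: a 1D-CNN whose filter spans all input fields, followed by
# an extra dense hidden layer and dropout(0.5), evaluated with the F1 score.
# Field count, layer sizes, and the split are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from tensorflow.keras import layers, models

n_fields = 16                       # number of input fields (assumed)
X = np.random.rand(5000, n_fields)  # stand-in for the bank telemarketing data
y = np.random.randint(0, 2, 5000)   # binary target: opened an account or not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = models.Sequential([
    layers.Input(shape=(n_fields, 1)),
    # filter length = number of fields, so one filter reads the whole record at once
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),   # extra hidden layer on top of CNN features
    layers.Dropout(0.5),                   # dropout probability used in the paper
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train[..., None], y_train, epochs=5, batch_size=64, verbose=0)

y_pred = (model.predict(X_test[..., None], verbose=0) > 0.5).astype(int).ravel()
print("F1 score:", f1_score(y_test, y_pred))
```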

The Effect of Partially Used High Energy Photon on Intensity-modulated Radiation Therapy Plan for Head and Neck Cancer (두경부암 세기변조방사선치료 계획 시 부분적 고에너지 광자선 사용에 따른 치료계획 평가)

  • Chang, Nam Joon;Seok, Jin Yong;Won, Hui Su;Hong, Joo Wan;Choi, Ji Hun;Park, Jin Hong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.1-8
    • /
    • 2013
  • Purpose: Selecting the proper beam energy in treatment planning is very important because the dose distribution in the body differs with photon energy. In general, low energy photons have been used in intensity-modulated radiation therapy (IMRT) for head and neck (H&N) cancer. The aim of this study was to evaluate the effect of partially using a high energy photon beam at the posterior oblique fields in IMRT plans for H&N cancer. Materials and Methods: The study was carried out on 10 patients (5 with nasopharyngeal cancer, 5 with tonsillar cancer) treated with IMRT at Seoul National University Bundang Hospital. CT images were acquired at 3 mm slice thickness under the same conditions, and treatment planning was performed with Eclipse (Ver. 7.1, Varian, Palo Alto, USA). Two plans were generated under the same planning objectives, dose-volume constraints, and eight-field setting: (1) the low energy plan (LEP), created using the 6 MV beam alone, and (2) the partially used high energy plan (PHEP), in which the 15 MV beam was used at the two posterior oblique fields with deeper penetration depths while the 6 MV beam was used for the remaining fields. The LEP and PHEP plans were compared in terms of coverage, conformity index (CI), and homogeneity index (HI) for the planning target volume (PTV). For organs at risk (OARs), $D_{mean}$ and $D_{50\%}$ were analyzed for both parotid glands, and $D_{max}$ and $D_{1\%}$ were analyzed for the spinal cord. Integral dose (ID) and total monitor units (MU) were compared as additional parameters. To compare the dose to the normal tissue of the posterior neck, a posterior normal tissue volume (P-NTV) was defined for each patient, and $D_{mean}$, $V_{20Gy}$, and $V_{25Gy}$ for the P-NTV were evaluated using dose-volume histograms (DVH). Results: The dose distributions were similar between LEP and PHEP with regard to coverage, CI, and HI for the PTV. No evident difference was observed in the spinal cord. However, $D_{mean}$ and $D_{50\%}$ for both parotid glands were slightly reduced, by 0.6% and 0.7%, in PHEP. The ID was reduced by 1.1% in PHEP, and the total MU for PHEP was 1.8% lower than that for LEP. In the P-NTV, the $D_{mean}$, $V_{20Gy}$, and $V_{25Gy}$ of PHEP were 1.6%, 1.8%, and 2.9% lower than those of LEP. Conclusion: Dose to some OARs and to normal tissue, as well as total monitor units, were reduced in the IMRT plan with partially used high energy photons. Although it is unclear whether these reductions carry a clinical benefit for the patient, partial use of high energy photons could improve the overall plan quality of IMRT for head and neck cancer.
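For reference, the two plan-quality indices compared above are commonly defined as follows; the abstract does not state which variants were used, so these are standard forms (an RTOG-style CI and an ICRU 83-style HI) given only as assumptions.

```latex
% Conformity index (RTOG-style): volume enclosed by the reference isodose
% relative to the planning target volume; values near 1 indicate good conformity.
CI = \frac{V_{RI}}{V_{PTV}}

% Homogeneity index (ICRU 83-style): dose spread inside the PTV normalized by
% the median dose; smaller values indicate a more homogeneous plan.
HI = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}
```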


Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand their human relationships. In social network services, the relations between users are represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic way of analyzing social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection, and classification into groups. Ever since Social Networking Services (SNS) drew the attention of millions of users, numerous studies have been carried out to analyze their user relationships and messages. The typical, representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. In degree centrality analysis, the shortest path between nodes is not considered; however, it is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a major concern because the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever-increasing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2, which makes the analysis very expensive; for example, if the number of nodes is 10,000, the number of links is 49,995,000. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path finding method, we show how efficient our proposed approach can be by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and to search for shortest paths rapidly in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Because a large number of links are shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with multiple connections functions as a hub node. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between two different nodes. When this heuristic evaluation function is used, the worst case occurs when the target node is situated at the bottom of a skewed tree; to deal with such target nodes, a preprocessing step is conducted. We then find the shortest path between two nodes in the social network efficiently and analyze the network. For verification of the proposed method, we crawled data on 160,000 people online and constructed a social network, and then compared the proposed method with previous methods, best-first search and breadth-first search, in terms of search and analysis time. The suggested method takes 240 seconds to search for nodes, whereas the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The proposed method thus shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
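A minimal sketch of the degree-based best-first search described above is given below, assuming a toy undirected friendship graph and simple tie-breaking. It illustrates the hub-first expansion idea rather than the paper's exact implementation or its preprocessing step, and the path it returns is found heuristically rather than guaranteed to be the shortest.

```python
# Hedged sketch of a degree-heuristic best-first search: when expanding the
# frontier, neighbors with more connections (hub nodes) are tried first.
# The graph and node names are illustrative assumptions.
import heapq

def best_first_path(graph, start, goal):
    """graph: dict mapping node -> set of neighbor nodes (undirected)."""
    # priority = -degree, so high-degree (hub) nodes are expanded first
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (-len(graph[nbr]), nbr, path + [nbr]))
    return None  # no path found

# Toy usage on a small friendship graph
g = {
    "a": {"b", "c"}, "b": {"a", "c", "d", "e"}, "c": {"a", "b"},
    "d": {"b", "e"}, "e": {"b", "d", "f"}, "f": {"e"},
}
print(best_first_path(g, "a", "f"))   # e.g. ['a', 'b', 'e', 'f']
```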

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting a company's business strategy. There has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and used a vector dimension of 300 and a window size of 15 in the subsequent experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as the product-name dataset in order to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of the market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has a high potential for practical application, since it can address unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module could be advanced by giving a proper order to the preprocessed dataset or by combining another algorithm, such as Jaccard similarity, with Word2Vec. The method of clustering product groups could also be changed to another type of unsupervised machine learning algorithm. Our group is currently working on subsequent studies, and we expect that they will further improve the performance of the basic model conceptually proposed here.
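The pipeline above can be sketched roughly as follows, with a toy corpus, an assumed similarity threshold, and a made-up sales table standing in for the 345,103 Statistics Korea records; the vector dimension of 300 and window size of 15 follow the paper, but everything else is illustrative.

```python
# Hedged sketch: train Word2Vec on product-name tokens, pull names similar to a
# seed word (standing in for a KSIC index word) by cosine similarity, then sum
# the matched products' sales to estimate a market size. All data and the
# threshold are assumptions, not the paper's actual dataset or settings.
from gensim.models import Word2Vec

corpus = [                       # each record: a tokenized product name
    ["stainless", "kitchen", "knife"],
    ["kitchen", "knife", "set"],
    ["ceramic", "kitchen", "knife"],
    ["industrial", "laser", "cutter"],
]
sales = {"stainless kitchen knife": 120, "kitchen knife set": 80,
         "ceramic kitchen knife": 60, "industrial laser cutter": 500}

model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, epochs=50)

seed = "knife"                   # stand-in for a KSIC index word
group, market_size = [], 0
for name, amount in sales.items():
    tokens = [t for t in name.split() if t in model.wv]
    # cosine similarity between the seed word and the product-name tokens
    sim = max(model.wv.similarity(seed, t) for t in tokens)
    if sim >= 0.3:               # similarity threshold (assumed)
        group.append(name)
        market_size += amount

print(group, market_size)
```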

The Accuracy of Tuberculosis Notification Reports at a Private General Hospital after Enforcement of New Korean Tuberculosis Surveillance System (새로운 국가결핵감시체계 시행 후 한 민간종합병원에서 작성된 결핵정보관리보고서의 정확도 조사)

  • Kim, Cheol Hong;Koh, Won-Jung;Kwon, O Jung;Ahn, Young Mee;Lim, Seong Young;An, Chang Hyeok;Youn, Jong Wook;Hwang, Jung Hye;Suh, Gee Young;Chung, Man Pyo;Kim, Hojoong
    • Tuberculosis and Respiratory Diseases
    • /
    • v.54 no.2
    • /
    • pp.178-190
    • /
    • 2003
  • Background: The committee for tuberculosis (TB) survey planning for the year 2000 decided to construct the Korean Tuberculosis Surveillance System (KTBS), based on doctors' routine reporting. The success of the KTBS relies on the precision of the recorded TB notification forms. The purpose of this study was to determine the accuracy of the TB notification forms written at a private general hospital and submitted to the corresponding health center, and to improve the comprehensiveness of this reporting system. Materials and Methods: 291 adult TB patients diagnosed between August 2000 and January 2001 were enrolled in this study. The TB notification forms were compared with the medical records and various laboratory results with respect to case characteristics, history of previous treatment, examinations for diagnosis, site of TB according to the international classification of diseases, and treatment. Results: Among the examinations for diagnosis in 222 pulmonary TB patients, the concordance rate for the 'sputum smear exam' was 76%, but that for the 'sputum culture exam' was only 23%. Among the 198 sputum culture exams labeled 'not examined', 43 (21.7%) proved to be truly 'not examined', 70 (35.4%) were proven to be 'culture positive', and 85 (43.0%) were proven to be 'culture negative'. Among the examinations for diagnosis in 69 extrapulmonary TB patients, the concordance rate for the 'smear exam other than sputum' was 54%. Regarding treatment, the overall concordance rate for the 'type of registration' on the TB notification form was 85%. Among the 246 'new' cases on the TB notification forms, 217 (88%) were truly 'new' cases, while 13 were proven to be 'relapse', 2 'treatment after failure', 1 'treatment after default', 12 'transferred-in', and 1 'chronic'. Among the 204 patients for whom the HREZ regimen was prescribed, 172 (84.3%) were actually taking the HREZ regimen, and the others were prescribed other drug regimens. Conclusion: Correct recording of the TB notification forms in the private sector is necessary to support an effective TB surveillance system in Korea.

The Present State of Domestic Acceptance of Various International Conventions for the Prevention of Marine Pollution (해양오염방지를 위한 각종 국제협약의 국내 수용 현황)

  • Kim, Kwang-Soo
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.12 no.4 s.27
    • /
    • pp.293-300
    • /
    • 2006
  • Domestic laws such as the Korea Marine Pollution Prevention Law (KMPPL), which has been made and amended according to the conclusion and amendment of various international conventions for the prevention of marine pollution such as MARPOL 73/78, were reviewed and compared with the major contents of the relevant international conventions. Alternative measures for legislating new laws or amending existing laws such as the KMPPL to accept the major contents of existing international conventions were proposed. Annex VI of MARPOL 73/78, into which the regulations for the prevention of air pollution from ships have been adopted, has recently been accepted in the KMPPL, which should apply to ships as the moving sources of air pollution at sea, rather than in the Korea Air Environment Conservation Law, which should apply to automobiles and industrial installations on land. The major contents of LC 72/95 have been accepted in the KMPPL. However, a few of the substances requiring special care in Annex II of the 72 LC, a few of the items on the characteristics and composition of matter in relation to the criteria governing the issue of permits for the dumping of matter at sea in Annex III of the 72 LC, and a few of the items on wastes or other matter that may be considered for dumping in Annex I of the 96 Protocol have not yet been accepted in the KMPPL. The major contents of OPRC 90 have been accepted in the KMPPL; however, oil pollution emergency plans for sea ports and oil handling facilities, and the national contingency plan for preparedness and response, have not yet been accepted. The waste-oil-related articles of the Basel Convention, which regulates and prohibits the transboundary movement of hazardous waste, should be accepted in the KMPPL in order to prevent the transfer of scrap-purpose tanker ships with oil/water mixtures and chemicals remaining on board from advanced countries to developing and/or underdeveloped countries. The International Convention on the Control of Harmful Anti-Fouling Systems on Ships should be accepted in the KMPPL rather than in the Korea Noxious Chemicals Management Law. The international convention on ships' ballast water and sediment management should be accepted in the KMPPL or by a new law in order to protect the domestic marine ecosystem and coastal environment from the invasion of harmful exotic species through the discharge of ships' ballast water.


A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as artificial intelligence refrigerators and speakers. In the financial sector, Goldman Sachs used Kensho's artificial intelligence technology to improve its stock trading process; for example, two stock traders could handle the work of 600, and analytical work that once took 15 people four weeks could be processed in 5 minutes. In particular, big data analysis through machine learning, one of the fields of artificial intelligence, is actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning theory are also actively studied. The limits of the linearity assumption in financial time series studies are overcome by using machine learning approaches such as artificial intelligence prediction models. Quantitative studies of financial data based on past stock-market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of company stock prices by learning from large amounts of text data, such as news and comments related to the stock market. Investing in commodity assets, one class of alternative assets, is usually done to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial field. In this study, we built an investment model using the Support Vector Machine (SVM), one of the machine learning models. Some research on commodity assets focuses on price prediction for a specific commodity, but it is hard to find research on commodity investment models for asset allocation that use machine learning. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We built an equally weighted portfolio of these six commodity futures for comparison with the commodity indices. We used 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the input data of the model, because commodity assets are closely related to macroeconomic activity; they include 14 US economic indicators, two Chinese economic indicators, and two Korean economic indicators. The data period is from January 1990 to May 2017; the first 195 monthly observations were used as training data and the following 125 as test data. In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors. The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period, so we also used the odd-numbered years as training data and the even-numbered years as test data and confirmed that the results are similar. As a result, when commodity assets are allocated to a traditional portfolio composed of stocks, bonds, and cash, more effective investment performance can be obtained not by investing in commodity indices but by investing in commodity futures, and especially by using the commodity futures portfolio rebalanced by the SVM model.
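A minimal sketch of the SVM direction-forecasting step is shown below, using simulated monthly data in place of the actual macroeconomic series; the 19-indicator input width and the 195/125 train/test split follow the abstract, while the kernel choice and rebalancing rule are assumptions.

```python
# Hedged sketch: monthly macroeconomic indicators are used to predict whether
# a commodity futures portfolio rises or falls the next month. Data are
# simulated; none of this is claimed to be the paper's exact configuration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_months, n_indicators = 320, 19
X = rng.normal(size=(n_months, n_indicators))   # macro indicators per month
y = (rng.random(n_months) > 0.5).astype(int)    # 1 = portfolio up next month

X_train, y_train = X[:195], y[:195]             # earlier months for training
X_test, y_test = X[195:], y[195:]               # later months for testing

scaler = StandardScaler().fit(X_train)
model = SVC(kernel="rbf", C=1.0, gamma="scale") # kernel choice is an assumption
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("directional accuracy:", accuracy_score(y_test, pred))
# A simple rebalancing rule: hold the futures portfolio next month when pred == 1.
```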

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and various forecasting models using various techniques now exist. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more useful for short-term trading prediction and for applying statistical and mathematical techniques. Most of the studies conducted with these technical indicators have modeled stock price prediction as a binary classification of future market movement, rising or falling (usually on the next trading day). However, such binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the existing binary scheme into a multi-class system of stock index trends: upward, boxed (sideways), and downward. Techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN) can address this multi-class problem; here we propose an optimization model that uses a Genetic Algorithm as a wrapper to improve the performance of Multi-class Support Vector Machines (MSVM), which have proved superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and the selection of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional multi-class SVM, which has been known to show the best prediction performance up to now, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, it was confirmed that instance selection plays a very important role in predicting the stock index trend and contributes more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's real KOSPI200 stock index. Our research is primarily aimed at predicting trend segments in order to capture trading signals or short-term trend transition points. The experimental data set includes technical indicators, such as the price and volatility indices of the KOSPI200 stock index in Korea (2004-2017), and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend classification, was classified into three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for validation. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted. The MSVM adopted the One-Against-One (OAO) approach, which is known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
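The wrapper idea can be sketched as follows: a small genetic algorithm evolves a bit string that jointly switches input features and training instances on or off, with the validation accuracy of a one-against-one multi-class SVM as the fitness. The data are simulated and the GA settings (population size, crossover, mutation rate) are illustrative assumptions, not the paper's GA-MSVM configuration.

```python
# Hedged sketch of a GA wrapper that jointly selects features and training
# instances for a multi-class SVM, in the spirit of the GA-MSVM idea above.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_samples, n_features = 300, 15
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(-1, 2, n_samples)             # -1 = down, 0 = boxed, 1 = up
X_tr, y_tr, X_va, y_va = X[:210], y[:210], X[210:], y[210:]

def fitness(chrom):
    feat = chrom[:n_features].astype(bool)     # which features to keep
    inst = chrom[n_features:].astype(bool)     # which training instances to keep
    if feat.sum() == 0 or inst.sum() < 10:
        return 0.0
    clf = SVC(kernel="rbf", decision_function_shape="ovo")  # one-against-one multi-class SVM
    clf.fit(X_tr[inst][:, feat], y_tr[inst])
    return accuracy_score(y_va, clf.predict(X_va[:, feat]))

pop = rng.integers(0, 2, size=(20, n_features + len(y_tr)))
for gen in range(10):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the best half
    cut = rng.integers(1, pop.shape[1], size=10)       # one-point crossover
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])
    mutate = rng.random(children.shape) < 0.01         # bit-flip mutation
    children = np.where(mutate, 1 - children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected features:", int(best[:n_features].sum()),
      "selected instances:", int(best[n_features:].sum()))
```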

The Effect of Recombinant Human Epidermal Growth Factor on Cisplatin and Radiotherapy Induced Oral Mucositis in Mice (마우스에서 Cisplatin과 방사선조사로 유발된 구내염에 대한 재조합 표피성장인자의 효과)

  • Na, Jae-Boem;Kim, Hye-Jung;Chai, Gyu-Young;Lee, Sang-Wook;Lee, Kang-Kyoo;Chang, Ki-Churl;Choi, Byung-Ock;Jang, Hong-Seok;Jeong, Bea-Keon;Kang, Ki-Mun
    • Radiation Oncology Journal
    • /
    • v.25 no.4
    • /
    • pp.242-248
    • /
    • 2007
  • Purpose: To study the effect of recombinant human epidermal growth factor (rhEGF) on oral mucositis induced by cisplatin and radiotherapy in a mouse model. Materials and Methods: Twenty-four ICR mice were divided into three groups: the normal control group, the no-rhEGF group (treatment with cisplatin and radiation), and the rhEGF group (treatment with cisplatin, radiation, and rhEGF). A model of mucositis induced by cisplatin and radiotherapy was established by injecting mice with cisplatin (10 mg/kg) on day 1 and exposing the head and neck to radiation (5 Gy/day) on days 1 to 5. rhEGF was administered subcutaneously on days -1 to 0 (1 mg/kg/day) and on days 3 to 5 (1 mg/kg/day). Evaluation included body weight, oral intake, and histology. Results: Comparing the change in body weight between the rhEGF group and the no-rhEGF group, a statistically significant difference was observed in the rhEGF group for the 5 days after day 3 of the experiment. Both the rhEGF group and the no-rhEGF group had reduced food intake until day 5 of the experiment and then demonstrated increased food intake after day 13. On histological examination on day 7 after treatment with cisplatin and radiation, the rhEGF group showed a focal cellular reaction in the epidermal layer of the mucosa, while the no-rhEGF group did not show inflammation of the oral mucosa. Conclusion: These findings suggest that rhEGF has the potential to reduce the oral mucositis burden in mice after treatment with cisplatin and radiation. The optimal dose, number, and timing of rhEGF administration require further investigation.

Analysis and Improvement Strategies for Korea's Cyber Security Systems Regulations and Policies

  • Park, Dong-Kyun;Cho, Sung-Je;Soung, Jea-Hyen
    • Korean Security Journal
    • /
    • no.18
    • /
    • pp.169-190
    • /
    • 2009
  • Today, the rapid advance of scientific technologies has brought about fundamental changes in the types and levels of terrorism, while the war against more than one thousand small and large terrorist and criminal organizations around the world has already begun. A method highly likely to be employed by terrorist groups using 21st-century state-of-the-art technology is cyber terrorism. In many instances, things that could only be imagined in reality can be made possible in cyberspace. An easy example would be to randomly alter a letter in the blood type of a terrorism target in a health care data system, which could harm the subject and contribute to overturning the opponent's system or regime. The CIH virus crisis that occurred on April 26, 1999 had significant implications in various respects: a virus program of just a few lines, written by Taiwanese college students without any specific objective, ended up spreading widely throughout the Internet, damaging 30,000 PCs in Korea and causing over 2 billion won in monetary damages for repairs and data recovery. Despite such risks of cyber terrorism, a great number of Korean sites employ loose security measures; in fact, there are many cases where a company with millions of subscribers has very lax security systems. Nationwide preparation for cyber terrorism is called for. In this context, this research analyzes the current status of Korea's cyber security systems and laws from a policy perspective and proposes improvement strategies. This research suggests the following solutions. First, the National Cyber Security Management Act should be passed to be effective as the national cyber security management regulation. With the Act's establishment, a more efficient and proactive response to cyber security management will become possible within a nationwide cyber security framework, and the Act will define its relationship with other related laws. The newly passed National Cyber Security Management Act will eliminate the inefficiencies caused by functional redundancies dispersed across individual sectors in current legislation. Second, to ensure efficient nationwide cyber security management, national cyber security standards and models should be proposed, and at the same time a national cyber security management organizational structure should be established to implement national cyber security policies at each government agency and social component. The National Cyber Security Center must serve as the comprehensive collection, analysis, and processing point for information related to national cyber crises, oversee each government agency, and build collaborative relations with the private sector. Also, a national and comprehensive response system in which both the private and public sectors participate should be set up for advance detection and prevention of cyber crisis risks and for a consolidated and timely response using national resources in times of crisis.
