• Title/Summary/Keyword: 데이터 논문 (data papers)

Search Results: 41,279

Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.51-56
    • /
    • 2009
  • Purpose: Most hospitals have introduced PACS, and use of the system continues to expand, but a small-scale PACS, called MicroPACS, can already be built with open-source programs. The aim of this study is to demonstrate the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. This study describes how to set up a MicroPACS with open-source programs and assesses its storage capability, stability, compatibility, and the performance of operations such as query and retrieve. Materials and Methods: 1. We first searched for open-source software meeting the following criteria for building a MicroPACS: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not be limited in storage capacity; (6) it must support DICOM. 2. (1) To evaluate data-storage performance, we compared the time required to back up data with the open-source software against optical discs (CDs and DVD-RAMs), and likewise compared the respective retrieval times. (2) To estimate work efficiency, we measured the time needed to locate data on CDs, DVD-RAMs, and the MicroPACS; seven technologists participated in this study. 3. To evaluate the stability of the software, we checked for data loss while the system was maintained for one year, and compared this against the number of errors found in 500 randomly selected CDs. Results: 1. Among 11 open-source packages, we chose the Conquest DICOM Server, which uses MySQL as its database management system. 2. (1) Back-up and retrieval times (min) were as follows: DVD-RAM (5.13, 2.26) vs. Conquest DICOM Server (1.49, 1.19) on a GE DSTE (p<0.001); CD (6.12, 3.61) vs. Conquest (0.82, 2.23) on a GE DLS (p<0.001); CD (5.88, 3.25) vs. Conquest (1.05, 2.06) on a SIEMENS scanner. (2) The time (sec) required to locate data was: CD ($156{\pm}46$), DVD-RAM ($115{\pm}21$), and Conquest DICOM Server ($13{\pm}6$). 3. There was no data loss (0%) over one year, during which 12,741 PET/CT studies were stored in 1.81 TB. By contrast, 14 errors were found among the 500 CDs (2.8%). Conclusions: A MicroPACS can be set up with open-source software, and its performance proved excellent: the system was more efficient and more robust than back-up processes using CDs or DVD-RAMs. We believe a MicroPACS can serve as an effective data-storage device, provided its operators continue to develop and systematize it.
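
As a rough illustration of the kind of check such a setup involves (not the authors' code), the sketch below uses the pynetdicom library to verify connectivity to a DICOM server such as Conquest and to issue a simple study-level C-FIND query. The host, port, AE titles, and patient ID are placeholders, not values from the study.

```python
# Minimal connectivity/query sketch against a DICOM server (e.g., Conquest).
# Host, port, AE titles, and patient ID below are illustrative placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE

VERIFICATION_UID = "1.2.840.10008.1.1"                 # C-ECHO (Verification)
PATIENT_ROOT_FIND_UID = "1.2.840.10008.5.1.4.1.2.1.1"  # Patient Root Q/R - FIND

ae = AE(ae_title="MICROPACS_TEST")
ae.add_requested_context(VERIFICATION_UID)
ae.add_requested_context(PATIENT_ROOT_FIND_UID)

assoc = ae.associate("127.0.0.1", 5678, ae_title="CONQUESTSRV1")
if assoc.is_established:
    # 1. Verify the server responds at all (C-ECHO).
    status = assoc.send_c_echo()
    print("C-ECHO status:", status)

    # 2. Query for the studies of one patient (C-FIND).
    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = "12345"      # hypothetical patient ID
    query.StudyInstanceUID = ""    # ask the server to fill this in
    for status, identifier in assoc.send_c_find(query, PATIENT_ROOT_FIND_UID):
        if status and status.Status in (0xFF00, 0xFF01):  # "pending" = a match
            print(identifier.StudyInstanceUID)

    assoc.release()
```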


Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As the smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application that is motivated by various welfare applications such as the support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using the smartphone sensors for activity recognition is that the number of sensors used should be minimized to save the battery power. When the number of sensors used are restricted, it is difficult to realize a highly accurate activity recognizer or a classifier because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities by using only a single sensor data, i.e., the smartphone accelerometer data. The approach that we take to dealing with this ten-class problem is to use the ensemble of nested dichotomy (END) method that transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes are split into two subsets of classes by using a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by using another binary classifier. Continuing in this way, we can obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how a set of classes are split into two subsets at each node, the final tree that we obtain can be different. Since there can be some classes that are correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree each time with a different random subset of features using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than a simple bagging. As an overall result, our ensemble of nested dichotomy can actually be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. 
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there has been much research on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare uses such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity classifier, because subtly different activities are hard to distinguish with only limited information; the difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier distinguishing ten different activities can be built using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the tree, the set of all classes is split into two subsets by a binary classifier; at each child node, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class; this tree can be viewed as a nested dichotomy that makes multi-class predictions. Depending on how the classes are split at each node, the resulting tree can differ, and because some classes may be correlated, a particular tree may perform better than others. The best tree, however, can hardly be identified without deep domain knowledge. The END method copes with this by building multiple dichotomy trees randomly during learning and combining their predictions during classification; it is generally known to perform well even when the base learner cannot model complex decision boundaries. As the base classifier at each node of a dichotomy, we use another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each with a different random subset of features, on bootstrap samples; by combining bagging with random feature-subset selection, it enjoys more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window covering the last 2 seconds. For the experiments comparing END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) examples collected per activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END classified all ten activities with a fairly high accuracy of 98.4%, whereas a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine achieved 97.6%, 96.5%, and 97.6%, respectively.
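
The following is a minimal, self-contained sketch of the END idea under simplified assumptions (random class splits, a random forest as the binary base learner, majority voting over trees); it illustrates the technique and is not the authors' code, and the feature extraction described above is not reproduced here.

```python
# Sketch: ensemble of nested dichotomies (END) with random-forest base
# classifiers. Assumes integer class labels (e.g., 0-9 for the ten activities).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    """One randomly structured nested-dichotomy tree."""
    def __init__(self, rng):
        self.rng = rng

    def fit(self, X, y):
        self.tree_ = self._build(X, y, list(np.unique(y)))
        return self

    def _build(self, X, y, classes):
        if len(classes) == 1:
            return classes[0]                        # leaf: a single class
        classes = list(self.rng.permutation(classes))
        left = classes[: len(classes) // 2]          # random split into two subsets
        right = classes[len(classes) // 2:]
        clf = RandomForestClassifier(n_estimators=50,
                                     random_state=int(self.rng.integers(1 << 30)))
        clf.fit(X, np.isin(y, left).astype(int))     # binary task: left vs. right
        keep_l, keep_r = np.isin(y, left), np.isin(y, right)
        return (clf,
                self._build(X[keep_l], y[keep_l], left),
                self._build(X[keep_r], y[keep_r], right))

    def predict_one(self, x, node=None):
        node = self.tree_ if node is None else node
        if not isinstance(node, tuple):
            return node
        clf, left, right = node
        goes_left = clf.predict(x.reshape(1, -1))[0] == 1
        return self.predict_one(x, left if goes_left else right)

def end_predict(trees, X):
    """Majority vote over an ensemble of nested dichotomies."""
    votes = np.array([[t.predict_one(x) for t in trees] for x in X])
    return np.array([np.bincount(row.astype(int)).argmax() for row in votes])

# Usage (X_train, y_train with integer labels 0-9):
# trees = [NestedDichotomy(np.random.default_rng(s)).fit(X_train, y_train)
#          for s in range(10)]
# y_pred = end_predict(trees, X_test)
```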

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can capture dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, because a training set generally contains a huge number of distinct words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). This paper therefore proposes a phoneme-level language model for Korean based on LSTMs. A phoneme, a vowel or a consonant, is the smallest unit of Korean text. We construct language models with three or four LSTM layers, each trained with stochastic gradient descent or with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was carried out on Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters paired with the following (21st) character as output, yielding 1,023,411 input-output pairs in total, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly better and were even worse under some conditions. On the other hand, when comparing automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model.
Although the completeness of the generated sentences differed slightly between the models, sentence-generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely applicable to Korean language processing and speech recognition, which are foundations of artificial intelligence systems.
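
As a rough sketch of the kind of model described (not the authors' exact architecture or hyperparameters, and written against the current tensorflow.keras API rather than the Theano backend used in the paper), a 3-layer LSTM language model over a small phoneme/character vocabulary might look like the following; the vocabulary size of 74 and the window length of 20 follow the abstract, while the layer width, embedding size, and sampling routine are placeholders.

```python
# Sketch of a phoneme-level LSTM language model (3 stacked LSTM layers).
# Vocabulary size (74) and input window (20) follow the abstract;
# layer width, embedding size, and training settings are illustrative guesses.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 74      # unique phonemes/characters after pre-processing
WINDOW = 20          # 20 consecutive units predict the 21st

model = keras.Sequential([
    keras.Input(shape=(WINDOW,)),
    layers.Embedding(VOCAB_SIZE, 64),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Perplexity on held-out data is exp(mean cross-entropy):
# ppl = float(np.exp(model.evaluate(X_test, y_test, verbose=0)))

def generate(seed_ids, n_steps, temperature=1.0):
    """Sample a sequence: feed the last WINDOW unit IDs, draw the next unit
    from the softmax distribution. Assumes len(seed_ids) >= WINDOW."""
    out = list(seed_ids)
    for _ in range(n_steps):
        x = np.array(out[-WINDOW:])[None, :]
        p = model.predict(x, verbose=0)[0]
        p = np.exp(np.log(p + 1e-9) / temperature)
        out.append(int(np.random.choice(VOCAB_SIZE, p=p / p.sum())))
    return out
```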

A Study on the Archives and Records Management in Korea - Overview and Future Direction - (한국의 기록관리 현황 및 발전방향에 관한 연구)

  • Han, Sang-Wan;Kim, Sung-Soo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.2 no.2
    • /
    • pp.1-38
    • /
    • 2002
  • This study examines the status quo of archives and records management in Korea, covering both governmental and professional activities related to the new records-management legislation. It primarily explores four perspectives: 1) the Government Archives and Records Service (GARS); 2) the Korean Association of Archives; 3) the Korean Society of Archives and Records Management; and 4) the Journal of Korean Society of Archives and Records Management. One of the primary tasks of the GARS is to build a special depository within which the Presidential Library should be located. The position of the GARS could then be elevated so that it is directed by a vice-minister-level official under the President as the governmental representative for public records management; in this way, the GARS could sustain its independence and take custody of public records across government agencies. The Korean Association of Archives has made efforts regarding the preservation of paper records, the preservation of digital resources in new media formats, facilities and equipment, the education of archivists, the continuing training of practitioners, and records-preservation policy; for further development, academia and industry should cooperate continuously to address current problems. The Korean Society of Archives and Records Management has held three international conferences to date, on: 1) records management and archival education in Korea, Japan, and China; 2) knowledge management and metadata in archives and information science; and 3) electronic records management and preservation, with an overview of ongoing archival research in the United States, Europe, and Asia. The Society continues to play a leading role in both theory and practice for the development of archival science in Korea; it should also suggest an educational model of archival curricula that fits the Korean context. The Journal of Korean Society of Archives and Records Management has published on six major topics to date. Findings suggest that "special archives" for regional or topical collections are desirable because they can house subject holdings on local specialties or particular figures. In addition, archival education at the undergraduate level is better suited to the Korean situation, where practitioners are strongly needed and professionals with master's degrees move into manager positions. Departments of Library and Information Science in universities should therefore open an archival science major or track at the undergraduate level to meet current market demands, and the qualification requirements for professional archivists should be kept moderate.

The Evaluation of the Difference of the SUV Caused by DFOV Change in PET/CT (PET/CT 검사에서 확대된 표시시야가 표준섭취계수에 미치는 영향 평가)

  • Kwak, In-Suk;Lee, Hyuk;Choi, Sung-Wook;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.2
    • /
    • pp.13-20
    • /
    • 2011
  • Purpose: The limited field of view (FOV) of CT can cause truncation artifacts outside the display field of view (DFOV) in PET/CT images. In this study, we measured the difference in SUV and compared the influence of an extended DFOV on the reconstructed image. Materials and Methods: A NEMA 1994 PET phantom was filled with 5.3 kBq/mL of $^{18}F$ (FDG) and placed at the center of the FOV, and phantom images were acquired with an emission scan. The phantom was then shifted to the outer edge of the DFOV and images were acquired in the same way. All acquired data were reconstructed identically with DFOVs of 50 cm and 70 cm, ROIs were drawn on the emission images, and the SUVs were compared. In the clinical test, a group of patients showing truncation artifacts was selected; an ROI was drawn on the liver in each patient image and the SUV was compared as the DFOV changed. Results: In the centered phantom study, the pixel size increased from 3.91 mm to 5.47 mm as the DFOV increased. With the extended DFOV, the $SUV_{max}$ of the ROI decreased from 1.49 to 1.35; with the phantom shifted off-center, the $SUV_{max}$ decreased from 1.30 to 1.20. At the truncated region in the extended DFOV, the $SUV_{max}$ was 1.51, and the $SUV_{max}$ was 25.9% higher just outside the truncated region than inside it. In the patient study, the $SUV_{max}$ with the extended DFOV decreased from 3.38 to 3.13. Conclusion: With an extended DFOV, the larger pixel size increases pixel-to-pixel noise and the $SUV_{max}$ decreases, so the $SUV_{max}$ is underestimated. The underestimation of quantitative results across the whole image plane should therefore be considered when an extended-DFOV protocol is applied to patient studies; in addition, quantitative values at the truncated region may read higher outside than inside.
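
For intuition, the pixel sizes quoted above are consistent with a fixed reconstruction matrix, since pixel size is simply DFOV divided by matrix size; the sketch below checks this arithmetic (the 128×128 matrix is an assumption inferred from the numbers, not stated in the abstract).

```python
# Pixel size = DFOV / matrix size. The abstract's values (3.91 mm and
# 5.47 mm) match a 128 x 128 matrix, which is assumed here, not stated.
MATRIX = 128

for dfov_cm in (50, 70):
    pixel_mm = dfov_cm * 10 / MATRIX
    print(f"DFOV {dfov_cm} cm -> pixel size {pixel_mm:.2f} mm")

# DFOV 50 cm -> pixel size 3.91 mm
# DFOV 70 cm -> pixel size 5.47 mm
```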


Measurements of Dissociation Enthalpy for Simple Gas Hydrates Using High Pressure Differential Scanning Calorimetry (고압 시차 주사 열량계를 이용한 단일 객체 가스 하이드레이트의 해리 엔탈피 측정)

  • Lee, Seungmin;Park, Sungwon;Lee, Youngjun;Kim, Yunju;Lee, Ju Dong;Lee, Jaehyoung;Seo, Yongwon
    • Korean Chemical Engineering Research
    • /
    • v.50 no.4
    • /
    • pp.666-671
    • /
    • 2012
  • Gas hydrates are inclusion compounds formed when small guest molecules are incorporated into well-defined cages made up of hydrogen-bonded water molecules. Since large masses of natural gas hydrates, containing mostly $CH_4$, exist in permafrost regions or beneath deep oceans, these naturally occurring hydrates are regarded as a future energy resource. The heat of dissociation is one of the most important thermal properties for exploiting natural gas hydrates, and the accurate, direct way to measure dissociation enthalpies is with a calorimeter. In this study, a high-pressure micro DSC (Differential Scanning Calorimeter) was used to measure the dissociation enthalpies of methane, ethane, and propane hydrates. The accuracy and repeatability of the DSC data were confirmed by measuring the dissociation enthalpy of ice. The dissociation enthalpies of methane, ethane, and propane hydrates were found to be 54.2, 73.8, and 127.7 kJ/mol-gas, respectively. For each gas hydrate, the dissociation temperatures obtained at given pressures during the enthalpy measurements were compared with three-phase (hydrate (H) - liquid water (Lw) - vapor (V)) equilibrium data in the literature and found to be in good agreement.
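
As a schematic of how an enthalpy value of this kind is extracted from a DSC run (a generic illustration, not the authors' procedure): the heat-flow peak recorded during dissociation is integrated over time and normalized by the moles of gas released. The trace and mole count below are synthetic placeholders, chosen only so the result lands near the methane value.

```python
# Schematic DSC analysis: integrate the heat-flow peak over time and
# normalize by moles of released gas. All numbers are made-up placeholders.
import numpy as np

t = np.linspace(0, 600, 601)                               # time, s (synthetic)
heat_flow_mw = 40 * np.exp(-0.5 * ((t - 300) / 60) ** 2)   # mW, synthetic peak

# Trapezoidal integration of mW over s gives mJ; divide by 1000 for J.
peak_J = float(np.sum(0.5 * (heat_flow_mw[1:] + heat_flow_mw[:-1])
                      * np.diff(t))) / 1000.0

n_gas_mol = 1.1e-4                     # moles of gas in the sample (assumed)
dH = peak_J / n_gas_mol / 1000.0       # kJ per mole of gas
print(f"dissociation enthalpy ~ {dH:.1f} kJ/mol-gas")   # ~54.7 here
```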

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can significantly affect a country's economy; bankruptcy prediction is therefore an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression; later studies utilized artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are used to enhance the accuracy of bankruptcy prediction. Ensemble classification combines multiple classifiers to obtain more accurate predictions than individual models, and ensemble learning is known to be very useful for improving the generalization ability of a classifier. To that end, the base classifiers in an ensemble must be as accurate and diverse as possible. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method diversifies the base classifiers by selecting a random feature subset for each classifier from the original feature space: each ensemble member is trained on a randomly chosen feature subspace, and the members' predictions are combined by an aggregation method. The k-nearest-neighbors (KNN) classifier is robust to variations in the dataset but very sensitive to changes in the feature space, which makes it a good base classifier for the random subspace method, and the KNN random subspace ensemble has been shown to be very effective at improving on an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the ensemble, yet few studies have focused on optimizing them. This study proposes a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers, using a genetic algorithm to maximize the prediction accuracy of the ensemble. The proposed model was applied to a bankruptcy prediction problem using a real dataset of Korean companies: 1,800 externally non-audited firms, of which 900 filed for bankruptcy and 900 did not. The dataset initially consisted of 134 financial ratios; 75 were selected by an independent-sample t-test of each ratio against the bankruptcy outcome, and 24 of these were then selected by logistic regression with backward feature selection. The complete dataset was separated into training and validation parts, and the training dataset was further divided into one portion for training the model and another for avoiding overfitting: prediction accuracy on the latter served as the fitness value. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the classification accuracy of the proposed model with that of other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated. The experimental results showed that the proposed model outperformed the other models, including the single KNN model and the plain random subspace ensemble.
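
A minimal sketch of the underlying ensemble (not the authors' implementation): each base classifier is a KNN trained on a random feature subset, predictions are combined by majority vote, and the per-member k values and feature subsets form the genome that the paper's genetic algorithm would optimize; here a toy random search stands in for the GA, and binary 0/1 labels (non-bankrupt/bankrupt) are assumed.

```python
# Random-subspace KNN ensemble. Each member sees a random feature subset;
# the member k values and subsets are what the paper's GA optimizes
# (a toy random search stands in for the GA here). Labels assumed 0/1.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_ensemble(X, y, ks, subsets):
    members = []
    for k, cols in zip(ks, subsets):
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X[:, cols], y)
        members.append((clf, cols))
    return members

def predict_ensemble(members, X):
    votes = np.stack([clf.predict(X[:, cols]) for clf, cols in members])
    return (votes.mean(axis=0) > 0.5).astype(int)   # majority vote; ties -> 0

def random_genome(rng, n_members, n_features):
    ks = rng.integers(1, 16, size=n_members)        # k for each member
    subsets = [rng.choice(n_features, size=n_features // 2, replace=False)
               for _ in range(n_members)]
    return ks, subsets

def search(X_tr, y_tr, X_fit, y_fit, n_members=30, n_trials=20, seed=0):
    """Toy search over genomes; fitness = accuracy on the held-out portion
    of the training data, as described in the abstract."""
    rng = np.random.default_rng(seed)
    best, best_acc = None, -1.0
    for _ in range(n_trials):
        ks, subsets = random_genome(rng, n_members, X_tr.shape[1])
        members = fit_ensemble(X_tr, y_tr, ks, subsets)
        acc = (predict_ensemble(members, X_fit) == y_fit).mean()
        if acc > best_acc:
            best, best_acc = members, acc
    return best, best_acc
```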

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.159-185
    • /
    • 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary under various constraints and preferences. When choosing POIs, one might ask friends, search the Web, or consult travel agents, but each option has limitations: friends' knowledge is limited to the places they have visited; tourism information on the internet is vast and takes much time to read and filter; and travel agents may be biased towards certain providers of travel products when suggesting itineraries. In recent years, many researchers have tried to cope with the huge amount of tourism information on the internet, for instance by tapping the wisdom of the crowd through the vast number of images people share on social media sites. Trip planning is also often formulated as a 'Tourist Trip Design Problem' and solved with various heuristic search algorithms, and many recommendation systems with various techniques have been built to handle the overwhelming tourism information. Prediction models of recommendation systems are typically built from a large dataset, but such a dataset is not always available; for models that instead require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors to recommend POIs. To enable the crowd to collaboratively recommend POIs, CYTRIP provides a shared workspace in which anyone can recommend as many POIs to as many requesters as they like, based on the requesters' specified preferences, and can vote on POIs recommended by others when they find them interesting. CYTRIP takes the recommended POIs as input and builds a multi-day trip itinerary that takes into account the user's preferences, the various time constraints, and the locations. The input becomes a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3), in which a sequence of actions defined in a domain file achieves the goals of the planning problem, namely visiting the recommended POIs. The multi-day trip planning problem is highly constrained, and it is sometimes infeasible to visit all recommended POIs with the limited resources available, such as the time the user can spend. To cope with unachievable goals that could leave the other goals without a solution, CYTRIP selects a set of feasible POIs prior to the planning process; the planning problem is then created for the selected POIs and fed to the planner, and the solution returned is parsed into a multi-day itinerary displayed to the user on a map. The proposed system is implemented as a web application built with PHP on the CodeIgniter Web Framework. To evaluate the system, an online experiment was conducted.
The results of the online experiment show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary tailored to users' preferences and bound by their constraints, such as location or time. The contributors also found CYTRIP a useful tool for collecting POIs from the crowd and planning a multi-day trip.
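
One concrete step described above is the pre-selection of a feasible POI subset before the PDDL planner runs. The abstract does not specify the selection logic, so the sketch below is an assumed greedy rule over hypothetical fields (crowd votes and visit durations): keep the most-voted POIs that still fit the time budget.

```python
# Greedy pre-selection of POIs before planning: keep the most-voted POIs
# that still fit within the user's total time budget. The "votes" and
# "duration_h" fields and the greedy rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    votes: int          # crowd votes in the shared workspace
    duration_h: float   # estimated visit time, hours

def select_feasible(pois, budget_h):
    chosen, used = [], 0.0
    for poi in sorted(pois, key=lambda p: p.votes, reverse=True):
        if used + poi.duration_h <= budget_h:
            chosen.append(poi)
            used += poi.duration_h
    return chosen

pois = [POI("museum", 12, 2.0), POI("old town", 9, 3.0),
        POI("viewpoint", 7, 1.0), POI("aquarium", 4, 2.5)]
print([p.name for p in select_feasible(pois, budget_h=6.0)])
# -> ['museum', 'old town', 'viewpoint']
```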

The Present State and Solutions for Archival Arrangement and Description of National Archives & Records Service of Korea (국가기록원의 기록물 정리기술의 현황과 개선방안)

  • Yoon, Ju-Bom
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.4 no.2
    • /
    • pp.118-162
    • /
    • 2004
  • Archival arrangement and description play an important role in records control and reference service. The National Archives of Korea has made efforts in this area, but in both theory and practice there remain differences and problems compared with archivally advanced countries. The most serious theoretical gap is that functional classification, maintenance of original order, and multi-level description are not reflected in practice: records are arranged on shelves in registration order, volume by volume, like an arrangement of books. There are also problems in the control of agency histories and indexes, which inconvenience users. To suggest improvements, this study reviews the meaning and importance of arrangement and description, the current situation and problems at the National Archives, description guidelines from other countries, and an example application of ISAD(G). The paper consists of eight chapters: chapter 1 is the introduction and chapter 8 the conclusion. Chapter 2 covers the meaning and importance of arrangement and description. Chapter 3 explains the GOVT system currently in use and its description-element categories, along with the situation and problems of arrangement and description at the National Archives. Chapter 4 reviews description guidelines from archives in the United States, England, and Australia: 1) NARA's Lifecycle Data Requirements Guide, including how a single title element is described among the description fields; 2) the Public Record Office's National Archives Cataloguing Guidelines Introduction, the source of the "PROCAT" system and its seven-step description procedure; and 3) the Commonwealth Record Series (CRS) of the National Archives of Australia, including the registration and description procedures for the CRS system. Chapter 5 presents an example applying ISAD(G) to records produced by the Appeals Commission in the Ministry of Government Administration. Chapters 6 and 7 discuss the problems identified in applying ISAD(G), including the naming of records by the producing section in each institution, the lack of description-field categories, classification by kind or form, reference or identifier numbers, the absence of detailed description rules, functional classification, multi-level description, input formats, shelf arrangement, and authority control, together with plans for improvement. The best way to improve arrangement and description at the National Archives is to examine the standards, guidelines, and manuals of archives in advanced countries, so further research and study in this area are needed in the academic field.

Analyses of the Indispensable Indices in Evaluating Gamma Knife Radiosurgery Treatment Plans (감마나이프 방사선수술 치료계획의 평가에 필수불가결한 지표들의 분석)

  • Hur, Beong Ik
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.5
    • /
    • pp.303-312
    • /
    • 2017
  • The central goals of Gamma Knife radiosurgery (GKRS) are to maximize the conformity of the prescription isodose surface to the target and to minimize the radiation effect on the normal tissue surrounding the target volume. Various indices relate to the quality of a treatment plan, such as the conformity index, coverage, selectivity, beam-on time, gradient index (GI), and conformity/gradient index (CGI). Among these, the conformity index, GI, and CGI should always be checked when evaluating a plan, and the GI and CGI, which relate to complications in healthy normal tissue, are even more indispensable than the conformity index. The author therefore calculated and statistically analyzed the CGI, a newly defined conformity/gradient index, as well as the widely used GI, using the treatment planning system Leksell GammaPlan (LGP) and the verification method Variable Ellipsoid Modeling Technique (VEMT). Ten patients with intracranial lesions treated by GKRS were included in the study. The indices were computed from LGP and VEMT using only four parameters: the prescription isodose volume, the volume receiving more than 30% of the prescription dose, the target volume, and the volume enclosed by half the prescription isodose. All data were analyzed by paired t-test, a statistical method for comparing two measurement techniques. No statistically significant difference in GI across the 10 cases was observed between LGP and VEMT; differences in GI ranged from -0.14 to 0.01. The newly defined gradient index calculated by the two methods was not statistically different either, nor was the prescription isodose volume. The CGI, as an index for selecting the best treatment plan, likewise showed no statistically significant difference; differences in CGI ranged from -4 to 3. The newly defined conformity/gradient index for GKRS was similarly evaluated through statistical analysis as a metric for plan evaluation. These analyses demonstrate that VEMT is in excellent agreement with LGP for the GI, the new gradient index, the CGI, and the new CGI when evaluating GKRS plans. Given this fast and easy evaluation through LGP and VEMT, the author hopes that the CGI and the newly defined CGI, as well as the gradient indices, will be widely used.
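
For reference, standard plan-quality indices of this kind can be computed directly from the volumes mentioned above. The sketch below implements the widely used Paddick conformity index and gradient index; the CGI form follows Wagner et al.'s published definition (the average of a conformity score and a gradient score based on effective sphere radii), which is an assumption here since the abstract does not spell out the exact formula, and the example volumes are hypothetical.

```python
# Standard plan-quality indices from the plan volumes (all in cm^3):
#   tv       target volume
#   piv      prescription isodose volume
#   tv_piv   target volume covered by the prescription isodose
#   piv_half volume enclosed by half the prescription isodose
# Paddick CI and GI are standard; the CGI form follows Wagner et al.
# (conformity and gradient scores averaged) and is an assumption here.
import math

def eff_radius_cm(volume_cc):
    """Radius of a sphere with the given volume."""
    return (3.0 * volume_cc / (4.0 * math.pi)) ** (1.0 / 3.0)

def indices(tv, piv, tv_piv, piv_half):
    coverage = tv_piv / tv
    selectivity = tv_piv / piv
    paddick_ci = tv_piv ** 2 / (tv * piv)
    gi = piv_half / piv                       # Paddick gradient index
    cgi_c = 100.0 * tv / piv                  # conformity score
    cgi_g = 100.0 - 100.0 * ((eff_radius_cm(piv_half)
                              - eff_radius_cm(piv)) - 0.3)
    return {"coverage": coverage, "selectivity": selectivity,
            "paddick_ci": paddick_ci, "GI": gi,
            "CGI": 0.5 * (cgi_c + cgi_g)}

# Hypothetical example: 1.0 cc target, 1.2 cc prescription isodose volume
# fully covering it, 3.2 cc half-prescription volume (GI ~ 2.7).
print(indices(tv=1.0, piv=1.2, tv_piv=1.0, piv_half=3.2))
```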