• Title/Summary/Keyword: experimental validation

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. Some researchers have proposed rough set analysis as a suitable tool for market timing because, by means of a control function, it generates no trade signal when the market pattern is uncertain. Numeric data must be discretized before rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same categorical value. In general, four discretization methods are used in rough set analysis: equal frequency scaling, expert knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first derives candidate cuts by naïve scaling of the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various discretization methods affect trading performance under rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index of 200 stocks selected by criteria on liquidity and their status in the corresponding industry, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert knowledge-based discretization is the most profitable for the validation sample. Moreover, expert knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert knowledge-based discretization produced more profitable rules than C4.5.
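To make the equal frequency scaling concrete, here is a minimal Python sketch (the indicator values, function names, and four-interval choice are illustrative, not from the paper): cuts are placed at interior quantiles so that each interval receives roughly the same number of samples.

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Choose cut points so that roughly the same number of samples
    falls into each of n_intervals intervals (equal frequency scaling)."""
    interior = np.linspace(0, 1, n_intervals + 1)[1:-1]  # interior quantiles
    return np.quantile(values, interior)

def discretize(values, cuts):
    """Map each numeric value to the categorical index of its interval."""
    return np.digitize(values, cuts)

# Example: turn 660 days of a numeric technical indicator into 4 categories.
indicator = np.random.uniform(0, 100, 660)
cuts = equal_frequency_cuts(indicator, 4)
categories = discretize(indicator, cuts)  # values in {0, 1, 2, 3}, ~165 each
```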

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items a customer is expected to purchase in the future based on his or her previous purchase behavior, and has served as a tool for realizing one-to-one personalization in e-commerce. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, generate the recommendation list using a single criterion, the 'overall rating'. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect customer feedback in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints, and they are easy to handle and analyze because they are quantitative. Recommendation using multicriteria ratings, however, has the limitation that it may omit detailed information on a user's preference because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system which selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, the system is designed to use traditional CF with overall ratings for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset for POI (point-of-interest) recommendation. Personalized POI recommendation is attracting more attention as location-based services such as Yelp and Foursquare grow in popularity. The dataset was collected from university students via a Web-based online survey system, through which we gathered overall ratings as well as per-criterion ratings for 48 POIs located near K university in Seoul, South Korea. The criteria were 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 (80%) were used as the training dataset and the remaining 10 (20%) as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models: traditional CF and CF with multicriteria ratings. Performance was evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N, which represents the percentage of truly high overall ratings among the N items the model predicted to be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, which contradicts the results of most previous studies. This result supports the premise of our study that people have two different types of preference schemes, holistic and composite. Besides MAE, the proposed system outperformed all comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. Paired-samples t-tests showed that, in terms of average MAE, the proposed system outperformed traditional CF at the 10% significance level and multicriteria CF at the 1% significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
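The two evaluation metrics and the selective routing idea can be sketched in a few lines of Python; the 'relevant' rating cutoff of 4.0 and the is_holistic classifier are assumptions for illustration, since the abstract does not give the paper's exact user-classification rule.

```python
import numpy as np

def mae(y_true, y_pred):
    """Average MAE used to compare the recommenders."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def precision_in_top_n(true_overall, pred_overall, n, relevant=4.0):
    """Share of truly high overall ratings among the n items the model
    ranked highest for a user (the 'relevant >= 4.0' cutoff is assumed)."""
    top_n = np.argsort(np.asarray(pred_overall))[::-1][:n]
    return float(np.mean(np.asarray(true_overall)[top_n] >= relevant))

def hybrid_predict(user, item, is_holistic, overall_cf, multicriteria_cf):
    """Selective hybrid: overall-rating CF for holistic users,
    multicriteria CF for composite users."""
    if is_holistic(user):
        return overall_cf(user, item)
    return multicriteria_cf(user, item)
```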

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology is emerging as a key element in the creation of new business models and the provision of user-friendly services when combined with big data. Data accumulated from Internet-of-Things (IoT) devices are used in many ways to build convenience-oriented smart systems, since they enable customized intelligent services through analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or building passenger-flow control systems to enhance the convenience of citizens and commuters in congested public transportation such as subways and urban railways, both the ease of securing real-time service data and the stability of security must be considered comprehensively. However, previous studies that utilize image data suffer degraded object detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not require identification of individuals, and can therefore be effectively utilized to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily; the temperature data measured by the sensors are transmitted in real time. The experimental environment for real-time data collection was established at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances with high passenger traffic, and temperature changes were measured for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values were calculated per unit of time; this maximizes the signal of movement within the detection area. In addition, the data were scaled up by a factor of 10 to reflect temperature differences between areas more sensitively: for example, a sensor reading of 28.5℃ at a given time was analyzed as the value 285. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), which combines a CNN, with its superior performance in image classification, and an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas.
We validated the proposed model through performance comparisons with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In the experiments, the proposed CNN-LSTM hybrid model showed the best predictive performance among MLP, LSTM, and RNN-LSTM. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response, are expected to be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances and over a short period, so verification of the approach in other environments remains a limitation. In the future, the reliability of the proposed model is expected to improve if experimental data are collected in various environments or if training data are supplemented with measurements from additional sensors.
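A minimal Keras sketch of a CNN-LSTM of the kind described is shown below; the window length, filter counts, and layer sizes are our assumptions, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS = 10  # assumed window length; the abstract does not state this value

# Each sample is a sequence of preprocessed 4x4 "frames": per-cell temperature
# minus its reference value, scaled by 10 (e.g. 28.5 C recorded as 285).
inputs = layers.Input(shape=(TIMESTEPS, 4, 4, 1))

# CNN applied to every frame in the sequence to extract spatial features
x = layers.TimeDistributed(
    layers.Conv2D(16, (2, 2), padding="same", activation="relu"))(inputs)
x = layers.TimeDistributed(layers.Flatten())(x)

# LSTM over the per-frame feature vectors to capture temporal dynamics
x = layers.LSTM(32)(x)
outputs = layers.Dense(1, activation="relu")(x)  # predicted passing-person count

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```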

Verification of Gated Radiation Therapy: Dosimetric Impact of Residual Motion (여닫이형 방사선 치료의 검증: 잔여 움직임의 선량적 영향)

  • Yeo, Inhwan;Jung, Jae Won
    • Progress in Medical Physics
    • /
    • v.25 no.3
    • /
    • pp.128-138
    • /
    • 2014
  • In gated radiation therapy (gRT), because of residual motion, beam delivery is intended to irradiate not only the true extent of disease but also neighboring normal tissues. The delivery should cover the true extent (i.e., the clinical target volume, CTV) as a minimum, although the target moves during dose delivery. The objectives of our study were to validate whether the intended dose is actually delivered to the true target in gRT, and to quantitatively understand the trend of dose delivered to it and to neighboring normal tissues as the gating window (GW), motion amplitude (MA), and CTV size change. To fulfill these objectives, experimental and computational studies were designed and performed. A custom-made phantom with rectangle- and pyramid-shaped targets (CTVs) on a moving platform was scanned for four-dimensional imaging. Various GWs were selected and image integration was performed to generate planning targets (internal target volume, ITV) that included the CTVs and internal margins (IM). Planning was done conventionally for the rectangular target, and IMRT optimization was done for the pyramid target. Dose evaluation was then performed, through measurements and computational modeling of dose delivery under motion, on a diode array aligned perpendicularly to the gated beams. This study quantitatively demonstrated and analytically interpreted the impact of residual motion, including penumbral broadening for both targets, perturbed but secure dose coverage of the CTV, and significant doses delivered to the neighboring normal tissues. Dose-volume histogram analyses also demonstrated and interpreted the trend of dose coverage: for the ITV, it increased as GW or MA decreased or CTV size increased; for the IM, it increased as GW or MA decreased; for the neighboring normal tissue, the opposite trend to that of the IM was observed. This study provides a clear understanding of the impact of residual motion and proves that, if breathing is reproducible, gRT is secure despite discontinuous delivery and target motion. The procedures and computational model can be used for commissioning, routine quality assurance, and patient-specific validation of gRT. More work needs to be done on patient-specific dose reconstruction on CT images.
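For readers unfamiliar with the dose-volume histogram analysis used above, a minimal Python sketch follows; the prescription dose and synthetic voxel doses are illustrative only, not data from the study.

```python
import numpy as np

def cumulative_dvh(voxel_doses, bin_width=0.05):
    """Cumulative dose-volume histogram: fraction of the structure's volume
    receiving at least each dose level (doses in Gy)."""
    d = np.asarray(voxel_doses, dtype=float)
    levels = np.arange(0.0, d.max() + bin_width, bin_width)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

# Illustrative use: dose coverage of a CTV under residual motion.
prescription = 2.0                                 # Gy, assumed
ctv_doses = np.random.normal(1.98, 0.04, 10_000)   # synthetic voxel doses
levels, vf = cumulative_dvh(ctv_doses)
v95 = (ctv_doses >= 0.95 * prescription).mean()    # e.g. CTV V95 coverage
print(f"CTV V95 = {v95:.3f}")
```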

Application of The Semi-Distributed Hydrological Model(TOPMODEL) for Prediction of Discharge at the Deciduous and Coniferous Forest Catchments in Gwangneung, Gyeonggi-do, Republic of Korea (경기도(京畿道) 광릉(光陵)의 활엽수림(闊葉樹林)과 침엽수림(針葉樹林) 유역(流域)의 유출량(流出量) 산정(算定)을 위한 준분포형(準分布型) 수문모형(水文模型)(TOPMODEL)의 적용(適用))

  • Kim, Kyongha;Jeong, Yongho;Park, Jaehyeon
    • Journal of Korean Society of Forest Science
    • /
    • v.90 no.2
    • /
    • pp.197-209
    • /
    • 2001
  • TOPMODEL, a semi-distributed hydrological model, is frequently applied to predict the amount of discharge, main flow pathways, and water quality in forested catchments, especially in a spatial dimension. TOPMODEL is a conceptual model, not a physical one. Its main concept is built on the topographic index and soil transmissivity, two components that can be used to predict the surface and subsurface contributing areas. This study was conducted to validate the applicability of TOPMODEL to small forested catchments in Korea. The experimental area is located in the Gwangneung forest operated by the Korea Forest Research Institute, Gyeonggi-do, near the Seoul metropolitan area. The two study catchments in this area have been in operation since 1979: one is a natural mature deciduous forest (22.0 ha) about 80 years old, and the other is a planted young coniferous forest (13.6 ha) about 22 years old. The data collected during two events in July 1995 and June 2000 at the mature deciduous forest, and three events in July 1995, July 1999, and August 2000 at the young coniferous forest, were used as the observed data sets. The topographic index was calculated using a 10 m × 10 m resolution raster digital elevation map (DEM). The topographic index ranged from 2.6 to 11.1 at the deciduous and from 2.7 to 16.0 at the coniferous catchment. Optimization using the forecasting efficiency as the objective function showed that the model parameter m and the mean catchment value of surface saturated transmissivity, lnT0, had high sensitivity. The optimized values of m and lnT0 were 0.034 and 0.038; 8.672 and 9.475 at the deciduous catchment, and 0.031, 0.032, and 0.033; 5.969, 7.129, and 7.575 at the coniferous catchment, respectively. The forecasting efficiencies obtained from simulation with the optimized parameters were comparatively high: 0.958 and 0.909 at the deciduous and 0.825, 0.922, and 0.961 at the coniferous catchment. The observed and simulated hyeto-hydrographs showed that the lag time to peak coincided well. Although the total runoff and peak flow of some events showed discrepancies between observed and simulated output, TOPMODEL could overall predict the hydrologic output with an estimation error of less than 10%. Therefore, TOPMODEL is a useful tool for predicting runoff at ungauged forested catchments in Korea.
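The topographic index at the heart of TOPMODEL is ln(a/tanβ). The following Python sketch computes it for given upslope areas and slopes; deriving the upslope areas from a real DEM requires a flow-accumulation algorithm, which is not shown, and the example values are illustrative.

```python
import numpy as np

def topographic_index(upslope_area, slope_radians, cell_size=10.0):
    """TOPMODEL topographic index ln(a / tan(beta)): a is the upslope
    contributing area per unit contour length, beta the local slope.
    cell_size = 10.0 matches the 10 m x 10 m DEM used in the study."""
    a = np.asarray(upslope_area, dtype=float) / cell_size
    tan_beta = np.maximum(np.tan(slope_radians), 1e-6)  # guard flat cells
    return np.log(a / tan_beta)

# Toy example: gentler, better-drained-to cells get larger (wetter) indices.
area = np.array([100.0, 400.0, 2500.0])   # upslope area per cell, m^2
slope = np.radians([15.0, 8.0, 2.0])      # local slope angles
print(topographic_index(area, slope))
```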

Establishment of Biotin Analysis by LC-MS/MS Method in Infant Milk Formulas (LC-MS/MS를 이용한 조제유류 중 비오틴 함량 분석법 연구)

  • Shin, Yong Woon;Lee, Hwa Jung;Ham, Hyeon Suk;Shin, Sung Cheol;Kang, Yoon Jung;Hwang, Kyung Mi;Kwon, Yong Kwan;Seo, Il Won;Oh, Jae Myoung;Koo, Yong Eui
    • Journal of Food Hygiene and Safety
    • /
    • v.31 no.5
    • /
    • pp.327-334
    • /
    • 2016
  • This study was conducted to establish a standard method for determining biotin content in milk formulas. To optimize the method, we compared several conditions for liquid extraction, purification, and instrumental measurement using spiked samples and a certified reference material (NIST SRM 1849a) as test materials. The LC-MS/MS method for biotin was established using a C18 column and a binary gradient mobile phase of 0.1% formic acid in acetonitrile and 0.1% formic acid in water. Product-ion traces at m/z 245.1 → 227.1 and 166.1 were used for quantitative analysis of biotin. The linearity was over R² = 0.999 in the range of 5-60 ng/mL. For purification, chloroform was used as a solvent to eliminate lipids in milk formula. The detection limit and quantification limit were 0.10 and 0.31 ng/mL, respectively. The accuracy and precision of the LC-MS/MS method determined with the CRM were 103% and 2.5%, respectively. The optimized method was applied to sample analysis to verify its reliability. All tested milk formulas contained acceptable biotin contents relative to their component specifications and nutrition labeling standards. Standard operating procedures were prepared for biotin to provide experimental information and to strengthen nutrient management in milk formulas.
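A minimal sketch of the calibration checks reported above (linearity, detection limit, quantification limit); the peak areas are made-up numbers, and the 3.3σ/slope and 10σ/slope convention is one common choice, not necessarily the paper's.

```python
import numpy as np

# Illustrative calibration check for the 5-60 ng/mL biotin range.
conc = np.array([5.0, 10.0, 20.0, 40.0, 60.0])             # standards, ng/mL
area = np.array([1.02e4, 2.05e4, 4.02e4, 8.10e4, 1.21e5])  # peak areas (made up)

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot       # should exceed 0.999 per the abstract

# One common convention for LOD/LOQ from calibration statistics:
sigma = np.sqrt(ss_res / (len(conc) - 2))  # residual standard deviation
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"R^2 = {r_squared:.5f}, LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```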

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is of great importance to financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain predictions more accurate than those of individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and it searches by maintaining a population of solutions from which better solutions are created, rather than making incremental changes to a single solution. The initial population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; the solutions, coded as strings, are evaluated by the fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset, which is then used as the input data of the bagging model. The chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150; the crossover rate and mutation rate were set to 0.7 and 0.1, respectively. We used the prediction accuracy of the model as the fitness function of the GA: the SVM model is trained on the training set using the selected instance subset, and its prediction accuracy on the test set is used as the fitness value in order to avoid overfitting. In the second phase, we used the optimal instance subset selected in the first phase as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real dataset of Korean companies. The research data contain 1,832 externally non-audited firms, of which 916 filed for bankruptcy and 916 did not. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables.
We separated the whole dataset into three subsets: training, test, and validation sets. We compared the proposed model with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance selection-based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
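A compact Python sketch of the two-phase procedure follows, using the GA settings stated above (population 100, 150 generations, crossover 0.7, mutation 0.1). The tournament selection, one-point crossover, and per-bit mutation operators are our illustrative choices, and scikit-learn's BaggingClassifier stands in for the SVM bagging with majority voting; this is a sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)

def fitness(mask, X_tr, y_tr, X_te, y_te):
    """Accuracy on the test set of an SVM trained only on the instances
    selected by the binary chromosome `mask` (to avoid overfitting)."""
    sel = mask.astype(bool)
    if sel.sum() < 2 or len(np.unique(y_tr[sel])) < 2:
        return 0.0
    return SVC().fit(X_tr[sel], y_tr[sel]).score(X_te, y_te)

def ga_select(X_tr, y_tr, X_te, y_te, pop_size=100, gens=150,
              cx_rate=0.7, mut_rate=0.1):
    """Phase 1: GA-based instance selection (slow: one SVM fit per
    chromosome per generation)."""
    n = len(X_tr)
    pop = rng.integers(0, 2, (pop_size, n))
    for _ in range(gens):
        scores = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in pop])
        pairs = rng.integers(0, pop_size, (pop_size, 2))       # tournaments
        winners = np.where(scores[pairs[:, 0]] >= scores[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        children = pop[winners].copy()
        for i in range(0, pop_size - 1, 2):                    # crossover
            if rng.random() < cx_rate:
                cut = rng.integers(1, n)
                tail = children[i, cut:].copy()
                children[i, cut:] = children[i + 1, cut:]
                children[i + 1, cut:] = tail
        flip = rng.random(children.shape) < mut_rate           # mutation
        pop = np.where(flip, 1 - children, children)
    scores = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in pop])
    return pop[scores.argmax()].astype(bool)

# Phase 2: bagging of SVM base learners on the selected instances;
# BaggingClassifier combines the base predictions by majority voting.
# selected = ga_select(X_train, y_train, X_test, y_test)
# model = BaggingClassifier(SVC(), n_estimators=10).fit(
#     X_train[selected], y_train[selected])
```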

Investigation of Automated Neonatal Hearing Screening for Early Detection of Childhood Hearing Impairment (소아 난청의 조기진단을 위한 신생아 청력 선별검사에 대한 평가)

  • Seo, Jeong Il;Yoo, Si Uk;Gong, Sung Hyeon;Hwang, Gwang Su;Lee, Hyeon Jung;Kim, Joong Pyo;Choi, Hyeon;Lee, Bo Young;Mok, Ji Sun
    • Clinical and Experimental Pediatrics
    • /
    • v.48 no.7
    • /
    • pp.706-710
    • /
    • 2005
  • Purpose : Early diagnosis of congenital hearing loss through neonatal hearing screening minimizes language deficits. This research aimed to identify the frequency of congenital hearing loss in infants through neonatal hearing screening, with the goal of communicating the importance of hearing tests for newborns. Methods : From May 20, 2003 to May 19, 2004, infants underwent the Automated Auditory Brainstem Response test within one month of birth, using a 35 dB stimulus. Infants who passed the first round of hearing tests were classified into the 'pass' group, whereas those who did not were classified into the 'refer' group. Infants who did not pass the test conducted within one month of birth were retested one month later; those classified as 'refer' on the retest were referred to a hearing loss clinic for diagnostic confirmation of hearing loss. Results : There was no difference between the 'pass' and 'refer' groups in terms of mode of delivery, birth weight, or gestational age. In the first test, a total of 45 infants were classified into the 'refer' group. Six of the 35 infants who underwent retesting (17%) did not pass, and all six were diagnosed with congenital hearing loss. This corresponds to 0.35% (3.5 per 1,000) of the 1,718 subjects. Conclusion : In our study, congenital hearing loss was considerably more frequent than congenital metabolic disorders. Accordingly, newborn infants are strongly recommended to undergo neonatal hearing screening.

Design and Optimization of Pilot-Scale Bunsen Process in Sulfur-Iodine (SI) Cycle for Hydrogen Production (수소 생산을 위한 Sulfur-Iodine Cycle 분젠반응의 Pilot-Scale 공정 모델 개발 및 공정 최적화)

  • Park, Junkyu;Nam, KiJeon;Heo, SungKu;Lee, Jonggyu;Lee, In-Beum;Yoo, ChangKyoo
    • Korean Chemical Engineering Research
    • /
    • v.58 no.2
    • /
    • pp.235-247
    • /
    • 2020
  • A simulation study and validation of a 50 L/hr pilot-scale Bunsen process were carried out in order to investigate its thermodynamic parameters, suitable reactor type, separator configuration, and the optimal conditions of the reactor and separator. The sulfur-iodine (SI) cycle is a thermochemical process that uses iodine and sulfur compounds to produce hydrogen, with the decomposition of water as the net reaction. Understanding the phase separation and reaction of the Bunsen process is crucial since it acts as the intermediate step among the cycle's three reactions. The electrolyte Non-Random Two-Liquid (NRTL) model was implemented in the simulation as the thermodynamic model. The simulation results were validated against the thermodynamic parameters and the 50 L/hr pilot-scale experimental data. The SO2 conversions of a PFR and a CSTR were compared while varying the temperature and reactor volume in order to identify the more suitable reactor type. Impurities in the H2SO4 phase and the HIx phase were investigated for a three-phase separator (vapor-liquid-liquid) and for two two-phase separators (vapor-liquid and liquid-liquid) in order to select the separation configuration with better performance. Process optimization of the reactor and phase separator was carried out to find the operating and feed conditions that reach the maximum SO2 conversion and the minimum H2SO4 impurities in the HIx phase. In the reactor optimization, a maximum SO2 conversion of 98% was obtained with fixed iodine and water inlet flow rates when the diameter and length of the PFR were 0.20 m and 7.6 m. The inlet water and iodine flow rates were reduced by 17% and 22% to reach the maximum 10% SO2 conversion with fixed temperature and PFR size (diameter: 3/8", length: 3 m). When the temperature (121℃) and PFR size (diameter: 0.2 m, length: 7.6 m) were applied to the feed composition optimization, the inlet water and iodine flow rates were likewise reduced by 17% and 22% to reach the maximum 10% SO2 conversion.
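As a toy illustration of why a PFR can reach higher conversion than a CSTR at equal residence time (consistent with the reactor comparison above), here is a sketch assuming simple first-order kinetics; the paper's actual Bunsen kinetics and electrolyte NRTL thermodynamics are far richer and are not reproduced, and both constants below are placeholders.

```python
import numpy as np

k = 0.5     # assumed first-order rate constant, 1/min
tau = 10.0  # assumed residence time, min

# PFR: plug-flow balance dC/d(tau) = -kC gives X = 1 - exp(-k*tau)
x_pfr = 1.0 - np.exp(-k * tau)

# CSTR: steady-state mixing balance gives X = k*tau / (1 + k*tau)
x_cstr = k * tau / (1.0 + k * tau)

# For positive-order kinetics the PFR always wins at equal tau.
print(f"PFR conversion: {x_pfr:.3f}, CSTR conversion: {x_cstr:.3f}")
```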

A Computer Simulation for Small Animal Iodine-125 SPECT Development (소동물 Iodine-125 SPECT 개발을 위한 컴퓨터 시뮬레이션)

  • Jung, Jin-Ho;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jeong, Myung-Hwan;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.1
    • /
    • pp.74-84
    • /
    • 2004
  • Purpose: Since I-125 emits low-energy (27-35 keV) radiation, a thinner crystal and collimator can be employed, which is favorable for obtaining high-quality images. The purpose of this study was to derive optimized parameters for I-125 SPECT using a new simulation tool, GATE (Geant4 Application for Tomographic Emission). Materials and Methods: To validate the simulation method, the gamma camera developed by Weisenberger et al. was modeled. A NaI(Tl) plate crystal was used, and its thickness was determined by calculating detection efficiency. Spatial resolution and sensitivity curves were estimated by varying the design parameters of parallel-hole and pinhole collimators. The performance of an I-125 SPECT system equipped with the optimal collimator was also estimated. Results: In the validation study, simulations were found to agree well with experimental measurements in spatial resolution (4%) and sensitivity (3%). To achieve 98% gamma ray detection efficiency, the NaI(Tl) thickness was determined to be 1 mm. Hole diameter (mm), length (mm), and shape were chosen as 0.2:5:square for the high resolution (HR) and 0.5:10:hexagonal for the general purpose (GP) parallel-hole collimator, respectively. The hole diameter, channel height, and acceptance angle of the pinhole (PH) collimator were determined to be 0.25 mm, 0.1 mm, and 90 degrees. The spatial resolutions of reconstructed images for I-125 SPECT employing HR:GP:PH were 1.2:1.7:0.8 mm, and the sensitivities of HR:GP:PH were 39.7:71.9:5.5 cps/MBq. Conclusion: The optimal crystal and collimator parameters for I-125 imaging were derived by simulation using GATE. The results indicate that imaging with excellent resolution and sensitivity is feasible using I-125 SPECT.
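The textbook geometric formulas for parallel-hole collimator resolution and efficiency, which Monte Carlo studies of this kind are often checked against, can be sketched as follows; the numeric inputs are placeholders in the spirit of the HR collimator above, not the paper's values.

```python
def parallel_hole_resolution(d, L, b, mu):
    """Geometric resolution (FWHM): R = d * (L_eff + b) / L_eff, with
    L_eff = L - 2/mu correcting for septal penetration. d = hole diameter,
    L = hole length, b = source-to-collimator distance (all mm);
    mu = septal attenuation coefficient (1/mm) at the photon energy."""
    L_eff = L - 2.0 / mu
    return d * (L_eff + b) / L_eff

def parallel_hole_efficiency(d, L, t, mu, K=0.26):
    """Geometric efficiency g ~ (K*d/L_eff)^2 * (d/(d+t))^2;
    K ~ 0.26 for hexagonal holes, t = septal thickness (mm)."""
    L_eff = L - 2.0 / mu
    return (K * d / L_eff) ** 2 * (d / (d + t)) ** 2

# Placeholder numbers: 0.2 mm holes, 5 mm long, source 20 mm away.
# At 27-35 keV lead septa attenuate very strongly (mu on the order of
# tens per mm), so L_eff is close to L.
print(parallel_hole_resolution(d=0.2, L=5.0, b=20.0, mu=30.0))
print(parallel_hole_efficiency(d=0.2, L=5.0, t=0.1, mu=30.0))
```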