• Title/Summary/Keyword: Sequential analysis.


List-event Data Resampling for Quantitative Improvement of PET Image (PET 영상의 정량적 개선을 위한 리스트-이벤트 데이터 재추출)

  • Woo, Sang-Keun;Ju, Jung Woo;Kim, Ji Min;Kang, Joo Hyun;Lim, Sang Moo;Kim, Kyeong Min
    • Progress in Medical Physics, v.23 no.4, pp.309-316, 2012
  • Multimodal imaging techniques have developed rapidly to improve diagnosis and the evaluation of therapeutic effects. Despite integrated hardware, registration accuracy is degraded by discrepancies between the multimodal images and by insufficient counts arising from the different acquisition methods of each modality. The purpose of this study was to improve the PET image by event-data resampling through analysis of the data format, noise, and statistical properties of small-animal PET list data. Inveon PET list-mode data were acquired as a 10 min static scan beginning 60 min after injection of 37 MBq/0.1 ml $^{18}F$-FDG via the tail vein. The list-mode data format consisted of 48-bit packets, divided into an 8-bit header and a 40-bit payload. Realigned sinograms were generated from event data resampled from the original list-mode stream by adjusting LOR locations, by simple event magnification, and by nonparametric bootstrap. Sinograms were reconstructed with the OSEM 2D algorithm using 16 subsets and 4 iterations. Prompt coincidences were 13,940,707 counts as read from the PET data header and 13,936,687 counts as measured from analysis of the list-event data. With simple event magnification, the maximum increased from 1.336 to 1.743, but noise also increased. Resampling efficiency was assessed from images de-noised and improved by shift operations on the payload values of sequential packets. The bootstrap resampling technique yields PET images with improved noise and statistical properties. The list-event data resampling method should aid in improving registration accuracy and early-diagnosis efficiency.
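The nonparametric bootstrap step described above can be sketched in a few lines: events are redrawn with replacement from the list-mode stream and re-histogrammed into a sinogram, so that replicate averaging improves the statistical properties. This is a minimal illustration on synthetic data, not the Inveon packet format; the LOR indices and array sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical list-mode stream: one LOR index per coincidence event
# (real Inveon packets carry a 40-bit payload; a plain int suffices here).
n_lors = 1000
events = rng.integers(0, n_lors, size=50_000)

def bootstrap_sinograms(events, n_lors, n_replicates=10, rng=rng):
    """Nonparametric bootstrap: draw events with replacement and histogram
    each replicate into a flattened sinogram of per-LOR counts."""
    sinos = np.empty((n_replicates, n_lors))
    for i in range(n_replicates):
        sample = rng.choice(events, size=events.size, replace=True)
        sinos[i] = np.bincount(sample, minlength=n_lors)
    return sinos

sinos = bootstrap_sinograms(events, n_lors)
mean_sino = sinos.mean(axis=0)  # replicate average: smoother count profile
```

Each replicate preserves the total event count, so the averaged sinogram keeps the quantitative scale while suppressing per-LOR noise.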

Integrated Rotary Genetic Analysis Microsystem for Influenza A Virus Detection

  • Jung, Jae Hwan;Park, Byung Hyun;Choi, Seok Jin;Seo, Tae Seok
    • Proceedings of the Korean Vacuum Society Conference, 2013.08a, pp.88-89, 2013
  • A variety of influenza A viruses from animal hosts are continuously prevalent throughout the world, causing human epidemics that result in millions of infections and enormous industrial and economic damage. Early diagnosis of such pathogens is therefore of paramount importance for biomedical examination and public-health screening. To address this issue, we propose a fully integrated rotary genetic analysis system, called the Rotary Genetic Analyzer, for rapid on-site detection of influenza A viruses. The Rotary Genetic Analyzer is made up of four parts: a disposable microchip, a servo motor for precise, high-rate spinning of the chip, thermal blocks for temperature control, and a miniaturized optical fluorescence detector, as shown in Fig. 1. A thermal block made from duralumin is integrated with a film heater at the bottom and a resistance temperature detector (RTD) in the middle. For efficient RT-PCR, three thermal blocks are placed on the rotary stage, the temperature of each corresponding to one thermal-cycling step: $95^{\circ}C$ (denaturation), $58^{\circ}C$ (annealing), and $72^{\circ}C$ (extension). Rotary RT-PCR was performed to amplify the target gene, monitored by an optical fluorescence detector above the extension block. The disposable microdevice (10 cm diameter) consists of a solid-phase-extraction-based sample pretreatment unit, a bead chamber, and a 4 ${\mu}L$ PCR chamber, as shown in Fig. 2. The microchip is fabricated from a patterned polycarbonate (PC) sheet of 1 mm thickness and a PC film of 130 ${\mu}m$ thickness, the layers thermally bonded at $138^{\circ}C$ using acetone vapour. Silica-treated glass microbeads of 150-212 ${\mu}m$ diameter are introduced into the sample pretreatment chambers and held in place by a weir structure to construct the solid-phase extraction system. Fig. 3 shows strobed images of the sequential loading of three samples.
Three samples were loaded into the reservoirs simultaneously (Fig. 3A); the influenza A H3N2 viral RNA sample was then loaded at 5,000 RPM for 10 sec (Fig. 3B). Washing buffer followed at 5,000 RPM for 5 min (Fig. 3C), and the angular frequency was decreased to 100 RPM for siphon priming of the PCR cocktail into the channel, as shown in Fig. 3D. Finally, the PCR cocktail was loaded into the bead chamber at 2,000 RPM for 10 sec, and the speed was then increased to 5,000 RPM for 1 min to recover as much of the PCR cocktail, containing the RNA template, as possible (Fig. 3E). In this system, the waste from the RNA sample and washing buffer was transported to the waste chamber, which was completely filled through precise volume optimization, so that the PCR cocktail could then be transported to the PCR chamber. Fig. 3F shows the final image of the sample pretreatment: the PCR cocktail containing the RNA template is successfully isolated from the waste. To detect the influenza A H3N2 virus, the purified RNA and PCR cocktail in the PCR chamber were amplified following RNA capture on the proposed microdevice. Fluorescence images at cycles 0 and 40 are shown in Fig. 4A; the drastic increase in fluorescence signal at cycle 40 confirmed the influenza A H3N2 virus. Real-time profiles were successfully obtained using the optical fluorescence detector, as shown in Fig. 4B. Rotary PCR and off-chip PCR were compared with the same amount of influenza A H3N2 virus; the Ct value of Rotary PCR was smaller than that of off-chip PCR, without contamination. The whole process of sample pretreatment and RT-PCR could be accomplished in 30 min on the fully integrated Rotary Genetic Analyzer.
We have thus demonstrated a fully integrated, portable Rotary Genetic Analyzer for detecting influenza A virus gene expression, with 'sample-in-answer-out' capability comprising sample pretreatment, rotary amplification, and optical detection; target gene amplification was monitored in real time on the integrated system.


Key Methodologies to Effective Site-specific Assessment in Contaminated Soils : A Review (오염토양의 효과적 현장조사에 대한 주요 방법론의 검토)

  • Chung, Doug-Young
    • Korean Journal of Soil Science and Fertilizer, v.32 no.4, pp.383-397, 1999
  • For sites to be investigated, the results of such an investigation can be used to determine goals for cleanup, quantify risks, distinguish acceptable from unacceptable risk, and develop cleanup plans that do not cause unnecessary delays in the redevelopment and reuse of the property. To do this, it is essential that an appropriately detailed study of the site be performed to identify the cause, nature, and extent of contamination and the possible threats to the environment or to any people living or working nearby, through the analysis of samples of soil, soil gas, groundwater, surface water, and sediment. The migration pathways of contaminants are also examined during this phase. A key aspect of cost-effective site assessment, one that helps standardize and accelerate the evaluation of contaminated soils, is to provide a simple step-by-step methodology for environmental science and engineering professionals to calculate risk-based, site-specific soil levels for contaminants. Its use may significantly reduce the time needed to complete soil investigations and cleanup actions at some sites, as well as improve the consistency of these actions across the nation. Effective site assessment requires criteria for choosing the type of standard and setting its magnitude; these come from different sources, depending on many factors including the nature of the contamination. A general scheme for site-specific assessment consists of sequential Phases I, II, and III, defined by a work plan and soil screening levels. Phase I is conducted to identify and confirm a site's recognized environmental conditions resulting from past actions. If Phase I identifies potential hazardous substances, Phase II is usually conducted to confirm the absence, or the presence and extent, of contamination; Phase II involves the collection and analysis of samples.
Phase III then remediates the contaminated soils delineated in Phases I and II. Important factors in determining whether an assessment standard is site-specific and suitable are (1) the spatial extent of the sampling and the size of the sample area; (2) the number of samples taken; (3) the strategy for taking samples; and (4) the way the data are analyzed. Although selected methods are recommended, the application of quantitative methods should be directed by users with prior training or experience in the dynamic site investigation process.


Characteristics and outcomes of patients with septic shock who transferred to the emergency department in tertiary referral center: multicenter, retrospective, observational study (상급종합병원 및 종합병원 응급실로 전원된 패혈성 쇼크 환자의 특성과 예후: 다기관 후향적 관찰연구)

  • Kim, Min Gyun;Shin, Tae Gun;Jo, Ik Joon;Kim, Won Young;Ryoo, Seung Mok;Chung, Sung Phil;Beom, Jin Ho;Choi, Sung-Hyuk;Kim, Kyuseok;Jo, You Hwan;Kang, Gu Hyun;Suh, Gil Joon;Shin, Jonghwan;Lim, Tae Ho;Han, Kap Su;Hwang, Sung Yeon;Korean Shock Society (KoSS)
    • Journal of The Korean Society of Emergency Medicine, v.29 no.5, pp.465-473, 2018
  • Objective: We evaluated the clinical characteristics and prognoses of patients with septic shock who were transferred to the emergency department (ED) of a tertiary referral center. Methods: This study used a prospective, multi-center registry of septic shock, with the participation of 11 tertiary referral centers in the Korean Shock Society, between October 2015 and February 2017. We classified patients into a transferred group, who were transferred from other hospitals and met the inclusion criteria upon ED arrival, and a non-transferred group, who presented directly to the ED. The primary outcome was in-hospital mortality. We conducted multiple logistic regression analysis to assess variables related to in-hospital mortality. Results: A total of 2,098 patients were included: 717 in the transferred group and 1,381 in the non-transferred group. The initial Sequential Organ Failure Assessment score was higher in the transferred group than in the non-transferred group (6; interquartile range [IQR], 4-9 vs. 6; IQR, 4-8; P<0.001). Mechanical ventilation (29% vs. 21%, P<0.001) and renal replacement therapy (12% vs. 9%, P=0.034) within 24 hours after ED arrival were applied more frequently in the transferred group. Overall hospital mortality was 22%, with no significant difference between the transferred and non-transferred groups (23% vs. 22%, P=0.820). Multivariable analysis showed an odds ratio for in-hospital mortality of 1.00 (95% confidence interval, 0.78-1.28; P=0.999) for the transferred group compared with the non-transferred group. Conclusion: The transferred group showed higher severity and needed more organ-support procedures than the non-transferred group. However, inter-hospital transfer did not affect in-hospital mortality.
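The multivariable adjustment described above can be illustrated with a small logistic regression sketch. The data are simulated (a hypothetical transfer indicator and SOFA score, with mortality driven by SOFA only, mimicking the registry's null transfer effect), and the model is fit by plain Newton iterations rather than any particular statistics package.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
transferred = rng.integers(0, 2, n).astype(float)  # hypothetical group flag
sofa = rng.poisson(6, n).astype(float)             # hypothetical SOFA score
# Simulated mortality depends on SOFA but not on transfer status
p_true = 1.0 / (1.0 + np.exp(-(-3.0 + 0.25 * sofa)))
died = (rng.random(n) < p_true).astype(float)

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    X1 = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        W = p * (1 - p)                          # IRLS weights
        beta += np.linalg.solve(X1.T @ (W[:, None] * X1), X1.T @ (y - p))
    return beta

beta = fit_logistic(np.column_stack([transferred, sofa]), died)
or_transfer = np.exp(beta[1])  # adjusted odds ratio for the transferred group
```

With no true transfer effect in the simulation, the adjusted odds ratio lands near 1, mirroring the registry finding that transfer itself did not change mortality once severity was accounted for.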

Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association, v.55 no.10, pp.761-774, 2022
  • Accurate hydrologic prediction is essential for analyzing the effects of drought, flood, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of a hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of its initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation at the Yongdam catchment using airGRdatassim. The results showed that the Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and 0.933 with the particle filter. In addition, we analyzed the effects of DA-related hyper-parameters, such as the precipitation and potential-evaporation forcing-error parameters and the selection of perturbed and updated states. Under the forcing-error conditions tested, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index, and its optimal forcing noise was relatively smaller. Performance also improved as more state variables were included in the updating step, implying that an adequate selection of updated states can itself be treated as a hyper-parameter. The simulation experiments in this study indicate that DA hyper-parameters need to be carefully optimized to exploit the potential of DA methods.
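The Kling-Gupta efficiency used to score these simulations decomposes model error into correlation, variability, and bias terms. A minimal implementation, with made-up discharge series rather than the Yongdam data, might look like:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    where r is the linear correlation, alpha the std ratio sim/obs, and
    beta the mean ratio sim/obs. KGE = 1 marks a perfect simulation."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 2.2])   # hypothetical observed flow
sim = np.array([1.0, 3.0, 3.1, 4.8, 4.4, 2.0])   # hypothetical simulated flow
score = kge(sim, obs)
```

Because the three terms are penalized jointly, a simulation can only approach KGE = 1 by matching the observations' timing, spread, and volume at once, which is why DA updates that correct model states tend to lift the score.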

Development of sequential sampling plan for Frankliniella occidentalis in greenhouse pepper (고추 온실에서 꽃노랑총채벌레의 축차표본조사법 개발)

  • SoEun Eom;Taechul Park;Kimoon Son;Jung-Joon Park
    • Korean Journal of Environmental Biology, v.40 no.2, pp.164-171, 2022
  • Frankliniella occidentalis is an invasive insect pest that affects over 500 species of host plants and transmits viruses (tomato spotted wilt virus, TSWV). Despite their efficiency in controlling insect pests, pesticides are limited by residues, cost, and environmental burden. Therefore, a fixed-precision sampling plan was developed. The sampling method for F. occidentalis adults in pepper greenhouses consists of spatial distribution analysis, a sampling stop line, and control decision making. For sampling, the plant was divided into an upper part (180 cm above ground), middle part (120-160 cm above ground), and lower part (70-110 cm above ground). In ANCOVA, the P values for intercept and slope were 0.94 and 0.87, respectively, indicating no significant differences among the levels of the pepper plant. In the spatial distribution analysis, coefficients were derived from Taylor's power law (TPL) on the data pooled across plant levels, based on a 3-flower sampling unit. F. occidentalis adults showed an aggregated distribution in greenhouse peppers. The TPL coefficients were used to develop a fixed-precision sampling stop line. For control decision making, action thresholds were pre-set at 3 and 18; with these two thresholds, Nmax values were calculated as 97 and 1,149, respectively. Using the Resampling Validation for Sampling Program (RVSP) and the results gained from the greenhouses, simulated validation of the sampling method showed a reasonable level of precision.
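Given TPL coefficients, a fixed-precision stop line is commonly constructed with Green's (1970) formula, $T_n = (D^2/a)^{1/(b-2)} n^{(b-1)/(b-2)}$, where D is the desired precision (SE/mean). A sketch under hypothetical coefficients (not the values fitted for F. occidentalis in this study):

```python
import numpy as np

def green_stop_line(n, a, b, D=0.25):
    """Green's (1970) fixed-precision stop line: cumulative count T_n at
    which sampling can stop after n sample units, given Taylor's power
    law s^2 = a * m^b and target precision D (SE/mean)."""
    return (D ** 2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))

# Hypothetical TPL coefficients for illustration only
a, b = 2.0, 1.4
n = np.arange(1, 51)
T = green_stop_line(n, a, b)
# For b < 2 the stop line declines: the more units examined, the fewer
# cumulative thrips are needed to reach the target precision.
```

In the field, cumulative counts are plotted against units sampled; sampling stops once the running total crosses the line, and the estimate is then compared against the action threshold for a control decision.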

An Analysis of the Differences in Management Performance by Business Categories from the Perspective of Small Business Systematization (영세 소상공인 조직화에 대한 직능업종별 차이분석과 경영성과)

  • Suh, Geun-Ha;Seo, Mi-Ok;Yoon, Sung-Wook
    • Journal of Distribution Science, v.9 no.2, pp.111-122, 2011
  • The purpose of this study is to survey successful cases of small and medium business systematization by examining entrepreneurs' characteristics and analysing the factors affecting their success. To that end, previous studies on the association types of small businesses were reviewed, a research model was developed, and research hypotheses for an empirical analysis were established. Suh et al. (2010) stress the importance of small business systematization in Korea but also show that small business performance is suffering: such businesses are too small to stand alone. That is why association is so crucial for them: they must stand together. Unfortunately, association is difficult, as they have few specific links and little motivation; even in franchising networks, association tends to be initiated by big franchisers, not small ones. In that sense, association among small businesses is crucial for their long-term survival. With this in mind, this study examines how they think and feel about the issue of 'industrial classification', how important industrial classification is to their business success, and what kinds of problems it raises in the markets. This study explores the differences in cognition among the association types of small businesses from the perspectives of participation motivation, systematization expectation, policy demand level, and management performance. We assume that the different industrial classification types will differ in these cognitions. There are four basic industrial classification types of small businesses: retail sales, restaurant, service, and manufacturing. To date, most studies in this area have focused on collecting data on the external environments of small businesses or performing statistical analyses of their status.
In this study, we surveyed four market areas in Busan, Masan, and Changwon in Korea, where business associations consist of merchants, shop owners, and traders. We surveyed 330 shops and merchants by sending questionnaires or visiting in person; 268 questionnaires were collected and used for the analysis. ANOVA, t-tests, and regression analyses were conducted to test the research hypotheses. The results demonstrate that cognition differs by industrial classification type. Restaurants generally show higher cognition of job-offer problems and lower cognition of their competitiveness; they also depend more on systematization expectation than the other types and show higher cognition at the policy demand level. This study identifies several factors contributing to management performance through these type-dependent differences in cognition: systematization expectation and policy demand level have positive effects on management performance, while participation motivation has a negative effect. We also confirm that the image factors of the different cognitions are linked to an awareness of the value of systematization, and that these factors show sequential and continual patterns in the course of generating performance. In conclusion, this study carries significant implications in classifying small businesses into four associational types (retail sales, restaurant, services, and manufacturing); we believe it is the first empirical survey in this subject area, and more studies will likely use our research framework. The data show that regionally based industrial classification associations, such as those in rural cities or less developed areas, tend to suffer more problems than those in urban areas, and that restaurants suffer more problems than the norm.
Most of the problems raised in this study concern the act of association itself: most associations have serious difficulties in associating. On the other hand, the area of least policy demand is the service type. This study supports the argument that association, rather than financial assistance or management consulting, promotes the start-up and managerial performance of small businesses. The study also has limitations, chiefly the number of questionnaires: because of budget and time constraints, we could not survey all industrial classification types across the country; doing so would have produced more useful results and enhanced the precision of the analysis. The history of systematization is short and the number of industrial classification associations in Korea is relatively low. We should keep in mind, though, that systematization is crucial to entrepreneurs starting their businesses, as it can heavily affect their chances of success. Being strongly associated might be critical to the business success of industrial classification members; thus, the government needs to put more effort and resources into supporting their drive to become more strongly associated.


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the support vector machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle, minimizing an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely to occur, and SVM does not require many training samples, since it builds prediction models using only representative samples near the boundaries, called support vectors. Many experimental studies have shown SVM to be successful in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for binary-class classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class problems as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can reduce computation time for multi-class problems, but may deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often yield a degenerate classifier with a skewed boundary and reduced classification accuracy. SVM ensemble learning is one machine learning approach to these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms. AdaBoost, one of the most widely used ensemble techniques, constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted, so boosting produces new classifiers that better predict examples on which the current ensemble performs poorly. In this way it can reinforce the training of misclassified minority-class observations. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; cross-validated folds were thus tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%); in geometric mean-based prediction accuracy, MGM-Boost (28.12%) again outperforms AdaBoost (24.65%) and SVM (15.42%). A t-test was used to examine whether the performance of each classifier over the 30 folds differed significantly; the results indicate that MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results show that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
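The geometric mean-based accuracy that distinguishes this evaluation from plain arithmetic accuracy can be sketched as the geometric mean of per-class recalls; the toy labels below are illustrative only, not the bond rating data.

```python
import numpy as np

def geometric_mean_accuracy(y_true, y_pred, classes):
    """Geometric mean of per-class recalls: a single badly served minority
    class drags the score toward zero, unlike overall (arithmetic) accuracy."""
    recalls = [(y_pred[y_true == c] == c).mean() for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

y_true = np.array([0, 0, 0, 0, 1, 1])   # imbalanced toy labels
y_pred = np.array([0, 0, 0, 0, 0, 1])   # one minority instance missed
overall = (y_true == y_pred).mean()                      # 5/6, looks fine
gmean = geometric_mean_accuracy(y_true, y_pred, [0, 1])  # sqrt(1.0 * 0.5)
```

The gap between the two scores on this toy case is exactly the effect the abstract reports: SVM's 49.47% arithmetic accuracy collapses to 15.42% under the geometric mean because minority rating classes are poorly predicted.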

Fractionation of Heavy Metals and Correlation with Their Contents in Rice Plant Grown in Paddy near Smelter Area (제련소 인근 논 토양 중 중금속 형태 분류 및 수도체중 중금속 함량과의 상관성)

  • Kim, Seong-Jo;Baek, Seung-Hwa;Moon, Kwang-Hyun
    • Korean Journal of Environmental Agriculture, v.15 no.1, pp.1-10, 1996
  • The contents of heavy metals in soil near the Janghang smelter were examined to understand the present status and the relationship between metal fractions and absorption by rice. Soil samples were taken from eight sites in the paddy fields in 1982 and 1990 and analyzed for the heavy metals Cd, Zn, Cu, and Pb. The results were as follows. Total heavy metal contents in the 1990 samples were higher than in those of 1982, the order of increasing ratio being Cu > Zn > Pb > Cd. By sequential extraction, the variation of Cd content was residual > exchangeable > dilute acid-extractable fractions, with an increase of 38 to 71% over the nine years. In surface soil, the proportions of immobile heavy metals bound within an oxide or silicate matrix (Fe-Mn oxide-bound plus residual fractions) were 31.65, 42.22, 76.57, and 79.49% for Cd, Pb, Cu, and Zn, respectively, while the mobile fractions (exchangeable, dilute acid-extractable, and organically bound) were 68.35, 55.78, 23.43, and 20.28%, respectively. Correlations between heavy metal contents in surface soil and those in rice plant tissues, such as leaf blade, leaf sheath, stem, and panicle axis, were significant, but those for subsurface soil were not. The dilute acid-extractable and organically bound fractions of Cd, Cu, Pb, and Zn in surface soil were the most significantly correlated with contents in the tissues of paddy rice.


Reproducibility of Adenosine Tc-99m sestaMIBI SPECT for the Diagnosis of Coronary Artery Disease (관동맥질환의 진단을 위한 아데노신 Tc-99m sestaMIBI SPECT의 재현성)

  • Lee, Duk-Young;Bae, Jin-Ho;Lee, Sang-Woo;Chun, Kyung-Ah;Yoo, Jeong-Soo;Ahn, Byeong-Cheol;Ha, Jeoung-Hee;Chae, Shung-Chull;Lee, Kyu-Bo;Lee, Jae-Tae
    • The Korean Journal of Nuclear Medicine, v.39 no.6, pp.473-480, 2005
  • Purpose: Adenosine myocardial perfusion SPECT has proven useful in detecting coronary artery disease, in following up the success of various therapeutic regimens, and in assessing the prognosis of coronary artery disease. The purpose of this study was to define the reproducibility of myocardial perfusion SPECT using adenosine stress testing between two consecutive Tc-99m sestaMIBI (MIBI) SPECT studies in the same subjects. Methods: Thirty patients with suspected coronary artery disease in stable condition underwent sequential Tc-99m MIBI SPECT studies using intravenous adenosine. The gamma camera and the acquisition and processing protocols were identical for the two tests, and no invasive procedures were performed between them. The mean interval between tests was 4.1 days (range, 2-11 days). The left ventricular wall was divided into 18 segments, and the degree of myocardial tracer uptake was graded visually on a four-point scoring system. Images were interpreted by two independent nuclear medicine physicians, with consensus taken for the final decision when segmental scores disagreed. Results: Hemodynamic responses to adenosine did not differ between the two consecutive studies. There were no side effects serious enough to stop the adenosine infusion, and side-effect profiles did not differ. When myocardial uptake was classified as normal or abnormal, 481 of 540 segments were concordant (agreement rate 89%, kappa index 0.74). With the four-grade scoring system, exact agreement was 81.3% (439 of 540 segments, tau-b = 0.73). One- and two-grade differences were observed in 97 segments (18%) and 4 segments (0.7%), respectively, but no three-grade difference was observed in any segment. Extent and severity scores did not differ between the two studies.
The extent and severity scores of the perfusion defect showed excellent positive correlation between the two tests (r = 0.982 and 0.965 for percentage extent and severity scores, respectively; p<0.001). Conclusion: Hemodynamic responses and side-effect profiles did not differ between two consecutive adenosine stress tests in the same subjects. Adenosine Tc-99m sestaMIBI SPECT is highly reproducible and could be used to assess temporal changes in myocardial perfusion in individual patients.
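The chance-corrected agreement reported above (kappa 0.74 for 481 of 540 concordant segments) follows Cohen's kappa. A minimal sketch with made-up segment scores from two hypothetical repeated readings:

```python
import numpy as np

def cohens_kappa(a, b, categories):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from the two
    readings' marginal frequencies."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical four-point segment scores from two repeated SPECT readings
read1 = np.array([0, 0, 1, 2, 3, 1, 0, 2, 1, 0])
read2 = np.array([0, 0, 1, 2, 3, 1, 1, 2, 0, 0])
k = cohens_kappa(read1, read2, categories=[0, 1, 2, 3])
```

Because kappa subtracts chance agreement, a raw agreement rate of 89% can correspond to a lower kappa (here 0.74), which is why it is preferred to percent agreement for reproducibility claims.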