• Title/Summary/Keyword: Performance Verification (성능 검증)


Adaptive Image Rescaling for Weakly Contrast-Enhanced Lesions in Dedicated Breast CT: A Phantom Study (약하게 조영증강된 병변의 유방 전용 CT 영상의 대조도 개선을 위한 적응적 영상 재조정 방법: 팬텀 연구)

  • Bitbyeol Kim;Ho Kyung Kim;Jinsung Kim;Yongkan Ki;Ji Hyeon Joo;Hosang Jeon;Dahl Park;Wontaek Kim;Jiho Nam;Dong Hyeon Kim
    • Journal of the Korean Society of Radiology / v.82 no.6 / pp.1477-1492 / 2021
  • Purpose: Dedicated breast CT is an emerging volumetric X-ray imaging modality for diagnosis that does not require painful breast compression. To improve the detection rate of weakly enhanced lesions, an adaptive image rescaling (AIR) technique was proposed. Materials and Methods: Two disks, containing five identical holes and five holes of different diameters, respectively, were scanned at 60/100 kVp to obtain single-energy CT (SECT), dual-energy CT (DECT), and AIR images. A piece of pork was also scanned as a subclinical trial. Image quality was evaluated using image contrast and contrast-to-noise ratio (CNR), and differences in imaging performance were assessed with Student's t-test. Results: The total mean image contrast of AIR (0.70) reached 74.5% of that of DECT (0.94) and exceeded that of SECT (0.22) by 318.2%. The total mean CNR of AIR (5.08) was 35.5% of that of SECT (14.30) and exceeded that of DECT (2.28) by 222.8%. A similar trend was observed in the subclinical study. Conclusion: The results demonstrate the superior image contrast of AIR over SECT and its higher overall image quality compared with DECT at half the exposure. AIR therefore appears to have the potential to improve lesion detectability in dedicated breast CT.
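The image-quality metrics quoted in this abstract can be reproduced from region-of-interest (ROI) statistics. Below is a minimal Python sketch, assuming two NumPy arrays cropped from a reconstructed slice; the ROI coordinates and the exact contrast/CNR definitions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def contrast_and_cnr(lesion_roi, background_roi):
    """Image contrast and contrast-to-noise ratio (CNR) from two ROIs.

    Assumed definitions (common in CT phantom studies):
      contrast = |mean_lesion - mean_background| / mean_background
      CNR      = |mean_lesion - mean_background| / std_background
    """
    mu_l, mu_b = np.mean(lesion_roi), np.mean(background_roi)
    contrast = abs(mu_l - mu_b) / mu_b
    cnr = abs(mu_l - mu_b) / np.std(background_roi)
    return contrast, cnr

# Hypothetical usage on a SECT, DECT, or AIR slice:
# lesion = slice_img[120:140, 200:220]; background = slice_img[60:80, 60:80]
# c, cnr = contrast_and_cnr(lesion, background)
```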

Applicability Analysis of Constructing UDM of Cloud and Cloud Shadow in High-Resolution Imagery Using Deep Learning (딥러닝 기반 구름 및 구름 그림자 탐지를 통한 고해상도 위성영상 UDM 구축 가능성 분석)

  • Nayoung Kim;Yerin Yun;Jaewan Choi;Youkyung Han
    • Korean Journal of Remote Sensing / v.40 no.4 / pp.351-361 / 2024
  • Satellite imagery contains various elements such as clouds, cloud shadows, and terrain shadows. Accurately identifying and eliminating these factors, which complicate satellite image analysis, is essential for maintaining the reliability of remote sensing imagery. For this reason, satellites such as Landsat-8, Sentinel-2, and Compact Advanced Satellite 500-1 (CAS500-1) provide Usable Data Masks (UDMs) with their images as part of their Analysis Ready Data (ARD) products. Precise detection of clouds and their shadows is crucial for the accurate construction of these UDMs. Existing cloud and cloud shadow detection methods are categorized into threshold-based methods and Artificial Intelligence (AI)-based methods. Recently, AI-based methods, particularly deep learning networks, have been preferred because of their advantage in handling large datasets. This study analyzes the applicability of constructing UDMs for high-resolution satellite images through deep learning-based cloud and cloud shadow detection using open-source datasets. To validate the performance of the deep learning network, we compared the detection results generated by the network with pre-existing UDMs from Landsat-8, Sentinel-2, and CAS500-1 satellite images. The results demonstrated high accuracy in the detection outcomes produced by the network. Additionally, we applied the network to detect clouds and their shadows in KOMPSAT-3/3A images, for which UDMs are not provided. The experiment confirmed that the deep learning network effectively detects clouds and their shadows in high-resolution satellite images, demonstrating that UDM data for high-resolution satellite imagery can be constructed using a deep learning network.
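To make the UDM construction step concrete, here is a minimal sketch assuming a segmentation network has already produced per-pixel class codes; the class encoding, function names, and accuracy metric are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Assumed class codes for the network's per-pixel predictions.
CLEAR, CLOUD, CLOUD_SHADOW = 0, 1, 2

def build_udm(pred_classes):
    """Mark a pixel as unusable (True) if it is cloud or cloud shadow."""
    return np.isin(pred_classes, (CLOUD, CLOUD_SHADOW))

def overall_accuracy(pred_mask, ref_mask):
    """Fraction of pixels where the predicted and reference masks agree."""
    return float(np.mean(pred_mask == ref_mask))

# pred = segmentation_model(image)              # hypothetical network output
# udm  = build_udm(pred)
# acc  = overall_accuracy(udm, reference_udm)   # e.g. against a Landsat-8 UDM
```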

Development of an Automated Algorithm for Analyzing Rainfall Thresholds Triggering Landslide Based on AWS and AMOS

  • Donghyeon Kim;Song Eu;Kwangyoun Lee;Sukhee Yoon;Jongseo Lee;Donggeun Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.9 / pp.125-136 / 2024
  • This study presents an automated Python algorithm for analyzing rainfall characteristics to establish critical rainfall thresholds as part of a landslide early warning system. Rainfall data were sourced from the Korea Meteorological Administration's Automatic Weather System (AWS) and the Korea Forest Service's Automatic Mountain Observation System (AMOS), while landslide data from 2020 to 2023 were gathered via the Life Safety Map. The algorithm involves three main steps: 1) processing rainfall data to correct inconsistencies and fill data gaps, 2) identifying the nearest observation station to each landslide location, and 3) conducting statistical analysis of rainfall characteristics. The analysis utilized power law and nonlinear regression, yielding an average R2 of 0.45 for the relationships between rainfall intensity-duration, effective rainfall-duration, antecedent rainfall-duration, and maximum hourly rainfall-duration. The critical thresholds identified were 0.9-1.4 mm/hr for rainfall intensity, 68.5-132.5 mm for effective rainfall, 81.6-151.1 mm for antecedent rainfall, and 17.5-26.5 mm for maximum hourly rainfall. Validation using AUC-ROC analysis showed a low AUC value of 0.5, highlighting the limitations of using rainfall data alone to predict landslides. Additionally, the algorithm's speed performance evaluation revealed a total processing time of 30 minutes, further emphasizing the limitations of relying solely on rainfall data for disaster prediction. However, to mitigate loss of life and property damage due to disasters, it is crucial to establish criteria using quantitative and easily interpretable methods. Thus, the algorithm developed in this study is expected to contribute to reducing damage by providing a quantitative evaluation of critical rainfall thresholds that trigger landslides.
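As an illustration of the rainfall intensity-duration (I-D) threshold step, the sketch below fits a power law I = a·D^b with SciPy; the event values, initial guesses, and R² computation are illustrative assumptions, not the study's data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(duration_hr, a, b):
    """Assumed I-D threshold form: rainfall intensity = a * duration**b."""
    return a * np.power(duration_hr, b)

# Hypothetical landslide-triggering events: duration (hr) and mean intensity (mm/hr)
durations   = np.array([6.0, 12.0, 24.0, 48.0, 72.0])
intensities = np.array([8.5, 5.1, 3.2, 1.9, 1.3])

(a, b), _ = curve_fit(power_law, durations, intensities, p0=(10.0, -0.5))
residuals = intensities - power_law(durations, a, b)
r2 = 1.0 - np.sum(residuals**2) / np.sum((intensities - intensities.mean())**2)
print(f"I = {a:.2f} * D^{b:.2f}, R^2 = {r2:.2f}")
```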

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His Autoregressive Conditional Heteroscedasticity (ARCH) model was generalized by Bollerslev (1986) into the GARCH family. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimate GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results remain meaningful because the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for its SVR-based counterpart; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs, including brokerage commissions and slippage. IVTS trading performance is unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models, and further studies on other machine learning-based GARCH models can provide better information for stock market investors.
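The core of the SVR-based estimation can be sketched as a regression of a next-day variance proxy on lagged GARCH-style features; everything below (the feature construction, the use of squared returns as the variance proxy, and the variable names) is an illustrative assumption, not the authors' exact specification.

```python
import numpy as np
from sklearn.svm import SVR

def svr_garch_forecast(returns, kernel="linear"):
    """One-step-ahead volatility-proxy forecast in the spirit of SVR-GARCH(1,1)."""
    r2 = returns ** 2                              # squared returns as variance proxy
    smoothed = np.convolve(r2, np.ones(5) / 5, mode="same")  # crude sigma^2 proxy
    X = np.column_stack([r2[:-1], smoothed[:-1]])  # features: r_{t-1}^2, sigma^2_{t-1}
    y = r2[1:]                                     # target: next-day variance proxy
    model = SVR(kernel=kernel).fit(X, y)
    return model.predict(X[-1:])[0]

# rets = np.diff(np.log(kospi200_close))          # hypothetical KOSPI 200 log returns
# sigma2_hat = svr_garch_forecast(rets, kernel="linear")
# IVTS rule from the abstract: buy volatility if the forecast rises, sell if it falls.
```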

Lipopolysaccharide-induced Synthesis of IL-1beta, IL-6, TNF-alpha and TGF-beta by Peripheral Blood Mononuclear Cells (내독소에 의한 말초혈액 단핵구의 IL-1beta, IL-6, TNF-alpha와 TGF-beta 생성에 관한 연구)

  • Jung, Sung-Hwan;Park, Choon-Sik;Kim, Mi-Ho;Kim, Eun-Young;Chang, Hun-Soo;Ki, Shin-Young;Uh, Soo-Taek;Moon, Seung-Hyuk;Kim, Yang-Hoon;Lee, Hi-Bal
    • Tuberculosis and Respiratory Diseases / v.45 no.4 / pp.846-860 / 1998
  • Background: Endotoxin (lipopolysaccharide, LPS), a potent activator of the immune system, can induce acute and chronic inflammation through the production of cytokines by a variety of cells such as monocytes, endothelial cells, lymphocytes, eosinophils, neutrophils, and fibroblasts. LPS stimulates mononuclear cells by two different pathways, CD14-dependent and CD14-independent; the former has been well documented, but the latter has not. LPS binds to LPS-binding protein (LBP) in serum to form an LPS-LBP complex, which interacts with CD14 molecules on the mononuclear cell surface in peripheral blood or is transported to the tissues. At high concentrations, LPS can stimulate macrophages directly without LBP. We investigated the generation of the proinflammatory cytokines interleukin 1 (IL-1), IL-6, and TNF-α, and the fibrogenic cytokine TGF-β, by peripheral blood mononuclear cells (PBMC) after LPS stimulation under serum-free conditions, which lack LBP. Methods: PBMC were obtained by centrifugation on Ficoll-Hypaque solution of peripheral venous blood from healthy normal subjects and then stimulated with LPS (0.1 μg/mL to 100 μg/mL). The activities of IL-1, IL-6, TNF, and TGF-β were measured by bioassays using cytokine-dependent proliferating or inhibited cell lines. The cellular sources producing the cytokines were investigated by immunohistochemical staining and in situ hybridization. Results: PBMC started to produce IL-6, TNF-α, and TGF-β at 1 hr, 4 hrs, and 8 hrs, respectively, after LPS stimulation, and production of all three continued to increase up to 96 hrs after stimulation. The amounts produced were 19.8 ng/mL of IL-6 by 10^5 PBMC, 4.1 ng/mL of TNF by 10^6 PBMC, and 34.4 pg/mL of TGF-β by 2×10^6 PBMC. Immunoreactivity for IL-6, TNF-α, and TGF-β was detected on monocytes in LPS-stimulated PBMC, and some lymphocytes showed positive immunoreactivity for TGF-β. Double immunohistochemical staining showed that IL-1β, IL-6, and TNF-α expression was not associated with CD14 positivity on monocytes. IL-1β, IL-6, TNF-α, and TGF-β mRNA expression matched the immunoreactivity observed for each cytokine. Conclusion: When monocytes are stimulated with LPS under serum-free conditions, IL-6 and TNF-α are secreted in the early stage of inflammation. In contrast, the secretion of TGF-β arises in the late stage and is maintained beyond 96 hrs. The main cells releasing IL-1β, IL-6, TNF-α, and TGF-β are monocytes, but lymphocytes can also secrete TGF-β.


Optimum Management Plan for Soil Contamination Facilities (특정토양오염관리대상시설의 최적 관리방안에 관한 연구)

  • Park, Jae-Soo;Kim, Ki-Ho;Kim, Hae-Keum;Choi, Sang-Il
    • Korean Journal of Soil Science and Fertilizer / v.45 no.2 / pp.293-300 / 2012
  • This study investigated the unsuitability rate of petroleum storage facilities, changes in corrosion over time after installation according to facility status, the time of installation, the years elapsed since installation, and inspection methods and motivations, based on the inspection results of domestic soil-related specialized agencies, in order to derive optimal management plans that fit the actual status of soil contamination facilities. The results showed that facilities more than five years past the initial leak test at installation need to be inspected periodically, considering the costs of leak testing and of remediating polluted soil. The inspection period can be decided by cost, and leak test results differ between individual tests depending on whether the method is direct or indirect. To compensate for this, we suggest adopting a direct inspection method on a regular schedule. Alternatively, inspection can be completed voluntarily using a method of equivalent level, to ease the burden imposed by inspection results. An improved construction supervision and performance test system may also be needed to minimize defects arising during facility installation, together with an upgrade program for facilities during the intervals between inspections.

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging / v.41 no.3 / pp.234-240 / 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton-scattered data. We show that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered-subsets principle, can be applied to image reconstruction for the Compton camera, and we compare several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered-subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing non-overlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to the standard EM with 64 iterations, was approximately 14 times faster in computation time than the standard EM. In OSEM, all three subset-selection schemes yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to image reconstruction for the Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while preserving the quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position appears to be the most practical choice.
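For readers unfamiliar with OSEM, the sketch below shows the block-iterative update for a generic linear model p = A f; the dense system matrix, the subset list, and all variable names are illustrative assumptions, since a Compton camera's conical projection model is far more involved than this toy setup.

```python
import numpy as np

def osem(A, p, subsets, n_iter=4, eps=1e-12):
    """Ordered-subset EM for p = A f with nonnegative f (toy, dense-matrix version)."""
    f = np.ones(A.shape[1])                      # uniform initial image
    for _ in range(n_iter):
        for s in subsets:                        # one sub-iteration per subset
            As, ps = A[s], p[s]
            ratio = ps / (As @ f + eps)          # measured / forward-projected
            f *= (As.T @ ratio) / (As.T @ np.ones(len(s)) + eps)
    return f

# subsets = np.array_split(np.random.permutation(A.shape[0]), 16)  # e.g. 16 subsets
# f_osem  = osem(A, p, subsets, n_iter=4)        # ~ standard EM with 64 iterations
# f_em    = osem(A, p, [np.arange(A.shape[0])], n_iter=64)         # plain EM
```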

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.97-117 / 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand them. When companies analyze their products or compare them with competitors, appropriate criteria must be selected for objective evaluation. The criteria should reflect the features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect differences in consumer opinion from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, these approaches still produce criteria irrelevant to the products because extracted or improper words are not refined. To overcome this limitation, this research proposes an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with a k-nearest neighbor classifier. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later merged to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, topic words are obtained with LDA, and only the nouns among them are chosen as potential criteria words. The words are then tagged according to whether they can serve as criteria for each middle-level category. Next, every tagged word is vectorized with a pre-trained word embedding model. Finally, a k-nearest neighbor case-based approach is used to classify each word against the tags. After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted with the morpheme analysis module and LDA. Possible criteria words are extracted as nouns from the data and vectorized with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words with the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from 11st, one of the biggest e-commerce companies in Korea. Review data came from the 'Electronics/Digital' section, one of the high-level categories on 11st. For performance evaluation, three other models were compared with the suggested model: the actual criteria used by 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation predicted evaluation criteria for 10 low-level categories with the suggested model and the 3 models above. Criteria words extracted from each model were combined into a single word set and used in survey questionnaires, in which respondents chose every item they considered an appropriate criterion for each category; each model received a score when chosen words had been extracted by that model. The suggested model had higher scores than the other models in 8 of the 10 low-level categories, and paired t-tests on the scores confirmed that it performs better in 26 of 30 tests. In addition, the suggested model was the best model in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement by a k-nearest neighbor approach, overcoming the limits of previous dictionary-based and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market.
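A minimal sketch of the two-stage pipeline described above is given below, using gensim for LDA and scikit-learn for the k-NN refinement; the helper names (tokenized_reviews, embed, tagged_words, tags) and the binary criterion/other labeling are assumptions standing in for the paper's preparation-phase outputs, and the noun filtering by the morpheme analyzer is omitted.

```python
from gensim import corpora
from gensim.models import LdaModel
from sklearn.neighbors import KNeighborsClassifier

def candidate_words(tokenized_reviews, num_topics=10, topn=20):
    """Stage 1: collect candidate criteria words from LDA topic words."""
    dictionary = corpora.Dictionary(tokenized_reviews)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_reviews]
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)
    return {word for t in range(num_topics) for word, _ in lda.show_topic(t, topn)}

def refine_with_knn(candidates, embed, tagged_words, tags, k=5):
    """Stage 2: keep only candidates that k-NN classifies as criteria words."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit([embed(w) for w in tagged_words], tags)   # tags: "criterion" / "other"
    return [w for w in candidates if knn.predict([embed(w)])[0] == "criterion"]
```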

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there is continuous demand across fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. We therefore propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, product information data are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, product name texts from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. After parameter optimization, a vector dimension of 300 and a window size of 15 were used for the subsequent experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as the product name dataset to cluster product groups more efficiently, extracting product names similar to KSIC index words based on cosine similarity. The market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional sampling-based or assumption-heavy methods. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the method has high potential for practical application, since it can meet unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure such as Jaccard similarity with Word2Vec, and the product group clustering method could be replaced with other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they can further improve the performance of the basic model conceptually proposed in this study.
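The bottom-up estimation step can be sketched as follows with gensim's Word2Vec; the vector size (300) and window (15) follow the abstract, while the data structures, similarity threshold, and the example index word are illustrative assumptions.

```python
from gensim.models import Word2Vec

def market_size(product_name_tokens, sales_by_product, ksic_index_word,
                threshold=0.6):
    """Train Word2Vec on tokenized product names, group names similar to a KSIC
    index word by cosine similarity, and sum the matching products' sales."""
    model = Word2Vec(sentences=product_name_tokens, vector_size=300, window=15,
                     min_count=1, workers=4)
    group = [w for w, sim in model.wv.most_similar(ksic_index_word, topn=500)
             if sim >= threshold] + [ksic_index_word]
    return sum(sales_by_product.get(name, 0.0) for name in group)

# size = market_size(tokens, sales, "refrigerator")   # hypothetical KSIC index word
```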

A Study on Preparation of 3'-[18F]Fluoro-3'-deoxythymidine and Its Biodistribution in 9L Glioma Bearing Rats (3'-[18F]Fluoro-3'-deoxythymidine의 합성과 9L glioma 세포를 이식한 래트에서의 체내동태에 관한 연구)

  • Shim, Ah-Young;Moon, Byung-Seok;Lee, Tae-Sup;Lee, Kyo-Chul;An, Gwang-Il;Yang, Seung-Dae;Yu, Kook-Hyun;Cheon, Gi-Jeong;Choi, Chang-Woon;Lim, Sang-Moo;Chun, Kwon-Soo
    • Nuclear Medicine and Molecular Imaging / v.40 no.5 / pp.263-270 / 2006
  • Purpose: Several radioisotope-labeled thymidine derivatives, such as [11C]thymidine, have been developed to visualize cell proliferation in tumors, but it is difficult to track metabolism with [11C]thymidine because of its rapid in vivo degradation and short physical half-life. 3'-[18F]Fluoro-3'-deoxythymidine ([18F]FLT) was reported to benefit from the longer half-life of fluorine-18 and the lack of metabolic degradation in vivo. Here, we describe the synthesis of [18F]FLT, compare it with [18F]FET and [18F]FDG in cultured 9L cells, and report its biodistribution and PET images in 9L tumor-bearing rats. Materials and Methods: For the synthesis of [18F]FLT, 3-N-tert-butoxycarbonyl-(5'-O-(4,4'-dimethoxytriphenylmethyl)-2'-deoxy-3'-O-(4-nitrobenzenesulfonyl)-β-D-threopentofuranosyl)thymine was used as the FLT precursor, in which a tert-butyloxycarbonyl group protects the N3-position and a 4-nitrobenzenesulfonyl (nosyl) group is installed at the 3'-O position. Radiolabeling of the nosyl-substituted precursor with 18F was performed in acetonitrile at 120°C, followed by deprotection with 0.5 N HCl. Cell uptake was measured in cultured 9L glioma cells. Biodistribution was evaluated in 9L tumor-bearing rats at 10, 30, 60, and 120 min after intravenous injection, and PET images were obtained 60 minutes after injection. Results: The radiochemical yield was about 20-30%, and the radiochemical purity was more than 95% after HPLC purification. Cellular uptake of [18F]FLT increased over time. At 120 min post-injection, the tumor/blood, tumor/muscle, and tumor/brain ratios were 1.61±0.34, 1.70±0.30, and 9.33±2.22, respectively. The 9L tumor was well visualized on the PET image at 60 min post-injection. Conclusion: The uptake of [18F]FLT in tumor was higher than in normal brain, and the [18F]FLT PET image was acceptable. These results suggest the potential of [18F]FLT as an imaging agent for brain tumors.