• Title/Summary/Keyword: Field Evaluation

The 1998, 1999 Patterns of Care Study for Breast Irradiation after Mastectomy in Korea (1998, 1999년도 우리나라에서 시행된 근치적 유방 전절제술 후 방사선치료 현황 조사)

  • Keum, Ki-Chang;Shim, Su-Jung;Lee, Ik-Jae;Park, Won;Lee, Sang-Wook;Shin, Hyun-Soo;Chung, Eun-Ji;Chie, Eui-Kyu;Kim, Il-Han;Oh, Do-Hoon;Ha, Sung-Whan;Lee, Hyung-Sik;Ahn, Sung-Ja
    • Radiation Oncology Journal / v.25 no.1 / pp.7-15 / 2007
  • Purpose: To determine the patterns of evaluation and treatment of patients with breast cancer who received radiotherapy after mastectomy, a nationwide study was performed with the goal of improving radiotherapy treatment. Materials and Methods: A web-based database system for the Korean Patterns of Care Study (PCS) for six common cancers was developed. Randomly selected records of 286 eligible patients treated between 1998 and 1999 at 17 hospitals were reviewed. Results: The ages of the study patients ranged from 20 to 80 years (median 44 years). The pathologic T stage by the AJCC was T1 in 9.7% of the cases, T2 in 59.2%, T3 in 25.6%, and T4 in 5.3%. For nodal involvement, N0 accounted for 7.3%, N1 for 14%, N2 for 38.8%, and N3 for 38.5% of the cases. The AJCC stage was stage I in 0.7% of the cases, stage IIa in 3.8%, stage IIb in 9.8%, stage IIIa in 43%, stage IIIb in 2.8%, and stage IIIc in 38.5%. Various sequences of chemotherapy and radiotherapy were used after mastectomy. Mastectomy and chemotherapy followed by radiotherapy was the most common sequence (47% of cases). Mastectomy, chemotherapy, and radiotherapy followed by additional chemotherapy was performed in 35% of cases, and neoadjuvant chemoradiotherapy in 12.5%. The radiotherapy volume was the chest wall only in 5.6% of cases; chest wall and supraclavicular fossa (SCL) in 20.3%; chest wall, SCL, and internal mammary lymph nodes (IMN) in 27.6%; chest wall, SCL, and posterior axillary lymph nodes in 25.9%; and chest wall, SCL, IMN, and posterior axillary lymph nodes in 19.9%. Two patients received IMN irradiation only. The chest wall was irradiated with tangential fields in 57.3% of cases and with electron beams in 42%. A bolus on the chest wall was used in 54.8% of the tangential-field cases and 52.5% of the electron-beam cases. The radiation dose was 45~59.4 Gy (median 50.4 Gy) to the chest wall, 45~59.4 Gy (median 50.4 Gy) to the SCL, and 4.8~38.8 Gy (median 9 Gy) to the posterior axillary boost (PAB). Conclusion: Radiotherapy after mastectomy for breast cancer varied considerably among hospitals, most of all in the irradiation of the chest wall. A separate analysis of the details of radiotherapy planning, together with treatment outcomes, is needed to evaluate these differing practices.

The Evaluation of Radiation Dose to Embryo/Fetus and the Design of Shielding in the Treatment of Brain Tumors (임산부의 전뇌 방사선 치료에 있어서의 태아의 방사선량 측정 및 차폐 구조의 설계)

  • Cho, Woong;Huh, Soon-Nyung;Chie, Eui-Kyu;Ha, Sung-Whan;Park, Yang-Gyun;Park, Jong-Min;Park, Suk-Won
    • Journal of Radiation Protection and Research / v.31 no.4 / pp.203-210 / 2006
  • Purpose: To estimate the dose to the embryo/fetus of a pregnant patient with a brain tumor, and to design a shielding device to keep the embryo/fetus dose under acceptable levels. Materials and Methods: A shielding wall of 1.55 m height, 0.9 m width, and 30 cm thickness was fabricated on 4 trolleys. It was placed between the patient and the treatment head of a linear accelerator to attenuate leakage radiation from the treatment head effectively, with its top edge 1 cm below the lower margin of the treatment field so as to minimize the dose to the patient from the treatment head without intruding into the field. A neck supporter against patient-scattered radiation, made of 2 cm thick Cerrobend, was designed to minimize radiation scattered from the treatment fields; it is divided into two sections that are installed around the patient's neck from the right and left sides. A shielding bridge against room-scattered radiation held two 3 mm lead sheets above the abdomen, with three detectors set up beneath them. A humanoid phantom was irradiated with the same treatment parameters, with and without the shielding devices, and doses were measured using TLDs and ionization chambers with and without a build-up cap. Results: The dose to the embryo/fetus without shielding was 3.20, 3.21, 1.44, and 0.90 cGy at off-field distances of 30, 40, 50, and 60 cm. With shielding, the dose was reduced to 0.88, 0.60, 0.35, and 0.25 cGy, a shielding effect of roughly 70% to 80% (a worked check of these figures follows this entry). TLD results were 1.8, 1.2, 0.8, 1.2, and 0.8 cGy. The dose rate measured by a survey meter was 10.9 mR/h at the surface of the patient's abdomen. The dose to the embryo/fetus over the entire treatment was estimated at about 1 cGy. Conclusion: According to AAPM Report No. 50 on dose limits for the embryo/fetus during pregnancy, a dose of less than 5 cGy poses little risk. Our measurements satisfy this recommendation, and our shielding technique proved acceptable.
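
The roughly 70~80% shielding effect quoted above can be checked directly from the reported dose pairs. A minimal Python sketch (an editorial illustration, not part of the paper):

```python
# Percent dose reduction achieved by the shielding, from the unshielded and
# shielded embryo/fetus doses reported in the abstract above.
unshielded = [3.20, 3.21, 1.44, 0.90]  # cGy at 30, 40, 50, 60 cm off-field
shielded = [0.88, 0.60, 0.35, 0.25]    # cGy at the same distances

for dist, u, s in zip((30, 40, 50, 60), unshielded, shielded):
    reduction = (1 - s / u) * 100
    print(f"{dist} cm off-field: {u:.2f} -> {s:.2f} cGy ({reduction:.0f}% reduction)")
```

The computed reductions (about 72%, 81%, 76%, and 72%) reproduce the 70~80% range stated in the results.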

Dose Planning of Forward Intensity Modulated Radiation Therapy for Nasopharyngeal Cancer using Compensating Filters (보상여과판을 이용한 비인강암의 전방위 강도변조 방사선치료계획)

  • Chu Sung Sil;Lee Sang-wook;Suh Chang Ok;Kim Gwi Eon
    • Radiation Oncology Journal / v.19 no.1 / pp.53-65 / 2001
  • Purpose: To improve local control in patients with nasopharyngeal cancer, we implemented 3-D conformal radiotherapy and forward intensity-modulated radiation therapy (IMRT) using compensating filters. Three-dimensional conformal radiotherapy with intensity modulation is a new modality for cancer treatment. We designed 3-D treatment plans with a 3-D RTP (radiation treatment planning) system and evaluated the dose distributions with tumor control probability (TCP) and normal tissue complication probability (NTCP). Materials and Methods: We developed a treatment plan consisting of four intensity-modulated photon fields delivered through compensating filters, with partial-transmission blocks for critical organs. Full CT images of the head and neck were acquired in 3 mm slices, the PTV (planning target volume) and surrounding critical organs were delineated, and 3-D images were reconstructed on the planning computer. In the planning stage, the planner specifies the number of beams and their directions, including non-coplanar beams, the prescribed dose for the target volume, and the permissible doses for normal organs and the overlap regions. We designed compensating filters according to the tissue deficit and the shape of the PTV, with dose weighting for each field to obtain an adequate dose distribution, and weighted shielding blocks for transmission. Therapeutic gains were evaluated with numerical models of tumor control probability and normal tissue complication probability. The TCP and NTCP obtained from DVHs (dose volume histograms) were compared between 3-D conformal radiotherapy and forward intensity-modulated conformal radiotherapy using compensators and weighted blocks. Optimization of the weight distribution was performed iteratively, starting from an initial guess or an even weight distribution. Results: Using the four-field IMRT plan, we customized the dose distribution to conform to and deliver a sufficient dose to the PTV. In addition, in the overlap regions between the PTV and normal organs (spinal cord, salivary gland, pituitary, optic nerves), the dose was kept within the tolerance of the respective organs. With the compensating filters we obtained a sufficient TCP value and an acceptable NTCP (a generic sketch of such TCP/NTCP models follows this entry). Quality assurance checks showed acceptable agreement between the planned fields and those implemented with the MLC (multi-leaf collimator). Conclusion: IMRT provides a powerful and efficient solution for complex planning problems where the surrounding normal tissues place severe constraints on the prescription dose. Intensity-modulated fields can be delivered efficaciously and accurately using compensating filters.
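
For readers unfamiliar with TCP/NTCP models, the following is a generic sketch of the kind of radiobiological indices the study evaluates from DVHs: a Poisson-model TCP and a Lyman-model NTCP for a uniformly irradiated volume. The parameter values (clonogen number n0, radiosensitivity alpha, tolerance dose td50, slope m) are illustrative assumptions, not the paper's fitted values.

```python
from math import exp, erf, sqrt

def tcp_poisson(dose_gy, n0=1e7, alpha=0.3):
    """Poisson TCP: probability that no clonogen survives a uniform dose."""
    return exp(-n0 * exp(-alpha * dose_gy))

def ntcp_lyman(dose_gy, td50=65.0, m=0.18):
    """Lyman NTCP: normal-CDF dose response with tolerance dose TD50 and slope m."""
    t = (dose_gy - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

for d in (60, 66, 70):
    print(f"D = {d} Gy: TCP = {tcp_poisson(d):.3f}, NTCP = {ntcp_lyman(d):.3f}")
```

In practice both indices are computed from the full DVH rather than a single uniform dose, but the single-dose form shows the shape of the trade-off the planners optimize.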

Agronomic Characteristics and Productivity of Winter Forage Crop in Sihwa Reclaimed Field (시화 간척지에서 월동 사료작물의 초종 및 품종에 따른 생육특성 및 생산성)

  • Kim, Jong Geun;Wei, Sheng Nan;Li, Yan Fen;Kim, Hak Jin;Kim, Meing Joong;Cheong, Eun Chan
    • Journal of The Korean Society of Grassland and Forage Science / v.40 no.1 / pp.19-28 / 2020
  • This study was conducted to compare the agronomic characteristics and productivity of winter forage crops in reclaimed land, by species and variety. The winter forage crops used in this study were developed by the National Institute of Crop Science, RDA. Oats ('Samhan', 'Jopung', 'Taehan', 'Dakyung' and 'Hi-early'), forage barley ('Yeongyang', 'Yuyeon', 'Yujin', 'Dacheng' and 'Yeonho'), rye ('Gogu', 'Jogreen' and 'Daegokgreen') and triticale ('Shinyoung', 'Saeyoung', 'Choyoung', 'Sinseong', 'Minpung' and 'Gwangyoung') were planted in the reclaimed land of the Sihwa district in Hwaseong, Gyeonggi-do in the autumn of 2018, cultivated using the standard method for each crop, and harvested in May 2019 (oat and rye: 8 May; barley and triticale: 20 May). The emergence rate was lowest in rye (84.4%), while forage barley, oat, and triticale were at similar levels (92.8~98.8%). In tiller number, triticale was lowest (416 tillers/㎡) and oat highest (603 tillers/㎡). Rye had the earliest heading date (April 21), triticale headed on April 26, and oat and forage barley in early May (May 2 and May 5). Plant height was greatest in rye (95.6 cm), triticale and forage barley were similar (76.3 and 68.3 cm), and oat was shortest (54.2 cm). The dry matter (DM) content of rye was highest, averaging 46.04%, and the other species were similar at 35.09~37.54%. Productivity differed among species and varieties: forage barley had the highest average dry matter yield (4,344 kg/ha), oat was similar to barley, and rye and triticale were lowest. Among oats, 'Dakyung' and 'Hi-early' had higher DM yields (4,283 and 5,490 kg/ha), and among forage barleys, 'Yeonho', 'Yujin' and 'Dacheng' were higher (4,888, 5,433 and 5,582 kg/ha). The crude protein content of oat (6.58%) tended to be the highest, and its TDN (total digestible nutrient) content (63.61%) was higher than that of the other species. In relative feed value (RFV), oats averaged 119, while the other three species averaged 92~105. The 1,000-grain weight was highest in triticale (43.03 g) and lowest in rye (31.61 g). In the evaluation of germination by salt concentration, the germination rate remained at about 80% from 0.2 to 0.4% salinity. The correlation coefficient between germination and salt concentration was strong in oat and barley (-0.91 and -0.92) and weakest in rye (-0.66). In conclusion, forage barley and oats showed good productivity in reclaimed land, and adaptability differed among varieties; when growing forage crops in reclaimed land, selecting highly adaptable species and varieties is recommended.

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases, and then discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a given transaction can be seen through database analysis, since a transaction's weight is higher when it contains many items with high weights. We analyze the advantages and disadvantages, and compare the performance, of the best-known algorithms in transactional-weight-based frequent itemset mining (a minimal sketch of the transaction-weight idea follows this entry). As a representative of frequent itemset mining using transactional weights, WIS introduced the concept of and strategies for transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan once construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as the item and its transaction IDs. Traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain both of them. WIT-FWIs-MODIFY has a unique feature that decreases the operations needed to calculate the frequency of a new itemset, and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse), measuring runtime and maximum memory usage; a scalability test is also conducted to evaluate each algorithm's stability as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared with the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, is the least efficient because it requires far more computations than the others on average.
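
The transaction-weight concept the compared algorithms share can be stated in a few lines. A minimal sketch (illustrative data and weights, not the WIT-tree implementation): a transaction's weight is the mean of its items' weights, and an itemset's weighted support is the total weight of the transactions containing it.

```python
# Illustrative item weights and transactions (invented for this sketch).
item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = [{"a", "b"}, {"a", "c", "d"}, {"b", "c"}, {"a", "b", "d"}]

def transaction_weight(t):
    """Mean weight of the items in transaction t."""
    return sum(item_weight[i] for i in t) / len(t)

def weighted_support(itemset):
    """Total weight of the transactions that contain the itemset."""
    return sum(transaction_weight(t) for t in transactions if itemset <= t)

print(weighted_support({"a", "b"}))  # 0.75 + 0.7667 = 1.5167
```

An itemset is then reported as weighted-frequent when its weighted support exceeds a user-given threshold; the WIT-tree algorithms compute this kind of measure while scanning the database only once.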

NFC-based Smartwork Service Model Design (NFC 기반의 스마트워크 서비스 모델 설계)

  • Park, Arum;Kang, Min Su;Jun, Jungho;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.157-175 / 2013
  • Since the Korean government announced its 'Smartwork promotion strategy' in 2010, Korean firms and government organizations have started to adopt smartwork. However, smartwork has been implemented mainly in a few large enterprises and government organizations rather than in SMEs (small and medium enterprises). In the USA, both Yahoo! and Best Buy stopped their flexible work programs because of reported low productivity and job-loafing problems. From the literature on smartwork, we identified obstacles to smartwork adoption and categorized them into three types: institutional, organizational, and technological. The institutional obstacles include the difficulty of defining smartwork performance evaluation metrics, lack of readiness of organizational processes, limitations in smartwork types and models, lack of employee participation in the adoption procedure, high cost of building a smartwork system, and insufficient government support. The organizational obstacles include the limitations of organizational hierarchy, misperceptions by employees and employers, difficulty of close collaboration, low productivity with remote coworkers, insufficient understanding of remote working, and lack of training about smartwork. The technological obstacles include security concerns about mobile work, lack of specialized solutions, and lack of adoption and operation know-how. To overcome the current problems of smartwork in practice and the obstacles reported in the literature, we suggest a novel smartwork service model based on NFC (Near Field Communication), composed of an NFC-based smartworker networking service and an NFC-based smartwork space management service. The NFC-based smartworker networking service comprises an NFC-based communication/SNS service and an NFC-based recruiting/job-seeking service. The NFC-based communication/SNS service model remedies key shortcomings of existing smartwork service models: by connecting to a company's existing legacy systems through NFC tags and systems, low productivity and the difficulties of collaboration and attendance management can be overcome, since managers can obtain the work-processing, work-time, and workspace information of employees, while employees can communicate with coworkers in real time and obtain their location information. In short, this service model offers affordable system cost, location-based information, and the possibility of knowledge accumulation. The NFC-based recruiting/job-seeking service provides new value by linking NFC tag services with sharing-economy sites. It offers easy attachment and removal of the service, efficient space-based provision of work, easy search of location-based recruiting and job-seeking information, and system flexibility, combining the advantages of sharing-economy sites with those of NFC. By cooperating with sharing-economy sites, the model can supply recruiters with workers seeking not only long-term but also short-term work, and SMEs can easily find job seekers by attaching NFC tags to any space where qualified workers may be located. In short, this service model supports efficient distribution of human resources by providing the locations of recruiters and job seekers.
The NFC-based smartwork space management service promotes smartwork by linking NFC tags attached to workspaces with existing smartwork systems. It offers low cost, indoor and outdoor location information, and customized service. In particular, it can help small companies adopt smartwork because it is a lightweight, cost-effective system compared with existing smartwork systems. This paper presents scenarios for the service models, the roles and incentives of the participants, and a comparative analysis. The superiority of the NFC-based smartwork service model is shown by comparing the new service models with existing ones. The service model can expand the range of enterprises and organizations that adopt smartwork and the range of employees who benefit from it.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognition of an individual user's simple body movements to recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field, and gyroscope sensors are less privacy-sensitive and can collect a large amount of data in a short time. In this paper, we propose a deep learning method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope). Accompanying status is defined as a subset of user interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively conversing with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation. First, we introduce a data preprocessing method consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence-data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of the data collected from different sensors; normalization is performed on each x, y, z axis value of the sensor data; and sequence data are generated with a sliding window. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, to preserve the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM network consists of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimizer with a mini-batch size of 128. Dropout is applied to the input of the LSTM recurrent networks to prevent overfitting. The initial learning rate is 0.001, decreased exponentially by a factor of 0.99 at the end of each training epoch. (A minimal sketch of this architecture follows this entry.) An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that adapt models tailored to the training data to evaluation data that follows a different distribution. We expect to obtain a model with robust recognition performance against changes in the data that were not considered at the model learning stage.
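
As a reading aid, here is a minimal tf.keras sketch of the architecture described above: three convolutional layers with no pooling, dropout on the LSTM input, two 128-cell LSTM layers, and a softmax output, trained with Adam at learning rate 0.001 and batch size 128. The window length, channel count, filter counts, kernel sizes, and dropout rate are assumptions for illustration; the abstract states the layer structure but not all of these values.

```python
import tensorflow as tf

window_len, n_channels, n_classes = 128, 9, 2  # 9 = x/y/z of 3 physical sensors

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, n_channels)),
    # Three 1-D convolutions, no pooling, so temporal resolution is preserved.
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    # Dropout applied to the LSTM inputs, as described in the abstract.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

The per-epoch exponential decay of the learning rate (factor 0.99) can be added with a `tf.keras.callbacks.LearningRateScheduler` during `model.fit`.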

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although valuations of specific companies or projects, centered on developed countries in North America and Europe, have been performed since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have been taken up only gradually. There are several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. However, a web-based technology valuation system, the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax or litigation, has been officially launched and is now spreading. In this study, we introduce the types of methodology and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the modules and frameworks embedded in the STAR-Value system. In particular, the system provides six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, taking the contribution of the subject technology to the business value created as the royalty rate. (An illustrative DCF sketch follows this entry.) We examine how the models and related supporting information (technology life, corporate financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as the technology cycle time (TCT), sales growth rate and profitability data of similar companies or industry sectors, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value attains high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity is drawn from data-driven sources, or if estimated value ranges of similar technologies by industry sector are provided from completed evaluation cases accumulated in the database, the STAR-Value system is expected to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond explaining the valuation models and their primary variables as presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a similar-company-based market share prediction module, and so on.
In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it disseminates a web-based system through which the theoretical foundations of technology valuation can be validated and applied in practice, and the system is expected to be utilized in various fields of technology commercialization.
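
The core of the DCF method named above fits in a few lines. A minimal sketch (editorial illustration, not STAR-Value's implementation; all figures invented): discount the expected cash flows to present value, then scale by the share of business value attributed to the subject technology.

```python
def dcf_value(cash_flows, discount_rate, tech_factor=1.0):
    """Present value of future cash flows, scaled by the technology's
    contribution to the business value (the 'technology factor')."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    return pv * tech_factor

# Five years of expected free cash flow (in millions), a 12% discount rate
# (e.g. a WACC returned by the system), and a 30% technology contribution.
print(f"{dcf_value([10, 12, 14, 15, 15], 0.12, tech_factor=0.3):.1f}")  # ~14.0
```

The relief-from-royalty method has the same discounting skeleton, with each cash flow replaced by the royalty (revenue times royalty rate) that the technology owner is relieved from paying.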

Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.3 / pp.259-267 / 2004
  • Purpose: NEMA NU2-2001 was proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count rate performance in 2D and 3D mode, was evaluated using this new standard. Methods: The ECAT EXACT 47 is a BGO-crystal PET scanner covering an axial field of view (FOV) of 16.2 cm. Retractable septa allow 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns; energy window: 250~650 keV). For the spatial resolution measurement, an F-18 point source was placed at the center of the axial FOV ((a) x=0, y=1; (b) x=0, y=10; (c) x=10, y=0 cm) and at one fourth of the axial FOV from the center (the same transaxial positions), where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter without any post-processing. To measure system sensitivity, the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves was scanned at the center of the transaxial FOV and at 10 cm offset from the center; attenuation-free sensitivity values were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm in length, was filled with F-18 or C-11 solution (2D: 2,900 MBq; 3D: 407 MBq), and coincidence count rates were measured over 7 half-lives to obtain the noise equivalent count rate (NECR) and scatter fraction. We confirmed that the dead time loss of the last frame was below 1%. The scatter fraction was estimated by averaging the true-to-background (scatter + random) ratios of the last 3 frames, in which the random rates are negligibly small. (The standard NECR and scatter-fraction relations are sketched after this entry.) Results: Axial and transverse resolutions at 1 cm offset from the center were 0.62 and 0.66 cm (FBP, 2D and 3D) and 0.67 and 0.69 cm (FBP, 2D and 3D), respectively. Axial, transverse radial, and transverse tangential resolutions at 10 cm offset from the center were 0.72 and 0.68 cm (FBP, 2D and 3D), 0.63 and 0.66 cm (FBP, 2D and 3D), and 0.72 and 0.66 cm (FBP, 2D and 3D), respectively. Sensitivity values were 708.6 (2D) and 2,931.3 (3D) counts/sec/MBq at the center, and 728.7 (2D) and 3,398.2 (3D) counts/sec/MBq at 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). The peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The performance data for the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for quantitative data analysis and for determining optimal image acquisition protocols on this widely used clinical and research scanner.
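
For reference, the two derived quantities reported above follow standard NEMA definitions: the scatter fraction SF = S / (S + T) and the noise equivalent count rate NECR = T^2 / (T + S + R), where T, S, and R are the true, scattered, and random coincidence rates. A minimal sketch with invented example rates:

```python
def scatter_fraction(trues, scatters):
    """SF = S / (S + T)."""
    return scatters / (trues + scatters)

def necr(trues, scatters, randoms):
    """NECR = T^2 / (T + S + R)."""
    total = trues + scatters + randoms
    return trues ** 2 / total if total > 0 else 0.0

# e.g. 60 kcps trues, 25 kcps scatters, 15 kcps randoms:
print(f"SF = {scatter_fraction(60.0, 25.0):.2f}")   # 0.29
print(f"NECR = {necr(60.0, 25.0, 15.0):.1f} kcps")  # 36.0 kcps
```

NECR peaks at a lower activity concentration in 3D mode because scatter and randoms grow faster with activity when the septa are retracted, which is consistent with the 2D/3D peak values reported above.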

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1051-1054 / 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy". Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment; such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations, accounting reports listing nuclear material inventories, receipts and shipments, documents amplifying and clarifying reports as applicable, and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; and (c) evaluation of the information provided by the State and of that collected by inspectors, to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies.
To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. For the purpose of analyzing implementation strategy, it is assumed that non-compliance cannot be excluded a priori and that, consequently, there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of the various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulating smaller amounts each time so that they add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of inspectors' access; falsification of records, reports and other documents; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of a diversion and its concealment, or of other actions, anomalies will occur. All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies, so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility makes a different use of these procedures, equipment and instrumentation according to the various diversion strategies applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities, using material accountancy as the fundamental measure. The strength of material accountancy is that it allows detection of a diversion independent of the diversion route taken.
Material accountancy detects a diversion after it has actually happened; it is thus powerless to physically prevent diversion and can only deter, through the risk of early detection, any contemplation by State authorities of carrying one out. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would require difficult and expensive collection. Above all, realistic appraisal of the truth requires sound human judgment. (A toy illustration of grading such fuzzy indicators follows this entry.)
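
The "degree of fuzziness" noted in the conclusion is where fuzzy set theory enters: instead of a crisp anomaly/no-anomaly cut, each indicator is graded by a membership function and the grades are combined to support, not replace, the inspector's judgment. A toy illustration (an editorial sketch; the breakpoints are invented, not from the paper):

```python
def anomaly_degree(muf_kg, low=1.0, high=4.0):
    """Piecewise-linear fuzzy membership of a material-unaccounted-for (MUF)
    value in the set 'anomalous': 0 below `low`, 1 above `high`."""
    if muf_kg <= low:
        return 0.0
    if muf_kg >= high:
        return 1.0
    return (muf_kg - low) / (high - low)

for muf in (0.5, 2.0, 5.0):
    print(f"MUF = {muf} kg -> anomaly degree {anomaly_degree(muf):.2f}")
```

The output grades (0.00, 0.33, 1.00) give a graded scale on which small discrepancies are tolerated, intermediate ones flagged for human follow-up, and large ones treated as full anomalies.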
