• Title/Summary/Keyword: Quantitative parameters


International case study comparing PSA modeling approaches for nuclear digital I&C - OECD/NEA task DIGMAP

  • Markus Porthin;Sung-Min Shin;Richard Quatrain;Tero Tyrvainen;Jiri Sedlak;Hans Brinkman;Christian Muller;Paolo Picca;Milan Jaros;Venkat Natarajan;Ewgenij Piljugin;Jeanne Demgne
    • Nuclear Engineering and Technology
    • /
    • v.55 no.12
    • /
    • pp.4367-4381
    • /
    • 2023
  • Nuclear power plants are increasingly being equipped with digital I&C systems. Although some probabilistic safety assessment (PSA) models for the digital I&C of nuclear power plants have been constructed, there is currently no specific internationally agreed guidance for their modeling. This paper presents an initiative by the OECD Nuclear Energy Agency called "Digital I&C PSA - Comparative application of DIGital I&C Modelling Approaches for PSA (DIGMAP)", which aimed to advance the field towards practical and defendable modeling principles. The task, carried out in 2017-2021, used a simplified description of a plant focusing on the digital I&C systems important to safety, for which the participating organizations independently developed their own PSA models. Through comparison of the PSA models, sensitivity analyses as well as observations throughout the whole activity, both qualitative and quantitative lessons were learned. These include insights on failure behavior of digital I&C systems, experience from models with different levels of abstraction, benefits from benchmarking as well as major contributors to the core damage frequency and those with minor effect. The study also highlighted the challenges with modeling of large common cause component groups and the difficulties associated with estimation of key software and common cause failure parameters.
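
Among the quantitative lessons above, the study points to the difficulty of estimating common cause failure (CCF) parameters. For orientation only, here is a minimal sketch of the classical beta-factor CCF model commonly used in PSA; the failure rate, beta value, group size, and mission time are illustrative assumptions, not values from DIGMAP.

```python
# Minimal sketch of the beta-factor common cause failure (CCF) model
# often used in PSA for redundant component groups. All numbers are
# illustrative assumptions, not values from the DIGMAP study.

lambda_total = 1.0e-5   # assumed total failure rate of one component (per hour)
beta = 0.05             # assumed fraction of failures that are common cause

lambda_ccf = beta * lambda_total          # rate of failing the whole group at once
lambda_indep = (1 - beta) * lambda_total  # rate of independent single failures

mission_time = 8760.0   # hours (one year), assumed

# Probability that an entire redundant group fails from common cause during
# the mission time (rare-event approximation: p ~ rate * time).
p_group_ccf = lambda_ccf * mission_time
print(f"group CCF probability over one year: {p_group_ccf:.2e}")
print(f"independent failure rate per component: {lambda_indep:.2e} /h")
```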

Elemental Composition of the Soils using LIBS Laser Induced Breakdown Spectroscopy

  • Muhammad Aslam Khoso;Seher Saleem;Altaf H. Nizamani;Hussain Saleem;Abdul Majid Soomro;Waseem Ahmed Bhutto;Saifullah Jamali;Nek Muhammad Shaikh
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.200-206
    • /
    • 2024
  • Laser induced breakdown spectroscopy (LIBS) has been used to determine the elemental composition of soils. In this technique, a high-energy laser pulse is focused onto a sample to produce a plasma, and from spectroscopic analysis of the plasma plume we determined the elements present in the soil. The technique is effective and rapid for the qualitative and quantitative analysis of all types of samples. In this work, a Q-switched Nd:YAG laser operating at its fundamental wavelength (1064 nm) with a 5 ns pulse width and a 10 Hz repetition rate was focused onto soil samples using a 10 cm quartz lens. The emission spectra of the soil contain lines of Iron (Fe), Calcium (Ca), Titanium (Ti), Silicon (Si), Aluminum (Al), Magnesium (Mg), Manganese (Mn), Potassium (K), Nickel (Ni), Chromium (Cr), Copper (Cu), Mercury (Hg), Barium (Ba), Vanadium (V), Lead (Pb), Nitrogen (N), Scandium (Sc), Hydrogen (H), Strontium (Sr), and Lithium (Li), each with its distinct fingerprint transition lines. The maximum intensity of the transition lines was observed close to the surface of the sample and decreased along the axial direction of the plasma expansion due to thermalization and recombination processes. We also determined the plasma parameters: the electron temperature using the Boltzmann plot method and the electron number density from the Stark broadening of the transition lines. The electron temperature is estimated at 14,611 K, whereas the electron number density close to the surface is 4.1 × 10¹⁶ cm⁻³.
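
As a sketch of the Boltzmann plot method mentioned above: for optically thin lines in local thermodynamic equilibrium, ln(Iλ/(g_k A_ki)) plotted against the upper-level energy E_k falls on a straight line of slope −1/(k_B T_e), so a linear fit yields the electron temperature. The line data below are invented placeholders, not measurements from the paper.

```python
import numpy as np

# Minimal sketch of the Boltzmann plot method for electron temperature.
# The spectral line data (wavelengths, intensities, statistical weights,
# transition probabilities, upper-level energies) are invented
# placeholders, not values measured in the paper.
k_B = 8.617e-5  # Boltzmann constant in eV/K

E_k  = np.array([3.21, 3.65, 4.10, 4.55])       # upper-level energies (eV)
I    = np.array([1200.0, 500.0, 160.0, 48.0])   # measured line intensities (a.u.)
g_k  = np.array([9, 7, 5, 3])                   # statistical weights
A_ki = np.array([6.0e7, 5.2e7, 4.0e7, 3.1e7])   # transition probabilities (1/s)
lam  = np.array([404.6, 438.3, 495.7, 527.0])   # wavelengths (nm)

# ln(I * lambda / (g * A)) vs E_k is linear with slope -1/(k_B * T_e)
y = np.log(I * lam / (g_k * A_ki))
slope, intercept = np.polyfit(E_k, y, 1)
T_e = -1.0 / (k_B * slope)
print(f"estimated electron temperature: {T_e:.0f} K")
```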

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by the centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VMEbus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment, and high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor, using a quantitative approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of an if-then-rule-based fuzzy inference program from the introduction of specialized instructions, namely min and max instructions; the minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as the bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so modifying a microprocessor to create an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the FLIPS measured for each device. There is a large gap in run time between the ASIC and software approaches, even if we resort to a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes, the ASIC approach being extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

  Time              MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6,000 inferences  125 s                  49 s                        0.0038 s
  1 inference       20.8 ms                8.2 ms                      6.4 µs
  FLIPS             48                     122                         156,250
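
To make concrete why min and max instructions pay off, the inner loop of max-min inference is dominated by exactly these two operations: rule firing strength is the min of the antecedent memberships (fuzzy intersection), and rule outputs are combined by max (fuzzy union). A minimal Python sketch with invented rule data follows; the chip and the R3000 measurements above of course used dedicated hardware and a C simulator, not Python.

```python
import numpy as np

# Minimal sketch of Mamdani-style max-min inference for rules of the
# simpler format 'IF A and B THEN Do E', showing where min and max
# dominate the inner loop. Membership arrays of 64 elements mirror the
# representation in the abstract; the rule data are invented placeholders.

N = 64  # each fuzzy set is an array of 64 membership values in [0, 1]

def tri(center, width=12):
    """Triangular membership function (placeholder shape)."""
    x = np.arange(N)
    return np.clip(1 - np.abs(x - center) / width, 0, 1)

def infer(rules, a, b):
    """Evaluate all rules for crisp input indices a and b."""
    out = np.zeros(N)
    for mu_A, mu_B, mu_E in rules:
        w = min(mu_A[a], mu_B[b])                   # firing strength: fuzzy AND = min
        out = np.maximum(out, np.minimum(w, mu_E))  # clip and combine: fuzzy OR = max
    return out

def defuzzify(mu):
    """Centroid defuzzification, as performed on-chip by the UNC/MCNC design."""
    x = np.arange(N)
    return float((x * mu).sum() / max(mu.sum(), 1e-12))

rules = [(tri(16), tri(16), tri(48)),
         (tri(48), tri(48), tri(16))]
print(f"crisp output: {defuzzify(infer(rules, a=20, b=18)):.1f}")
```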


Quantitative Assessment of Myocardial Tissue Velocity in Normal Children with Doppler Tissue Imaging: Reference Values, Growth and Heart Rate Related Change (소아에서 도플러 조직영상을 이용한 최대 심근 속도의 계측 : 정상 추정치 및 성장 및 심박동수에 따른 변화)

  • Kim, Se Young;Hyun, Myung Chul;Lee, Sang Bum
    • Clinical and Experimental Pediatrics
    • /
    • v.48 no.8
    • /
    • pp.846-856
    • /
    • 2005
  • Purpose: To measure the peak myocardial tissue velocities and the patterns of longitudinal motion of the atrioventricular (AV) annuli, and to assess body weight- and heart rate-related changes in normal children. Methods: Using pulsed wave tissue Doppler imaging (TDI), we measured peak systolic, early diastolic, and late diastolic myocardial velocities in 72 normal children at six different sites in the apical 4-chamber (A4C) view and at four different sites in the apical 2-chamber (A2C) view, compared those values with each other, and examined the effects of body weight and heart rate. Longitudinal motion of the AV annuli was measured at three different sites in the A4C view. Results: There were no significant differences in the TDI parameters between genders, between echo machines, or among the three physicians performing TDI. Peak myocardial velocities were significantly higher at the base of the heart than in the mid-ventricular region, and in the right ventricular lateral wall than in the left ventricular lateral wall or the interventricular septum (IVS). The TDI parameters showed no significant correlation with fractional shortening (%). Peak systolic and early diastolic myocardial velocities had no correlation with heart rate, but peak late diastolic velocities and the A/E ratio correlated positively with heart rate. Correlations between the TDI parameters and body weight were inconsistent. Absolute longitudinal displacement and percentage displacement did not differ between genders and were not correlated with the TDI parameters. Conclusion: We measured the peak myocardial velocities with TDI and the longitudinal motion of the AV annuli with M-mode echocardiography in normal children. With larger-scale evaluation, we may establish reference values in normal children and broaden clinical applicability to congenital and acquired heart diseases.

Gadoteridol's Signal Change according to TR, TE Parameters in T1 Image (T1영상에서 TR, TE 매개변수에 따른 Gadoteridol의 신호강도 변화)

  • Jeong, Hyun Keun;Jeong, Hyun Do;Nam, Ki Chang;Kim, Ho Chul
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.9
    • /
    • pp.117-124
    • /
    • 2015
  • In this paper, we introduce how to control the TR and TE physical MR parameters to manage the signal intensity (SI) of ¹H spins interacting with gadolinium following administration of an MR contrast agent, exploiting the T1 effect for diagnostic usefulness. We used an MRI phantom made with 0.5 mol Gadoteridol. The phantom was scanned with an FSE sequence using different TR and TE parameters. In this study, to produce the T1 effect, TR was set to 200, 250, 300, 350, 400, 450, 500, 550, and 600 msec, and TE was set to 6.2, 12.4, 18.6, and 21.6 msec. The results were as follows: the reaction starting point (RSP) was 100, 50, 40, and 30 mmol at TE 6.2, 12.4, 18.6, and 21.6 msec, respectively, irrespective of TR. For the max peak signal intensity (MPSI), the peak appeared at 4 mmol at TR 200 msec, while the peak signal shifted to lower concentrations at TR 250-600 msec. In terms of the reaction area (RA), the highest SI was at TE 6.2 msec for TR 200-600 msec. According to this study, it is possible to control enhancement rates by managing the TR and TE MR parameters; moreover, we expect that enhanced T1 imaging can be performed in a practical way in the clinical MR field using these quantitative data.
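
The TR/TE behavior the authors exploit follows the standard spin-echo signal model, SI ∝ ρ(1 − e^(−TR/T1))·e^(−TE/T2): a gadolinium agent shortens T1, raising the signal at short TR. A minimal sketch follows; the relaxation times are illustrative assumptions, not values measured for the Gadoteridol phantom.

```python
import numpy as np

# Minimal sketch of the spin-echo signal model underlying T1-weighted
# contrast: SI ~ rho * (1 - exp(-TR/T1)) * exp(-TE/T2).
# The relaxation times below are illustrative assumptions, not values
# measured for the 0.5 mol Gadoteridol phantom in the paper.

def spin_echo_si(TR, TE, T1, T2, rho=1.0):
    return rho * (1 - np.exp(-TR / T1)) * np.exp(-TE / T2)

T1_short, T2_short = 300.0, 80.0   # gadolinium-shortened tissue (assumed, msec)
T1_long, T2_long = 1200.0, 100.0   # unenhanced tissue (assumed, msec)

for TR in [200, 400, 600]:         # msec, within the scanned TR range
    for TE in [6.2, 21.6]:         # msec, shortest and longest TE used
        contrast = (spin_echo_si(TR, TE, T1_short, T2_short)
                    - spin_echo_si(TR, TE, T1_long, T2_long))
        print(f"TR={TR:3d} TE={TE:4.1f} -> T1 contrast {contrast:+.3f}")
```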

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.17-35
    • /
    • 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important, since corporations can make strategic decisions on new markets based on the future demand the models estimate. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among these, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure qualified results. In the beginning of a new market, however, observations are not sufficient for the models to precisely estimate the market's future demand. For this reason, as an alternative, demands inferred from those of the most adjacent markets are often used as references. Reference markets can be those whose products are developed with the same categorical technologies. A market's demand may be expected to follow a pattern similar to that of a reference market when the adoption pattern of a product in the market is determined mainly by the technology related to the product. However, such a process does not always ensure pleasing results, because the similarity between markets depends on intuition and/or experience. There are two major drawbacks that human experts cannot effectively handle in this approach: one is the abundance of candidate reference markets to consider, and the other is the difficulty of calculating the similarity between markets. First, there can be too many markets to consider in selecting reference markets. Mostly, markets in the same category of an industrial hierarchy can be reference markets, because they are usually based on similar technologies. However, markets can be classified into different categories even if they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Next, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. The inconsistency implies missing adjacent reference markets, which may lead to imprecise estimation of future demand. Even when no reference markets are missing, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For this reason, this study proposes a case-based expert system that helps experts overcome these drawbacks in discovering reference markets. First, the study proposes the use of the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters, and missing markets with the characteristics of each cluster are then searched for. Potential candidate reference markets are extracted and recommended to users. After iterating these steps, definite reference markets are determined according to the user's selection among the candidates, and finally the new market's parameters are estimated from the reference markets. Two techniques are used in this procedure: one is clustering, a data mining technique, and the other is content-based filtering from recommender systems. The proposed system, implemented with these techniques, can determine the most adjacent markets based on whether a user accepts the candidate markets.
Experiments were conducted with five ICT experts to validate the usefulness of the system. In the experiments, the experts were given a list of 16 ICT markets whose parameters were to be estimated. For each market, the experts first estimated its growth curve model parameters by intuition, and then with the system. Comparison of the experimental results shows that the estimated parameters are closer to the actual values when the experts use the system than when they guess without it.
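
As a sketch of the Bass model parameter estimation the experts performed: cumulative adoption follows F(t) = m·(1 − e^(−(p+q)t)) / (1 + (q/p)·e^(−(p+q)t)), with innovation coefficient p, imitation coefficient q, and market potential m. A minimal least-squares fit with invented observations (not the paper's 16 ICT markets) might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of fitting Bass diffusion model parameters (p: innovation,
# q: imitation, m: market potential) to early adoption data by nonlinear
# least squares. The observations and starting values are invented
# placeholders, not data from the paper.

def bass_cumulative(t, p, q, m):
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(1, 9)  # 8 periods of observations for a young market
cum_adopters = np.array([12, 30, 62, 110, 180, 265, 360, 450])  # invented

(p, q, m), _ = curve_fit(bass_cumulative, t, cum_adopters,
                         p0=[0.03, 0.4, 1000], maxfev=10000)
print(f"p={p:.3f}, q={q:.3f}, m={m:.0f}")
```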

Measuring Consumer-Brand Relationship Quality (소비자-브랜드 관계 품질 측정에 관한 연구)

  • Kang, Myung-Soo;Kim, Byoung-Jai;Shin, Jong-Chil
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.111-131
    • /
    • 2007
  • As a brand becomes a core asset in creating a corporation's value, brand marketing has become one of the core strategies that corporations pursue. Recently, customer relationship management has come to center on brands rather than on the mere possession and consumption of goods, and brand-centered management has developed accordingly. The main reason for the increased interest in the relationship between the brand and the consumer is the acquisition of individual consumers and the development of relationships with them. Through such relationships, a corporation can establish long-term ties with consumers; these have become a competitive advantage and a strategic asset for the corporation. The growing importance of, and interest in, brands has also become a major academic issue. Brand equity, brand extension, brand identity, brand relationship, and brand community are research streams derived from this interest. More specifically, in marketing, the study of brands has led to the study of the factors involved in building powerful brands and of the brand-building process. Recent studies have concentrated primarily on the consumer-brand relationship, because brand loyalty cannot explain the dynamic quality aspects of loyalty, the consumer-brand relationship building process, and especially the interactions between brands and consumers. In studies of the consumer-brand relationship, a brand is not limited to an object of possession or consumption but is conceptualized as a partner. Most past studies concentrated on qualitative analysis of the consumer-brand relationship to show the depth and breadth of its performance, and studies in Korea have done the same. Recently, studies of the consumer-brand relationship have started to concentrate on quantitative rather than qualitative analysis, or have gone further to identify the factors affecting the consumer-brand relationship. These new quantitative approaches show the possibility of using their results as a new way of viewing the consumer-brand relationship and of applying the resulting concepts to marketing. Quantitative studies of the consumer-brand relationship already exist, but none of them provide theoretical support for measuring its sub-dimensions. In other words, most studies simply add up or average the sub-dimensions of the consumer-brand relationship. However, such treatment presupposes that the sub-dimensions form an identical construct, and most past studies do not meet the condition that the sub-dimensions constitute a one-dimensional construct. From this, we question the validity of past studies and note their limits. The main purpose of this paper is to overcome these limits through practical use of previous studies that treat sub-dimensions as a one-dimensional construct (Narver & Slater, 1990; Cronin & Taylor, 1992; Chang & Chen, 1998). In this study, two arbitrary groups were formed to evaluate the reliability of the measurements, and reliability analyses were performed on each group. For convergent validity, correlations, Cronbach's alpha, and a one-factor-solution exploratory factor analysis were used. For discriminant validity, the correlations of the consumer-brand relationship were compared with those of involvement, a concept similar to the consumer-brand relationship.
We also tested dependent correlations following Cohen and Cohen (1975, p. 35), and the results showed that involvement is a construct distinct from the six sub-dimensions of the consumer-brand relationship. Through the results of the studies mentioned above, we conclude that the sub-dimensions of the consumer-brand relationship can be viewed as a one-dimensional construct, and that this one-dimensional construct can be measured with reliability and validity. The result of this research is theoretically meaningful in that it treats the consumer-brand relationship as a one-dimensional construct and provides a methodological basis for doing so. This research also opens the possibility of new research on the consumer-brand relationship, in that it shows it is possible to work with a one-dimensional construct composed of the consumer-brand relationship's sub-dimensions. Previous research classified the consumer-brand relationship into several types on the basis of its components, and a number of studies were performed with priority given to those types. However, as a one-dimensional construct can now be used, it is expected that various studies will make practical use of the level or strength of the consumer-brand relationship, rather than focusing on separate types. Additionally, we now have a theoretical basis for using the consumer-brand relationship construct in the practical role of dependent variable, parameter, mediator, and so on in future studies.
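
As a sketch of the reliability analysis described above: Cronbach's alpha for k items is α = k/(k−1) · (1 − Σσ²_item / σ²_total). A minimal computation on invented survey-like data (not the study's data) follows:

```python
import numpy as np

# Minimal sketch of Cronbach's alpha, the internal-consistency measure
# used in the reliability analysis above. The response matrix
# (respondents x items) is invented placeholder data, not the study's
# survey data.

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                     # number of items
    item_vars = scores.var(axis=0, ddof=1)  # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                 # shared latent trait
items = latent + 0.5 * rng.normal(size=(100, 6))   # six correlated items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```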


The Evaluation of SUV Variations According to the Errors of Entering Parameters in the PET-CT Examinations (PET/CT 검사에서 매개변수 입력오류에 따른 표준섭취계수 평가)

  • Kim, Jia;Hong, Gun Chul;Lee, Hyeok;Choi, Seong Wook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.43-48
    • /
    • 2014
  • Purpose: In PET/CT images, the standardized uptake value (SUV) enables quantitative assessment of the biological changes of organs and serves as an index for distinguishing whether a lesion is malignant. It is therefore important to correctly enter the parameters that affect the SUV. The purpose of this study is to evaluate the allowable error range of the SUV by measuring how the results differ with input errors in the activity, weight, and uptake time parameters. Materials and Methods: Three inserts, hot, Teflon, and air, were placed in the 1994 NEMA phantom. The phantom was filled with 27.3 MBq/mL of ¹⁸F-FDG, and the ratio of hot-spot area activity to background area activity was set to 4:1. After scanning, images were re-reconstructed after introducing input errors of ±5%, 10%, 15%, 30%, and 50% from the original data in the activity, weight, and uptake time parameters. Regions of interest (ROIs) were set, one in each insert area and four in the background areas. SUVmean and percentage differences were calculated and compared for each area. Results: The SUVmean of the hot, Teflon, air, and background (BKG) areas in the original images was 4.5, 0.02, 0.1, and 1.0, respectively. With activity errors, the minimum and maximum SUVmean were 3.0 and 9.0 in the hot, 0.01 and 0.04 in the Teflon, 0.1 and 0.3 in the air, and 0.6 and 2.0 in the BKG areas, with percentage differences from -33% to 100% in all areas. With weight errors, the SUVmean ranged from 2.2 to 6.7 in the hot, 0.01 to 0.03 in the Teflon, 0.09 to 0.28 in the air, and 0.5 to 1.5 in the BKG areas, with percentage differences from -50% to 50% in all areas except the Teflon area, where they ranged from -50% to 52%. With uptake time errors, the SUVmean ranged from 3.8 to 5.3 in the hot, 0.01 to 0.02 in the Teflon, 0.1 to 0.2 in the air, and 0.8 to 1.2 in the BKG areas; percentage differences ranged from 17% to -14% in the hot and BKG areas, from -50% to 52% in the Teflon area, and from -12% to 20% in the air area. Conclusion: As shown in the results, if the allowable SUV error range is set within 5%, activity and weight errors must be kept within ±5%. The dose calibrator and the scales therefore have to be calibrated to within a ±5% error range, because they affect the activity and weight values. For time errors, the allowable range differed by insert type: the hot and BKG areas stayed within a 5% SUV error when the time error was within ±15%. We therefore have to account for the error of each clock when two or more clocks, including the scanner's, are used during examinations.
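
The sensitivities reported above follow directly from the SUV definition: SUV = C_tissue / (injected activity, decay-corrected to scan time / body weight), where the uptake time enters through the decay correction. A minimal sketch of the error propagation, with illustrative numbers rather than the phantom's values:

```python
# Minimal sketch of how the SUV depends on the entered parameters:
# SUV = tissue concentration / (decay-corrected injected activity / weight).
# All numbers are illustrative assumptions, not the phantom values.

F18_HALF_LIFE_MIN = 109.77  # physical half-life of F-18 (minutes)

def suv(c_tissue_bq_ml, injected_mbq, weight_g, uptake_min):
    decay = 0.5 ** (uptake_min / F18_HALF_LIFE_MIN)  # decay to scan time
    activity_at_scan_bq = injected_mbq * 1e6 * decay
    return c_tissue_bq_ml / (activity_at_scan_bq / weight_g)

defaults = dict(c_tissue_bq_ml=25_000.0, injected_mbq=370.0,
                weight_g=70_000.0, uptake_min=60.0)
base = suv(**defaults)

# An entry error in each parameter shifts the SUV in a predictable way:
# activity enters inversely, weight proportionally, and uptake time
# through the decay factor.
cases = {"activity +10%": dict(injected_mbq=407.0),
         "weight +10%": dict(weight_g=77_000.0),
         "uptake time +10 min": dict(uptake_min=70.0)}
for label, override in cases.items():
    v = suv(**{**defaults, **override})
    print(f"{label}: SUV changes by {100 * (v / base - 1):+.1f}%")
```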


Optimum Design of Soil Nailing Excavation Wall System Using Genetic Algorithm and Neural Network Theory (유전자 알고리즘 및 인공신경망 이론을 이용한 쏘일네일링 굴착벽체 시스템의 최적설계)

  • 김홍택;황정순;박성원;유한규
    • Journal of the Korean Geotechnical Society
    • /
    • v.15 no.4
    • /
    • pp.113-132
    • /
    • 1999
  • Recently in Korea, the application of soil nailing has gradually been extended to excavation and slope sites with various ground conditions and field characteristics. Design of soil nailing is generally carried out in two steps. The first step is to examine the minimum safety factor against sliding of the reinforced nailed-soil mass based on the limit equilibrium approach, and the second step is to check the maximum displacement expected to occur at the facing using numerical analysis techniques. However, the design parameters of a soil nailing system are so varied that a reliable design method considering the interrelationships between them remains necessary. Additionally, taking into account the anisotropic characteristics of in-situ ground, disturbance during soil sampling, and measurement errors, a systematic analysis of the field measurement data as well as a rational optimum design technique is required to improve economic efficiency. To these ends, the present study proposes a procedure for the optimum design of a soil nailing excavation wall system. Focusing on minimizing construction cost, the optimum design procedure is formulated based on a genetic algorithm, and neural network theory is further adopted to predict the maximum horizontal displacement at the shotcrete facing. Using the proposed procedure, the effects of the relevant design parameters are also analyzed. Finally, an optimized design section is compared with the existing design section at an excavation site under construction, in order to verify the validity of the proposed procedure.
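
A minimal sketch of the kind of optimization loop described: a genetic algorithm evolves candidate designs to minimize cost, penalizing designs whose predicted facing displacement (here a simple surrogate standing in for the paper's trained neural network) exceeds a limit. The design variables, cost model, displacement surrogate, and limit are all invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate design: [nail length (m), horizontal spacing (m)] - placeholders.
LOW, HIGH = np.array([3.0, 0.8]), np.array([12.0, 2.5])

def cost(x):
    length, spacing = x
    return length / spacing  # invented proxy: steel per wall area drives cost

def predicted_displacement(x):
    length, spacing = x
    # invented surrogate standing in for the paper's trained neural network
    return 30.0 * spacing / length  # mm

def fitness(x):
    penalty = 1e3 * max(0.0, predicted_displacement(x) - 15.0)  # 15 mm limit
    return cost(x) + penalty

pop = rng.uniform(LOW, HIGH, size=(40, 2))
for _ in range(100):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[:20]]           # truncation selection
    kids = (parents[rng.integers(0, 20, 20)] +
            parents[rng.integers(0, 20, 20)]) / 2    # arithmetic crossover
    kids += rng.normal(0, 0.05, kids.shape) * (HIGH - LOW)  # mutation
    pop = np.clip(np.vstack([parents, kids]), LOW, HIGH)

best = min(pop, key=fitness)
print(f"best design: length={best[0]:.2f} m, spacing={best[1]:.2f} m")
```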


Dynamic Salivary Gland Scintigraphy in Clinical Sicca Syndrome: Comparison with Static images (구내 건조증을 호소하는 환자에서 역동적 타액선 신티그라피: 정적영상과의 비교)

  • Kim, Euy-Neyng;Sohn, Hyung-Sun;Choi, Jung-Eun;Kim, Sung-Hoon;Chung, Yong-An;Chung, Soo-Kyo;Kim, Choon-Yul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.35 no.1
    • /
    • pp.43-51
    • /
    • 2001
  • Purpose: In this study, we compared the quantitative characteristics of dynamic salivary gland scintigraphy with static scintigraphy in patients with clinical sicca syndrome using Tc-99m pertechnetate. Materials and Methods: Fifty-two parotid glands and 52 submandibular glands of 26 patients with clinical sicca syndrome were studied by dynamic and static salivary gland scintigraphy. Ten normal volunteers were also studied as a control group for comparison of the scintigraphic parameters. Ten minutes after injection of 370 MBq of Tc-99m pertechnetate, we obtained pre-stimulus static images for a few minutes. Dynamic salivary gland scintigraphy with lemon juice stimulation was then performed for 20 minutes, and finally we obtained post-stimulus static images. From the dynamic study, functional parameters such as the uptake rate, secretion rate, and re-uptake rate were calculated, and the results of the dynamic study and the static images were compared. Results: From the dynamic study, we could successfully obtain the functional parameters of the salivary glands. On the dynamic study, 22 of the 52 parotid glands and 22 of the 52 submandibular glands were abnormal. The static images demonstrated somewhat different results, the reasons for which we could infer from the dynamic study. Conclusion: Dynamic salivary gland scintigraphy using Tc-99m pertechnetate provides more functional information than static imaging and may be useful in assessing functional changes of the salivary glands in patients with clinical sicca syndrome.
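
A minimal sketch of how functional parameters like the uptake, secretion, and re-uptake rates above can be derived from a gland's time-activity curve. The curve shape and the exact rate definitions are illustrative assumptions, since the abstract does not give explicit formulas:

```python
import numpy as np

# Minimal sketch of deriving functional parameters from a salivary gland
# time-activity curve (counts per frame). The curve and rate definitions
# are illustrative assumptions; the abstract gives no explicit formulas.

t = np.arange(0, 30, 0.5)  # minutes; lemon juice stimulus assumed at t = 20
counts = np.where(t < 20,
                  1000 * (1 - np.exp(-t / 6)),          # gradual uptake
                  1000 * (1 - np.exp(-20 / 6)) * 0.55   # drop after stimulus
                  + 8 * (t - 20))                       # slow re-uptake

stim = np.searchsorted(t, 20.0)
pre_max = counts[:stim].max()        # peak activity before stimulation
post_min = counts[stim:].min()       # trough after secretion

uptake_rate = pre_max / 20.0                      # counts/min up to stimulus
secretion_rate = (pre_max - post_min) / pre_max   # fraction washed out
t_min = t[stim:][counts[stim:].argmin()]          # time of post-stimulus trough
reuptake_rate = (counts[-1] - post_min) / (t[-1] - t_min)

print(f"uptake {uptake_rate:.1f} cts/min, secretion {secretion_rate:.0%}, "
      f"re-uptake {reuptake_rate:.1f} cts/min")
```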
