• Title/Summary/Keyword: data demand (데이터 수요)

Search Results: 40,285

Analysis of domestic and overseas coastal groundwater management laws and policies (국내외 해안 지하수관리 법·정책 사례 분석)

  • Shim, Young-Gyoo;Chung, Il-Moon;Chang, Sun Woo
    • Journal of Korea Water Resources Association / v.57 no.9 / pp.633-643 / 2024
  • Many coastal countries have developed and used a wide range of technologies and policy measures to protect freshwater aquifers and groundwater resources from seawater intrusion, and have established legal and institutional foundations to support them. This study covers coastal states in the eastern United States, the Netherlands, India, and Japan. Its goal is to analyze each country's legal and policy measures for coastal groundwater management. By introducing Jeju Island's groundwater standard level system, we aim to provide a basis for future discussions on groundwater management measures not only for Jeju Island but also for other coastal areas of Korea. The analysis shows that, although coastal groundwater management varies in content and emphasis with local issues and characteristics around the world, countries pursue the common goals of securing a stable amount of groundwater withdrawal, preventing seawater intrusion, and maximizing the efficiency of groundwater management by building optimal management measures, laws, systems, and policies around several key factors. First, in view of the hydrogeological characteristics and status of coastal groundwater, a separate special management system is established and implemented within the scope of the national groundwater management system. Second, preventing groundwater level decline by limiting the amount of withdrawal and preventing seawater intrusion are key policy goals and policy tools, supported by research and development. Finally, it was found that synergy effects are sought by combining various other policy tools and measures.

Association between the usage of dental floss and interdental brushes and the prevalence of systemic diseases (치실 및 치간칫솔 사용과 전신질환 유병률의 연관성)

  • Seon-Jip Kim;Hye-Jin Kwon;Hyun-Jae Cho
    • Journal of Korean Dental Hygiene Science / v.7 no.1 / pp.17-28 / 2024
  • Background: Oral health has a significant impact on systemic health, and the close association between oral and systemic diseases has been continuously reported. To prevent oral diseases, the role of oral hygiene products such as dental floss and interdental brushes, in addition to tooth brushing, is becoming increasingly important. This study aims to analyze the effect of using oral hygiene products on the lifetime prevalence of systemic diseases among Korean adults. Methods: This study utilized data from the 7th Korea National Health and Nutrition Examination Survey (2016-2018). The study population consisted of 13,199 adults aged 19 years and older. The independent variable was the use of oral hygiene products, and the dependent variable was the prevalence of systemic diseases diagnosed by a physician. Demographic variables, health status, and behavioral variables were included as covariates, and multivariable logistic regression analysis was performed. Results: The use of dental floss showed no significant association with the prevalence of systemic diseases. However, those who did not use interdental brushes had a 22% lower likelihood of dyslipidemia (OR 0.777, 95% CI 0.660-0.913). Among participants with periodontal disease, those who did not use dental floss had a significantly higher risk of myocardial infarction (OR 11.488, 95% CI 1.438-91.772). Conversely, those who did not use interdental brushes had lower risks of dyslipidemia, myocardial infarction, and angina, particularly among women and individuals under 65 years of age. Conclusion: This study found a low overall association between the use of oral hygiene products and the prevalence of systemic diseases, but there was a notable association with cardiovascular diseases. To reduce the risk of myocardial infarction, the prevention and treatment of periodontal disease, along with proper oral hygiene management, are crucial. Future prospective studies are needed to clearly establish the causal relationship between oral hygiene and systemic diseases.
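
For readers unfamiliar with how such odds ratios are obtained, the sketch below shows a multivariable logistic regression of the kind described, using statsmodels; the dataset, variable names, and effect sizes are synthetic illustrations, not the KNHANES data.

```python
# Illustrative sketch (not the authors' code) of a multivariable logistic
# regression yielding odds ratios with 95% CIs, as in the KNHANES analysis
# above. The data, variable names, and effect sizes are all synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "uses_interdental_brush": rng.integers(0, 2, n),   # independent variable
    "age": rng.integers(19, 80, n),                    # covariate
    "female": rng.integers(0, 2, n),                   # covariate
})
# Hypothetical outcome: physician-diagnosed dyslipidemia
lin = -2.0 + 0.03 * (df["age"] - 50) - 0.25 * df["uses_interdental_brush"]
df["dyslipidemia"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(float)

X = sm.add_constant(df[["uses_interdental_brush", "age", "female"]])
res = sm.Logit(df["dyslipidemia"], X).fit(disp=False)

# Exponentiate coefficients and CI bounds to get ORs with 95% CIs
table = pd.concat([np.exp(res.params).rename("OR"),
                   np.exp(res.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                  axis=1)
print(table)
```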

Development of an Automated Algorithm for Analyzing Rainfall Thresholds Triggering Landslide Based on AWS and AMOS

  • Donghyeon Kim;Song Eu;Kwangyoun Lee;Sukhee Yoon;Jongseo Lee;Donggeun Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.9 / pp.125-136 / 2024
  • This study presents an automated Python algorithm for analyzing rainfall characteristics to establish critical rainfall thresholds as part of a landslide early warning system. Rainfall data were sourced from the Korea Meteorological Administration's Automatic Weather System (AWS) and the Korea Forest Service's Automatic Mountain Observation System (AMOS), while landslide data from 2020 to 2023 were gathered via the Life Safety Map. The algorithm involves three main steps: 1) processing rainfall data to correct inconsistencies and fill data gaps, 2) identifying the nearest observation station to each landslide location, and 3) conducting statistical analysis of rainfall characteristics. The analysis used power-law and nonlinear regression, yielding an average R² of 0.45 for the relationships between rainfall intensity and duration, effective rainfall and duration, antecedent rainfall and duration, and maximum hourly rainfall and duration (an illustrative sketch of the power-law fit follows this abstract). The critical thresholds identified were 0.9-1.4 mm/hr for rainfall intensity, 68.5-132.5 mm for effective rainfall, 81.6-151.1 mm for antecedent rainfall, and 17.5-26.5 mm for maximum hourly rainfall. Validation using AUC-ROC analysis showed a low AUC value of 0.5, highlighting the limitations of using rainfall data alone to predict landslides; an evaluation of the algorithm's runtime showed a total processing time of 30 minutes. Nevertheless, to mitigate loss of life and property damage from disasters, it is crucial to establish criteria using quantitative and easily interpretable methods, and the algorithm developed in this study is expected to contribute to reducing damage by providing a quantitative evaluation of the critical rainfall thresholds that trigger landslides.
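
As promised above, here is a minimal sketch of an intensity-duration power-law fit of the form I = α·D^β; synthetic events stand in for the AWS/AMOS data, and the 10th-percentile lower envelope is an assumption, not the paper's exact thresholding procedure.

```python
# Illustrative power-law fit, I = alpha * D**beta, for the rainfall
# intensity-duration relationship. Synthetic events stand in for the
# AWS/AMOS-derived data.
import numpy as np

rng = np.random.default_rng(1)
duration = rng.uniform(2, 72, 60)                               # hours
intensity = 8.0 * duration ** -0.6 * rng.lognormal(0, 0.3, 60)  # mm/hr

# Linear regression in log-log space: log I = log(alpha) + beta * log D
beta, log_alpha = np.polyfit(np.log(duration), np.log(intensity), 1)
pred = np.exp(log_alpha) * duration ** beta

resid = np.log(intensity) - np.log(pred)
r2 = 1 - resid.var() / np.log(intensity).var()
print(f"I = {np.exp(log_alpha):.2f} * D^{beta:.2f}, R^2 = {r2:.2f}")

# Shift the fit down to the 10th percentile of residuals (an assumed choice)
# to get a critical curve below which triggering rainfall is rarely observed.
alpha_crit = np.exp(log_alpha + np.quantile(resid, 0.10))
print(f"critical curve: I = {alpha_crit:.2f} * D^{beta:.2f}")
```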

The Causes of Conflict and the Effect of Control Mechanisms on Conflict Resolution between Manufacturer and Supplier (제조-공급자간 갈등 원인과 거래조정 방식의 갈등관리 효과)

  • Rhee, Jin Hwa
    • Journal of Distribution Research / v.17 no.4 / pp.55-80 / 2012
  • I. Introduction: Developing relationships between companies is a very important issue for ensuring a competitive advantage in today's business environment (Bleeke & Ernst 1991; Mohr & Spekman 1994; Powell 1990). Partnerships between companies are based on shared goals, the pursuit of mutual understanding, and a professional level of interdependence. Through such partnerships and cooperative efforts, companies can achieve efficiency and effectiveness in their business (Mohr and Spekman, 1994). However, these ideal results are difficult to expect from the B2B contract alone. According to agency theory, which is well accepted across business strategy, organization, and marketing, two independent companies have fundamentally different corporate purposes. There is also a high chance of opportunism and conflict arising from human and organizational natures such as self-interest, bounded rationality, and risk aversion, and from environmental factors such as information imbalance (Eisenhardt 1989). That is, especially in partnerships between the principal (buyer) and agent (supplier) within a supply chain, the business contract itself does not provide competitive advantage; managing the partnership is the key to success. Therefore, managing the partnership between manufacturer and supplier and finding the causes of conflict are essential to improving B2B performance. Based on prior research and agency theory, this study clarifies how business hazards cause conflicts in the supply chain and identifies how the resulting conflicts are managed by two control mechanisms. II. Research Model. III. Method: To validate our research model, this study gathered questionnaires from small and medium-sized enterprises (SMEs). In Korea, SMEs are firms with fewer than 300 employees and capital under 8 billion won (about 7.2 million dollars). We asked about the manufacturer's perception of the relationship with its biggest supplier, and our key informants were limited to persons responsible for buying (e.g., CEOs, executives, and managers of purchasing departments). In detail, we contacted our initial sample (about 1,200 firms) by telephone, introduced our research motivation, and sent questionnaires by e-mail, mail, and direct survey. We received 361 responses and eliminated 32 inappropriate questionnaires, leaving 329 manufacturers' data for analysis. The purpose of this study is to identify the antecedent role of business hazards (environmental dynamism, asset specificity) and investigate the moderating effect of control mechanisms (formal control, social control) on the conflict-performance relationship. To find the moderating effect of the control methods, we compare the regression weights between low and high groups (by level of exercised control methods); we therefore chose structural equation modeling, which is suited to multi-group analysis. The data analysis was performed with AMOS 17.0, and model fit was statistically good (CMIN/DF=1.982, p<.001, CFI=.936, IFI=.937, RMSEA=.056). IV. Result. V. Discussion: Results show that the higher the environmental dynamism and asset specificity (toward a particular supplier) a buyer (manufacturer) faces, the more B2B conflict exists, and this conflict affects relationship quality and financial outcomes negatively. In addition, social control and formal control significantly weaken the negative effect of conflict on relationship quality. However, unlike the conflict-resolution effect of the control mechanisms on relationship quality, financial outcomes were changed by neither social control nor formal control. We can explain this result by the characteristics of our sample, SMEs: the financial outcomes of these SMEs (manufacturers or principals) are affected by their customers (usually major companies) more directly than by their suppliers (agents), and in recent years most companies have suffered financial problems because of the global economic recession, making it hard to evaluate the contribution of the supplier (agent). Therefore we also support the suggestion of Gladstein (1984) and Poppo & Zenger (2002) that relational performance variables can capture the focal outcomes of a relationship (exchange) better than financial performance variables. This study has implications in that it tests the sources of conflict and empirically investigates the effect of resolution methods for B2B conflict. In particular, it identifies the significant moderating effect of formal control, which past B2B management studies in Korea have ignored. (A simplified sketch of the multi-group slope comparison follows this abstract.)
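
The paper's moderation test uses multi-group SEM in AMOS; as a simplified, hypothetical analogue, the sketch below fits the conflict-to-quality path separately in low- and high-control subgroups and z-tests the difference in slopes. All data and names are synthetic.

```python
# Simplified analogue (not the paper's AMOS multi-group SEM) of testing
# whether a control mechanism moderates the conflict -> relationship-quality
# path: fit the path in low- vs. high-control subgroups, z-test the slopes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 329                                  # sample size reported in the abstract
conflict = rng.normal(0, 1, n)
social_control = rng.normal(0, 1, n)
# Assumed moderation, for illustration: conflict hurts quality less when high
quality = (-0.5 * conflict + 0.3 * conflict * (social_control > 0)
           + rng.normal(0, 1, n))

def slope_and_se(x, y):
    res = sm.OLS(y, sm.add_constant(x)).fit()
    return res.params[1], res.bse[1]     # unstandardized path weight and SE

hi = social_control > np.median(social_control)
b_lo, se_lo = slope_and_se(conflict[~hi], quality[~hi])
b_hi, se_hi = slope_and_se(conflict[hi], quality[hi])

z = (b_lo - b_hi) / np.hypot(se_lo, se_hi)   # z-test for equal coefficients
print(f"low-control slope {b_lo:.2f}, high-control slope {b_hi:.2f}, z = {z:.2f}")
```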

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, it gradually improves its search results; the mutation operator in particular prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been popularly applied to search for optimal parameters or feature subsets of AI techniques including MSVM. For these reasons, we adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e., the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training and the remaining 20 percent for validation, and to overcome the small sample size we applied five-fold cross validation. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial genetic-algorithm package. The other comparative models were run with various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed GAMSVM outperformed all the comparative models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. The values of the finally selected kernel parameters, however, were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
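
As a rough illustration of the GAMSVM design (not the authors' LIBSVM/Evolver implementation), the sketch below evolves an RBF-SVM's C and gamma together with a 14-bit feature mask, scoring chromosomes by cross-validated accuracy; the population size, operators, parameter ranges, and data are illustrative assumptions.

```python
# Rough illustration of the GAMSVM idea: a genetic algorithm jointly evolves
# a 14-bit feature mask and the RBF kernel parameters (C, gamma) of a
# multiclass SVM, scored by 5-fold cross-validated accuracy. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=300, n_features=14, n_informative=6,
                           n_classes=4, random_state=0)

def fitness(chrom):
    mask = chrom[:14].astype(bool)          # feature-subset bits
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", C=10 ** chrom[14], gamma=10 ** chrom[15])
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# Chromosome: 14 feature bits, then log10(C) in [-2, 3], log10(gamma) in [-4, 1]
pop = np.hstack([rng.integers(0, 2, (20, 14)).astype(float),
                 rng.uniform(-2, 3, (20, 1)), rng.uniform(-4, 1, (20, 1))])

for gen in range(15):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]              # truncation selection
    a, b = rng.integers(0, 10, 10), rng.integers(0, 10, 10)
    cut = int(rng.integers(1, 15))                       # one-point crossover
    kids = np.hstack([parents[a, :cut], parents[b, cut:]])
    flip = rng.random((10, 14)) < 0.05                   # bit-flip mutation
    kids[:, :14] = np.where(flip, 1 - kids[:, :14], kids[:, :14])
    kids[:, 14:] += rng.normal(0, 0.2, (10, 2))          # Gaussian mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("features:", np.flatnonzero(best[:14]),
      "C = %.3g, gamma = %.3g" % (10 ** best[14], 10 ** best[15]))
```

Note that sklearn's SVC already implements the One-Against-One multiclass scheme internally, which matches one of the two MSVM approaches the paper adopts.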

The Measurement of Sensitivity and Comparative Analysis of Simplified Quantitation Methods to Measure Dopamine Transporters Using [I-123]IPT Pharmacokinetic Computer Simulations ([I-123]IPT 약역학 컴퓨터시뮬레이션을 이용한 민감도 측정 및 간편화된 운반체 정량분석 방법들의 비교분석 연구)

  • Son, Hye-Kyung;Nha, Sang-Kyun;Lee, Hee-Kyung;Kim, Hee-Joung
    • The Korean Journal of Nuclear Medicine / v.31 no.1 / pp.19-29 / 1997
  • Recently, [I-123]IPT SPECT has been used for early diagnosis of Parkinson's patients (PP) by imaging dopamine transporters. Dynamic time-activity curves in the basal ganglia (BG) and occipital cortex (OCC) were obtained for 2 hours without blood samples. These data were then used to measure dopamine transporters by operationally defined ratio methods: (BG-OCC)/OCC at 2 hrs, the binding potential $R_v = k_3/k_4$ obtained with the graphical method, or $R_A = (AB_{BG}-AB_{OCC})/AB_{OCC}$ over 2 hrs, where $AB_{BG}$ represents the accumulated binding activity in the basal ganglia ($\int_0^{120min}BG(t)dt$) and $AB_{OCC}$ the accumulated binding activity in the occipital cortex ($\int_0^{120min}OCC(t)dt$). The purpose of this study was to examine IPT pharmacokinetics and investigate the usefulness of the simplified methods (BG-OCC)/OCC, $R_A$, and $R_v$, which are often assumed to reflect the true values of $k_3/k_4$. The rate constants $K_1$, $k_2$, $k_3$, and $k_4$ used for the simulations were derived from [I-123]IPT SPECT and arterialized blood data with a standard three-compartment model. The sensitivities and the time-activity curves in BG and OCC were computed by changing $K_1$ and $k_3$ (BG only) every 5 min over 2 hours. The values of (BG-OCC)/OCC, $R_A$, and $R_v$ were then computed from the time-activity curves, and linear regression analysis was used to measure the accuracy of these methods. The rate constants $K_1$, $k_2$, $k_3$, and $k_4$ at BG and OCC were 1.26±5.41%, 0.044±19.58%, 0.031±24.36%, 0.008±22.78% and 1.36±4.76%, 0.170±6.89%, 0.007±23.89%, 0.007±45.09%, respectively. The sensitivities $({\Delta}S/S)/({\Delta}k_3/k_3)$ and $({\Delta}S/S)/({\Delta}K_1/K_1)$ at 30 min and 120 min were measured as (0.19, 0.50) and (0.61, 0.23), respectively. The correlation coefficients and slopes of (BG-OCC)/OCC, $R_A$, and $R_v$ with $k_3/k_4$ were (0.98, 1.00, 0.99) and (1.76, 0.47, 1.25), respectively. These simulation results indicate that a late [I-123]IPT SPECT image may represent the distribution of the dopamine transporters. Good correlations were shown between (BG-OCC)/OCC, $R_A$, or $R_v$ and the true $k_3/k_4$, although the slopes between them were not unity. Pharmacokinetic computer simulation may be a very useful technique for studying dopamine transporter systems.
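
A minimal sketch of the compartmental simulation described here, using the BG and OCC rate constants quoted in the abstract; the arterial input function and time grid are illustrative assumptions, not the paper's fitted input.

```python
# Three-compartment (plasma / free / specifically bound) kinetic model behind
# the simulations above. Rate constants are the values quoted in the abstract;
# the input function is a toy stand-in for arterialized blood data.
import numpy as np
from scipy.integrate import odeint

def two_tissue(c, t, K1, k2, k3, k4, cp):
    c_free, c_bound = c
    dc_free = K1 * cp(t) - (k2 + k3) * c_free + k4 * c_bound
    dc_bound = k3 * c_free - k4 * c_bound
    return [dc_free, dc_bound]

cp = lambda t: np.exp(-t / 20.0)          # toy arterial input function
t = np.arange(0.0, 121.0, 5.0)            # minutes, 2-hour scan

bg = odeint(two_tissue, [0, 0], t,
            args=(1.26, 0.044, 0.031, 0.008, cp)).sum(axis=1)   # BG constants
occ = odeint(two_tissue, [0, 0], t,
             args=(1.36, 0.170, 0.007, 0.007, cp)).sum(axis=1)  # OCC constants

ratio = (bg[-1] - occ[-1]) / occ[-1]      # (BG-OCC)/OCC at 120 min
print(f"(BG-OCC)/OCC at 120 min = {ratio:.2f}; BG k3/k4 = {0.031 / 0.008:.2f}")
```

As the abstract notes, the simplified ratio correlates with, but need not equal, the true $k_3/k_4$; the sketch makes that slope mismatch easy to reproduce.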

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such discoveries as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs enabled by ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcend geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration, extended to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more sensitive for more academic tasks.
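
The two focal measures are straightforward to compute from per-editor contribution counts. A minimal sketch, with synthetic edit counts standing in for Wikipedia revision histories:

```python
# Sketch of the two focal measures: the Pareto ratio (share of contributions
# by the top 20% of editors) and the Gini coefficient of per-editor edit
# counts. The edit counts are synthetic stand-ins for revision histories.
import numpy as np

rng = np.random.default_rng(4)
edits = rng.zipf(2.0, size=80)        # skewed per-editor contribution counts

def pareto_ratio(x):
    x = np.sort(x)[::-1]
    top = max(1, round(0.2 * len(x)))
    return x[:top].sum() / x.sum()

def gini(x):
    x = np.sort(x).astype(float)
    n = len(x)
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

print(f"Pareto ratio = {pareto_ratio(edits):.2f}, Gini = {gini(edits):.2f}")
```

In the paper, these per-article values then enter the Cox regression models as the focal covariates, alongside the seven controls.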

Dynamic Virtual Ontology using Tags with Semantic Relationship on Social-web to Support Effective Search (효율적 자원 탐색을 위한 소셜 웹 태그들을 이용한 동적 가상 온톨로지 생성 연구)

  • Lee, Hyun Jung;Sohn, Mye
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.19-33 / 2013
  • This research proposes a Dynamic Virtual Ontology using Tags (DyVOT) that supports dynamic search of resources according to user requirements, using tags from social-web resources. Tags are generally short word annotations that social users attach to information resources such as web pages, images, YouTube videos, and so on; tags therefore characterize and mirror the resources they describe, and as meta-data they can be matched to resources. Consequently, semantic relationships between tags can be extracted from the dependency relationships among tags as representatives of resources. Doing so is limited, however, by allophones, synonyms, and homonyms among tags, which are usually free-form words. Research on folksonomies has therefore applied classification of tag words by semantic similarity, and some research has focused on clustering and/or classification of resources by semantic relationships among tags. These approaches are still limited, because they address semantic hyper/hypo relationships or clustering among tags without considering conceptual associative relationships between the classified or clustered groups, which makes it difficult to search resources effectively according to user requirements. The proposed DyVOT uses tags to construct an ontology for effective search. We assume that tags are extracted from user requirements and are used to construct multiple sub-ontologies as combinations of some or all of the tags. DyVOT builds an ontology based on hierarchical and associative relationships among tags, composed of a static and a dynamic part. The static ontology defines semantic hierarchical hyper/hypo relationships among tags, as in http://semanticcloud.sandra-siegel.de/, with a tree structure. From the static ontology, DyVOT extracts multiple sub-ontologies using sub-tags constructed from parts of the tags; each sub-ontology consists of the hierarchy paths that contain the sub-tag. To create the dynamic ontology, DyVOT defines associative relationships among the sub-ontologies extracted from the hierarchical relationships of the static ontology. An associative relationship is established through the resources shared between tags linked by different sub-ontologies, and the association is measured by the degree of shared resources allocated to the tags of the sub-ontologies (a sketch of this step follows the abstract). If the association value is larger than a threshold, a new associative relationship among the tags is created. These associative relationships are used to merge the sub-ontologies and construct a new hierarchy: a new class is defined that links two or more sub-ontologies whose tags are shown to be highly associated through shared resources, the class generates a new hyper/hypo hierarchy between itself and the tags linked to the sub-ontologies, and it settles into the dynamic ontology. Finally, DyVOT is completed by the newly defined associative relationships extracted from the hierarchical relationships among tags. Resources are matched to the DyVOT, which narrows the search boundary and shrinks the search paths. While a static data catalog (Dean and Ghemawat, 2004; 2008) searches resources statically according to user requirements, the proposed DyVOT searches resources dynamically using multiple sub-ontologies with parallel processing. In this light, DyVOT improves the correctness and agility of search and decreases search effort by reducing the search path.
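
A minimal sketch of the associative-relationship step referenced above, assuming a Jaccard-style overlap as the "degree of shared resources" (the abstract does not spell out the exact measure); the tags, resource assignments, and threshold are illustrative.

```python
# Minimal sketch of DyVOT's associative-relationship step: two tags are
# linked when the overlap of the resources they annotate exceeds a threshold.
# The Jaccard-style overlap, tag-resource assignments, and 0.5 threshold are
# illustrative assumptions, not the paper's exact definitions.
tag_resources = {
    "python":     {"r1", "r2", "r3", "r4"},
    "scripting":  {"r2", "r3", "r4", "r7"},
    "jaguar_car": {"r5", "r6"},
    "jaguar_cat": {"r8"},
}
THRESHOLD = 0.5

def association(a, b):
    """Degree of shared resources between two tags (Jaccard overlap)."""
    return (len(tag_resources[a] & tag_resources[b])
            / len(tag_resources[a] | tag_resources[b]))

tags = sorted(tag_resources)
edges = [(a, b, round(association(a, b), 2))
         for i, a in enumerate(tags) for b in tags[i + 1:]
         if association(a, b) > THRESHOLD]
print(edges)   # pairs promoted to associative relationships in the ontology
```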

Effects of an Aspirated Radiation Shield on Temperature Measurement in a Greenhouse (강제 흡출식 복사선 차폐장치가 온실의 기온측정에 미치는 영향)

  • Jeong, Young Kyun;Lee, Jong Goo;Yun, Sung Wook;Kim, Hyeon Tae;Ahn, Enu Ki;Seo, Jae Seok;Yoon, Yong Cheol
    • Journal of Bio-Environment Control / v.28 no.1 / pp.78-85 / 2019
  • This study examined the performance of an aspirated radiation shield (ARS) that was made at the investigators' laboratory and designed to be relatively easy to fabricate at low cost, based on survey data and reports on errors in its measurements of temperature and relative humidity. The findings are summarized as follows. The ARS and the Jinju weather station recorded maximum, average, and minimum temperatures in the ranges of 2.0~34.1°C, -6.1~22.2°C, -14.0~15.1°C and 0.4~31.5°C, -5.8~22.0°C, -14.1~16.3°C, respectively. There were no large differences in temperature between the two sites, except that the lowest and highest points of the maximum temperature were higher on the campus by 1.6°C and 2.6°C, respectively. The ARS measurements were tested against a standard thermometer: the temperature measured by the ARS ranged from 2.0°C lower to 1.8°C higher than the standard thermometer, and correlation analysis against the standard thermometer gave a coefficient of determination of 0.99. Temperatures with and without the aspirating fans were also compared; without fans, the maximum, average, and minimum temperatures were higher overall by 0.5~7.6°C, 0.3~4.6°C, and 0.5~3.9°C, respectively. Daily average relative humidity was compared between the ARS and the Jinju weather station, and the ARS measurements were slightly higher: the measurements on June 27, July 26 and 29, and August 20 were higher by 5.7%, 5.2%, 9.1%, and 5.8%, respectively, but differences in the monthly averages between the two sites were trivial, at 2.0~3.0%. Differences in relative humidity between the ARS and an Assmann psychrometer were in the range of -3.98~+7.78% overall. The study also analyzed correlations in relative humidity against the measurements of the Jinju weather station and the Assmann psychrometer and found high correlations, with coefficients of determination of 0.94 and 0.97, respectively.
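
The thermometer comparison above amounts to a simple linear regression with a coefficient of determination. A sketch with synthetic readings drawn from the error band reported in the abstract:

```python
# Sketch of the thermometer comparison reported above: regress ARS readings
# on a standard thermometer and report the coefficient of determination.
# The readings are synthetic, drawn from the error band quoted in the text.
import numpy as np

rng = np.random.default_rng(5)
standard = rng.uniform(-14, 34, 200)             # reference temperatures, deg C
ars = standard + rng.uniform(-2.0, 1.8, 200)     # ARS error band from the text

slope, intercept = np.polyfit(standard, ars, 1)
pred = slope * standard + intercept
r2 = 1 - np.sum((ars - pred) ** 2) / np.sum((ars - ars.mean()) ** 2)
print(f"ARS = {slope:.3f} * standard + {intercept:+.2f}, R^2 = {r2:.3f}")
```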

A Study on analysis of contrasts and variation in SUV with the passage of uptake time in 18F-FDOPA Brain PET/CT (18F-FDOPA Brain PET/CT 검사의 영상 대조도 분석 및 섭취 시간에 따른 SUV변화 고찰)

  • Seo, Kang rok;Lee, Jeong eun;Ko, Hyun soo;Ryu, Jae kwang;Nam, Ki pyo
    • The Korean Journal of Nuclear Medicine Technology / v.23 no.1 / pp.69-74 / 2019
  • Purpose: 18F-FDOPA, an amino-acid tracer, is particularly attractive for imaging brain tumors because of its high uptake in tumor tissue and low uptake in normal brain tissue, whereas 18F-FDG is taken up strongly in both tumor tissue and normal brain tissue. The purpose of this study was to compare the contrast of 18F-FDOPA Brain PET/CT with that of 18F-FDG Brain PET/CT and to find the optimal scan time by analyzing the variation in SUV with uptake time. Materials and Methods: A region of interest of approximately 350 mm² was placed at the center of the tumor and the cerebellum in 12 patients (51.4±12.8 yrs) who each underwent both 18F-FDG Brain PET/CT and 18F-FDOPA Brain PET/CT at least once. The SUVmax was measured, and the tumor-to-cerebellum SUVmax ratio (T/C ratio) was calculated. For the SUV analysis, the T/C ratio was calculated for each frame after dividing the list-mode data of 25 patients (49.±10.3 yrs) into 15 frames of 2 minutes each. SPSS 21 was used to compare the T/C ratio of 18F-FDOPA with that of 18F-FDG. Results: The T/C ratio of 18F-FDOPA Brain PET/CT was higher than that of 18F-FDG Brain PET/CT, with a significant difference by paired t-test (t = -5.214, p < 0.001). Analyzing the changes in SUVmax and T/C ratio, the peak SUVmax was 5.6±2.9 and appeared in the fourth frame (6 to 8 minutes), and the peak of the T/C ratio also appeared in the fourth frame (6 to 8 minutes). Taking this into account and comparing the conventional 10-to-30-minute image with a 6-to-26-minute image, the SUVmax and T/C ratio of the 6-to-26-minute image increased by 0.2 and 0.1, respectively. Conclusion: 18F-FDOPA Brain PET/CT is effective for image reading because its T/C ratio was higher than that of 18F-FDG Brain PET/CT. In addition, for 18F-FDOPA Brain PET/CT there was no meaningful difference between the conventional 10-to-30-minute image and the 6-to-26-minute image. Through continuous research, we may be able to shorten the examination time of 18F-FDOPA Brain PET/CT and help physicians read images accurately using additional scan data.
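
A minimal sketch of the frame-wise T/C analysis described above, with synthetic uptake curves in place of patient list-mode data; the curve shapes are illustrative assumptions.

```python
# Minimal sketch of the frame-wise analysis: list-mode data rebinned into 15
# two-minute frames, SUVmax taken in tumor and cerebellum ROIs, and the peak
# tumor-to-cerebellum (T/C) ratio located. The curves are synthetic.
import numpy as np

rng = np.random.default_rng(6)
frames = np.arange(15)                       # fifteen 2-min frames
# Toy curves: tumor uptake peaks early, cerebellum washes out slowly
suv_tumor = 5.6 * np.exp(-((frames - 3.5) ** 2) / 16) + rng.normal(0, 0.1, 15)
suv_cereb = 2.0 * np.exp(-frames / 30) + rng.normal(0, 0.05, 15)

tc_ratio = suv_tumor / suv_cereb
peak = int(np.argmax(tc_ratio))
print(f"peak T/C ratio {tc_ratio[peak]:.2f} in frame {peak + 1} "
      f"({2 * peak}-{2 * peak + 2} min)")
```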