• Title/Abstract/Keyword: Use-Diffusion


K-POP fandom and Korea's national reputation: An analysis on BTS fans in the U.S. (K-POP 팬덤과 한국의 국가 명성: 미국의 BTS 팬 중심 분석)

  • Soojin Kim;Hye Eun Lee
    • Journal of Public Diplomacy / Vol. 3, No. 1 / pp.1-19 / 2023
  • Objectives: This study aims to discover how the spread of K-POP and the diversification of the Korean Wave affect Korea's national reputation. K-POP stars are diversifying their interactions with fandom by creating online spaces where fans can consume various products and services related to their stars and engage in fan activities. Accordingly, this study examines the relevance of K-POP to national reputation through the parasocial relationships that fandoms form with K-POP stars as they build communities and use media. Methods: An online survey was conducted in English through Amazon Mechanical Turk among BTS fans living in the United States. Data from 195 respondents, excluding incomplete responses, were used for the analysis. Results: BTS fans' social media participation activities did not directly affect Korea's national reputation, but a mediating effect of their parasocial relationship was found. That is, BTS fans' social media participation activities had a positive effect on their parasocial relationships with BTS, which in turn had a positive effect on their evaluation of Korea's national reputation (a simple mediation sketch is given below). Conclusions: BTS fans' use of and participation in social media had no significant direct effect on Korea's national reputation, but it affected the national reputation by fostering parasocial relationships. These results suggest that the parasocial relationships of K-POP fans can be used as a strategic mechanism to enhance Korea's national image and national reputation.
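
The mediation reported above can be illustrated with a minimal sketch. The Baron-Kenny-style regression steps, variable names, and data file below are assumptions for illustration; the original article does not publish its analysis code.

```python
# Hypothetical sketch of the mediation described in the abstract:
# social media participation -> parasocial relationship -> national reputation.
# Column names and the CSV file are assumptions, not from the original study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bts_fan_survey.csv")  # hypothetical survey data (n = 195)

# Step 1: direct path (reported as non-significant in the study)
direct = smf.ols("national_reputation ~ sns_participation", data=df).fit()

# Step 2: predictor -> mediator
a_path = smf.ols("parasocial_relationship ~ sns_participation", data=df).fit()

# Step 3: mediator and predictor -> outcome
b_path = smf.ols(
    "national_reputation ~ parasocial_relationship + sns_participation", data=df
).fit()

for name, model in [("direct", direct), ("a path", a_path), ("b path", b_path)]:
    print(name, model.params.to_dict())
```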

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / No. 79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, which has consistently been evaluated as a top country, can clearly be called a leading country in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is a source of information and records and a resource of knowledge. As administrative actions through electronic systems have become widespread, the production and technology of data-based records have naturally expanded and evolved. Technology may seem value-neutral, but in fact it reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, has a profound influence not only on traditional power structures but also on existing media for transmitting information and knowledge. Moreover, new technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-round growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This also involves a variety of problems, ranging from deep fakes and other fake images, automated profiling, and AI hallucinations presented as if they were real, to copyright infringement of machine-learning data. Moreover, radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, which are based on immaterial distribution and logical existence, is that access and contact can only be made through physical tools. Digital information is a logical object, but digital resources cannot be read or utilized without some type of device to relay them. In that respect, machines in today's technological society have gone beyond the level of simple assistance, and at some points it is difficult to say that the entry of machines into human society is a natural pattern of change driven by advanced technological development, because perspectives on machines change over time. What matters are the social and cultural implications of changes in the way records are produced as a result of communication and action through machines. In the archive field as well, it is time to ask what problems a data-based archive society will face amid the technological shift toward a hyper-intelligent and hyper-connected society, who will attest to the continuing activity of records and data, and what the main drivers of media change will be. This study began with the need to recognize that archives are not only records that result from actions but also data as strategic assets. On this basis, the author considers how to expand traditional boundaries and achieve reterritorialization in a data-driven society.

An Investigation of the Current Squeezing Effect through Measurement and Calculation of the Approach Curve in Scanning Ion Conductivity Microscopy (Scanning Ion Conductivity Microscopy의 Approach Curve에 대한 측정 및 계산을 통한 Current Squeezing 효과의 고찰)

  • Young-Seo Kim;Young-Jun Cho;Han-Kyun Shin;Hyun Park;Jung Han Kim;Hyo-Jong Lee
    • Journal of the Microelectronics and Packaging Society / Vol. 31, No. 2 / pp.54-62 / 2024
  • SICM (Scanning Ion Conductivity Microscopy) is a technique for measuring surface topography in an environment where electrochemical reactions occur, by detecting changes in ion conductivity as a nanopipette tip approaches the sample. This study investigates the current response curve, known as the approach curve, as a function of the distance between the tip and the sample. First, a simulation analysis was conducted on the approach curves. Based on the simulation results, several measurement experiments were then conducted to analyze the difference between the simulated and measured approach curves. The simulation analysis confirms that the current squeezing effect occurs as the distance between the tip and the sample approaches half the inner radius of the tip. However, the calculations show that the decrease in current density due to the simple reduction of the ion channel is much smaller than the current squeezing effect measured in actual experiments. This suggests that ion conductivity in nano-scale narrow channels does not simply follow the Nernst-Einstein relationship based on diffusion coefficients, but must also take into account the hydrodynamic resistance at the interface created by the tip and the sample. It is expected that SICM can be combined with SECM (Scanning Electrochemical Microscopy) to overcome the limitations of SECM through consecutive measurement with the two techniques, thereby strengthening the analysis of electrochemical surface reactivity. This could provide groundbreaking help in understanding local catalytic reactions in electroless plating and the behavior of organic additives in electroplating for the various kinds of patterns used in semiconductor damascene and packaging processes.
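
As context for the Nernst-Einstein relationship mentioned above, a minimal sketch is given below. The numerical inputs (a K+ diffusion coefficient and a 0.1 M concentration) are illustrative assumptions, not values from the paper, and the relation is the bulk one that the paper argues breaks down in nano-scale channels.

```python
# Ionic conductivity from a diffusion coefficient via the Nernst-Einstein relation:
#   sigma = (z^2 * F^2 * c * D) / (R * T)
# The paper argues this bulk relation underestimates the resistance of nano-scale
# channels, where hydrodynamic effects in the tip-sample gap also matter.
F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K

def nernst_einstein_conductivity(z, c_mol_per_m3, d_m2_per_s):
    """Return conductivity in S/m for charge number z, concentration c, diffusivity D."""
    return (z**2 * F**2 * c_mol_per_m3 * d_m2_per_s) / (R * T)

# Illustrative numbers (assumed): K+ ions, 0.1 M = 100 mol/m^3, D ~ 1.96e-9 m^2/s
sigma = nernst_einstein_conductivity(z=1, c_mol_per_m3=100.0, d_m2_per_s=1.96e-9)
print(f"bulk Nernst-Einstein conductivity ~ {sigma:.3f} S/m")
```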

Implementation of integrated monitoring system for trace and path prediction of infectious disease (전염병의 경로 추적 및 예측을 위한 통합 정보 시스템 구현)

  • Kim, Eungyeong;Lee, Seok;Byun, Young Tae;Lee, Hyuk-Jae;Lee, Taikjin
    • Journal of Internet Computing and Services / Vol. 14, No. 5 / pp.69-76 / 2013
  • The incidence of globally infectious and pathogenic diseases such as H1N1 (swine flu) and Avian Influenza (AI) has recently increased. An infectious disease is a pathogen-caused disease which can be passed from an infected person to a susceptible host. Pathogens of infectious diseases, including bacilli, spirochaetes, rickettsiae, viruses, fungi, and parasites, cause various symptoms such as respiratory disease, gastrointestinal disease, liver disease, and acute febrile illness. They can be spread through various means such as food, water, insects, breathing, and contact with other persons. Recently, most countries around the world have used mathematical models to predict and prepare for the spread of infectious diseases. In modern society, however, infectious diseases spread in a fast and complicated manner because of the rapid development of transportation (both ground and underground), so there is not enough time to predict their spread. Therefore, a new system that can prevent the spread of infectious diseases by predicting their pathways needs to be developed. In this study, to solve this problem, an integrated monitoring system is developed that can track and predict the pathway of infectious diseases for real-time monitoring and control. The system is implemented based on the conventional mathematical model known as the Susceptible-Infectious-Recovered (SIR) model (a minimal sketch of this model is given after the abstract). The proposed model is characterized by considering both inter- and intra-city modes of transportation, such as bus, train, car, and airplane, to express interpersonal contact (i.e., migration flow). Also, real data modified according to the geographical characteristics of Korea are employed to reflect realistic circumstances of possible disease spreading in Korea. By controlling parameters in this model, we can predict where and when vaccination needs to be performed. The simulation includes several assumptions and scenarios. Using the data of Statistics Korea, five major cities assumed to have the largest population migration were chosen: Seoul, Incheon (Incheon International Airport), Gangneung, Pyeongchang, and Wonju. It was assumed that the cities were connected in one network and that the infectious disease spread through the denoted transportation methods only. Daily traffic volume was obtained from the Korean Statistical Information Service (KOSIS), and the population of each city was acquired from Statistics Korea. Moreover, data on H1N1 (swine flu) were provided by the Korea Centers for Disease Control and Prevention, and air transport statistics were obtained from the Aeronautical Information Portal System. As mentioned above, daily traffic volume, population statistics, H1N1 (swine flu), and air transport statistics data were adjusted in consideration of current conditions in Korea and several realistic assumptions and scenarios. Three scenarios (occurrence of H1N1 at Incheon International Airport with no vaccination in any city, with vaccination in Seoul, and with vaccination in Pyeongchang) were simulated, and the number of days taken for the number of the infected to reach its peak and the proportion of Infectious (I) were compared. According to the simulation, when vaccination was not considered, the time to peak was the shortest in Seoul with 37 days and the longest in Pyeongchang with 43 days, and the proportion of I was the highest in Seoul and the lowest in Pyeongchang. When vaccination was applied in Seoul, the time to peak was again the shortest in Seoul with 37 days and the longest in Pyeongchang with 43 days, and the proportion of I was the highest in Gangneung and the lowest in Pyeongchang. When vaccination was applied in Pyeongchang, the time to peak was the shortest in Seoul with 37 days and the longest in Pyeongchang with 43 days, and the proportion of I was the highest in Gangneung and the lowest in Pyeongchang. Based on the results above, it was confirmed that H1N1, upon its first occurrence, spreads in proportion to the traffic volume of each city. Because the infection pathway differs with the traffic volume of each city, it is possible to come up with preventive measures against infectious disease by tracking and predicting its pathway through the analysis of traffic volume.
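
For reference, the basic SIR dynamics underlying the system can be sketched as below. The parameter values and the single-population setting are illustrative assumptions; the article's actual implementation couples multiple cities through traffic-volume data.

```python
# Minimal single-population SIR sketch (illustrative parameters; the study's
# implementation additionally couples cities via inter-city traffic volumes).
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    ds = -beta * s * i               # new infections
    di = beta * s * i - gamma * i    # infections minus recoveries
    dr = gamma * i                   # recoveries
    return [ds, di, dr]

beta, gamma = 0.4, 0.1               # assumed transmission and recovery rates (per day)
y0 = [0.999, 0.001, 0.0]             # initial susceptible, infectious, recovered fractions
t = np.linspace(0, 120, 121)         # days

s, i, r = odeint(sir, y0, t, args=(beta, gamma)).T
print(f"peak infectious fraction {i.max():.3f} on day {t[i.argmax()]:.0f}")
```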

Studies on the Physical Properties of Major Tree Barks Grown in Korea -Genus Pinus, Populus and Quercus- (한국산(韓國産) 주요(主要) 수종(樹種) 수피(樹皮)의 이학적(理學的) 성질(性質)에 관(關)한 연구(硏究) -소나무속(屬), 사시나무속(屬), 참나무속(屬)을 중심(中心)으로-)

  • Lee, Hwa Hyoung
    • Journal of Korean Society of Forest Science / Vol. 33, No. 1 / pp.33-58 / 1977
  • Bark comprises about 10 to 20 percent of a typical log by volume and is generally considered an unwanted residue rather than a potentially valuable resource. As the world is confronted with decreasing forest resources, natural resource pressures dictate that bark should be a raw material instead of a waste. The utilization of the largely wasted bark of the genera Pinus, Quercus, and Populus grown in Korea can be enhanced by learning its physical and mechanical properties; however, such studies of tree bark grown in Korea have never been undertaken. In the present paper, an investigative study is carried out on the bark of three genera and eleven species, representing not only the major bark trees but also the major species currently grown in Korea. For each species, 20 trees were selected at the Suweon and Kwang-neung areas on the same basis of diameter class at the proper harvesting age. One 200 cm² segment of bark was obtained from each tree at breast height. The physical properties of bark studied are: bark density, moisture content of green bark (inner, outer, and total bark), fiber saturation point, hysteresis loop, shrinkage, water absorption, specific heat, heat of wetting, thermal conductivity, thermal diffusivity, heat of combustion, and differential thermal analysis. The mechanical properties studied are bending and compression strength (radial, longitudinal, and tangential). The results may be summarized as follows: 1. The oven-dry specific gravities differ between wood and bark; furthermore, even within a given bark sample, a difference is observed between inner and outer bark. 2. The oven-dry specific gravity of bark is higher than that of wood. This is attributed to the anatomical structure, characterized by a higher content of sieve fibers and sclereids. 3. Except for Pinus koraiensis, the oven-dry specific gravity of inner bark is higher than that of outer bark, which results from the higher shrinkage of inner bark. 4. The moisture content of bark increases in direct proportion to the composition ratio of sieve components and decreases with a higher percentage of sclerenchyma and periderm tissues. 5. The possibility of determining the fiber saturation point by measuring the heat of wetting is suggested. With the proposed method, the fiber saturation point of Pinus densiflora lies between 26 and 28%, and that of Quercus acutissima ranges from 24 to 28%. These results need to be further examined by other methods. 6. Contrary to the behavior of wood, bark shrinkage is highest in the radial direction and lowest in the longitudinal direction; Quercus serrata and Q. variabilis do not fall in this category. 7. Bark shows the same specific heat as wood, but the heat of wetting of bark is higher, and the heat conductivity of bark is lower, than that of wood. From measurements of oven-dry specific gravity (ρd) and moisture-fraction specific gravity (ρm), the following regression equation was devised, from which heat conductivity can be calculated (a worked example is given below): K = 4.631 + 11.408ρd + 7.628ρm. The calculated heat conductivity of bark is between 0.8×10⁻⁴ and 1.6×10⁻⁴ cal/cm·sec·deg. 8. The bark heat diffusivity varies from 8.03×10⁻⁴ to 4.46×10⁻⁴ cm²/sec. From differential thermal analysis, wood shows a higher thermogram than bark below the ignition point, but the tendency is reversed above the ignition point. 9. The modulus of rupture in static bending of bark is proportional to the density of bark, which gives the following regression equation: M = 243.78X - 12.02. The compressive strength of bark is highest in the radial direction, contrary to the behavior of wood, and the compressive strength in the longitudinal direction follows the tangential one in decreasing order.
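
As a worked example of the two regression equations above, a small sketch follows. The input specific gravities and bark density are illustrative assumptions, and the unit interpretation of K is inferred from the range quoted in the abstract rather than stated in it.

```python
# Worked example of the two regressions quoted in the abstract.
# Input values below are illustrative assumptions, not measurements from the paper.

def bark_heat_conductivity(rho_d, rho_m):
    """K = 4.631 + 11.408*rho_d + 7.628*rho_m.
    Judging from the quoted range (0.8-1.6 x 10^-4 cal/cm.sec.deg), the result
    appears to be expressed in units of 10^-5 cal/cm.sec.deg (an assumption)."""
    return 4.631 + 11.408 * rho_d + 7.628 * rho_m

def bark_modulus_of_rupture(density):
    """M = 243.78*X - 12.02, with X the bark density (units as used in the paper)."""
    return 243.78 * density - 12.02

k = bark_heat_conductivity(rho_d=0.5, rho_m=0.4)    # assumed specific gravities
print(f"K = {k:.2f} (about {k * 1e-5:.2e} cal/cm.sec.deg if the unit assumption holds)")
print(f"M = {bark_modulus_of_rupture(0.6):.1f}")    # assumed bark density 0.6
```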


A Study on the Effect of the Introduction Characteristics of Cloud Computing Services on the Performance Expectancy and the Intention to Use: From the Perspective of the Innovation Diffusion Theory (클라우드 컴퓨팅 서비스의 도입특성이 조직의 성과기대 및 사용의도에 미치는 영향에 관한 연구: 혁신확산 이론 관점)

  • Lim, Jae Su;Oh, Jay In
    • Asia pacific journal of information systems / Vol. 22, No. 3 / pp.99-124 / 2012
  • Our society has long discussed the necessity of innovation. Since companies in particular need to carry out business innovation across their overall processes, they have attempted to apply many innovation factors in the field and have come to pay more attention to innovation. To achieve this goal, companies have applied various information technologies (IT) as a means of innovation, and consequently IT has developed greatly. It is natural that the field of IT has faced another revolution called cloud computing, which is expected to bring innovative changes in software applications delivered via the Internet, data storage, the use of devices, and their operation. As a vehicle of innovation, cloud computing is expected to lead changes and advancement in our society and the business world. Although many scholars have researched a variety of topics regarding innovation via IT, few studies have dealt with cloud computing as IT. Thus, the purpose of this paper is to set innovation-attribute variables drawn from previous articles as the characteristic variables and to clarify how these variables affect companies' Performance Expectancy and their intention to use cloud computing. The contributions of this study are as follows. First, the study utilized a research model developed on the innovation diffusion theory to identify influences on the adoption and spread of IT for cloud computing services. Second, this study summarized the characteristics of cloud computing services as a new concept that introduces innovation at its early stage of adoption by companies. Third, a theoretical model is provided that relates to future innovation by suggesting innovation-characteristic variables for adopting cloud computing services. Finally, this study identified the factors affecting performance expectancy and the intention to use cloud computing services for companies considering their adoption. As the mediating and dependent variables respectively, the study uses Performance Expectancy and Intention to Use based on the UTAUT theory, with independent variables aligned with the characteristics of cloud computing services based on the innovation diffusion model: Relative Advantage, Complexity, Compatibility, Cost Saving, Trialability, and Observability. In addition, 'Acceptance for Adaptation' is applied as a moderating variable to verify the influences on the expected performance of the cloud computing service. The validity of the research model was secured by performing factor analysis and reliability analysis. After confirmatory factor analysis was conducted using AMOS 7.0, the 20 hypotheses were tested through structural equation modeling, and 12 of the 20 were accepted (a sketch of such a path model is given after this abstract). For example, Relative Advantage turned out to have a positive effect both on Individual Performance and on Strategic Performance, and it showed a meaningful direct effect on Intention to Use. This is consistent with many diffusion studies that identify Relative Advantage as the most important factor for predicting the rate of innovation acceptance.
Regarding the influence of Compatibility and Cost Saving on Performance Expectancy, Compatibility had a positive effect on both Individual Performance and Strategic Performance and showed a meaningful correlation with Intention to Use. However, although the cloud computing service has become a strategic adoption issue for companies, Cost Saving turned out to affect Individual Performance without a significant influence on Intention to Use. This indicates that companies expect practical gains such as time and cost savings and financial improvements through the adoption of the cloud computing service in the budget-squeezed environment following the global economic crisis of 2008; likewise, this positively affects strategic performance in companies. Trialability proved to have no effect on Performance Expectancy. This indicates that the survey participants, as innovators and early adopters who actively pursue information about new ideas, are willing to bear the risk from the high uncertainty caused by innovation; in addition, they believe it is unnecessary to test the cloud computing service before adoption because there are various types of cloud computing service. Observability, however, positively affected both Individual Performance and Strategic Performance and showed a meaningful correlation with Intention to Use. From the analysis of the direct effects of the innovative characteristics of the cloud computing service on Intention to Use, excluding the mediators, Relative Advantage, Compatibility, and Observability showed a positive influence, while Complexity, Cost Saving, and Trialability did not affect Intention to Use. Among the characteristics believed to be the most important factors for Performance Expectancy, Relative Advantage, Compatibility, and Observability showed significant correlations in the various cause-and-effect analyses. Cost Saving showed a significant relation with Strategic Performance in companies, which indicates that the cost to build and operate IT is a burden on management; thus, the cloud computing service reflects the expectation that it can reduce the investment and operational cost of IT infrastructure in the wake of the recent economic crisis. The cloud computing service is not yet pervasive in the business world, but it is rapidly spreading all over the world because of its inherent merits and benefits. Moreover, the results of this research on innovation diffusion differ somewhat from those of the existing articles. This seems to be because the cloud computing service carries a strong innovative factor that leads to a new paradigm shift, whereas most IT studied under innovation diffusion theory has been limited to companies and organizations. In addition, the participants in this study are believed to play an important role as innovators and early adopters in introducing the cloud computing service and to have the competency to bear the higher uncertainty of innovation. In conclusion, the introduction of the cloud computing service is a critical issue in the business world.
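
To make the research model concrete, a minimal path-model sketch is shown below. It uses the open-source semopy package rather than AMOS 7.0 (which the study actually used), and the construct names, paths, and data file are illustrative assumptions, not the study's full 20-hypothesis model.

```python
# Hypothetical sketch of the kind of path model tested in the study, specified with
# the open-source semopy package; the original analysis used AMOS 7.0, and the
# variable names and data file below are illustrative assumptions.
import pandas as pd
from semopy import Model

df = pd.read_csv("cloud_adoption_survey.csv")  # assumed composite scores per construct

model_desc = """
IndividualPerformance ~ RelativeAdvantage + Compatibility + CostSaving + Observability
StrategicPerformance ~ RelativeAdvantage + Compatibility + CostSaving + Observability
IntentionToUse ~ IndividualPerformance + StrategicPerformance + RelativeAdvantage
"""

model = Model(model_desc)
model.fit(df)              # estimate path coefficients
print(model.inspect())     # estimates, standard errors, p-values
```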


An Overview of the Rationale of Monetary and Banking Intervention: The Role of the Central Bank in Money and Banking Revisited (화폐(貨幣)·금융개입(金融介入)의 이론적(理論的) 근거(根據)에 대한 고찰(考察) : 중앙은행(中央銀行)의 존립근거(存立根據)에 대한 개관(槪觀))

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy / Vol. 12, No. 3 / pp.71-94 / 1990
  • This paper reviews the rationale of monetary and banking intervention by an outside authority, either the government or the central bank, and seeks to delineate clearly the optimal limits to the monetary and banking deregulation currently underway in Korea as well as on a global scale. Furthermore, this paper seeks to establish an objective and balanced view on the role of the central bank, especially in light of the current discussion on the restructuring of Korea's central bank, which has been severely contaminated by interest-group politics. The discussion begins with the recognition that the modern free banking school and the new monetary economics are becoming formidable challenges to the traditional role of the government or the central bank in the monetary and banking sector. The paper reviews six arguments that have traditionally been presented to support intervention: (1) the possibility of an over-issue of bank notes under free banking instead of central banking; (2) externalities in and the public good nature of the use of money; (3) economies of scale and natural monopoly in producing money; (4) the need for macro stabilization policy due to the instability of the real sector; (5) the external effects of bank failure due to the inherent instability of the existing banking system; and (6) protection for small banknote users and depositors. Based on an analysis of the above arguments, the paper speculates on the optimal role of the government or central bank in the monetary and banking system and the optimal degree of monetary and banking deregulation. In contrast to the arguments for free banking or laissez-faire monetary systems, which have become fashionable in recent years, monopoly and intervention by the government or central bank in the outside money system can be both necessary and optimal. In this case, of course, an over-issue of fiat money may be possible due to political considerations, but this issue is beyond the scope of this paper. On the other hand, the issue of inside monies based on outside money could indeed be provided for optimally under market competition by private institutions. A competitive system for issuing inside monies would help realize, to the maximum extent possible, the external economies generated by using a single outside money. According to this reasoning, free banking activities will prevail in the inside money system, while a government monopoly will prevail in the outside money system. This speculation, then, also implies that the monetary and banking deregulation currently underway should, and most likely will, be limited to the inside money system, which could be liberalized to the fullest degree. It is also implied that it will be impractical to deregulate the outside money system and to allow market competition to provide outside money, in accordance with the arguments of the free banking school and the new monetary economics. Furthermore, the role of the government or central bank in this new environment will not be significantly different from their current roles. As long as the supply of fiat money continues to be monopolized by the government, the control of the supply of base money and such related responsibilities as monetary policy (argument (4)) and the lender of last resort (argument (5)) will naturally be assigned to the outside money supplier. However, a mechanism for controlling an over-issue of fiat money by a monopolistic supplier will definitely be called for (argument (1)).
A monetary policy based on a certain policy rule could be one possibility. More importantly, the deregulation of the inside money system would further increase the systemic risk inherent in the current fractional banking system, while enhancing the efficiency of the system (argument (5)). In this context, the role of the lender of the last resort would again become an instrument of paramount importance in alleviating liquidity crises in the early stages, thereby disallowing the possibility of a widespread bank run. Similarly, prudential banking supervision would also help maintain the safety and soundness of the fully deregulated banking system. These functions would also help protect depositors from losses due to bank failures (argument (6)). Finally, these speculations suggest that government or central bank authorities have probably been too conservative on the issue of the deregulation of the financial system, beyond the caution necessary to preserve system safety. Rather, only the fullest deregulation of the inside money system seems to guarantee the maximum enjoyment of external economies in the single outside money system.


The Comparison of Image Quality and Quantitative Indices by Wide Beam Reconstruction Method and Filtered Back Projection Method in Tl-201 Myocardial Perfusion SPECT (Tl-201 심근관류 SPECT 검사에서 광대역 재구성(Wide Beam Reconstruction: WBR) 방법과 여과 후 역투영법에 따른 영상의 질 및 정량적 지표 값 비교)

  • Yoon, Soon-Sang;Nam, Ki-Pyo;Shim, Dong-Oh;Kim, Dong-Seok
    • The Korean Journal of Nuclear Medicine Technology / Vol. 14, No. 2 / pp.122-127 / 2010
  • Purpose: Xpress3.cardiac™, a wide beam reconstruction (WBR) method developed by UltraSPECT (Haifa, Israel), enables acquisition at quarter time while maintaining image quality. The purpose of this study is to investigate the usefulness of the WBR method for decreasing scan times and to compare it with filtered back projection (FBP), the method routinely used. Materials and Methods: Phantom and clinical studies were performed. The anthropomorphic torso phantom was prepared so that its counts were equivalent to those from a patient's body. The Tl-201 concentrations in the compartments were 74 kBq (2 μCi)/cc in myocardium, 11.1 kBq (0.3 μCi)/cc in soft tissue, and 2.59 kBq (0.07 μCi)/cc in lung. Non-gated Tl-201 myocardial perfusion SPECT data were acquired with the phantom: the former study was scanned at 50 seconds per frame with the FBP method, and the latter was acquired at 13 seconds per frame with the WBR method. Using Xeleris ver. 2.0551, the full width at half maximum (FWHM) and average image contrast were compared. In the clinical studies, we analyzed 30 patients who were examined by Tl-201 gated myocardial perfusion SPECT in the department of nuclear medicine at Asan Medical Center from January to April 2010. The patients were imaged at full time (50 seconds per frame) with the FBP algorithm and again at quarter time (13 seconds per frame) with the WBR algorithm. Using the 4D MSPECT (4DM), Quantitative Perfusion SPECT (QPS), and Quantitative Gated SPECT (QGS) software, the summed stress score (SSS), summed rest score (SRS), summed difference score (SDS), end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were analyzed for their correlations and compared statistically by paired t-test. Results: In the phantom study, the WBR method improved FWHM by about 30% compared with the FBP method (WBR 5.47 mm, FBP 7.07 mm), and the WBR method's average image contrast was also higher than that of FBP. However, for the quantitative indices SSS, SDS, SRS, EDV, ESV, and EF, there were statistically significant differences between WBR and FBP (p<0.01). The correlations between WBR and FBP were poor for SSS, SDS, and SRS (0.18, 0.34, 0.08), whereas EDV, ESV, and EF showed good correlations (0.88, 0.89, 0.71). Conclusion: From the phantom study results, we confirmed that the WBR method reduces acquisition time while improving image quality compared with the FBP method. However, the significant differences in the quantitative indices should be considered, and further evaluation is needed before clinical application to find the cause of the differences between the phantom and clinical results.
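
A minimal sketch of the paired comparison described above is given below; the column names and data file are assumptions for illustration, not artifacts from the study.

```python
# Hypothetical sketch of the paired comparison of quantitative indices (WBR vs. FBP).
import pandas as pd
from scipy import stats

df = pd.read_csv("spect_indices.csv")  # assumed columns: SSS_FBP, SSS_WBR, EF_FBP, EF_WBR, ...

for index in ["SSS", "SRS", "SDS", "EDV", "ESV", "EF"]:
    fbp, wbr = df[f"{index}_FBP"], df[f"{index}_WBR"]
    t, p = stats.ttest_rel(fbp, wbr)     # paired t-test, as in the abstract
    r = fbp.corr(wbr)                    # Pearson correlation between the two methods
    print(f"{index}: paired t = {t:.2f}, p = {p:.3f}, r = {r:.2f}")
```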


Utility of Wide Beam Reconstruction in Whole Body Bone Scan (전신 뼈 검사에서 Wide Beam Reconstruction 기법의 유용성)

  • Kim, Jung-Yul;Kang, Chung-Koo;Park, Min-Soo;Park, Hoon-Hee;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / Vol. 14, No. 1 / pp.83-89 / 2010
  • Purpose: The Wide Beam Reconstruction (WBR) algorithm provided by UltraSPECT, Ltd. improves image resolution by eliminating the effect of the collimator line spread function and by suppressing noise. It controls the resolution and noise level automatically and yields unsurpassed image quality. The aim of this study is to evaluate the usefulness of WBR for whole body bone scans in clinical application. Materials and Methods: Standard line source and single photon emission computed tomography (SPECT) spatial resolution measurements were performed on an INFINA (GE, Milwaukee, WI) gamma camera equipped with low energy high resolution (LEHR) collimators. Line source measurements were acquired with total counts of 200 kcps and 300 kcps, and the SPECT phantom data were analyzed for spatial resolution while changing the matrix size. A clinical evaluation study was also performed with forty-three patients referred for bone scans. The first group was scanned at speeds of 20 and 30 cm/min with an administered dose of 740 MBq (20 mCi) of 99mTc-HDP, while the second group received doses of 740 and 1,110 MBq (20 and 30 mCi) of 99mTc-HDP at the same scan speed. The acquired data were reconstructed using both the typical clinical protocol and the WBR protocol. Patient information was removed and a blind reading was done for each reconstruction method; for each reading, a questionnaire was completed in which the reader was asked to give an evaluation on a 1-5 point scale. Results: The planar WBR data showed a resolution improvement of more than 10%; the full width at half maximum (FWHM) of the WBR data improved by about 16% (Standard: 8.45, WBR: 7.09). The SPECT WBR data showed a resolution improvement of about 50% in FWHM (Standard: 3.52, WBR: 1.65). In the clinical evaluation study, there was no statistically significant difference between the two methods in terms of the bone to soft tissue ratio and the image resolution (first group p=0.07, second group p=0.458). Conclusion: The WBR method shortens the acquisition time of bone scans while providing improved image quality, and it can reduce the dosage of radiopharmaceuticals, thereby reducing radiation dose. Therefore, the WBR method can be applied to a wide range of clinical applications to provide clinical value as well as image quality.
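
Since both WBR entries report resolution as the FWHM of a line-source profile, a small sketch of estimating FWHM from a measured profile is given below. The Gaussian-fit approach and the synthetic profile are illustrative assumptions, not the procedure documented in the paper.

```python
# Hypothetical sketch: estimating FWHM of a line-source profile by a Gaussian fit.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Synthetic profile standing in for a measured line-spread function (assumed data).
x = np.linspace(-20, 20, 201)                        # position, mm
profile = gaussian(x, amp=1000, mu=0.0, sigma=3.0) + np.random.normal(0, 10, x.size)

popt, _ = curve_fit(gaussian, x, profile, p0=[profile.max(), 0.0, 2.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2*sqrt(2*ln2)*sigma
print(f"estimated FWHM = {fwhm:.2f} mm")
```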


Predicting Oxygen Uptake for Men with Moderate to Severe Chronic Obstructive Pulmonary Disease (COPD환자에서 6분 보행검사를 이용한 최대산소섭취량 예측)

  • Kim, Changhwan;Park, Yong Bum;Mo, Eun Kyung;Choi, Eun Hee;Nam, Hee Seung;Lee, Sung-Soon;Yoo, Young Won;Yang, Yun Jun;Moon, Joung Wha;Kim, Dong Soon;Lee, Hyang Yi;Jin, Young-Soo;Lee, Hye Young;Chun, Eun Mi
    • Tuberculosis and Respiratory Diseases / Vol. 64, No. 6 / pp.433-438 / 2008
  • Background: Measurement of the maximum oxygen uptake in patients with chronic obstructive pulmonary disease (COPD) has been used to determine the intensity of exercise and to estimate the patient's response to treatment during pulmonary rehabilitation. However, cardiopulmonary exercise testing is not widely available in Korea. The 6-minute walk test (6MWT) is a simple method of measuring a patient's exercise capacity; it provides highly reliable data and reflects fluctuations in exercise capacity relatively well when the standardized protocol is used. The prime objective of the present study is to develop a regression equation for estimating the peak oxygen uptake (VO2) in men with moderate to very severe COPD from the results of a 6MWT. Methods: A total of 33 male patients with moderate to very severe COPD agreed to participate in this study. Pulmonary function testing, cardiopulmonary exercise testing, and a 6MWT were performed on their first visits. The index of work (6Mwork = 6-minute walk distance [6MWD] × body weight) was calculated for each patient. Variables closely related to the peak VO2 were identified through correlation analysis, and with these variables the equation to predict the peak VO2 was generated by multiple linear regression. Results: The peak VO2 averaged 1,015±392 ml/min, and the mean 6MWD was 516±195 meters. The 6Mwork (r=.597) was better correlated with the peak VO2 than the 6MWD (r=.415). The other variables highly correlated with the peak VO2 were the FEV1 (r=.742), DLco (r=.734), and FVC (r=.679). The derived prediction equation was VO2 (ml/min) = (274.306 × FEV1) + (36.242 × DLco) + (0.007 × 6Mwork) - 84.867 (a worked example is given below). Conclusion: In circumstances where measurement of the peak VO2 is not possible, we consider the 6MWT to be a simple alternative for estimating the peak VO2. Of course, a trial on a much larger scale is necessary to validate our prediction equation.
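
As a worked example of the prediction equation above, a short sketch follows. The patient values (FEV1, DLco, walk distance, body weight) are illustrative assumptions, and the unit conventions are inferred from the abstract rather than stated in it.

```python
# Worked example of the prediction equation quoted in the abstract.
# Patient values below are illustrative assumptions; unit conventions
# (FEV1 in L, DLco in ml/min/mmHg, 6Mwork = 6MWD[m] x body weight[kg]) are inferred.

def predict_peak_vo2(fev1_l, dlco, six_mwd_m, body_weight_kg):
    """VO2 (ml/min) = 274.306*FEV1 + 36.242*DLco + 0.007*6Mwork - 84.867."""
    six_m_work = six_mwd_m * body_weight_kg
    return 274.306 * fev1_l + 36.242 * dlco + 0.007 * six_m_work - 84.867

# Hypothetical patient: FEV1 1.5 L, DLco 15, 6MWD 500 m, body weight 65 kg
print(f"predicted peak VO2 ~ {predict_peak_vo2(1.5, 15.0, 500.0, 65.0):.0f} ml/min")
```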