• Title/Summary/Keyword: Method of Noise Analysis


A Study on Bismuth tri-iodide for X-ray direct and digital imagers (직접방식 엑스선 검출기를 위한 $BiI_3$ 특성 연구)

  • Lee, S.H.; Kim, Y.S.; Kim, Y.B.; Jung, S.H.; Park, J.K.; Jung, W.B.; Jang, M.Y.; Mun, C.W.; Nam, S.H.
    • Journal of the Korean Society of Radiology / v.3 no.2 / pp.27-31 / 2009
  • Nowadays, medical X-ray equipment has shifted from analog systems such as film and cassettes to digital systems such as CR and DR, and research and development on digital detectors continues. In this study we used bismuth tri-iodide ($BiI_3$) as the conversion material for a direct digital X-ray detector. Although the results did not surpass those of previous studies, they open a way around the disadvantages of a-Se, namely its high applied voltage and difficult fabrication. We used $BiI_3$ powder (99.99%) as the X-ray conversion material and fabricated films 200 um thick and $3cm{\times}3cm$ in size, then deposited ITO (Indium Tin Oxide) top and bottom electrodes with a magnetron sputtering system. The electrical and structural properties of the films were evaluated: SEM analysis confirmed the surface morphology and composition, and dark current, sensitivity, and SNR (Signal-to-Noise Ratio) were measured to characterize the electrical properties. The dark current was $1.6nA/cm^2$ and the sensitivity $0.629nC/cm^2$, showing that an X-ray conversion layer made by screen printing has electrical properties similar to, or better than, one made by PVD. These results suggest that $BiI_3$ is a suitable replacement for a-Se, offering a simpler manufacturing process and improved yield. (A toy sketch of these figures of merit follows this entry.)

  • PDF
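
A minimal, illustrative calculation of the quantities quoted above: dark-current density, sensitivity, and SNR. The raw readings and the SNR definition (mean signal over the standard deviation of repeated readings) are assumptions for the sketch, not the authors' measurement protocol.

```python
import numpy as np

area_cm2 = 3.0 * 3.0                      # 3 cm x 3 cm film

dark_current_a = 14.4e-9                  # assumed raw dark reading (A)
print(f"dark current: {dark_current_a / area_cm2 * 1e9:.2f} nA/cm^2")   # -> 1.60

collected_charge_c = 5.661e-9             # assumed collected charge per exposure (C)
print(f"sensitivity: {collected_charge_c / area_cm2 * 1e9:.3f} nC/cm^2")  # -> 0.629

# SNR taken as mean signal over the spread of repeated exposures (assumed data).
readings_nc = np.array([5.60, 5.72, 5.65, 5.68, 5.63])
print(f"SNR: {readings_nc.mean() / readings_nc.std():.1f}")
```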

The study of quantitative analytical method for pH and moisture of Hanji record paper using non-destructive FT-NIR spectroscopy (비파괴 분석 방법인 푸리에 변환 근적외선 분광 분석을 이용한 한지 기록물의 산성도 및 함수율 정량 분석 연구)

  • Shin, Yong-Min; Park, Soung-Be; Lee, Chang-Yong; Kim, Chan-Bong; Lee, Seong-Uk; Cho, Won-Bo; Kim, Hyo-Jin
    • Analytical Science and Technology / v.25 no.2 / pp.121-126 / 2012
  • It is essential to evaluate the quality of Hanji record paper without the damage caused by conventional destructive methods. The samples were Hanji record papers produced in the 1900s. A near-infrared (NIR) spectrometer was used as the non-destructive method for evaluating paper quality: a Fourier-transform (FT) spectrometer covering the 12,500 to 4,000 $cm^{-1}$ wavenumber range, which offers high accuracy and a good signal-to-noise ratio for quantitative analysis. The acidity and moisture content of the Hanji record papers were measured with an integrating sphere in diffuse-reflectance mode. Acidity (pH), a chemical quality factor of Hanji, was correlated with the NIR spectra, which were pretreated to obtain the optimum correlation: multiplicative scatter correction (MSC) and the Savitzky-Golay first derivative were used as pretreatments, and the correlations were computed by partial least squares regression (PLSR). Without pretreatment, the correlation coefficient ($R^2$) for acidity was 0.92 with a standard error of prediction (SEP) of 0.24; with pretreatment, $R^2$ improved to 0.98 and the SEP to 0.19. For moisture content, the linearity without pretreatment was higher than with pretreatment (MSC, first derivative); at best, $R^2$ was 0.99 and the SEP 0.45. These results indicate that an FT-NIR analyzer with an integrating sphere is well suited to rapid, non-destructive quality evaluation of Hanji record papers. (A minimal sketch of the PLSR workflow follows this entry.)
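
A minimal sketch (simulated spectra and assumed settings, not the authors' pipeline) of the workflow described above: multiplicative scatter correction, a Savitzky-Golay first derivative, then PLSR of pH on the spectra.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
axis = np.linspace(0.0, 1.0, 850)                 # normalized wavenumber axis (assumed)
band = np.exp(-((axis - 0.5) ** 2) / 0.02)        # one shared absorption band
ph = rng.uniform(4.0, 7.0, 40)                    # assumed reference pH values
spectra = 1.0 + np.outer(0.1 * ph, band) + rng.normal(0, 0.01, (40, 850))

def msc(X):
    """Multiplicative scatter correction: fit each spectrum to the mean
    spectrum and remove the multiplicative slope and additive offset."""
    ref = X.mean(axis=0)
    out = np.empty_like(X)
    for i, row in enumerate(X):
        slope, intercept = np.polyfit(ref, row, 1)
        out[i] = (row - intercept) / slope
    return out

X = savgol_filter(msc(spectra), window_length=11, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=5)               # assumed number of latent variables
pls.fit(X, ph)
pred = pls.predict(X).ravel()
err = np.sqrt(np.mean((pred - ph) ** 2))          # prediction error (here in-sample)
print(f"in-sample error: {err:.2f} pH units")
```

In the paper the SEP is computed on a held-out prediction set; the in-sample figure here only illustrates the mechanics.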

The Study on Risk Factors Analysis and Improvement of VDT Syndrome in Nuclear Medicine (핵의학과 Video Display Terminals Syndrome 유해 요인 조사 및 개선에 관한 연구)

  • Kim, Jung-Soo; Kim, Seung-Jeong; Lee, Hong-Jae; Kim, Jin-Eui; Kim, Hyun-Joo; Han, In-Im; Joo, Yung-Soo
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.61-66 / 2010
  • Purpose: Departments of nuclear medicine have become increasingly concerned with Video Display Terminal (VDT) syndrome, which includes musculoskeletal disorders, ophthalmologic disorders, electromagnetic-wave complaints, and stress disorders among VDT workers, as the number of users grows and equipment is supplied in large quantities. This study surveyed the actual conditions related to VDT syndrome in the Department of Nuclear Medicine, Seoul National University Hospital (SNUH), identified problems, and drew up a plan for improvement. Its longer-term aim is to establish awareness of VDT syndrome and to prevent it. Materials and Methods: The Department of Nuclear Medicine at SNUH comprises the main department, the pediatric section, and the PET center. We assessed risk factors by visiting each section directly, using the "Checklist for VDT work" of the Wonjin Working Environment Health Laboratory, which covers VDT working conditions, work tables, chairs, keyboards, monitors, working posture, health management, and other aspects of the working environment. The analysis was verified by the Department of Occupational and Environmental Medicine, Hallym University Sacred Heart Hospital. Results: Overall, VDT conditions in the department were fairly good. Recently purchased work tables were ergonomically suitable for their users, but 15% of existing tables fell below the standard. Chairs were generally suitable, although 5% had deteriorated with age and lost their original function. Keyboards met 98% of the standard. All monitors allowed screen-angle adjustment, but 38% could not be repositioned. For working posture, 10% of workers held a fixed position for long periods, and several checklist items fell short of the standard. Health management needed improvement. Other conditions such as lighting, temperature, noise, and ventilation showed some problems but met the recommended values. Conclusion: VDT syndrome can occur continuously, is costly to remedy, arises from multiple causes, and develops unnoticed, so a systematic management system is needed. In nuclear medicine, sustained attention and effort, including ergonomic improvement of the working environment, improved working procedures, regular exercise, and steady stretching, can alleviate VDT syndrome and largely prevent it. This keeps workers in good physical and mental condition in a comfortable working environment, which in turn should increase operational efficiency and the satisfaction of internal clients.

  • PDF

Principle and Recent Advances of Neuroactivation Study (신경 활성화 연구의 원리와 최근 동향)

  • Kang, Eun-Joo
    • Nuclear Medicine and Molecular Imaging / v.41 no.2 / pp.172-180 / 2007
  • Among the nuclear medicine imaging methods available today, $H_2^{15}O$-PET is the most widely used by cognitive neuroscientists to examine regional brain function via measurement of regional cerebral blood flow (rCBF). The short half-life of the radioactive label, $^{15}O$, often allows repeated measurements from the same subjects under many different task conditions. $H_2^{15}O$-PET, however, has technical limitations relative to other functional neuroimaging methods such as fMRI, including relatively poor temporal and spatial resolution and, frequently, insufficient statistical power for single-subject analysis. Recent technical developments, such as 3-D acquisition, provide relatively good image quality with a smaller radioactive dose, which in turn permits more PET scans per individual and thus sufficient statistical power for analyzing individual subjects' data. Furthermore, the noise-free scanner environment of $H_2^{15}O$-PET, along with discrete acquisition of data for each task condition, is an important advantage of PET over other functional imaging methods for studying state-dependent changes in brain activity. This review presents both the limitations and advantages of $^{15}O$-PET and outlines the design of efficient PET protocols, using examples of recent PET studies in both the normal healthy population and clinical populations.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok; Yang, Seok Woo; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one way to handle big data in text mining. Because the density of the data strongly influences sentence-classification performance, and high-dimensional data demands heavy computation, an unreduced representation can lead to high computational cost and overfitting; a dimension-reduction step is therefore necessary to improve model performance. Proposed methods range from simply removing noise such as misspellings and informal text to incorporating semantic and syntactic information. Moreover, how text features are represented and selected affects classifier performance in sentence classification, one of the core tasks of Natural Language Processing. The common goal of dimension reduction is to find a latent space that represents the raw data in observation space. Existing approaches use algorithms for feature extraction and feature selection, as well as word embeddings, which learn low-dimensional vector representations of words that capture semantic and syntactic information. To improve performance, recent studies have modified the word dictionary according to positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature-selection algorithm marks certain words as unimportant, words similar to them should also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec. To find words of low importance in the text, we use information gain to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain from the raw text and build word embeddings. Second, we additionally remove words similar to those with low information gain and build word embeddings. The filtered text and embeddings are then fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. The datasets are customer reviews of Kindle products on Amazon.com, IMDB, and Yelp, each classified with the deep learning models. Reviews with more than five helpful votes and a helpful-vote ratio above 70% were classified as helpful; since Yelp provides only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared them against Word2Vec and GloVe embeddings built from all words, and showed that one of the proposed methods outperforms the all-word embeddings: removing unimportant words improves performance, although removing too many words lowers it.
For future research, diverse preprocessing schemes and an in-depth analysis of word co-occurrence for measuring similarity between words should be considered. We also applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations of embedding and elimination methods remain to be explored. (A minimal sketch of the elimination step follows this entry.)
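
A minimal sketch (toy corpus, assumed thresholds, not the authors' code) of the second proposed method: drop words with low information gain, then also drop words whose Word2Vec vectors are cosine-similar to the dropped ones.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["the battery life is great", "terrible screen and slow delivery",
        "great value for the price", "slow and terrible support"]
labels = np.array([1, 0, 1, 0])  # 1 = helpful, 0 = not helpful

# Information gain of each word w.r.t. the class label (binary presence).
vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True)
vocab = np.array(vec.get_feature_names_out())

low_ig = set(vocab[ig < np.percentile(ig, 25)])   # assumed cutoff: bottom 25%

# Expand the elimination set with cosine-similar words from Word2Vec.
sentences = [d.split() for d in docs]
w2v = Word2Vec(sentences, vector_size=50, min_count=1, seed=0)
expanded = set(low_ig)
for w in low_ig:
    if w in w2v.wv:
        expanded |= {s for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.5}

filtered = [" ".join(t for t in d.split() if t not in expanded) for d in docs]
print(filtered)   # text with low-information and similar words removed
```

The filtered text would then be re-embedded and fed to the CNN or attention-based BiLSTM classifier; the information-gain cutoff and similarity threshold are the knobs the paper tunes against over-elimination.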

Development and Animal Tests of Prototype Oxygen Concentrator (국산 산소 농축기의 개발 및 동물실험)

  • 변정욱; 성숙환; 이태수
    • Journal of Chest Surgery / v.31 no.7 / pp.643-649 / 1998
  • Background: For patients with chronic obstructive pulmonary disease requiring long-term oxygen therapy, oxygen concentrators are already widely available for home use. In this study, we used mongrel dogs as test subjects to compare the functional efficiency and safety of an oxygen concentrator developed by our own research team with those of the imported FORLIFE(TM) machine made by AIRSEP Corp. Material and Method: To test mechanical reliability, the delivered oxygen concentration was measured after 4 hours of continuous operation. Sixteen mongrel dogs were divided into two equal groups: group A received oxygen from the imported concentrator and group B from the machine we developed. Oxygen was given at 5 L/min, after which vital signs were analyzed, arterial blood gases measured, and blood chemistry tests carried out. Results: After 4 hours of continuous operation the imported model performed better, delivering $98{\pm}3$% oxygen compared with $91{\pm}1$% from our model. In the animal experiments, oxygen concentrations measured at the inlet of the face mask 1, 2, 3, and 4 hours after the start of administration showed no statistically significant difference (repeated-measures analysis of variance, p=0.70) between group A ($70.6{\pm}2.5$%, $67.1{\pm}2.9$%, $68.2{\pm}2.6$%, and $64.9{\pm}3.9$%) and group B ($65.1{\pm}4.8$%, $65.2{\pm}3.6$%, $68.7{\pm}4.3$%, and $66.0{\pm}5.0$%). Arterial partial pressures of oxygen before administration and at 1, 2, 3, and 4 hours afterwards were $87.2{\pm}2.5$, $347.4{\pm}29.3$, $353.4{\pm}21.2$, $343.0{\pm}28.8$, and $321.6{\pm}24.4$ mmHg in group A, not statistically different (p=0.24) from the $102.5{\pm}9.6$, $300.3{\pm}17.1$, $321.6{\pm}23.7$, $303.4{\pm}27.4$, and $273.5{\pm}25.9$ mmHg read in group B, although the values appear somewhat higher in the dogs given oxygen from the imported concentrator. Conclusions: The prototype oxygen concentrator appears to function relatively satisfactorily compared with the imported, established model, but the excessive noise it generates and its poor long-term endurance and consistency need improvement.

  • PDF

High-resolution Spiral-scan Imaging at 3 Tesla MRI (3.0 Tesla 자기공명영상시스템에서 고 해상도 나선주사영상)

  • Kim, P.K.; Lim, J.W.; Kang, S.W.; Cho, S.H.; Jeon, S.Y.; Lim, H.J.; Park, H.C.; Oh, S.J.; Lee, H.K.; Ahn, C.B.
    • Investigative Magnetic Resonance Imaging / v.10 no.2 / pp.108-116 / 2006
  • Purpose: High-resolution spiral-scan imaging was performed on a 3 Tesla MRI system. Since the gradient waveforms for spiral-scan imaging have lower slopes than those for Echo Planar Imaging (EPI), they can be implemented with gradient systems having lower slew rates, and the smooth waveforms also induce fewer eddy currents. Spiral-scan imaging does not suffer from the high specific absorption rate (SAR) that is one of the main obstacles in high-field imaging for rf-echo-based fast imaging methods such as fast spin echo. Spiral scanning therefore has great potential for high-speed imaging at high magnetic fields. In this paper we present various high-resolution images obtained by spiral-scan methods on a 3T MRI system for various applications. Materials and Methods: A high-resolution spiral-scan imaging technique was implemented on a 3T whole-body MRI system. An efficient and fast higher-order shimming technique was developed to reduce field inhomogeneity, and single-shot and interleaved spiral-scan imaging methods were developed. Spin-echo and gradient-echo based spiral-scan sequences were implemented, with image contrast and signal-to-noise ratio controlled by the echo time, repetition time, and rf flip angles. Results: Spiral-scan images of various resolutions were obtained on the 3T system. Since the absolute magnitude of field inhomogeneity increases at higher fields, higher-order shimming becomes more important. A fast shimming technique in which axial, sagittal, and coronal inhomogeneity maps are obtained in a single scan was developed, and shimming based on a spherical-harmonic analysis of the inhomogeneity map was applied. For phantom and in-vivo head imaging, a matrix size of about $100{\times}100$ was obtained by single-shot spiral imaging, and a matrix size of $256{\times}256$ by interleaved spiral imaging with 6 to 12 interleaves. Conclusion: High-field imaging is becoming increasingly important due to improved signal-to-noise ratio, larger spectral separation, and higher BOLD-based contrast, but increasing SAR is a limiting factor. Since spiral-scan imaging has a very low SAR and lower hardware requirements than EPI, it is suitable for rapid imaging at high fields. We developed spiral-scan imaging with resolutions from $100{\times}100$ to $256{\times}256$, controlled by the number of interleaves, for high-speed imaging at high magnetic fields. (A toy trajectory sketch follows this entry.)

  • PDF
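
A toy sketch of an interleaved Archimedean spiral k-space trajectory of the kind used above; the sample count, number of turns, and number of interleaves are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def spiral_trajectory(n_samples=2048, n_turns=16, n_interleaves=8, kmax=1.0):
    """Return complex k-space samples (kx + i*ky), one row per interleave."""
    tau = np.linspace(0.0, 1.0, n_samples)                  # normalized time
    arm = kmax * tau * np.exp(2j * np.pi * n_turns * tau)   # one spiral arm
    rot = np.exp(2j * np.pi * np.arange(n_interleaves) / n_interleaves)
    return arm[None, :] * rot[:, None]                      # rotated copies

k = spiral_trajectory()
# The gradient waveform is proportional to dk/dt; its smoothness (relative
# to EPI's trapezoids) is why spiral scans tolerate lower slew rates.
g = np.diff(k, axis=1)
print(k.shape, np.abs(g).max())
```

More interleaves shorten each readout arm, which is how the paper pushes the matrix size from $100{\times}100$ (single-shot) up to $256{\times}256$ (6 to 12 interleaves).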

Accuracy of HF radar-derived surface current data in the coastal waters off the Keum River estuary (금강하구 연안역에서 HF radar로 측정한 유속의 정확도)

  • Lee, S.H.; Moon, H.B.; Baek, H.Y.; Kim, C.S.; Son, Y.T.; Kwon, H.K.; Choi, B.J.
    • The Sea: Journal of the Korean Society of Oceanography / v.13 no.1 / pp.42-55 / 2008
  • To evaluate the accuracy of currents measured by HF radar in the coastal waters off the Keum River estuary, we compared the facing radial vectors of the two HF radars, and the HF radar-derived currents with in-situ current measurements. Principal component analysis was used to extract the regression line and RMS deviation in each comparison (a minimal sketch follows this entry). When the two facing radars' radial vectors at the mid-point of the baseline are compared, the RMS deviation is 4.4 cm/s in winter and 5.4 cm/s in summer. When the GDOP (Geometric Dilution of Precision) effect is removed from the RMS deviations obtained by comparing HF radar-derived and current-meter-measured currents, the error of the combined HF radar-derived velocity is less than 5.1 cm/s at stations with moderate GDOP values. These two results, obtained by different methods, suggest that the lower limit of the HF radar-derived current accuracy is 5.4 cm/s in our study area. As noted in previous research, RMS deviations become large at stations located near the islands and increase with mean distance from the radar site, owing to the decrease in signal-to-noise level and in the intersection angle of the radial vectors. We found that an uncertain error bound for the HF radar-derived current can result from separating the RMS deviations using GDOP values when the GDOP values of the two components are very close and the RMS deviations obtained from the component comparisons are also close. When the currents measured at stations with moderate GDOP values are separated into tidal and subtidal components, the tidal current ellipses derived from the HF radar currents agree well with those from the current-meter measurements, and the time variation of the subtidal current reflects physical processes driven by wind and the density field.
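
A minimal sketch (simulated data, not the authors' code) of the principal component analysis used above to extract a regression line and RMS deviation when comparing two velocity series, such as the facing radars' radial vectors at the baseline mid-point.

```python
import numpy as np

rng = np.random.default_rng(0)
u1 = rng.normal(0.0, 20.0, 500)                 # radar 1 radial velocity (cm/s)
u2 = 0.95 * u1 + rng.normal(0.0, 5.0, 500)      # radar 2: correlated + noise

X = np.column_stack([u1 - u1.mean(), u2 - u2.mean()])
cov = X.T @ X / (len(X) - 1)
eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order

slope = eigvec[1, 1] / eigvec[0, 1]             # major-axis (PCA) regression slope
rms_dev = np.sqrt(eigval[0])                    # scatter about the major axis
print(f"slope = {slope:.2f}, RMS deviation = {rms_dev:.1f} cm/s")
```

Unlike ordinary least squares, the major-axis fit treats both series as noisy, which is why it suits radar-versus-radar and radar-versus-current-meter comparisons.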

Variation on Estimated Values of Radioactivity Concentration According to the Change of the Acquisition Time of SPECT/CT (SPECT/CT의 획득시간 증감에 따른 방사능농도 추정치의 변화)

  • Kim, Ji-Hyeon; Lee, Jooyoung; Son, Hyeon-Soo; Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology / v.25 no.2 / pp.15-24 / 2021
  • Purpose: In the early stages of its dissemination, SPECT/CT was noted for its excellent correction methods and the qualitative value of its fusion images; with the recent introduction of companion diagnostics and therapeutics (theranostics), interest in its quantitative capabilities has been increasing. Unlike PET/CT, conditions such as the collimator type and detector rotation make image acquisition and reconstruction challenging for absolute quantification in SPECT/CT. This study therefore examines how the radioactivity concentration estimate is affected when the total acquisition time is increased or decreased via either the number of projections or the acquisition time per projection. Materials and Methods: After filling a 9,293 ml cylindrical phantom with sterile water and diluting 91.76 MBq of $^{99m}Tc$ into it, a standard image was acquired with a total acquisition time of 600 sec (10 sec/frame × 120 frames, matrix size 128 × 128), and the volume sensitivity and calibration factor were verified. Based on the standard image, comparative images were obtained with the total acquisition time increased or decreased to 60 (-90%), 150 (-75%), 300 (-50%), 450 (-25%), 900 (+50%), and 1,200 (+100%) sec. For each condition, either the acquisition time per projection was set to 1.0, 2.5, 5.0, 7.5, 15.0, or 20.0 sec (number of projections fixed at 120 frames), or the number of projections was set to 12, 30, 60, 90, 180, or 240 frames (time per projection fixed at 10 sec). From the counts measured in a volume of interest in each image, the percentage change in contrast-to-noise ratio (CNR) served as the qualitative assessment, and the percentage change in the radioactivity concentration estimate as the quantitative assessment. The relationship between the estimated radioactivity concentration (cps/ml) and the actual radioactivity concentration (Bq/ml) was compared using the recovery coefficient (RC) as an indicator. Results: The results [CNR, radioactivity concentration, RC] for changes in the number of projections at each total-acquisition-time change (-90%, -75%, -50%, -25%, +50%, +100%) were: [-89.5%, +3.90%, 1.04] at -90%; [-77.9%, +2.71%, 1.03] at -75%; [-55.6%, +1.85%, 1.02] at -50%; [-33.6%, +1.37%, 1.01] at -25%; [-33.7%, +0.71%, 1.01] at +50%; and [+93.2%, +0.32%, 1.00] at +100%. The corresponding results for changes in the acquisition time per projection were: [-89.3%, -3.55%, 0.96] at -90%; [-73.4%, -0.17%, 1.00] at -75%; [-49.6%, -0.34%, 1.00] at -50%; [-24.9%, 0.03%, 1.00] at -25%; [+49.3%, -0.04%, 1.00] at +50%; and [+99.0%, +0.11%, 1.00] at +100%.
Conclusion: In SPECT/CT, the total counts obtained and the resulting image quality (CNR) changed in proportion to the increase or decrease in total acquisition time, whereas the quantitative estimates from absolute quantification varied by less than 5% (-3.55% to +3.90%) under all experimental conditions, maintaining quantitative accuracy (RC 0.96 to 1.04). This suggests that reducing the total acquisition time is applicable to quantitative analysis without significant loss and is clinically useful. The study also shows that, for the same total scanning time, changing the acquisition time per projection produces smaller qualitative and quantitative fluctuations than changing the number of projections. (A toy sketch of the CNR and RC calculations follows this entry.)
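
A toy sketch (assumed formulas and numbers, not the authors' processing chain) of the two figures of merit used above: contrast-to-noise ratio from volume-of-interest counts, and the recovery coefficient relating the estimated radioactivity concentration to the true one.

```python
import numpy as np

def cnr(voi_counts, bkg_counts):
    """CNR = (mean VOI counts - mean background counts) / background SD."""
    return (voi_counts.mean() - bkg_counts.mean()) / bkg_counts.std()

def recovery_coefficient(estimated_bq_ml, true_bq_ml):
    """RC = estimated / true radioactivity concentration (1.0 is ideal)."""
    return estimated_bq_ml / true_bq_ml

rng = np.random.default_rng(0)
voi = rng.poisson(100, 1000).astype(float)   # assumed VOI voxel counts
bkg = rng.poisson(20, 1000).astype(float)    # assumed background voxel counts
print(f"CNR = {cnr(voi, bkg):.1f}")

# 91.76 MBq diluted in the 9,293 ml phantom gives the true concentration;
# the estimate would come from image counts (cps/ml) via the calibration factor.
true_conc = 91.76e6 / 9293.0                 # Bq/ml
est_conc = 0.99 * true_conc                  # assumed image-derived estimate
print(f"RC = {recovery_coefficient(est_conc, true_conc):.2f}")
```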

Technical Inefficiency in Korea's Manufacturing Industries (한국(韓國) 제조업(製造業)의 기술적(技術的) 효율성(效率性) : 산업별(産業別) 기술적(技術的) 효율성(效率性)의 추정(推定))

  • Yoo, Seong-min; Lee, In-chan
    • KDI Journal of Economic Policy / v.12 no.2 / pp.51-79 / 1990
  • Research on technical efficiency, an important dimension of market performance, received little attention from industrial-organization empiricists until recently, because traditional microeconomic theory simply assumed away any form of inefficiency in production. Recently, however, a growing number of studies have addressed questions such as: To what extent does technical inefficiency exist in the production activities of firms and plants? What factors account for the level of inefficiency found, and for interindustry differences in technical inefficiency? Are there significant international differences in levels of technical efficiency and, if so, how can these be reconciled with the observed pattern of international trade? As the first in a series of studies on the technical efficiency of Korea's manufacturing industries, this paper attempts to answer some of these questions. Since estimating technical efficiency requires plant-level data for each of the five-digit KSIC industries available from the Census of Manufactures, the findings may be construed as empirical evidence on technical efficiency in Korea's manufacturing industries at the most disaggregated level. We start by clarifying the relationships among the various concepts of efficiency: allocative efficiency, factor-price efficiency, technical efficiency, Leibenstein's X-efficiency, and scale efficiency. It then becomes clear that unless certain ceteris paribus assumptions are satisfied, our estimates of technical inefficiency are in fact related to factor-price inefficiency as well. The empirical model is a stochastic frontier production function, which divides the stochastic term into two components: one with a symmetric distribution for pure white noise, and one with an asymmetric distribution for technical inefficiency. A translog production function is assumed for the relationship between inputs and output and is estimated by corrected ordinary least squares; the second and third sample moments of the regression residuals then yield estimates of four different measures of technical (in)efficiency (a compact sketch follows this entry). The manufacturing industries divide into two groups according to whether the distribution of the estimated residuals allows a successful estimation of technical efficiency; the regression using value added as the dependent variable gives a greater number of "successful" industries than the one using gross output. The correlation among estimates of the different efficiency measures appears to be high, while estimates based on different regression equations seem almost uncorrelated. Thus, in the subsequent analysis of the determinants of interindustry variation in technical efficiency, the choice of regression equation in the first stage significantly affects the outcome.

  • PDF
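
A compact sketch (simulated data, with a Cobb-Douglas form standing in for the paper's translog) of corrected OLS for a stochastic frontier: fit the production function by OLS, then use the second and third central moments of the residuals to split the error into symmetric noise $v \sim N(0, \sigma_v^2)$ and one-sided inefficiency $u \sim |N(0, \sigma_u^2)|$, via $\mu_3 = -\sqrt{2/\pi}\,(4/\pi - 1)\,\sigma_u^3$ and $\mu_2 = \sigma_v^2 + (1 - 2/\pi)\,\sigma_u^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
log_k, log_l = rng.normal(3, 1, n), rng.normal(4, 1, n)
u = np.abs(rng.normal(0, 0.3, n))                   # true (simulated) inefficiency
log_y = 1.0 + 0.4 * log_k + 0.6 * log_l + rng.normal(0, 0.2, n) - u

X = np.column_stack([np.ones(n), log_k, log_l])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)    # OLS fit of the frontier
resid = log_y - X @ beta

m2 = np.mean((resid - resid.mean()) ** 2)
m3 = np.mean((resid - resid.mean()) ** 3)           # negative when u is present

c = np.sqrt(2 / np.pi) * (4 / np.pi - 1)
sigma_u = (max(-m3, 0.0) / c) ** (1 / 3)            # half-normal scale of u
sigma_v2 = m2 - (1 - 2 / np.pi) * sigma_u ** 2      # variance of pure noise
mean_ineff = sigma_u * np.sqrt(2 / np.pi)           # E[u]
print(f"sigma_u = {sigma_u:.3f}, sigma_v = {np.sqrt(max(sigma_v2, 0)):.3f}, "
      f"mean technical efficiency = {np.exp(-mean_ineff):.3f}")
```

When the residuals happen to be right-skewed ($m_3 > 0$), the decomposition fails, which is exactly the paper's split into "successful" and "unsuccessful" industries.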