• Title/Summary/Keyword: Random noise


Evaluation of Reasonable $^{18}F$-FDG Injected Dose for Maintaining the Image Quality in 3D WB PET/CT (PET/CT 검사에서 영상의 질을 유지하기 위한 적정한 $^{18}F$-FDG 투여량의 평가)

  • Moon, A-Reum;Lee, Hyuk;Kwak, In-Suk;Choi, Sung-Wook;Suk, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.2
    • /
    • pp.36-40
    • /
    • 2011
  • Purpose: The $^{18}F$-FDG dose injected into patients differs considerably between the manufacturer's recommendation and the actual doses applied at individual hospitals. Injection of an inappropriate $^{18}F$-FDG dose may not only increase the radiation dose to the patient but also degrade image quality. We therefore evaluated the proper $^{18}F$-FDG injected dose for reducing patient exposure while maintaining image quality. Materials and Methods: A NEMA NU2-1994 phantom was filled with $^{18}F$-FDG, increasing the hot cylinder radioactivity concentration through 1, 3, 5, 7, and 9 MBq/kg at a fixed 4:1 ratio between hot cylinder and background activity. After a CT-based transmission scan, emission data were acquired in 3D mode for 2 minutes 30 seconds per bed. ROIs were placed on the hot cylinder and the background region; $SUV_{max}$ was measured in each ROI and the SNR at those points was analyzed. For the clinical study, 97 patients without hepatic lesions who visited SMC between November 2009 and August 2010 were selected. ROIs were placed in the liver and the thigh, $SUV_{max}$ was measured, and image quality was compared across injected doses. Results: In the phantom study, as the injected radioactivity concentration per unit mass increased through 1, 3, 5, 7, and 9 MBq/kg, $SUV_{max}$ was 23.1, 24.1, 24.3, 22.8, and 23.6, and SNR was 0.48, 0.54, 0.56, 0.55, and 0.55. With increasing injected dose, $SUV_{max}$ and SNR increased up to 5 MBq/kg but decreased above 7 MBq/kg. In the clinical study, as the injected radioactivity concentration per unit mass increased through 4.72, 5.34, 6.16, 7.41, and 8.68 MBq/kg, $SUV_{max}$ was 2.68, 2.67, 2.26, 1.88, and 1.95, and SNR was 0.52, 0.53, 0.46, 0.46, and 0.44. When the injected dose exceeded 5 MBq/kg, the same decreasing pattern as in the phantom study was observed. Conclusion: Increasing the $^{18}F$-FDG injected dose according to the patient's body weight improves image quality within a certain range; beyond that range, image quality can be degraded by random and scatter coincidences. This study indicates that the optimal injected radioactivity concentration per unit mass is 5 MBq/kg in 3D WB PET/CT.
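
The abstract above reports $SUV_{max}$ and SNR per ROI without spelling out the formulas. The sketch below uses common conventions (the standard SUV definition, and SNR as contrast over background noise); the function names and the exact SNR convention are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def suv(tissue_kbq_per_ml, injected_mbq, weight_kg):
    # SUV = C_tissue [kBq/mL] / (injected dose [MBq] / body weight [kg]);
    # MBq/kg equals kBq/g, so with tissue density ~1 g/mL the ratio is unitless.
    return tissue_kbq_per_ml / (injected_mbq / weight_kg)

def snr(hot_roi, background_roi):
    # One common convention: (mean hot signal - mean background) / background SD.
    hot = np.asarray(hot_roi, dtype=float)
    bg = np.asarray(background_roi, dtype=float)
    return (hot.mean() - bg.mean()) / bg.std(ddof=1)
```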


Detection of Hepatic Lesion: Comparison of Free-Breathing and Respiratory-Triggered Diffusion-Weighted MR imaging on 1.5-T MR system (국소 간 병변의 발견: 1.5-T 자기공명영상에서의 자유호흡과 호흡유발 확산강조 영상의 비교)

  • Park, Hye-Young;Cho, Hyeon-Je;Kim, Eun-Mi;Hur, Gham;Kim, Yong-Hoon;Lee, Byung-Hoon
    • Investigative Magnetic Resonance Imaging
    • /
    • v.15 no.1
    • /
    • pp.22-31
    • /
    • 2011
  • Purpose: To compare free-breathing and respiratory-triggered diffusion-weighted imaging on a 1.5-T MR system for the detection of hepatic lesions. Materials and Methods: This single-institution study was approved by our institutional review board. Forty-seven patients (mean age, 57.9 years; M:F = 25:22) underwent hepatic MR imaging on a 1.5-T MR system using both free-breathing and respiratory-triggered diffusion-weighted imaging (DWI) in a single examination. Two radiologists retrospectively reviewed the respiratory-triggered and free-breathing sets (B50, B400, and B800 diffusion-weighted images and the ADC map) in random order with a time interval of 2 weeks. Liver SNR and lesion-to-liver CNR of DWI were calculated from ROI measurements. Results: A total of 62 lesions (53 benign, 9 malignant), including 32 cysts, 13 hemangiomas, 7 hepatocellular carcinomas (HCCs), 5 eosinophilic infiltrations, 2 metastases, 1 eosinophilic abscess, 1 focal nodular hyperplasia, and 1 pseudolipoma of Glisson's capsule, were reviewed by the two reviewers. Though not reaching statistical significance, overall lesion sensitivity was higher with respiratory-triggered DWI [reviewer 1 : reviewer 2, 47/62 (75.81%) : 45/62 (72.58%)] than with free-breathing DWI [44/62 (70.97%) : 41/62 (66.13%)]. Especially for hepatic lesions smaller than 1 cm, the sensitivity of respiratory-triggered DWI [24/30 (80%) : 21/30 (70%)] was superior to that of free-breathing DWI [17/30 (56.7%) : 15/30 (50%)]. The diagnostic accuracy measured as the area under the ROC curve (Az value) did not differ significantly between free-breathing and respiratory-triggered DWI. Liver SNR and lesion-to-liver CNR of respiratory-triggered DWI ($87.6{\pm}41.4$, $41.2{\pm}62.5$) were higher than those of free-breathing DWI ($38.8{\pm}13.6$, $24.8{\pm}36.8$) (p < 0.001 for both). Conclusion: Respiratory-triggered diffusion-weighted MR imaging appears better than free-breathing diffusion-weighted MR imaging on a 1.5-T MR system for detecting lesions smaller than 1 cm, by providing higher SNR and CNR.
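
For reference, a minimal sketch of the SNR and lesion-to-liver CNR measures used above, under the common convention that both are normalized by the background noise SD; the paper does not state its exact formulas, so these definitions and names are assumptions.

```python
import numpy as np

def liver_snr(liver_roi, noise_roi):
    # SNR = mean liver signal / SD of background noise
    return np.mean(liver_roi) / np.std(noise_roi, ddof=1)

def lesion_to_liver_cnr(lesion_roi, liver_roi, noise_roi):
    # CNR = (mean lesion signal - mean liver signal) / SD of background noise
    return (np.mean(lesion_roi) - np.mean(liver_roi)) / np.std(noise_roi, ddof=1)
```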

Adaptive Data Hiding Techniques for Secure Communication of Images (영상 보안통신을 위한 적응적인 데이터 은닉 기술)

  • 서영호;김수민;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5C
    • /
    • pp.664-672
    • /
    • 2004
  • Widespread popularity of wireless data communication devices, coupled with the availability of higher bandwidths, has led to an increased user demand for content-rich media such as images and videos. Since such content often tends to be private, sensitive, or paid for, there exists a requirement for securing such communication. However, solutions that rely only on traditional compute-intensive security mechanisms are unsuitable for resource-constrained wireless and embedded devices. In this paper, we propose a selective partial image encryption scheme for image data hiding, which enables highly efficient secure communication of image data to and from resource-constrained wireless devices. The encryption scheme is invoked during the image compression process, with the encryption being performed between the quantizer and the entropy coder stages. Three data selection schemes are proposed: subband selection, data bit selection, and random selection. We show that these schemes make secure communication of images feasible for constrained embedded devices. In addition, we demonstrate how these schemes can be dynamically configured to trade off the amount of data hiding achieved against the computation requirements imposed on the wireless devices. Experiments conducted on over 500 test images reveal that, by using our techniques, the fraction of data to be encrypted varies between 0.0244% and 0.39% of the original image size. The peak signal-to-noise ratios (PSNR) of the encrypted images were observed to vary between about 9.5 dB and 7.5 dB. In addition, visual tests indicate that our schemes are capable of providing a high degree of data hiding at much lower computational cost.
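
The abstract describes selective encryption applied between the quantizer and the entropy coder, with subband, bit, and random selection rules. The toy sketch below conveys only the general idea by sign-flipping selected quantized coefficients with a keystream; the selection rule, the hash-based keystream, and all names are illustrative assumptions rather than the paper's scheme, and the keystream is not a vetted cipher.

```python
import hashlib
import numpy as np

def keystream(key: bytes, nbytes: int) -> np.ndarray:
    # Toy keystream: SHA-256 in counter mode. Illustrative only -- a real
    # system would use a vetted stream cipher (e.g., AES in CTR mode).
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return np.frombuffer(bytes(out[:nbytes]), dtype=np.uint8)

def toggle_sign_bits(quantized: np.ndarray, key: bytes) -> np.ndarray:
    # Encrypt/decrypt (the operation is its own inverse) by flipping the
    # signs of selected quantized coefficients between the quantizer and
    # the entropy coder. Here the "selection" is all nonzero coefficients,
    # standing in for the paper's subband / bit / random selection rules.
    coeffs = quantized.copy()            # expects a signed integer array
    nz = np.flatnonzero(coeffs)
    bits = keystream(key, len(nz)) & 1   # one pseudo-random bit per coefficient
    flip = nz[bits.astype(bool)]
    coeffs.flat[flip] *= -1
    return coeffs
```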

Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.3
    • /
    • pp.259-267
    • /
    • 2004
  • Purpose: NEMA NU2-2001 was proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count rate performance in 2D and 3D mode, was evaluated using this new standard method. Methods: The ECAT EXACT 47 is a BGO-crystal PET scanner covering an axial field of view (FOV) of 16.2 cm. Retractable septa allow 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns, energy window: $250{\sim}650$ keV). For the spatial resolution measurement, an F-18 point source was placed at the center of the axial FOV and at a position one fourth of the axial FOV from the center, at transaxial positions (a) x=0, y=1 cm; (b) x=0, y=10 cm; (c) x=10, y=0 cm, where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter and no post-processing. To measure system sensitivity, the NEMA sensitivity phantom filled with F-18 solution and surrounded by $1{\sim}5$ aluminum sleeves was scanned at the center of the transaxial FOV and at 10 cm offset from the center. Attenuation-free sensitivity values were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm in length, was filled with F-18 or C-11 solution (2D: 2,900 MBq; 3D: 407 MBq), and coincidence count rates were measured for 7 half-lives to obtain the noise equivalent count rate (NECR) and scatter fraction. We confirmed that the dead time loss of the last frame was below 1%. Scatter fraction was estimated by averaging the true-to-background (scatter+random) ratios of the last 3 frames, in which the random rates are negligibly small. Results: Axial resolution at 1 cm offset from the center was 0.62 and 0.66 cm (FBP in 2D and 3D), and transverse resolution was 0.67 and 0.69 cm (FBP in 2D and 3D). Axial, transverse radial, and transverse tangential resolutions at 10 cm offset from the center were 0.72 and 0.68 cm, 0.63 and 0.66 cm, and 0.72 and 0.66 cm (FBP in 2D and 3D, respectively). Sensitivity values were 708.6 (2D) and 2931.3 (3D) counts/sec/MBq at the center and 728.7 (2D) and 3398.2 (3D) counts/sec/MBq at 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). Peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The performance data for the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for quantitative data analysis and for determining optimal image acquisition protocols on this widely used scanner for clinical and research purposes.
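
The NECR and scatter fraction quoted above follow standard definitions, sketched below. Note that a variant NECR with a 2R term is used when randoms are estimated from a delayed window; the abstract does not state which variant applies.

```python
def necr(trues, scatters, randoms):
    # Noise-equivalent count rate: NECR = T^2 / (T + S + R).
    # (Some definitions use T^2 / (T + S + 2R) with delayed-window randoms.)
    return trues ** 2 / (trues + scatters + randoms)

def scatter_fraction(trues, scatters):
    # Scatter fraction: SF = S / (S + T).
    return scatters / (scatters + trues)
```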

The Study on the Reduction of Patient Surface Dose Through the use of Copper Filter in a Digital Chest Radiography (디지털 흉부 촬영에서 구리필터사용에 따른 환자 표면선량 감소효과에 관한 연구)

  • Shin, Soo-In;Kim, Chong-Yeal;Kim, Sung-Chul
    • Journal of radiological science and technology
    • /
    • v.31 no.3
    • /
    • pp.223-228
    • /
    • 2008
  • The most critical point in the medical use of radiation is to minimize the patient's entrance dose while maintaining diagnostic performance. Low-energy photons (long-wavelength X-rays) in the diagnostic beam are unnecessary because they are mostly absorbed in the body and merely increase the patient's entrance dose. The most effective way to eliminate these low-energy photons is added filtration. In this study, image quality and skin entrance dose were compared with and without a 0.3 mmCu (copper) filter. A total of 80 images were prepared as two sets of 40. In the first set, 20 signal + noise images were acquired without the filter and 20 with the Cu filter. In the second set, 20 non-signal (noise-only) images were acquired without the filter and 20 with the Cu filter; the ROC signal was an acrylic disc of 4 mm diameter and 3 mm thickness placed at random locations on the chest phantom. P(S/s) and P(S/n) were calculated, the ROC curve was described in terms of sensitivity and specificity, and accuracy was evaluated after reading by five radiologists. The number of optically observable lesions was counted on an ANSI chest phantom and a contrast-detail phantom, as recommended by the AAPM, with and without the Cu filter, and the skin entrance dose was measured under both conditions. With the Cu filter, favorable outcomes were observed: the ROC curve was located toward the upper left, and sensitivity, accuracy, and the number of detected CD-phantom lesions remained adequate. Furthermore, since the skin entrance dose was reduced, the use of additional filtration deserves consideration in many other examinations.
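
Given the hit rates P(S/s) and false-alarm rates P(S/n) above, the area under the ROC curve can be estimated with the trapezoidal rule. The sketch below is a generic implementation, not the authors' analysis code.

```python
import numpy as np

def roc_az(p_hit, p_fa):
    # Trapezoidal area under an empirical ROC curve.
    # p_hit: hit rates P(S/s) at each confidence threshold
    # p_fa:  false-alarm rates P(S/n) at the same thresholds
    p_hit = np.asarray(p_hit, dtype=float)
    p_fa = np.asarray(p_fa, dtype=float)
    order = np.argsort(p_fa)                      # sort operating points by FPR
    x = np.concatenate(([0.0], p_fa[order], [1.0]))
    y = np.concatenate(([0.0], p_hit[order], [1.0]))
    return np.trapz(y, x)

# e.g. roc_az([0.6, 0.8, 0.9], [0.1, 0.3, 0.5]) -> 0.815
```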


Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require more computation and can cause overfitting in the model, so a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, from simply reducing noise such as misspellings or informal text to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm marks certain words as unimportant, words similar to them are assumed to have little impact on sentence classification either. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec. To select words of low importance from the text, we use information gain to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form the word embedding. Second, we additionally remove words that are similar to the words with low information gain values and build the word embedding. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews on Kindle in Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes with a helpful-vote ratio over 70% were classified as helpful reviews; since Yelp shows only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all the words. One of the proposed methods outperformed the embeddings that use all words: removing unimportant words improves performance, although removing too many words lowers it.
For future research, diverse preprocessing schemes and in-depth analysis of word co-occurrence should be considered for measuring similarity values among words. We applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations between embedding and elimination methods remain to be identified. A minimal code sketch of the elimination step follows.
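
A minimal sketch of the two building blocks described above: information gain of a word for binary labels, and expansion of the eliminated-word list by Word2Vec cosine similarity. The 0.8 similarity threshold and all names are assumptions for illustration; the paper's exact rules are not reproduced here.

```python
import numpy as np

def information_gain(docs, labels, word):
    # IG of a word for binary labels: H(Y) - H(Y | word present/absent).
    # docs: list of sets of tokens; labels: 0/1 array-like.
    def entropy(ys):
        if len(ys) == 0:
            return 0.0
        p = float(np.mean(ys))
        if p in (0.0, 1.0):
            return 0.0
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    labels = np.asarray(labels)
    present = np.array([word in d for d in docs])
    h_cond = (present.mean() * entropy(labels[present])
              + (1 - present.mean()) * entropy(labels[~present]))
    return entropy(labels) - h_cond

def expand_stoplist(stop_words, vectors, threshold=0.8):
    # Second variant: also eliminate words whose cosine similarity to an
    # already-eliminated word exceeds `threshold` (threshold is assumed).
    # vectors: dict mapping word -> embedding vector (e.g., from Word2Vec).
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    expanded = set(stop_words)
    for w, v in vectors.items():
        if w in expanded:
            continue
        if any(s in vectors and cos(v, vectors[s]) > threshold
               for s in stop_words):
            expanded.add(w)
    return expanded
```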

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.6
    • /
    • pp.445-456
    • /
    • 2005
  • Land surface parameters should be determined with sufficient accuracy because they play an important role in climate change near the ground. Since surface reflectance exhibits strong anisotropy, off-nadir viewing makes observations strongly dependent on the Sun-target-sensor geometry, and these angular effects contribute random noise to the reflectance record. The principal objective of this study is to provide a database of accurate surface reflectance over Korea, with angular effects removed, from MODIS 250m reflective channel data. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250m resolution on a daily basis. Successive analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. Geometric distortion was corrected by nearest-neighbor resampling using a 2nd-order polynomial obtained from the geolocation information of the MODIS data set. To correct surface anisotropy effects, this paper applied a semiempirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components of the satellite-observed reflectance: viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle. We first assemble sets of observations over a 31-day period to fit the BRDF model; nadir-view reflectance normalization is then carried out by adjusting the angular components separated by the BRDF model for each spectral band and each pixel. Modeled reflectance values show good agreement with measured reflectance values, with an overall RMSE (Root Mean Square Error) of about 0.01 (maximum 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images over Korea for 2001.
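
A compact sketch of the kernel-driven inversion and nadir normalization described above, assuming the volumetric and geometric kernel values (e.g., RossThick/LiSparse) have already been computed from the Sun-target-sensor angles; the function names are hypothetical.

```python
import numpy as np

def invert_brdf(reflectance, k_vol, k_geo):
    # Least-squares fit of R = f_iso + f_vol*K_vol + f_geo*K_geo over a
    # multi-day stack of observations for one pixel and spectral band.
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    params, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return params  # (f_iso, f_vol, f_geo)

def nadir_reflectance(params, k_vol_nadir, k_geo_nadir):
    # Reflectance predicted for the nadir-view reference geometry, i.e.,
    # with angular (BRDF) effects normalized out. The kernel values at the
    # reference geometry must come from the same kernel definitions.
    f_iso, f_vol, f_geo = params
    return f_iso + f_vol * k_vol_nadir + f_geo * k_geo_nadir
```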

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have relied on visual inspection based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the plates. However, the accuracy of this method is critically low, with judgment errors above 30%, so an accurate steel plate faults diagnosis system has been continuously demanded by the industry. To meet this need, this study proposes a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used for binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there: only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification, since it establishes an individual Mahalanobis space for each class; 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate faults data are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated from the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratios. If the derived overall SN ratio gain for a variable is negative, the variable should be removed; a variable with a positive gain is worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test verifies the multi-class classification ability, yielding the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based diagnosis system achieves 90.79% classification accuracy, 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance for industrial application.
In addition, the proposed system can reduce the number of measurement sensors installed in the field thanks to the variable optimization process. These results show that the proposed system not only performs well on steel plate faults diagnosis but can also reduce operation and maintenance costs. For future work, it will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
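
A bare-bones sketch of the 'simultaneous' idea behind S-MTS: one Mahalanobis space per reference class, with classification by the smallest distance. The orthogonal-array/SN-ratio variable optimization stages are omitted, and the class and method names are assumptions.

```python
import numpy as np

class SimpleSMTS:
    # One Mahalanobis space per class; predict the class with the smallest
    # squared Mahalanobis distance. A simplification of S-MTS: the variable
    # optimization via orthogonal arrays and SN ratios is not included.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.spaces_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            inv_cov = np.linalg.pinv(np.cov(Xc, rowvar=False))
            self.spaces_[c] = (mean, inv_cov)
        return self

    def mahalanobis(self, X, c):
        mean, inv_cov = self.spaces_[c]
        d = X - mean
        # squared distances: sum_jk d_ij * M_jk * d_ik for each row i
        return np.einsum('ij,jk,ik->i', d, inv_cov, d)

    def predict(self, X):
        dists = np.stack([self.mahalanobis(X, c) for c in self.classes_],
                         axis=1)
        return self.classes_[np.argmin(dists, axis=1)]
```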

Performance Characteristics of 3D GSO PET/CT Scanner (Philips GEMINI PET/CT) (3차원 GSO PET/CT 스캐너(Philips GEMINI PET/CT)의 특성 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Byeong-Il;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.4
    • /
    • pp.318-324
    • /
    • 2004
  • Purpose: Philips GEMINI is a newly introduced whole-body GSO PET/CT scanner. In this study, the performance of the scanner, including spatial resolution, sensitivity, scatter fraction, and noise equivalent count rate (NECR), was measured using the NEMA NU2-2001 standard protocol and compared with the performance of LSO- and BGO-crystal scanners. Methods: GEMINI combines the Philips ALLEGRO PET and MX8000 D multi-slice CT scanners. The PET scanner has 28 detector segments, each with an array of 29 by 22 GSO crystals ($4{\times}6{\times}20$ mm), covering an axial FOV of 18 cm. PET data for measuring spatial resolution, sensitivity, scatter fraction, and NECR were acquired in 3D mode according to the NEMA NU2 protocols (coincidence window: 8 ns, energy window: $409{\sim}664$ keV). For the spatial resolution measurement, images were reconstructed with FBP using a ramp filter and with an iterative reconstruction algorithm, 3D RAMLA. Data for the sensitivity measurement were acquired using the NEMA sensitivity phantom filled with F-18 solution and surrounded by $1{\sim}5$ aluminum sleeves, after confirming that the dead time loss did not exceed 1%. To measure NECR and scatter fraction, 1,110 MBq of F-18 solution was injected into a NEMA scatter phantom 70 cm in length, and a dynamic scan with 20-min frame duration was acquired for 7 half-lives. Oblique sinograms were collapsed into transaxial slices using the single-slice rebinning method, and the true-to-background (scatter+random) ratio for each slice and frame was estimated. Scatter fraction was determined by averaging the true-to-background ratios of the last 3 frames, in which the dead time loss was below 1%. Results: Transverse and axial resolutions at 1 cm radius were (1) 5.3 and 6.5 mm (FBP) and (2) 5.1 and 5.9 mm (3D RAMLA). Transverse radial, transverse tangential, and axial resolutions at 10 cm were (1) 5.7, 5.7, and 7.0 mm (FBP) and (2) 5.4, 5.4, and 6.4 mm (3D RAMLA). Attenuation-free sensitivity values were 3,620 counts/sec/MBq at the center of the transaxial FOV and 4,324 counts/sec/MBq at 10 cm offset from the center. Scatter fraction was 40.6%, and the peak true count rate and NECR were 88.9 kcps @ 12.9 kBq/mL and 34.3 kcps @ 8.84 kBq/mL, respectively. These characteristics are better than those of the ECAT EXACT PET scanner with BGO crystals. Conclusion: The results of this field test demonstrate the high resolution, sensitivity, and count rate performance of this 3D PET/CT scanner with GSO crystals. The data provided here will be useful for comparative studies with other 3D PET/CT scanners using BGO or LSO crystals.
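
The tail-frame scatter-fraction estimate described in the last two entries can be written compactly as below, assuming that in the final frames randoms are negligible so the measured background is essentially scatter; the function name and the default frame count are illustrative.

```python
import numpy as np

def scatter_fraction_from_tail(trues, backgrounds, n_tail=3):
    # In the last frames of a decay series, randoms are negligible, so the
    # background counts are essentially scatter. Averaging the per-frame
    # background-to-true ratio r = S/T gives SF = r / (1 + r), which is
    # algebraically the same as SF = S / (S + T).
    r = (np.asarray(backgrounds[-n_tail:], dtype=float)
         / np.asarray(trues[-n_tail:], dtype=float)).mean()
    return r / (1.0 + r)
```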