• Title/Summary/Keyword: Noise Signal Analysis


A Study for Analysis of Image Quality Based on the CZT and NaI Detector according to Physical Change in Monte Carlo Simulation (CZT와 NaI 검출기 물질 기반 물리적 변화에 따른 영상의 질 분석에 관한 연구: 몬테카를로 시뮬레이션)

  • Ko, Hye-Rim;Yoo, Yu-Ri;Park, Chan-Rok
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.5
    • /
    • pp.741-748
    • /
    • 2021
  • In this study, we evaluated image quality while varying collimator length and detector thickness using the Geant4 Application for Tomographic Emission (GATE) simulation tool. A gamma camera based on Cadmium Zinc Telluride (CZT) and NaI detectors was modeled. Images were acquired with collimator lengths of 1, 2, 3, 4, 5, and 6 cm and detector thicknesses of 1, 3, 5, and 7 mm, using a point source and a phantom designed with spheres of 4.45, 3.80, 3.15, and 2.55 mm diameter at 447, 382, 317, and 256 Bq, respectively. For quantitative analysis, the sensitivity (cps/MBq) for the point source and, by drawing regions of interest, the signal-to-noise ratio (SNR) and profile for the 4.45 mm sphere of the phantom were used. Based on the results, the sensitivity as a function of collimator length was 2.3 ~ 48.6 cps/MBq for the CZT detector and 1.8 ~ 43.9 cps/MBq for the NaI detector. The phantom SNR was 3.6~9.8 for the CZT detector and 2.9~9.5 for the NaI detector. As the collimator length increased, image resolution also improved according to the profile results for both the CZT and NaI detectors. In addition, the sensitivity as a function of detector thickness was 0.04 ~ 0.12 cps/MBq for the CZT detector and 0.03 ~ 0.11 cps/MBq for the NaI detector. The phantom SNR was 7.3~9.8 for the CZT detector and 5.9~9.5 for the NaI detector. As the detector thickness increased, image resolution decreased according to the profile results for both detectors, owing to scattered rays. In conclusion, geometric parameters such as the detector and collimator must be set appropriately to acquire suitable image quality in nuclear medicine.
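The ROI-based SNR used for the quantitative analysis above can be sketched as follows. This is a minimal illustration assuming one common definition, mean ROI signal divided by background standard deviation; the function name, ROI placement, and toy data are ours, not the paper's:

```python
import numpy as np

def roi_snr(image, signal_roi, background_roi):
    """ROI-based SNR: mean counts in the hot-spot ROI divided by the
    standard deviation of counts in a background ROI (one common definition)."""
    sig = image[signal_roi].mean()
    noise = image[background_roi].std()
    return sig / noise

# Toy example: Poisson background with a hot region, mimicking a phantom image.
rng = np.random.default_rng(0)
img = rng.poisson(10, size=(64, 64)).astype(float)
img[28:36, 28:36] += 50.0                       # hypothetical hot sphere
snr = roi_snr(img,
              (slice(28, 36), slice(28, 36)),   # signal ROI over the hot spot
              (slice(0, 16), slice(0, 16)))     # background ROI in a quiet corner
```

Drawing the background ROI well away from the hot spot keeps the noise estimate free of signal contamination.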

Comparison of Image Quality among Different Computed Tomography Algorithms for Metal Artifact Reduction (금속 인공물 감소를 위한 CT 알고리즘 적용에 따른 영상 화질 비교)

  • Gui-Chul Lee;Young-Joon Park;Joo-Wan Hong
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.4
    • /
    • pp.541-549
    • /
    • 2023
  • The aim of this study was to conduct a quantitative analysis of CT image quality according to an algorithm designed to reduce metal artifacts induced by metal components. Ten baseline images were obtained with the standard filtered back-projection algorithm using spectral detector-based CT and the CT ACR 464 phantom, and ten images were also obtained on the identical phantom with the standard filtered back-projection algorithm after inducing metal artifacts. After applying the metal artifact reduction algorithm to the raw data of the images with metal artifacts, ten additional images each were obtained, with and without subsequently applying the virtual monoenergetic algorithm. Regions of interest were set on the polyethylene, bone, acrylic, air, and water inserts of CT ACR 464 phantom module 1 to compare the Hounsfield units for each algorithm. The algorithms were individually analyzed using root mean square error, mean absolute error, signal-to-noise ratio, peak signal-to-noise ratio, and the structural similarity index to assess overall image quality. When the Hounsfield units of each algorithm were compared, significant differences were found between images with different algorithms (p < .05), and large changes were observed in images using the virtual monoenergetic algorithm in all regions of interest except acrylic. The image quality indices revealed that images with the metal artifact reduction algorithm had the highest resolution, whereas the structural similarity index was highest for images with the metal artifact reduction algorithm followed by the additional virtual monoenergetic algorithm. For CT images, the metal artifact reduction algorithm was shown to be more effective than the monoenergetic algorithm at reducing metal artifacts; to obtain quality CT images, however, it is important to understand the advantages and image-quality differences of the algorithms and to apply them effectively.
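Several of the scalar indices named above (RMSE, MAE, PSNR) follow directly from their definitions; the sketch below shows one common formulation in NumPy. The function names and toy data are ours, and SSIM is omitted because it requires windowed local statistics:

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a test image."""
    return np.sqrt(np.mean((ref - img) ** 2))

def mae(ref, img):
    """Mean absolute error."""
    return np.mean(np.abs(ref - img))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    e = rmse(ref, img)
    return np.inf if e == 0 else 20.0 * np.log10(peak / e)

# Toy check: an 8-bit-range reference with mild Gaussian noise added.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, size=(32, 32))
img = ref + rng.normal(0, 2, size=(32, 32))
r, m, p = rmse(ref, img), mae(ref, img), psnr(ref, img)
```

With noise of standard deviation 2, RMSE lands near 2 and PSNR near 42 dB, which is the usual sanity check for these definitions.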

Efficient Algorithms for Motion Parameter Estimation in Object-Oriented Analysis-Synthesis Coding (객체지향 분석-합성 부호화를 위한 효율적 움직임 파라미터 추정 알고리듬)

  • Lee Chang Bum;Park Rae-Hong
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.653-660
    • /
    • 2004
  • Object-oriented analysis-synthesis coding (OOASC) subdivides each image of a sequence into a number of moving objects and estimates and compensates the motion of each object. It employs a motion parameter technique for estimating the motion information of each object; such techniques employing gradient operators require a high computational load. The main objective of this paper is to present efficient motion parameter estimation techniques using a hierarchical structure in object-oriented analysis-synthesis coding. To achieve this goal, this paper proposes two algorithms: the hybrid motion parameter estimation method (HMPEM) and the adaptive motion parameter estimation method (AMPEM), both using the hierarchical structure. HMPEM uses the proposed hierarchical structure, in which six or eight motion parameters are estimated by a parameter verification process in a low-resolution image whose size is one fourth that of the original image. AMPEM uses the same hierarchical structure with a motion detection criterion that measures the amount of motion based on temporal co-occurrence matrices for adaptive estimation of the motion parameters. This method is fast and easily implemented using parallel processing techniques. Theoretical analysis and computer simulation show that the peak signal-to-noise ratio (PSNR) of the image reconstructed by the proposed method lies between those of images reconstructed by the conventional 6- and 8-parameter estimation methods, with a computational load reduced by a factor of about four.
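The 6-parameter model referred to above is the affine motion model u = a1 + a2·x + a3·y, v = a4 + a5·x + a6·y. As a minimal illustration of estimating those parameters (by least squares on point correspondences, not the paper's gradient-based or hierarchical estimator), one can write:

```python
import numpy as np

def estimate_affine6(pts_src, pts_dst):
    """Least-squares fit of the 6-parameter affine motion model
    u = a1 + a2*x + a3*y,  v = a4 + a5*x + a6*y
    from point correspondences; returns [a1..a6]."""
    x, y = pts_src[:, 0], pts_src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])     # design matrix [1, x, y]
    ax, *_ = np.linalg.lstsq(A, pts_dst[:, 0], rcond=None)
    ay, *_ = np.linalg.lstsq(A, pts_dst[:, 1], rcond=None)
    return np.concatenate([ax, ay])

# Synthetic check: points mapped by a known affine transform are recovered.
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(20, 2))
true = np.array([2.0, 1.1, 0.1, -3.0, -0.2, 0.9])
dst = np.column_stack([
    true[0] + true[1] * src[:, 0] + true[2] * src[:, 1],
    true[3] + true[4] * src[:, 0] + true[5] * src[:, 1],
])
est = estimate_affine6(src, dst)
```

An 8-parameter model adds the perspective terms; the hierarchical schemes in the paper choose between the two per object.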

Content Analysis-based Adaptive Filtering in The Compressed Satellite Images (위성영상에서의 적응적 압축잡음 제거 알고리즘)

  • Choi, Tae-Hyeon;Ji, Jeong-Min;Park, Joon-Hoon;Choi, Myung-Jin;Lee, Sang-Keun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.5
    • /
    • pp.84-95
    • /
    • 2011
  • In this paper, we present a deblocking algorithm that removes the grid and staircase noise, known as "blocking artifacts", that occurs in compressed satellite images. In particular, the given satellite images are compressed with equal quantization coefficients per row according to region complexity, with more complicated regions compressed more heavily. This approach has the problem that relatively less complicated regions lying in the same row as complicated regions exhibit blocking artifacts. Removing these artifacts with a general deblocking algorithm can also blur complex regions where blurring is undesired, and a general filter falls short in preserving curved edges. Therefore, the proposed algorithm presents an adaptive filtering scheme that removes blocking artifacts while preserving image details, including curved edges, using the given quantization step size and content analysis. In particular, a WLFPCA (weighted lowpass filter using principal component analysis) is employed to reduce the artifacts around edges. Experimental results showed that the proposed method outperforms SA-DCT in terms of subjective image quality.
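The idea of using the known quantization step to tell block-boundary artifacts from true edges can be sketched as follows. This is a deliberately simplified 1-D boundary filter of our own (not the paper's WLFPCA): a boundary step is smoothed only when it is small enough, relative to the quantization step, to plausibly be a compression artifact:

```python
import numpy as np

def deblock_rows(img, q_step, block=8):
    """Smooth each vertical block boundary only when the intensity step
    across it is <= 2*q_step; larger steps are treated as true edges
    and left untouched (the core quantization-aware deblocking idea)."""
    out = img.astype(float).copy()
    for b in range(block, img.shape[1], block):
        left, right = out[:, b - 1], out[:, b]
        mask = np.abs(left - right) <= 2 * q_step   # artifact-sized steps only
        avg = (left + right) / 2.0
        out[mask, b - 1] = (left[mask] + avg[mask]) / 2.0
        out[mask, b] = (right[mask] + avg[mask]) / 2.0
    return out

# Two flat half-images with a step of 4 across the 8-pixel block boundary.
flat = np.zeros((4, 16))
flat[:, :8], flat[:, 8:] = 10.0, 14.0
smoothed = deblock_rows(flat, q_step=4.0)   # step <= 2*q_step: halved to 2
kept = deblock_rows(flat, q_step=1.0)       # step > 2*q_step: preserved
```

The threshold 2·q_step is an assumption for illustration; the paper additionally weights the filter by PCA-based content analysis to protect curved edges.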

Design of Experiment and Analysis Method for the Integrated Logistics System Using Orthogonal Array (직교배열을 이용한 통합물류시스템의 실험 설계 및 분석방법)

  • Park, Youl-Kee;Um, In-Sup;Lee, Hong-Chul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.12
    • /
    • pp.5622-5632
    • /
    • 2011
  • This paper presents the simulation design and analysis of an Integrated Logistics System (ILS) operated using Automated Guided Vehicles (AGVs). To maximize the operational performance of an ILS with AGVs, many parameters must be considered, such as the number, velocity, and dispatching rule of the AGVs, part types, scheduling, and buffer sizes. We established a design of experiments using an orthogonal array to address, among the various critical factors, (1) maximizing throughput, (2) maximizing vehicle utilization, (3) minimizing congestion, and (4) maximizing utilization of the Automated Storage and Retrieval System (AS/RS). Furthermore, we performed optimization using simulation-based analysis and an Evolution Strategy (ES). As a result, the orthogonal array required far fewer experiments than the ES yet, after validation tests comparing the results of the two methods, produced the same outcome while saving substantial time. This approach therefore ensures confidence and supports quick analysis by specifying exact experimental outcomes from a small number of experiments.
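The defining property of an orthogonal array, that every pair of factor columns is balanced, is easy to check directly. The sketch below uses the standard L4(2^3) array as our own example (the paper's actual array and factors are not reproduced here):

```python
import numpy as np
from itertools import combinations

# L4 (2^3) orthogonal array: 4 runs cover 3 two-level factors such that
# every pair of columns contains each level combination exactly once.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def is_orthogonal(arr):
    """Pairwise balance check for a two-level, 4-run array: every column
    pair must exhibit all four level combinations (0,0),(0,1),(1,0),(1,1)."""
    for i, j in combinations(range(arr.shape[1]), 2):
        pairs = {(a, b) for a, b in zip(arr[:, i], arr[:, j])}
        if len(pairs) != 4:
            return False
    return True
```

A full factorial over 3 two-level factors would need 8 runs; the orthogonal array screens main effects in 4, which is the source of the experiment savings reported above.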

Optimal Value Detection of Irregular RR Interval for Atrial Fibrillation Classification based on Linear Analysis (선형분석 기반의 심방세동 분류를 위한 불규칙 RR 간격의 최적값 검출)

  • Cho, Ik-Sung;Jeong, Jong-Hyeog;Cho, Young Chang;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.10
    • /
    • pp.2551-2561
    • /
    • 2014
  • Several algorithms have been developed to detect atrial fibrillation (AFIB), relying on either linear or frequency analysis. However, they are more complex than time-domain algorithms, and it is difficult to derive a consistent rule for irregular RR-interval rhythms. In this study, we propose an algorithm for detecting optimal values of irregular RR intervals for AFIB classification based on linear analysis. For this purpose, we detected the R wave and RR interval from a noise-free ECG signal through preprocessing and a subtractive operation method. We also set the scope of the segment length, detected the optimal values, and then classified AFIB in real time through linear analysis using the absolute deviation and absolute difference. The performance of the proposed AFIB classification algorithm was evaluated using the MIT-BIH arrhythmia and AFIB databases. The optimal values for AFIB classification were ${\alpha}=0.75$, ${\beta}=1.4$, and ${\gamma}=300ms$.
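The two linear measures named above, absolute deviation and absolute difference of RR intervals, can be sketched as follows. The decision threshold here is a placeholder of ours for illustration, not the paper's tuned alpha/beta/gamma rule:

```python
import numpy as np

def rr_irregularity(rr_ms):
    """Two time-domain irregularity measures over an RR-interval segment:
    mean absolute deviation from the segment mean, and mean absolute
    successive difference (both in ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    abs_dev = np.mean(np.abs(rr - rr.mean()))
    abs_diff = np.mean(np.abs(np.diff(rr)))
    return abs_dev, abs_diff

def looks_fibrillatory(rr_ms, diff_thresh_ms=100.0):
    """Hypothetical rule: flag a segment when successive RR intervals vary
    by more than diff_thresh_ms on average. The paper instead tunes its
    alpha/beta/gamma parameters against MIT-BIH data."""
    _, abs_diff = rr_irregularity(rr_ms)
    return abs_diff > diff_thresh_ms

regular = [800, 810, 795, 805, 800, 798]        # steady sinus-like rhythm
irregular = [620, 980, 540, 1100, 700, 430]     # AF-like irregular rhythm
```

Both measures are cheap to update per beat, which is what makes this style of linear analysis suitable for real-time classification.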

Partial Discharge Detection of High Voltage Switchgear Using an Ultra High Frequency Sensor

  • Shin, Jong-Yeol;Lee, Young-Sang;Hong, Jin-Woong
    • Transactions on Electrical and Electronic Materials
    • /
    • v.14 no.4
    • /
    • pp.211-215
    • /
    • 2013
  • Partial discharge diagnosis techniques using ultra high frequencies do not affect load operation, because there is no interruption of power; consequently, they are popular among preventive diagnosis methods. This measurement technique was first applied to GIS and has here been tested on an extra-high-voltage switchboard. The technique makes measurement easy in the live state, is unaffected by noise, and makes risk analysis possible by analyzing the causes of faults. The analysis data and the evaluation of risk levels are improved, especially for locating defects, and measurement of ultra high frequency (UHF) partial discharge on live wires in industrial switchgear has proven remarkably effective. UHF-sensor-based partial discharge diagnosis has recently drawn attention and has been verified through application to GIS, becoming one of several new power-equipment diagnostic techniques. Diagnosis using a UHF sensor is easy to perform, and waveform analysis is already standardized thanks to numerous past case studies; research and development is active, and commercialization is becoming a reality. The technique can also determine the occurrence and type of partial discharge when applied to live-wire diagnosis of ultra-high-voltage switchgear. UHF partial discharge measurements of ultra-high-voltage switchgear were obtained at 200 sites in semiconductor plants in Gumi, Yeosu, Taiwan, and China, and partial discharge signals were found at 15 of them. It was confirmed that the partial discharge signals were eliminated by tightening junction bolts and reinforcing cable-head insulation at 8 sites, offering the possibility of preventing service interruptions.
It was also confirmed that UHF partial discharge measurement serves as a preventive diagnosis method at actual industrial sites. The measured field data and their use in risk-assessment research on the live-wire status of power equipment constitute a valuable database for future improvements.

Sensitivity Study on the Infra-Red Signature of Naval Ship According to the Composition Ratio of Exhaust Plume (폐기가스 조성 비율이 적외선 신호에 미치는 영향 연구)

  • Cho, Yong-Jin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.4
    • /
    • pp.103-110
    • /
    • 2018
  • Infrared signatures emitted from naval ships are mainly classified into internal signatures generated by the internal combustion engine of the ship and external signatures generated from the surface of the ship heated by solar heat. The internal signatures are also affected by the chemical components ($CO_2$, $H_2O$, CO and soot) of the exhaust plumes generated by the gas turbine and diesel engine, which constitute the main propulsion system. Therefore, in this study, the chemical composition ratios of the exhaust plumes generated by the gas turbines and diesel engines installed in domestic naval ships were examined to identify the chemical components and their levels. The influence of the chemical components of the exhaust plumes and their ratios on the infrared signatures of a naval ship was investigated using orthogonal arrays. The infrared signature intensity of the exhaust plumes calculated using infrared signature analysis software was converted to the signal-to-noise ratio to facilitate the analysis. The signature analysis showed that $CO_2$, soot and $H_2O$ are the major components influencing the mid-wave infrared signatures of both the gas turbine and diesel engine. In addition, it was confirmed that $H_2O$ and $CO_2$ are the major components influencing the long-wave infrared signatures.
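Converting a response such as signature intensity to a signal-to-noise ratio for orthogonal-array analysis is typically done with a Taguchi-style transform. A minimal sketch assuming the smaller-the-better form (the abstract does not state which form the study used):

```python
import numpy as np

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2)), in dB.
    Appropriate when the response (e.g., IR signature intensity) should be
    minimized; lower responses yield a higher S/N ratio."""
    y = np.asarray(values, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Illustrative replicate responses for two factor settings (arbitrary units).
low_signature = sn_smaller_the_better([1.0, 1.2, 0.9])
high_signature = sn_smaller_the_better([10.0, 12.0, 9.0])
```

Factor levels are then ranked by their mean S/N across the orthogonal-array runs, which is how the dominant components (CO2, soot, H2O) would emerge from such an analysis.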

Measurement and Assessment of Absolute Quantification from in Vitro Canine Brain Metabolites Using 500 MHz Proton Nuclear Magnetic Resonance Spectroscopy: Preliminary Results (개의 뇌 조직로부터 추출한 대사물질의 절대농도 측정 및 평가: 500 MHz 고자장 핵자기공명분광법을 이용한 예비연구결과)

  • Woo, Dong-Cheol;Bang, Eun-Jung;Choi, Chi-Bong;Lee, Sung-Ho;Kim, Sang-Soo;Rhim, Hyang-Shuk;Kim, Hwi-Yool;Choe, Bo-Young
    • Investigative Magnetic Resonance Imaging
    • /
    • v.12 no.2
    • /
    • pp.100-106
    • /
    • 2008
  • The purpose of this study was to confirm the accuracy of in vitro nuclear magnetic resonance spectroscopy (NMRS) and to complement the shortcomings of in vivo NMRS. It has been difficult to understand the metabolism of the cerebellum using in vivo NMRS owing to the inhomogeneity of the magnetic fields (B0 and B1) caused by the complexity of the cerebellar structure. Thus, this study sought to analyze the metabolism of the canine cerebellum more exactly using cell extraction and high-resolution NMRS. To conduct absolute metabolic quantification in the canine cerebellum, the spectrum of a phantom containing various brain metabolites (i.e., NAA, Cr, Cho, Ins, Lac, GABA, Glu, Gln, Tau, and Ala) was obtained. Canine cerebellum tissue was extracted using methanol-chloroform water extraction (M/C extraction); one group was filtered during extract processing and the other was not. Finally, NMRS of the phantom solution and the two extract solutions (90% D2O) was performed on a 500 MHz (11.4 T) NMR machine. Filtering the tissue-extract solution increased the signal-to-noise ratio (SNR). The metabolic concentrations of the canine cerebellum were closer to the rat's than to the human's. The present study demonstrates absolute quantification by in vitro high-resolution NMRS with tissue extraction as a method to accurately measure metabolite concentrations.
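Absolute quantification against a reference of known concentration rests on the proportionality between peak area per contributing proton and concentration. A minimal sketch of that standard relation (the numbers are illustrative, not the study's data):

```python
def metabolite_conc_mM(peak_area, n_protons, ref_area, ref_protons, ref_conc_mM):
    """Absolute quantification relative to a reference compound of known
    concentration: concentration scales with the ratio of (peak area per
    proton), the standard relation behind in vitro NMR quantification."""
    return (peak_area / n_protons) / (ref_area / ref_protons) * ref_conc_mM

# Illustrative case: a metabolite singlet from a 3-proton group, compared
# against a 2-proton reference resonance at a known 5.0 mM concentration.
conc = metabolite_conc_mM(peak_area=6.0, n_protons=3,
                          ref_area=4.0, ref_protons=2, ref_conc_mM=5.0)
```

Since both areas are measured in the same spectrum, spectrometer gain cancels out, which is why an internal reference makes the quantification "absolute".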


Design and Performance Analysis of an Off-Axis Three-Mirror Telescope for Remote Sensing of Coastal Water (연안 원격탐사를 위한 비축 삼반사경 설계와 성능 분석)

  • Oh, Eunsong;Kang, Hyukmo;Hyun, Sangwon;Kim, Geon-Hee;Park, YoungJe;Choi, Jong-Kuk;Kim, Sug-Whan
    • Korean Journal of Optics and Photonics
    • /
    • v.26 no.3
    • /
    • pp.155-161
    • /
    • 2015
  • We report the design and performance analysis of an off-axis three-mirror telescope as the fore-optics for a new hyperspectral sensor aboard a small unmanned aerial vehicle (UAV) for low-altitude coastal remote sensing. The sensor must provide a spatial resolution of at least 4 cm at an operating altitude of 500 m, a $4^{\circ}$ field of view (FOV), and a signal-to-noise ratio (SNR) of 100 at 660 nm. To meet these requirements, the optical design has an entrance pupil diameter of 70 mm and an F-ratio of 5.0. The fore-optics is a three-mirror system with aspheric primary and secondary mirrors. The optical performance is expected to reach $1/15{\lambda}$ RMS wavefront error and an MTF of 0.75 at 660 nm. Considering the manufacturing and assembly phases, we assigned alignment compensation to the tertiary mirror based on the sensitivity analysis and derived a tilt-tolerance range of 0.17 mrad. The off-axis three-mirror telescope, which performs better than the fore-optics of other hyperspectral sensors and is well suited to a small UAV, will contribute to ocean remote-sensing research.
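The stated first-order parameters can be cross-checked with simple arithmetic: the focal length follows from F-ratio times pupil diameter, and the implied detector pixel pitch below is our own inference from the 4 cm / 500 m resolution requirement, not a figure from the paper:

```python
# First-order sanity check on the telescope parameters quoted above.
D_mm = 70.0        # entrance pupil diameter
f_number = 5.0     # F-ratio
gsd_m = 0.04       # required ground spatial resolution
alt_m = 500.0      # operating altitude

focal_mm = f_number * D_mm              # effective focal length: 350 mm
ifov_rad = gsd_m / alt_m                # per-pixel instantaneous field of view
pitch_um = ifov_rad * focal_mm * 1e3    # implied pixel pitch in micrometres
```

With these numbers the focal length is 350 mm and the implied pitch is 28 um, a plausible detector pitch, so the stated resolution, altitude, pupil, and F-ratio are mutually consistent.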