• Title/Summary/Keyword: Noise metric


Effective Detective Quantum Efficiency (eDQE) Evaluation for the Influence of Focal Spot Size and Magnification on the Digital Radiography System (X-선관 초점 크기와 확대도에 따른 디지털 일반촬영 시스템의 유효검출양자효율 평가)

  • Kim, Ye-Seul;Park, Hye-Suk;Park, Su-Jin;Kim, Hee-Joung
    • Progress in Medical Physics
    • /
    • v.23 no.1
    • /
    • pp.26-32
    • /
    • 2012
  • The magnification technique has recently become popular in bone radiography, mammography, and other diagnostic examinations. However, because of the finite size of the X-ray focal spot, magnification influences various imaging properties, including resolution, noise, and contrast. The purpose of this study was to investigate the influence of magnification and focal spot size on a digital imaging system using the eDQE (effective detective quantum efficiency), a metric that reflects the overall system response, including focal spot blur, magnification, scatter, and grid response. The adult chest phantom employed by the Food and Drug Administration (FDA) was used to derive eDQE from the eMTF (effective modulation transfer function), eNPS (effective noise power spectrum), scatter fraction, and transmission fraction. The spatial frequencies at which the eMTF falls to 10% for magnification factors of 1.2, 1.4, 1.6, 1.8, and 2.0 were 2.76, 2.21, 1.78, 1.49, and 1.26 lp/mm, respectively, with the small focal spot, and 2.21, 1.66, 1.25, 0.93, and 0.73 lp/mm, respectively, with the large focal spot. The eMTFs and eDQEs decreased with increasing magnification factor. Although focal spot size made no significant difference in eDQE(0), the eDQE dropped more sharply with the large focal spot than with the small one. Magnification imaging can enlarge small lesions and improve contrast, because the air-gap effect reduces effective noise and scatter, and the enlarged image can aid visual detection of small objects. However, focal spot blurring caused by the finite focal spot size degrades spatial resolution more than the other metrics are improved by magnification. Based on these results, an appropriate magnification factor and focal spot size should be established before performing magnification imaging with a digital radiography system.
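For readers who want to see how the quantities named above combine, the sketch below assembles eDQE from eMTF, eNPS, the scatter fraction, and the transmission fraction. It assumes the commonly cited eDQE definition rather than the authors' exact normalization, and every numerical value is hypothetical.

```python
import numpy as np

def edqe(emtf, enps, scatter_fraction, transmission_fraction, fluence):
    """Sketch of an effective DQE calculation from measured effective quantities.

    Assumes the commonly cited form
        eDQE(f) = eMTF(f)^2 * TF^2 * (1 - SF)^2 / (q * eNPS(f)),
    where TF is the transmission fraction, SF the scatter fraction and
    q the photon fluence incident on the detector (photons/mm^2).
    """
    emtf = np.asarray(emtf, dtype=float)
    enps = np.asarray(enps, dtype=float)
    return (emtf**2 * transmission_fraction**2 * (1.0 - scatter_fraction)**2
            / (fluence * enps))

# Hypothetical measured values at a few spatial frequencies (lp/mm)
emtf = np.array([0.80, 0.55, 0.35, 0.20])
enps = np.array([2.0e-6, 1.6e-6, 1.3e-6, 1.1e-6])   # mm^2
print(edqe(emtf, enps, scatter_fraction=0.45,
           transmission_fraction=0.12, fluence=2.5e5))
```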

A Novel Video Quality Degradation Monitoring Scheme Over an IPTV Service with Packet Loss (IPTV 서비스에서 패킷손실에 의한 비디오품질 열화 모니터링 방법)

  • Kwon, Jae-Cheol;Oh, Seoung-Jun;Suh, Chang-Ryul;Chin, Young-Min
    • Journal of Broadcast Engineering
    • /
    • v.14 no.5
    • /
    • pp.573-588
    • /
    • 2009
  • In this paper, we propose a novel video quality degradation monitoring scheme, VR-VQMS (Visual Rhythm based Video Quality Monitoring Scheme), for an IPTV service prone to packet losses during network transmission. The proposed scheme quantifies the amount of quality degradation due to packet losses and can be classified as a reduced-reference (RR) quality measurement scheme that uses, as feature information, the visual rhythm data of H.264-encoded video frames at a media server and of the reconstructed frames at a set-top box. Two scenarios, on-line and off-line VR-VQMS, are proposed as practical solutions. We define the NPSNR (Networked Peak-to-peak Signal-to-Noise Ratio), a modification of the well-known PSNR, as a new objective quality metric, along with several additional objective and subjective metrics based on it, to obtain statistics on the timing, duration, occurrence, and amount of quality degradation. Simulation results show that the proposed method closely approximates the results obtained from 2D video frames and gives a good estimate of subjective quality (i.e., the mean opinion score, MOS) assessed by 10 test observers. We expect that the proposed scheme can serve as a practical solution for monitoring the video quality experienced by individual customers in a commercial IPTV service and can be implemented as a small, lightweight agent program running on a resource-limited set-top box.
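As a rough illustration of the reduced-reference idea, the sketch below builds a visual-rhythm image by sampling one pixel column per frame and scores the received version against the reference with a PSNR-style measure. The sampling pattern and the stand-in for NPSNR are assumptions; the paper's exact definitions are not reproduced.

```python
import numpy as np

def visual_rhythm(frames, column=None):
    """Build a visual-rhythm image by sampling one pixel column per frame.

    frames: iterable of 2-D grayscale frames (H x W).
    The column index used here is an illustrative choice.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    col = frames[0].shape[1] // 2 if column is None else column
    return np.stack([f[:, col] for f in frames], axis=1)   # H x N_frames

def npsnr_like(vr_reference, vr_received, peak=255.0):
    """PSNR between reference and received visual-rhythm data; a stand-in
    for the paper's NPSNR, whose published definition may differ."""
    mse = np.mean((vr_reference - vr_received) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

# Toy example: random frames, with mild corruption standing in for packet loss
rng = np.random.default_rng(0)
ref_frames = [rng.integers(0, 256, (288, 352)).astype(float) for _ in range(30)]
rx_frames = [f + rng.normal(0, 5, f.shape) for f in ref_frames]
print(npsnr_like(visual_rhythm(ref_frames), visual_rhythm(rx_frames)))
```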

The Performance Comparison of the MMA and SCA Algorithm for Self Adaptive Equalization (자기 적응 등화를 위한 MMA와 SCA 알고리즘의 성능 비교)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.2
    • /
    • pp.159-165
    • /
    • 2012
  • This paper compares the performance of two adaptive equalization algorithms, MMA and SCA, which are used to minimize distortion and noise effects in a communication channel. The transmitted signal is distorted by the nonlinearities of the channel's magnitude and phase transfer characteristics, and a self-adaptive equalizer at the receiver compensates for this distortion. The constant modulus is an important quantity in self-adaptive equalization: to compute it, the MMA uses the 2nd- and 4th-order statistics of the transmitted signal, whereas the SCA uses only its 2nd-order statistics. We compared the compensation performance of the MMA and SCA by computer simulation; both can compensate the two kinds of transfer characteristics simultaneously with relatively simple arithmetic operations. For the comparison we used the recovered constellation, residual ISI, MSE, and SER, which are the essential indices for evaluating adaptive equalizers. The results show that the MMA, which uses higher-order statistics of the transmitted signal, performs better than the SCA, which uses lower-order statistics, in terms of MSE and SER. In the recovered constellation and residual ISI, however, the SCA performs better than the MMA.
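The multi-modulus tap update that the comparison rests on can be sketched as below. The tap length, step size, and QPSK toy channel are illustrative choices, not the paper's simulation setup, and the SCA counterpart is not shown.

```python
import numpy as np

def mma_equalizer(received, num_taps=11, mu=1e-3, R2=1.0):
    """Sketch of a multi-modulus algorithm (MMA) blind equalizer.

    Uses the common per-dimension error
        e = zR*(zR^2 - R2) + j*zI*(zI^2 - R2),
    with R2 = E[aR^4]/E[aR^2] of the transmitted constellation.
    """
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                       # centre-spike initialisation
    out = np.zeros(len(received), dtype=complex)
    buf = np.zeros(num_taps, dtype=complex)
    for n, x in enumerate(received):
        buf = np.roll(buf, 1)
        buf[0] = x
        z = np.vdot(w, buf)                      # equalizer output z = w^H x
        e = z.real * (z.real**2 - R2) + 1j * z.imag * (z.imag**2 - R2)
        w -= mu * np.conj(e) * buf               # stochastic-gradient tap update
        out[n] = z
    return out, w

# Toy usage: QPSK symbols through a mild FIR channel plus noise
rng = np.random.default_rng(0)
syms = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
rx = np.convolve(syms, [1.0, 0.25 + 0.1j], mode="same") + 0.01 * (
    rng.normal(size=4000) + 1j * rng.normal(size=4000))
z, w = mma_equalizer(rx, R2=np.mean(syms.real**4) / np.mean(syms.real**2))
```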

Optimizing Imaging Conditions in Digital Tomosynthesis for Image-Guided Radiation Therapy (영상유도 방사선 치료를 위한 디지털 단층영상합성법의 촬영조건 최적화에 관한 연구)

  • Youn, Han-Bean;Kim, Jin-Sung;Cho, Min-Kook;Jang, Sun-Young;Song, William Y.;Kim, Ho-Kyung
    • Progress in Medical Physics
    • /
    • v.21 no.3
    • /
    • pp.281-290
    • /
    • 2010
  • Cone-beam digital tomosynthesis (CBDT) has attracted great attention in image-guided radiation therapy because of attractive advantages such as low patient dose and reduced motion artifacts. The image quality of the tomograms, however, depends on imaging conditions such as the scan angle ($\beta_{scan}$) and the number of projection views. In this paper, we describe the principle of CBDT based on the filtered-backprojection technique and investigate the optimization of imaging conditions. As a system performance measure, we define a figure of merit combining the signal difference-to-noise ratio, the artifact spread function, and the floating-point operations that determine the computational load of the image reconstruction procedure. From measurements of a disc phantom, which mimics an impulse signal, and their analysis, we conclude that the image quality of CBDT tomograms improves when the scan angle is wider than 60 degrees with a larger step scan angle ($\Delta\beta$). As a rule of thumb, the system performance depends on $\sqrt{\Delta\beta}\times\beta^{2.5}_{scan}$. If exact weighting factors could be assigned to each image-quality metric, better quantitative imaging conditions could be found.
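A minimal sketch of the reported rule of thumb, useful only for ranking candidate scan settings; the relative weighting of SDNR, artifact spread function, and FLOPs inside the full figure of merit is not given in the abstract, so only the scaling law is coded.

```python
import numpy as np

# Rule of thumb from the abstract: performance ~ sqrt(delta_beta) * beta_scan**2.5
# (relative ranking only; absolute weights between SDNR, ASF and FLOPs are unknown).
def fom(beta_scan_deg, delta_beta_deg):
    return np.sqrt(delta_beta_deg) * beta_scan_deg ** 2.5

# Hypothetical candidate (scan angle, step angle) pairs in degrees
conditions = [(60, 2), (60, 4), (90, 2), (90, 4), (120, 6)]
for beta, step in sorted(conditions, key=lambda c: -fom(*c)):
    print(f"scan angle {beta:3d} deg, step {step} deg -> FOM {fom(beta, step):,.0f}")
```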

A New Demosaicking Algorithm for Honeycomb CFA CCD by Utilizing Color Filter Characteristics (Honeycomb CFA 구조를 갖는 CCD 이미지센서의 필터특성을 고려한 디모자이킹 알고리즘의 개발 및 검증)

  • Seo, Joo-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.62-70
    • /
    • 2011
  • Nowadays the image sensor is an essential component in many multimedia devices; it is covered by a color filter array (CFA) so that each pixel captures only a specific color component. An algorithm is therefore needed to reconstruct a full-color image from the incomplete color samples output by the image sensor, a step called demosaicking. Most existing demosaicking algorithms are developed for ideal image sensors, but they do not work well in practice because each sensor has its own characteristics. In this paper, we propose a new demosaicking algorithm in which the color filter characteristics are fully utilized to generate a good image. To demonstrate the significance of our algorithm, we used a commercially available sensor, the CBN385B, a Honeycomb-style CFA CCD image sensor. As performance metrics, the PSNR (Peak Signal-to-Noise Ratio) and the RGB distribution of the output image are used. We first implemented our algorithm in C for simulation on various input images. As a result, we obtained much-enhanced images whose PSNR improved by 4~8 dB compared with the commonly idealized approaches, and we also removed the red-leaning color bias that is a unique characteristic of this image sensor (CBN385B). We then implemented the algorithm in hardware to overcome the computational complexity that made the software version slow. The hardware was verified on a Spartan-3E FPGA (Field Programmable Gate Array), giving almost the same quality as the software but with much faster execution; the total logic gate count is 45K, and it handles 25 image frames per second.
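The PSNR figure quoted above is the standard peak-signal-to-noise ratio; a minimal sketch, with hypothetical image data, is shown below.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio between a reference RGB image and a
    demosaicked result (the quality metric named in the abstract)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a reference image and a reconstruction with small errors
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (480, 640, 3)).astype(float)
test = np.clip(ref + rng.normal(0, 3, ref.shape), 0, 255)
print(f"PSNR = {psnr(ref, test):.2f} dB")
```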

Elaborate Image Quality Assessment with a Novel Luminance Adaptation Effect Model (새로운 광적응 효과 모델을 이용한 정교한 영상 화질 측정)

  • Bae, Sung-Ho;Kim, Munchurl
    • Journal of Broadcast Engineering
    • /
    • v.20 no.6
    • /
    • pp.818-826
    • /
    • 2015
  • Recently, objective image quality assessment (IQA) methods that elaborately reflect the visual quality perception characteristics of the human visual system (HVS) have been actively studied. Among these characteristics, the luminance adaptation (LA) effect, meaning that the HVS has different sensitivities to distortion depending on the background luminance, has widely been incorporated into existing IQA methods via a Weber's-law model. In this paper, we first show that the Weber's-law-based LA effect has been inaccurately reflected in conventional IQA methods. To solve this problem, we derive a new LA-effect-based local weight function (LALF) that can elaborately incorporate the LA effect into IQA methods. We validate the effectiveness of the proposed LALF by applying it to the SSIM (Structural SIMilarity) and PSNR (Peak Signal-to-Noise Ratio) methods. Experimental results show that SSIM based on LALF yields a remarkable performance improvement of 5 percentage points over the original SSIM in terms of the Spearman rank-order correlation coefficient between estimated visual quality values and measured subjective visual quality scores. Moreover, PSNR based on LALF yields an improvement of 2.5 percentage points over the original PSNR.
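The evaluation criterion, the Spearman rank-order correlation between metric outputs and subjective scores, can be computed as sketched below; the arrays are hypothetical examples, not the paper's data.

```python
import numpy as np
from scipy.stats import spearmanr

def srocc(predicted_quality, subjective_scores):
    """Spearman rank-order correlation coefficient (SROCC) between an IQA
    metric's outputs and subjective scores."""
    rho, _ = spearmanr(predicted_quality, subjective_scores)
    return rho

# Hypothetical example: metric scores vs. mean opinion scores for 8 images
metric_scores = np.array([0.91, 0.83, 0.78, 0.95, 0.60, 0.72, 0.88, 0.65])
mos = np.array([4.5, 4.0, 3.6, 4.8, 2.9, 3.3, 4.2, 3.0])
print(f"SROCC = {srocc(metric_scores, mos):.3f}")
```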

Evaluation of Image Quality in Micro-CT System Using Constrained Total Variation (TV) Minimization (Micro-CT 시스템에서 제한된 조건의 Total Variation (TV) Minimization을 이용한 영상화질 평가)

  • Jo, Byung-Du;Choi, Jong-Hwa;Kim, Yun-Hwan;Lee, Kyung-Ho;Kim, Dae-Hong;Kim, Hee-Joung
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.252-260
    • /
    • 2012
  • Reducing the radiation dose from X-rays is a major concern in computed tomography (CT) imaging because of the dose's side effects on the human body. Recently, various methods for dose reduction have been studied in CT, one of which is iterative reconstruction based on total variation (TV) minimization from few-view data. In this paper, we compared the image quality obtained with a TV-minimization algorithm and with the Feldkamp-Davis-Kress (FDK) algorithm in a micro-CT system. To evaluate the TV-minimization algorithm, we built a cylindrical phantom containing contrast media, water, and air inserts. The system can acquire a maximum of 400 projection views per rotation of the X-ray tube and detector; 20, 50, 90, and 180 projection views were chosen to evaluate how well TV minimization restores the image. Phantom and mouse images reconstructed with the FDK algorithm from the full 400 views served as reference images for comparison with TV minimization and few-view FDK. The contrast-to-noise ratio (CNR) and universal quality index (UQI) were used as image evaluation metrics. Even when the projection data were insufficient, the image quality of the TV-minimization reconstruction was similar to that of the 400-view FDK reconstruction. In the cylindrical phantom study at 90 views, the CNR of the TV image was 5.86, that of the FDK image 5.65, and that of the FDK reference 5.98; the CNR of the TV image was thus 0.21 higher than that of the FDK image. At 90 views, the UQI of the TV image was 0.99 versus 0.81 for the FDK image, 0.18 higher. In the mouse study at 90 views, the UQI of the TV image was 0.91 versus 0.83 for FDK, 0.08 higher. In both the cylindrical phantom and mouse studies, the TV-minimization algorithm showed the best performance in reducing artifacts and preserving edges with few-view data. TV minimization can therefore be expected to reduce patient dose in the clinic.
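For reference, minimal implementations of the two metrics used in the comparison are sketched below. The CNR follows one common ROI-based definition and the UQI is computed globally rather than over sliding windows, so both may differ in detail from the authors' procedure.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI
    (one common definition; the paper's exact form may differ)."""
    return abs(roi.mean() - background.mean()) / background.std()

def uqi(x, y):
    """Universal Quality Index (Wang & Bovik), computed globally here
    rather than over sliding windows for brevity."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

# Toy usage on synthetic data: an artificially brightened ROI vs. a background ROI
rng = np.random.default_rng(4)
reference = rng.normal(100.0, 5.0, (64, 64))
few_view = reference + rng.normal(0.0, 2.0, (64, 64))
print(cnr(few_view[10:20, 10:20] + 30.0, few_view[40:60, 40:60]))
print(uqi(reference, few_view))
```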

Development of Regularized Expectation Maximization Algorithms for Fan-Beam SPECT Data (부채살 SPECT 데이터를 위한 정칙화된 기댓값 최대화 재구성기법 개발)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Soo-Jin;Kim, Kyeong-Min;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.464-472
    • /
    • 2005
  • Purpose: SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For reconstruction from fan-beam projections, it is desirable to implement direct fan-beam reconstruction methods that do not transform the data into the parallel-beam geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. Materials and Methods: The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered-subsets EM), and MAP-EM OSL (maximum a posteriori EM using the one-step-late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods (nearest-neighbor, bilinear, and bicubic interpolation) and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp-Logan phantoms were reconstructed using the above algorithms, and the reconstructed images were compared in terms of a percent error metric. Results: For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior gave the best results in both percent error and stability. Bilinear interpolation was the most effective rebinning method from fan-beam to parallel geometry when accuracy and computational load were both considered. Direct fan-beam EM reconstructions were more accurate than standard EM reconstructions obtained from rebinned parallel data. Conclusion: Direct fan-beam reconstruction algorithms were implemented and provided significantly improved reconstructions.
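A compact sketch of the MAP-EM one-step-late update underlying the best-performing method is given below, with a dense toy system matrix and a crude smoothness surrogate in place of the membrane or thin-plate priors; it illustrates the update structure rather than the authors' implementation.

```python
import numpy as np

def osl_em_update(lam, y, A, beta=0.0, prior_grad=None):
    """One MAP-EM iteration with the one-step-late (OSL) modification.

    lam        : current activity estimate, shape (n_voxels,)
    y          : measured projection counts, shape (n_bins,)
    A          : system matrix, shape (n_bins, n_voxels) (dense here for clarity)
    beta       : prior weight; beta = 0 reduces to ordinary ML-EM
    prior_grad : function returning dU/d(lam) evaluated at the current estimate
    """
    eps = 1e-12
    forward = A @ lam + eps                  # expected projections
    backproj = A.T @ (y / forward)           # backproject measured/expected ratio
    sens = A.sum(axis=0)                     # sensitivity image, sum_i a_ij
    penalty = beta * prior_grad(lam) if (beta and prior_grad) else 0.0
    return lam * backproj / (sens + penalty + eps)

# Toy run: random system matrix; the prior gradient below is a crude smoothness
# surrogate, NOT the membrane or thin-plate priors used in the paper.
rng = np.random.default_rng(2)
A = rng.random((64, 32))
true = rng.random(32) * 10
y = rng.poisson(A @ true).astype(float)
lam = np.ones(32)
for _ in range(50):
    lam = osl_em_update(lam, y, A, beta=0.1, prior_grad=lambda l: l - l.mean())
```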

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility-clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. When forecasting KOSPI 200 Index return volatility, the MSE metric favored the asymmetric GARCH models such as E-GARCH and GJR-GARCH, which is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel shows exceptionally lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values cannot themselves be traded, but the simulation results are still meaningful because the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the test period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, and those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return and the SVR-based symmetric S-GARCH a +526.4% return; the MLE-based asymmetric E-GARCH shows a -72% return and the SVR-based asymmetric E-GARCH a +245.6% return; the MLE-based asymmetric GJR-GARCH shows a -98.7% return and the SVR-based asymmetric GJR-GARCH a +126.3% return. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS also trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs such as brokerage commissions and slippage. The IVTS trading performance is not realistic because we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
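For context, the symmetric GARCH(1,1) variance recursion that both estimation approaches target, together with a volatility-direction signal in the spirit of the IVTS entry rules, can be sketched as below; the parameter values and returns are illustrative, not the fitted KOSPI 200 results.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) process:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
    Parameter values passed below are illustrative, not estimated values.
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                      # common initialisation choice
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Toy daily returns and a simple volatility-direction signal (IVTS-style):
# +1 means "buy volatility today", -1 means "sell volatility today".
rng = np.random.default_rng(3)
r = rng.normal(0, 0.01, 300)
sigma2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
signal = np.sign(np.diff(sigma2))
```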