• Title/Summary/Keyword: thresholding value


Denoising on Image Signal in Wavelet Basis with the VisuShrink Technique Using the Estimated Noise Deviation by the Monotonic Transform (웨이블릿 기저의 영상신호에서 단조변환으로 추정된 잡음편차를 사용한 VisuShrink 기법의 잡음제거)

  • 우창용;박남천
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.111-118 / 2004
  • Techniques based on thresholding of wavelet coefficients are gaining popularity for denoising data because of their reasonable performance at low complexity. VisuShrink, which removes noise with the universal threshold, is one such technique. The universal threshold is proportional to the noise deviation and increases with the number of data samples. Because the noise deviation is generally unknown, it must be estimated in order to set the universal threshold. However, a method for estimating the noise deviation is known only for the finest-scale wavelet coefficients, so noise at coarser scales cannot be removed by VisuShrink. We propose a new denoising method that applies VisuShrink at every scale except the coarsest. The noise deviation in each band is estimated by a monotonic transform, and a weighted deviation, the product of the estimated noise deviation and a weight, is substituted into the universal threshold. Using this universal threshold together with soft thresholding, the noise in each band is removed (a minimal sketch of the thresholding step follows this entry). The denoising characteristics of the proposed method are compared with those of the conventional VisuShrink and SureShrink methods. The results show that the proposed method is effective against both Gaussian noise and quantization noise.

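As a concrete illustration of the thresholding step in the entry above, here is a minimal NumPy/PyWavelets sketch of the classic VisuShrink recipe (soft thresholding with the universal threshold). It uses the standard finest-scale MAD noise estimate, not the paper's monotonic-transform, per-band estimate; the function name and defaults are illustrative.

```python
import numpy as np
import pywt

def visushrink_denoise(image, wavelet="db4", levels=3):
    """Classic VisuShrink: soft-threshold all detail coefficients with the
    universal threshold lambda = sigma * sqrt(2 * ln N)."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Robust noise estimate from the finest-scale diagonal detail band
    # (median absolute deviation / 0.6745, as in Donoho & Johnstone).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(image.size))
    denoised = [coeffs[0]]  # leave the coarsest approximation untouched
    for detail in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, lam, mode="soft") for d in detail))
    return pywt.waverec2(denoised, wavelet)
```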

Background Removal and ROI Segmentation Algorithms for Chest X-ray Images (흉부 엑스레이 영상에서 배경 제거 및 관심영역 분할 기법)

  • Park, Jin Woo;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.105-114 / 2015
  • This paper proposes methods to remove the background area and segment the region of interest (ROI) in chest X-ray images. Conventional algorithms for improving image detail or contrast normally utilize brightness and frequency information. If such algorithms are applied to the entire image, reliable visual quality cannot be obtained because of unnecessary information such as the background area. We therefore propose two effective algorithms that remove the background and segment the ROI from an input X-ray image. First, the background-removal algorithm analyzes the histogram distribution of the input X-ray image; the initial background is then estimated by suitable thresholding in the histogram domain and removed, and finally the body contour and background area are refined using the popular guided filter. The ROI (i.e., lung) segmentation algorithm first determines an initial bounding box using the lung's inherent location information. Next, the main intensity value of the lung is computed by a vertical cumulative sum within the initial bounding box. Probable outliers are then removed by a labeling step together with the previously determined background information, and finally a bounding box enclosing the lung is obtained. Simulation results show that the proposed background-removal and ROI-segmentation algorithms outperform previous works.
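The abstract does not state which rule selects the threshold on the histogram, so the sketch below substitutes Otsu's method for that step and uses OpenCV's guided filter (from the opencv-contrib ximgproc module) for the refinement; treat it as an assumed approximation of the described pipeline, not the authors' implementation.

```python
import cv2
import numpy as np

def remove_background(xray, radius=8, eps=100.0):
    """Background removal for a chest X-ray: histogram thresholding
    followed by guided-filter refinement of the body contour.
    Otsu's method stands in for the paper's histogram analysis;
    radius and eps are illustrative values (eps is in squared 8-bit
    intensity units). Requires opencv-contrib-python for ximgproc."""
    img = cv2.normalize(xray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Initial foreground/background split via histogram thresholding.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Refine the mask along the body contour with a guided filter.
    guided = cv2.ximgproc.guidedFilter(guide=img, src=mask, radius=radius, eps=eps)
    refined = (guided > 127).astype(np.uint8)
    return img * refined  # background pixels set to zero
```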

A Study on the Improvement of Wavefront Sensing Accuracy for Shack-Hartmann Sensors (Shack-Hartmann 센서를 이용한 파면측정의 정확도 향상에 관한 연구)

  • Roh, Kyung-Wan;Uhm, Tae-Kyoung;Kim, Ji-Yeon;Park, Sang-Hoon;Youn, Sung-Kie;Lee, Jun-Ho
    • Korean Journal of Optics and Photonics / v.17 no.5 / pp.383-390 / 2006
  • Shack-Hartmann wavefront sensors are the most popular devices for measuring wavefronts in the field of adaptive optics. A Shack-Hartmann sensor measures the centroid of the spot irradiance distribution formed by each micro-lens; the centroids are linearly proportional to the local mean slopes of the wavefront within the corresponding sub-apertures, and the wavefront is then reconstructed from the evaluated local mean slopes. The uncertainty of a Shack-Hartmann sensor arises from various factors, including detector noise, the limited size of the detector, and the magnitude and profile of the spot irradiance distribution. This paper investigates, through computer simulation, noise propagation in two major centroid-evaluation algorithms: the first-order moment of irradiance (center-of-gravity) algorithm and the correlation algorithm. First, the center-of-gravity algorithm is shown to depend strongly on the magnitude of the noise and on the shape and size of the irradiance sidelobes, effects that can be minimized by optimal thresholding. Second, the correlation algorithm is shown to be robust to those effects, although its measurement accuracy is vulnerable to size variations of the reference spot. The investigation is confirmed by experimental measurements of defocus wavefront aberrations with a Shack-Hartmann sensor using the two algorithms.
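A thresholded center-of-gravity centroid, the first of the two algorithms compared above, can be sketched as follows; the threshold fraction is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def centroid_with_threshold(spot, threshold_frac=0.2):
    """Thresholded center-of-gravity centroid of a spot image.
    Zeroing pixels below a fraction of the peak suppresses detector
    noise and sidelobe contributions before the moment is computed."""
    img = spot.astype(float)
    img[img < threshold_frac * img.max()] = 0.0
    total = img.sum()
    if total == 0:
        raise ValueError("spot image is empty after thresholding")
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total  # (x, y)
```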

Fully Automatic Segmentation of Acute Ischemic Lesions on Diffusion-Weighted Imaging Using Convolutional Neural Networks: Comparison with Conventional Algorithms

  • Ilsang Woo;Areum Lee;Seung Chai Jung;Hyunna Lee;Namkug Kim;Se Jin Cho;Donghyun Kim;Jungbin Lee;Leonard Sunwoo;Dong-Wha Kang
    • Korean Journal of Radiology / v.20 no.8 / pp.1275-1284 / 2019
  • Objective: To develop algorithms using convolutional neural networks (CNNs) for automatic segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) and compare them with conventional algorithms, including thresholding-based segmentation. Materials and Methods: Between September 2005 and August 2015, 429 patients presenting with acute cerebral ischemia (training:validation:test set = 246:89:94) were retrospectively enrolled in this study, which was performed under Institutional Review Board approval. Ground-truth segmentations of acute ischemic lesions on DWI were manually drawn under the consensus of two expert radiologists. CNN algorithms were developed using a two-dimensional U-Net with squeeze-and-excitation blocks (U-Net) and a DenseNet with squeeze-and-excitation blocks (DenseNet) for automatic segmentation of acute ischemic lesions on DWI. The CNN algorithms were compared with conventional algorithms based on DWI and apparent diffusion coefficient (ADC) signal intensity. Performance was assessed using the Dice index with 5-fold cross-validation. Dice indices were analyzed according to infarct volume (< 10 mL, ≥ 10 mL), number of infarcts (≤ 5, 6-10, ≥ 11), b = 1000 (b1000) signal intensity (< 50, 50-100, > 100), time interval to DWI, and DWI protocol. Results: The CNN algorithms were significantly superior to the conventional algorithms (p < 0.001). Dice indices for the CNN algorithms were 0.85 for both U-Net and DenseNet and 0.86 for an ensemble of U-Net and DenseNet, whereas the indices were 0.58 for ADC-b1000 and b1000-ADC and 0.52 for the commercial ADC algorithm. The Dice indices for small and large lesions, respectively, were 0.81 and 0.88 with U-Net, 0.80 and 0.88 with DenseNet, and 0.82 and 0.89 with the ensemble. The CNN algorithms showed significant differences in Dice indices according to infarct volume (p < 0.001). Conclusion: The CNN algorithms for automatic segmentation of acute ischemic lesions on DWI achieved Dice indices of 0.85 or greater and outperformed the conventional algorithms.
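For reference, the Dice index used as the evaluation metric above is 2|P ∩ T| / (|P| + |T|) for a predicted mask P and a ground-truth mask T; a minimal NumPy sketch:

```python
import numpy as np

def dice_index(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```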