• Title/Summary/Keyword: Weighted Median

Search Results: 128

A Study on Recursive Spacial Filtering for Impulse Noise Removal in Image (영상의 임펄스 노이즈 제거를 위한 재귀적 공간 필터링에 관한 연구)

  • Noh, Hyun-Yong;Bae, Sang-Bum;Kim, Nam-Ho
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.167-170 / 2005
  • Filtering methods that attenuate noise while preserving image detail are being actively studied. The standard median (SM) filter shows good performance for noise removal in impulse noise environments, but it causes edge cancellation errors. Various methods that modify the SM filter have therefore been proposed, of which the center weighted median (CWM) filter is representative. There are also several methods based on min/max operations that improve efficiency in terms of detail preservation and filtering speed. In this paper, we process pixels corrupted by impulse noise using the min/max values of the surrounding band enclosing each pixel, and compare the efficiency with existing methods in simulation.

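The center weighted median (CWM) filter mentioned in the abstract gives the center pixel of each window extra weight before taking the median. A minimal sketch of the idea (plain NumPy, 3×3 window; the function name and default weight are illustrative, not the paper's implementation):

```python
import numpy as np

def cwm_filter(image, center_weight=3):
    """Center weighted median filter: the center pixel of each 3x3
    window is counted `center_weight` times before the median."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3].ravel().tolist()
            # add (weight - 1) extra copies of the center sample
            window += [padded[i + 1, j + 1]] * (center_weight - 1)
            out[i, j] = np.median(window)
    return out
```

A larger center weight preserves more detail but removes less noise; a weight of 1 reduces this to the standard median filter.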

An Edge-Based Adaptive Method for Removing High-Density Impulsive Noise from an Image While Preserving Edges

  • Lee, Dong-Ho
    • ETRI Journal / v.34 no.4 / pp.564-571 / 2012
  • This paper presents an algorithm for removing high-density impulsive noise that generates some serious distortions in edge regions of an image. Although many works have been presented to reduce edge distortions, these existing methods cannot sufficiently restore distorted edges in images with large amounts of impulsive noise. To solve this problem, this paper proposes a method using connected lines extracted from a binarized image, which segments an image into uniform and edge regions. For uniform regions, the existing simple adaptive median filter is applied to remove impulsive noise, and, for edge regions, a prediction filter and a line-weighted median filter using the connected lines are proposed. Simulation results show that the proposed method provides much better performance in restoring distorted edges than existing methods provide. When noise content is more than 20 percent, existing algorithms result in severe edge distortions, while the proposed algorithm can reconstruct edge regions similar to those of the original image.
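The "simple adaptive median filter" applied to uniform regions is commonly implemented by enlarging the window until its median is not itself an extreme value; a textbook-style sketch (NumPy; the function name and the 7×7 window cap are assumptions, not the paper's settings):

```python
import numpy as np

def adaptive_median(image, max_size=7):
    """Adaptive median filter: enlarge the window until its median
    lies strictly between the window min and max; keep the pixel if
    it is not extreme itself, otherwise output the window median."""
    h, w = image.shape
    pad = max_size // 2
    padded = np.pad(image, pad, mode="edge")
    out = image.copy()
    for i in range(h):
        for j in range(w):
            for size in range(3, max_size + 1, 2):
                r = size // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:              # median is not an impulse
                    if not (zmin < image[i, j] < zmax):
                        out[i, j] = zmed            # pixel is an impulse
                    break
            else:
                out[i, j] = zmed                    # window cap reached
    return out
```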

A Study on Cascade Filter Algorithm for Random Valued Impulse Noise Elimination (랜덤 임펄스 잡음제거를 위한 캐스케이드 필터 알고리즘에 관한 연구)

  • Yinyu, Gao;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.3 / pp.598-604 / 2012
  • Image signals are corrupted by various kinds of noise in image processing, and many studies have been conducted on restoring such images. In this paper, we propose a cascade filter algorithm for removing random-valued impulse noise. The algorithm consists of two steps: noise detection and noise elimination. The variance of the filtering mask and the center-pixel variance are calculated for noise detection, and each noise pixel is replaced by an estimate obtained by first applying a switching self-adaptive weighted median filter and then a modified weight filter. Because the proposed algorithm removes only noise pixels and preserves uncorrupted information, it not only removes noise well but also preserves edges.
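The two-step structure described above (detection first, weighted-median replacement second) can be sketched roughly as follows; the deviation threshold and the weight mask here are illustrative stand-ins, not the paper's exact detector:

```python
import numpy as np

def cascade_filter(image, threshold=50.0):
    """Two-step sketch: flag a pixel as noise when it deviates from
    the local 3x3 median by more than `threshold`, then replace
    flagged pixels with a weighted median of the window."""
    weights = np.array([[1, 2, 1],
                        [2, 3, 2],
                        [1, 2, 1]])        # illustrative weight mask
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    out = image.copy()
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            if abs(image[i, j] - np.median(win)) > threshold:  # detection
                # weighted median: repeat each sample by its weight
                samples = np.repeat(win.ravel(), weights.ravel())
                out[i, j] = np.median(samples)
    return out
```

Uncorrupted pixels fail the detection test and pass through untouched, which is what lets this family of filters preserve edges.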

Implementation of Real-Time Post-Processing for High-Quality Stereo Vision

  • Choi, Seungmin;Jeong, Jae-Chan;Chang, Jiho;Shin, Hochul;Lim, Eul-Gyoon;Cho, Jae Il;Hwang, Daehwan
    • ETRI Journal / v.37 no.4 / pp.752-765 / 2015
  • We propose a novel post-processing algorithm and its very-large-scale integration architecture that simultaneously uses the passive and active stereo vision information to improve the reliability of the three-dimensional disparity in a hybrid stereo vision system. The proposed architecture consists of four steps: left-right consistency checking, semi-2D hole filling, tiny adaptive variance checking, and a 2D weighted median filter. The experimental results show that the error rate of the proposed algorithm (5.77%) is less than that of a raw disparity (10.12%) for a real-world camera image having a 1,280 × 720 resolution and maximum disparity of 256. Moreover, for the famous Middlebury stereo image sets, the proposed algorithm's error rate (8.30%) is also less than that of the raw disparity (13.7%). The proposed architecture is implemented on a single commercial field-programmable gate array using only 13.01% of slice resources, which achieves a rate of 60 fps for 1,280 × 720 stereo images with a disparity range of 256.
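Of the four steps, left-right consistency checking is the easiest to illustrate: a left-image disparity is kept only if the right-image disparity map confirms it. A minimal sketch (NumPy; the tolerance and the invalid marker are assumptions):

```python
import numpy as np

INVALID = -1  # marker for rejected disparities (assumption)

def lr_consistency(disp_left, disp_right, tol=1):
    """Reject left disparities that the right map does not confirm:
    pixel (y, x) with disparity d corresponds to right-image column
    x - d, whose disparity must be within `tol` of d."""
    h, w = disp_left.shape
    out = disp_left.copy()
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d                       # corresponding right column
            if xr < 0 or xr >= w or abs(disp_right[y, xr] - d) > tol:
                out[y, x] = INVALID
    return out
```

Pixels invalidated here are what the subsequent hole-filling step must repair.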

Efficacy and Toxicity of Anti-VEGF Agents in Patients with Castration-Resistant Prostate Cancer: a Meta-analysis of Prospective Clinical Studies

  • Qi, Wei-Xiang;Fu, Shen;Zhang, Qing;Guo, Xiao-Mao
    • Asian Pacific Journal of Cancer Prevention / v.15 no.19 / pp.8177-8182 / 2014
  • Background: Blocking angiogenesis by targeting the vascular endothelial growth factor (VEGF) signaling pathway to inhibit tumor growth has proven successful in treating a variety of different metastatic tumor types, including kidney, colon, ovarian, and lung cancers, but its role in castration-resistant prostate cancer (CRPC) is still unknown. We aimed to determine the efficacy and toxicities of anti-VEGF agents in patients with CRPC. Materials and Methods: The databases of PubMed, Web of Science, and abstracts presented at the American Society of Clinical Oncology up to March 31, 2014 were searched for relevant articles. Pooled estimates of the objective response rate (ORR) and prostate-specific antigen (PSA) response rate (decline ≥50%) were calculated using the Comprehensive Meta-Analysis (version 2.2.064) software. Weighted median progression-free survival (PFS) and overall survival (OS) times for anti-VEGF monotherapy and anti-VEGF-based doublets were compared by two-sided Student's t test. Results: A total of 3,841 patients from 19 prospective studies (4 randomized controlled trials and 15 prospective nonrandomized cohort studies) were included for analysis. The pooled ORR was 12.4%, with a higher response rate of 26.4% (95% CI: 13.6-44.9%) for anti-VEGF-based combinations vs. 6.7% (95% CI: 3.5-12.7%) for anti-VEGF alone (p=0.004). Similarly, the pooled PSA response rate was 32.4%, with a higher PSA response rate of 52.8% (95% CI: 40.2-65.1%) for anti-VEGF-based combinations vs. 7.3% (95% CI: 3.6-14.2%) for anti-VEGF alone (p<0.001). Median PFS and OS were 6.9 and 22.1 months, with a weighted median PFS of 5.6 vs. 6.9 months (p<0.001) and a weighted median OS of 13.1 vs. 22.1 months (p<0.001) for anti-VEGF monotherapy vs. anti-VEGF-based doublets.
Conclusions: With the available evidence, this pooled analysis indicates that anti-VEGF monotherapy has a modest effect in patients with CRPC, and the clinical benefit gained from anti-VEGF-based doublets appears greater than that from anti-VEGF monotherapy.
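The weighted median PFS/OS figures above are typically obtained by weighting each study's median survival by its sample size; a generic weighted-median computation can be sketched as follows (the study numbers below are hypothetical, not the paper's data):

```python
def weighted_median(values, weights):
    """Median of `values` where each value carries a weight: the
    smallest value whose cumulative weight reaches half the total."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    cum = 0.0
    for v, wgt in pairs:
        cum += wgt
        if cum >= half:
            return v

# e.g. median PFS (months) from three hypothetical studies,
# weighted by the number of patients enrolled in each
medians = [5.6, 6.9, 7.2]
patients = [120, 300, 80]
pooled = weighted_median(medians, patients)  # -> 6.9
```

Weighting by sample size keeps one small study with an extreme median from dominating the pooled figure.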

Weighted Parameter Analysis of L1 Minimization for Occlusion Problem in Visual Tracking (영상 추적의 Occlusion 문제 해결을 위한 L1 Minimization의 Weighted Parameter 분석)

  • Wibowo, Suryo Adhi;Jang, Eunseok;Lee, Hansoo;Kim, Sungshin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.101-103 / 2016
  • In visual tracking, the target object can be represented as a sparse coefficient vector, which motivates exploiting compressibility in the transform domain using L1 minimization. L1 minimization has also been proposed to handle the occlusion problem in visual tracking, since tracking failures are mostly caused by occlusion. Furthermore, L1 minimization contains a weighted parameter that influences the result of the minimization. In this paper, this parameter is analyzed for the occlusion problem in visual tracking. Several coefficients derived from the median, mean, and standard deviation of the target object, together with the values 0, 0.1, and 0.01, are used as the weighted parameter of the L1 minimization. Based on the experimental results, the value 0.1 is suggested as the weighted parameter, as it achieved the best success rate and precision. Both performance measures are based on one-pass evaluation (OPE).

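The weighted parameter studied above plays the role of the regularization weight λ in the sparse representation problem min_c 0.5·||y − Dc||² + λ·||c||₁; a minimal iterative soft-thresholding (ISTA) sketch shows where it enters (all names are illustrative; this is not the paper's tracker):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, n_iter=200):
    """Solve min_c 0.5*||y - D c||^2 + lam*||c||_1 by iterative
    soft-thresholding; `lam` is the weighted parameter."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)             # gradient of the data term
        c = soft_threshold(c - step * grad, lam * step)
    return c
```

Larger λ drives more coefficients exactly to zero; the paper's finding that 0.1 works best refers to its own tracking setup, not to this sketch.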

A Study on a Measure for Non-Normal Process Capability (비정규 공정능력 측도에 관한 연구)

  • 김홍준;김진수;조남호
    • Proceedings of the Korean Reliability Society Conference / 2001.06a / pp.311-319 / 2001
  • All indices now in use assume normally distributed data, and any use of these indices on non-normal data results in inaccurate capability measurements. Therefore, $C_{s}$ was proposed, which extends the most useful index to date, the Pearn-Kotz-Johnson $C_{pmk}$, by not only taking into account that the process mean may not lie midway between the specification limits and incorporating a penalty when the mean deviates from its target, but also incorporating a penalty for skewness. We propose a new process capability index, $C_{psk}$(WV), applying the weighted variance control charting method for non-normally distributed data. The main idea of the weighted variance method (WVM) is to divide a skewed or asymmetric distribution at its mean into two new distributions which have the same mean but different standard deviations. We present an example, a distribution generated from the Johnson family of distributions, to demonstrate how the weighted variance-based process capability indices perform in comparison with two other non-normal methods, namely the Clements and Wright methods. This example shows that the weighted variance-based indices are more consistent than the other two methods in terms of sensitivity to departure of the process mean/median from the target value for non-normal processes.

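The splitting step of the weighted variance method described above can be sketched by computing separate one-sided standard deviations about the sample mean (a simplified illustration; published WV formulas differ in detail):

```python
import numpy as np

def weighted_variance_split(x):
    """Split a (possibly skewed) sample at its mean and return the
    mean together with separate upper-side and lower-side standard
    deviations, in the spirit of the weighted variance method."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    dev = x - mu
    upper = np.sqrt(np.mean(dev[dev >= 0] ** 2))   # spread above the mean
    lower = np.sqrt(np.mean(dev[dev < 0] ** 2))    # spread below the mean
    return mu, upper, lower
```

For a symmetric sample the two sides agree; for a right-skewed sample the upper-side deviation exceeds the lower-side one, which is exactly the asymmetry the WV-based indices penalize.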

Robust and Optimum Weighted Stacking of Seismic Data (탄성파 자료의 강인한 최적 가중 겹쌓기)

  • Ji, Jun
    • Geophysics and Geophysical Exploration / v.16 no.1 / pp.1-5 / 2013
  • Stacking in seismic processing plays an important role in improving the signal-to-noise ratio and imaging quality of seismic data. However, the conventional stacking method does not remove random noise with various distributions, or outliers, to a satisfactory level. This paper introduces a robust and optimum weighted stacking method that is both robust to outlier noise and optimal in removing random noise. This was achieved by combining robust median stacking with optimum weighted stacking using local correlation. Application of the method to synthetic data showed that the proposed method is very effective in suppressing random noise with various distributions, including outliers.
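The combination described above (a robust median stack used as the reference, with each trace weighted by its correlation to that reference) can be sketched as follows; for brevity this uses a single global correlation per trace rather than the paper's local correlation:

```python
import numpy as np

def robust_weighted_stack(traces):
    """traces: (n_traces, n_samples). Build a robust reference by
    median stacking, weight each trace by its (clipped non-negative)
    correlation with the reference, then form the weighted stack."""
    reference = np.median(traces, axis=0)
    weights = np.array([max(np.corrcoef(t, reference)[0, 1], 0.0)
                        for t in traces])
    if weights.sum() == 0:
        return reference                 # fall back to the median stack
    return (weights[:, None] * traces).sum(axis=0) / weights.sum()
```

An outlier trace that disagrees with the median reference receives a near-zero weight, so it is suppressed instead of leaking into the mean as in conventional stacking.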

Diagnostic Performance of Diffusion-Weighted Steady-State Free Precession in Differential Diagnosis of Neoplastic and Benign Osteoporotic Vertebral Compression Fractures: Comparison to Diffusion-Weighted Echo-Planar Imaging

  • Shin, Jae Ho;Jeong, Soh Yong;Lim, Jung Hyun;Park, Jeongmi
    • Investigative Magnetic Resonance Imaging / v.21 no.3 / pp.154-161 / 2017
  • Purpose: To evaluate the diagnostic performance of diffusion-weighted steady-state free precession (DW-SSFP) in comparison with diffusion-weighted echo-planar imaging (DW-EPI) for differentiating neoplastic from benign osteoporotic vertebral compression fractures. Materials and Methods: The subjects were 40 patients with recent vertebral compression fractures but no history of vertebroplasty, spine operation, or chemotherapy. They had undergone 3-Tesla (T) spine magnetic resonance imaging (MRI), including both DW-SSFP and DW-EPI sequences. The 40 patients included 20 with neoplastic vertebral fractures and 20 with benign osteoporotic vertebral fractures. In each fracture lesion, we obtained the signal intensity normalized by the signal intensity of normal bone marrow (SI norm) on DW-SSFP and the apparent diffusion coefficient (ADC) on DW-EPI. The correlation between the SI norm and the ADC in each lesion was analyzed using linear regression. The optimal cut-off values for the diagnosis of neoplastic fracture were determined in each sequence using Youden's J statistic and receiver operating characteristic curve analyses. Results: In neoplastic fractures, the median SI norm on DW-SSFP was higher and the median ADC on DW-EPI was lower than in benign osteoporotic fractures (5.24 vs. 1.30, P = 0.032, and 0.86 vs. 1.48, P = 0.041, respectively). Inverse linear correlations were evident between SI norm and ADC in both neoplastic and benign osteoporotic fractures (r = -0.45 and -0.61, respectively). The optimal cut-off values for the diagnosis of neoplastic fracture were an SI norm of 3.0 on DW-SSFP, with a sensitivity and specificity of 90.4% (95% confidence interval [CI]: 81.0-99.0) and 95.3% (95% CI: 90.0-100.0), respectively, and an ADC of 1.3 on DW-EPI, with a sensitivity and specificity of 90.5% (95% CI: 80.0-100.0) and 70.4% (95% CI: 60.0-80.0), respectively.
Conclusion: In 3-T MRI, DW-SSFP has comparable sensitivity and specificity to DW-EPI in differentiating the neoplastic vertebral fracture from the benign osteoporotic vertebral fracture.
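The cut-off selection with Youden's J statistic used above chooses the threshold that maximizes sensitivity + specificity − 1; a generic sketch (illustrative scores and labels, not the study's data):

```python
def youden_cutoff(scores, labels):
    """Return the candidate threshold maximizing Youden's J =
    sensitivity + specificity - 1, classifying score >= t as positive.
    `labels` are 1 (diseased) or 0 (not diseased)."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

In the study's setting, the SI norm cut-off of 3.0 and the ADC cut-off of 1.3 are the thresholds this criterion selects on each sequence's ROC curve.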

Development and validation of a quantitative food frequency questionnaire to assess nutritional status in Korean adults

  • Na, Youn Ju;Lee, Seon Heui
    • Nutrition Research and Practice / v.6 no.5 / pp.444-450 / 2012
  • This study was performed to evaluate the validity of the food frequency questionnaire (FFQ), which is being used at the Samsung Medical Center. In total, 305 (190 males and 115 females) participants consented and completed the 3-day diet records and FFQ. Age, gender and energy-adjusted and de-attenuated correlations ranged from 0.317 (polyunsaturated fatty acid) to 0.748 (carbohydrate) with a median value of 0.550. The weighted kappa value ranged from 0.18 (vitamin A) to 0.57 (carbohydrate) with a median value of 0.36. More than 75% of the subjects were classified into the same or adjacent quartiles. The FFQ had reasonably good validity compared with that of another study. Therefore, our FFQ is considered a proper method to assess nutrient intake in healthy Korean adults.
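The weighted kappa reported above measures chance-corrected agreement between the FFQ and the diet-record rankings, penalizing larger disagreements more heavily; a linear-weighted version can be sketched as follows (a generic implementation, not the software used in the study):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Cohen's weighted kappa for two ratings on an ordinal scale
    with categories 0..n_cat-1, using linear disagreement weights."""
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                                  # observed proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))    # chance-expected proportions
    i, j = np.indices((n_cat, n_cat))
    W = np.abs(i - j) / (n_cat - 1)               # linear disagreement weights
    return 1 - (W * O).sum() / (W * E).sum()
```

Perfect agreement yields 1, chance-level agreement yields 0; the study's median value of 0.36 across nutrients indicates fair-to-moderate agreement on the quartile scale.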