• Title/Summary/Keyword: regularization method

Search results: 304

IMAGE DEBLURRING USING GLOBAL PCG METHOD WITH KRONECKER PRODUCT PRECONDITIONER

  • KIM, KYOUM SUN;YUN, JAE HEON
    • Journal of applied mathematics & informatics, v.36 no.5_6, pp.531-540, 2018
  • We first show how to construct the linear operator equations corresponding to Tikhonov regularization problems for solving image deblurring problems with nearly separable point spread functions. We next propose a Kronecker product preconditioner which is suitable for the global PCG method. Lastly, we provide numerical experiments of the global PCG method with the Kronecker product preconditioner for several image deblurring problems to evaluate its effectiveness.
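
A minimal illustrative sketch of the Tikhonov-regularized normal equations solved with plain conjugate gradients is given below; the paper's global PCG iteration and Kronecker product preconditioner are not reproduced, and the operator, image size, and regularization parameter are assumptions.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64 * 64                      # vectorized image size (assumed)
lam = 1e-2                       # Tikhonov regularization parameter (assumed)

def blur(x):                     # placeholder forward blur operator A @ x
    return x                     # identity stands in for a real PSF convolution

def blur_t(y):                   # placeholder adjoint operator A.T @ y
    return y

def normal_op(x):                # (A.T A + lam^2 I) x from the Tikhonov problem
    return blur_t(blur(x)) + lam**2 * x

b = np.random.rand(n)            # blurred, noisy observation (synthetic)
M = LinearOperator((n, n), matvec=normal_op)
x_reg, info = cg(M, blur_t(b))   # regularized solution; info == 0 on convergence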

TWO DIMENSIONAL VERSION OF LEAST SQUARES METHOD FOR DEBLURRING PROBLEMS

  • Kwon, SunJoo;Oh, SeYoung
    • Journal of the Chungcheong Mathematical Society, v.24 no.4, pp.895-903, 2011
  • A two-dimensional version of the LSQR iterative algorithm, which takes advantage of working solely with 2-dimensional arrays, is developed and applied to the image deblurring problem. The efficiency of the method is analyzed in comparison with the Fourier-based LSQR method and the 2-D version of the CGLS algorithm proposed by Hanson ([4]).
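
For orientation, a minimal sketch of standard damped LSQR on the vectorized deblurring problem min ||Ax - b|| follows; the paper's 2-D variant instead keeps the image as a 2-D array, and the identity operator and damping value here are illustrative assumptions.

import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import lsqr

n = 32 * 32
A = identity(n, format="csr")    # stands in for the blurring matrix
b = np.random.rand(n)            # blurred observation (synthetic)
x = lsqr(A, b, damp=1e-2)[0]     # damp > 0 adds a Tikhonov-style regularization term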

Finite Element Mesh Dependency in Nonlinear Earthquake Analysis of Concrete Dams (콘크리트 댐의 비선형 지진해석에서의 유한요소망 영향)

  • 이지호
    • Journal of the Korea Concrete Institute, v.13 no.6, pp.637-644, 2001
  • A regularization method based on the Duvaut-Lions viscoplastic scheme for plastic-damage and continuum damage models, which provides mesh-independent and well-posed solutions in nonlinear earthquake analysis of concrete dams, is presented. A plastic-damage model regularized using the proposed rate-dependent viscosity method, and its original rate-independent version, are used for the earthquake damage analysis of a concrete dam to analyze the effect of the regularization and the mesh. The computational analysis shows that the regularized plastic-damage model gives well-posed solutions regardless of mesh size and arrangement, while the rate-independent counterpart produces mesh-dependent, ill-posed results.
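
As a rough sketch of the Duvaut-Lions idea in its simplest scalar form: the viscoplastic stress is an interpolation between the elastic trial stress and the rate-independent (inviscid) stress, controlled by a relaxation time. The variable names and the scalar setting are assumptions, not the paper's finite element implementation.

def duvaut_lions_stress(sigma_trial, sigma_inviscid, dt, eta):
    """Regularized (viscoplastic) stress for one time step of size dt, relaxation time eta."""
    w = dt / eta                       # dt/eta -> 0 recovers the elastic trial stress,
    return (sigma_trial + w * sigma_inviscid) / (1.0 + w)  # dt/eta -> inf the inviscid stress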

Penalized-Likelihood Image Reconstruction for Transmission Tomography Using Spline Regularizers (스플라인 정칙자를 사용한 투과 단층촬영을 위한 벌점우도 영상재구성)

  • Jung, J.E.;Lee, S.-J.
    • Journal of Biomedical Engineering Research, v.36 no.5, pp.211-220, 2015
  • Recently, model-based iterative reconstruction (MBIR) has played an important role in transmission tomography by significantly improving the quality of reconstructed images for low-dose scans. MBIR is based on the penalized-likelihood (PL) approach, where the penalty term (also known as the regularizer) stabilizes the unstable likelihood term, thereby suppressing the noise. In this work we further improve MBIR by using a more expressive regularizer which can restore the underlying image more accurately. Here we used a spline regularizer derived from a linear combination of the two-dimensional splines with first- and second-order spatial derivatives and applied it to a non-quadratic convex penalty function. To derive a PL algorithm with the spline regularizer, we used a separable paraboloidal surrogates algorithm for convex optimization. The experimental results demonstrate that our regularization method improves reconstruction accuracy in terms of both regional percentage error and contrast recovery coefficient by restoring smooth edges as well as sharp edges more accurately.
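
A minimal sketch of a penalized-likelihood style penalty of this general shape is given below: a non-quadratic convex potential applied to a weighted mix of first- and second-order differences. The paper's actual spline-based regularizer and its weights are not reproduced; the potential, alpha, beta, and delta are assumptions.

import numpy as np

def fair_potential(t, delta):
    """Non-quadratic convex potential: roughly quadratic near 0, roughly linear for |t| >> delta."""
    a = np.abs(t) / delta
    return delta**2 * (a - np.log1p(a))

def penalty(img, alpha=1.0, beta=0.5, delta=0.1):
    d1x = np.diff(img, n=1, axis=1)       # first-order horizontal differences
    d1y = np.diff(img, n=1, axis=0)       # first-order vertical differences
    d2x = np.diff(img, n=2, axis=1)       # second-order horizontal differences
    d2y = np.diff(img, n=2, axis=0)       # second-order vertical differences
    r1 = fair_potential(d1x, delta).sum() + fair_potential(d1y, delta).sum()
    r2 = fair_potential(d2x, delta).sum() + fair_potential(d2y, delta).sum()
    return alpha * r1 + beta * r2         # regularizer added to the negative log-likelihood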

Two Dimensional Slow Feature Discriminant Analysis via L2,1 Norm Minimization for Feature Extraction

  • Gu, Xingjian;Shu, Xiangbo;Ren, Shougang;Xu, Huanliang
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.7, pp.3194-3216, 2018
  • Slow Feature Discriminant Analysis (SFDA) is a supervised feature extraction method inspired by a biological mechanism. In this paper, a novel method called Two Dimensional Slow Feature Discriminant Analysis via $L_{2,1}$ norm minimization ($2DSFDA-L_{2,1}$) is proposed. $2DSFDA-L_{2,1}$ integrates $L_{2,1}$ norm regularization and a 2D statistically uncorrelated constraint to extract discriminant features. First, $L_{2,1}$ norm regularization promotes row sparsity of the projection matrix, so that feature selection and subspace learning are performed simultaneously. Second, uncorrelated features of minimum redundancy are effective for classification, so we define a 2D statistically uncorrelated model in which the rows (or columns) are independent. Third, we provide a feasible solution by transforming the proposed $L_{2,1}$ nonlinear model into a linear regression type. Additionally, $2DSFDA-L_{2,1}$ is extended to a bilateral projection version called $BSFDA-L_{2,1}$, whose advantage is that an image can be represented with far fewer coefficients. Experimental results on three face databases demonstrate that the proposed $2DSFDA-L_{2,1}$/$BSFDA-L_{2,1}$ achieves competitive performance.
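
A minimal sketch of the $L_{2,1}$ norm used as a row-sparsity regularizer on a projection matrix W follows: it is the sum of the Euclidean norms of the rows, so penalizing it pushes whole rows of W toward zero and couples feature selection with subspace learning. The smoothing constant eps and the example W are assumptions.

import numpy as np

def l21_norm(W, eps=1e-12):
    return np.sqrt((W**2).sum(axis=1) + eps).sum()   # sum of row-wise 2-norms

W = np.random.randn(10, 4)        # example projection matrix (synthetic)
print(l21_norm(W))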

Regularized Multichannel Blind Deconvolution Using Alternating Minimization

  • James, Soniya;Maik, Vivek;Karibassappa, K.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing, v.4 no.6, pp.413-421, 2015
  • Regularized blind deconvolution is applied to degraded images in order to recover the original image from blur. Multichannel blind deconvolution is formulated as an optimization problem, and each step of the optimization is treated as a variable splitting problem solved with the alternating minimization algorithm. Each variable splitting step is handled with the augmented Lagrangian method (ALM) / Bregman iterative method. Regularization is used to convert an ill-posed problem into a well-posed problem. Two well-known regularizer classes are the Tikhonov class and the total variation (TV) / L2 model; TV can be isotropic or anisotropic, with the isotropic form based on the L2 norm and the anisotropic form on the L1 norm. Image deblurring can be solved using various probabilistic models and Fourier transforms. In this paper, to improve performance, we use adaptive regularization filtering and an isotropic TV model with the Lp norm. Image deblurring is applicable in areas such as medical image sensing, astrophotography, traffic signal monitoring, remote sensing, case investigation, and even images taken with digital or mobile cameras.
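
A minimal sketch of the two total variation penalties mentioned in the abstract is given below: anisotropic TV (L1 norm of the gradient components) and isotropic TV (L2 norm of the gradient magnitude). The forward-difference discretization is a common convention and an assumption, not the paper's exact formulation.

import numpy as np

def tv(img, isotropic=True):
    dx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal forward differences
    dy = np.diff(img, axis=0, append=img[-1:, :])   # vertical forward differences
    if isotropic:
        return np.sqrt(dx**2 + dy**2).sum()         # isotropic TV (L2 norm of the gradient)
    return np.abs(dx).sum() + np.abs(dy).sum()      # anisotropic TV (L1 norm of the gradient)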

ALTERNATING RESOLVENT ALGORITHMS FOR FINDING A COMMON ZERO OF TWO ACCRETIVE OPERATORS IN BANACH SPACES

  • Kim, Jong Kyu;Truong, Minh Tuyen
    • Journal of the Korean Mathematical Society, v.54 no.6, pp.1905-1926, 2017
  • In this paper we introduce a new iterative method, combining the prox-Tikhonov regularization with the alternating resolvents, for finding a common zero of two accretive operators in Banach spaces. We also give some applications and numerical examples. The results of this paper improve and extend the corresponding results announced by many others.
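
As general background, a sketch of the standard (unregularized) alternating resolvent scheme can be written as follows; the paper's prox-Tikhonov modification is not reproduced here. The resolvent of an accretive operator $A$ with parameter $r > 0$ is

    $J_r^A = (I + rA)^{-1}$,

and a basic alternating resolvent iteration for a common zero of $A$ and $B$ reads

    $x_{n+1} = J_{r_n}^B \, J_{r_n}^A \, x_n$.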

Restoration of Bi-level Images via Iterative Semi-blind Wiener Filtering (반복 semi-blind 위너 필터링을 이용한 이진영상의 복원)

  • Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers, v.57 no.7, pp.1290-1294, 2008
  • We present a novel deblurring algorithm for bi-level images blurred by a parameterizable point spread function. The proposed method iteratively searches for the unknown parameters of the point spread function and the noise-to-signal ratio by minimizing an objective function based on the binariness of the restored image and the difference between its two intensity values. In simulations and experiments, the proposed method showed improved performance compared with the Wiener filtering-based method in terms of bit error rate after segmentation.
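
A minimal sketch of frequency-domain Wiener deconvolution, the building block that the abstract's iterative search over PSF parameters and noise-to-signal ratio (NSR) would call repeatedly, is shown below; the PSF, NSR value, and image are illustrative assumptions.

import numpy as np

def wiener_deblur(blurred, psf, nsr):
    H = np.fft.fft2(psf, s=blurred.shape)       # PSF spectrum, zero-padded to the image size
    G = np.fft.fft2(blurred)                    # spectrum of the blurred observation
    W = np.conj(H) / (np.abs(H)**2 + nsr)       # Wiener filter: H* / (|H|^2 + NSR)
    return np.real(np.fft.ifft2(W * G))         # restored image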

Conductivity Image Reconstruction Using Modified Gauss-Newton Method in Electrical Impedance Tomography (전기 임피던스 단층촬영 기법에서 수정된 가우스-뉴턴 방법을 이용한 도전율 영상 복원)

  • Kim, Bong Seok;Park, Hyung Jun;Kim, Kyung Youn
    • Journal of IKEEE, v.19 no.2, pp.219-224, 2015
  • Electrical impedance tomography is an imaging technique that reconstructs the internal conductivity distribution from applied currents and measured voltages in a domain of interest. In this paper, a modified Gauss-Newton method is proposed for conductivity image reconstruction. In the proposed method, the dimension of the inverted term is reduced by replacing the number of elements with the number of measurement data in the conductivity update equation of the conventional Gauss-Newton method, so the computation time is greatly reduced compared to the conventional Gauss-Newton method. Moreover, the regularization parameter is selected by computing the minimum-maximum of the diagonal components of the Jacobian matrix at every iteration. Numerical experiments with several scenarios were carried out to evaluate the reconstruction performance of the proposed method.
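
One common way to obtain such a dimension reduction is the identity (J.T J + lam I)^-1 J.T == J.T (J J.T + lam I)^-1, which replaces an (elements x elements) inverse with a (measurements x measurements) one; whether this matches the paper's exact modification is an assumption, and J, r, and lam below are synthetic.

import numpy as np

m, n = 16, 400                      # measurements << finite elements (assumed sizes)
J = np.random.randn(m, n)           # Jacobian of voltages w.r.t. element conductivities
r = np.random.randn(m)              # residual: measured minus calculated voltages
lam = 1e-3                          # regularization parameter

d_conventional = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)   # n x n solve
d_reduced      = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)   # m x m solve
print(np.allclose(d_conventional, d_reduced))                          # mathematically equal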

Performance Improvement of Convolutional Neural Network for Pulmonary Nodule Detection (폐 결절 검출을 위한 합성곱 신경망의 성능 개선)

  • Kim, HanWoong;Kim, Byeongnam;Lee, JeeEun;Jang, Won Seuk;Yoo, Sun K.
    • Journal of Biomedical Engineering Research, v.38 no.5, pp.237-241, 2017
  • Early detection of pulmonary nodules is important for the diagnosis and treatment of lung cancer. Recently, CT has been used as a screening tool for lung nodule detection, and it has been reported that computer-aided detection (CAD) systems can improve the accuracy of radiologists in detecting nodules on CT scans. A previous study proposed a method using a Convolutional Neural Network (CNN) in a lung CAD system, but that model has limited accuracy due to its sparse layer structure. Therefore, we propose a deep convolutional neural network to overcome this limitation. The model proposed in this work consists of 14 layers, including 8 convolutional layers and 4 fully connected layers. The CNN model is trained and tested with 61,404 region-of-interest (ROI) patches of lung images, including 39,760 nodules and 21,644 non-nodules, extracted from the Lung Image Database Consortium (LIDC) dataset. We obtained a classification accuracy of 91.79% with the CNN model presented in this work. To prevent overfitting, we trained the model with an augmented dataset and a regularization term in the cost function. With L1 and L2 regularization during training, we obtained accuracies of 92.39% and 92.52%, respectively, and 93.52% with data augmentation. In conclusion, we obtained an accuracy of 93.75% with L2 regularization and data augmentation combined.
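
A minimal sketch of adding an L1 or L2 weight penalty to a network's cost function, as described in the abstract, is shown below; the cross-entropy term, the weight list, and the penalty strength lam are illustrative assumptions rather than the paper's exact training setup.

import numpy as np

def regularized_cost(cross_entropy, weights, lam=1e-4, kind="L2"):
    if kind == "L2":
        penalty = sum((w**2).sum() for w in weights)     # ridge-style penalty on all weights
    else:
        penalty = sum(np.abs(w).sum() for w in weights)  # lasso-style penalty on all weights
    return cross_entropy + lam * penalty                 # total cost minimized during training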