• Title/Summary/Keyword: blurring images


Skin Lesion Image Segmentation Based on Adversarial Networks

  • Wang, Ning;Peng, Yanjun;Wang, Yuanhong;Wang, Meiling
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2826-2840
    • /
    • 2018
  • Traditional methods based on active contours or region merging are ineffective for images with blurred borders or hair occlusion. In this paper, a convolutional neural network structure is proposed for skin lesion image segmentation. The structure consists of two networks: a segmentation net and a discrimination net. The segmentation net is based on U-Net and generates the lesion mask, while the discrimination net consists only of convolutional layers and determines whether its input comes from ground-truth labels or from generated masks. Images were obtained from the "Skin Lesion Analysis Toward Melanoma Detection" challenge hosted by the ISBI 2016 conference. We achieved an average segmentation accuracy of 0.97, a Dice coefficient of 0.94, and a Jaccard index of 0.89, outperforming other existing state-of-the-art segmentation networks, including the winner of the ISBI 2016 skin melanoma segmentation challenge.
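
A minimal sketch of the adversarial setup this abstract describes (a U-Net-style segmentation net trained against a purely convolutional discriminator). The layer sizes, channel counts, and loss weighting are illustrative assumptions, not the authors' exact architecture; `pred_mask` is assumed to be the sigmoid output of the segmentation net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvDiscriminator(nn.Module):
    """Discriminator made only of convolutional layers: it scores whether an
    (image, mask) pair looks like a ground-truth pair or a generated one."""
    def __init__(self, in_ch=4):            # 3 image channels + 1 mask channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),   # patch-wise real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

def generator_loss(disc, image, pred_mask, true_mask, lam=0.1):
    """Segmentation loss (BCE against the ground-truth mask) plus an adversarial
    term that pushes predicted masks toward the 'real' label."""
    seg = F.binary_cross_entropy(pred_mask, true_mask)
    fake_logits = disc(image, pred_mask)
    adv = F.binary_cross_entropy_with_logits(fake_logits,
                                             torch.ones_like(fake_logits))
    return seg + lam * adv
```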

Dual Exposure Fusion with Entropy-based Residual Filtering

  • Heo, Yong Seok;Lee, Soochahn;Jung, Ho Yub
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.5
    • /
    • pp.2555-2575
    • /
    • 2017
  • This paper presents a dual exposure fusion method for image enhancement. Images taken with a short exposure time usually contain a sharp structure, but they are dark and are prone to be contaminated by noise. In contrast, long-exposure images are bright and noise-free, but usually suffer from blurring artifacts. Thus, we fuse the dual exposures to generate an enhanced image that is well-exposed, noise-free, and blur-free. To this end, we present a new scale-space patch-match method to find correspondences between the short and long exposures so that proper color components can be combined within a proposed dual non-local (DNL) means framework. We also present a residual filtering method that eliminates the structure component in the estimated noise image in order to obtain a sharper and further enhanced image. To this end, the entropy is utilized to determine the proper size of the filtering window. Experimental results show that our method generates ghost-free, noise-free, and blur-free enhanced images from the short and long exposure pairs for various dynamic scenes.
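
As a rough illustration of the entropy-based window selection mentioned above, the NumPy-only sketch below computes the local histogram entropy of the estimated residual and maps it to a filtering window size. The bin count, the thresholds, and the assumption that lower entropy indicates a structureless (safe to filter) region are all illustrative choices, not the paper's formulation.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of the intensity histogram of a patch (values in [0, 1])."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def choose_window_size(residual, y, x, sizes=(3, 5, 7), thresholds=(3.0, 4.0)):
    """Pick a larger window where the residual looks structureless and a smaller
    one where structure remains, to avoid re-blurring edges (assumed mapping)."""
    half = max(sizes) // 2
    patch = residual[max(y - half, 0):y + half + 1, max(x - half, 0):x + half + 1]
    h = patch_entropy(patch)
    if h < thresholds[0]:
        return sizes[-1]       # assumed structureless region: filter aggressively
    if h < thresholds[1]:
        return sizes[1]
    return sizes[0]            # assumed structured region: keep the window small
```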

A Hierarchical Stereo Matching Algorithm Using Wavelet Representation (웨이브릿 변환을 이용한 계층적 스테레오 정합)

  • 김영석;이준재;하영호
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.8
    • /
    • pp.74-86
    • /
    • 1994
  • In this paper, a hierarchical stereo matching algorithm is proposed that obtains the disparity in the wavelet-transformed domain using a locally adaptive window and weights. The pyramidal structure obtained by the wavelet transform is used to avoid the loss of information from which conventional Gaussian or Laplacian pyramids suffer. The wavelet-transformed images are decomposed into the blurred image, the horizontal edges, the vertical edges, and the diagonal edges. The similarity between each wavelet channel of the left and right images determines the relative importance of each primitive and lets the algorithm perform area-based and feature-based matching adaptively. The wavelet transform can extract features with dense resolution while avoiding the duplication or loss of information. Meanwhile, the variable window needed to obtain a precise and stable estimate of correspondence is decided adaptively from the disparities estimated at coarse resolution and the LL (low-low) channel of the wavelet-transformed stereo images. A new relaxation algorithm is also proposed that reduces false matches without blurring the disparity edges. Experimental results for various images show that the proposed algorithm performs well even when the test images have unfavorable conditions. (A rough code sketch of the channel-weighting idea follows below.)

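The channel-weighting idea can be sketched roughly as follows (using PyWavelets): both views are decomposed into the LL (blurred) and horizontal/vertical/diagonal detail subbands, and each subband's matching cost is weighted by how similar that subband is across the two views. The Haar wavelet, SAD cost, and correlation-based weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import pywt

def wavelet_channels(img):
    """LL (blurred) plus horizontal, vertical, and diagonal detail subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return [cA, cH, cV, cD]

def weighted_cost(left, right, d):
    """Matching cost for a candidate disparity d (in subband coordinates):
    per-subband SAD, weighted by the inter-view similarity of that subband."""
    total, wsum = 0.0, 0.0
    for l_sb, r_sb in zip(wavelet_channels(left), wavelet_channels(right)):
        r_shift = np.roll(r_sb, d, axis=1)                # shift right view by d
        sim = abs(np.corrcoef(l_sb.ravel(), r_shift.ravel())[0, 1])
        total += sim * np.abs(l_sb - r_shift).mean()      # similarity-weighted SAD
        wsum += sim
    return total / max(wsum, 1e-8)
```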

A Study on Image restoration Algorithm using LOG function character (LOG함수의 특성을 이용한 영상잡음제거(1))

  • Kwon, Kee-Hong
    • Journal of the Korea Computer Industry Society
    • /
    • v.6 no.3
    • /
    • pp.447-456
    • /
    • 2005
  • This paper describes an iterative method for restoring blurred images using the LOG companding function and the conjugate gradient method. Conventional restoration methods achieve the required performance for restoring blurred images, but they require many iterations and converge slowly. This paper proposes an optimized iterative restoration method for images degraded by blurring, using the LOG companding function together with the conjugate gradient method. Here, the LOG companding function is used to improve local properties of the image being restored, improving both the visual quality and the convergence speed of the restoration. Through simulation results, the author shows that the proposed algorithm outperforms conventional methods. (A bare-bones conjugate-gradient sketch follows below.)

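For context, a bare-bones conjugate-gradient deblurring loop (Tikhonov-regularized normal equations, solved with SciPy's `cg`) is sketched below. The Gaussian blur model and the placement of the LOG companding step are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.sparse.linalg import LinearOperator, cg

def cg_deblur(blurred, sigma=2.0, lam=1e-2, maxiter=50):
    """Solve (H^T H + lam I) x = H^T y by conjugate gradients, where H is a
    Gaussian blur (symmetric, so H^T is another Gaussian blur)."""
    shape, n = blurred.shape, blurred.size

    def normal_op(x):
        img = x.reshape(shape)
        return (gaussian_filter(gaussian_filter(img, sigma), sigma)
                + lam * img).ravel()

    A = LinearOperator((n, n), matvec=normal_op)
    b = gaussian_filter(blurred, sigma).ravel()      # H^T y
    x, _ = cg(A, b, maxiter=maxiter)
    return x.reshape(shape)

# The paper additionally compands intensities with a log-like curve before
# restoration and expands them afterwards; a plausible stand-in would be
# np.log1p / np.expm1 around the call above (an assumption, not the paper's function).
```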

Comparative Evaluation of Filters for Speckle Noise Reduction in a Clinical Liver Ultrasound Image (간 초음파 영상에서의 스페클 노이즈 제거를 위한 필터들의 비교 평가)

  • Hajin Kim;Youngjin Lee
    • Journal of radiological science and technology
    • /
    • v.46 no.6
    • /
    • pp.475-484
    • /
    • 2023
  • This study aimed to compare filters for reducing speckle noise in ultrasound images using clinical liver images. Clinical liver ultrasound images were acquired, and noisy images were obtained by adding speckle noise at intensity levels of 0.01, 0.05, 0.10, and 0.50. The Wiener filter, median-modified Wiener filter, gamma filter, and Lee filter were applied to the noisy images with window sizes of 3×3, 5×5, and 7×7. The coefficient of variation (COV) and contrast-to-noise ratio (CNR) were calculated to evaluate the noise reduction of the various filters. The filter with the highest image quality was then selected and quantitatively compared with a noisy image. As a result, COV and CNR showed the greatest noise improvement when the Lee filter was applied. Furthermore, the Lee filter image with a 7×7 window achieved approximately 1.28 to 3.38 times better COV and 2.18 to 5.50 times better CNR than the noisy image. In conclusion, we confirmed that the Lee filter was effective in reducing speckle noise and showed that an appropriate window size needs to be chosen with blurring in mind.
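
A compact NumPy version of the classic Lee filter evaluated above, for reference. The uniform local statistics and the global noise-variance estimate are common simplifications, and the default window size mirrors the 3×3/5×5/7×7 comparison in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7):
    """Classic Lee speckle filter: blend each pixel with its local mean using
    the ratio of estimated signal variance to total local variance."""
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    noise_var = var.mean()                        # crude global noise estimate
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-8), 0.0, 1.0)
    return mean + gain * (img - mean)             # gain→0 smooths, gain→1 keeps detail
```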

Blocking artifacts reduction for improving visual quality of highly compressed images (압축영상의 화질향상을 위한 블록킹 현상 제거에 관한 연구)

  • 이주홍;김민구;정제창;최병욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.8
    • /
    • pp.1677-1690
    • /
    • 1997
  • Block-transform coding is one of the most popular approaches to image compression. For example, the DCT is widely used in international standards such as MPEG-1, MPEG-2, JPEG, and H.261. In block-based transform coding, blocking artifacts may appear along block boundaries, and they can cause severe image degradation, especially when the transform coefficients are coarsely quantized. In this paper, we propose a new method for reducing blocking artifacts in transform-coded images. For blocking artifact reduction, we add a correction term, on a block basis, composed of a linear combination of 28 basis images that are orthonormal on the block boundaries. We select 28 DCT kernel functions whose boundary values are linearly independent, and the Gram-Schmidt process is applied to those boundary values to obtain 28 boundary-orthonormal basis images. A threshold on block discontinuity is introduced to improve visual quality by reducing image blurring. We also investigate the number of basis images needed for efficient blocking artifact reduction as the compression ratio changes. (A sketch of the Gram-Schmidt construction is given below.)

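A sketch of the Gram-Schmidt construction described above: selected 2-D DCT kernels are sampled on the block boundary and orthonormalized with respect to those boundary values, yielding basis images whose boundary profiles are mutually orthonormal. The 8×8 block size and the way kernels are passed in are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idctn

def dct_kernel(u, v, n=8):
    """n x n image of a single 2-D DCT basis function (u, v)."""
    coeffs = np.zeros((n, n))
    coeffs[u, v] = 1.0
    return idctn(coeffs, norm="ortho")

def boundary_values(block):
    """Pixel values on the four edges of the block, as one vector."""
    return np.concatenate([block[0, :], block[-1, :],
                           block[1:-1, 0], block[1:-1, -1]])

def boundary_orthonormal_basis(kernel_indices, n=8):
    """Gram-Schmidt on the *boundary values* of selected DCT kernels, producing
    basis images whose boundary profiles are orthonormal."""
    images, boundaries = [], []
    for (u, v) in kernel_indices:
        k = dct_kernel(u, v, n)
        b = boundary_values(k)
        for ki, bi in zip(images, boundaries):
            proj = float(np.dot(b, bi))
            k = k - proj * ki                 # subtract the boundary projection
            b = b - proj * bi
        norm = np.linalg.norm(b)
        if norm > 1e-8:                       # skip kernels with dependent boundaries
            images.append(k / norm)
            boundaries.append(b / norm)
    return images
```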

EDMFEN: Edge detection-based multi-scale feature enhancement Network for low-light image enhancement

  • Canlin Li;Shun Song;Pengcheng Gao;Wei Huang;Lihua Bi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.4
    • /
    • pp.980-997
    • /
    • 2024
  • The main objective of low-light image enhancement (LLIE) is to improve the brightness of images and reveal hidden information in dark areas. LLIE methods based on deep learning show good performance, but they have some limitations: complex network models require highly configured environments, and deficient enhancement of edge details leads to blurring of the target content. Single-scale feature extraction also results in insufficient recovery of the hidden content of the enhanced images. This paper proposes an edge detection-based multi-scale feature enhancement network for LLIE (EDMFEN). To reduce the loss of edge details in the enhanced images, an edge extraction module based on the Sobel operator is introduced to obtain edge information by computing image gradients. In addition, a multi-scale feature enhancement module (MSFEM), consisting of a multi-scale feature extraction block (MSFEB) and a spatial attention mechanism, is proposed to thoroughly recover the hidden content of the enhanced images and obtain richer features. Since the fused features may contain some useless information, the MSFEB is introduced to obtain image features with different receptive fields. To use the multi-scale features more effectively, a spatial attention module retains the key features and improves model performance after the multi-scale features are fused. Experimental results on two datasets and five baseline datasets show that EDMFEN performs well compared with state-of-the-art LLIE methods.
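
The Sobel-based edge extraction step can be illustrated as follows; the normalization is an assumption, and the network that consumes the edge map is omitted.

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_edge_map(gray):
    """Gradient magnitude from horizontal and vertical Sobel responses,
    normalized to [0, 1] so it can be concatenated with image features."""
    gx = sobel(gray, axis=1)     # horizontal gradient
    gy = sobel(gray, axis=0)     # vertical gradient
    mag = np.hypot(gx, gy)
    return mag / max(float(mag.max()), 1e-8)
```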

Edge-preserving demosaicing method for digital cameras with Bayer-like W-RGB color filter array

  • Park, Jongjoo;Chong, Jongwha
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.3
    • /
    • pp.1011-1025
    • /
    • 2014
  • A demosaicing method for a Bayer-like W-RGB color filter array (CFA) is proposed. When reproducing images from a W-RGB CFA, conventional color separation methods are likely to cause blurring near edges due to rough averaging using the color ratios of neighboring pixels. Moreover, these methods cannot be applied to real-life digital cameras with a W-RGB CFA because they were derived under the ideal assumption W = R + G + B rather than the real-life situation W ≠ R + G + B. To improve edge performance, we propose a constant color-difference assumption with inverse weights, which uses information from all edge directions to interpolate all missing color channels. The proposed method calculates the correlation between W, R, G, and B so that it can be applied to real-life digital cameras with a W-RGB CFA. Simulations were performed to evaluate the proposed method using images captured with a real-life digital camera with a W-RGB CFA. The simulation results show that the proposed algorithm improves on the conventional one by about +34.79% in SNR, +11.43% in PSNR, +1.54% in SSIM, and 14.02% in S-CIELAB error. Thus, the proposed method demosaics better than the conventional methods.
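
A rough sketch of the constant color-difference idea with inverse (edge-aware) weights: a missing channel is recovered through a color-difference plane (e.g. W − R) whose unknown samples are interpolated from neighbors, each weighted by the inverse of the local gradient of the W (guide) channel. The 4-neighborhood and the weight formula are illustrative assumptions, not the paper's exact interpolation.

```python
import numpy as np

def edge_weighted_diff(diff, guide, y, x,
                       offsets=((-1, 0), (1, 0), (0, -1), (0, 1))):
    """Interpolate the color-difference plane `diff` (e.g. W - R) at (y, x) from
    its 4-neighbors, weighting each neighbor by the inverse of the guide-channel
    (W) gradient in that direction so strong edges are not averaged across."""
    num, den = 0.0, 0.0
    for dy, dx in offsets:
        ny, nx = y + dy, x + dx
        if 0 <= ny < diff.shape[0] and 0 <= nx < diff.shape[1]:
            w = 1.0 / (abs(float(guide[ny, nx]) - float(guide[y, x])) + 1e-3)
            num += w * diff[ny, nx]
            den += w
    return num / max(den, 1e-8)

# The missing channel value is then recovered as guide[y, x] minus the
# interpolated difference, following the constant color-difference assumption.
```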

3D Environmental Walkthrough Using The Integration of Multiple Segmentation Based Environment Models (다중 분할 기반 환경 모델의 통합에 의한 3차원 환경 탐색)

  • Ryoo, Seung-Taek
    • The Journal of Korean Association of Computer Education
    • /
    • v.8 no.1
    • /
    • pp.105-115
    • /
    • 2005
  • An environment model constructed from a single image suffers from a blurring effect caused by the fixed resolution, and from a stretching effect in the 3D model when regions not captured in the image must be filled because of occlusion. This paper introduces a registration and integration method using multiple images to resolve these problems. The method can represent parallax effects and expand the environment model to cover a wide range of the environment. The segmentation-based environment modeling method using multiple images can build a detailed model with optimal resolution.


The Improvement of Motion Compensation for a Moving Target Using the Gabor Wavelet Transform (Gabor Wavelet Transform을 이용한 움직이는 표적에 대한 움직임 보상 개선)

  • Shin, Seung-Yong;Myung, Noh-Hoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.17 no.10 s.113
    • /
    • pp.913-919
    • /
    • 2006
  • This paper presents a technique for motion compensation of ISAR (inverse SAR) images of a moving target. If a simple Fourier transform is employed to obtain the ISAR image of a moving target, the image is usually blurred. This image blurring problem can be solved with time-frequency transforms. In this paper, motion compensation algorithms for ISAR images, such as the STFT (short-time Fourier transform) and GWT (Gabor wavelet transform), are described. To show the performance of each algorithm, we use the scattered waves of ideal point scatterers and a simulated MIG-25 to obtain motion-compensated ISAR images and display the resolution of the STFT and GWT ISAR images.
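
To illustrate the time-frequency idea, the sketch below applies SciPy's STFT to the slow-time signal of one range bin: each short window yields a Doppler (cross-range) profile over which the target motion is approximately stationary, which is what removes the blurring of a single whole-aperture Fourier transform. A GWT version would replace the fixed window with Gabor wavelets; the parameters here are illustrative.

```python
import numpy as np
from scipy.signal import stft

def doppler_profiles(slow_time_signal, prf, nperseg=64):
    """STFT along slow time for one range bin: column t of |Z| is the Doppler
    (cross-range) profile of the target around that time instant."""
    f, t, Z = stft(slow_time_signal, fs=prf, nperseg=nperseg)
    return f, t, np.abs(Z)
```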