• Title/Summary/Keyword: Grayscale image


COSMO-SkyMed 2 Image Color Mapping Using Random Forest Regression

  • Seo, Dae Kyo;Kim, Yong Hyun;Eo, Yang Dam;Park, Wan Yong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.4
    • /
    • pp.319-326
    • /
    • 2017
  • SAR (synthetic aperture radar) images are less affected by weather than optical images and can be obtained at any time of day. Therefore, SAR images are actively utilized for military applications and natural disaster monitoring. However, because SAR data are in grayscale, it is difficult to perform visual analysis and to decipher details. In this study, we propose a color mapping method using RF (random forest) regression to enhance the visual decipherability of SAR images. COSMO-SkyMed 2 and WorldView-3 images were obtained for the same area, and RF regression was used to establish the color configurations used for color mapping. The results were compared with those of image fusion, a traditional color mapping method. The UIQI (universal image quality index), the SSIM (structural similarity) index, and CC (correlation coefficients) were used to evaluate image quality. The color-mapped image based on RF regression had significantly higher quality than the images derived from the other methods. The experimental results confirm the usefulness of RF regression-based color mapping for SAR images.
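As a rough illustration of the regression step, the sketch below fits a random forest that maps local SAR grayscale neighborhoods to the RGB values of a co-registered optical image. The 3×3 neighborhood feature, array names, and tree count are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of grayscale-to-color mapping with random forest regression.
# Assumes co-registered SAR (grayscale) and optical (RGB) arrays of equal size;
# patch size and feature choice are illustrative, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_features(gray, size=3):
    """Stack each pixel's local neighborhood into a feature vector."""
    pad = size // 2
    padded = np.pad(gray, pad, mode="reflect")
    h, w = gray.shape
    feats = np.empty((h * w, size * size), dtype=np.float32)
    idx = 0
    for i in range(h):
        for j in range(w):
            feats[idx] = padded[i:i + size, j:j + size].ravel()
            idx += 1
    return feats

def fit_color_mapping(sar_gray, optical_rgb, n_trees=100):
    """Learn a mapping from SAR grayscale neighborhoods to optical RGB values."""
    X = extract_features(sar_gray)
    y = optical_rgb.reshape(-1, 3)
    model = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
    model.fit(X, y)
    return model

def apply_color_mapping(model, sar_gray):
    """Predict an RGB image for a new SAR grayscale scene."""
    X = extract_features(sar_gray)
    rgb = model.predict(X).reshape(*sar_gray.shape, 3)
    return np.clip(rgb, 0, 255).astype(np.uint8)
```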

The Error Diffusion Halftoning Using Local Adaptive Sharpening Control (국부 적응 샤프닝 조절을 사용한 오차확산 해프토닝)

  • 곽내정;양운모;윤태승;안재형
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.87-92
    • /
    • 2004
  • Digital halftoning quantizes a grayscale image into a binary image. Error diffusion halftoning generates high-quality bilevel images, but it also has defects such as worm artifacts and sharpening. To reduce these defects, Kite proposed a modified threshold modulation with a parameter that controls sharpening. Nevertheless, some degradation remains near edges with large luminance changes. In this paper, we propose a method that controls the parameter in proportion to the local edge magnitude. Computer simulation results show a further reduction of sharpening in the halftone image, with especially large quality improvements near edges with large luminance changes.
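A minimal sketch of the idea, assuming Floyd-Steinberg error diffusion and a Sobel edge map: the threshold-modulation parameter L is scaled by the local edge magnitude before quantization. The exact sign and scaling of L in the paper may differ; this only illustrates locally adaptive threshold modulation.

```python
# Illustrative error diffusion with locally adaptive threshold modulation.
# Floyd-Steinberg weights are standard; scaling L by the Sobel edge magnitude
# is an assumption for illustration, not the paper's exact formulation.
import numpy as np
from scipy.ndimage import sobel

def adaptive_error_diffusion(gray, l_max=0.5):
    """Binarize a grayscale image (0..255) with edge-adaptive threshold modulation."""
    img = gray.astype(np.float64)
    edge = np.hypot(sobel(img, 0), sobel(img, 1))
    l_map = l_max * edge / (edge.max() + 1e-9)      # larger L near strong edges
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    work = img.copy()
    for y in range(h):
        for x in range(w):
            # Quantizer input = diffused pixel + L * original (threshold modulation).
            u = work[y, x] + l_map[y, x] * img[y, x]
            out[y, x] = 255 if u >= 128 else 0
            err = work[y, x] - out[y, x]
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out
```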

3D Segmentation for High-Resolution Image Datasets Using a Commercial Editing Tool in the IoT Environment

  • Kwon, Koojoo;Shin, Byeong-Seok
    • Journal of Information Processing Systems
    • /
    • v.13 no.5
    • /
    • pp.1126-1134
    • /
    • 2017
  • A variety of medical service applications in the field of the Internet of Things (IoT) are being studied. Segmentation is important for identifying meaningful regions in images and is also required for 3D images. Previous methods have been based on gray value and shape. The Visible Korean dataset consists of serially sectioned high-resolution color images. Unlike computed tomography or magnetic resonance images, color images are difficult to segment automatically because detecting object boundaries in color images is much harder than in grayscale images. Therefore, skilled anatomists usually segment color images manually or semi-automatically. We present an out-of-core 3D segmentation method for large-scale image datasets. Our method can segment significant regions in the coronal and sagittal planes, as well as the axial plane, to produce a 3D image. Our system verifies the result interactively with a multi-planar reconstruction view and a 3D view. It can be used to train unskilled anatomists and medical students, and it also allows a skilled anatomist to segment an image remotely, since it is difficult to transfer such large amounts of data.

Application of Image Processing to Determine Size Distribution of Magnetic Nanoparticles

  • Phromsuwan, U.;Sirisathitkul, C.;Sirisathitkul, Y.;Uyyanonvara, B.;Muneesawang, P.
    • Journal of Magnetics
    • /
    • v.18 no.3
    • /
    • pp.311-316
    • /
    • 2013
  • Digital image processing has increasingly been implemented in nanostructural analysis and would be an ideal tool to characterize the morphology and position of self-assembled magnetic nanoparticles for high-density recording. In this work, magnetic nanoparticles were synthesized by the modified polyol process using Fe(acac)₃ and Pt(acac)₂ as starting materials. Transmission electron microscope (TEM) images of the as-synthesized products were inspected using an image processing procedure. Grayscale images (800 × 800 pixels, 72 dots per inch) were converted to binary images using Otsu's thresholding. Each particle was then detected using a closing algorithm with disk structuring elements of 2 pixels, Canny edge detection, and an edge linking algorithm. Their centroids, diameters, and areas were subsequently evaluated. The degree of polydispersity of the magnetic nanoparticles can then be compared using the size distribution obtained from this image processing procedure.
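The measurement pipeline above maps naturally onto standard image-processing libraries. The sketch below, written with scikit-image, follows the abstract's steps (Otsu thresholding, closing with a 2-pixel disk, region measurement); it simplifies the Canny edge detection and edge-linking steps to connected-component labeling, and assumes bright particles on a dark background.

```python
# Sketch of the particle-sizing steps described above, using scikit-image.
# Canny edge detection and edge linking are simplified here to connected-
# component labeling; particles are assumed brighter than the background.
import numpy as np
from skimage import io, filters, morphology, measure

def measure_particles(path):
    """Return centroid, equivalent diameter, and area for each detected particle."""
    gray = io.imread(path, as_gray=True)
    # Otsu's threshold converts the grayscale TEM image to a binary mask.
    binary = gray > filters.threshold_otsu(gray)
    # Morphological closing with a 2-pixel disk fills small gaps in each particle.
    closed = morphology.closing(binary, morphology.disk(2))
    # Label connected regions and measure their geometry.
    labels = measure.label(closed)
    particles = []
    for region in measure.regionprops(labels):
        diameter = 2.0 * np.sqrt(region.area / np.pi)   # equivalent circular diameter
        particles.append({"centroid": region.centroid,
                          "diameter": diameter,
                          "area": region.area})
    return particles
```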

Performance Comparison on Speech Codecs for Digital Watermarking Applications

  • Mamongkol, Y.;Amornraksa, T.
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.466-469
    • /
    • 2002
  • Using intelligible information contained within speech to identify specific hidden data in watermarked multimedia is considered an efficient way to achieve speech digital watermarking. This paper presents a performance comparison between various types of speech codec in order to determine an appropriate one for digital watermarking applications. In the experiments, the speech signal encoded by four different types of speech codec, namely the CELP, GSM, SBC, and G.723.1 codecs, is embedded into a grayscale image, and their performance in terms of speech recognition is compared. The method for embedding the speech signal into the host data is borrowed from a watermarking method based on the zerotrees of wavelet packet coefficients. To evaluate the efficiency of a speech codec used in watermarking applications, the speech signal extracted from the attacked watermarked image is played back to listeners, who then judge whether its content is intelligible.


Optical Recognition of Credit Card Numbers (신용카드 번호의 광학적 인식)

  • Jung, Min Chul
    • Journal of the Semiconductor & Display Technology
    • /
    • v.13 no.1
    • /
    • pp.57-62
    • /
    • 2014
  • This paper proposes a new optical recognition method for credit card numbers. First, the proposed method segments the numbers from the input image of a credit card, using the significant differences in standard deviation between the foreground numbers and the background. Second, the method extracts gradient features from the segmented numbers; the gradient features are defined as the grayscale gradients along four directions for each of 16 regions of an input number. Finally, it classifies the numbers with an artificial neural network trained by the error back-propagation algorithm. The proposed method is implemented in C on an embedded Linux system for high-speed real-time image processing. Experiments were conducted using real credit card images. The results show that the proposed algorithm is quite successful for most credit cards; however, it fails on some cards with strong background patterns.
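A hedged sketch of the gradient feature described above: each segmented digit is divided into a 4×4 grid of 16 regions, and the gradient energy along four directions is accumulated per region, yielding a 64-dimensional feature vector. The exact direction binning and normalization are assumptions for illustration.

```python
# Illustrative 4-direction gradient features over a 4x4 partition of a digit
# image (16 regions x 4 directions = 64 features). Binning is an assumption.
import numpy as np

def gradient_features(digit, grid=4):
    """Compute 4-direction gradient features over a grid x grid partition."""
    img = digit.astype(np.float64)
    gy, gx = np.gradient(img)
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                                   # -pi .. pi
    direction = ((angle + np.pi) / (np.pi / 2)).astype(int) % 4  # 4 bins
    h, w = img.shape
    feats = np.zeros((grid, grid, 4))
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            for d in range(4):
                mask = direction[ys, xs] == d
                feats[i, j, d] = magnitude[ys, xs][mask].sum()
    return feats.ravel()
```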

Wire Recognition on the Chip Photo based on Histogram (칩 사진 상의 와이어 인식 방법)

  • Jhang, Kyoungson
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.5
    • /
    • pp.111-120
    • /
    • 2016
  • Wire recognition is one of the important tasks in chip reverse engineering, since connectivity comes from wires. Recognized wires are used to recover a logical or functional representation of the corresponding circuit. Although manual recognition provides accurate results, it becomes impractical when the number of wires exceeds hundreds of thousands. Wires on a chip usually have specific intensity or color characteristics, since they are made of specific materials. This paper proposes a two-stage wire recognition scheme: image binarization, followed by a process that determines whether regions in the binary image are wires or not. We employ existing techniques for the two processes. Since the second process requires the characteristics of wires, the user needs to select a typical wire region in the given image. The histogram of the selected region is used to calculate the histogram similarity between the typical wire region and the other regions. The first experiment selects the most appropriate binarization scheme for the second process. The second experiment compares three proposed methods employing histogram similarity in grayscale or HSV color, since no experimentally comparable wire recognition method has been proposed. The best method shows a true positive rate of more than 98% on 25 test examples.
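As a rough illustration of the two-stage scheme (binarization, then histogram matching against a user-selected wire sample), the sketch below uses OpenCV with Otsu binarization and histogram intersection; the paper compares several grayscale and HSV variants, so the particular similarity measure, bin count, and threshold here are assumptions.

```python
# Illustrative two-stage wire recognition: Otsu binarization, then grayscale
# histogram intersection between a user-selected wire sample and each region.
# The similarity measure, 64-bin histograms, and 0.9 threshold are assumptions.
import cv2
import numpy as np

def recognize_wires(gray, wire_sample, threshold=0.9):
    """Return a mask of connected regions whose histogram matches the wire sample.

    gray: 8-bit grayscale chip image; wire_sample: 8-bit patch of a typical wire.
    """
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, labels = cv2.connectedComponents(binary)
    ref = cv2.calcHist([wire_sample], [0], None, [64], [0, 256])
    ref = cv2.normalize(ref, None, alpha=1.0, norm_type=cv2.NORM_L1)
    wires = np.zeros_like(gray)
    for lab in range(1, n_labels):
        mask = (labels == lab).astype(np.uint8)
        hist = cv2.calcHist([gray], [0], mask, [64], [0, 256])
        hist = cv2.normalize(hist, None, alpha=1.0, norm_type=cv2.NORM_L1)
        # Histogram intersection of L1-normalized histograms lies in [0, 1].
        if cv2.compareHist(ref, hist, cv2.HISTCMP_INTERSECT) >= threshold:
            wires[labels == lab] = 255
    return wires
```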

Modified Adaptive Gaussian Filter for Removal of Salt and Pepper Noise

  • Li, Zuoyong;Tang, Kezong;Cheng, Yong;Chen, Xiaobo;Zhou, Chongbo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.2928-2947
    • /
    • 2015
  • The adaptive Gaussian filter (AGF) is a recently developed switching filter for removing salt and pepper noise. AGF first directly identifies pixels with gray levels 0 and 255 as noise pixels, and then restores only the noise pixels using a Gaussian filter whose variance is adapted to the estimated noise density. AGF usually achieves a better denoising effect than other filters. However, AGF still fails to denoise well images containing noise-free pixels with gray levels 0 and 255, due to the severe false alarms in its noise detection stage. To alleviate this issue, a modified version of AGF is proposed in this paper. Specifically, the proposed filter first performs noise detection via image-block-based noise density estimation and a sequential, noise-density-guided rectification of the AGF noise detection result. Then, a modified Gaussian filter with adaptive variance and window size is used to restore the detected noise pixels. The proposed filter has been extensively evaluated on two representative grayscale images and the Berkeley image dataset BSDS300 with 300 images. Experimental results show that the proposed filter achieves a better denoising effect than state-of-the-art filters, especially on images with noise-free pixels of gray levels 0 and 255.
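A simplified sketch of the baseline AGF idea referenced above, assuming the simple 0/255 detection rule: flagged pixels are replaced by a Gaussian-weighted average of their noise-free neighbors, with the filter width scaled by the estimated noise density. The paper's block-based density estimation, rectification step, and adaptive window size are omitted.

```python
# Simplified switching-filter sketch: detect pixels at gray levels 0/255, then
# restore only those pixels with a density-adaptive Gaussian-weighted average
# of the remaining (noise-free) pixels. The sigma schedule is an assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_gaussian_restore(img, base_sigma=0.5, scale=2.0):
    """Restore suspected salt-and-pepper pixels in an 8-bit grayscale image."""
    noise = (img == 0) | (img == 255)
    density = noise.mean()                       # estimated noise density
    sigma = base_sigma + scale * density         # filter width grows with density
    clean = (~noise).astype(np.float64)
    # Normalized convolution: average only over noise-free neighbors.
    weighted = gaussian_filter(img.astype(np.float64) * clean, sigma=sigma)
    weights = gaussian_filter(clean, sigma=sigma)
    estimate = np.divide(weighted, weights,
                         out=img.astype(np.float64), where=weights > 0)
    restored = img.astype(np.float64)
    restored[noise] = estimate[noise]
    return np.clip(restored, 0, 255).astype(np.uint8)
```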

Assessment of Gradient-based Digital Speckle Correlation Measurement Errors

  • Jian, Zhao;Dong, Zhao;Zhe, Zhang
    • Journal of the Optical Society of Korea
    • /
    • v.16 no.4
    • /
    • pp.372-380
    • /
    • 2012
  • The optical method of Digital Speckle Correlation Measurement (DSCM) has been extensively applied due to its capability to measure the entire displacement field over a body surface. A formula for the displacement measurement errors of the gradient-based DSCM method was derived. The errors were found to relate explicitly to the image grayscale errors, consisting of sub-pixel interpolation algorithm errors, image noise, and subset deformation mismatch at each point of the subset. A power-law dependence of the standard deviation of the displacement measurement errors on the subset size was established for the case where the subset deformation is a rigid body translation and random image noise is dominant; this was confirmed by both numerical and experimental results. In a gradient-based algorithm, the basic assumption is rigid body translation of the interrogated subsets; however, this contradicts real circumstances where strains exist. Numerical and experimental results also indicated that subset shape function mismatch was dominant when the order of the assumed subset shape function was lower than that of the actual subset deformation field, and that the power-law dependence clearly broke down in this case. The power-law relationship further leads to a simple criterion for choosing a suitable subset size, image quality, sub-pixel algorithm, and subset shape function for DSCM.

A Novel Image Segmentation Method Based on Improved Intuitionistic Fuzzy C-Means Clustering Algorithm

  • Kong, Jun;Hou, Jian;Jiang, Min;Sun, Jinhua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.6
    • /
    • pp.3121-3143
    • /
    • 2019
  • Segmentation plays an important role in the fields of image processing and computer vision. The intuitionistic fuzzy C-means (IFCM) clustering algorithm has emerged as an effective technique for image segmentation in recent years. However, the standard fuzzy C-means (FCM) and IFCM algorithms are sensitive to noise and to the initial cluster centers, and they ignore the spatial relationship of pixels. In view of these shortcomings, an improved algorithm based on IFCM is proposed in this paper. Firstly, we propose a modified non-membership function to generate the intuitionistic fuzzy set and a method of determining initial clustering centers based on grayscale features; these highlight the effect of uncertainty in the intuitionistic fuzzy set and improve robustness to noise. Secondly, an improved nonlinear kernel function is proposed to map data into kernel space so that the distance between data points and the cluster centers is measured more accurately. Thirdly, a local spatial-gray information measure is introduced, which considers membership degree, gray features, and spatial position information at the same time. Finally, we propose a new measure of intuitionistic fuzzy entropy, which takes into account both the fuzziness and the intuition of the intuitionistic fuzzy set. The experimental results show that, compared with other IFCM-based algorithms, the proposed algorithm has better segmentation and clustering performance.
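To make the intuitionistic-fuzzy ingredient concrete, the sketch below builds an intuitionistic fuzzy set (membership, non-membership, hesitation) from ordinary FCM memberships using a Sugeno-type generator, which is one common textbook choice; the paper's modified non-membership function, kernel, and spatial term are not reproduced here.

```python
# Minimal sketch: turn FCM memberships into an intuitionistic fuzzy set and a
# hesitation-adjusted membership. The Sugeno-type non-membership generator and
# the mu + pi update are common IFCM choices, not the paper's modified function.
import numpy as np

def intuitionistic_fuzzy_set(u, lam=2.0):
    """u: FCM memberships, shape (clusters, pixels); returns (mu, nu, pi)."""
    mu = np.asarray(u, dtype=np.float64)
    nu = (1.0 - mu) / (1.0 + lam * mu)     # Sugeno-type non-membership
    pi = 1.0 - mu - nu                     # hesitation (uncertainty) degree
    return mu, nu, pi

def hesitation_adjusted_membership(u, lam=2.0):
    """Fold the hesitation degree back into the membership and renormalize."""
    mu, nu, pi = intuitionistic_fuzzy_set(u, lam)
    u_star = mu + pi                       # membership boosted by hesitation
    return u_star / u_star.sum(axis=0, keepdims=True)
```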