• Title/Summary/Keyword: Noisy images

The Inspection Algorithm using Invariant Moment for the Detection of Lead Faults of Semiconductor IC (불변 모멘트를 이용한 반도체 IC 리드 불량 검사 알고리즘)

  • Rhee, Kil-Whi;Kim, Joon-Seek
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2737-2749
    • /
    • 1998
  • Recently, vision systems have been widely used in factory automation processes. In this paper, a method that detects faults in the position, slope, and leads of chips is proposed for the inspection of semiconductor chips. Conventional methods mainly inspect semiconductor ICs using features extracted from the image. In contrast, we propose a method that segments the lead region by morphology and inspects lead faults using invariant moments. In the simulation, the results of the proposed method are better than those of the conventional method for both noisy and noiseless images.
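The pipeline described here, morphological segmentation of the lead region followed by invariant-moment features, can be sketched with OpenCV's Hu moments. This is a minimal sketch only; the file names, kernel size, and fault threshold are illustrative assumptions, not the authors' values:

```python
import cv2
import numpy as np

# Load a lead image as grayscale and binarize it (Otsu threshold, illustrative).
img = cv2.imread("lead_img.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening to isolate the lead region and suppress small noise.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
lead = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Hu's seven invariant moments: invariant to translation, scale, and rotation.
hu = cv2.HuMoments(cv2.moments(lead)).flatten()

# Compare against the moments of a known-good reference lead (hypothetical file).
reference_hu = np.load("good_lead_hu.npy")
log_hu = np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
log_ref = np.sign(reference_hu) * np.log10(np.abs(reference_hu) + 1e-30)
distance = np.linalg.norm(log_hu - log_ref)
print("Lead fault" if distance > 1.0 else "Lead OK")  # threshold is illustrative
```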

CAR DETECTION IN COLOR AERIAL IMAGE USING IMAGE OBJECT SEGMENTATION APPROACH

  • Lee, Jung-Bin;Kim, Jong-Hong;Kim, Jin-Woo;Heo, Joon
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.260-262
    • /
    • 2006
  • One of the future remote sensing techniques for transportation applications is vehicle detection from space, which could become the basis for measuring traffic volume and recognizing traffic conditions. This paper introduces an approach to vehicle detection using an image object segmentation approach. Object-oriented image processing is particularly beneficial for high-resolution image classification of urban areas, which generally suffer from noisy components. The project site was the Dae-Jeon metropolitan area, and a set of true color aerial images at 10 cm resolution was used for the test. The authors investigated a variety of parameters such as scale, color, and shape and produced a customized solution for vehicle detection, based on a knowledge-based hierarchical model in the eCognition environment. The biggest stumbling block for vehicle detection in the given data sets was discriminating dark-colored vehicles from new black asphalt pavement. Except for those cases, the overall accuracy was over 90%.
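The paper's rule set lives in eCognition and is not reproduced here, but the object-based idea (segment first, then classify segments by scale, color, and shape) can be sketched with scikit-image. The segmentation parameters and the size/elongation/brightness rules below are illustrative assumptions, as is the input file name:

```python
import numpy as np
from skimage import io
from skimage.segmentation import felzenszwalb
from skimage.measure import regionprops

# Load a true-color aerial tile (hypothetical file name).
image = io.imread("aerial_tile.png")[:, :, :3]

# Object-based step 1: group pixels into image objects (segments).
segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)

# Object-based step 2: classify each object with simple shape/color rules.
gray = image.mean(axis=2)
car_mask = np.zeros(segments.shape, dtype=bool)
for region in regionprops(segments + 1, intensity_image=gray):
    area_px = region.area                      # object size in pixels
    elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
    brightness = region.mean_intensity
    # Car-like objects at 10 cm GSD: a few hundred pixels, elongated, not asphalt-dark.
    if 200 < area_px < 2000 and 1.5 < elongation < 4.0 and brightness > 60:
        car_mask[segments == region.label - 1] = True
```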

Multibiometrics fusion using Aczél-Alsina triangular norm

  • Wang, Ning;Lu, Li;Gao, Ge;Wang, Fanglin;Li, Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.7
    • /
    • pp.2420-2433
    • /
    • 2014
  • Fusing the scores of multiple biometrics is a very promising approach to improving the overall system accuracy and verification performance. In recent years, several approaches to score-level fusion of biometric systems have been studied. However, most of them do not consider the genuine and impostor score distributions and usually result in a higher equal error rate. In this paper, a novel score-level fusion approach for different biometric systems (dual iris, thermal and visible face traits) based on the Aczél-Alsina triangular norm is proposed. It achieves higher identification performance as well as a closer genuine distance and a larger impostor distance. The experimental tests are conducted on a virtual multibiometric database, which merges the challenging CASIA-Iris-Thousand database with noisy samples and the NVIE face database with visible and thermal face images. The rigorous results suggest that significant performance improvement can be achieved after the implementation of multibiometrics. The comparative experiments also ascertain that the proposed fusion approach outperforms state-of-the-art verification performance.
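The Aczél-Alsina triangular norm itself is standard: T_p(a, b) = exp(-((-ln a)^p + (-ln b)^p)^(1/p)) for p > 0 and scores in (0, 1]. Below is a minimal NumPy sketch of score-level fusion with this t-norm; the parameter p and the example scores are illustrative, and the paper's wider fusion framework is not reproduced:

```python
import numpy as np

def aczel_alsina_tnorm(a, b, p=2.0):
    """Aczel-Alsina t-norm of match scores a, b normalized to (0, 1]."""
    a = np.clip(a, 1e-12, 1.0)
    b = np.clip(b, 1e-12, 1.0)
    return np.exp(-(((-np.log(a)) ** p + (-np.log(b)) ** p) ** (1.0 / p)))

# Fuse normalized match scores from two modalities (e.g. iris and face).
iris_scores = np.array([0.91, 0.40, 0.75])   # illustrative scores
face_scores = np.array([0.88, 0.35, 0.60])
fused = aczel_alsina_tnorm(iris_scores, face_scores, p=2.0)
print(fused)  # fused scores, to be thresholded for accept/reject
```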

A Study on the Comparison of One-Dimensional Scattering Extraction Algorithms for Radar Target Identification (레이더 표적 구분을 위한 1차원 산란점 추출 기법 알고리즘들의 성능에 관한 비교 연구)

  • Jung, Ho-Ryung;Seo, Dong-Kyu;Kim, Kyung-Tae;Kim, Hyo-Tae
    • Proceedings of the Korea Electromagnetic Engineering Society Conference
    • /
    • 2003.11a
    • /
    • pp.193-197
    • /
    • 2003
  • Radar target identification can be achieved by using various radar signatures, such as one-dimensional (1-D) range profiles, 2-D radar images, and 1-D or 2-D scattering centers on a target. In this letter, five 1-D scattering center extraction methods are discussed: TLS-Prony (Total Least Squares), Fast Root-MUSIC (Multiple Signal Classification), Matrix Pencil, GEESE (GEneralized Eigenvalues utilizing Signal-subspace Eigenvalues), and TLS-ESPRIT (Total Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques). These methods are compared in terms of estimation accuracy as well as computational efficiency using noisy data. Finally, these methods are applied to a target classification experiment with data measured in the POSTECH compact range facility.
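Of the listed methods, the Matrix Pencil approach is the most compact to sketch: scattering centers appear as poles of the sampled response, obtained from a rank-truncated pencil of two shifted Hankel matrices. The NumPy sketch below follows that general scheme only; the pencil parameter, model order, and synthetic data are illustrative, and the TLS and other subspace variants from the paper are not shown:

```python
import numpy as np

def matrix_pencil_poles(y, model_order, pencil_param=None):
    """Estimate signal poles from uniform samples y with the matrix pencil method."""
    n = len(y)
    L = pencil_param if pencil_param is not None else n // 3
    # Hankel data matrix, then the two shifted sub-matrices forming the pencil.
    Y = np.array([y[i:i + L + 1] for i in range(n - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # Noise filtering: keep only the dominant 'model_order' singular directions of Y0.
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    U, s, Vh = U[:, :model_order], s[:model_order], Vh[:model_order, :]
    Y0_inv = Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T   # truncated pseudo-inverse
    eigs = np.linalg.eigvals(Y0_inv @ Y1)
    # The pencil has rank 'model_order'; the largest-magnitude eigenvalues are the poles.
    return eigs[np.argsort(-np.abs(eigs))[:model_order]]

# Synthetic backscattered data: two scattering centers plus noise (illustrative).
k = np.arange(64)
y = np.exp(1j * 2 * np.pi * 0.12 * k) + 0.7 * np.exp(1j * 2 * np.pi * 0.31 * k)
y += 0.05 * (np.random.randn(64) + 1j * np.random.randn(64))
poles = matrix_pencil_poles(y, model_order=2)
print(np.angle(poles) / (2 * np.pi))   # estimated normalized frequencies (~0.12, 0.31)
```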

A New Approach for Information Security using an Improved Steganography Technique

  • Juneja, Mamta;Sandhu, Parvinder Singh
    • Journal of Information Processing Systems
    • /
    • v.9 no.3
    • /
    • pp.405-424
    • /
    • 2013
  • This research paper proposes a secure, robust approach to information security using steganography. It presents two component-based LSB (Least Significant Bit) steganography methods for embedding secret data in the least significant bits of the blue components and partial green components of random pixel locations on the edges of images. An adaptive LSB-based steganography method is proposed for embedding data based on the data available in the MSBs (Most Significant Bits) of the red, green, and blue components of randomly selected pixels across smooth areas. A hybrid feature detection filter is also proposed that performs better at predicting edge areas, even in noisy conditions. AES (Advanced Encryption Standard) and random pixel embedding are incorporated to provide two-tier security. The experimental results of the proposed approach are better in terms of PSNR and capacity. A comparative analysis of the output results with other existing techniques gives the proposed approach an edge over them. It has been thoroughly tested against various steganalysis attacks, such as visual analysis, histogram analysis, chi-square, and RS analysis, and it withstood all of these attacks well.
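The core primitive, hiding message bits in the least significant bits of the blue channel, is easy to sketch with NumPy. In this sketch the sequential pixel order stands in for the paper's random, edge-restricted pixel selection, and the AES layer is omitted:

```python
import numpy as np

def embed_lsb_blue(image, message_bits):
    """Hide message_bits in the LSBs of the blue channel of an H x W x 3 uint8 RGB image."""
    stego = image.copy()
    blue = stego[:, :, 2].ravel()
    if len(message_bits) > blue.size:
        raise ValueError("message too long for this cover image")
    # Clear each target LSB, then set it to the message bit.
    blue[:len(message_bits)] = (blue[:len(message_bits)] & 0xFE) | message_bits
    stego[:, :, 2] = blue.reshape(stego.shape[:2])
    return stego

def extract_lsb_blue(stego, n_bits):
    """Read n_bits back from the blue-channel LSBs, in the same pixel order."""
    return stego[:, :, 2].ravel()[:n_bits] & 1

# Round-trip check on a random cover image (illustrative).
cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
bits = np.random.randint(0, 2, 100, dtype=np.uint8)
stego = embed_lsb_blue(cover, bits)
assert np.array_equal(extract_lsb_blue(stego, 100), bits)
```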

A Study on Fuzzy Minutiae-Based Matching Method (퍼지를 이용한 지문 정합에 관한 연구)

  • Eom, Ki-Yol;Kang, Min-Koo;Hong, Da-Hye;Kim, Mun-Hyun
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2008.04a
    • /
    • pp.359-361
    • /
    • 2008
  • This paper presents a fuzzy minutiae-based matching method to improve matching accuracy between the template and the input fingerprint image. Minutiae-based matching is the most well-known and widely used method for fingerprint matching. However, finger pressure, dryness of the skin, skin disease, sweat, dirt, grease, and humidity in the air cause noisy fingerprint images, and distortion is produced when users move their fingers on the scanner surface. The input image may be rejected by the fingerprint recognition system because the distorted fingerprint image is very different from the original image. Large tolerance boxes and a fuzzy discriminant function are required to improve the accuracy.
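The idea of softening a hard tolerance box into a fuzzy one can be sketched as a membership function over the position and angle differences of paired minutiae. The trapezoidal shape and the tolerance values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def fuzzy_membership(diff, full_match, no_match):
    """1 inside the tight tolerance, 0 beyond the loose one, linear in between."""
    return np.clip((no_match - diff) / (no_match - full_match), 0.0, 1.0)

def minutia_match_score(m1, m2, pos_tol=(8.0, 20.0), ang_tol=(10.0, 30.0)):
    """Fuzzy matching score of two minutiae given as (x, y, angle_deg)."""
    d_pos = np.hypot(m1[0] - m2[0], m1[1] - m2[1])
    d_ang = abs((m1[2] - m2[2] + 180.0) % 360.0 - 180.0)   # wrapped angle difference
    # Combine the two memberships with a minimum (a standard fuzzy AND).
    return min(fuzzy_membership(d_pos, *pos_tol), fuzzy_membership(d_ang, *ang_tol))

# A slightly displaced, slightly rotated minutia still receives a partial score.
print(minutia_match_score((100.0, 120.0, 45.0), (110.0, 118.0, 58.0)))
```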

Toward Occlusion-Free Depth Estimation for Video Production

  • Park, Jong-Il;Inoue, Seiki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1997.06a
    • /
    • pp.131-136
    • /
    • 1997
  • We present a method to estimate a dense and sharp depth map using multiple cameras for application to flexible video production. A key issue for obtaining a sharp depth map is how to overcome the harmful influence of occlusion. Thus, we first propose to selectively use the depth information from multiple cameras. With a simple sort-and-discard technique, we resolve the occlusion problem considerably at a slight sacrifice of noise tolerance. However, boundary overreach from more textured areas into less textured areas at object boundaries still remains to be solved. We observed that the amount of boundary overreach is less than half the size of the matching window and that, unlike usual stereo matching, the boundary overreach with the proposed occlusion-overcoming method shows a very abrupt transition. Based on these observations, we propose a hierarchical estimation scheme that attempts to reduce boundary overreach so that edges of the depth map coincide with object boundaries on the one hand, and to reduce noisy estimates due to an insufficient matching window size on the other hand. We show that the hierarchical method can produce a sharp depth map for a variety of images.
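The sort-and-discard step can be sketched as follows: for each pixel and depth hypothesis, compute a matching cost against every auxiliary camera, sort the costs, drop the worst ones as likely occluded, and average the rest. The toy NumPy sketch below covers that aggregation only; the cost volume is random and the paper's hierarchical refinement is not shown:

```python
import numpy as np

def occlusion_robust_cost(costs_per_camera, n_discard=1):
    """Aggregate per-camera matching costs, discarding the largest (likely occluded) ones.

    costs_per_camera: array of shape (n_cameras, H, W, n_depths).
    Returns an aggregated cost volume of shape (H, W, n_depths).
    """
    # Sort costs across cameras and average only the smallest ones.
    sorted_costs = np.sort(costs_per_camera, axis=0)
    kept = sorted_costs[:costs_per_camera.shape[0] - n_discard]
    return kept.mean(axis=0)

# Toy example: 4 cameras, a 32x32 image, 16 depth hypotheses (random costs).
costs = np.random.rand(4, 32, 32, 16)
aggregated = occlusion_robust_cost(costs, n_discard=1)
depth_map = np.argmin(aggregated, axis=-1)   # winner-take-all depth index per pixel
```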

Denoising Images by Soft-Threshold Technique Using the Monotonic Transform and the Noise Power of Wavelet Subbands (단조변환 및 웨이블릿 서브밴드 잡음전력을 이용한 Soft-Threshold 기법의 영상 잡음제거)

  • Park, Nam-Chun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.4
    • /
    • pp.141-147
    • /
    • 2014
  • Wavelet shrinkage is a technique that reduces the wavelet coefficients, using a threshold determined by the variance of the wavelet coefficients, so as to minimize the MSE (mean square error) between the signal and the estimate recovered from the noisy signal. In this paper, using the monotonic transform and the power of the wavelet subbands, new thresholds applicable to the high- and low-frequency wavelet bands are proposed, and the thresholds are applied to the soft-threshold (ST) technique to denoise image signals corrupted by additive Gaussian noise. The PSNR results are compared with those obtained by the VisuShrink technique and those of [15]. The results show the validity of the technique.
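The paper's monotonic-transform thresholds are not reproduced here, but the baseline it compares against, soft-thresholding of the detail subbands with the VisuShrink (universal) threshold sigma * sqrt(2 ln N), can be sketched with PyWavelets. The wavelet choice and the median-based noise estimate are common defaults assumed for illustration:

```python
import numpy as np
import pywt

def visushrink_denoise(noisy, wavelet="db4", levels=3):
    """Soft-threshold the detail subbands with the universal threshold."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal subband (median / 0.6745 rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(noisy.size))
    denoised_coeffs = [coeffs[0]]   # keep the approximation band untouched
    for detail_level in coeffs[1:]:
        denoised_coeffs.append(tuple(
            pywt.threshold(band, threshold, mode="soft") for band in detail_level))
    return pywt.waverec2(denoised_coeffs, wavelet)

# Illustrative run on a synthetic image with additive Gaussian noise.
clean = np.outer(np.sin(np.linspace(0, 3 * np.pi, 128)),
                 np.cos(np.linspace(0, 3 * np.pi, 128)))
noisy = clean + 0.2 * np.random.randn(*clean.shape)
restored = visushrink_denoise(noisy)
```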

A Potts Automata algorithm for Noise Removal and Edge Detection (Potts Automata를 이용한 영상의 잡음 제거 및 에지 추출)

  • 이석기;김석태;조성진
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.3C
    • /
    • pp.327-335
    • /
    • 2003
  • Cellular automata are discrete dynamical systems in which natural phenomena may be specified completely in terms of local relations. In this paper, we propose a noise removal and edge detection algorithm using Potts automata, which are based on cellular automata. The proposed method aims to locally increase or decrease the differences in gray-level values between pixels of the image without losing the main characteristics of the image. The dynamical behavior of these automata is determined by Lyapunov operators for sequential and parallel updates. We have found that the proposed automaton rules present very fast convergence to fixed points and stability in the presence of random noisy images. Based on the experimental results, we discuss their advantages and efficiency.
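The authors' specific Potts rules are not given in the abstract, so the sketch below only illustrates the general scheme: quantize the image into a small number of Potts states and iterate a local, parallel update toward a fixed point, here a simple 3x3 majority vote. The state count, neighborhood, and stopping rule are assumptions:

```python
import numpy as np

def potts_like_denoise(image, n_states=8, max_iters=20):
    """Quantize to n_states and repeatedly replace each pixel by its neighborhood's majority state."""
    # Map gray levels to discrete Potts states.
    states = (image.astype(np.float64) / 256.0 * n_states).astype(np.int64)
    for _ in range(max_iters):
        padded = np.pad(states, 1, mode="edge")
        # Count votes from the 3x3 neighborhood (including the center) for each state.
        votes = np.zeros((n_states,) + states.shape, dtype=np.int64)
        for dy in range(3):
            for dx in range(3):
                neighbor = padded[dy:dy + states.shape[0], dx:dx + states.shape[1]]
                for s in range(n_states):
                    votes[s] += (neighbor == s)
        new_states = votes.argmax(axis=0)       # parallel (synchronous) update
        if np.array_equal(new_states, states):  # fixed point reached
            break
        states = new_states
    return (states * (256 // n_states)).astype(np.uint8)

# Illustrative run on a noisy two-level image.
img = np.full((64, 64), 40, dtype=np.uint8)
img[16:48, 16:48] = 200
noisy = np.clip(img + np.random.randint(-60, 60, img.shape), 0, 255).astype(np.uint8)
smoothed = potts_like_denoise(noisy)
```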

High Representation based GAN defense for Adversarial Attack

  • Sutanto, Richard Evan;Lee, Suk Ho
    • International journal of advanced smart convergence
    • /
    • v.8 no.1
    • /
    • pp.141-146
    • /
    • 2019
  • These days, many applications use neural networks as parts of their systems. On the other hand, adversarial examples have become an important issue concerning the security of neural networks: a neural network classifier can be fooled into misclassification by adversarial examples. There has been much research on countering adversarial examples using denoising methods. Some of these use a GAN (Generative Adversarial Network) to remove adversarial noise from input images; by producing an image from the generator network that is close enough to the original clean image, the effect of adversarial examples can be reduced. However, there is a chance that adversarial noise survives the approximation process, because it is unlike ordinary noise. To address this, we propose an approach that utilizes the high-level representation in the classifier by combining the GAN with a trained U-Net network. This approach focuses on minimizing the loss on high-representation terms, i.e., the difference between the high-level representation of the clean data and that of the approximated output for the noisy data in the training dataset. Furthermore, the generated output is checked for whether it yields minimal error with respect to the true label; the U-Net is trained with the true labels to make sure the generated output gives minimal error in the end. Finally, the remaining adversarial noise that still exists after the low-level approximation can be removed by the U-Net, owing to the minimization of the high-representation terms.
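The "high representation" term amounts to penalizing the distance between classifier features of the purified image and of the clean image, alongside a pixel-level term. The PyTorch sketch below shows such a combined loss; the purifier, feature extractor, and weighting factor are placeholders, and this is not the paper's exact training objective:

```python
import torch
import torch.nn as nn

def defense_loss(purifier, feature_extractor, adv_images, clean_images, lam=1.0):
    """Pixel reconstruction loss plus a high-level feature (representation) matching loss."""
    purified = purifier(adv_images)
    pixel_loss = nn.functional.mse_loss(purified, clean_images)
    # High-representation term: match features of purified vs. clean inputs.
    feat_purified = feature_extractor(purified)
    feat_clean = feature_extractor(clean_images).detach()
    feature_loss = nn.functional.mse_loss(feat_purified, feat_clean)
    return pixel_loss + lam * feature_loss

# Illustrative stand-ins: a tiny conv "purifier" and a pooled-conv "feature extractor".
purifier = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))
feature_extractor = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
adv = torch.rand(4, 3, 32, 32)
clean = torch.rand(4, 3, 32, 32)
loss = defense_loss(purifier, feature_extractor, adv, clean)
loss.backward()
```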