• Title/Summary/Keyword: Normalized Algorithm

Search Results: 591

A Study on Signature Identification using the Distribution of Space Spectrum (공간 스펙트럼 분포를 이용한 서명 인식에 관한 연구)

  • 남시병;박진양;이상범
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.8
    • /
    • pp.1-7
    • /
    • 1993
  • This paper proposes an algorithm that extracts optimal characteristic parameters for identifying signatures from the spectrum obtained with the 2-D FFT. The signature image, input through a scanner, is normalized to 250×128 pixels in the preprocessor. The normalized image is divided into block segments, and each segment is transformed into a space spectrum by the 2-D FFT. Several methods for extracting signature characteristic parameters from that spectrum are compared. Experiments using the characteristic parameters extracted between 0° and 90° from the (0, 0) and (63, 0) corners of the 64×64 block spectrum show that this method achieves a signature identification rate of 92.5% for 100 signatures, higher than the other methods.

  • PDF
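The blockwise spectrum computation described in the abstract can be sketched in NumPy as follows. This is only an illustration under stated assumptions, not the authors' implementation: the 250×128 normalization step, the angular sampling between the corners, and the classifier are omitted, and taking the low-frequency quadrant as the feature is an assumption.

```python
import numpy as np

def block_spectrum_features(img, block=64):
    """Split a normalized signature image into square blocks and
    compute the 2-D FFT magnitude spectrum of each block."""
    h, w = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            seg = img[y:y + block, x:x + block]
            spec = np.abs(np.fft.fft2(seg))  # space-spectrum magnitude
            # Low-frequency quadrant near the (0, 0) corner as a feature
            # (illustrative choice, not the paper's exact parameters)
            feats.append(spec[:block // 2, :block // 2].ravel())
    return np.concatenate(feats)
```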

Security Verification of Video Telephony System Implemented on the DM6446 DaVinci Processor

  • Ghimire, Deepak;Kim, Joon-Cheol;Lee, Joon-Whoan
    • International Journal of Contents
    • /
    • v.8 no.1
    • /
    • pp.16-22
    • /
    • 2012
  • In this paper we propose a method for verifying video in a video telephony system implemented on the DM6446 DaVinci processor. Each frame is categorized as either an error-free frame or an error frame according to predefined criteria. The human face is chosen as the basic means of authenticating a video frame, and a skin-color-based algorithm is implemented to detect the face. A video frame is classified as error-free if it contains a single face object with a clear view of the facial features (eyes, nose, mouth, etc.) and the background of the frame does not differ from the predefined background; otherwise, it is classified as an error frame. We also implemented image-histogram-based NCC (normalized cross-correlation) comparison for video verification to speed up the system. Experimental results show that the system classifies frames with 90.83% accuracy.
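The histogram-based NCC comparison can be illustrated roughly as follows. This is a sketch, not the paper's code; the bin count and the 8-bit intensity range are assumptions.

```python
import numpy as np

def histogram_ncc(frame_a, frame_b, bins=256):
    """Normalized cross-correlation between the intensity histograms
    of two frames (a cheap proxy for a full per-pixel comparison)."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha - ha.mean()          # center both histograms
    hb = hb - hb.mean()
    denom = np.sqrt((ha ** 2).sum() * (hb ** 2).sum())
    return float((ha * hb).sum() / denom) if denom else 0.0
```

Identical frames yield an NCC of 1.0; a frame whose content (and hence histogram) departs from the reference scores lower.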

Research on the Multi-Focus Image Fusion Method Based on the Lifting Stationary Wavelet Transform

  • Hu, Kaiqun;Feng, Xin
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1293-1300
    • /
    • 2018
  • To overcome the disadvantages of multi-scale geometric analysis methods in image fusion, such as loss of definition and complex selection of fusion rules, an improved multi-focus image fusion method is proposed. First, an initial fused image is quickly obtained with the lifting stationary wavelet transform, and a simple normalized cut is performed on it to obtain segmented regions. Then, the original images are subjected to the NSCT transform, and the absolute values of the high-frequency coefficients in each segmented region are calculated. Finally, the region with the largest absolute value is selected as the post-fusion region, and the fused multi-focus image is obtained by traversing each segmented region. Numerical experiments show that the proposed algorithm not only simplifies the selection of fusion rules but also avoids loss of definition, and is effective.
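The select-the-sharper-source idea behind such fusion can be sketched with a much-simplified stand-in: blockwise Laplacian energy in place of NSCT coefficients, and a fixed block grid in place of normalized-cut regions. The sketch below is therefore illustrative only.

```python
import numpy as np

def fuse_multifocus(img_a, img_b, block=16):
    """Toy multi-focus fusion: for each block, keep the source image
    whose high-frequency energy (discrete Laplacian) is larger."""
    def lap_energy(seg):
        # 4-neighbour discrete Laplacian (wrap-around boundaries)
        l = (-4 * seg
             + np.roll(seg, 1, 0) + np.roll(seg, -1, 0)
             + np.roll(seg, 1, 1) + np.roll(seg, -1, 1))
        return np.abs(l).sum()

    fused = np.empty_like(img_a)
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sa = img_a[y:y + block, x:x + block]
            sb = img_b[y:y + block, x:x + block]
            fused[y:y + block, x:x + block] = (
                sa if lap_energy(sa) >= lap_energy(sb) else sb)
    return fused
```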

A study on pattern recognition using DCT and neural network (DCT와 신경회로망을 이용한 패턴인식에 관한 연구)

  • 이명길;이주신
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.3
    • /
    • pp.481-492
    • /
    • 1997
  • This paper presents an algorithm for recognizing surface-mount device (SMD) IC patterns based on an error back-propagation (EBP) neural network and the discrete cosine transform (DCT). In this approach, we chose frequency, angle, translation, and amplitude as shape-information parameters for the SMD IC, calculated from the DCT coefficient matrix. These feature parameters are normalized and then used as the input vector of a neural network capable of adapting to the surroundings, such as variations in illumination, arrangement of objects, and translation. Learning of the EBP neural network is carried out until the maximum error of the output layer is less than 0.02; this value was reached after forty thousand iterations. Experimental results show a recognition rate of 100% for random patterns taken under circumstances similar to the normalized training patterns. They also show that the proposed method is not only relatively simple compared with traditional space-domain feature extraction but also able to recognize a pattern's class, position, and existence.

  • PDF
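The DCT-based feature extraction step might be sketched as follows. The 16-coefficient low-frequency square and the unit-norm scaling are illustrative assumptions, not the paper's exact parameter choices (which derive frequency, angle, translation, and amplitude from the coefficient matrix).

```python
import numpy as np

def dct2(block):
    """2-D DCT-II computed with the orthonormal DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def dct_features(img, num=16):
    """Take a top-left (low-frequency) square of DCT coefficients and
    normalize it to unit length as a neural-network input vector."""
    coef = dct2(img.astype(float))
    side = int(np.sqrt(num))
    v = coef[:side, :side].ravel()
    return v / (np.linalg.norm(v) or 1.0)
```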

Matching Of Feature Points using Dynamic Programming (동적 프로그래밍을 이용한 특징점 정합)

  • Kim, Dong-Keun
    • The KIPS Transactions:PartB
    • /
    • v.10B no.1
    • /
    • pp.73-80
    • /
    • 2003
  • In this paper we propose an algorithm that matches corresponding feature points between a reference image and a search image. We use Harris's corner detector to find the feature points in both images. For each feature point in the reference image, we extract as candidate matching points those feature points in the search image whose normalized correlation coefficient exceeds a threshold. Finally, we determine the corresponding feature point among the candidates using dynamic programming. Experiments show matching results for feature points in synthetic and real images.
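Given a precomputed table of NCC scores between reference points and candidates, the final dynamic-programming stage can be sketched as an alignment-style DP. The skip-without-penalty recurrence below is an assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def dp_match(score):
    """Given score[i, j] = NCC between reference point i and candidate j,
    find a one-to-one, order-preserving assignment maximizing the total
    score (classic alignment-style dynamic programming)."""
    n, m = score.shape
    best = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best[i, j] = max(best[i - 1, j],          # skip reference i
                             best[i, j - 1],          # skip candidate j
                             best[i - 1, j - 1] + score[i - 1, j - 1])
    # Backtrack to recover the matched pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if best[i, j] == best[i - 1, j - 1] + score[i - 1, j - 1]:
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif best[i, j] == best[i - 1, j]:
            i -= 1
        else:
            j -= 1
    return pairs[::-1], best[n, m]
```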

Validation of Ocean Color Algorithms in the Ulleung Basin, East/Japan Sea

  • Yoo, Sin-Jae;Park, Ji-Soo;Kim, Hyun-Cheol
    • Korean Journal of Remote Sensing
    • /
    • v.16 no.4
    • /
    • pp.315-325
    • /
    • 2000
  • Observations were made to validate ocean color algorithms in the Ulleung Basin, East Sea, in May 2000. Small-scale and meso-scale surveys were conducted to validate ocean color products (nLw, the normalized water-leaving radiance, and chlorophyll concentration). Discrepancies between SeaWiFS and in situ nLw show that the current aerosol models of the standard SeaWiFS processing software are less than adequate (Gordon and Wang, 1994). Applying the standard SeaWiFS in-water algorithm resulted in an overestimation of chlorophyll concentration, because CDOM absorption was higher than the estimated chlorophyll absorption. TSS concentration was also high; therefore, the study region deviated from Case 1 waters. The source of these materials seems to be the entrainment of coastal water by the Tsushima Warm Current. Study of the bio-optical properties in other seasons is desirable.

An information-theoretical analysis of gene nucleotide sequence structuredness for a selection of aging and cancer-related genes

  • Blokh, David;Gitarts, Joseph;Stambler, Ilia
    • Genomics & Informatics
    • /
    • v.18 no.4
    • /
    • pp.41.1-41.8
    • /
    • 2020
  • We provide an algorithm for the construction and analysis of autocorrelation (information) functions of gene nucleotide sequences. As a measure of correlation between discrete random variables, we use normalized mutual information. The information functions are indicative of the degree of structuredness of gene sequences. We construct the information functions for selected gene sequences. We find a significant difference between information functions of genes of different types. We hypothesize that the features of information functions of gene nucleotide sequences are related to phenotypes of these genes.
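One point of such an information (autocorrelation) function can be computed as sketched below: the normalized mutual information between the nucleotide at position i and the one at position i + lag. Normalizing by the larger of the two marginal entropies is one common convention and an assumption here, not necessarily the authors' choice.

```python
import numpy as np
from collections import Counter

def normalized_mi(seq, lag):
    """Normalized mutual information between nucleotides at positions
    i and i + lag of a sequence string such as 'ACGT...'."""
    pairs = list(zip(seq, seq[lag:]))
    n = len(pairs)
    pxy = Counter(pairs)                      # joint counts
    px = Counter(p[0] for p in pairs)         # marginal counts
    py = Counter(p[1] for p in pairs)
    mi = sum(c / n * np.log2(c / n / (px[a] / n * py[b] / n))
             for (a, b), c in pxy.items())
    hx = -sum(c / n * np.log2(c / n) for c in px.values())
    hy = -sum(c / n * np.log2(c / n) for c in py.values())
    denom = max(hx, hy)
    return mi / denom if denom else 0.0
```

A periodic sequence evaluated at a lag equal to its period gives the maximum value of 1, reflecting full structuredness at that lag.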

An FPGA Implementation of Acoustic Echo Canceller Using S-LMS Algorithm (S-LMS 알고리즘을 이용한 음향반향제거기의 FPGA구현)

  • 이행우
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.9
    • /
    • pp.65-71
    • /
    • 2004
  • This paper describes a new adaptive algorithm that reduces the computation required in an adaptive filter. The proposed S-LMS algorithm uses only the signs of the normalized input signal, rather than the input samples themselves, when the filter coefficients are adapted. This removes the need for the multiplications and divisions that account for most of the computation. To analyze the convergence characteristics of the proposed algorithm, the condition and speed of convergence are derived mathematically. We also simulate an echo canceller adopting this algorithm and compare its convergence performance with that of other algorithms. The simulations show that an echo canceller adopting this algorithm achieves almost the same convergence performance as one adopting the SIA algorithm.
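A plausible reading of a sign-based update (power-normalized step size, sign of the input in the coefficient update) can be sketched as follows. The exact S-LMS recursion in the paper may differ; the tap count, step size, and power-estimate form below are assumptions.

```python
import numpy as np

def slms_echo_cancel(x, d, taps=32, mu=0.005):
    """Sign-based adaptive filter sketch: the coefficient update uses
    only the sign of the input samples, so no multiplication by the
    input itself is needed in the per-tap update."""
    w = np.zeros(taps)          # adaptive filter coefficients
    e = np.zeros(len(x))        # a priori error (residual echo)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # most recent samples first
        e[n] = d[n] - w @ u                 # desired minus filter output
        p = (u @ u) / taps + 1e-8           # input power estimate
        w += (mu / p) * e[n] * np.sign(u)   # sign-of-input update
    return w, e
```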

Simplified 2-Dimensional Scaled Min-Sum Algorithm for LDPC Decoder

  • Cho, Keol;Lee, Wang-Heon;Chung, Ki-Seok
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.3
    • /
    • pp.1262-1270
    • /
    • 2017
  • Among the various decoding algorithms for low-density parity-check (LDPC) codes, the min-sum (MS) algorithm and its variants are widely adopted because of their computational simplicity compared to the sum-product (SP) algorithm, at a slight loss of decoding performance. In the MS algorithm, the magnitude of the output message from a check-node (CN) processing unit is decided by either the smallest or the next-smallest input message, denoted min1 and min2, respectively. It has been shown that multiplying the output CN message by a scaling factor improves decoding performance; further, Zhong et al. showed that multiplying min1 and min2 by different scaling factors (called 2-dimensional scaling) improves the performance of the LDPC decoder considerably. In this paper, the simplified 2-dimensional scaled (S2DS) MS algorithm is proposed. We identify a pair of efficient scaling factors whose multiplications can be replaced by combinations of addition and shift operations; furthermore, one scaling operation is approximated by the difference between min1 and min2. Simulation results show that S2DS achieves error-correcting performance close to, or exceeding, that of the SP algorithm regardless of coding rate, and its computational complexity is the lowest among the modified MS algorithms.
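The check-node update with separate scaling of min1 and min2 can be sketched as below. The factors 0.75 and 0.875 (both implementable with shifts and adds) are illustrative stand-ins, not the pair derived in the paper, and the difference-based approximation of one scaling is omitted.

```python
def scaled_minsum_check_node(llrs, alpha=0.75, beta=0.875):
    """Check-node update of a 2-D scaled min-sum LDPC decoder: the
    output magnitude toward each edge is the smallest input magnitude
    among the OTHER edges (min1, or min2 for the edge that itself
    holds min1), scaled by a per-minimum factor."""
    mags = [abs(v) for v in llrs]
    min1 = min(mags)
    i1 = mags.index(min1)
    min2 = min(m for i, m in enumerate(mags) if i != i1)
    total_sign = 1
    for v in llrs:
        total_sign *= 1 if v >= 0 else -1
    out = []
    for i, v in enumerate(llrs):
        mag = beta * min2 if i == i1 else alpha * min1
        # Extrinsic sign: product of all signs except this edge's own
        sign = total_sign * (1 if v >= 0 else -1)
        out.append(sign * mag)
    return out
```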

Fast non-local means noise reduction algorithm with acceleration function for improvement of image quality in gamma camera system: A phantom study

  • Park, Chan Rok;Lee, Youngjin
    • Nuclear Engineering and Technology
    • /
    • v.51 no.3
    • /
    • pp.719-722
    • /
    • 2019
  • Gamma-ray images generally suffer from considerable noise because of low photon detection in the gamma camera system. The purpose of this study is to improve image quality in gamma-ray images from a gamma camera system using a fast non-local means (FNLM) noise reduction algorithm with an acceleration function. The designed FNLM algorithm is based on local-region considerations, including the Euclidean distance in the gamma-ray image and the use of encoded information. To evaluate noise characteristics, the normalized noise power spectrum (NNPS), contrast-to-noise ratio (CNR), and coefficient of variation (COV) were used. The NNPS results show that the lowest values are obtained with the FNLM noise reduction algorithm. In addition, compared with conventional methods, the average CNR and COV using the proposed algorithm were approximately 2.23 and 7.95 times better, respectively, than those of the noisy image. In particular, the image-processing time of the FNLM algorithm was the shortest among the compared noise reduction methods. These image-quality results demonstrate the superiority of the proposed FNLM noise reduction algorithm in a gamma camera system.
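The CNR and COV figures of merit used in the evaluation are straightforward to compute; a minimal sketch follows, assuming population statistics over user-chosen regions of interest.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio between a region of interest and a
    background region: signal difference relative to background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

def cov(roi):
    """Coefficient of variation: relative noise level inside a region."""
    return roi.std() / roi.mean()
```

Higher CNR and lower COV after denoising indicate better noise suppression without loss of contrast.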