• Title/Summary/Keyword: Hamming distance

Search results: 151

A study on Iris Recognition using Wavelet Transformation and Nonlinear Function

  • Hur Jung-Youn;Truong Le Xuan;Lee Sang-Kyu
    • 한국지능시스템학회논문지
    • /
    • Vol. 15 No. 3
    • /
    • pp.357-362
    • /
    • 2005
  • The iris recognition system is one of the most reliable biometric recognition systems. An algorithm is proposed to localize the iris in the iris image received from the iris input camera on the client. In the first step, the algorithm determines the center of the pupil; in the second step, it determines the outer boundary of the iris and the pupillary boundary. The localized iris area is transformed into polar coordinates. After the wavelet transformation is performed three times, normalization is done using a sigmoid function. The binarization step converts the normalized pixel values (0 to 255) into binary values by comparing pairs of adjacent pixels. The binary code of the iris is transmitted to the server over the network. On the server, the comparison process compares the binary value of the presented iris with the reference values in the database, and recognition or rejection depends on the value of the Hamming distance. After the binary value of the presented iris is matched against the database stored on the server, the result is transmitted back to the client.
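
The decision step described above reduces to thresholding the Hamming distance between two binary iris codes. The sketch below illustrates only that comparison; the code length, threshold value, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: accept or reject a presented iris code by its normalized
# Hamming distance to a stored reference code. The threshold value is an
# illustrative assumption, not taken from the paper.

def hamming_distance(code_a: list[int], code_b: list[int]) -> float:
    """Fraction of bit positions in which two equal-length binary codes differ."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must have the same length")
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def is_match(presented: list[int], reference: list[int], threshold: float = 0.32) -> bool:
    """Recognize the iris if the normalized Hamming distance is below the threshold."""
    return hamming_distance(presented, reference) < threshold
```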

가중치 집합 최적화를 통한 효율적인 가중 무작위 패턴 생성 (Efficient Weighted Random Pattern Generation Using Weight Set Optimization)

  • 이항규;김홍식;강성호
    • 전자공학회논문지C
    • /
    • Vol. 35C No. 9
    • /
    • pp.29-37
    • /
    • 1998
  • In weighted random pattern testing, optimized weight sets must be found in order to achieve a high fault coverage with a small number of weighted random patterns, and much research has therefore been devoted to finding such weight sets. This paper proposes a new weight set optimization algorithm that efficiently finds optimized weight sets based on the sampling probabilities of deterministic test patterns. A method for determining an appropriate maximum Hamming distance through simulation is also introduced. Experimental results on the ISCAS 85 benchmark circuits support the effectiveness of the new weight set optimization algorithm and of the method for finding an appropriate maximum Hamming distance.
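
As background for the abstract above, the sketch below illustrates the basic idea of weighted random pattern generation: each input bit is assigned a weight giving its probability of being 1, and a maximum Hamming distance to a deterministic target pattern limits how far generated patterns may stray. The weights, pattern length, and distance limit are illustrative assumptions; this is not the proposed optimization algorithm.

```python
import random

# Minimal sketch of weighted random pattern generation (not the paper's
# weight set optimization algorithm). weights[i] is the probability that
# bit i of a generated pattern is 1.

def weighted_random_pattern(weights: list[float]) -> list[int]:
    return [1 if random.random() < w else 0 for w in weights]

def hamming_distance(a: list[int], b: list[int]) -> int:
    return sum(x != y for x, y in zip(a, b))

# Illustrative weights biased toward one deterministic target pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0]
weights = [0.9 if bit else 0.1 for bit in target]

# Keep only patterns within an assumed maximum Hamming distance of the target.
max_hd = 2
patterns = [p for p in (weighted_random_pattern(weights) for _ in range(100))
            if hamming_distance(p, target) <= max_hd]
print(len(patterns), "patterns within Hamming distance", max_hd)
```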


Robust Image Hashing for Tamper Detection Using Non-Negative Matrix Factorization

  • Tang, Zhenjun;Wang, Shuozhong;Zhang, Xinpeng;Wei, Weimin;Su, Shengjun
    • Journal of Ubiquitous Convergence Technology
    • /
    • Vol. 2 No. 1
    • /
    • pp.18-26
    • /
    • 2008
  • The invariance relation existing in non-negative matrix factorization (NMF) is used to construct robust image hashes in this work. The image is first re-scaled to a fixed size. Low-pass filtering is performed on the luminance component of the re-sized image to produce a normalized matrix. Entries in the normalized matrix are pseudo-randomly re-arranged under the control of a secret key to generate a secondary image. Non-negative matrix factorization is then performed on the secondary image. As the relation between most pairs of adjacent entries in the NMF coefficient matrix is essentially invariant to ordinary image processing, a coarse quantization scheme is devised to compress the extracted features contained in the coefficient matrix. The resulting binary elements are scrambled based on another key to form the image hash. Similarity between hashes is measured by the Hamming distance. Experimental results show that the proposed scheme is robust against perceptually acceptable modifications to the image, such as Gaussian filtering, moderate noise contamination, JPEG compression, re-scaling, and watermark embedding. Hashes of different images have a very low collision probability. Tampering with local image areas can be detected by comparing the Hamming distance against a predetermined threshold, indicating the usefulness of the technique in digital forensics.
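
The final decision in the abstract above is a Hamming distance comparison between binary hashes. The sketch below shows only that comparison step; the hash length and threshold are assumptions made for illustration, not values from the paper.

```python
# Minimal sketch: decide whether two binary image hashes correspond to the
# same (possibly tampered) image by thresholding their Hamming distance.
# Hash length and threshold are illustrative assumptions.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Number of differing bits between two equal-length bit strings."""
    if len(hash_a) != len(hash_b):
        raise ValueError("hashes must have the same length")
    return sum(a != b for a, b in zip(hash_a, hash_b))

def is_tampered(hash_a: str, hash_b: str, threshold: int = 10) -> bool:
    """Flag the pair as tampered/different when the distance exceeds the threshold."""
    return hamming_distance(hash_a, hash_b) > threshold

# Similar images give small distances; tampered or different images give large ones.
print(is_tampered("1011001110100101", "1011001010100101", threshold=3))  # False
```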


레이저를 이용한 Bin-Picking 방법 (Bin-Picking Method Using Laser)

  • 주기세;한민홍
    • 한국정밀공학회지
    • /
    • Vol. 12 No. 9
    • /
    • pp.156-166
    • /
    • 1995
  • This paper presents a bin-picking method using a slit-beam laser in which a robot recognizes all of the unoccluded objects on top of a jumbled pile and picks them up one by one. Once those unoccluded objects are removed, the newly exposed unoccluded objects underneath are recognized, and the same process continues until the bin is empty. To recognize unoccluded objects, a new algorithm is proposed that links edges on the slices generated by the laser mounted orthogonally on the xy table. The edges on the slices are partitioned and classified using convex and concave functions with a distance parameter. The edge types on neighboring slices are compared, and the Hamming distances among edges of the same kind are extracted as features for the fuzzy membership function. The Sugeno fuzzy integral over these features is used to determine linked edges. Finally, a pick-up sequence based on max-min theory is determined so as to cause minimal disturbance to the pile. The proposed method may provide a solution to the automation of part handling in manufacturing environments such as punch press operation or part assembly.
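
The aggregation step mentioned above relies on the Sugeno fuzzy integral. The sketch below shows the standard discrete Sugeno integral (the maximum over the minimum of sorted feature values and the fuzzy measure of the corresponding coalitions); the feature values and the simple measure used here are illustrative assumptions, not the paper's actual membership functions or measure.

```python
# Minimal sketch of the discrete Sugeno fuzzy integral:
#   S = max_i min( h(x_(i)), g({x_(1), ..., x_(i)}) )
# where the h values are taken in decreasing order. The feature values and
# the fuzzy measure below are illustrative assumptions.

def sugeno_integral(values: dict[str, float], measure) -> float:
    """values: feature scores in [0, 1]; measure: callable mapping a frozenset
    of feature names to a value in [0, 1] (a monotone fuzzy measure)."""
    items = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    coalition: set[str] = set()
    best = 0.0
    for name, score in items:
        coalition.add(name)
        best = max(best, min(score, measure(frozenset(coalition))))
    return best

# Illustrative fuzzy measure proportional to coalition size (a simple, valid
# monotone measure), not the measure used in the paper.
features = {"hd_edge_type_a": 0.8, "hd_edge_type_b": 0.5, "hd_edge_type_c": 0.3}
g = lambda s: len(s) / len(features)
print(sugeno_integral(features, g))  # 0.5
```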


신뢰성 있는 정보의 추출을 위한 퍼지집합의 유사측도 구성 (Similarity Measure Construction of the Fuzzy Set for the Reliable Data Selection)

  • 이상혁
    • 한국통신학회논문지
    • /
    • Vol. 30 No. 9C
    • /
    • pp.854-859
    • /
    • 2005
  • A new fuzzy measure is proposed for quantifying uncertainty, using the relationships among fuzzy entropy, distance measure, and similarity measure. The proposed fuzzy entropy is constructed from a distance measure, for which the commonly used Hamming distance is employed. In addition, a similarity measure for quantifying the similarity between sets is constructed from the distance measure, and the validity of the proposed fuzzy entropy and similarity measure is verified through proofs.
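
For reference, the Hamming distance between two fuzzy sets that the abstract above builds on is commonly defined in its normalized form as

$$d(A,B)=\frac{1}{n}\sum_{i=1}^{n}\left|\mu_A(x_i)-\mu_B(x_i)\right|,\qquad s(A,B)=1-d(A,B),$$

where $\mu_A$ and $\mu_B$ are the membership functions of the fuzzy sets $A$ and $B$ on the universe $\{x_1,\dots,x_n\}$. This is the standard textbook form, shown only as an illustration; the paper's exact entropy and similarity constructions are not reproduced here.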

거리 측도를 이용한 퍼지 엔트로피와 유사측도의 구성 (Construction of Fuzzy Entropy and Similarity Measure with Distance Measure)

  • 이상혁;김성신
    • 한국지능시스템학회논문지
    • /
    • Vol. 15 No. 5
    • /
    • pp.521-526
    • /
    • 2005
  • A new fuzzy measure is proposed for quantifying uncertainty, using the relationships among fuzzy entropy, distance measure, and similarity measure. The proposed fuzzy entropy is constructed from a distance measure, for which the commonly used Hamming distance is employed. In addition, a similarity measure for quantifying the similarity between sets is constructed from the distance measure, and the validity of the proposed fuzzy entropy and similarity measure is verified through proofs.

A Subthreshold PMOS Analog Cortex Decoder for the (8, 4, 4) Hamming Code

  • Perez-Chamorro, Jorge;Lahuec, Cyril;Seguin, Fabrice;Le Mestre, Gerald;Jezequel, Michel
    • ETRI Journal
    • /
    • Vol. 31 No. 5
    • /
    • pp.585-592
    • /
    • 2009
  • This paper presents a method for decoding short codes with a high minimum distance ($d_{min}$), termed Cortex codes. These codes are systematic block codes of rate 1/2 and can have a higher $d_{min}$ than turbo codes. Despite this characteristic, these codes have been impossible to decode with good performance because, to reach a high $d_{min}$, several encoding stages are connected through interleavers, which generates a large number of hidden variables and increases the complexity of scheduling and initialization. However, the structure of the encoder is well suited to analog decoding. A proof-of-concept Cortex decoder for the (8, 4, 4) Hamming code is implemented in subthreshold 0.25-${\mu}m$ CMOS. It outperforms an equivalent LDPC-like decoder by 1 dB at BER = $10^{-5}$, is 44 percent smaller, and consumes 28 percent less energy per decoded bit.
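
For context, the (8, 4, 4) code decoded above is the extended Hamming code. The sketch below encodes all 16 messages with one standard generator matrix for this code and verifies by brute force that the minimum Hamming distance is 4; it illustrates the code itself, not the analog Cortex decoder.

```python
# Minimal sketch: one standard generator matrix of the extended (8, 4, 4)
# Hamming code, with a brute-force check that d_min = 4. This illustrates
# the code, not the analog Cortex decoder described in the paper.
from itertools import product

G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def encode(msg):
    """Multiply the 4-bit message vector by G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

codewords = [encode(msg) for msg in product([0, 1], repeat=4)]
d_min = min(sum(c) for c in codewords if any(c))  # minimum weight of nonzero codewords
print(d_min)  # prints 4
```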

Security of Constant Weight Countermeasures

  • Won, Yoo-Seung;Choi, Soung-Wook;Park, Dong-Won;Han, Dong-Guk
    • ETRI Journal
    • /
    • Vol. 39 No. 3
    • /
    • pp.417-427
    • /
    • 2017
  • This paper investigates the security of constant weight countermeasures, which aim to produce indistinguishable leakage from sensitive variables and intermediate variables, assuming constant Hamming distance and/or Hamming weight leakage. To investigate the security of recent countermeasures, contrary to many related studies, we assume that the coefficients of the simulated leakage models follow a normal distribution, so that we can construct a model with approximately realistic leakage. First, using our simulated leakage model, we demonstrate security holes in these previous countermeasures. Subsequently, in contrast to the hypotheses presented in previous studies, we confirm the resistance of these countermeasures to a standard correlation power analysis (CPA). However, these countermeasures can still allow a bitwise CPA to leak a sensitive variable with only a few thousand traces.
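
One simple way to see what a constant weight countermeasure means is dual-rail style encoding, where every data bit is stored together with its complement so that the Hamming weight of the encoded word is the same regardless of the value. The sketch below is that generic illustration, under the assumption that it resembles the class of countermeasures analyzed; it is not one of the specific schemes from the paper.

```python
# Minimal sketch: dual-rail constant-weight encoding. Each bit b is encoded
# as (b, 1-b), so every encoded word has the same Hamming weight and a pure
# Hamming-weight leakage model reveals nothing about the value. This is a
# generic illustration, not the specific countermeasures analyzed in the paper.

def encode_constant_weight(bits: list[int]) -> list[int]:
    encoded = []
    for b in bits:
        encoded.extend([b, 1 - b])
    return encoded

def hamming_weight(bits: list[int]) -> int:
    return sum(bits)

for value in ([0, 0, 0, 0], [1, 0, 1, 1], [1, 1, 1, 1]):
    print(value, hamming_weight(encode_constant_weight(value)))  # weight is always 4
```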

Upper Bounds for the Performance of Turbo-Like Codes and Low Density Parity Check Codes

  • Chung, Kyu-Hyuk;Heo, Jun
    • Journal of Communications and Networks
    • /
    • Vol. 10 No. 1
    • /
    • pp.5-9
    • /
    • 2008
  • In recent years, researchers have investigated many upper bound techniques for the error probability of maximum likelihood (ML) decoding of turbo-like codes and low density parity check (LDPC) codes with long codeword block sizes; the problem is trivial for short block sizes. Previous efforts, such as the recently proposed simple bound technique [20], developed upper bounds for LDPC codes and turbo-like codes using ensemble codes or the uniform interleaver assumption, which bounds the performance averaged over all codes in the ensemble or over all interleavers. Another effort [21] obtained an upper bound for a turbo-like code with a particular interleaver using a truncated union bound, which requires the minimum Hamming distance and the number of codewords at that distance. However, it gives a reliable bound only in the error-floor region where the minimum Hamming distance dominates, i.e., at high signal-to-noise ratios. As a result, an upper bound on ML decoding performance for a turbo-like code with a particular interleaver or an LDPC code with a particular parity check matrix cannot currently be computed because of its heavy complexity, so only average bounds for ensemble codes are obtained under the uniform interleaver assumption. In this paper, we propose a new bound on ML decoding performance for a turbo-like code with a particular interleaver and an LDPC code with a particular parity check matrix, using ML-estimated weight distributions. We also show that practical iterative decoding is approximately suboptimal in the ML sense, since the simulated iterative decoding performance is worse than the proposed upper bound and, consequently, worse than the ML decoding performance. To demonstrate this, we compare the simulation results with the proposed upper bound and with previous bounds. The proposed technique is based on the simple bound with an approximate weight distribution that includes several exact smallest-distance terms, rather than on the ensemble distribution or the uniform interleaver assumption. It also yields a tighter upper bound than any previous bound technique for a turbo-like code with a particular interleaver and an LDPC code with a particular parity check matrix.
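
For reference, the truncated union bound mentioned in the abstract above has the following standard form for ML decoding of a rate-$R$ binary linear code with BPSK signaling over an AWGN channel; this is the textbook expression, not the bound proposed in the paper:

$$P_e\;\lesssim\;\sum_{d=d_{min}}^{d_{max}} A_d\,Q\!\left(\sqrt{\frac{2\,d\,R\,E_b}{N_0}}\right),$$

where $A_d$ is the number of codewords of Hamming weight $d$ and $Q(\cdot)$ is the Gaussian tail function. Truncating the sum to the first few terms near $d_{min}$ yields the error-floor approximation that is reliable only at high signal-to-noise ratios.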

연관성 기반 비유사성을 활용한 범주형 자료 군집분석 (Categorical Data Clustering Analysis Using Association-based Dissimilarity)

  • 이창기;정욱
    • 품질경영학회지
    • /
    • Vol. 47 No. 2
    • /
    • pp.271-281
    • /
    • 2019
  • Purpose: The purpose of this study is to suggest a more efficient distance measure taking into account the relationship between categorical variables for categorical data cluster analysis. Methods: In this study, the association-based dissimilarity was employed to calculate the distance between two categorical data observations and the distance obtained from the association-based dissimilarity was applied to the PAM cluster algorithms to verify its effectiveness. The strength of association between two different categorical variables can be calculated using a mixture of dissimilarities between the conditional probability distributions of other categorical variables, given these two categorical values. In particular, this method is suitable for datasets whose categorical variables are highly correlated. Results: The simulation results using several real life data showed that the proposed distance which considered relationships among the categorical variables generally yielded better clustering performance than the Hamming distance. In addition, as the number of correlated variables was increasing, the difference in the performance of the two clustering methods based on different distance measures became statistically more significant. Conclusion: This study revealed that the adoption of the relationship between categorical variables using our proposed method positively affected the results of cluster analysis.