• Title/Summary/Keyword: Levenshtein Distance

Search results: 23

Vocabulary Recognition Retrieval Optimized System using MLHF Model (MLHF 모델을 적용한 어휘 인식 탐색 최적화 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.217-223 / 2009
  • Vocabulary recognition systems on mobile terminals use statistical methods for vocabulary recognition, typically an N-gram-based statistical grammar recognition system. Because a terminal's memory and arithmetic capacity are limited, a growing vocabulary makes the recognition algorithm more complex, demands a large search space and long processing times, and eventually becomes impossible to process. This study proposes optimizing vocabulary recognition with an MLHF system. MLHF separates the acoustic search and the lexical search using FLaVoR: the acoustic search extracts feature vectors from the speech signal using HMMs, and the lexical search performs recognition using the Levenshtein distance algorithm. The system achieved a vocabulary-dependent recognition rate of 98.63%, a vocabulary-independent recognition rate of 97.91%, and a recognition speed of 1.61 seconds.
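
The abstract gives no implementation details for the lexical-search step; as a rough, hypothetical sketch of how a Levenshtein-distance lexical search can work, the following Python snippet computes the classic dynamic-programming distance and picks the closest vocabulary entry (the vocabulary list, the decoded string, and the function names are illustrative, not taken from the paper).

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two symbol sequences."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                            # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                            # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[m][n]

# Lexical-search sketch: pick the vocabulary entry closest to the decoded string.
vocabulary = ["computer", "commuter", "competitor"]  # illustrative entries
decoded = "comuter"                                  # hypothetical acoustic-search output
best = min(vocabulary, key=lambda w: levenshtein(decoded, w))
print(best)  # -> "computer" (edit distance 1 from the decoded string)
```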

Word Similarity Calculation by Using the Edit Distance Metrics with Consonant Normalization

  • Kang, Seung-Shik
    • Journal of Information Processing Systems / v.11 no.4 / pp.573-582 / 2015
  • Edit distance metrics are widely used in many applications such as string comparison and spelling error correction. Hamming distance is a metric for two strings of equal length, and Damerau-Levenshtein distance is a well-known metric for making spelling corrections through string-to-string comparison. These distance metrics seem appropriate for alphabetic languages like English and other European languages. However, the conventional edit distance criterion is not the best method for agglutinative languages like Korean, because two or more letter units (jamo) combine into one Korean character, called a syllable. This syllable-based word construction in Korean makes a plain edit distance calculation inefficient. We therefore explore a new edit distance method using consonant normalization and a normalization factor.
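
The paper's exact normalization rules and weighting factor are not given in the abstract. The sketch below assumes one plausible reading: precomposed Hangul syllables are decomposed into jamo, confusable (aspirated/tense) initial consonants are collapsed onto their plain counterparts via an illustrative mapping table, and an ordinary edit distance is then computed over the normalized jamo sequences.

```python
import unicodedata

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance (two-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution or match
        prev = curr
    return prev[-1]

# Illustrative normalization: collapse aspirated/tense initial consonants onto
# their plain counterparts. The paper's actual rules are not reproduced here.
CONSONANT_NORM = str.maketrans({
    "\u110F": "\u1100",  # ㅋ -> ㄱ
    "\u1110": "\u1103",  # ㅌ -> ㄷ
    "\u1111": "\u1107",  # ㅍ -> ㅂ
    "\u110E": "\u110C",  # ㅊ -> ㅈ
    "\u1101": "\u1100",  # ㄲ -> ㄱ
    "\u1104": "\u1103",  # ㄸ -> ㄷ
    "\u1108": "\u1107",  # ㅃ -> ㅂ
    "\u110A": "\u1109",  # ㅆ -> ㅅ
    "\u110D": "\u110C",  # ㅉ -> ㅈ
})

def normalized_word_distance(w1: str, w2: str) -> int:
    """Edit distance over jamo sequences after consonant normalization."""
    a = unicodedata.normalize("NFD", w1).translate(CONSONANT_NORM)
    b = unicodedata.normalize("NFD", w2).translate(CONSONANT_NORM)
    return edit_distance(a, b)

print(normalized_word_distance("사탕", "사당"))  # 0: ㅌ and ㄷ collapse under this mapping
```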

A Study on Processing of Speech Recognition Korean Words (한글 단어의 음성 인식 처리에 관한 연구)

  • Nam, Kihun
    • The Journal of the Convergence on Culture Technology / v.5 no.4 / pp.407-412 / 2019
  • In this paper, we propose a technique for speech recognition of Korean words. Speech recognition is a technology that converts acoustic signals from sensors such as microphones into words or sentences. Most foreign languages present comparatively little difficulty for speech recognition; Korean, however, composes syllables from vowels and consonants (including final consonants), so it is inappropriate to use the letters obtained from a speech synthesis system directly. Improving the conventional recognition structure therefore enables correct word recognition. To solve this problem, a new algorithm is added to the existing speech recognition structure to increase the recognition rate. A word first goes through a preprocessing step and the result is tokenized; the tokens are processed by a Levenshtein distance algorithm and a hashing algorithm, and the combined result is passed through a consonant comparison algorithm to output a normalized word. The final word is compared against a standard table: it is output if it exists in the table and registered in the table if it does not. The experimental environment was a smartphone application. The proposed structure improves the recognition rate by 2% for standard language and by 7% for dialect.
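
The abstract describes this pipeline only at a high level (preprocess, tokenize, Levenshtein plus hashing, consonant comparison, standard-table lookup). Assuming that reading, a rough Python outline of the table-lookup-with-fuzzy-fallback portion might look like the following; the table contents, distance threshold, and helper names are placeholders, and the consonant-comparison step is omitted.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance (two-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

# Placeholder "standard table" of normalized words; a hash set gives O(1) exact lookup.
standard_table = {"날씨", "알람", "음악"}

def normalize_word(token: str, max_distance: int = 1) -> str:
    """Map a recognized token onto a standard-table entry.

    Exact hash lookup first; otherwise fall back to the closest entry by
    Levenshtein distance, and register the token if nothing is close enough.
    """
    if token in standard_table:                        # hashing step: exact match
        return token
    closest = min(standard_table, key=lambda w: edit_distance(token, w))
    if edit_distance(token, closest) <= max_distance:  # fuzzy-match step
        return closest
    standard_table.add(token)                          # unseen word: register it
    return token

recognized_tokens = ["날시", "알람"]                     # hypothetical ASR output
print([normalize_word(t) for t in recognized_tokens])  # ['날씨', '알람']
```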

Secure Blocking + Secure Matching = Secure Record Linkage

  • Karakasidis, Alexandros;Verykios, Vassilios S.
    • Journal of Computing Science and Engineering / v.5 no.3 / pp.223-235 / 2011
  • Performing approximate data matching has always been an intriguing problem for both industry and academia. The task becomes even more challenging when the requirement of data privacy arises. In this paper, we propose a novel technique to address the problem of efficient privacy-preserving approximate record linkage. The secure framework we propose consists of two basic components. First, we utilize a secure blocking component based on phonetic algorithms, statistically enhanced to improve security. Second, we use a secure matching component in which the actual approximate matching is performed using a novel private version of the Levenshtein distance algorithm. Our goal is to combine the speed of private blocking with the increased accuracy of approximate secure matching.
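
The paper's contribution is making both components privacy-preserving, which the sketch below does not attempt to reproduce; it only illustrates the plaintext analogue of the two-stage pipeline, blocking followed by approximate matching with the Levenshtein distance. The blocking key, records, and threshold are illustrative stand-ins, not the paper's secure constructions.

```python
from collections import defaultdict

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance (two-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def blocking_key(name: str) -> str:
    """Crude phonetic-style key (first letter + consonant count).

    Stands in for the statistically enhanced phonetic blocking of the paper.
    """
    consonants = [c for c in name.lower() if c.isalpha() and c not in "aeiouy"]
    return f"{name[0].upper()}{len(consonants)}"

def link(records_a, records_b, threshold=2):
    """Block both datasets, then approximately match only within shared blocks."""
    blocks = defaultdict(lambda: ([], []))
    for r in records_a:
        blocks[blocking_key(r)][0].append(r)
    for r in records_b:
        blocks[blocking_key(r)][1].append(r)
    matches = []
    for side_a, side_b in blocks.values():
        for x in side_a:
            for y in side_b:
                if edit_distance(x.lower(), y.lower()) <= threshold:
                    matches.append((x, y))
    return matches

print(link(["Karakasidis", "Smith"], ["Karakasides", "Smyth"]))
# [('Karakasidis', 'Karakasides'), ('Smith', 'Smyth')]
```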

The Design of a Code-String Matching Processor using an EWLD Algorithm (EWLD 알고리듬을 이용한 코드열 정합 프로세서의 설계)

  • 조원경;홍성민;국일호
    • Journal of the Korean Institute of Telematics and Electronics A / v.31A no.4 / pp.127-135 / 1994
  • In this paper we propose an EWLD (Enhanced Weighted Levenshtein Distance) algorithm for organizing a code-string pattern-matching linear-array processor, based on mapping the two-dimensional matching matrix onto a one-dimensional array, and we design a processing element (PE) of the processor. Because of this mapping, only N PEs are required instead of N². Data input and output between PEs and all internal operations of each PE are performed in a bit-serial fashion. The bit-serial operation consists of computing the word distance (WD) by comparison and selecting the optimal code transformation path, and takes 22 clocks per cycle. The PE layout is designed with a double-metal 1.5 µm CMOS rule. About 1,800 transistors constitute a processing element, and 2 PEs are integrated on a 3 mm × 3 mm chip.
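
The bit-serial PE design itself cannot be conveyed in a short snippet, but the core idea of collapsing the two-dimensional matching matrix onto a one-dimensional array has a direct software analogue: keeping only a single row of the DP matrix, as in the following Python sketch (plain unit costs, not the EWLD weighting).

```python
def edit_distance_1d(a: str, b: str) -> int:
    """Levenshtein distance using a single one-dimensional array.

    Instead of the full (len(a)+1) x (len(b)+1) matrix, only one row is kept
    and updated in place -- the same idea as mapping the 2-D matching matrix
    onto a linear array of processing elements.
    """
    row = list(range(len(b) + 1))           # row 0: distance from "" to b[:j]
    for i, ca in enumerate(a, 1):
        diag = row[0]                       # old row[j-1] for the first column
        row[0] = i                          # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            prev_diag, diag = diag, row[j]  # remember the old row[j] for the next step
            row[j] = min(row[j] + 1,              # deletion
                         row[j - 1] + 1,          # insertion
                         prev_diag + (ca != cb))  # substitution or match
    return row[-1]

print(edit_distance_1d("kitten", "sitting"))  # 3
```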

Open API-based Conversational Voice Interaction Scheme for Intelligent IoT Applications for the Digital Underprivileged (디지털 소외계층을 위한 지능형 IoT 애플리케이션의 공개 API 기반 대화형 음성 상호작용 기법)

  • Joonhyouk, Jang
    • Smart Media Journal / v.11 no.10 / pp.22-29 / 2022
  • Voice interactions are particularly effective in applications targeting the digital underprivileged, who are not proficient in the use of smart devices. However, applications based on open APIs use voice signals only for short, fragmentary input and output because of the limitations of the existing touchscreen-oriented UIs and the APIs provided. In this paper, we design a conversational voice interaction model for interactions between users and intelligent mobile/IoT applications and propose a keyword detection algorithm based on the edit distance. The proposed model and scheme were implemented in an Android environment, and the edit distance-based keyword detection algorithm showed a higher recognition rate than the existing algorithm for keywords that were misrecognized by the speech recognizer.
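
The keyword-detection algorithm is not published in the abstract; a minimal sketch of an edit-distance-based keyword detector, assuming a fixed keyword list and a normalized-distance threshold (both hypothetical), could look like this.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance (two-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

KEYWORDS = ["전등 켜", "전등 꺼", "온도 올려"]   # hypothetical voice-command keywords

def detect_keyword(utterance: str, max_ratio: float = 0.34):
    """Return the keyword closest to the recognized utterance, or None.

    The edit distance is normalized by the longer string so that short and
    long keywords are penalized comparably.
    """
    best, best_ratio = None, 1.0
    for kw in KEYWORDS:
        ratio = edit_distance(utterance, kw) / max(len(utterance), len(kw))
        if ratio < best_ratio:
            best, best_ratio = kw, ratio
    return best if best_ratio <= max_ratio else None

print(detect_keyword("전동 켜"))    # '전등 켜' despite the misrecognized syllable
print(detect_keyword("안녕하세요"))  # None: no keyword is close enough
```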

Purchase Transaction Similarity Measure Considering Product Taxonomy (상품 분류 체계를 고려한 구매이력 유사도 측정 기법)

  • Yang, Yu-Jeong;Lee, Ki Yong
    • KIPS Transactions on Software and Data Engineering / v.8 no.9 / pp.363-372 / 2019
  • A sequence is data in which an order exists among its items; purchase transaction data, which lists the products purchased by one customer, is a representative kind of sequence data. In general, all goods have a product taxonomy, such as category/sub-category/sub-sub-category, and similar products are classified into the same category according to their characteristics. In this paper, we therefore not only consider the purchase order of products when comparing two purchase transaction sequences, but also calculate their similarity by giving a higher score to two different products that fall into the same category. In particular, to choose the similarity measure that best suits purchase transaction sequences, we compare the performance of three representative measures, the Levenshtein distance, the dynamic time warping (DTW) distance, and the Needleman-Wunsch similarity, each extended to take the product taxonomy into account. Conventional similarity measures compare goods in two sequences by simply assigning 0 or 1 according to whether the products match. The proposed method instead assigns a value between 0 and 1 using the product taxonomy tree, giving a graded degree of relevance between two products even when they are different. Experiments confirm that the proposed method measures similarity more accurately than the previous method, and that the dynamic time warping distance is the most suitable measure because it considers the degree of association of products within the sequence and performs well for sequences of different lengths.
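
The paper's exact relevance scores are not given in the abstract; the sketch below assumes one plausible variant in which the relevance of two products is the shared prefix of their category paths, turned into a local cost inside a dynamic-time-warping alignment (the measure the authors found most suitable). The product paths and values are illustrative.

```python
def taxonomy_relevance(path_a, path_b):
    """Relevance in [0, 1]: shared-prefix length of the two category paths,
    divided by the longer path length (1.0 means the same leaf category)."""
    shared = 0
    for x, y in zip(path_a, path_b):
        if x != y:
            break
        shared += 1
    return shared / max(len(path_a), len(path_b))

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping over two purchase sequences of category paths,
    with local cost 1 - relevance (0 for identical categories)."""
    INF = float("inf")
    n, m = len(seq_a), len(seq_b)
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 1.0 - taxonomy_relevance(seq_a[i - 1], seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

# Illustrative purchase histories as category/sub-category/sub-sub-category paths.
customer1 = [("food", "dairy", "milk"), ("food", "dairy", "yogurt"), ("household", "paper", "tissue")]
customer2 = [("food", "dairy", "cheese"), ("household", "paper", "tissue")]
print(round(dtw_distance(customer1, customer2), 3))  # 0.667
```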

A Study on the On-Line Handwritten Hangeul Pattern Recognition Using WLD with Parallelism (병렬성을 갖는 WLD 알고리즘을 이용한 온라인 필기체 한글, 영문자 및 숫자 패턴인식)

  • 김은원;조원경
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.10 / pp.747-754 / 1991
  • In this paper, we study the on-line recognition of handwritten characters using a WLD (weighted Levenshtein distance) algorithm with parallelism. Hangeul can be separated into phoneme units, and alphanumeric text into character units. We also study the parallelism and concurrency of the WLD algorithm for the realization of a special-purpose processor. In simulations over 10,000 characters from practical sentences, the stroke recognition rate is 96.57% and the separation rate for phonemes and characters is 95.4%.

On the Effect of a Pilot Coding Education Support System for Complex Problem Solving Tasks

  • Jeon, Inseong;Song, Ki-Sang
    • International Journal of Advanced Smart Convergence / v.7 no.4 / pp.128-137 / 2018
  • In programming education there is a great need for a teaching support system that can assist learners during the programming process, regardless of the programming language, because it is difficult for an instructor to check every learner's progress in real time. This is especially important in lower-grade coding classes such as K-12 education, where learners are not used to coding and even simple problems can be regarded as complex problems. To this end, a pilot coding education support system based on the Levenshtein distance algorithm, which shows a learner's progress toward a given solution in real time, was developed to help learners solve complex problems more easily, and learners' motivation and self-efficacy were measured to estimate the usefulness of the system, targeting elementary school students. When learners used the developed system, a statistically significant difference appeared in the sub-factors of learning motivation compared with a traditional classroom environment. Among the sub-factors of self-efficacy, the efficacy dimension also showed a statistically significant difference.
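
The paper reports learning outcomes rather than implementation details; as a hedged illustration of the one technical element the abstract names, the snippet below converts the Levenshtein distance between a learner's current code and a reference solution into a progress percentage (the normalization is an assumption, not the paper's formula).

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance (two-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def progress(learner_code: str, solution_code: str) -> float:
    """Progress toward the reference solution as a percentage (100 = identical)."""
    if not learner_code and not solution_code:
        return 100.0
    d = edit_distance(learner_code, solution_code)
    return 100.0 * (1 - d / max(len(learner_code), len(solution_code)))

print(round(progress("print('helo')", "print('hello')"), 1))  # 92.9
```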

Optimizing Multiple Pronunciation Dictionary Based on a Confusability Measure for Non-native Speech Recognition (타언어권 화자 음성 인식을 위한 혼잡도에 기반한 다중발음사전의 최적화 기법)

  • Kim, Min-A;Oh, Yoo-Rhee;Kim, Hong-Kook;Lee, Yeon-Woo;Cho, Sung-Eui;Lee, Seong-Ro
    • MALSORI / no.65 / pp.93-103 / 2008
  • In this paper, we propose a method for optimizing a multiple pronunciation dictionary used for modeling pronunciation variations of non-native speech. The proposed method removes some confusable pronunciation variants in the dictionary, resulting in a reduced dictionary size and less decoding time for automatic speech recognition (ASR). To this end, a confusability measure is first defined based on the Levenshtein distance between two different pronunciation variants. Then, the number of phonemes for each pronunciation variant is incorporated into the confusability measure to compensate for ASR errors due to words of a shorter length. We investigate the effect of the proposed method on ASR performance, where Korean is selected as the target language and Korean utterances spoken by Chinese native speakers are considered as non-native speech. It is shown from the experiments that an ASR system using the multiple pronunciation dictionary optimized by the proposed method can provide a relative average word error rate reduction of 6.25%, with 11.67% less ASR decoding time, as compared with that using a multiple pronunciation dictionary without the optimization.
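
The precise confusability measure is defined in the paper itself; the sketch below assumes only what the abstract states: a Levenshtein distance between pronunciation variants computed over phoneme sequences, compensated by the number of phonemes, with highly confusable variants pruned from the dictionary. The phoneme strings, threshold, and removal policy are illustrative.

```python
from itertools import combinations

def edit_distance(a, b):
    """Levenshtein distance over two phoneme sequences (lists of symbols)."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        curr = [i]
        for j, pb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (pa != pb)))
        prev = curr
    return prev[-1]

def confusability(v1, v2):
    """Smaller distance and shorter variants -> more confusable.
    Dividing by the mean phoneme count is an assumed length compensation."""
    d = edit_distance(v1, v2)
    return 1.0 - d / ((len(v1) + len(v2)) / 2.0)

def prune_variants(variants, max_confusability=0.7):
    """Drop one member of any variant pair that is too confusable."""
    kept = list(variants)
    for v1, v2 in combinations(variants, 2):
        if v1 in kept and v2 in kept and confusability(v1, v2) > max_confusability:
            kept.remove(min(v1, v2, key=len))  # removal policy is illustrative
    return kept

# Hypothetical pronunciation variants of one word as phoneme sequences.
variants = [["h", "a", "n", "g", "u", "k"],
            ["h", "a", "n", "g", "u", "g"],
            ["h", "a", "n", "k", "u", "k"]]
print(prune_variants(variants))  # keeps the two variants that are not overly confusable
```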
