• Title/Summary/Keyword: Text-recognition


Development of Language Study Machine Using Voice Recognition Technology (음성인식 기술을 이용한 대화식 언어 학습기 개발)

  • Yoo, Jae-Tack; Yoon, Tae-Seob
    • Proceedings of the KIEE Conference / 2005.10b / pp.201-203 / 2005
  • The best method to study a language is to talk with a native speaker, and voice recognition technology can be used to develop a language study machine. SD (speaker-dependent) and SI (speaker-independent) voice recognition methods are used for the language study machine. MP3 player, FM radio, and alarm clock functions are added to enhance the value of the product. The machine is designed with a DSP (digital signal processing) chip for voice recognition, an MP3 encoder/decoder chip, an FM tuner, and an SD flash memory card. This paper deals with the application of SD and SI voice recognition, the flash memory file system, PC download over USB, the English conversation text function based on SD flash memory, LCD display control, MP3 encoding and decoding, etc. The study contents are saved on the SD flash memory card, so the machine can serve learners from children to adults simply by changing the card.


On a Study of the Improvement of Speaker Recognition with Characteristics of High Order Reflection Coefficients (고차 반사계수 특성을 이용한 화자인식의 성능 향상에 관한 연구)

  • 이윤주; 오세영; 함명규; 배명진
    • Proceedings of the IEEK Conference / 1999.06a / pp.667-670 / 1999
  • As the number of reference patterns increases in text-dependent speaker recognition, the recognition performance of the system degrades. Conversely, if the number of reference patterns is reduced, a higher recognition rate can be obtained because the system achieves better discrimination between speakers. In this paper, to reduce the number of reference patterns, we select candidate reference patterns for pattern matching against the test pattern using the high-order components of the reflection coefficients of the uttered speech signal. Consequently, the total recognition rate of the proposed method is about 2% higher than that of the conventional method.
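
A minimal sketch of the idea, assuming a standard Levinson-Durbin recursion: reflection coefficients are computed per frame, and their high-order components are used to shortlist candidate reference patterns. The frame handling, LPC order, and distance-based selection rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def reflection_coefficients(frame: np.ndarray, order: int = 12) -> np.ndarray:
    """Reflection (PARCOR) coefficients of one speech frame via Levinson-Durbin."""
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)          # LPC coefficients, a[0] implicitly 1
    k = np.zeros(order)              # reflection coefficients
    err = r[0] + 1e-12               # prediction error (epsilon avoids division by zero)
    for i in range(1, order + 1):
        acc = r[i] - np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = acc / err
        a_new = a.copy()
        a_new[i] = k[i - 1]
        a_new[1:i] = a[1:i] - k[i - 1] * a[i - 1:0:-1]
        a = a_new
        err *= 1.0 - k[i - 1] ** 2
    return k

def select_candidates(test_k, reference_ks, n_high=4, n_candidates=5):
    """Shortlist reference patterns whose high-order reflection coefficients
    are closest to those of the test utterance (hypothetical selection rule)."""
    dists = [np.linalg.norm(test_k[-n_high:] - ref[-n_high:]) for ref in reference_ks]
    return np.argsort(dists)[:n_candidates]
```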


Methods of Classification and Character Recognition for Table Items through Deep Learning (딥러닝을 통한 문서 내 표 항목 분류 및 인식 방법)

  • Lee, Dong-Seok; Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.24 no.5 / pp.651-658 / 2021
  • In this paper, we propose methods for character recognition and classification of table items through deep learning. First, table areas are detected in a document image through a CNN. After that, the table areas are divided by separators such as vertical lines. The text in the document is recognized through a neural network that combines a CNN and an RNN. To correct errors in the character recognition, multiple candidates for the recognized result are provided for any sentence with low recognition accuracy.
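
A minimal sketch of the kind of combined CNN and RNN recognizer the abstract describes; a CRNN whose per-column class scores would be trained with CTC loss is assumed, and the layer sizes are placeholders rather than the authors' architecture.

```python
import torch.nn as nn

class CRNN(nn.Module):
    """CNN feature extractor followed by a bidirectional LSTM over the
    horizontal axis; per-column class scores can be trained with CTC loss."""
    def __init__(self, num_classes: int, img_height: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # H/2, W/2
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # H/4, W/4
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
        )
        feat_h = img_height // 4
        self.rnn = nn.LSTM(256 * feat_h, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):                                # x: (B, 1, H, W)
        f = self.cnn(x)                                  # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # sequence along image width
        out, _ = self.rnn(f)
        return self.fc(out)                              # (B, W/4, num_classes)
```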

Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz; Yunusov, Jahongir; Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.89-97 / 2021
  • A voice interface can not only make daily life more convenient through artificial intelligence speakers but also improve the working environment of a factory. This paper presents a voice-assisted work management system that supports both speech and speaker recognition, providing machine control and authorized-worker authentication by voice at the same time. We applied two speech recognition methods: Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype of the work management system that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification using our model was almost perfect, and the command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model.
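
A hedged sketch of the command-recognition path: transcribe an utterance and post-process it against a fixed command list. The community SpeechRecognition wrapper around Google's web speech service is an assumption, as are the placeholder command set and the word-overlap matching rule.

```python
import speech_recognition as sr  # community wrapper around Google's web speech service

COMMANDS = ["start machine", "stop machine", "report status"]  # placeholder command set

def recognize_command(wav_path: str):
    """Transcribe a short utterance and map it to the closest known command."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        text = recognizer.recognize_google(audio).lower()
    except (sr.UnknownValueError, sr.RequestError):
        return None
    # Simple post-processing: keep the command sharing the most words with the transcript.
    best = max(COMMANDS, key=lambda c: len(set(c.split()) & set(text.split())))
    return best if set(best.split()) & set(text.split()) else None
```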

A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.139-151 / 2020
  • Entity name recognition is a part of information extraction that extracts entity names from documents and classifies the types of the extracted names. Entity name recognition technologies are widely used in natural language processing applications such as information retrieval, machine translation, and question answering systems. Various deep learning-based models exist to improve entity name recognition performance, but comparative analyses of these models on Korean data are insufficient. In this paper, we compare and analyze the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, which are actively used to identify entity names, using Korean data. We also evaluate whether the embedding models widely used in recent natural language processing tasks affect the performance of entity name recognition models. Experiments on patent data and a Korean corpus confirmed that BiLSTM-CRF with FastText embeddings showed the highest performance.
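
A minimal BiLSTM-CRF tagger of the kind compared in the paper, assuming the pytorch-crf package; the dimensions are placeholders, and pretrained FastText vectors would be loaded into the embedding layer separately.

```python
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf (assumed available)

class BiLSTMCRF(nn.Module):
    """Embedding -> BiLSTM -> linear emission scores -> CRF tagging layer."""
    def __init__(self, vocab_size, num_tags, emb_dim=300, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # could be initialized with FastText vectors
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions = self.emit(self.lstm(self.emb(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)    # negative log-likelihood (mask: bool tensor)

    def predict(self, tokens, mask):
        emissions = self.emit(self.lstm(self.emb(tokens))[0])
        return self.crf.decode(emissions, mask=mask)    # best tag sequence per sentence
```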

Design And Implementation of a Speech Recognition Interview Model based-on Opinion Mining Algorithm (오피니언 마이닝 알고리즘 기반 음성인식 인터뷰 모델의 설계 및 구현)

  • Kim, Kyu-Ho; Kim, Hee-Min; Lee, Ki-Young; Lim, Myung-Jae; Kim, Jeong-Lae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.1 / pp.225-230 / 2012
  • Opinion mining applies existing data mining techniques to text uploaded to the web, such as blog posts and product comments, in order to extract the author's opinion: it judges not the subject of the text but the sentiment expressed toward that subject. In this paper, we apply published opinion mining algorithms to text obtained through a speech recognition API, so that emotion can be judged from spoken input converted to non-voice (text) data. The system links the Google voice recognition API with a subject-ranking (sunwihwa) algorithm of improved design to determine polarity, and on this basis a speech recognition interview model is designed and implemented.
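
A deliberately simple lexicon-based polarity scorer, included only to illustrate the polarity-judgment step on recognized text; the word lists and the scoring rule are placeholders, not the algorithm proposed in the paper.

```python
# Placeholder sentiment lexicons for illustration only.
POSITIVE = {"good", "great", "excellent", "satisfied"}
NEGATIVE = {"bad", "poor", "terrible", "disappointed"}

def polarity(transcript: str) -> str:
    """Judge the polarity of a recognized utterance by counting lexicon hits."""
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```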

Step-by-step Approach for Effective Korean Unknown Word Recognition (한국어 미등록어 인식을 위한 단계별 접근방법)

  • Park, So-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.369-372 / 2009
  • Recently, newspapers as well as web documents include many newly coined words such as "mid" (meaning "American drama", since "mi" means "America" in Korean and "d" refers to the "d" of drama) and "anseup" (meaning "pathetic", since "an" and "seup" literally mean eyeballs and moist, respectively). However, these words degrade the performance of Korean language analysis systems. In order to recognize such unknown words automatically, this paper proposes a step-by-step approach consisting of an unknown noun recognition phase based on full-text analysis, an unknown verb recognition phase based on web document frequency, and an unknown noun recognition phase based on web document frequency. The full-text analysis phase accurately recognizes unknown words that occur repeatedly in a document, while the two phases based on web document frequency broadly recognize unknown words that occur only once in the document. In addition, the proposed model separates unknown noun recognition from unknown verb recognition so that various kinds of unknown words can be recognized. Experimental results show that the proposed approach improves precision by 1.01% and recall by 8.50% compared with a previous approach.


Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee; Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is misrecognition errors, where the agent fails to recognize the user's speech at all. The second type is misinterpretation errors, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation errors. For each of these text separation methods, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation and explores a new method for detecting misinterpretation errors. The research method involved using real user utterance records to train and develop a detection model based on patterns of misinterpretation error causes. The results revealed that initial consonant extraction produced the most significant results for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods showed that different error types could be observed. This study has two main implications. First, for misinterpretation errors that are hard to detect because they are logged as successful interactions, the study proposed diverse text separation methods and found a novel method that improved detection performance remarkably. Second, when the approach is applied to conversational agents or voice recognition services that require neologism detection, the patterns of errors arising at the voice recognition stage can be specified, so that services can still deliver the results users intended even when utterances are not categorized as errors.
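
A small sketch of Hangul initial-consonant (choseong) extraction, the text separation method the paper found most effective, using standard Unicode syllable arithmetic; the Jaccard similarity at the end is a simple stand-in for the embedding-based similarity used in the study.

```python
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")  # the 19 Hangul initial consonants

def initial_consonants(word: str) -> str:
    """Extract the initial consonant (choseong) of every precomposed Hangul
    syllable; non-Hangul characters are kept unchanged."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:                  # precomposed Hangul syllable block
            out.append(CHOSEONG[code // 588])   # 588 = 21 vowels * 28 finals
        else:
            out.append(ch)
    return "".join(out)

def choseong_similarity(a: str, b: str) -> float:
    """Jaccard similarity over initial-consonant sets (illustrative stand-in
    for the embedding-based similarity in the paper)."""
    sa, sb = set(initial_consonants(a)), set(initial_consonants(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
```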

Retrieving Information from Korean OCR Text Database (문자 인식에 의해 구축된 한글 문서 데이터베이스에 대한 정보 검색)

  • Lee, Jun-Ho; Lee, Chung-Sik; Han, Seon-Hwa; Kim, Jin-Hyeong
    • The Transactions of the Korea Information Processing Society / v.6 no.4 / pp.833-841 / 1999
  • Texts constructed with Optical Character Recognition (OCR) contain more errors than texts produced by keyboard typing. Therefore, in order to retrieve useful information from OCR texts, an effective automatic indexing method is needed. In this paper, we investigate automatic indexing methods that can retrieve information effectively from a Korean OCR text database with a character-level recognition rate of 90%. Experimental results show that 2-gram indexing provides retrieval effectiveness similar to that of morpheme-based indexing for the Korean OCR text database.
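
A minimal sketch of character 2-gram indexing with a toy in-memory inverted index; the point is that a single misrecognized character corrupts at most two bigram index terms, which is why bigram indexing tolerates OCR errors reasonably well.

```python
from collections import defaultdict

def char_bigrams(text: str):
    """Character 2-grams of a string with whitespace removed."""
    s = "".join(text.split())
    return [s[i:i + 2] for i in range(len(s) - 1)]

def build_inverted_index(docs: dict):
    """Map each bigram to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for gram in char_bigrams(text):
            index[gram].add(doc_id)
    return index
```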


Comparative study of text representation and learning for Persian named entity recognition

  • Pour, Mohammad Mahdi Abdollah; Momtazi, Saeedeh
    • ETRI Journal / v.44 no.5 / pp.794-804 / 2022
  • Transformer models have had a great impact on natural language processing (NLP) in recent years by realizing outstanding and efficient contextualized language models. Recent studies have used transformer-based language models for various NLP tasks, including Persian named entity recognition (NER). However, in complex tasks, for example, NER, it is difficult to determine which contextualized embedding will produce the best representation for the tasks. Considering the lack of comparative studies to investigate the use of different contextualized pretrained models with sequence modeling classifiers, we conducted a comparative study about using different classifiers and embedding models. In this paper, we use different transformer-based language models tuned with different classifiers, and we evaluate these models on the Persian NER task. We perform a comparative analysis to assess the impact of text representation and text classification methods on Persian NER performance. We train and evaluate the models on three different Persian NER datasets, that is, MoNa, Peyma, and Arman. Experimental results demonstrate that XLM-R with a linear layer and conditional random field (CRF) layer exhibited the best performance. This model achieved phrase-based F-measures of 70.04, 86.37, and 79.25 and word-based F scores of 78, 84.02, and 89.73 on the MoNa, Peyma, and Arman datasets, respectively. These results represent state-of-the-art performance on the Persian NER task.
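
A sketch of the best-performing configuration the paper reports (XLM-R encoder, linear emission layer, CRF), assuming the Hugging Face transformers and pytorch-crf packages; subword-to-word alignment and training details are omitted.

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf (assumed available)

class XLMRForNER(nn.Module):
    """XLM-R encoder with a linear emission layer and a CRF tagging layer."""
    def __init__(self, num_tags: int, model_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emit = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, input_ids, attention_mask, tags):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return -self.crf(self.emit(hidden), tags, mask=attention_mask.bool())

    def predict(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.crf.decode(self.emit(hidden), mask=attention_mask.bool())
```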