• Title/Abstract/Keyword: text recognition

Search results: 670 items (processing time: 0.025 s)

한국어 고립 단어 음성의 자음/모음/유성자음 음가 분할 및 인식에 관한 연구 (A Study on Consonant/Vowel/Unvoiced Consonant Phonetic Value Segmentation and Recognition of Korean Isolated Word Speech)

  • 이준환;이상범
    • 한국정보처리학회논문지 / Vol. 7, No. 6 / pp.1964-1972 / 2000
  • In Korean, the acoustic realization of a word takes the form of phonetic values rather than phonemes, owing to the language's own sound-change properties. A rule-based system modeling Korean pronunciation rules is therefore a prerequisite for an extended Korean recognition system, where it can serve as a post-processing stage. In this paper, a text-based Korean rule-based system featuring Korean's peculiar sound-change rules is constructed. Based on the text-level phonetic-value output of this system, preliminary phonetic-value segmentation boundary points with non-uniform blocks are extracted from Korean isolated-word speech. By merging and recognizing the non-uniform blocks between the extracted boundary points, the possibility of recognizing Korean speech in the form of phonetic values is investigated.


대안적 통째학습 기반 저품질 레거시 콘텐츠에서의 문자 인식 알고리즘 (Character Recognition Algorithm in Low-Quality Legacy Contents Based on Alternative End-to-End Learning)

  • 이성진;윤준석;박선후;유석봉
    • 한국정보통신학회논문지 / Vol. 25, No. 11 / pp.1486-1494 / 2021
  • Character recognition is a technology required by a variety of recent platforms, such as smart parking and text-to-speech, and studies are under way that try to improve its performance through new approaches rather than conventional methods. However, when the images used for character recognition are of low quality, a resolution mismatch arises between the recognizer's training images and the test images, degrading accuracy. To address this, this paper designs an end-to-end neural network that combines image super-resolution and character recognition, so that the recognition model is robust to data of varying quality, and implements an alternative end-to-end learning algorithm to train the combined network. Alternative end-to-end training and recognition performance tests were conducted on license-plate images among various character images, verifying the effectiveness of the proposed algorithm.
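
The alternating update idea described in the abstract can be sketched in the abstract: two stages are trained on one joint loss, but only one stage is updated at a time. The toy below uses linear stand-ins for the "super-resolution" and "recognition" stages; the modules, data, and hyperparameters are all illustrative assumptions, not the paper's network.

```python
# Toy sketch of alternating end-to-end training: stage W_sr ("super-resolution")
# and stage W_rec ("recognition") are updated in alternation on a joint loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))           # stand-in for low-quality inputs
T = X @ rng.normal(size=(8, 4))        # targets the joint pipeline should produce

W_sr = rng.normal(size=(8, 8)) * 0.1   # "super-resolution" stage
W_rec = rng.normal(size=(8, 4)) * 0.1  # "recognition" stage

def loss():
    E = X @ W_sr @ W_rec - T
    return float(np.mean(E ** 2))

lr = 0.01
loss_before = loss()
for step in range(200):
    E = X @ W_sr @ W_rec - T
    if step % 2 == 0:                  # even steps: update only the SR stage
        W_sr -= lr * X.T @ E @ W_rec.T / len(X)
    else:                              # odd steps: update only the recognizer
        W_rec -= lr * (X @ W_sr).T @ E / len(X)
loss_after = loss()
```

Each stage sees gradients that already flow through the other, which is what distinguishes this from training the two stages independently.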

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering / Vol. 5, No. 3 / pp.161-166 / 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computation time and memory usage. The algorithm can be used as a preprocessing step for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard, captured by the smartphone camera while the phone is held at an arbitrary angle, is rotated by the detected angle, as if the image had been taken with the phone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, resulting in a large reduction in computation time. Text location is guided by a user-drawn marker line placed over the region of interest in the binarized image via the touch screen. Text segmentation then uses the connected-component data obtained in the binarization step and cuts the string into individual images for the designated characters. The resulting data can be used as OCR input, solving the most difficult part of OCR for text in natural-scene images. The experimental results show that the binarization step of our method is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively. In addition, our method achieved better quality than the other methods.
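
Sauvola adaptive thresholding, one of the two baselines the authors compare against, computes a per-pixel threshold T = m·(1 + k·(s/R − 1)) from the local mean m and standard deviation s. A naive sketch follows; the window size, k, and R are conventional defaults, not the paper's settings, and a real implementation would use integral images rather than per-pixel patches.

```python
# Naive Sauvola adaptive thresholding on a small synthetic grayscale image.
import numpy as np

def sauvola_binarize(img, window=5, k=0.2, R=128.0):
    """Per-pixel threshold T = m * (1 + k * (s / R - 1)) over a local window."""
    h, w = img.shape
    r = window // 2
    out = np.zeros_like(img, dtype=bool)
    for y in range(h):
        for x in range(w):
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            m, s = patch.mean(), patch.std()
            out[y, x] = img[y, x] > m * (1 + k * (s / R - 1))
    return out

# A dark "text stroke" on a bright background: stroke pixels fall below the
# local threshold (False = ink), background pixels stay above it (True).
img = np.full((16, 16), 200.0)
img[6:10, 2:14] = 30.0
mask = sauvola_binarize(img)
```

The local statistics make the threshold track illumination changes, which is why Sauvola-style methods are standard baselines for signboard and document binarization.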

음각 정보를 이용한 딥러닝 기반의 알약 식별 알고리즘 연구 (Pill Identification Algorithm Based on Deep Learning Using Imprinted Text Feature)

  • 이선민;김영재;김광기
    • 대한의용생체공학회:의공학회지 / Vol. 43, No. 6 / pp.441-447 / 2022
  • In this paper, we propose a pill identification model that uses an imprinted-text feature together with image features such as shape and color, and compare it with an identification model that does not use the imprinted-text feature, to verify whether improving recognition of the imprint can improve identification performance. The data consisted of 100 classes, with 10 images per class. The imprinted-text feature was acquired through deep-learning-based Keras OCR and a 1D CNN, and the image feature through a 2D CNN. In the identification results, the accuracy of the text recognition model was 90%. The accuracy of the comparative model and the proposed model was 91.9% and 97.6%, respectively, and the accuracy, precision, recall, and F1-score of the proposed model were better than those of the comparative model with statistical significance. As a result, we confirmed that widening the range of features improved the performance of the identification model.
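
The core idea of widening the feature range can be illustrated with a toy fusion step: an image-feature vector is concatenated with an imprint-text feature vector before classification. The vectors and the nearest-centroid classifier below are stand-ins for the paper's CNN/Keras-OCR pipeline, chosen only to show why the imprint feature can separate look-alike pills.

```python
# Toy feature fusion: image features + imprint-text features, then
# nearest-centroid classification over the fused vector.
import numpy as np

def identify(img_feat, text_feat, centroids):
    """Return the class whose centroid is nearest to the fused feature."""
    fused = np.concatenate([img_feat, text_feat])
    dists = np.linalg.norm(centroids - fused, axis=1)
    return int(np.argmin(dists))

# Two pill classes with near-identical shape/color features but distinct imprints.
centroids = np.array([
    [0.9, 0.1, 1.0, 0.0],   # class 0: [image feats | imprint feats]
    [0.9, 0.1, 0.0, 1.0],   # class 1: same look, different imprint
])
cls = identify(np.array([0.88, 0.12]), np.array([0.05, 0.95]), centroids)
```

With image features alone the two centroids are indistinguishable; the imprint half of the fused vector is what decides the class, mirroring the accuracy gap the paper reports.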

컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상 (Improved Text Recognition using Analysis of Illumination Component in Color Images)

  • 치미영;김계영;최형일
    • 한국컴퓨터정보학회논문지 / Vol. 12, No. 3 / pp.131-136 / 2007
  • This paper proposes a new approach for efficiently extracting the characters present in color images. Reflection components in an image acquired under light or illumination can introduce errors into character extraction and recognition when the boundaries of characters or objects of interest become blurred, or when the objects of interest blend into the background. To remove the reflection components in the image, two peak points are first detected in the histogram of the red color channel of the color image. The distribution between the two detected peaks is then used to determine whether the image is a normal image or a polarized image. For a normal image, the character regions are detected without additional processing; for a polarized image, homomorphic filtering is applied to remove the regions corresponding to the reflection component. Afterwards, an optimal global thresholding method is applied to separate foreground from background for character-region detection, improving the performance of character-region extraction and recognition. Using a widely used character recognizer, recognition results before and after applying the proposed method were compared; for polarized images, the recognition rate improved after the proposed method was applied.
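
Homomorphic filtering, which the paper applies to polarized images, rests on the model image = illumination × reflectance: taking the log turns the product into a sum, the low-frequency (illumination) part is attenuated, and exponentiating recovers an image with the text detail kept. A rough 1-D sketch is below; the box blur and the attenuation factor alpha are illustrative choices, not the paper's filter.

```python
# Rough 1-D homomorphic filtering: log -> attenuate low frequencies -> exp.
import numpy as np

def homomorphic(img, alpha=0.3):
    """img: positive intensity profile; alpha < 1 damps the illumination term."""
    log_img = np.log(img)
    k = 5
    pad = np.pad(log_img, k // 2, mode="edge")
    low = np.convolve(pad, np.ones(k) / k, mode="valid")   # illumination estimate
    high = log_img - low                                   # reflectance detail
    return np.exp(alpha * low + high)

# A "text" pattern multiplied by a strong illumination ramp across the image.
x = np.linspace(1.0, 3.0, 32)                         # illumination gradient
signal = np.where(np.arange(32) % 8 < 4, 1.0, 0.5)    # alternating text pattern
img = x * signal
flat = homomorphic(img)
```

After filtering, the overall dynamic range shrinks while the local text contrast survives, which is what makes a subsequent global threshold viable.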


문자영역의 분리와 기하학적 도면요소의 인식에 의한 도면 자동입력 (Automatic Drawing Input by Segmentation of Text Region and Recognition of Geometric Drawing Element)

  • 배창석;민병우
    • 전자공학회논문지B / Vol. 31B, No. 6 / pp.91-103 / 1994
  • As CAD systems are introduced into the field of engineering design, the need for automatic drawing input increases. In this paper, we propose a method for realizing automatic drawing input by separating text regions from graphic regions, extracting line vectors from the graphic regions, and recognizing circular arcs and circles from the line vectors. The sizes of isolated regions on a drawing are used to separate text regions from graphic regions. Thinning and a maximum-allowable-error method are used to extract line vectors, and the geometric structure of the line vectors is analyzed to recognize circular arcs and circles. By processing text regions and graphic regions separately, 30-40% of the vector information can be reduced, and the recognition of circular arcs and circles increases the utility of the automatic drawing input function.
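
The first step, separating text from graphics by the size of isolated regions, can be sketched with a connected-component pass: small components are taken as text, large ones as graphics. The flood fill and the size threshold below are illustrative choices, not the paper's implementation.

```python
# Split a binary drawing into text-sized and graphic-sized connected components.
def split_text_graphics(grid, max_text_size=4):
    h, w = len(grid), len(grid[0])
    seen = set()
    text, graphics = [], []
    for y in range(h):
        for x in range(w):
            if grid[y][x] and (y, x) not in seen:
                comp, stack = [], [(y, x)]
                seen.add((y, x))
                while stack:                     # 4-connected flood fill
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                (text if len(comp) <= max_text_size else graphics).append(comp)
    return text, graphics

# A tiny drawing: a 2-pixel "character" and a 9-pixel "graphic element".
drawing = [
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
text_regions, graphic_regions = split_text_graphics(drawing)
```

Routing each group to its own pipeline (character recognition vs. vectorization) is what yields the 30-40% reduction in vector information the abstract reports.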


OryzaGP: rice gene and protein dataset for named-entity recognition

  • Larmande, Pierre;Do, Huy;Wang, Yue
    • Genomics & Informatics / Vol. 17, No. 2 / pp.17.1-17.3 / 2019
  • Text mining has become an important research method in biology; its original purpose is to extract biological entities, such as genes, proteins, and phenotypic traits, to extend the knowledge contained in scientific papers. However, few thorough studies on text mining and application development have been performed for plant molecular biology data, especially for rice, resulting in a lack of datasets available for named-entity recognition tasks on this species. Since benchmarks for rice are rare, we faced various difficulties in exploiting advanced machine learning methods for accurate analysis of the rice literature. To evaluate several approaches to automatically extracting information on gene/protein entities, we built a new dataset for rice as a benchmark. This dataset is composed of titles and abstracts of scientific papers focusing on the rice species, downloaded from PubMed. During the 5th Biomedical Linked Annotation Hackathon, a portion of the dataset was uploaded to PubAnnotation for sharing. Our ultimate goal is to offer a shared task on rice gene/protein name recognition through the BioNLP Open Shared Tasks framework using this dataset, to facilitate an open comparison and evaluation of different approaches to the task.

고유영역을 이용한 문자독립형 화자인식에 관한 연구 (A Study On Text Independent Speaker Recognition Using Eigenspace)

  • 함철배;이동규;이두수
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 1999년도 하계종합학술대회 논문집 / pp.671-674 / 1999
  • We report a new method for speaker recognition. Until now, many researchers have used HMMs (Hidden Markov Models) with cepstral coefficients, or neural networks, for speaker recognition. Here, we introduce a speaker recognition method that uses an eigenspace. This method can reduce the training and recognition time of a speaker recognition system. In the proposed method, we use a low-rank model of the speech eigenspace. In experiments, we obtained good recognition results.
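
The low-rank eigenspace idea can be sketched as follows: speaker feature vectors are projected onto the top principal components of the training set, and a test vector is assigned to the nearest projected reference speaker. The toy features, rank, and nearest-neighbor rule below are assumptions for illustration, not the authors' features or model.

```python
# Eigenspace (PCA) sketch: project speaker vectors onto a low-rank basis
# and recognize by nearest projected reference.
import numpy as np

rng = np.random.default_rng(1)
speakers = rng.normal(size=(3, 20))                  # one reference vector each
train = speakers.repeat(10, axis=0) + 0.1 * rng.normal(size=(30, 20))

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:2]                                       # rank-2 eigenspace

def project(x):
    return basis @ (x - mean)

refs = np.array([project(s) for s in speakers])      # projected references

def recognize(x):
    return int(np.argmin(np.linalg.norm(refs - project(x), axis=1)))

probe = speakers[2] + 0.05 * rng.normal(size=20)     # noisy utterance of speaker 2
```

Working in the rank-2 space replaces 20-dimensional comparisons with 2-dimensional ones, which is the source of the training- and recognition-time savings the abstract claims.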


딥러닝 기반의 의료 OCR 기술 동향 (Trends in Deep Learning-based Medical Optical Character Recognition)

  • 윤성연;최아린;김채원;오수민;손서영;김지연;이현희;한명은;박민서
    • 문화기술의 융합 / Vol. 10, No. 2 / pp.453-458 / 2024
  • Optical character recognition (OCR) is a technology that recognizes the characters in an image and converts them into text in a digital format. As deep-learning-based OCR has shown high recognition rates, many industries that hold large volumes of archived records are adopting OCR. In particular, the medical industry has actively introduced deep-learning-based OCR to improve medical services. This paper surveys trends in deep-learning-based OCR engines and in OCR specialized for medical data, and suggests directions for the development of medical OCR. Current medical OCR improves its recognition rate by applying natural language processing (NLP) to the detected character data. However, it still shows limited recognition accuracy on unstructured handwriting and distorted characters. Building databases of medical data, image pre-processing, and specialized natural language processing are needed to advance medical OCR further.

Extending TextAE for annotation of non-contiguous entities

  • Lever, Jake;Altman, Russ;Kim, Jin-Dong
    • Genomics & Informatics / Vol. 18, No. 2 / pp.15.1-15.6 / 2020
  • Named entity recognition tools are used to identify mentions of biomedical entities in free text and are essential components of high-quality information retrieval and extraction systems. Without good entity recognition, methods will mislabel searched text and will miss important information or identify spurious text that will frustrate users. Most tools do not capture non-contiguous entities, which are separate spans of text that together refer to an entity, e.g., the entity "type 1 diabetes" in the phrase "type 1 and type 2 diabetes." Such entities are commonly found in biomedical texts, especially in lists, where multiple biomedical entities are named in shortened form to avoid repeating words. Most text annotation systems, which enable users to view and edit entity annotations, do not support non-contiguous entities. Experts therefore cannot even visualize non-contiguous entities, let alone annotate them to build valuable datasets for machine learning methods. To combat this problem, and as part of the BLAH6 hackathon, we extended the TextAE platform to allow visualization and annotation of non-contiguous entities. This enables users to add new subspans to existing entities by selecting additional text. We integrate this new functionality with TextAE's existing editing functionality to allow easy changes to entity annotations and editing of relation annotations involving non-contiguous entities, with importing and exporting in the PubAnnotation format. Finally, we roughly quantify the problem across the entire accessible biomedical literature to highlight that a substantial number of non-contiguous entities appear in lists and would be missed by most text mining systems.