• Title/Summary/Keyword: text input

Neural Text Categorizer for Exclusive Text Categorization

  • Jo, Tae-Ho
    • Journal of Information Processing Systems
    • /
    • v.4 no.2
    • /
    • pp.77-86
    • /
    • 2008
  • This research proposes a new neural network for text categorization that uses an alternative representation of documents instead of numerical vectors. Since the proposed neural network is intended only for text categorization, it is called the NTC (Neural Text Categorizer) in this research. Numerical vectors representing documents for text mining tasks have two inherent problems: huge dimensionality and sparse distribution. Although many feature selection methods have been developed to address the first problem, the reduced dimension is still large, and if the dimension is reduced excessively by a feature selection method, the robustness of text categorization is degraded. Even though SVM (Support Vector Machine) is tolerant of huge dimensionality, it is not tolerant of sparse distribution. The goal of this research is to address the two problems at the same time by proposing a new representation of documents and a new neural network that uses this representation as its input (a brief illustration of the two problems follows below).
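
The two problems the abstract names can be made concrete with a small bag-of-words example: even a tiny corpus yields a term-document matrix whose dimension equals the vocabulary size and whose entries are mostly zero. The sketch below is illustrative only and is not the NTC representation proposed in the paper.

```python
# Illustration (not the paper's NTC model): a bag-of-words term-document
# matrix is high-dimensional (one dimension per vocabulary term) and
# overwhelmingly sparse, the two problems the abstract describes.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "neural network for text categorization",
    "feature selection reduces the huge dimensionality",
    "support vector machines tolerate high dimensionality",
]
X = CountVectorizer().fit_transform(docs)      # sparse term-document matrix
n_docs, n_terms = X.shape
density = X.nnz / (n_docs * n_terms)           # fraction of non-zero entries
print(f"{n_terms} dimensions, density {density:.2f}")
```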

Text Line Segmentation of Handwritten Documents by Area Mapping

  • Boragule, Abhijeet;Lee, GueeSang
    • Smart Media Journal
    • /
    • v.4 no.3
    • /
    • pp.44-49
    • /
    • 2015
  • Text line segmentation is a preprocessing step in OCR that can significantly influence the accuracy of document analysis applications. This paper proposes a novel methodology for the text line segmentation of handwritten documents. First, the average width of the connected components is used to form a 1-D Gaussian kernel, and a smoothing operation is then applied to the input binary image; adaptive binarization of the smoothed image forms the final text lines. The segmentation method involves two stages: first, large connected components are labelled as unique text lines using text line area mapping; second, the segmentation is refined using the Euclidean distance between each text line and the small connected components. The group of uniquely labelled text candidates achieves promising segmentation results. The proposed approach works well on Korean- and English-language handwritten documents captured with a camera (a sketch of the smoothing step appears below).
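
A hedged sketch of the smoothing step described in the abstract, assuming OpenCV and a horizontally applied kernel; the smoothing direction, kernel length, and adaptive-threshold parameters are illustrative assumptions, not the paper's settings.

```python
# Sketch only: estimate the average connected-component width, build a 1-D
# Gaussian kernel from it, smooth the binary image horizontally, and
# re-binarize adaptively. All parameter choices are illustrative.
import cv2
import numpy as np

def smooth_text_lines(binary):                              # binary: 0/255 uint8 image
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    avg_w = int(np.mean(stats[1:, cv2.CC_STAT_WIDTH])) if n > 1 else 15
    kernel = cv2.getGaussianKernel(2 * avg_w + 1, avg_w)    # (ksize, 1) column vector
    smoothed = cv2.filter2D(binary.astype(np.float32), -1, kernel.T)  # horizontal blur
    return cv2.adaptiveThreshold(smoothed.astype(np.uint8), 255,
                                 cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 2 * avg_w + 1, -5)
```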

An end-to-end synthesis method for Korean text-to-speech systems (한국어 text-to-speech(TTS) 시스템을 위한 엔드투엔드 합성 방식 연구)

  • Choi, Yeunju;Jung, Youngmoon;Kim, Younggwan;Suh, Youngjoo;Kim, Hoirin
    • Phonetics and Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.39-48
    • /
    • 2018
  • A typical statistical parametric speech synthesis (text-to-speech, TTS) system consists of separate modules, such as a text analysis module, an acoustic modeling module, and a speech synthesis module. This causes two problems: 1) expert knowledge of each module is required, and 2) errors generated in each module accumulate as they pass through subsequent modules. An end-to-end TTS system could avoid such problems by synthesizing voice signals directly from an input string. In this study, we implemented an end-to-end Korean TTS system using Google's Tacotron, an end-to-end TTS system based on a sequence-to-sequence model with an attention mechanism. We used 4392 utterances spoken by a Korean female speaker, an amount that corresponds to 37% of the dataset Google used for training Tacotron. Our system obtained a mean opinion score (MOS) of 2.98 and a degradation mean opinion score (DMOS) of 3.25. We discuss the factors that affected training of the system. Experiments demonstrate that the post-processing network needs to be designed with the output language and input characters in mind, and that the maximum value of n for the n-grams modeled by the encoder should be kept small relative to the amount of training data (a minimal sketch of the attention step is shown below).
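
As a minimal illustration of the attention mechanism the abstract refers to (not Tacotron itself, which uses a learned Bahdanau-style scoring network and many additional components), the decoder state is scored against every encoder output and a softmax-weighted sum forms the context vector.

```python
# Minimal numpy sketch of sequence-to-sequence attention: score the decoder
# state against each encoder output, softmax the scores, and take the
# weighted sum of encoder outputs as the context vector.
import numpy as np

def attention_context(decoder_state, encoder_outputs):
    # decoder_state: (d,); encoder_outputs: (T, d) for T input characters
    scores = encoder_outputs @ decoder_state           # (T,) dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over input positions
    return weights @ encoder_outputs                   # (d,) context vector
```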

Prosodic Annotation in a Thai Text-to-speech System

  • Potisuk, Siripong
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.405-414
    • /
    • 2007
  • This paper describes preliminary work on the prosody modeling aspect of a text-to-speech system for Thai. Specifically, the model is designed to predict symbolic markers from text (i.e., prosodic phrase boundaries, accent, and intonation boundaries) and then to use these markers to generate pitch, intensity, and durational patterns for the synthesis module of the system. In this paper, a novel method for annotating the prosodic structure of Thai sentences based on a dependency representation of syntax is presented. The goal of the annotation process is to predict from text the rhythm of the input sentence when spoken according to its intended meaning. The encoding of the prosodic structure is established by minimizing speech disrhythmy while maintaining congruency with syntax. That is, each word in the sentence is assigned a prosodic feature called strength dynamic, which is based on the dependency representation of syntax. The assigned strength dynamics are then used to obtain rhythmic groupings in terms of a phonological unit called the foot, and the foot structure is finally used to predict the durational pattern of the input sentence. The process has been tested on a set of ambiguous sentences representing various structural ambiguities involving five types of compounds in Thai (a toy illustration of grouping words into feet appears below).
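
A toy illustration of the foot-grouping idea, under the assumption (mine, not the paper's) that a foot opens at each sufficiently strong word and absorbs the following weaker ones; the paper's actual strength dynamics come from dependency syntax and congruency constraints that are not reproduced here.

```python
# Toy illustration only: group words into feet from per-word strength values.
# The grouping rule (a foot starts at each "strong" word) is an assumption
# made purely for illustration.
def group_into_feet(words, strengths, strong=2):
    feet, current = [], []
    for word, s in zip(words, strengths):
        if s >= strong and current:        # a new strong word opens a new foot
            feet.append(current)
            current = []
        current.append(word)
    if current:
        feet.append(current)
    return feet

# Romanized Thai words; strength values are invented for illustration.
print(group_into_feet(["khaw", "maa", "haa", "phom"], [2, 1, 2, 1]))
# -> [['khaw', 'maa'], ['haa', 'phom']]
```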

Academic Registration Text Classification Using Machine Learning

  • Alhawas, Mohammed S;Almurayziq, Tariq S
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.1
    • /
    • pp.93-96
    • /
    • 2022
  • Natural language processing (NLP) is used to understand natural text. Text analysis systems use natural language algorithms to find the meaning of large amounts of text. Text classification is a basic NLP task with a wide range of applications, such as topic labeling, sentiment analysis, spam detection, and intent detection, and it can transform a user's unstructured thoughts into more structured data. In this work, a text classifier has been developed that takes academic admission and registration texts as input, analyzes their content, and automatically assigns relevant tags such as admission, graduate school, and registration. The well-known support vector machine (SVM) and k-nearest neighbor (kNN) algorithms are used to develop this classifier. The results showed that the SVM classifier outperformed the kNN classifier, with an overall accuracy of 98.9%; in addition, the mean absolute error of the SVM was 0.0064, while it was 0.0098 for the kNN classifier. Based on these results, the SVM is used to implement the academic text classification in this work (a hedged sketch of such an SVM/kNN comparison follows below).
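
A hedged sketch of the kind of SVM-versus-kNN comparison the abstract describes, using TF-IDF features; the category names and sample texts below are invented placeholders, and the paper's dataset, preprocessing, and hyperparameters are not reproduced.

```python
# Sketch only: TF-IDF features feeding an SVM and a kNN classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

texts = [
    "how do I register for next semester courses",
    "documents required for graduate school admission",
    "deadline to drop a registered course",
    "application status for the masters program",
]
labels = ["registration", "admission", "registration", "graduate school"]

svm_clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear")).fit(texts, labels)
knn_clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3)).fit(texts, labels)

print(svm_clf.predict(["when does course registration open"]))
print(knn_clf.predict(["when does course registration open"]))
```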

One-key Keyboard: A Very Small QWERTY Keyboard Supporting Text Entry for Wearable Computing (원키 키보드: 웨어러블 컴퓨팅 환경에서 문자입력을 지원하는 초소형 QWERTY 키보드)

  • Lee, Woo-Hun;Sohn, Min-Jung
    • Journal of the HCI Society of Korea
    • /
    • v.1 no.1
    • /
    • pp.21-28
    • /
    • 2006
  • Most commercialized wearable text input devices are wrist-worn keyboards that adopt the minimization strategy of reducing the number of keys. Generally, a drastic key reduction made to achieve sufficient wearability increases KSPC (Keystrokes Per Character), decreases text entry performance, and requires additional effort to learn a new typing method, so we face a wearability-usability tradeoff in designing a good wearable keyboard. To address this problem, we introduced a new keyboard minimization method: reducing key pitch. A series of empirical studies showed the potential of a keyboard with a 7 mm key pitch, which offers good wearability and social acceptance in terms of physical form factors and allows users to type 15.0 WPM in 3 session trials. However, participants pointed out that the lack of passive haptic feedback during keying and of visual feedback on their input degrades text entry performance. We developed the One-key Keyboard to address this problem. A traditional desktop keyboard has one key per character, whereas the One-key Keyboard has only one key (70 mm × 35 mm) on which a 10×5 QWERTY key array is printed. The One-key Keyboard detects the position of the fingertip at the time of the keying event and determines the character entered (a sketch of this position-to-character lookup appears below). We conducted a text entry performance test comprising 5 sessions. Participants typed 18.9 WPM with a 6.7% error rate over all sessions and achieved up to 24.5 WPM. Based on these results, the One-key Keyboard was evaluated as a potential text input device for wearable computing, balancing wearability, social acceptance, input speed, and learnability.
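
A hedged sketch of the position-to-character lookup implied by the abstract: the fingertip coordinate reported at the keying event is quantized onto the 10×5 grid printed on the 70 mm × 35 mm key. The row layout below is an ordinary QWERTY arrangement padded to ten columns and is an assumption, not the device's actual layout.

```python
# Sketch only: map a fingertip position on the single key to a character
# in a 10x5 grid (7 mm pitch in both directions).
KEY_W_MM, KEY_H_MM = 70.0, 35.0
ROWS = ["1234567890",
        "qwertyuiop",
        "asdfghjkl;",
        "zxcvbnm,./",
        "     _    "]          # fifth row sketched as a space-bar placeholder

def char_at(x_mm, y_mm):
    col = min(int(x_mm / (KEY_W_MM / 10)), 9)    # 10 columns of 7 mm pitch
    row = min(int(y_mm / (KEY_H_MM / 5)), 4)     # 5 rows of 7 mm pitch
    return ROWS[row][col]

print(char_at(12.0, 10.0))   # a point in the second row, second column -> 'w'
```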

Design of Handwriting-based Text Interface for Support of Mobile Platform Education Contents (모바일 플랫폼 교육 콘텐츠 지원을 위한 손 글씨 기반 텍스트 인터페이스 설계)

  • Cho, Yunsik;Cho, Sae-Hong;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.27 no.5
    • /
    • pp.81-89
    • /
    • 2021
  • This study proposes a text interface supporting language-based educational content in a mobile platform environment. The proposed interface uses deep learning in its input pipeline so that users can enter words by handwriting. On top of the GUI (Graphical User Interface) of buttons and menus typical of mobile platform content, and of input methods such as screen touch, click, and drag, we design a text interface that can directly accept and process handwriting from the user. It uses the EMNIST (Extended Modified National Institute of Standards and Technology database) dataset and a trained CNN (Convolutional Neural Network) to classify alphabetic characters and combine them into complete words. Finally, we produce English word-learning content, conduct experiments to analyze the learning-support effect of the proposed interface, and compare user satisfaction. We compared the English-word learning of users who experienced the existing keypad-type interface and of users who experienced the proposed handwriting-based text interface in the same educational environment, and we analyzed overall satisfaction with writing words through each interface (a sketch of the kind of character classifier involved appears below).
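
A hedged sketch of the kind of CNN such an interface could use to classify a 28×28 grayscale handwritten character (the EMNIST letters format); the layer sizes and the surrounding training and word-combination logic are illustrative, not the paper's network.

```python
# Sketch only: a small CNN over 28x28 grayscale character images with 26
# alphabetic output classes, as in EMNIST letters.
import torch
import torch.nn as nn

class LetterCNN(nn.Module):
    def __init__(self, n_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.classifier = nn.Linear(64 * 7 * 7, n_classes)

    def forward(self, x):                                  # x: (N, 1, 28, 28)
        return self.classifier(self.features(x).flatten(1))

logits = LetterCNN()(torch.randn(1, 1, 28, 28))            # one fake stroke image
print(logits.argmax(dim=1))                                # predicted letter index
```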

A Study on Image Generation from Sentence Embedding Applying Self-Attention (Self-Attention을 적용한 문장 임베딩으로부터 이미지 생성 연구)

  • Yu, Kyungho;No, Juhyeon;Hong, Taekeun;Kim, Hyeong-Ju;Kim, Pankoo
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.63-69
    • /
    • 2021
  • When a person reads and understands a sentence, they do so by picturing the main words of the sentence as an image. Text-to-image generation is what allows computers to perform this associative process. Previous deep learning-based text-to-image models extract text features using a Convolutional Neural Network (CNN)-Long Short Term Memory (LSTM) network and a bi-directional LSTM, and generate an image by feeding these features to a GAN. Such models use basic embeddings for text feature extraction and take a long time to train because images are generated through several modules. In this research, we therefore propose extracting sentence embeddings with the attention mechanism, which has improved performance in the natural language processing field, and generating an image by feeding the extracted features into a GAN. In our experiments, the inception score was higher than that of the model used in the previous study, and on visual inspection the generated images expressed the features of the input sentence well, even when a long sentence was given as input (a minimal sketch of self-attention sentence embedding follows below).
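
A minimal numpy sketch of self-attention sentence embedding in the spirit described above: scaled dot-product self-attention over word embeddings, mean-pooled into one sentence vector that could condition a GAN generator. The random embeddings and the absence of learned projections are simplifications, not the paper's architecture.

```python
# Sketch only: scaled dot-product self-attention over word embeddings,
# pooled into a single sentence vector.
import numpy as np

def self_attention_sentence_embedding(word_embeddings):
    # word_embeddings: (T, d) for a sentence of T words
    d = word_embeddings.shape[1]
    scores = word_embeddings @ word_embeddings.T / np.sqrt(d)    # (T, T)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)                # row-wise softmax
    attended = weights @ word_embeddings                         # (T, d)
    return attended.mean(axis=0)                                 # (d,) sentence vector

sentence = np.random.randn(6, 128)            # 6 words, 128-dim embeddings
print(self_attention_sentence_embedding(sentence).shape)         # (128,)
```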

Separation of Text and Non-text in Document Layout Analysis using a Recursive Filter

  • Tran, Tuan-Anh;Na, In-Seop;Kim, Soo-Hyung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.4072-4091
    • /
    • 2015
  • The separation of text and non-text elements plays an important role in document layout analysis. A number of approaches have been proposed, but the quality of the separation is still limited due to the complexity of document layouts. In this paper, we present an efficient method for classifying text and non-text components in document images. It combines whitespace analysis with multi-layer homogeneous regions in what we call a recursive filter. First, the input binary document is analyzed by connected-component analysis and whitespace extraction. Second, a heuristic filter is applied to identify non-text components (a sketch of this stage appears below). After that, using a statistical method, we apply the recursive filter on multi-layer homogeneous regions to identify all text and non-text elements of the binary image. Finally, all regions are reshaped and denoised to obtain the text and non-text documents. Experimental results on the ICDAR2009 page segmentation competition dataset and other datasets demonstrate the effectiveness and superiority of the proposed method.
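
A hedged sketch of a heuristic filter stage of the kind described above: connected components whose size or aspect ratio falls far outside the typical character range are flagged as non-text. The thresholds are illustrative assumptions; the paper's whitespace analysis and recursive multi-layer filter are not reproduced here.

```python
# Sketch only: flag connected components with atypical size or aspect ratio
# as non-text candidates; everything else stays as text candidates.
import cv2
import numpy as np

def heuristic_text_filter(binary):                     # binary: 0/255 uint8 image
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    heights = stats[1:, cv2.CC_STAT_HEIGHT]
    median_h = np.median(heights) if n > 1 else 1
    text_ids, nontext_ids = [], []
    for i in range(1, n):                              # label 0 is the background
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        if h > 5 * median_h or h < 0.2 * median_h or w > 20 * h:
            nontext_ids.append(i)                      # far from character-sized
        else:
            text_ids.append(i)
    return text_ids, nontext_ids
```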