• Title/Summary/Keyword: text input

Search Result 354

SVD-LDA: A Combined Model for Text Classification

  • Hai, Nguyen Cao Truong;Kim, Kyung-Im;Park, Hyuk-Ro
    • Journal of Information Processing Systems / v.5 no.1 / pp.5-10 / 2009
  • Text data has always accounted for a major portion of the world's information. As the volume of information increases exponentially, the portion of text data also increases significantly. Text classification therefore remains an important area of research. LDA is a probabilistic model that has been used in many applications across many fields. For text data, LDA likewise has many applications, and various enhancements have been applied to it. However, little attention seems to have been paid to the input given to LDA. In this paper, we suggest a way to map the input space to a reduced space, which may avoid the unreliability, ambiguity and redundancy of individual terms as descriptors. The purpose of this paper is to show that LDA can perform well in such a "clean and clear" space. Experiments are conducted on the 20 Newsgroups data set. The results show that the proposed method can boost classification results when an appropriate rank of the reduced space is chosen.
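A hedged illustration of the pipeline this abstract describes: documents are mapped into an SVD-reduced space and then classified there. The rank of 100, the TF-IDF term weighting, and the use of scikit-learn's LinearDiscriminantAnalysis below are illustrative assumptions, not the paper's SVD-LDA formulation.

```python
# Sketch: classify 20 Newsgroups documents in an SVD-reduced term space.
# The classifier and rank are assumptions, not the paper's SVD-LDA model.
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

pipeline = make_pipeline(
    TfidfVectorizer(max_features=20000),   # raw term space
    TruncatedSVD(n_components=100),        # map terms into a reduced space
    LinearDiscriminantAnalysis(),          # classify in the reduced space
)
pipeline.fit(train.data, train.target)
print("test accuracy:", pipeline.score(test.data, test.target))
```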

A Still Image Compression System with a High Quality Text Compression Capability (고 품질 텍스트 압축 기능을 지원하는 정지영상 압축 시스템)

  • Lee, Je-Myung;Lee, Ho-Suk
    • Journal of KIISE:Software and Applications / v.34 no.3 / pp.275-302 / 2007
  • We propose a novel still image compression system that supports a high quality text compression function. The system segments the text from the image and compresses the text at high quality. The system achieves a 48:1 compression ratio using context-based adaptive binary arithmetic coding, which compresses the codeblocks in each bitplane. The system operates in two input modes: a segmentation mode and a ROI (Region Of Interest) mode. In segmentation mode, the input image is segmented into a foreground consisting of text and a background consisting of the remaining region. In ROI mode, the input image is represented by the region-of-interest window. The high quality text compression function with a high compression ratio shows that the proposed system is comparable with JPEG2000 products. The system also uses gray coding to improve the compression ratio.
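The gray coding mentioned at the end of the abstract can be shown with a minimal sketch: Gray codes make adjacent gray levels differ in exactly one bit, which tends to yield smoother bitplanes for bitplane-wise arithmetic coding. The functions below are a generic illustration, not the paper's codec.

```python
# Gray-code conversion for 8-bit pixel values (generic illustration).
def to_gray(value: int) -> int:
    """Convert a binary-coded integer to its Gray-coded equivalent."""
    return value ^ (value >> 1)

def from_gray(gray: int) -> int:
    """Invert the Gray code back to the original binary value."""
    value = 0
    while gray:
        value ^= gray
        gray >>= 1
    return value

# Pixel values 127 and 128 differ in all 8 bits in plain binary,
# but in only one bit after Gray coding.
print(bin(127), bin(128))                      # 0b1111111 0b10000000
print(bin(to_gray(127)), bin(to_gray(128)))    # 0b1000000 0b11000000
assert from_gray(to_gray(200)) == 200
```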

Development of an image processing algorithm for Korean document recognition (인식률을 향상한 한글문서 인식 알고리즘 개발)

  • 김희식;김영재;이평원
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1997.10a / pp.1391-1394 / 1997
  • This paper proposes a new image processing algorithm for recognizing Korean documents. It extracts the text region from the input image and then segments the text into lines, words and characters. Precise segmentation is very important for recognizing the input document. The input image has 8-bit grayscale resolution. Not only the histogram but also a brightness dispersion graph is used for segmentation. The result shows higher accuracy of document recognition.
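As a rough sketch of histogram-based line segmentation, one of the cues the abstract mentions, the code below finds text lines in an 8-bit grayscale page by thresholding a horizontal projection of dark pixels. The thresholds and toy page are assumptions for illustration, not the paper's algorithm.

```python
# Horizontal-projection line segmentation on a grayscale page (illustrative).
import numpy as np

def segment_lines(page: np.ndarray, ink_threshold: int = 128, min_ink: int = 1):
    """Return (start_row, end_row) pairs for row bands that contain text."""
    ink = (page < ink_threshold).sum(axis=1)    # dark pixels per row
    lines, start = [], None
    for row, count in enumerate(ink):
        if count >= min_ink and start is None:
            start = row                          # entering a text line
        elif count < min_ink and start is not None:
            lines.append((start, row))           # leaving a text line
            start = None
    if start is not None:
        lines.append((start, len(ink)))
    return lines

# Toy page: two dark bands of "text" on a white background.
page = np.full((20, 50), 255, dtype=np.uint8)
page[3:6, 5:45] = 0
page[10:14, 5:45] = 0
print(segment_lines(page))    # [(3, 6), (10, 14)]
```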

Representation of Texts into String Vectors for Text Categorization

  • Jo, Tae-Ho
    • Journal of Computing Science and Engineering / v.4 no.2 / pp.110-127 / 2010
  • In this study, we propose a method for encoding documents into string vectors, instead of numerical vectors. A traditional approach to text categorization usually requires encoding documents into numerical vectors, and this usual method of encoding documents causes two main problems: huge dimensionality and sparse distribution. In this study, we modify or create machine learning-based approaches to text categorization that receive string vectors as input vectors, instead of numerical vectors. As a result, we can improve text categorization performance by avoiding these two problems.
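To make the contrast concrete, the sketch below encodes a toy document as a short, fixed-length list of its most frequent terms. This is only an illustrative stand-in for a string vector; the encoding actually defined in the paper is not reproduced here.

```python
# Illustrative "string vector": the document's k most frequent terms.
from collections import Counter

def to_string_vector(text: str, length: int = 5) -> list:
    """Encode a document as its `length` most frequent terms."""
    counts = Counter(text.lower().split())
    terms = [term for term, _ in counts.most_common(length)]
    return terms + [""] * (length - len(terms))   # pad short documents

doc = "the cat sat on the mat the cat slept"
print(to_string_vector(doc))    # ['the', 'cat', 'sat', 'on', 'mat']
```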

An Efficient Machine Learning-based Text Summarization in the Malayalam Language

  • P Haroon, Rosna;Gafur M, Abdul;Nisha U, Barakkath
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1778-1799 / 2022
  • Automatic text summarization is a procedure that condenses a large body of text into a shorter text containing the significant information. Malayalam is one of the most difficult languages used in certain areas of India, most commonly in Kerala and in Lakshadweep. Natural language processing work on Malayalam is relatively limited due to the complexity of the language as well as the scarcity of available resources. In this paper, an approach is proposed for summarizing Malayalam documents by training a model based on the Support Vector Machine classification algorithm. Different features of the text are taken into account when training the machine so that the system can output the most important data from the input text. The classifier assigns sentences to most important, important, average, and least significant classes, and based on this the machine creates a summary of the input document. The user can select a compression ratio so that the system outputs that fraction of the text as the summary. Model performance is measured using different genres of Malayalam documents as well as documents from the same domain. The model is evaluated using the content evaluation measures precision, recall, F-score, and relative utility. The obtained precision and recall values show that the model is trustworthy and more relevant compared to the other summarizers.
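A simplified sketch of the workflow described above (hand-crafted sentence features, an SVM importance classifier, and a user-selected compression ratio) is given below. The features, toy training data, and scoring are placeholders, not the paper's Malayalam feature set, and the toy example uses English text only to stay self-contained.

```python
# Extractive summarization with an SVM sentence classifier (toy illustration).
import numpy as np
from sklearn.svm import SVC

def features(sentence: str, position: int, total: int) -> list:
    words = sentence.split()
    return [len(words),                              # sentence length
            position / max(total - 1, 1),            # relative position
            sum(w[0].isupper() for w in words if w)] # crude proper-noun count

# Tiny toy training set: 1 = include in summary, 0 = omit.
train_sentences = ["The project was approved by the board in March.",
                   "It rained a little.",
                   "Revenue grew twenty percent over the previous year.",
                   "He smiled."]
X = np.array([features(s, i, len(train_sentences)) for i, s in enumerate(train_sentences)])
y = np.array([1, 0, 1, 0])
clf = SVC(kernel="linear").fit(X, y)

def summarize(sentences, compression: float = 0.5):
    scores = clf.decision_function(
        np.array([features(s, i, len(sentences)) for i, s in enumerate(sentences)]))
    keep = max(1, int(len(sentences) * compression))
    chosen = sorted(np.argsort(scores)[::-1][:keep])  # keep original order
    return [sentences[i] for i in chosen]

print(summarize(train_sentences, compression=0.5))
```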

Data Transition Minimization Algorithm for Text Image (텍스트 영상에 대한 데이터 천이 최소화 알고리즘)

  • Hwang, Bo-Hyun;Park, Byoung-Soo;Choi, Myung-Ryul
    • Journal of Digital Convergence / v.10 no.11 / pp.371-376 / 2012
  • In this paper, we propose a new data coding method and its circuits for minimizing data transitions in text images. The proposed circuits solve the synchronization problem between input data and output data in the modified LVDS algorithm. The proposed algorithm transmits two data signals through an additional serial data coding method in order to minimize data transitions in text images, and it can reduce the operating frequency by half. Thus, we can alleviate the EMI (Electro-Magnetic Interference) problem and reduce power consumption. The simulation results show that the proposed algorithm and circuits provide enhanced data transition minimization for text images and solve the synchronization problem between input data and output data.
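The general idea of transition-minimizing coding can be sketched with classic bus-invert coding: if more than half of the bits would toggle relative to the previously transmitted word, send the inverted word plus a one-bit flag. This is a standard technique used only as an illustration here; it is not the paper's modified-LVDS coding or circuitry.

```python
# Bus-invert coding: a generic transition-minimization illustration.
def encode_stream(words, width=8):
    prev, out = 0, []
    for word in words:
        transitions = bin((word ^ prev) & ((1 << width) - 1)).count("1")
        if transitions > width // 2:
            coded, flag = (~word) & ((1 << width) - 1), 1   # send inverted word
        else:
            coded, flag = word, 0
        out.append((coded, flag))
        prev = coded                                        # bus now holds `coded`
    return out

def decode_stream(coded_words, width=8):
    return [(~w) & ((1 << width) - 1) if flag else w for w, flag in coded_words]

data = [0b00000000, 0b11111111, 0b11110000, 0b00001111]
coded = encode_stream(data)
assert decode_stream(coded) == data
print(coded)    # inverted words are flagged, capping toggles at width/2 per word
```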

Text-Driven Multiple-Path Discourse Processing for Descriptive Texts

  • Seo, Jungyun
    • Journal of Electrical Engineering and Information Science / v.1 no.2 / pp.1-8 / 1996
  • This paper presents a text-driven discourse analysis system, called DPAS. DPAS constructs a discourse structure by weaving together the clauses in a text, finding discourse relations between each clause and the clauses in the context. The basic processing model of DPAS is based on the stack-based model of discourse analysis suggested by Grosz and Sidner. We extend the model with a dynamic programming method to handle various discourse ambiguities effectively and efficiently. We develop the idea of a context space to keep all information about a context. DPAS parses a text by considering all possible discourse relations between a clause and a context. Since different discourse relations may result in different states of a context, DPAS maintains multiple context spaces for an ambiguous text. Since maintaining all interpretations until the whole text is processed requires too many computing resources, DPAS uses depth-limited search to limit the search space. If there is more than one discourse relation between an input clause and a context, DPAS constructs one context space for each discourse relation. DPAS then applies heuristics to choose the most desirable context space after it processes some more input clauses. Although we use descriptive texts to demonstrate DPAS, its basic idea is domain independent, and we believe it can be extended to understand other styles of texts.
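A schematic sketch of the multiple-context-space idea follows, under the assumption that each ambiguous attachment spawns a hypothesis and that hypotheses are pruned by a heuristic score after a fixed number of clauses. The relation inventory and scoring function are invented for illustration and are not DPAS itself.

```python
# Depth-limited, multi-hypothesis attachment of clauses (schematic only).
def process_clauses(clauses, relations, score, depth_limit=3, beam=4):
    hypotheses = [([], 0.0)]                  # (attachment history, total score)
    for i, clause in enumerate(clauses):
        expanded = []
        for history, total in hypotheses:
            for rel in relations:             # one new context space per relation
                s = score(clause, rel, history)
                expanded.append((history + [(clause, rel)], total + s))
        if (i + 1) % depth_limit == 0:        # prune after a fixed lookahead
            expanded.sort(key=lambda h: h[1], reverse=True)
            expanded = expanded[:beam]
        hypotheses = expanded
    return max(hypotheses, key=lambda h: h[1])

# Toy usage with an invented scoring heuristic.
best = process_clauses(["c1", "c2", "c3", "c4"],
                       ["elaboration", "contrast"],
                       score=lambda c, r, h: 1.0 if r == "elaboration" else 0.5)
print(best[1], [rel for _, rel in best[0]])
```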

Segment unit shuffling layer in deep neural networks for text-independent speaker verification (문장 독립 화자 인증을 위한 세그멘트 단위 혼합 계층 심층신경망)

  • Heo, Jungwoo;Shim, Hye-jin;Kim, Ju-ho;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.148-154 / 2021
  • Text-independent speaker verification needs to extract text-independent speaker embeddings to improve generalization performance. However, deep neural networks that depend on training data have the potential to overfit to text information instead of learning the speaker information when repeatedly trained on identical time series. In this paper, to prevent this overfitting, we propose a segment unit shuffling layer that divides and rearranges the input layer or a hidden layer along the time axis, thereby mixing the time-series information. Since the segment unit shuffling layer can be applied not only to the input layer but also to the hidden layers, it can be used as a generalization technique in the hidden layers, which is known to be more effective than generalization at the input layer, and it can be applied simultaneously with data augmentation. In addition, the degree of distortion can be adjusted by adjusting the segment size. We observe that the performance of text-independent speaker verification is improved compared to the baseline when the proposed segment unit shuffling layer is applied.
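A small NumPy sketch of how a segment unit shuffling operation might look (an assumption about the mechanism, not the authors' implementation): the time axis is split into fixed-size segments and the segments are permuted, while frames inside each segment keep their order.

```python
# Shuffle a (time, features) sequence in whole segments along the time axis.
import numpy as np

def segment_shuffle(x: np.ndarray, segment_size: int, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    n_segments = x.shape[0] // segment_size
    head = x[: n_segments * segment_size].reshape(n_segments, segment_size, -1)
    order = rng.permutation(n_segments)              # reorder whole segments
    shuffled = head[order].reshape(n_segments * segment_size, -1)
    return np.concatenate([shuffled, x[n_segments * segment_size:]], axis=0)

frames = np.arange(10 * 3).reshape(10, 3)            # 10 frames, 3-dim features
out = segment_shuffle(frames, segment_size=2, rng=np.random.default_rng(0))
print(out[:, 0])    # frame order is permuted in blocks of 2
```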

Optimizing Input Parameters of Paralichthys olivaceus Disease Classification based on SHAP Analysis (SHAP 분석 기반의 넙치 질병 분류 입력 파라미터 최적화)

  • Kyung-Won Cho;Ran Baik
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.6 / pp.1331-1336 / 2023
  • In text-based fish disease classification using machine learning, the machine learning model has many input parameters, yet they cannot be arbitrarily reduced without degrading performance. To solve this problem, this paper proposes a method of optimizing input parameters specialized for Paralichthys olivaceus disease classification using the SHAP analysis technique. The proposed method preprocesses the disease information extracted from the halibut disease questionnaire, applies the SHAP analysis technique, and evaluates a machine learning model using AutoML. Through this, the performance of the AutoML input parameters is evaluated and the optimal input parameter combination is derived. The proposed method is expected to maintain the existing performance while reducing the number of required input parameters, which will contribute to the efficiency and practicality of text-based Paralichthys olivaceus disease classification.
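A hedged sketch of the described workflow follows: rank input parameters by mean absolute SHAP value and keep only the top-k. The random-forest model, synthetic data, and cut-off are placeholders rather than the paper's questionnaire data or AutoML setup.

```python
# Rank features by mean |SHAP| and keep the top-k (placeholder model and data).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):          # some SHAP versions return one array per class
    sv = np.stack(sv, axis=-1)
sv = np.asarray(sv).reshape(len(X), X.shape[1], -1)

importance = np.abs(sv).mean(axis=(0, 2))    # mean |SHAP| per input parameter
top_k = 8                                    # assumed reduction target
keep = np.argsort(importance)[::-1][:top_k]
print("selected feature indices:", sorted(keep.tolist()))
```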

A Comparative Study on Data Input Design of E-business Websites (E-business 웹사이트에서의 데이터 입력디자인에 관한 비교 연구)

  • 정홍인
    • Archives of Design Research / v.17 no.1 / pp.127-134 / 2004
  • The purpose of this study was to compare data input interfaces used in e-business applications on the web and find optimal input design characteristics. Basic data entry tools such as a pull-down menu, list, text input box, and radio button were examined by entering data into a simulated hotel room reservation web site. Experimental results indicated that the text input box was most efficient for experts or experienced operators when there are more than four menu items, while the pull-down menu was considered most satisfactory, simplest, and easiest to use for novices or inexperienced users. A simple list was determined to be best for the input of binary data considering user satisfaction, simplicity, and flexibility, but the radio button was evaluated best for ease of use. The design guidelines from this study can be applied to build usable interactive web sites and increase economic efficiency.
