• Title/Summary/Keyword: word spacing


A Joint Statistical Model for Word Spacing and Spelling Error Correction Simultaneously (띄어쓰기 및 철자 오류 동시교정을 위한 통계적 모델)

  • Noh, Hyung-Jong; Cha, Jeong-Won; Lee, Gary Geun-Bae
    • Journal of KIISE: Software and Applications / v.34 no.2 / pp.131-139 / 2007
  • In this paper, we present a preprocessor that corrects word spacing errors and spelling errors simultaneously. The proposed model extends the noisy-channel model so that it corrects both error types in colloquial-style sentences effectively, whereas existing preprocessing algorithms are limited because they correct each error type separately. Using an Eojeol transition pattern dictionary and statistical data such as n-gram and Jaso transition probabilities, it minimizes dictionary usage and produces correction candidates efficiently. Although the experimental results are not yet satisfactory at the current stage, our error analysis shows that the proposed methodology has utility, so we expect that, with further improvement, the preprocessor will serve as an effective error corrector for general colloquial-style sentences.
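The noisy-channel decision rule behind this kind of corrector can be sketched as follows. All probabilities and the candidate list here are toy values for illustration, not the paper's dictionaries or statistics: a correction c is chosen to maximize P(c) · P(observed | c).

```python
import math

# Hypothetical language-model probabilities P(c). In the paper these come
# from n-gram statistics; here they are toy values.
LM = {"아버지가 방에": 0.6, "아버지 가방에": 0.4}

# Hypothetical channel-model probabilities P(observed | c), i.e. how likely
# the (unspaced, possibly misspelled) input is given each candidate.
CHANNEL = {
    ("아버지가방에", "아버지가 방에"): 0.5,
    ("아버지가방에", "아버지 가방에"): 0.5,
}

def correct(observed, candidates):
    """Pick the candidate maximizing log P(c) + log P(observed | c)."""
    def score(c):
        return math.log(LM[c]) + math.log(CHANNEL[(observed, c)])
    return max(candidates, key=score)

print(correct("아버지가방에", ["아버지가 방에", "아버지 가방에"]))
# → 아버지가 방에 (the language model breaks the tie in the channel model)
```

Because the channel probabilities are equal here, the language model alone decides; in the joint model, spelling edits would also enter through the channel term.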

An Analysis of Korean Word Spacing Errors Made by Chinese Learners (중국인 한국어 학습자의 글쓰기에 나타난 띄어쓰기 오류 양상 및 지도 방향)

  • Wang, Yuan
    • Korean Educational Research Journal / v.40 no.1 / pp.59-79 / 2019
  • The purpose of this study is to analyze, through questionnaires and interviews, spacing errors in Chinese students' Korean writing and to propose changes to the teaching methods used with Chinese learners by analyzing the causes of those errors. A total of 148 spacing errors were found in the learners' writing samples. The rate of errors made by joining separate words (77.6%) was much higher than that of errors made by inserting a space within a compound word (22.4%). Among the error types, "noun + noun," "modifier form + dependent noun," and postpositional-particle errors occurred most frequently. In this paper, we propose directions for teaching spacing, from both the deductive and the inductive side, for nouns and postpositional particles.


The Effect of Hangul Font on Reading Speed in the Computer Environment

  • Kim, Sunkyoung; Lee, Ko Eun; Lee, Hye-Won
    • Journal of the Ergonomics Society of Korea / v.32 no.5 / pp.449-457 / 2013
  • Objective: The aim of this study is to investigate the effect of Hangul font on reading speed when texts are displayed on a computer screen. Background: Reading performance is influenced by fonts. However, there are few studies of Hangul fonts from a cognitive perspective. Fonts can affect reading performance directly and indirectly, interacting with other visual-perceptual factors such as size, word spacing, and line spacing. Method: In experiment 1, two variables were manipulated: a frame condition (square frame vs. non-square frame) and a stroke condition (serif vs. sans-serif). According to each condition, one of four fonts was applied to the texts. The height of the four fonts was controlled. The participants were asked to read the presented texts aloud. In experiment 2, the non-square frame fonts were adjusted to have approximately the same size, width, letter spacing, and word spacing as the square frame fonts. The experimental design and task were identical to those of experiment 1. Results: In general, reading speed was faster for the square frame fonts than for the non-square frame fonts. Reading speed did not differ significantly across stroke conditions. Conclusion: The frame of a Hangul font significantly influenced reading speed. These results suggest that the type of Hangul font is a factor affecting reading performance. Application: The frame of fonts should be considered in the design of new fonts. Square frame fonts should be the preferred choice to enhance legibility.

A Word Spacing System based on Syllable Patterns for Memory-constrained Devices (메모리 제약적 기기를 위한 음절 패턴 기반 띄어쓰기 시스템)

  • Kim, Shin-Il; Yang, Seon; Ko, Young-Joong
    • Journal of KIISE: Software and Applications / v.37 no.8 / pp.653-658 / 2010
  • In this paper, we propose a word spacing system that can run with only a small amount of memory. We focus on significant memory reduction while maintaining performance at a level comparable to recent studies. Our proposed method is based on hidden Markov model theory and uses only probability information, without adding any rule information. Two types of features are employed: 1) the spacing patterns dependent on each individual syllable, and 2) the transition probabilities between syllable patterns. In an experiment using only the first feature type, we achieved a high accuracy of more than 91% while reducing memory by 53% compared with other systems developed for mobile applications. When both feature types were used, we achieved an outstanding accuracy of more than 94% while reducing memory by 76% compared with another system that employs syllable bigrams as its features.
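The HMM formulation behind such a system can be sketched as Viterbi decoding over binary tags (1 = insert a space after this syllable, 0 = do not), with per-syllable pattern probabilities as emissions. All probabilities below are hypothetical toy values, not the paper's trained statistics:

```python
# Toy per-syllable spacing-pattern probabilities, P(tag | syllable).
P_TAG = {
    "나": {0: 0.8, 1: 0.2},
    "는": {0: 0.2, 1: 0.8},
    "학": {0: 0.9, 1: 0.1},
    "교": {0: 0.2, 1: 0.8},
}
# Toy transition probabilities between consecutive tags.
P_TRANS = {(0, 0): 0.4, (0, 1): 0.6, (1, 0): 0.7, (1, 1): 0.3}

def space(text):
    """Viterbi decoding of space/no-space tags for each syllable."""
    # states[i][t] = (best path score ending in tag t, backpointer)
    states = [{t: (P_TAG[text[0]][t], None) for t in (0, 1)}]
    for ch in text[1:]:
        prev, cur = states[-1], {}
        for t in (0, 1):
            best = max((prev[p][0] * P_TRANS[(p, t)], p) for p in (0, 1))
            cur[t] = (best[0] * P_TAG[ch][t], best[1])
        states.append(cur)
    # Backtrack from the best final tag.
    tag = max(states[-1], key=lambda t: states[-1][t][0])
    tags = [tag]
    for st in reversed(states[1:]):
        tag = st[tag][1]
        tags.append(tag)
    tags.reverse()
    return "".join(ch + (" " if t else "") for ch, t in zip(text, tags)).rstrip()

print(space("나는학교"))  # → 나는 학교
```

Using only the first feature type, as in the paper's 91% experiment, would amount to dropping the transition term and taking the argmax tag per syllable.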

LSTM based sequence-to-sequence Model for Korean Automatic Word-spacing (LSTM 기반의 sequence-to-sequence 모델을 이용한 한글 자동 띄어쓰기)

  • Lee, Tae Seok; Kang, Seung Shik
    • Smart Media Journal / v.7 no.4 / pp.17-23 / 2018
  • We propose an LSTM-based RNN model that can effectively perform automatic word spacing. For long or noisy sentences, which are known to be difficult for neural network learning, we defined proper input and decoding data formats and added dropout, bidirectional multi-layer LSTM, layer normalization, and an attention mechanism to improve performance. Although the Sejong corpus contains some spacing errors, the noise-robust learning model developed in this study, with overfitting prevented through dropout, trained well and returned meaningful Korean word-spacing results and patterns. The experimental results showed that the LSTM sequence-to-sequence model achieves an F1-measure of 0.94, which is better than the GRU-CRF deep-learning method.
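One plausible input/decoding data format for seq2seq word spacing (the paper's exact format is not given in this abstract) is to feed the sentence with spaces removed and have the decoder emit a B/I tag per syllable, B marking the start of a word:

```python
# Hypothetical data format for seq2seq word spacing: encoder input is the
# unspaced sentence, decoder target is one B/I tag per syllable.
def make_example(spaced_sentence):
    chars, tags = [], []
    for word in spaced_sentence.split():
        for i, ch in enumerate(word):
            chars.append(ch)
            tags.append("B" if i == 0 else "I")
    return "".join(chars), tags

def restore(chars, tags):
    """Invert the encoding: insert a space before every non-initial B tag."""
    out = []
    for ch, t in zip(chars, tags):
        if t == "B" and out:
            out.append(" ")
        out.append(ch)
    return "".join(out)

src, tgt = make_example("한글 자동 띄어쓰기")
print(src)                # 한글자동띄어쓰기
print(tgt)                # ['B', 'I', 'B', 'I', 'B', 'I', 'I', 'I']
print(restore(src, tgt))  # 한글 자동 띄어쓰기
```

With this framing, the LSTM's job reduces to sequence labeling, and `restore` turns predicted tags back into spaced text.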

Word Processor font optimization in Fixed-function cell Using a Genetic Algorithm (유전자 알고리즘을 이용한 고정 셀에서 글자 폰트(font) 최적화)

  • Kim, Sang-Won; Kim, Seung-Hee; Kim, Woo-Je
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.163-172 / 2013
  • This study was conducted to explore a method of displaying optimized text that fits the size of a table cell using a genetic algorithm. As a result, optimized renderings for texts of different lengths were produced through optimal values of font size, line spacing, and letter spacing, calculated from the width and height of the cell and the number of letters to be entered. This study is significant in that it provides, through the genetic algorithm, a solution to the letter-optimization problem in fixed cells that occurs in various word processors currently in use.
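A genetic algorithm for this problem can be sketched as follows. The cell model, fitness function, and all constants are simplified assumptions, not the paper's formulation: a chromosome is (font size, line spacing, letter spacing), and fitness prefers the largest font whose rendered text still fits the cell.

```python
import random

random.seed(0)  # deterministic toy run

CELL_W, CELL_H, N_CHARS = 200.0, 60.0, 40  # hypothetical cell and text

def lines_needed(font, letter_sp):
    per_line = max(1, int(CELL_W // (font + letter_sp)))
    return -(-N_CHARS // per_line)  # ceiling division

def fitness(ch):
    font, line_sp, letter_sp = ch
    height = lines_needed(font, letter_sp) * (font + line_sp)
    if height > CELL_H:
        return -height  # penalize overflow
    return font         # otherwise: bigger font is better

def mutate(ch):
    i = random.randrange(3)
    ch = list(ch)
    ch[i] = max(1.0, ch[i] + random.uniform(-2, 2))
    return tuple(ch)

def crossover(a, b):
    cut = random.randrange(1, 3)
    return a[:cut] + b[cut:]

pop = [(random.uniform(5, 30), random.uniform(0, 10), random.uniform(0, 5))
       for _ in range(30)]
for _ in range(50):  # evolve: keep elites, refill with offspring
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

A positive fitness for `best` means the chosen font size, line spacing, and letter spacing render the text within the cell.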

KR-WordRank : An Unsupervised Korean Word Extraction Method Based on WordRank (KR-WordRank : WordRank를 개선한 비지도학습 기반 한국어 단어 추출 방법)

  • Kim, Hyun-Joong; Cho, Sungzoon; Kang, Pilsung
    • Journal of Korean Institute of Industrial Engineers / v.40 no.1 / pp.18-33 / 2014
  • A word is the smallest unit of text analysis, and the premise behind most text-mining algorithms is that the words in given documents can be perfectly recognized. However, newly coined words, spelling and spacing errors, and domain-adaptation problems make it difficult to recognize words correctly. To make matters worse, obtaining a sufficient amount of training data usable in any situation is not only unrealistic but also inefficient. Therefore, an automatic word extraction method that does not require a training process is desperately needed. WordRank, the most widely used unsupervised word extraction algorithm for Chinese and Japanese, shows poor word extraction performance in Korean due to differing language structures. In this paper, we first discuss why WordRank performs poorly in Korean, and then propose a WordRank algorithm customized for Korean, named KR-WordRank, by considering its linguistic characteristics and improving robustness to noise in text documents. Experimental results show that the performance of KR-WordRank is significantly better than that of the original WordRank in Korean. In addition, our proposed algorithm can not only extract proper words but also identify candidate keywords for effective document summarization.
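The ranking step that WordRank-style methods build on is a PageRank-like iteration over a graph of substring candidates. The graph-construction step (scoring substring cohesion from the corpus) is omitted here, and the nodes and edges are made-up toy values:

```python
# Toy candidate graph: node -> outgoing neighbors.
GRAPH = {
    "데이터": ["분석", "마이닝"],
    "분석": ["데이터"],
    "마이닝": ["데이터", "분석"],
}

def pagerank(graph, damping=0.85, iters=50):
    """Plain PageRank iteration with uniform edge weights."""
    rank = {n: 1.0 for n in graph}
    for _ in range(iters):
        rank = {
            n: (1 - damping) + damping * sum(
                rank[m] / len(graph[m]) for m in graph if n in graph[m]
            )
            for n in graph
        }
    return rank

ranks = pagerank(GRAPH)
print(max(ranks, key=ranks.get))  # "데이터" receives the most link mass
```

In KR-WordRank the links and their weights are derived from the corpus rather than fixed, so high-rank nodes correspond to cohesive, frequently reinforced substrings, i.e. word candidates.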

Hot Keyword Extraction of Sci-tech Periodicals Based on the Improved BERT Model

  • Liu, Bing; Lv, Zhijun; Zhu, Nan; Chang, Dongyu; Lu, Mengxin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1800-1817 / 2022
  • With the development of the economy and the improvement of living standards, hot issues within a subject area have become a main research direction, but mining them currently suffers from problems such as large data volumes and complex algorithm structures. In response, this study proposes a method for extracting hot keywords from scientific journals based on an improved BERT model, which can also serve as a reference for researchers. The method improves the ensemble's overall similarity measure by introducing compound-keyword word density, combining word segmentation, word-sense set distance, and density clustering to construct an improved BERT (I-BERT) framework, on which a composite keyword heat-analysis model is built. Taking the 14,420 articles published in 21 social-science management periodicals collected by CNKI (China National Knowledge Infrastructure) from 2017 to 2019 as experimental data, the superiority of the proposed method is verified on word spacing, class spacing, and the extraction accuracy and recall of hot keywords. The experiments show that the proposed method extracts hot keywords more accurately than other methods, which ensures that scientific journals can capture hot topics in the discipline in a timely and accurate manner and, ultimately, use information technology to track popular keywords.

A Normalization Method of Distorted Korean SMS Sentences for Spam Message Filtering (스팸 문자 필터링을 위한 변형된 한글 SMS 문장의 정규화 기법)

  • Kang, Seung-Shik
    • KIPS Transactions on Software and Data Engineering / v.3 no.7 / pp.271-276 / 2014
  • Short message service (SMS) is a very convenient communication method in a mobile environment. However, it has caused a serious side effect: the generation of spam messages for advertisement. Spam senders distort or deform SMS sentences to prevent the messages from being caught by automatic filtering systems. To increase the performance of a spam filtering system, the distorted sentences need to be recovered into normal sentences. This paper proposes a method of normalizing the various types of distorted sentences and extracting keywords through automatic word spacing and compound-noun decomposition.
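The compound-noun decomposition step mentioned here can be sketched as a dictionary-based, longest-first segmentation. The lexicon below is a toy stand-in for a real Korean noun dictionary, and the strategy is an illustrative assumption rather than the paper's algorithm:

```python
# Toy noun lexicon; a real system would use a large Korean dictionary.
NOUNS = {"정보", "검색", "시스템", "정보검색"}

def decompose(word):
    """Return one dictionary-based segmentation of `word`, or None.

    Tries the longest matching prefix first, then recurses on the rest,
    backtracking to shorter prefixes when the remainder cannot be split.
    """
    if not word:
        return []
    for end in range(len(word), 0, -1):
        if word[:end] in NOUNS:
            rest = decompose(word[end:])
            if rest is not None:
                return [word[:end]] + rest
    return None

print(decompose("정보검색시스템"))  # ['정보검색', '시스템']
```

Splitting concatenated nouns this way exposes the individual keywords that the spam filter matches against, even when the sender removed the spaces.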

SMS Text Messages Filtering using Word Embedding and Deep Learning Techniques (워드 임베딩과 딥러닝 기법을 이용한 SMS 문자 메시지 필터링)

  • Lee, Hyun Young; Kang, Seung Shik
    • Smart Media Journal / v.7 no.4 / pp.24-29 / 2018
  • Text analysis for natural language processing in deep learning represents words in vector form through word embedding. In this paper, we propose a method of constructing a document vector and classifying messages as spam or normal using word embedding and deep learning. Automatic word spacing, applied during preprocessing, ensures that words with similar contexts are represented adjacently in the vector space. In addition, spam senders intentionally introduce word-formation errors with non-alphabetic or unusual characters to avoid being blocked by spam filters. Two embedding algorithms, CBOW and skip-gram, are used to produce sentence vectors, and the performance and accuracy of the deep-learning-based spam filter model are measured against those of SVMlight.
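The document-vector construction this pipeline relies on can be sketched by averaging word vectors and feeding the result to a linear scorer. The embeddings and classifier weights below are toy values (not CBOW/skip-gram training output), and the words and weights are made up for illustration:

```python
# Hypothetical 3-dimensional word embeddings.
EMB = {
    "무료": [0.9, 0.1, 0.0],
    "대출": [0.8, 0.2, 0.1],
    "안녕": [0.0, 0.9, 0.3],
}

def doc_vector(tokens):
    """Average the embeddings of known tokens into one document vector."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(3)]

# Toy linear spam classifier: score = w . v + b, spam if positive.
W, B = [2.0, -1.0, -1.0], -0.5

def is_spam(tokens):
    v = doc_vector(tokens)
    return sum(w * x for w, x in zip(W, v)) + B > 0

print(is_spam(["무료", "대출"]))  # True:  spam-leaning vectors dominate
print(is_spam(["안녕"]))          # False: normal-leaning vector
```

In the paper, the vectors come from CBOW or skip-gram training and the classifier is a deep network, but the shape of the pipeline (tokens → document vector → score) is the same.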