• Title/Summary/Keyword: consonants and vowels


A Study on Stroke Extraction for Handwritten Korean Character Recognition (필기체 한글 문자 인식을 위한 획 추출에 관한 연구)

  • Choi, Young-Kyoo;Rhee, Sang-Burm
    • The KIPS Transactions:PartB
    • /
    • v.9B no.3
    • /
    • pp.375-382
    • /
    • 2002
  • Handwritten character recognition is divided into on-line and off-line recognition. On-line recognition has achieved markedly better results than off-line recognition because it can acquire dynamic writing information, such as stroke order and stroke position, through a pen-based electronic input device such as a tablet. In contrast, no dynamic information is available in off-line recognition: consonants and vowels overlap heavily and the images between strokes are noisy, so recognition performance depends strongly on the result of preprocessing. This paper proposes a method that effectively extracts strokes, including dynamic information, for off-line handwritten Korean character recognition. First, the input character image is enhanced and binarized in a preprocessing step based on the watershed algorithm. Next, a skeleton is extracted using a modified Lu and Wang thinning algorithm, and segment pixel arrays are obtained by extracting feature points of the character. Vectorization is then performed with a maximum-permitted-error method; when several strokes are bound into one segment, the segment pixel array is split into two or more segment vectors. To reconstruct complete strokes from the extracted segment vectors, the directional component of each vector is modified using a right-handed writing coordinate system, and adjacent segment vectors that can be combined are merged into complete strokes suitable for character recognition. Experiments verify that the proposed method is suitable for handwritten Korean character recognition.
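
A minimal sketch of the kind of pipeline the abstract describes (binarization, thinning, vectorization under a maximum permitted error), using off-the-shelf OpenCV/scikit-image routines as stand-ins for the watershed preprocessing and the modified Lu-and-Wang thinning; this is an illustration of the steps, not the authors' implementation.

```python
# Assumed libraries: opencv-python, scikit-image. The error tolerance is illustrative.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def extract_segment_vectors(gray_img, max_error=2.0):
    # Preprocessing: enhance/binarize the character image (Otsu threshold here,
    # where the paper uses a watershed-based procedure).
    _, binary = cv2.threshold(gray_img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Thinning: reduce strokes to a one-pixel-wide skeleton.
    skeleton = skeletonize(binary > 0).astype(np.uint8) * 255

    # Trace skeleton pixel arrays and vectorize each one with a maximum
    # permitted error (Douglas-Peucker approximation).
    contours, _ = cv2.findContours(skeleton, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return [cv2.approxPolyDP(c, max_error, False) for c in contours]
```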

Space Structure Character of Hangeul Typography (한글 타이포그래피의 공간 구조적 특성)

  • Kim, Young-Kook;Park, Seong-Hyeon
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.3
    • /
    • pp.86-96
    • /
    • 2008
  • The development of a writing system is generally judged by its formative value in terms of function and structure. The principle of clustered (syllable-block) writing is the most distinctive feature of Hangeul typography, grounded in both function and formativeness. Spatial structure arises not only from changes in letterform but also from the characteristic combination of syllables: consonants and vowels are combined into a single letter, and these combinations build into words, sentences, and paragraphs, creating second- and third-order spatial structures. This spatial-structure character strongly affects readability, the core function of typography, and is therefore regarded as a very important component of Hangeul typography. In this study, the spatial-structure character of Hangeul typography is first reviewed in relation to the visual perception principles of Gestalt psychology, and square-framed and non-square-framed letterforms are compared. By applying both letterform types to the same sentences, legibility and readability were examined. The study finds that the spatial-structure character of Hangeul typography has a significant impact on its function, and that for future design it is critical not only for design itself but also for the communication environment, since the spatial-structure formativeness of Hangeul typography interacts with communication, its basic purpose.

Hangul Bitmap Data Compression Embedded in TrueType Font (트루타입 폰트에 내장된 한글 비트맵 데이타의 압축)

  • Han Joo-Hyun;Jeong Geun-Ho;Choi Jae-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.6
    • /
    • pp.580-587
    • /
    • 2006
  • As PDAs, IMT-2000 devices, and e-Books have become widespread, the number of users of these products has been increasing. However, their available memory is still much smaller than that of desktop PCs. Demand for TrueType fonts on these devices has grown because more users want high-quality fonts, and TrueType fonts are widely used in Windows CE products. However, TrueType fonts occupy a large portion of the limited memory of mobile devices, so it is necessary to reduce their size. This paper presents a two-phase compression technique for reducing the size of the Hangul bitmap data embedded in TrueType fonts. In the first phase, each character bitmap is divided into its initial consonant, medial vowel, and final consonant, and the character is recomposed as a composite bitmap. In the second phase, if any two consonant or vowel bitmaps are identical, one of them is removed. The embedded TrueType bitmaps of Hangul Wanseong (pre-composed) and Hangul Johab (pre-combined) fonts were used for compression. With the proposed technique, the embedded bitmap data are reduced by about 35% for the Wanseong font and 7% for the Johab font; overall, the total TrueType Wanseong font is reduced by about 9.26%.
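
The two-phase idea can be illustrated with a short sketch: each syllable bitmap is split into initial/medial/final component bitmaps, and the syllable is recomposed from a table in which duplicated consonant or vowel bitmaps are stored only once. The data layout and the split_components helper below are hypothetical.

```python
# Illustrative sketch only; not the paper's actual TrueType table format.
def compress_bitmaps(syllable_bitmaps, split_components):
    """syllable_bitmaps: dict {syllable: bitmap bytes}
    split_components: function returning (initial, medial, final) component bitmaps."""
    component_table = {}   # phase 2: unique component bitmaps, stored once
    composed = {}          # syllable -> indices into the component table

    for syllable, bitmap in syllable_bitmaps.items():
        indices = []
        for part in split_components(bitmap):   # phase 1: split into jamo bitmaps
            key = bytes(part)
            if key not in component_table:      # identical components are deduplicated
                component_table[key] = len(component_table)
            indices.append(component_table[key])
        composed[syllable] = tuple(indices)

    return component_table, composed
```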

Learning-associated Reward and Penalty in Feedback Learning: an fMRI activation study (학습피드백으로서 보상과 처벌 관련 두뇌 활성화 연구)

  • Kim, Jinhee;Kan, Eunjoo
    • Korean Journal of Cognitive Science
    • /
    • v.28 no.1
    • /
    • pp.65-90
    • /
    • 2017
  • Rewards and penalties become informative only when they are contingent on an immediately preceding response. Our goal was to determine whether the brain responds differently to motivational events depending on whether they provide feedback with contingencies that are effective for learning. Event-related fMRI data were obtained from 22 volunteers performing a visuomotor categorical task. In learning-condition trials, participants learned by trial and error to make left or right responses to letter cues (16 consonants); monetary rewards (+500) or penalties (-500) were given as feedback (learning feedback). In random-condition trials, cues (4 vowels) appeared to the right or left of the display center, and participants were instructed to respond with the appropriate hand; however, rewards or penalties (random feedback) were given randomly (50/50%) regardless of the correctness of the response. Feedback-associated BOLD responses were analyzed with an ANOVA [trial type (learning vs. random) x feedback type (reward vs. penalty)] using SPM8 (voxel-wise FWE p < .001). The right caudate nucleus and right cerebellum showed activation, whereas the left parahippocampus and other regions of the default mode network showed deactivation; both effects were greater for learning trials than for random trials. Activations associated with reward feedback did not differ between the two trial types in any brain region. For penalties, both learning-penalty and random-penalty enhanced activity in the left insular cortex but not the right. The left insula, however, as well as the left dorsolateral prefrontal cortex and the dorsomedial prefrontal cortex/dorsal anterior cingulate cortex, showed much greater responses for learning-penalty than for random-penalty. These findings suggest that learning-penalty, unlike rewards or random-penalty, plays a critical role in learning, probably not only because it evokes aversive emotional responses but also because of error-detection processing, either of which might lead to changes in planning or strategy.

Knowledge based Text to Facial Sequence Image System for Interaction of Lecturer and Learner in Cyber Universities (가상대학에서 교수자와 학습자간 상호작용을 위한 지식기반형 문자-얼굴동영상 변환 시스템)

  • Kim, Hyoung-Geun;Park, Chul-Ha
    • The KIPS Transactions:PartB
    • /
    • v.15B no.3
    • /
    • pp.179-188
    • /
    • 2008
  • This paper studies a knowledge-based text-to-facial-sequence-image system for interaction between lecturer and learner in cyber universities. The system synthesizes facial image sequences in which the lips are synchronized to text information, based on the grammatical characteristics of Hangul. For its implementation, we propose a method for converting text information into phoneme codes, deformation rules for mouth shapes that vary with the phoneme codes, and a method for synthesizing facial image sequences using these mouth-shape deformation rules. In the proposed method, all Hangul syllables are represented by 10 principal mouth shapes and 78 compound mouth shapes, determined by the pronunciation characteristics of the basic consonants and vowels and by the articulation rules, respectively. To synthesize facial image sequences in real time on a PC, the 88 mouth shapes stored in a database are used instead of synthesizing a mouth shape for each frame. To verify the validity of the proposed method, various facial image sequences were synthesized from text information, and a system that can run on a PC was implemented using the proposed method.
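
The text-to-phoneme-code step can be sketched with standard Unicode Hangul syllable decomposition; the mouth-shape lookup table below is a hypothetical stand-in for the 88 stored shapes, not the paper's database.

```python
# Standard Unicode arithmetic for precomposed Hangul syllables (U+AC00..U+D7A3).
def syllable_to_phoneme_codes(ch):
    code = ord(ch) - 0xAC00
    if not 0 <= code <= 11171:
        return None                      # not a precomposed Hangul syllable
    initial = code // (21 * 28)          # 19 initial consonants
    medial = (code % (21 * 28)) // 28    # 21 medial vowels
    final = code % 28                    # 27 final consonants (or none)
    return initial, medial, final

def phonemes_to_mouth_shapes(text, shape_table):
    # shape_table maps (slot, phoneme index) to one of the stored mouth-shape
    # images; here it is just a dict lookup used for illustration.
    frames = []
    for ch in text:
        codes = syllable_to_phoneme_codes(ch)
        if codes is None:
            continue
        for slot, idx in zip(("initial", "medial", "final"), codes):
            frames.append(shape_table.get((slot, idx)))
    return frames
```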

Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung;Kim, Sunhee;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.57-66
    • /
    • 2021
  • This study addresses the automatic severity classification of dysarthric speakers based on speech intelligibility. Speech intelligibility is a complex measure affected by features from multiple speech dimensions, yet most previous studies are restricted to features from a single dimension. To capture the characteristics of the speech disorder effectively, we extracted features from multiple speech dimensions: voice quality, prosody, and pronunciation. Voice quality consists of jitter, shimmer, harmonics-to-noise ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean/std/min/max/median/25th quartile/75th quartile), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation contains the percentage of correct phonemes (percentage of correct consonants/vowels/total phonemes) and the degree of vowel distortion (Vowel Space Area, Formant Centralized Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted using various feature combinations. The results indicate that using features from all three speech dimensions gives the best result, an F1-score of 80.15, compared to using features from only one or two dimensions. This implies that voice quality, prosody, and pronunciation features should all be considered in the automatic severity classification of dysarthria.
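
A minimal sketch of how features from the three speech dimensions might be combined for severity classification; the individual feature extractors are assumed to exist elsewhere, and the SVM classifier is an assumption for illustration, not the model used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_feature_matrix(utterances, voice_quality, prosody, pronunciation):
    # Each extractor returns a fixed-length vector for one utterance:
    # jitter/shimmer/HNR..., rate/pitch/rhythm..., percent-correct-phonemes/vowel distortion...
    return np.vstack([
        np.concatenate([voice_quality(u), prosody(u), pronunciation(u)])
        for u in utterances
    ])

def train_severity_classifier(X, y):
    # Standardize the concatenated features, then fit a hypothetical RBF-SVM.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    return clf
```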

A Phoneme-based Approximate String Searching System for Restricted Korean Character Input Environments (제한된 한글 입력환경을 위한 음소기반 근사 문자열 검색 시스템)

  • Yoon, Tai-Jin;Cho, Hwan-Gue;Chung, Woo-Keun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.10
    • /
    • pp.788-801
    • /
    • 2010
  • Mobile devices are advancing remarkably, so research on mobile input methods is becoming an increasingly important issue. There are many input devices, such as keypads, QWERTY keypads, touch screens, and speech recognizers, but they are not as convenient as a typical desktop keyboard, so input strings usually contain many typing errors. These errors are not a problem for person-to-person communication, but they are critical when searching a database such as a dictionary or address book, where they prevent correct results from being retrieved. Hangul in particular has more than 10,000 distinct characters, because one Hangul character is formed by combining consonants and vowels, so the error rate is higher than for English. The suffix tree is the most widely used data structure for handling errors in queries, but it cannot cope with the full variety of errors. In this paper, we propose a fast approximate Korean word searching system that tolerates a wide variety of typing errors. The system includes several algorithms that adapt general approximate string searching to Hangul. We also present a profanity filter built on the proposed system, which filters more than 90% of coined profanities.
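
The core idea, approximate matching at the phoneme level, can be sketched by decomposing each Hangul string into a jamo (consonant/vowel) sequence and comparing sequences with edit distance; the decomposition uses standard Unicode arithmetic, while the indexing structure that replaces the suffix tree is omitted and the naive linear scan below is for illustration only.

```python
def to_jamo(text):
    # Decompose each precomposed Hangul syllable into initial/medial/final codes;
    # offsets keep the three jamo classes distinct, non-Hangul characters pass through.
    jamo = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:
            jamo += [code // 588, 588 + (code % 588) // 28, 616 + code % 28]
        else:
            jamo.append(ord(ch))
    return jamo

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance over jamo sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def approximate_search(query, dictionary, max_dist=2):
    q = to_jamo(query)
    return [w for w in dictionary if edit_distance(q, to_jamo(w)) <= max_dist]
```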

A Model for evaluating the efficiency of inputting Hangul on a telephone keyboard (전화기 자판의 한글 입력 효율성 평가 모형)

  • Koo, Min-Mo;Lee, Mahn-Young
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.295-304
    • /
    • 2001
  • The standard for a telephone Hangul keyboard should be decided on objective factors: the number of strokes and the distance the fingers move. Most designers will agree on these factors because they can be calculated objectively. We therefore developed a model that evaluates the efficiency of Hangul input on a telephone keyboard in terms of these two factors. Compared with other models, its main features are as follows: (1) it calculates the number of strokes rather than typing time; (2) it directly uses co-occurrence frequencies counted on the KOREA-1 Corpus; (3) it uses a total set of 67 consonants and vowels; and (4) it can evaluate keyboards that use a syllabic function key (the complete key, the null key, or the final-consonant key) as well as keyboards that use no syllabic function key. There are, however, many other factors that determine the efficiency of Hangul input on a telephone keyboard; a more accurate assessment of a telephone Hangul keyboard must consider experimental data as well as this logical model.
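
The stroke-count factor of such a model can be sketched as a frequency-weighted average over the phoneme set; the layout and frequency tables below are placeholders, not the KOREA-1 Corpus data or the paper's actual scoring formula.

```python
def weighted_stroke_count(layout, frequencies):
    """layout: dict phoneme -> number of key presses needed on a given keyboard
    frequencies: dict phoneme -> relative frequency of that phoneme in a corpus"""
    total_freq = sum(frequencies.values())
    return sum(layout[p] * f for p, f in frequencies.items()) / total_freq

# Hypothetical usage: compare two candidate layouts over the same frequency table.
# score_a = weighted_stroke_count(layout_a, corpus_freq)
# score_b = weighted_stroke_count(layout_b, corpus_freq)
```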


Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.1
    • /
    • pp.61-78
    • /
    • 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numerals of size 19×19. Our version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and three pairs of Us/Uc layers. The last Uc layer, the recognition layer, consists of 24 planes of 5×5 cells that indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer, respectively. The network was trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, translation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism to segment the image within a syllabic character. In initial sample tests, the model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. These results show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are more complex in both structure and graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is required before the model can recognize Korean syllabic characters containing complex vowels and complex consonants, for which correct recognition of the neighboring area between two simple graphemes becomes more critical.
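
For orientation, a very loose modern analogue of the layer structure described above (input layer, three feature-extraction/tolerance stages, and a recognition layer of 24 planes of 5×5 cells) can be written with convolution and pooling layers; this is an illustrative assumption only, since the original Neocognitron uses unsupervised competitive learning and a backward selective-attention path that are not shown here.

```python
import torch.nn as nn

# Loose convolution/pooling analogue of three S-cell/C-cell stages; channel
# counts other than the final 24 planes are arbitrary choices.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),   # stage 1: feature extraction
    nn.AvgPool2d(2),                                          # stage 1: tolerance to shift
    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),  # stage 2
    nn.AvgPool2d(2),
    nn.Conv2d(32, 24, kernel_size=5, padding=2), nn.ReLU(),  # stage 3
    nn.AdaptiveAvgPool2d(5),                                  # 24 planes of 5x5 cells
)
# A 61x61 grapheme image maps to a 24x5x5 response; the most active plane would
# indicate which of the 24 trained graphemes (10 vowels + 14 consonants) is present.
```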

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but because they are probabilistic models based on the frequency of each unit in the training set, they cannot model the correlation between input units efficiently. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). This paper therefore proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, a vowel or a consonant, is the smallest unit of which Korean text is composed. We construct the language model using three or four LSTM layers. Each model was trained with the stochastic gradient algorithm and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend. After preprocessing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and outputs consisting of the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were run on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also required the longest training time for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and even worsened under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model.
Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which underpin artificial intelligence systems.
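
A minimal Keras sketch of the character-window setup described above (20 input characters predicting the 21st, stacked LSTM layers); the embedding size and layer width are assumptions, not the paper's exact configuration, and modern TensorFlow/Keras is used in place of the original Keras-on-Theano setup.

```python
from tensorflow.keras import layers, models

VOCAB = 74    # unique characters after preprocessing (per the paper)
WINDOW = 20   # length of the input window; the model predicts the following character

model = models.Sequential([
    layers.Embedding(VOCAB, 64),            # hypothetical embedding size
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),                        # three LSTM layers; add one more for the 4-layer variant
    layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X, y): X of shape (num_samples, WINDOW) holds character indices,
# y holds the index of the character that follows each window.
```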