• Title/Summary/Keyword: word segmentation


Character Level and Word Level English License Plate Recognition Using Deep-learning Neural Networks (딥러닝 신경망을 이용한 문자 및 단어 단위의 영문 차량 번호판 인식)

  • Kim, Jinho
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.16 no.4
    • /
    • pp.19-28
    • /
    • 2020
  • Vehicle license plate recognition (LPR) systems are not widely deployed in Malaysia because of loose character-layout rules, a varying number of characters, and the mix of capital English characters and italic English words on plates. Because italic English words are hard to segment, a separate method is required to recognize them on Malaysian license plates. In this paper, we propose a mixed character-level and word-level English license plate recognition algorithm using deep-learning neural networks. A difference-of-Gaussians method is used to segment characters and words by generating a black-and-white image with emphasized character strokes and separated touching characters. The proposed deep-learning neural networks were implemented in an LPR system at the gate of a building in Kuala Lumpur to collect a database and evaluate the algorithm's performance. The evaluation results show that the proposed Malaysian English LPR achieves 98.01% accuracy and can be used in the commercial market.
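The difference-of-Gaussians (DoG) step described above can be sketched in a few lines. This is a generic illustration, not the authors' implementation; the sigma values and the threshold at 20% of the peak response are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_binarize(gray, sigma_small=1.0, sigma_large=3.0):
    """Emphasize character strokes with a difference of Gaussians,
    then threshold to a black-and-white image."""
    gray = np.asarray(gray, dtype=float)
    dog = gaussian_filter(gray, sigma_small) - gaussian_filter(gray, sigma_large)
    # Band-pass responses well above zero are treated as stroke pixels.
    return (dog > 0.2 * dog.max()).astype(np.uint8)

# Toy "plate": one bright vertical stroke on a dark background.
img = np.zeros((16, 16))
img[:, 7:9] = 255
bw = dog_binarize(img)
```

Because the small-sigma blur preserves thin strokes while the large-sigma blur spreads them out, their difference peaks on stroke pixels, which is what lets touching characters be separated in the pipeline the abstract describes.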

A Reverse Segmentation Algorithm of Compound Nouns Using Affix Information and Preference Pattern (접사정보 및 선호패턴을 이용한 복합명사의 역방향 분해 알고리즘)

  • Ryu, Bang;Baek, Hyun-Chul;Kim, Sang-Bok
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.3
    • /
    • pp.418-426
    • /
    • 2004
  • This paper suggests a reverse segmentation algorithm for Korean compound nouns that uses affix information and preference-pattern information. The structure of Korean compound nouns is mostly derived from Chinese characters and exhibits certain preference patterns, which this paper utilizes as segmentation rules. To evaluate the accuracy of the proposed algorithm, an experiment was performed with 36,061 compound nouns. The algorithm segmented 99.3% of them correctly and compared favorably with another algorithm; in particular, almost all four- and five-syllable compound nouns were segmented successfully.
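The reverse (right-to-left) decomposition strategy the abstract describes can be sketched as a longest-suffix match against a unit-noun dictionary. This is a minimal sketch under assumed inputs, not the paper's full algorithm, which additionally exploits affix information and preference patterns:

```python
def reverse_segment(compound, dictionary):
    """Decompose a compound noun right-to-left, repeatedly stripping
    the longest known unit noun from the end; None if it fails."""
    units = []
    rest = compound
    while rest:
        for i in range(len(rest)):          # longest remaining suffix first
            if rest[i:] in dictionary:
                units.append(rest[i:])
                rest = rest[:i]
                break
        else:
            return None                     # some suffix is out of vocabulary
    return list(reversed(units))

# Illustrative dictionary: "information", "retrieval", "system".
nouns = {"정보", "검색", "시스템"}
parts = reverse_segment("정보검색시스템", nouns)
```

Segmenting from the right first is what makes the head noun (which in Korean compounds sits at the end) the anchor of the decomposition.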


Design and Implementation of Virtual Network Search System for Segmentation of Unconstrained Handwritten Hangul (무제약 필기체 한글 분할을 위한 가상 네트워크 탐색 시스템의 설계 및 구현)

  • Park Sung-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.5
    • /
    • pp.651-659
    • /
    • 2005
  • For the segmentation of unconstrained handwritten Hangul, a new method, not previously introduced, was proposed and implemented: a virtual network search system applied to the space between characters. The proposed system was designed to handle unconstrained handwritten Hangul from various writers and to generate a number of curved segmentation paths by applying a virtual network to the space between characters. The system prevents the search process from generating a path in a wrong position by adjusting the search window to the target block during the search. In experiments, the proposed virtual network search system achieved a segmentation accuracy of 91.4% on a set of 800 words, including touched and overlapped characters, collected from various writers.


Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li;Cheng Li;Honglie Zhang
    • Journal of Information Processing Systems
    • /
    • v.19 no.5
    • /
    • pp.631-640
    • /
    • 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical-item information and neglect the effect of word order, and the optimized variants that do incorporate word order leave the weights hard to determine. In this paper, building on the keyword-overlap similarity algorithm, a short English text similarity algorithm based on lexical chunk theory (LC-SETSA) is proposed, which introduces lexical chunk theory from cognitive psychology into short English text similarity calculation for the first time. Lexical chunks are applied to segment short English texts, the segmentation results preserve the semantic connotation and fixed word order of the chunks, and the overlap similarity of the lexical chunks is then calculated accordingly. Finally, comparative experiments are carried out, and the results show that the proposed algorithm is feasible, stable, and effective to a large extent.
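The baseline the paper optimizes, keyword overlap computed over chunk-segmented text, can be sketched as below. The greedy longest-chunk lookup and the overlap coefficient are illustrative assumptions, not the paper's exact formulation:

```python
def chunk_segment(text, chunks):
    """Greedy left-to-right segmentation against a chunk inventory;
    words not covered by a multi-word chunk stay as single tokens."""
    words = text.lower().split()
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):      # try the longest span first
            cand = " ".join(words[i:j])
            if j - i > 1 and cand in chunks:
                out.append(cand)
                i = j
                break
        else:
            out.append(words[i])
            i += 1
    return out

def overlap_similarity(a, b, chunks=frozenset()):
    """Overlap coefficient on chunk sets: |A ∩ B| / min(|A|, |B|)."""
    sa, sb = set(chunk_segment(a, chunks)), set(chunk_segment(b, chunks))
    return len(sa & sb) / min(len(sa), len(sb))

chunks = frozenset({"as soon as"})
sim = overlap_similarity("call me as soon as possible",
                         "reply as soon as you can", chunks)
```

Treating "as soon as" as a single unit means the chunk's internal word order contributes to the overlap instead of being lost in a bag of words, which is the intuition behind LC-SETSA.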

Performance Improvement of Automatic Speech Segmentation and Labeling System (자동 음성분할 및 레이블링 시스템의 성능향상)

  • Hong Seong Tae;Kim Je-U;Kim Hyeong-Sun
    • MALSORI
    • /
    • no.35_36
    • /
    • pp.175-188
    • /
    • 1998
  • A database segmented and labeled to the phoneme level plays an important role in phonetic research and speech engineering. However, building one usually requires manual segmentation and labeling, which is time-consuming and can lead to inconsistent results. Automatic segmentation and labeling can be introduced to solve these problems. In this paper, we investigate methods to improve the performance of an automatic segmentation and labeling system, considering the spectral variation function (SVF), a modified silence model, and the use of energy variations in a postprocessing stage. SVF is applied in three ways: (1) as an addition to the feature parameters, (2) in postprocessing of phoneme boundaries, and (3) to restrict the Viterbi path so that the resulting phoneme boundaries fall in frames around SVF peaks. In the postprocessing stage, the positions with the greatest energy variation during transitions between silence and other phonemes were used to adjust boundaries. To evaluate the system, we used a 452 phonetically balanced word (PBW) database for training phoneme models and a phonetically balanced sentence (PBS) database for testing. In our experiments, 83.1% (a 6.2% improvement) and 95.8% (a 0.9% improvement) of phoneme boundaries fell within 20 ms and 40 ms of the manually segmented boundaries, respectively.
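A spectral variation function of the kind used above is commonly defined as the distance between the spectral feature vectors on either side of each frame, with peaks marking likely phoneme boundaries. The paper's exact features and window are not given here, so this is only a generic sketch:

```python
import numpy as np

def spectral_variation(frames):
    """SVF[t] = Euclidean distance between the feature vectors of the
    preceding and following frames; peaks suggest phoneme boundaries."""
    frames = np.asarray(frames, dtype=float)
    svf = np.zeros(len(frames))
    svf[1:-1] = np.linalg.norm(frames[2:] - frames[:-2], axis=1)
    return svf

def svf_peaks(svf):
    """Indices that are strict local maxima of the SVF curve."""
    return [t for t in range(1, len(svf) - 1)
            if svf[t] > svf[t - 1] and svf[t] > svf[t + 1]]

# Toy 1-D feature track with one spectral change between frames 1 and 3.
svf = spectral_variation([[0], [0], [1], [2], [2]])
```

Restricting the Viterbi path to frames near these peaks, as in method (3) above, keeps the aligner from placing boundaries inside spectrally stable regions.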


Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1807-1822
    • /
    • 2023
  • The innovation and rapidly increasing functionality of user-friendly smartphones have encouraged people to take snapshots of scenes in work environments or during travel. Formal signboards, placed for marketing purposes, are enriched with text designed to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of such scene-text photos make the problem more difficult, and Arabic-language scene text adds further complications. The method described in this paper uses a two-phase approach to extract Arabic text, with word-boundary awareness, from scene images with varying text orientations. The first phase uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by a traditional two-layer neural network for recognition. This study also presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images: a dataset of 10k cropped images containing Arabic text was created for the detection phase, and a dataset of 127k Arabic characters for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach offers high flexibility in identifying complex Arabic scene-text images, such as texts that are arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers can further improve scene-text processing in any language by enhancing the functionality of the VGG-16-based model.

Examining Line-breaks in Korean Language Textbooks: the Promotion of Word Spacing and Reading Skills (한국어 교재의 행 바꾸기 -띄어쓰기와 읽기 능력의 계발 -)

  • Cho, In Jung;Kim, Danbee
    • Journal of Korean language education
    • /
    • v.23 no.1
    • /
    • pp.77-100
    • /
    • 2012
  • This study investigates issues relating to text segmenting, in particular line breaks, in Korean language textbooks. Research on L1 and L2 reading has shown that readers process texts by chunking (grouping words into phrases or meaningful syntactic units); therefore, phrase-cued texts are helpful for readers whose syntactic knowledge has not yet fully developed. In other words, it is important for language textbooks, particularly those for beginner and intermediate learners, to avoid awkward syntactic divisions at the end of a line. According to our analysis of a number of major Korean language textbooks for beginner-level learners, however, many textbooks display line breaks at awkward syntactic divisions. Moreover, some textbooks frequently split a single word (or eojeol in the case of Korean) between lines. This can hamper not only learners' acquisition of the rules for spacing between eojeols in Korean, but also their development of automatic word recognition, an essential part of the reading process. Based on the findings of our textbook analysis and of existing research on reading, this study suggests ways to overcome awkward line breaks in Korean language textbooks.

Comparison of Word Extraction Methods Based on Unsupervised Learning for Analyzing East Asian Traditional Medicine Texts (한의학 고문헌 텍스트 분석을 위한 비지도학습 기반 단어 추출 방법 비교)

  • Oh, Junho
    • Journal of Korean Medical classics
    • /
    • v.32 no.3
    • /
    • pp.47-57
    • /
    • 2019
  • Objectives : We aim to assist in choosing an appropriate word-extraction method for analyzing East Asian traditional medicine texts based on unsupervised learning. Methods : To rank substrings, we tested one method (BE: branching entropy) based on exterior boundary values, three methods (CS: cohesion score, TS: t-score, SL: simple-ll) based on interior boundary values, and six methods obtained by combining them (BExSL, BExTS, BExCS, CSxTS, CSxSL, TSxSL). Results : With Miss Rate (MR) as the criterion, the error was smallest when TS and SL were used together and largest when CS was used alone. When the number of segmented texts was applied as a weight, SL performed best and BE alone performed worst. Conclusions : Unsupervised-learning-based word extraction can be used to analyze texts without a prepared vocabulary. When using this method, SL, or the combination of SL and TS, should be considered first.
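The exterior-boundary method mentioned above (BE: branching entropy) can be sketched concisely: a candidate substring is scored by the entropy of the characters that follow it in the corpus, on the idea that entropy rises at true word boundaries. The toy corpus below stands in for the classical texts and is purely illustrative:

```python
import math
from collections import Counter

def branching_entropy(corpus, prefix):
    """Entropy of the character distribution right after `prefix`;
    a sharp rise in entropy suggests a word boundary."""
    nxt = Counter()
    for text in corpus:
        pos = text.find(prefix)
        while pos != -1:
            end = pos + len(prefix)
            if end < len(text):
                nxt[text[end]] += 1
            pos = text.find(prefix, pos + 1)
    total = sum(nxt.values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log2(c / total) for c in nxt.values())

corpus = ["abxa", "abya", "abza"]   # "ab" behaves like a word here
```

Inside the toy word ("a" is always followed by "b") the entropy is 0, while after the complete unit "ab" it jumps to log2(3); that jump is the boundary signal BE ranks substrings by.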

Segmentation of Chinese Fashion Product Consumers according to Internet Shopping Values and Their Online Word-of-Mouth and Purchase Behavior (인터넷 쇼핑가치에 따른 중국 패션제품 소비자 세분집단의 온라인 구전 및 구매행동)

  • Yin, Mei;Yu, Haekyung;Hwang, Seona
    • Fashion & Textile Research Journal
    • /
    • v.18 no.3
    • /
    • pp.317-326
    • /
    • 2016
  • The main purposes of this study were to segment Chinese consumers who purchase fashion products through internet commerce according to their internet shopping values, to compare the segments' online word-of-mouth acceptance and dissemination behavior, and to examine their demographic characteristics and purchase behavior. 715 questionnaires were collected through an internet survey from January 19 to March 16, 2015, and a total of 488 were used for the final data analysis. The respondents were men and women aged twenty to thirty-nine living throughout China. Hedonic and utilitarian shopping values were identified through factor analysis, and based on these values the respondents were categorized into four groups: an ambivalent shopping value group, a hedonic shopping value group, a utilitarian shopping value group, and an indifferent group. The groups differed significantly in online word-of-mouth acceptance as well as in dissemination level and motivation; overall, the ambivalent group showed high word-of-mouth acceptance and high dissemination motivation. The groups also showed significant differences in clothing selection criteria, frequently used internet shopping sites, online clothing shopping frequency, and information sources, and they differed in age, residential area, education level, occupation, and income. However, there were no significant differences in gender or marital status among the groups.

Segmentation of Korean Compound Nouns Using Semantic Category Analysis of Unregistered Nouns (미등록어의 의미 범주 분석을 이용한 복합명사 분해)

  • Kang Yu-Hwan;Seo Young-Hoon
    • Journal of Information Technology Applications and Management
    • /
    • v.11 no.4
    • /
    • pp.95-102
    • /
    • 2004
  • This paper proposes a method of segmenting compound nouns that include unregistered nouns into a correct combination of unit nouns, using characteristics of person names, loanwords, and location names. A Korean person's name is generally composed of three syllables; only a relatively small number of syllables are used as surnames, and the combinations allowed for the second and third syllables are somewhat restricted. Many person names also appear with clue words in compound nouns. Most loanwords contain one or more syllables that cannot appear in native Korean words, or syllable sequences unusual in Korean words. Location names are generally used in compound nouns with clue words designating districts. Using these characteristics to analyze compound nouns not only makes segmentation more accurate but also helps natural language systems exploit the semantic categories of those unregistered nouns. Experimental results show that the precision of our method is approximately 98% on average; the precision of person-name and loanword recognition is about 94% and 92%, respectively.
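The person-name characteristics listed above (three syllables, a closed surname set, honorific clue words) lend themselves to a simple rule. The surname and clue-word inventories below are tiny illustrative assumptions; the paper's actual resources are much larger:

```python
# Assumed, deliberately small inventories for illustration only.
SURNAMES = {"김", "이", "박", "최", "정"}
NAME_CLUES = {"씨", "님", "군", "양"}       # honorific clue words

def looks_like_person_name(noun):
    """Heuristic: strip a trailing honorific clue word if present, then
    accept 3-syllable strings that start with a common surname."""
    for clue in NAME_CLUES:
        if noun.endswith(clue) and len(noun) > len(clue):
            noun = noun[: -len(clue)]
            break
    return len(noun) == 3 and noun[0] in SURNAMES
```

Flagging such substrings as person names before dictionary lookup is what lets the decomposition handle unregistered nouns instead of failing on them.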
