• Title/Summary/Keyword: Text normalization


A Study on Text Summarize Automation Using Document Length Normalization (문서 길이 정규화를 이용한 문서 요약 자동화에 관한 연구)

  • 이재훈;김영천;이성주
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2001.05a / pp.228-230 / 2001
  • Due to the rapid growth of the WWW (World Wide Web) and online information services, more and more information has become available and accessible online. This flood of information has created a problem of information overload. Under such overload, important decisions must be made under time constraints on the basis of all available information. Text summarization automation is essential for handling this problem. In this paper, we show that by first applying document length normalization to documents obtained through information retrieval, document information that is more relevant to the query and more reliable can be obtained.

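
The abstract above does not give the normalization formula itself; the following is a minimal sketch of one standard scheme, pivoted document length normalization, that illustrates the general idea. The function names and the slope value are illustrative, not taken from the paper.

```python
# Sketch of pivoted document length normalization: the normalization factor
# is 1.0 at the average length, smaller for shorter documents and larger for
# longer ones, which counteracts the tendency of raw scores to favor length.

def pivoted_norm(doc_len: float, avg_len: float, slope: float = 0.25) -> float:
    """Normalization factor; `slope` controls how strongly length is penalized."""
    return (1.0 - slope) + slope * (doc_len / avg_len)

def normalized_score(raw_score: float, doc_len: float, avg_len: float) -> float:
    """Divide a raw retrieval score by the length-normalization factor."""
    return raw_score / pivoted_norm(doc_len, avg_len)
```

A document of exactly average length is left unchanged; a document twice the average is divided by 1.25 with the default slope.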

On using the LPC parameter for Speaker Identification (LPC에 의한 화자 식별)

  • 조병모
    • Proceedings of the Acoustical Society of Korea Conference / 1987.11a / pp.82-85 / 1987
  • Preliminary results of using the LPC parameter for the text-independent speaker identification problem are presented. The identification process uses the log likelihood ratio as a distance measure and dynamic programming for time normalization. To generate the database for the experiments, utterances were recorded ten times. Experimental results show 99.4% identification accuracy; incorrect identifications were made when the speaker used a dialect.

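
The time-normalization step mentioned above is commonly implemented with dynamic time warping (DTW). A minimal sketch, using squared Euclidean frame distance as a stand-in for the paper's log likelihood ratio measure:

```python
# Dynamic time warping: align two sequences of LPC feature vectors so that
# utterances spoken at different speeds can be compared frame by frame.

def dtw(seq_a, seq_b, dist):
    """Return the minimum accumulated alignment cost between two sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # cost[i][j] = min cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def sq_euclid(x, y):
    """Placeholder frame distance; the paper uses a log likelihood ratio."""
    return sum((a - b) ** 2 for a, b in zip(x, y))
```

Substituting a likelihood-ratio-based `dist` recovers the setup described in the abstract.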

Correction of Signboard Distortion by Vertical Stroke Estimation

  • Lim, Jun Sik;Na, In Seop;Kim, Soo Hyung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.9 / pp.2312-2325 / 2013
  • In this paper, we propose a method that corrects the distortion of the text area in Korean signboard images, as a preprocessing step to improve character recognition. Perspective distortion in Korean signboard text may cause a low recognition rate. The proposed method consists of four main steps with eight sub-steps: detection of potential vertical components, detection of vertical components, text-boundary estimation, and distortion correction. First, potential vertical line component detection consists of four sub-steps: edge detection for each connected component, pixel distance normalization along the edge, dominant-point detection along the edge, and removal of horizontal components. Second, vertical line component detection is composed of removal of diagonal components and extraction of vertical line components. Third, the boundary estimation step detects the left and right boundary lines. Finally, the distortion of the text image is corrected by a bilinear transformation based on the estimated boundary. We compared OCR recognition rates before and after applying the proposed algorithm. The recognition rates of the distortion-corrected signboard images are 29.63% and 21.9% higher at the character and text-unit levels, respectively, than those of the original images.
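
As an illustration of the final correction step, a bilinear transformation maps a point (u, v) of the unit square to a point inside the quadrilateral spanned by the four estimated boundary corners; inverting this map over an output grid resamples the distorted text region into a rectangle. A sketch under the assumption of a top-left, top-right, bottom-right, bottom-left corner ordering:

```python
# Bilinear mapping from the unit square to an arbitrary quadrilateral:
# interpolate along the top and bottom edges, then between the two results.

def bilinear_map(u, v, corners):
    """corners = (top_left, top_right, bottom_right, bottom_left), each (x, y)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    top = ((1 - u) * x0 + u * x1, (1 - u) * y0 + u * y1)
    bot = ((1 - u) * x3 + u * x2, (1 - u) * y3 + u * y2)
    return ((1 - v) * top[0] + v * bot[0], (1 - v) * top[1] + v * bot[1])
```

When the corners form the unit square itself, the map is the identity, which is a quick sanity check for the corner ordering.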

Improving Multinomial Naive Bayes Text Classifier (다항시행접근 단순 베이지안 문서분류기의 개선)

  • 김상범;임해창
    • Journal of KIISE:Software and Applications / v.30 no.3_4 / pp.259-267 / 2003
  • Though naive Bayes text classifiers are widely used because of their simplicity, techniques for improving the performance of these classifiers have rarely been studied. In this paper, we propose and evaluate some general and effective techniques for improving the performance of the naive Bayes text classifier. We suggest document-model-based parameter estimation and document length normalization to alleviate the problems in the traditional multinomial approach to text classification. In addition, a Mutual-Information-weighted naive Bayes text classifier is proposed to increase the effect of highly informative words. Our techniques are evaluated on the Reuters-21578 and 20 Newsgroups collections, and significant improvements are obtained over the existing multinomial naive Bayes approach.
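
The document length normalization idea can be sketched as rescaling term counts so that every document has the same effective length before the multinomial log score is computed. The fixed length and the fallback probability below are illustrative, not the paper's estimates:

```python
import math

# Length-normalized multinomial naive Bayes scoring: raw counts are scaled
# to a common effective length so long documents do not dominate the score.

def score(doc_counts, class_word_probs, log_prior, norm_len=100.0):
    """Log score of one document under one class, with length normalization."""
    total = sum(doc_counts.values())
    s = log_prior
    for word, count in doc_counts.items():
        scaled = count * norm_len / total  # length-normalized count
        # Unseen words fall back to a tiny probability (stand-in for smoothing).
        s += scaled * math.log(class_word_probs.get(word, 1e-9))
    return s
```

Classification picks the class with the highest score; because every document contributes exactly `norm_len` scaled counts, documents of different lengths are scored on the same footing.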

A Study on the Extraction and Utilization of Index from Bibliographic MARC Database (서지마크 데이터베이스로부터의 색인어 추출과 색인어의 검색 활용에 관한 연구 - 경북대학교 도서관 학술정보시스템 사례를 중심으로 -)

  • Park Mi-Sung
    • Journal of Korean Library and Information Science Society / v.36 no.2 / pp.327-348 / 2005
  • The purpose of this study is to emphasize the importance of index definition and to prepare the basis for an optimal index in a bibliographic retrieval system. For this purpose, this research studied an index extraction method based on index tag definition and index normalization from the bibliographic MARC database, and analyzed the retrieval utilization rate of the extracted indexes. In the experiment, we divided the 29,219,853 indexes generated from 2,200,488 bibliographic records into text-type and code-type, and analyzed utilization rates by comparing index types against the index terms in web logs. According to the results, text-type indexes such as title, author, publication, and subject showed high utilization rates, while code-type indexes showed low utilization rates. This study therefore suggests removing unused indexes from the index definition in order to optimize the index.


Normalization in Collection Procedures of Emotional Speech by Scriptual Context (대본 내용에 의한 정서음성 수집과정의 정규화에 대하여)

  • Jo Cheol-Woo
    • Proceedings of the KSPS conference / 2006.05a / pp.123-125 / 2006
  • One of the biggest unsolved problems in emotional speech acquisition is how to create or find a situation that is close to a natural or desired emotional state in humans. We propose a method to collect emotional speech data using scriptual context. Several contexts from drama scripts were chosen by experts in the area. The contexts were divided into 6 classes according to their contents. Two actors, one male and one female, read the text after internalizing the emotional situations in the script.


Text Extraction by Skew Normalization and Block Split & Merge (기울기 보정과 블록 분할 합병을 통한 문자 추출)

  • 김도현;차의영;강민경
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.424-426 / 2001
  • In building a document-image understanding system that can automatically extract the required information from documents such as newspapers, magazines, official documents, and receipts, extracting the characters present in a document image is a very important preprocessing step for character recognition. At present, however, the diversity of document layouts and backgrounds makes it very difficult to find a general-purpose, generalized method. This paper deals with a method that, particularly for document images whose background contains lines or tables, corrects skew using the Hough Transform, effectively restores characters that overlap lines, and finally extracts complete character regions through a split-and-merge process on the extracted regions.

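
The Hough-transform skew-estimation step can be sketched as voting each foreground pixel into (theta, rho) bins and taking the angle of the strongest line as the skew angle. The quantization below is deliberately crude; a real implementation would use a proper accumulator array and a restricted angle range.

```python
import math

# Minimal Hough transform over point coordinates: each foreground pixel (x, y)
# votes for every line rho = x*cos(theta) + y*sin(theta) passing through it;
# the most-voted (theta, rho) bin is the dominant line, whose theta gives the
# skew angle to correct.

def dominant_angle(points, thetas=None, rho_step=1.0):
    """Return the angle (degrees) of the strongest line through `points`."""
    if thetas is None:
        thetas = [math.radians(d) for d in range(0, 180)]
    votes = {}
    for x, y in points:
        for t in thetas:
            rho = round((x * math.cos(t) + y * math.sin(t)) / rho_step)
            votes[(t, rho)] = votes.get((t, rho), 0) + 1
    (best_t, _), _ = max(votes.items(), key=lambda kv: kv[1])
    return math.degrees(best_t)
```

For a perfectly horizontal text baseline the dominant normal angle is near 90 degrees; deviations from that indicate the skew to be rotated out.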

MediScore: MEDLINE-based Interactive Scoring of Gene and Disease Associations

  • Cho, Hye-Young;Oh, Bermseok;Lee, Jong-Keuk;Kim, Kuchan;Koh, InSong
    • Genomics & Informatics / v.2 no.3 / pp.131-133 / 2004
  • MediScore is an information retrieval system that helps to search for the set of genes associated with a specific disease, or the set of diseases associated with a specific gene. Despite recent improvements in natural language processing (NLP) and other text mining approaches to searching for disease-associated genes, many false positive results arise due to the diversity of exceptional cases as well as ambiguities in gene names. In order to overcome the weak points of current text mining approaches, MediScore introduces statistical normalization, based on a binomial-to-normal distribution approximation, which corrects inaccurate scores caused by common words not representing genes, and interactive rescoring by the user to remove false positive results. Interactive rescoring includes individual alias scoring for each gene to remove false gene synonyms, referring to MEDLINE abstracts, and cross-referencing between OMIM and other related information.
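
The binomial-to-normal approximation described above amounts to converting a raw co-occurrence count into a z-score against the word's background frequency, so that common words with high background probability are not over-scored. A sketch with illustrative parameter names (the system's actual inputs are not specified in the abstract):

```python
import math

# Normal approximation to Binomial(n, p): a count is scored by how many
# standard deviations it sits above the expected background count, so a
# frequent common word needs a much larger raw count to score highly.

def z_score(count, n_docs, background_p):
    """Standardized score of `count` occurrences among `n_docs` documents."""
    mean = n_docs * background_p
    std = math.sqrt(n_docs * background_p * (1.0 - background_p))
    return (count - mean) / std
```

A gene name with background probability 0.5 appearing in 60 of 100 abstracts scores z = 2.0, while the same 60 hits for a word with a higher background rate would score far lower.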

Improving A Text Independent Speaker Identification System By Frame Level Likelihood Normalization (프레임단위유사도정규화를 이용한 문맥독립화자식별시스템의 성능 향상)

  • 김민정;석수영;정현열;정호열
    • Proceedings of the IEEK Conference / 2001.09a / pp.487-490 / 2001
  • In this paper, to improve the performance of an existing real-time text-independent speaker identification system based on Gaussian Mixture Models, we apply likelihood normalization, which has shown good results in speaker verification systems, to a speaker identification system, and report the results of recognition experiments. The system consists of a speaker-model generation stage and a speaker identification stage. In the model generation stage, speaker models are built with GMMs (Gaussian Mixture Models), which represent the acoustic characteristics of a speaker's utterances well, and the GMM parameters are optimized using MLE (Maximum Likelihood Estimation). In the identification stage, frame-level likelihoods are computed from the training and test data using ML (Maximum Likelihood). The computed likelihoods are converted to a score (SC) through a likelihood normalization process, and the speaker with the highest score is selected as the identified speaker. Text-independent sentences were used as the utterance type. For the recognition experiments, the ETRI445 and KLE452 databases were used, and only cepstral coefficients and their regression coefficients were used as feature parameters. Experiments were performed with varying numbers of enrolled speakers, using both the conventional speaker identification method and the frame-level likelihood normalization method. The results show that the frame-level likelihood normalization method achieves higher identification rates than the conventional method as the number of enrolled speakers increases.

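
One common form of frame-level likelihood normalization (the exact formula used in the paper is not given in the abstract) normalizes each frame's log-likelihood for a speaker by the best competing speaker's log-likelihood on the same frame, then averages over frames:

```python
# Frame-level likelihood normalization for speaker identification:
# frame_ll[i][t] is the log-likelihood of frame t under speaker model i
# (values here are assumed inputs, e.g. GMM log-densities of cepstral frames).

def frame_normalized_scores(frame_ll):
    """Per-speaker score: mean over frames of (own LL - best competitor LL)."""
    n_spk = len(frame_ll)
    n_frm = len(frame_ll[0])
    scores = []
    for i in range(n_spk):
        total = 0.0
        for t in range(n_frm):
            best_other = max(frame_ll[j][t] for j in range(n_spk) if j != i)
            total += frame_ll[i][t] - best_other
        scores.append(total / n_frm)
    return scores

def identify(frame_ll):
    """Return the index of the speaker with the highest normalized score."""
    scores = frame_normalized_scores(frame_ll)
    return max(range(len(scores)), key=scores.__getitem__)
```

Normalizing per frame rather than per utterance limits the influence of a few outlier frames, which is consistent with the improvement the abstract reports for larger speaker populations.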

Hangul Encoding Standard based on Unicode (유니코드의 한글 인코딩 표준안)

  • Ahn, Dae-Hyuk;Park, Young-Bae
    • Journal of KIISE:Software and Applications / v.34 no.12 / pp.1083-1092 / 2007
  • In Unicode, two types of Hangul encoding schemes are currently in use: the "precomposed modern Hangul syllables" model and the "conjoining Hangul characters" model. The current Unicode Hangul conjoining rules allow a precomposed Hangul syllable to be a member of a syllable that includes conjoining Hangul characters; this has resulted in a number of different Hangul encoding implementations. This unfortunate problem stems from an incomplete understanding of the Hangul writing system when the normalization and encoding schemes were originally designed. In particular, the extended use of old Hangul was not taken into consideration. As a result, there are different ways to represent Hangul syllables, and this causes problems in the processing of Hangul text, for instance in searching, comparison, and sorting. In this paper, we discuss the problems with the normalization of current Hangul encodings, and suggest a single efficient rule to correctly process Hangul encoding in Unicode.
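
The two representations discussed above can be demonstrated directly with Unicode normalization: the precomposed syllable U+D55C (한) and the conjoining-jamo sequence U+1112 U+1161 U+11AB compare unequal byte-for-byte, which is exactly where inconsistent implementations break searching, comparison, and sorting.

```python
import unicodedata

# The same modern Hangul syllable in its two Unicode encodings.
precomposed = "\ud55c"             # 한 as a single precomposed code point
conjoining = "\u1112\u1161\u11ab"  # ㅎ + ㅏ + ㄴ as conjoining jamo

# A naive string comparison treats them as different text.
assert precomposed != conjoining

# NFC composes the jamo sequence into the precomposed syllable;
# NFD decomposes the precomposed syllable back into conjoining jamo.
assert unicodedata.normalize("NFC", conjoining) == precomposed
assert unicodedata.normalize("NFD", precomposed) == conjoining
```

For modern syllables NFC/NFD round-trips cleanly; the complications the paper addresses arise for old Hangul, where precomposed and conjoining characters can be mixed in one syllable.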