• Title/Summary/Keyword: text extraction


A study on Similarity analysis of National R&D Programs using R&D Project's technical classification (R&D과제의 기술분류를 이용한 사업간 유사도 분석 기법에 관한 연구)

  • Kim, Ju-Ho;Kim, Young-Ja;Kim, Jong-Bae
    • Journal of Digital Contents Society / v.13 no.3 / pp.317-324 / 2012
  • Recently, resolving overlap and similarity between national R&D programs has been emphasized from the standpoint of R&D investment efficiency. However, existing approaches such as text-based similarity search using keywords extracted from R&D projects have reached their limits because of variations in document quality. To overcome these limitations of keyword-based text similarity search, this study examines the use of R&D projects' technical classifications as an alternative method for analyzing similarity between national R&D programs. To this end, the Science and Technology Standard Classification codes of R&D projects, collected during the national R&D survey and analysis, are extracted and a distinct vector model is created for each R&D program. The reliability of the approach is verified by computing cosine-based and Euclidean distance-based similarities and comparing them with text-based similarity results.
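
To make the vector comparison above concrete, here is a minimal Python sketch that builds a count vector over classification codes for each program and computes both similarity measures; the classification codes and counts are illustrative placeholders, not data from the study.

```python
import math
from collections import Counter

def program_vector(project_codes, all_codes):
    """Build a count vector over standard technology classification codes."""
    counts = Counter(project_codes)
    return [counts.get(code, 0) for code in all_codes]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def euclidean_similarity(u, v):
    """Convert Euclidean distance into a similarity score in (0, 1]."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return 1.0 / (1.0 + dist)

# Illustrative classification codes attached to the projects of two programs.
all_codes = ["EE01", "EE02", "LC05", "NB03"]
program_a = program_vector(["EE01", "EE01", "LC05"], all_codes)
program_b = program_vector(["EE01", "EE02", "LC05", "LC05"], all_codes)

print(cosine_similarity(program_a, program_b))
print(euclidean_similarity(program_a, program_b))
```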

An Effective Method for Replacing Caption in Video Images (비디오 자막 문자의 효과적인 교환 방법)

  • Chun Byung-Tae;Kim Sook-Yeon
    • Journal of the Korea Society of Computer and Information / v.10 no.2 s.34 / pp.97-104 / 2005
  • Caption texts are frequently inserted into produced video images to help the TV audience's understanding. In film, caption texts can be replaced without any loss of the original image because they are stored on their own track. In earlier methods for video, the new texts were inserted into the caption area after it had been filled with a solid color to remove the existing caption; however, this approach loses the original image in the caption area, which is problematic for the TV audience. In this paper, we propose a new method that replaces the caption text after recovering the original image in the caption area. In the experiments, results on complex images show some distortion after recovering the original image, but most results show a clean caption text on the recovered image. The new method is thus shown to be effective for replacing caption texts in video images.
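
A rough OpenCV sketch of the two-step idea (recover the image under the caption region, then draw the new caption) is given below; the file names and mask location are placeholders, and generic inpainting stands in for the paper's own recovery algorithm.

```python
import cv2
import numpy as np

# Load a frame containing the original caption (the path is a placeholder).
frame = cv2.imread("frame_with_caption.png")
h, w = frame.shape[:2]

# Mask covering the caption area (here a band near the bottom; adjust as needed).
mask = np.zeros((h, w), dtype=np.uint8)
mask[int(h * 0.85):, :] = 255

# Recover the original image in the caption area (generic inpainting stands in
# for the recovery step described in the paper).
recovered = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# Overlay the replacement caption on the recovered frame.
cv2.putText(recovered, "New caption text", (40, h - 30),
            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (255, 255, 255), 2, cv2.LINE_AA)
cv2.imwrite("frame_with_new_caption.png", recovered)
```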


A DOM-Based Fuzzing Method for Analyzing Seogwang Document Processing System in North Korea (북한 서광문서처리체계 분석을 위한 Document Object Model(DOM) 기반 퍼징 기법)

  • Park, Chanju;Kang, Dongsu
    • KIPS Transactions on Computer and Communication Systems / v.8 no.5 / pp.119-126 / 2019
  • Representative software developed and used in North Korea includes the Red Star operating system and its internal application software. However, most existing research on North Korean software covers only installation procedures and analysis of general execution screens. File fuzzing is a typical method for identifying security vulnerabilities in software. In this paper, we use file fuzzing to analyze security vulnerabilities in the Seogwang Document Processing System used in North Korea. We propose analyzing the OpenDocument Text (ODT) files produced by the Seogwang Document Processing System, extracting nodes based on the Document Object Model (DOM) to determine test targets, and generating mutation files through insertion and substitution, which increases the number of crashes detected within the same testing time.
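
A simplified sketch of DOM-based mutation for ODT files follows: unpack content.xml, pick a text node, apply insertion or substitution, and repack the archive. The payload, seed file, and node selection strategy are illustrative assumptions, not the authors' exact generator.

```python
import random
import zipfile
import xml.etree.ElementTree as ET

def mutate_odt(seed_path, out_path, payload="A" * 4096):
    """Create a mutated ODT by rewriting text nodes of content.xml (DOM-based)."""
    with zipfile.ZipFile(seed_path) as zin:
        root = ET.fromstring(zin.read("content.xml"))

    # Pick one text-carrying node and mutate it: substitution or insertion.
    nodes = [el for el in root.iter() if el.text]
    target = random.choice(nodes)
    if random.random() < 0.5:
        target.text = payload                  # substitution
    else:
        target.text = target.text + payload    # insertion

    mutated = ET.tostring(root, encoding="utf-8", xml_declaration=True)

    # Rebuild the archive with the mutated content.xml; other entries are copied.
    with zipfile.ZipFile(seed_path) as zin, \
         zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = mutated if item.filename == "content.xml" else zin.read(item.filename)
            zout.writestr(item, data)

mutate_odt("seed.odt", "mutated_000.odt")  # seed and output paths are placeholders
```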

A Study on the Channel Normalized Pitch Synchronous Cepstrum for Speaker Recognition (채널에 강인한 화자 인식을 위한 채널 정규화 피치 동기 켑스트럼에 관한 연구)

  • 김유진;정재호
    • The Journal of the Acoustical Society of Korea / v.23 no.1 / pp.61-74 / 2004
  • In this paper, a content- and speaker-dependent cepstrum extraction method and a channel normalization method that minimizes the loss of speaker characteristics in the cepstrum are proposed for channel-robust speaker recognition. The proposed extraction method creates a cepstrum by pitch-synchronous analysis using the speaker's inherent pitch. The resulting cepstrum, called the "pitch synchronous cepstrum" (PSC), represents the impulse response of the vocal tract more accurately in voiced speech. The PSC can also compensate for channel distortion, because pitch is more robust to channel conditions than the speech spectrum. The proposed channel normalization method, the "formant-broadened pitch synchronous CMS" (FBPSCMS), applies the formant-broadened CMS to the PSC and improves the accuracy of intra-frame processing. We compared text-independent closed-set speaker identification on 56 female and 112 male speakers using the TIMIT and NTIMIT databases, respectively. The results show that the pitch synchronous cepstrum improves the error reduction rate by up to 7.7% compared with the conventional short-time cepstrum, and that the error rates of FBPSCMS are lower and more stable than those of pole-filtered CMS.
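
The following is a toy NumPy sketch of the general idea, not the authors' FBPSCMS: it estimates a pitch period per coarse frame, computes a cepstrum over a two-period segment, and applies plain cepstral mean subtraction as the channel normalization; the formant-broadening step of the paper is omitted and the signal is synthetic.

```python
import numpy as np

def pitch_period(frame, fs, fmin=60.0, fmax=400.0):
    """Crude pitch period estimate (in samples) from the autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    return lo + int(np.argmax(ac[lo:hi]))

def real_cepstrum(segment, n_coef=20):
    """Low-order real cepstrum of one pitch-synchronous segment."""
    spectrum = np.fft.rfft(segment * np.hamming(len(segment)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    return np.fft.irfft(log_mag)[:n_coef]

def pitch_synchronous_cepstra(signal, fs, frame_len=0.03):
    """Slide a coarse frame, estimate its pitch, and analyse a two-period segment."""
    hop = int(frame_len * fs)
    cepstra = []
    for start in range(0, len(signal) - hop, hop):
        period = pitch_period(signal[start:start + hop], fs)
        seg = signal[start:start + 2 * period]          # two pitch periods
        if len(seg) == 2 * period:
            cepstra.append(real_cepstrum(seg))
    return np.array(cepstra)

def cms(cepstra):
    """Cepstral mean subtraction as a simple channel-normalization stand-in."""
    return cepstra - cepstra.mean(axis=0)

fs = 16000
t = np.arange(fs) / fs
voiced = np.sign(np.sin(2 * np.pi * 120 * t)) * 0.5  # synthetic 120 Hz "voiced" signal
normalized = cms(pitch_synchronous_cepstra(voiced, fs))
print(normalized.shape)
```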

Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems / v.9 no.1 / pp.117-140 / 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. Because characters in Hindi text are not separated as they are in English, Optical Character Recognition (OCR) systems developed for Hindi have very poor recognition rates. In this paper we propose an OCR system for printed Hindi text in Devanagari script that uses an Artificial Neural Network (ANN) to improve recognition efficiency. One of the major reasons for poor recognition rates is error in character segmentation; the presence of touching characters in scanned documents further complicates segmentation and makes it difficult to design an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and finally classification and recognition are the major steps followed by a general OCR system. The preprocessing tasks considered in this paper are conversion of gray-scale images to binary images, image rectification, and segmentation of the document's textual content into paragraphs, lines, words, and finally basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by the neural classifier. Three feature extraction techniques, a histogram of projection based on mean distance, a histogram of projection based on pixel value, and vertical zero crossing, are used to improve the recognition rate; these techniques are powerful enough to extract features even from distorted characters and symbols. For the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts, and a correct recognition rate of approximately 90% is achieved.
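
Below is a minimal sketch of the pipeline shape: simple row/column projection histograms as features feeding a two-hidden-layer back-propagation network (here scikit-learn's MLPClassifier). The paper's specific features (mean-distance projection, pixel-value projection, vertical zero crossing) and its training corpus are not reproduced; the symbol images and labels here are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def projection_features(symbol):
    """Horizontal/vertical projection histograms of a binarised symbol image."""
    binary = (symbol > 0).astype(np.float32)
    h_proj = binary.sum(axis=1) / binary.shape[1]   # row-wise ink density
    v_proj = binary.sum(axis=0) / binary.shape[0]   # column-wise ink density
    return np.concatenate([h_proj, v_proj])

# Placeholder data: `symbols` is an (N, 32, 32) array of segmented basic symbols
# and `labels` their class ids; real training data would come from the OCR corpus.
rng = np.random.default_rng(0)
symbols = (rng.random((200, 32, 32)) > 0.7).astype(np.uint8)
labels = rng.integers(0, 10, size=200)

features = np.stack([projection_features(s) for s in symbols])

# Back-propagation network with two hidden layers, as in the paper's classifier.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(features, labels)
print(clf.score(features, labels))
```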

KONG-DB: Korean Novel Geo-name DB & Search and Visualization System Using Dictionary from the Web (KONG-DB: 웹 상의 어휘 사전을 활용한 한국 소설 지명 DB, 검색 및 시각화 시스템)

  • Park, Sung Hee
    • Journal of the Korean Society for Information Management / v.33 no.3 / pp.321-343 / 2016
  • This study aimed to design a semi-automatic web-based pilot system that 1) builds a database of geo-names from Korean novels, 2) updates the database through automatic geo-name extraction so that it remains scalable, and 3) retrieves and visualizes the usage of old geo-names on a map. Extracting geo-names from novels, many of which are now obsolete, is difficult because obtaining a corpus for a training dataset is burdensome. To build the training corpus, an administration tool, an HTML crawler and parser written in Python, collected geo-names and their usages from a vocabulary dictionary of Korean New Novels, enough to train a named entity tagger that can extract even geo-names that do not appear in the training corpus. With the training corpus and the automatic extraction tool, the geo-name database was made scalable. In addition, the system can visualize geo-names on the map. The study also designed and implemented the prototype and empirically verified the validity of the pilot system. Lastly, items to be improved are addressed.
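
A minimal sketch of the crawler/parser idea is shown below, using requests and BeautifulSoup. The URL, HTML structure, and CSS selectors are placeholders, since the actual dictionary pages are not described here.

```python
import requests
from bs4 import BeautifulSoup

def crawl_geoname_entries(list_url):
    """Fetch one dictionary page and collect (geo-name, usage sentence) pairs.

    The URL and CSS selectors below are placeholders; real dictionary pages
    for Korean New Novels would need their own selectors.
    """
    html = requests.get(list_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    entries = []
    for item in soup.select("li.entry"):            # placeholder selector
        name = item.select_one("span.headword")     # placeholder selector
        usage = item.select_one("p.example")        # placeholder selector
        if name and usage:
            entries.append((name.get_text(strip=True), usage.get_text(strip=True)))
    return entries

# Each (geo-name, sentence) pair becomes a labelled example for the NE tagger.
pairs = crawl_geoname_entries("https://example.org/dictionary?page=1")  # placeholder URL
for geo_name, sentence in pairs:
    print(geo_name, "\t", sentence)
```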

A Block Classification and Rotation Angle Extraction for Document Image (문서 영상의 영역 분류와 회전각 검출)

  • Mo, Moon-Jung;Kim, Wook-Hyun
    • The KIPS Transactions: Part B / v.9B no.4 / pp.509-516 / 2002
  • This paper proposes an efficient algorithm for recognizing mixed document images consisting of images, text, tables, and straight lines. The system is composed of three steps: detecting the rotation angle to correct skewed images, erasing unnecessary background regions, and classifying each component contained in the document image. The algorithm performs a preprocessing step that detects the rotation angle and corrects the document accordingly, in order to minimize the error rate caused by document skew. The rotation angle is detected using only the horizontal and vertical components of the document image, and calculation time is minimized by erasing unnecessary background regions during component detection. In the next step, the various components of the document image, such as image, text, table, and line areas, are classified. We applied this method to various document images to evaluate the performance of the document recognition system and obtained successful experimental results.
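
As a rough illustration of the skew-correction step, here is a compact OpenCV sketch that estimates the rotation angle from near-horizontal line components via the probabilistic Hough transform and then rotates the page back; the file names, thresholds, and line-filtering rule are assumptions, and the paper's own angle-detection method may differ.

```python
import cv2
import numpy as np

def estimate_rotation_angle(image_path):
    """Estimate document skew from near-horizontal straight-line components."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=gray.shape[1] // 4, maxLineGap=10)
    if lines is None:
        return 0.0
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 45:                 # keep near-horizontal components
            angles.append(angle)
    return float(np.median(angles)) if angles else 0.0

def deskew(image_path, out_path):
    """Rotate the document by the detected angle before block classification."""
    angle = estimate_rotation_angle(image_path)
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    cv2.imwrite(out_path, cv2.warpAffine(img, matrix, (w, h),
                                         borderValue=(255, 255, 255)))

deskew("document.png", "document_deskewed.png")  # paths are placeholders
```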

A Study on the Semiautomatic Construction of Domain-Specific Relation Extraction Datasets from Biomedical Abstracts - Mainly Focusing on a Genic Interaction Dataset in Alzheimer's Disease Domain - (바이오 분야 학술 문헌에서의 분야별 관계 추출 데이터셋 반자동 구축에 관한 연구 - 알츠하이머병 유관 유전자 간 상호 작용 중심으로 -)

  • Choi, Sung-Pil;Yoo, Suk-Jong;Cho, Hyun-Yang
    • Journal of Korean Library and Information Science Society / v.47 no.4 / pp.289-307 / 2016
  • This paper introduces a software system and process model for semi-automatically constructing domain-specific relation extraction datasets. The system takes a set of terms such as genes, proteins, and diseases as input, and by exploiting a large biological interaction database it generates a set of term pairs that are used as queries for retrieving sentences containing those pairs from scientific literature databases. To assess the usefulness of the proposed system, this paper applies it to constructing a genic interaction dataset for the Alzheimer's disease domain, extracting 3,510 interaction-related sentences using 140 gene names in the area. The results of this case study indicate that the system and process can greatly improve the efficiency of dataset construction in various subfields of biomedical research.
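
A small Python sketch of the pipeline idea follows: pair up terms drawn from an interaction database, then keep only retrieved sentences that mention both members of a pair. The gene names, the example sentences, and the retrieval step are placeholders; a real pipeline would query literature databases for each pair.

```python
from itertools import combinations

# Placeholder term list; the study used 140 Alzheimer's-related gene names.
genes = ["APOE", "APP", "PSEN1", "MAPT"]

# Placeholder "retrieved sentences"; a real pipeline would query scientific
# literature databases with each term pair.
sentences = [
    "APOE genotype modulates the effect of APP processing in Alzheimer's disease.",
    "PSEN1 mutations were reviewed in a separate cohort.",
    "MAPT and APP expression were jointly analysed in hippocampal tissue.",
]

# 1) Generate candidate term pairs from the interaction-database terms.
pairs = list(combinations(genes, 2))

# 2) Keep sentences mentioning both terms of a pair as candidate relation examples.
dataset = []
for gene_a, gene_b in pairs:
    for sent in sentences:
        if gene_a in sent and gene_b in sent:
            dataset.append((gene_a, gene_b, sent))

for record in dataset:
    print(record)
```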

A Design on Informal Big Data Topic Extraction System Based on Spark Framework (Spark 프레임워크 기반 비정형 빅데이터 토픽 추출 시스템 설계)

  • Park, Kiejin
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.521-526 / 2016
  • Because online informal text data are massive in volume and unstructured in nature, traditional relational data model technologies are limited when applied to storing and analyzing them. Moreover, analyzing social users' real-time reactions from dynamically generated, massive social data is hard to accomplish. In this paper, to easily capture the semantics of massive, informal online documents with an unsupervised learning mechanism, we design and implement an automatic topic extraction system based on the distribution of the words that make up a document. The input data for the proposed system are first generated using an N-gram algorithm, which builds multi-word terms to capture the meaning of sentences more precisely, and Hadoop and Spark (an in-memory distributed computing framework) are adopted to run the topic model. In the experiments, TB-scale input data are preprocessed and the proposed topic extraction steps are applied. We conclude that the proposed system performs well, extracting meaningful topics in a timely manner because intermediate results come directly from main memory instead of being read from disk.
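
A compact PySpark sketch of this kind of pipeline (tokenize, build N-grams, vectorize, fit an LDA topic model) is shown below; the input documents and parameter values are illustrative, and the paper's exact preprocessing is not reproduced.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, NGram, CountVectorizer
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("topic-extraction").getOrCreate()

# Illustrative informal documents; the paper processed TB-scale social text.
docs = spark.createDataFrame([
    (0, "battery life of the new phone is surprisingly good"),
    (1, "new phone camera quality disappointed many users"),
    (2, "battery drains fast when the camera app is running"),
], ["id", "text"])

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
bigrams = NGram(n=2, inputCol="words", outputCol="ngrams").transform(tokens)

# Vectorise the N-grams and fit an LDA topic model on the distributed data.
cv_model = CountVectorizer(inputCol="ngrams", outputCol="features").fit(bigrams)
features = cv_model.transform(bigrams)
lda_model = LDA(k=2, maxIter=20).fit(features)

# Show the top N-grams per extracted topic.
vocab = cv_model.vocabulary
for row in lda_model.describeTopics(3).collect():
    print([vocab[i] for i in row.termIndices])

spark.stop()
```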

Sinus floor elevation and simultaneous implant placement in fresh extraction sockets: a systematic review of clinical data

  • Ekhlasmandkermani, Mehdi;Amid, Reza;Kadkhodazadeh, Mahdi;Hajizadeh, Farhad;Abed, Pooria Fallah;Kheiri, Lida;Kheiri, Aida
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.47 no.6 / pp.411-426 / 2021
  • Combining different procedures to reduce the number of surgical sessions and patient discomfort in implant placement and sinus floor elevation has been recommended, and evidence supports good outcomes. The aim of this study was to review the results of clinical studies on sinus floor elevation through extraction sockets and simultaneous immediate posterior implant placement. An electronic search was carried out in PubMed, Scopus, and Web of Science to find English articles published in or before August 2020. A manual search was also performed. Titles, abstracts, and the full-text of the retrieved articles were studied. Thirteen studies met our eligibility criteria: 6 retrospective case series, 3 case reports, 2 prospective cohort case-series, 1 prospective case series, and 1 randomized controlled trial. Overall, 306 implants were placed; 2 studies reported implant survival rates of 91.7% and 98.57%. The others either did not report the survival rate or reported 100% survival. Sinus floor elevation through a fresh extraction socket and simultaneous immediate implant placement appears to be a predictable modality with a high success rate. However, proper case selection and the expertise of the clinician play fundamental roles in the success of such complex procedures.