• Title/Summary/Keyword: Text information

4,393 search results

A Study on an Extension Compression Algorithm for Mixed Hangeul-Alphabet Text

  • Ji, Kang-yoo;Cho, Mi-nam;Hong, Sung-soo;Park, Soo-bong
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.446-449
    • /
    • 2002
  • This paper presents an improved data compression algorithm for text files that mix 2-byte completion-type Hangeul and 1-byte alphabet characters. The original LZW algorithm compresses alphabet text files efficiently but performs poorly on 2-byte completion-type Hangeul text. To address this, a compression algorithm using a 2-byte prefix field and a 2-byte suffix field in the compression table was developed, but it introduced another problem: the compression ratio for alphabet text files decreased. In this paper, we propose an improved LZW algorithm in which the compression table of the Extended LZW (ELZW) algorithm uses a 2-byte prefix field as a pointer (index) into the table and a 1-byte suffix field as a repeat counter for overlapping or recursive text data. To further increase the compression ratio, after the compression table is constructed, its entries are packed into bit strings of different lengths according to whether they represent an alphabet character, a Hangeul character, or a pointer. The proposed ELZW algorithm outperforms the 1-byte LZW algorithm by 7.0125 percent and the 2-byte LZW algorithm by 11.725 percent.
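For reference, the baseline that ELZW extends is the classic dictionary-based LZW scheme. A minimal byte-oriented sketch (generic LZW with names of our own choosing, not the authors' 2-byte Hangeul variant):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Classic byte-oriented LZW: emit a code each time the current
    phrase plus the next byte is not yet in the dictionary."""
    table = {bytes([i]): i for i in range(256)}  # initial single-byte phrases
    next_code = 256
    phrase = b""
    out = []
    for b in data:
        candidate = phrase + bytes([b])
        if candidate in table:
            phrase = candidate          # keep extending the current phrase
        else:
            out.append(table[phrase])   # emit code for the longest known phrase
            table[candidate] = next_code
            next_code += 1
            phrase = bytes([b])
    if phrase:
        out.append(table[phrase])
    return out
```

On `b"ababab"` this emits codes for `a`, `b`, then reuses the new dictionary entry for `ab` twice, which is exactly the repetition the abstract's 1-byte suffix counter exploits further.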


An effective approach to generate Wikipedia infobox of movie domain using semi-structured data

  • Bhuiyan, Hanif;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Internet Computing and Services
    • /
    • v.18 no.3
    • /
    • pp.49-61
    • /
    • 2017
  • Wikipedia infoboxes have emerged as an important structured information source on the web. Composing an infobox for an article requires a considerable amount of manual effort from the author, and because of this manual involvement infoboxes suffer from inconsistency, data heterogeneity, incompleteness, schema drift, etc. Prior work attempted to solve these problems by generating infoboxes automatically from the corresponding article text. However, many Wikipedia articles do not have enough text content from which to generate an infobox. In this paper, we present an automated approach that generates infoboxes for the movie domain of Wikipedia by extracting information from several sources on the web instead of relying on article text alone. The proposed methodology uses semantic relations in the article content together with available semi-structured information from the web. It processes the article text through several classification steps to identify the appropriate template from the large pool of templates, then extracts the information for the corresponding template attributes from the web and thus generates the infobox. A comprehensive experimental evaluation demonstrated that the proposed scheme is an effective and efficient approach to generating Wikipedia infoboxes.

A Study of Hangul Text Steganography based on Genetic Algorithm (유전 알고리즘 기반 한글 텍스트 스테가노그래피의 연구)

  • Ji, Seon-Su
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.21 no.3
    • /
    • pp.7-12
    • /
    • 2016
  • In a hostile Internet environment, steganography hides a secret message inside a cover medium to increase security; it is the complement of encryption. This paper presents a text steganography technique using Hangul text. To enhance the security level, secret messages are first encrypted with the crossover operator of a genetic algorithm and then embedded into a cover text to form the stego text, without changing its noticeable properties or structure. While keeping the embedding capacity in the cover medium at 3.69%, the experiments show that the size of the stego text increased by up to 14%.
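To give a feel for how a genetic-algorithm crossover operator can scramble a secret message before embedding, here is a toy self-inverse construction of ours (not the paper's encryption scheme); `seed` plays the role of a shared key:

```python
import random

def crossover_scramble(message: bytes, seed: int) -> bytes:
    """Illustrative single-point crossover applied to consecutive byte
    pairs of the secret message. Applying the function twice with the
    same seed restores the original, so the receiver can decrypt."""
    rng = random.Random(seed)
    data = bytearray(message)
    for i in range(0, len(data) - 1, 2):
        point = rng.randint(1, 7)        # crossover point, in bits
        mask = (1 << point) - 1          # selects the low `point` bits
        a, b = data[i], data[i + 1]
        # swap the low-order bit segments of the pair
        data[i] = (a & ~mask & 0xFF) | (b & mask)
        data[i + 1] = (b & ~mask & 0xFF) | (a & mask)
    return bytes(data)
```

Because the swap is an involution for a fixed crossover point, the same seeded routine serves as both "encrypt" and "decrypt".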

Text Summarization on Large-scale Vietnamese Datasets

  • Ti-Hon, Nguyen;Thanh-Nghi, Do
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.4
    • /
    • pp.309-316
    • /
    • 2022
  • This investigation is aimed at automatic text summarization on large-scale Vietnamese datasets. Vietnamese articles were collected from newspaper websites, and plain text was extracted to build a dataset of 1,101,101 documents. A new single-document extractive text summarization model was then proposed and evaluated on this dataset. In this summarization model, the k-means algorithm clusters the sentences of the input document using different text representations, such as BoW (bag-of-words), TF-IDF (term frequency - inverse document frequency), Word2Vec (word-to-vector), GloVe, and FastText. The summarization algorithm then uses the trained k-means model to rank the candidate sentences and builds a summary from the highest-ranked ones. The empirical F1 results were 51.91% ROUGE-1, 18.77% ROUGE-2, and 29.72% ROUGE-L, compared with 52.33% ROUGE-1, 16.17% ROUGE-2, and 33.09% ROUGE-L for a competitive abstractive model. An advantage of the proposed model is that it performs well with O(n, k, p) = O(n(k + 2/p)) + O(n log₂ n) + O(np) + O(nk²) + O(k) time complexity.
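The clustering step can be sketched as a tiny pure-Python k-means over bag-of-words vectors that picks the sentence nearest each centroid (illustrative only; the paper uses richer representations such as TF-IDF and Word2Vec, and its own ranking):

```python
import math
from collections import Counter

def bow_vectors(sentences):
    """Bag-of-words vectors over the shared vocabulary."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    return [[Counter(s.lower().split())[w] for w in vocab] for s in sentences]

def summarize(sentences, k=2, iters=10):
    """Tiny k-means over BoW vectors; the sentence nearest each final
    centroid is selected for the (extractive) summary."""
    vecs = bow_vectors(sentences)
    centroids = [vecs[i][:] for i in range(k)]   # deterministic init
    dist = math.dist
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(v, centroids[c])) for v in vecs]
        for c in range(k):
            members = [vecs[i] for i in range(len(vecs)) if labels[i] == c]
            if members:  # recompute centroid as the member mean
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    chosen = {min((i for i in range(len(vecs)) if labels[i] == c),
                  key=lambda i: dist(vecs[i], centroids[c]))
              for c in range(k) if any(lbl == c for lbl in labels)}
    return [sentences[i] for i in sorted(chosen)]
```

The returned sentences are verbatim from the input, which is what makes the method extractive rather than abstractive.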

A Study on Rhythm Information Visualization Using Syllable of Digital Text (디지털 텍스트의 음절을 이용한 운율 정보 시각화에 관한 연구)

  • Park, seon-hee;Lee, jae-joong;Park, jin-wan
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.120-126
    • /
    • 2009
  • As the information age advances, the amount of digital text keeps growing, and with it the number of visualization projects intended to make sense of it. Existing visualizations of digital text concentrate mainly on depicting topic words through stemming algorithms and word-frequency extraction, on highlighting the meaning of the text, and on connections between sentences; expression of the rhythm that conveys a text's sentiment has been insufficient. The syllable is the phonemic unit that expresses rhythm most effectively: it is the basic pronunciation unit for words, phrases, and sentences, and rhythm factors such as accent, intonation, and length are based on it. Sonority, the property most closely associated with definitions of the syllable, is produced by airflow from the lungs and realized as acoustic energy. From this perspective, this study examines the phonological definition and characteristics of the syllable as a property of digital text and investigates how to visualize rhythm through diagrams. In the experiment, digital text is converted into phonetic symbols, and rhythm information is visualized as images using the degree of resonance, which underlies rhythm in all languages, together with the syllable structure of the digital text. By visualizing syllable information, the system provides syllable information about the digital text and expresses its sentiment through diagrams, aiding the user's understanding through a systematic formulation. The aim of this study is therefore to make the rhythm of a text easy to understand and to realize the visualization of digital text.


A Term Importance-based Approach to Identifying Core Citations in Computational Linguistics Articles

  • Kang, In-Su
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.9
    • /
    • pp.17-24
    • /
    • 2017
  • Core citation recognition identifies the influential ones among the prior articles that a scholarly article cites. Previous approaches have employed citing-text occurrence information, textual similarities between the citing and cited articles, etc. This study proposes a term-based approach to core citation recognition, which exploits the importance of the individual terms appearing in an in-text citation to calculate an influence strength for each cited article. Term importance is computed using various frequency statistics, such as term frequency (tf) in the in-text citation, tf in the citing article, inverse sentence frequency in the citing article, and inverse document frequency in a collection of articles. Experiments on a previous test set of computational linguistics articles show that the term-based approach performs comparably with the previous approaches. The proposed technique could easily be extended by employing other term units such as n-grams and phrases, or by using new term-importance formulae.
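One way to realize the term-importance idea is sketched below: our own simplified weighting combining tf in the citation context, tf in the citing article, and a smoothed idf. This is an assumption-laden illustration, not the paper's exact formulae:

```python
import math
from collections import Counter

def influence_strength(citation_context: list[str], citing_article: list[str],
                       doc_freq: dict, n_docs: int) -> float:
    """Score a cited article by summing tf * idf style importances of the
    terms in its in-text citation context (illustrative weighting)."""
    tf_context = Counter(citation_context)
    tf_article = Counter(citing_article)
    score = 0.0
    for term, tf in tf_context.items():
        # smoothed inverse document frequency over the article collection
        idf = math.log((n_docs + 1) / (doc_freq.get(term, 0) + 1))
        # weight by how prominent the term also is in the citing article
        score += tf * (tf_article[term] / len(citing_article)) * idf
    return score
```

Under this weighting, a citation context dominated by rare, article-central terms yields a higher influence strength than one made of common function words.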

Keyword Automatic Extraction Scheme with Enhanced TextRank using Word Co-Occurrence in Korean Document (한글 문서의 단어 동시 출현 정보에 개선된 TextRank를 적용한 키워드 자동 추출 기법)

  • Song, KwangHo;Min, Ji-Hong;Kim, Yoo-Sung
    • 한국어정보학회:학술대회논문집
    • /
    • 2016.10a
    • /
    • pp.62-66
    • /
    • 2016
  • Extracting keywords that represent a document's content is a critical step, in terms of both accuracy and efficiency, for semantics-based document processing. However, existing studies on extracting keywords from a single document either show low accuracy or have been validated only in limited domains, making their results hard to trust. This study therefore proposes a keyword extraction scheme that is accurate and applicable to text from diverse domains: it jointly applies word co-occurrence information, a graph model, and a new algorithm that modifies the TextRank algorithm. Performance evaluation of the proposed scheme confirmed higher accuracy than existing studies.
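The baseline TextRank that the proposed scheme modifies can be sketched as follows: plain undirected TextRank over a word co-occurrence graph, with the usual window size and damping factor rather than the paper's settings:

```python
from collections import defaultdict

def textrank_keywords(words, window=2, d=0.85, iters=30, top=3):
    """Plain TextRank: link words that co-occur within `window`, then
    iterate PageRank-style scores and return the top-scoring words."""
    graph = defaultdict(set)
    for i in range(len(words)):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[i] != words[j]:
                graph[words[i]].add(words[j])
                graph[words[j]].add(words[i])
    scores = {w: 1.0 for w in graph}
    for _ in range(iters):
        # each word's score is redistributed evenly among its neighbors
        scores = {w: (1 - d) + d * sum(scores[n] / len(graph[n]) for n in graph[w])
                  for w in graph}
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

In practice the input would first be filtered to content words (e.g. nouns and adjectives); that preprocessing is omitted here.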


An Improved Coverless Text Steganography Algorithm Based on Pretreatment and POS

  • Liu, Yuling;Wu, Jiao;Chen, Xianyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1553-1567
    • /
    • 2021
  • Steganography is a current hot research topic in the area of information security and privacy protection. However, most previous steganography methods are not robust against steganalysis and attacks because they are usually carried out by modifying covers. In this paper, we propose an improved coverless text steganography algorithm based on pretreatment and Part of Speech (POS), in which Chinese character components are used as locating marks, POS is used to hide the number of keywords, and finally the retrieval of stego-texts is optimized by pretreatment. Experiments verify that our algorithm performs well in terms of embedding capacity, embedding success rate, and extraction accuracy, given appropriate lengths of the locating marks and a large-scale text database.
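The coverless principle (select a natural cover rather than modify one) can be illustrated with a toy retrieval routine. This is our simplification; the actual algorithm uses Chinese character components as locating marks and POS tags, both omitted here:

```python
def find_stego_cover(keywords, text_db):
    """Coverless idea: search a text database for a natural text that
    already contains all keywords in order, so nothing is modified."""
    for text in text_db:
        pos = 0
        for kw in keywords:
            pos = text.find(kw, pos)   # each keyword must follow the last
            if pos < 0:
                break                  # this candidate text fails
            pos += len(kw)
        else:
            return text                # all keywords matched in order
    return None                        # no suitable cover in the database
```

Because the returned text is unmodified, there is nothing for a steganalyzer to detect; the cost is that a large text database is needed to guarantee a match, which is why the paper optimizes retrieval.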

A Public Open Civil Complaint Data Analysis Model to Improve Spatial Welfare for Residents - A Case Study of Community Welfare Analysis in Gangdong District - (거주민 공간복지 향상을 위한 공공 개방 민원 데이터 분석 모델 - 강동구 공간복지 분석 사례를 중심으로 -)

  • Shin, Dongyoun
    • Journal of KIBIM
    • /
    • v.13 no.3
    • /
    • pp.39-47
    • /
    • 2023
  • This study aims to introduce a model for enhancing community well-being through the utilization of public open data. To objectively assess abstract notions of residential satisfaction, text data from complaints is analyzed. By leveraging accessible public data, costs related to data collection are minimized. Initially, relevant text data containing civic complaints is collected and refined by removing extraneous information. This processed data is then combined with meaningful datasets and subjected to topic modeling, a text mining technique. The insights derived are visualized using Geographic Information System (GIS) and Application Programming Interface (API) data. The efficacy of this analytical model was demonstrated in the Godeok/Gangil area. The proposed methodology allows for comprehensive analysis across time, space, and categories. This flexible approach involves incorporating specific public open data as needed, all within the overarching framework.

Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization

  • Yan, Wanying;Guo, Junjun
    • Journal of Information Processing Systems
    • /
    • v.16 no.4
    • /
    • pp.820-831
    • /
    • 2020
  • Extractive document summarization aims to select a few sentences from a given document while preserving its main information, but current extractive methods do not consider the sentence-information repetition problem, especially for news document summarization. In view of the importance and redundancy of news text information, in this paper we propose a neural extractive summarization approach with joint sentence semantic clipping and selection, which can effectively solve the problem of repeated sentences in news text summaries. Specifically, a hierarchical selective encoding network is constructed for both sentence-level and document-level representations, and data containing important information is extracted from the news text; a sentence-extractor strategy is then adopted for joint scoring and redundant-information clipping. In this way, our model strikes a balance between important-information extraction and redundant-information filtering. Experimental results on both the CNN/Daily Mail dataset and a Court Public Opinion News dataset we built show the effectiveness of the proposed approach in terms of ROUGE metrics, especially for redundant-information filtering.