• Title/Summary/Keyword: data dictionary

346 search results (processing time: 0.03 seconds)

Favorable analysis of users through the social data analysis based on sentimental analysis (소셜데이터 감성분석을 통한 사용자의 호감도 분석)

  • Lee, Min-gyu;Sohn, Hyo-jung;Seong, Baek-min;Kim, Jong-bae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.438-440
    • /
    • 2014
  • Recently, data from SNS services has been actively used commercially. Therefore, this paper proposes a method that can accurately analyze information related to the reputation of companies and products in a real-time SNS environment. Morphological analysis is performed on text data gathered by crawling SNS to identify the relationships between words. Morphemes extracted from each sentence are then analyzed statistically against an established sentiment dictionary, and the results are visualized. In addition, for extracted words that do not exist in the sentiment dictionary, we propose an algorithm that adds them to the dictionary automatically.
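As a minimal sketch of the dictionary-based scoring with automatic dictionary expansion described above (all names and the neighbor-polarity heuristic are illustrative, not the authors' actual algorithm):

```python
# Sketch: score extracted morphemes against a sentiment dictionary and
# automatically add unknown words, inferring their polarity from the
# average polarity of the known words in the same sentence.

def score_and_expand(morphemes, sentiment_dict):
    known = [(m, sentiment_dict[m]) for m in morphemes if m in sentiment_dict]
    unknown = [m for m in morphemes if m not in sentiment_dict]
    avg = sum(s for _, s in known) / len(known) if known else 0.0
    for m in unknown:                 # the auto-add step from the abstract
        sentiment_dict[m] = avg
    return sum(sentiment_dict[m] for m in morphemes), sentiment_dict

lexicon = {"good": 1.0, "terrible": -1.0}
score, lexicon = score_and_expand(["good", "service", "good"], lexicon)
```

Here `"service"` is unknown, so it inherits the average polarity (1.0) of its known neighbors and is added to the lexicon.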


Energy Efficient and Low-Cost Server Architecture for Hadoop Storage Appliance

  • Choi, Do Young;Oh, Jung Hwan;Kim, Ji Kwang;Lee, Seung Eun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4648-4663
    • /
    • 2020
  • This paper proposes a Lempel-Ziv 4 (LZ4) compression accelerator optimized for scale-out servers in data centers. To reduce the CPU load caused by compression, we propose an accelerator solution and implement it on a Field-Programmable Gate Array (FPGA) as heterogeneous computing. The LZ4 compression hardware accelerator is a fully pipelined architecture and applies 16 dictionaries to enhance parallelism for a high-throughput compressor. Our hardware accelerator is based on a 20-stage pipeline and a dictionary architecture highly customized to the LZ4 compression algorithm and parallel hardware implementation. The proposed dictionary architecture achieves high throughput by comparing input sequences against multiple dictionaries simultaneously rather than against a single dictionary. Experimental results show high throughput from the intensively optimized FPGA implementation. Additionally, we compare our implementation against a CPU implementation of LZ4 to provide insights for FPGA-based data centers. The proposed accelerator achieves a compression throughput of 639MB/s with fine-grained parallelism suitable for deployment in scale-out servers. This approach enables a low-power Intel Atom processor to realize Hadoop storage along with the compression accelerator.
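The dictionary lookup at the heart of LZ4-style compression can be sketched in software as a hash table mapping recent 4-byte sequences to their positions, so repeated data can be encoded as matches. The hardware described above replicates this lookup across 16 dictionaries in a 20-stage pipeline; this sequential sketch shows only the basic idea, not the accelerator itself.

```python
# Sketch: LZ77/LZ4-style match finding with a single hash-table dictionary.
# Each 4-byte sequence is remembered; a repeat yields a candidate match
# that a real compressor would encode as an (offset, length) pair.

def find_matches(data: bytes, min_match: int = 4):
    table = {}                       # dictionary: 4-byte sequence -> position
    matches = []
    for i in range(len(data) - min_match + 1):
        seq = bytes(data[i:i + min_match])
        if seq in table:
            matches.append((table[seq], i))  # repeated sequence found
        table[seq] = i                       # remember the latest position
    return matches
```

Running multiple such dictionaries in parallel, as the accelerator does, lets several candidate positions be compared in the same cycle.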

Super Resolution using Dictionary Data Mapping Method based on Loss Area Analysis (손실 영역 분석 기반의 학습데이터 매핑 기법을 이용한 초해상도 연구)

  • Han, Hyun-Ho;Lee, Sang-Hun
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.3
    • /
    • pp.19-26
    • /
    • 2020
  • In this paper, we propose a method that analyzes the loss region of dictionary-based super-resolution results learned for image-quality improvement, and maps the training data according to the analyzed loss region. In the conventional learned-dictionary-based method, a result that differs from the feature configuration of the input image may be generated depending on the training images, and unintended artifacts may occur. The proposed method estimates the loss information of low-resolution images by analyzing the reconstructed contents, in order to reduce inconsistent feature composition and unintended artifacts in the example-based super-resolution process. By mapping the training data according to the final interpolation feature map, which improves the noise and pixel imbalance of the estimated loss information using a Gaussian-based kernel, the method generates super-resolution output with reduced noise, artifacts, and staircase effects compared to existing super-resolution methods. For evaluation, the results of existing super-resolution algorithms and the proposed method were compared against high-definition images; the proposed method was 4% better in PSNR (Peak Signal-to-Noise Ratio) and 3% better in SSIM (Structural SIMilarity index).
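The PSNR metric used for the evaluation above is a standard formula; a minimal reference implementation (over flat pixel lists, for illustration only) is:

```python
import math

def psnr(ref, img, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two equal-size pixel sequences:
    PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

Higher values indicate reconstructions closer to the high-definition reference.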

A Study on Applying Novel Reverse N-Gram for Construction of Natural Language Processing Dictionary for Healthcare Big Data Analysis (헬스케어 분야 빅데이터 분석을 위한 개체명 사전구축에 새로운 역 N-Gram 적용 연구)

  • KyungHyun Lee;RackJune Baek;WooSu Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.391-396
    • /
    • 2024
  • This study proposes a novel reverse N-Gram approach to overcome the limitations of traditional N-Gram methods and enhance performance in building an entity dictionary specialized for the healthcare sector. The proposed reverse N-Gram technique allows for more precise analysis and processing of the complex linguistic features of healthcare-related big data. To verify the efficiency of the proposed method, big data on healthcare and digital health announced during the Consumer Electronics Show (CES) held each January was collected. Using the Python programming language, 2,185 news titles and summaries mentioned from January 1 to 31 in 2010 and from January 1 to 31 in 2024 were preprocessed with the new reverse N-Gram method. This resulted in the stable construction of a dictionary for natural language processing in the healthcare field.
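The abstract does not spell out the reverse N-Gram procedure, so the following is only an illustrative sketch: an ordinary character N-Gram scans a term left-to-right, while a "reverse" variant scans the reversed term, anchoring windows at the end of the word, which can better capture suffix and particle structure in Korean.

```python
# Sketch: forward vs. reverse character N-Grams (illustrative definition).

def ngrams(term: str, n: int):
    return [term[i:i + n] for i in range(len(term) - n + 1)]

def reverse_ngrams(term: str, n: int):
    # N-grams computed over the reversed string, reading right-to-left.
    r = term[::-1]
    return [r[i:i + n] for i in range(len(r) - n + 1)]
```

For a dictionary-construction pipeline, candidate entries would then be counted over these windows rather than whole tokens.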

Homonym Disambiguation based on Mutual Information and Sense-Tagged Compound Noun Dictionary (상호정보량과 복합명사 의미사전에 기반한 동음이의어 중의성 해소)

  • Heo, Jeong;Seo, Hee-Cheol;Jang, Myung-Gil
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.12
    • /
    • pp.1073-1089
    • /
    • 2006
  • The goal of Natural Language Processing (NLP) is to make a computer understand natural language and to deliver its meaning to humans. Word Sense Disambiguation (WSD) is a very important technology for achieving this goal. In this paper, we describe a technology for automatic homonym disambiguation using both Mutual Information (MI) and a Sense-Tagged Compound Noun Dictionary. Previous work using word definitions in a dictionary suffered from data sparseness because of the use of exact word matching. Our work overcomes this problem by using MI, which is an association measure between words. To reflect language features, the rate of word pairs with MI values, the sense frequency, and the size of word definitions are used as weights in our system. We constructed a Sense-Tagged Compound Noun Dictionary for high-frequency compound nouns and used it to resolve homonym sense ambiguity. Experimental data for testing and evaluating our system was constructed from QA (Question Answering) test data consisting of about 200 query sentences and answer paragraphs. We performed four types of experiments. When only MI was used, the experiment showed a precision of 65.06%. When we used the weighted values, we achieved a precision of 85.35%, and when we additionally used the Sense-Tagged Compound Noun Dictionary, we achieved a precision of 88.82%.
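The association measure underlying the approach above, pointwise mutual information, has a standard form that can be computed directly from corpus counts (this is the textbook formula, not the paper's full weighted system):

```python
import math

def pmi(count_xy: int, count_x: int, count_y: int, total: int) -> float:
    """Pointwise mutual information: log2( P(x,y) / (P(x) * P(y)) ).
    Positive values mean the words co-occur more than chance predicts."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))
```

With counts of 100 each out of 1000 contexts, a joint count of 10 gives PMI 0 (independence), while a joint count of 50 gives a strongly positive score.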

Relationship between Result of Sentiment Analysis and User Satisfaction -The case of Korean Meteorological Administration- (감성분석 결과와 사용자 만족도와의 관계 -기상청 사례를 중심으로-)

  • Kim, In-Gyum;Kim, Hye-Min;Lim, Byunghwan;Lee, Ki-Kwang
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.10
    • /
    • pp.393-402
    • /
    • 2016
  • To compensate for the limitations of the satisfaction survey currently conducted by the Korea Meteorological Administration (KMA), sentiment analysis via a social networking service (SNS) can be utilized. Tweets mentioning 'KMA' posted from 2011 to 2014 were collected and, using Naïve Bayes classification, classified into three sentiments: positive, negative, and neutral. An additional dictionary was built from morphemes that appeared only in the positive, negative, or neutral classes of the basic Naïve Bayes classification, thereby improving the accuracy of the sentiment analysis. As a result, when sentiments were classified with the basic Naïve Bayes classifier, the training data were reproduced with about 75% accuracy, whereas classification with the additional dictionary showed 97% accuracy. When using the additional dictionary, sentiments in the verification data were classified with about 75% accuracy. This lower classification accuracy could be improved by a qualified dictionary with an increased amount of training data, including diverse weather-related keywords, and by continuous updates of the dictionary. Meanwhile, in contrast to sentiment analysis based on the dictionary definitions of individual vocabulary, if sentiments are classified by the meaning of the sentence, the increased rate of negative sentiment and the change in satisfaction can be explained. Therefore, sentiment analysis via SNS can be considered a useful tool for complementing surveys in the future.
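The Naïve Bayes classification step above follows the standard multinomial formulation; a minimal self-contained sketch (toy data, add-one smoothing, not the paper's trained model) is:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns class priors and word counts."""
    priors, words = Counter(), defaultdict(Counter)
    for tokens, label in docs:
        priors[label] += 1
        words[label].update(tokens)
    return priors, words

def classify_nb(tokens, priors, words, vocab_size):
    total_docs = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total_docs)       # log prior
        denom = sum(words[label].values()) + vocab_size
        for t in tokens:                                 # add-one smoothing
            lp += math.log((words[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["good", "great"], "pos"), (["bad", "awful"], "neg")]
priors, words = train_nb(docs)
label = classify_nb(["good"], priors, words, vocab_size=4)
```

Extending the word counts with class-exclusive morphemes, as the study does, sharpens the per-class likelihoods.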

Memory Performance of Electronic Dictionary-Based Commercial Workload

  • Lee, Changsik;Kim, Hiecheol;Lee, Yongdoo
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.7 no.5
    • /
    • pp.39-48
    • /
    • 2002
  • Along with the rapid spread of the Internet, a new class of commercial applications that process transactions against electronic dictionaries has become popular. Typical examples are Internet search engines. In this paper, we present a new approach to achieving high-performance electronic dictionaries. Unlike the conventional approach, which uses trie data structures to implement electronic dictionaries, our approach uses multidimensional binary trees. In this paper, we present the implementation of our electronic dictionary ED-MBT (Electronic Dictionary based on Multidimensional Binary Tree). An exhaustive performance study is also presented to assess the performance impact of ED-MBT on real-world applications.


A Parser of Definitions in Korean Dictionary based on Probabilistic Grammar Rules (확률적 문법규칙에 기반한 국어사전의 뜻풀이말 구문분석기)

  • Lee, Su Gwang;Ok, Cheol Yeong
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.5
    • /
    • pp.448-448
    • /
    • 2001
  • The definitions in a Korean dictionary not only describe the meanings of headwords, but also include various semantic information such as hypernymy/hyponymy, meronymy/holonymy, polysemy, homonymy, synonymy, antonymy, and semantic features. The purpose of this paper is to implement a parser as the basic tool for automatically acquiring such semantic information from the definitions in a Korean dictionary. For this purpose, we first constructed a part-of-speech-tagged corpus and a tree-tagged corpus from the definitions. We then automatically extracted from these corpora the frequencies of words that are ambiguous in part-of-speech tag, as well as the grammar rules and their probabilities, using a statistical method. The parser is a probabilistic chart parser that uses the extracted data: the word frequencies and the grammar rules with their probabilities resolve the structural ambiguity of noun phrases during parsing. The parser uses grammar factoring, best-first search, and Viterbi search in order to reduce the number of nodes during parsing and to increase performance. We experimented with grammar-rule probabilities, left-to-right parsing, and left-first search. The experiments show that when the parser uses grammar-rule probabilities and left-first search simultaneously, parsing is most accurate, with a recall of 51.74% and a precision of 87.47% on raw corpus.
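The rule probabilities used by such a parser are typically maximum-likelihood estimates over a tree-tagged corpus: the count of each rule divided by the count of its left-hand side. A minimal sketch of that estimation step (illustrative, not the paper's extraction pipeline):

```python
from collections import Counter

def rule_probabilities(rules):
    """Maximum-likelihood PCFG probabilities from a list of (lhs, rhs)
    rule occurrences: P(A -> beta) = count(A -> beta) / count(A)."""
    rule_counts = Counter(rules)
    lhs_counts = Counter(lhs for lhs, _ in rules)
    return {r: c / lhs_counts[r[0]] for r, c in rule_counts.items()}

rules = [("NP", ("Det", "N")), ("NP", ("Det", "N")),
         ("NP", ("N",)), ("S", ("NP", "VP"))]
probs = rule_probabilities(rules)
```

The chart parser then scores each candidate noun-phrase structure by the product of its rule probabilities, preferring the most probable tree.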

Reduction of Unstressed Prevocalic /u/ in English

  • Hwangbo, Young-Shik
    • Journal of English Language & Literature
    • /
    • v.55 no.6
    • /
    • pp.1139-1161
    • /
    • 2009
  • This paper deals with the reduction of unstressed prevocalic /u/ and the appearance of /w/ which are observed in such words as ambiguity [ˌæm bǝ ˈgju: ǝ ti] - ambiguous [æm ˈbɪ gjǝ wǝs]. This phenomenon is recorded in Merriam-Webster Online Dictionary, Webster's Third New International Dictionary, Unabridged, and the draft revisions of Oxford English Dictionary Online. Since this phenomenon has not been studied in detail up to now, this paper aims 1) to collect the data related to the reduction of unstressed prevocalic /u/, 2) to classify them systematically, and 3) to explain the phenomenon in terms of Optimality Theory. In the course of analysis, Prevocalic Lengthening, which is crucial to the preservation of unstressed prevocalic /u/, is reinterpreted as one of the ways to prevent hiatus (annual /æ nju: ǝl/). /w/-insertion is another way to prevent hiatus (annual /æ njǝ wǝl/). In addition it is argued that prevocalic /u/ behaves differently from prevocalic /i/ due to the difference in the articulators involved.

Domain Adaptation Image Classification Based on Multi-sparse Representation

  • Zhang, Xu;Wang, Xiaofeng;Du, Yue;Qin, Xiaoyan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.5
    • /
    • pp.2590-2606
    • /
    • 2017
  • Generally, research on classical image classification algorithms assumes that training data and testing data are derived from the same domain with the same distribution. Unfortunately, in practical applications this assumption is rarely met. To address this problem, a domain adaptation image classification approach based on multi-sparse representation is proposed in this paper. The existence of intermediate domains between the source and target domains is hypothesized, and each intermediate subspace is modeled through online dictionary learning with target-data updating. On the one hand, the reconstruction error of the target data is guaranteed; on the other hand, the transition from the source domain to the target domain is kept as smooth as possible. An augmented feature representation produced by invariant sparse codes across the source, intermediate, and target domain dictionaries is employed for cross-domain recognition. Experimental results verify the effectiveness of the proposed algorithm.
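Sparse coding against a learned dictionary, the building block of the approach above, selects a few dictionary atoms that best reconstruct a signal. The simplest possible sketch is a single matching-pursuit step (1-sparse, pure Python, illustrative only; the paper uses full online dictionary learning):

```python
# Sketch: greedy 1-sparse approximation (one matching-pursuit iteration):
# pick the dictionary atom with the largest absolute inner product with
# the signal, and use that inner product as its coefficient.

def sparse_code_1(signal, dictionary_atoms):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    best = max(range(len(dictionary_atoms)),
               key=lambda i: abs(dot(signal, dictionary_atoms[i])))
    return best, dot(signal, dictionary_atoms[best])

atoms = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]   # toy unit-norm dictionary
idx, coef = sparse_code_1([0.9, 0.1], atoms)
```

Coding the same signal against the source, intermediate, and target dictionaries and concatenating the codes yields the augmented cross-domain feature the abstract describes.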