• Title/Summary/Keyword: Hash Data


Load Shedding Method based on Grid Hash to Improve Accuracy of Spatial Sliding Window Aggregate Queries (공간 슬라이딩 윈도우 집계질의의 정확도 향상을 위한 그리드 해쉬 기반의 부하제한 기법)

  • Baek, Sung-Ha;Lee, Dong-Wook;Kim, Gyoung-Bae;Chung, Weon-Il;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.2
    • /
    • pp.89-98
    • /
    • 2009
  • As a data stream enters the system continuously while memory is limited, data exceeding the memory size cannot be processed. To solve this problem, load shedding methods that drop part of the data to avoid exceeding the storage space have been studied. A traditional load shedding method uses random sampling at a rate optimized for the data deviation; because it relies only on this random sampling, it cannot distinguish the data that will actually be used by spatial queries. As a result, query accuracy is reduced in a u-GIS environment that includes spatial queries. In this paper, we study a new load shedding method that improves query accuracy in a u-GIS environment running spatial and aspatial queries simultaneously. The method uses a new sampling strategy that preferentially sheds data with a low probability of being used in a query. By applying a spatial filtering operation to the sampling operator, the proposed method improves both spatial query accuracy and query processing speed (an illustrative sketch follows this entry).

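
The following is a minimal Python sketch of the general grid-hash idea described above: incoming tuples are hashed to grid cells, and when the buffer is full, tuples falling in cells that no registered spatial query touches are shed first. The cell size, buffer budget, and class names are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import deque

CELL = 10.0          # illustrative grid cell size (assumption)
MAX_BUFFER = 1000    # memory budget in tuples (assumption)

def grid_key(x, y, cell=CELL):
    """Hash a point to its grid cell id."""
    return (int(x // cell), int(y // cell))

class GridHashShedder:
    """Shed tuples that fall outside cells referenced by registered spatial queries."""
    def __init__(self, query_regions):
        # Pre-compute the set of grid cells touched by any spatial query window.
        self.hot_cells = set()
        for (x1, y1, x2, y2) in query_regions:
            for cx in range(int(x1 // CELL), int(x2 // CELL) + 1):
                for cy in range(int(y1 // CELL), int(y2 // CELL) + 1):
                    self.hot_cells.add((cx, cy))
        self.buffer = deque()

    def offer(self, tup):
        x, y = tup[0], tup[1]
        if len(self.buffer) < MAX_BUFFER:
            self.buffer.append(tup)
            return True
        # Overloaded: first shed tuples whose cell no query will read.
        if grid_key(x, y) not in self.hot_cells:
            return False                      # low probability of being used -> drop
        if random.random() < 0.5:             # fall back to random shedding inside hot cells
            self.buffer.popleft()
            self.buffer.append(tup)
            return True
        return False
```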

Suggestions on how to convert official documents to Machine Readable (공문서의 기계가독형(Machine Readable) 전환 방법 제언)

  • Yim, Jin Hee
    • The Korean Journal of Archival Studies
    • /
    • no.67
    • /
    • pp.99-138
    • /
    • 2021
  • In the era of big data, analyzing not only structured data but also unstructured data is emerging as an important task. Official documents produced by government agencies are also subject to big data analysis as large volumes of text-based unstructured data. From the perspectives of internal work efficiency, knowledge management, and records management, it is necessary to analyze official documents as big data to derive useful implications. However, since many of the official documents currently held by public institutions are not in an open format, a pre-processing step that extracts text from the bitstream is required before big data analysis. In addition, since contextual metadata is not sufficiently stored in the document files, separate efforts to secure metadata are required for high-quality analysis. In conclusion, current official documents have a low level of machine readability, so big data analysis of them is expensive.

A Novel Scalable and Storage-Efficient Architecture for High Speed Exact String Matching

  • Peiravi, Ali;Rahimzadeh, Mohammad Javad
    • ETRI Journal
    • /
    • v.31 no.5
    • /
    • pp.545-553
    • /
    • 2009
  • String matching is a fundamental element of an important category of modern packet processing applications that scan the content flowing through a network for thousands of strings at the line rate. To keep pace with high network speeds, specialized hardware-based solutions are needed that are efficient enough to remain scalable in both speed and the number of strings. In this paper, a novel architecture based on a recently proposed data structure called the Bloomier filter is proposed which successfully supports such scalability. The Bloomier filter is a compact data structure for encoding arbitrary functions, and it supports approximate evaluation queries. By eliminating the Bloomier filter's false positives in a space-efficient way, a simple yet powerful exact string matching architecture is obtained that can handle several thousand strings at high rates and is amenable to on-chip realization. The proposed scheme is implemented in reconfigurable hardware and compared with existing solutions. The results show that the proposed approach outperforms existing architectures measured in terms of throughput per logic cell per character (a software-level sketch of the filter-then-verify idea follows this entry).
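
The paper's contribution is a hardware architecture built on the Bloomier filter; as a software-level analogue only, the sketch below shows the same filter-then-verify idea: a hashed fingerprint table answers membership quickly, and an exact comparison removes any false positives. All names and the fingerprint size are illustrative assumptions.

```python
import hashlib

class FilterThenVerifyMatcher:
    """Exact multi-pattern matching: approximate hash filter, then exact verification.

    In the hardware design the filter step would be a compact Bloomier lookup;
    here a small fixed-size fingerprint plays that role.
    """
    def __init__(self, patterns):
        self.lengths = sorted({len(p) for p in patterns})
        self.fingerprints = {}                       # fingerprint -> candidate patterns
        for p in patterns:
            self.fingerprints.setdefault(self._fp(p), []).append(p)

    @staticmethod
    def _fp(s):
        return hashlib.blake2b(s.encode(), digest_size=8).digest()

    def search(self, text):
        hits = []
        for i in range(len(text)):
            for L in self.lengths:
                window = text[i:i + L]
                if len(window) < L:
                    break
                cands = self.fingerprints.get(self._fp(window))
                if cands and window in cands:        # exact check removes false positives
                    hits.append((i, window))
        return hits

# usage
m = FilterThenVerifyMatcher(["attack", "virus", "worm"])
print(m.search("the worm launched an attack"))
```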

Phantom Protection Method for Multi-dimensional Index Structures

  • Lee, Seok-Jae;Song, Seok-Il;Yoo, Jae-Soo
    • International Journal of Contents
    • /
    • v.3 no.2
    • /
    • pp.6-17
    • /
    • 2007
  • Emerging modern database applications require multi-dimensional index structures to provide high performance for data retrieval. In order for a multi-dimensional index structure to be integrated into a commercial database system, efficient techniques that provide transactional access to data through the index structure are necessary. These techniques must support all degrees of isolation offered by the database system. In particular, degree 3 isolation, called "no phantom read," protects search ranges from concurrent insertions and from the rollbacks of deletions. In this paper, we propose a new phantom protection method for multi-dimensional index structures that uses a multi-level grid technique. The proposed mechanism is independent of the type of multi-dimensional index structure, i.e., it can be applied to tree-based, file-based, and hash-based index structures alike. In addition, it has a low development cost and achieves high concurrency with low lock overhead. Various experiments show that the proposed method outperforms existing phantom protection methods for multi-dimensional index structures (a simplified grid-locking sketch follows this entry).
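
As a rough illustration of grid-based phantom protection (simplified to a single grid level, unlike the paper's multi-level scheme), the sketch below locks grid cells: a range scan locks every cell overlapping its query rectangle, and an insertion must lock the cell of the new point, so it blocks until the scan finishes. The cell size and lock-manager interface are assumptions.

```python
import threading
from collections import defaultdict

CELL = 8.0   # illustrative cell width (assumption)

def cells_for_range(x1, y1, x2, y2):
    """All grid cells overlapping a query rectangle."""
    return {(cx, cy)
            for cx in range(int(x1 // CELL), int(x2 // CELL) + 1)
            for cy in range(int(y1 // CELL), int(y2 // CELL) + 1)}

class GridLockManager:
    """Cell-granularity locks used to block phantom insertions into a scanned range."""
    def __init__(self):
        self._locks = defaultdict(threading.Lock)

    def lock_range(self, rect):
        cells = sorted(cells_for_range(*rect))     # fixed order avoids deadlock
        for c in cells:
            self._locks[c].acquire()
        return cells

    def lock_insert(self, x, y):
        c = (int(x // CELL), int(y // CELL))
        self._locks[c].acquire()                   # blocks while a reader scans this cell
        return [c]

    def unlock(self, cells):
        for c in cells:
            self._locks[c].release()
```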

Efficient Creation of Data Cube Using Hash Table in Data Warehouse (데이터 웨어하우스에서 해쉬 테이블을 이용한 효율적인 데이터 큐브 생성 기법)

  • Kim Hyungsun;You Byeongseob;Lee JaeDong;Bae Haeyoung
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.11b
    • /
    • pp.211-213
    • /
    • 2005
  • A data warehouse is a system that supports decision making by analyzing large volumes of accumulated data. Because analyzing large volumes of data for decision making is costly, efficient data cube creation techniques that raise query processing performance and provide decision makers with fast responses have been studied. Existing techniques include the Multiway Array method and the H-Cubing method. The Multiway Array method stores all the data needed for multidimensional aggregation in arrays, so memory usage grows as the volume of data increases. The H-Cubing method builds every tuple into a tree based on a Hyper-Tree, so the cost of constructing the tree over all tuples increases. In this paper, we propose an efficient data cube creation technique that uses hash tables in a data warehouse. The proposed technique uses a field hash table and a record hash table during cube creation. The field hash table keeps a level value for each field in order to compute the order in which records will be stored. The record hash table manages the order of the records to be stored in the data cube table and the positions of the temporary records to be written into it. Because the field hash table lets the storage order of multidimensional data be found quickly, the creation speed of the data cube improves, and since only the hash tables have to be maintained, memory usage decreases. The use of hash tables therefore enables fast data retrieval and fast responses to data cube creation requests (a simplified hash-table aggregation sketch follows this entry).

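
A hedged sketch of the underlying hash-table idea: each group-by combination of the cube is aggregated into a dictionary keyed by dimension values, so neither a full multidimensional array nor a tuple tree is needed. The paper's separate field and record hash tables are collapsed into one dictionary here, and the column names are illustrative.

```python
from itertools import combinations
from collections import defaultdict

def build_cube(rows, dims, measure):
    """Compute SUM(measure) for every subset of the dimension columns using hash tables."""
    cube = {}
    for r in range(len(dims) + 1):
        for group in combinations(dims, r):
            table = defaultdict(float)                       # hash table for this group-by
            for row in rows:
                key = tuple(row[d] for d in group)           # record key -> aggregation slot
                table[key] += row[measure]
            cube[group] = dict(table)
    return cube

# usage with illustrative columns
rows = [{"city": "Busan", "item": "A", "sales": 3.0},
        {"city": "Busan", "item": "B", "sales": 1.5},
        {"city": "Seoul", "item": "A", "sales": 2.0}]
cube = build_cube(rows, dims=("city", "item"), measure="sales")
print(cube[("city",)])   # {('Busan',): 4.5, ('Seoul',): 2.0}
```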

Reliable Billing Schemes for Service Types in Mobile Communication Environments (이동통신 환경에서 신뢰할 수 있는 서비스별 과금 방법)

  • 김순석
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.8
    • /
    • pp.1714-1725
    • /
    • 2003
  • In this paper, we propose reliable billing schemes between users and content providers for the case where a user receives value-added services for paid content through a mobile terminal. Our schemes support various types of services in mobile communication environments, such as short messages, ring tones, image or music data transmission, and games. Using a hash chain method, we also reduce the computational overhead of the mobile terminal and the volume of data transmitted between the user and the content provider. Content providers save memory space because they do not need to store each user's usage evidence yet can still charge correctly (a hash-chain sketch follows this entry).
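
A minimal sketch of the standard hash-chain (PayWord-style) mechanism the abstract refers to, under the assumption that one revealed pre-image pays for one service unit: the user commits to the head of the chain, spends by revealing successive pre-images, and the provider keeps only the latest verified token as usage evidence. Chain length and hash function are illustrative.

```python
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def make_chain(n: int):
    """User side: w_n is random; w_i = H(w_{i+1}); w_0 is the signed commitment."""
    chain = [os.urandom(32)]
    for _ in range(n):
        chain.append(H(chain[-1]))
    chain.reverse()              # chain[0] = w_0 (commitment) ... chain[n] = w_n
    return chain

class Provider:
    """Content provider: stores only the last verified token, not every usage record."""
    def __init__(self, commitment: bytes):
        self.last = commitment
        self.units_charged = 0

    def pay(self, token: bytes) -> bool:
        if H(token) == self.last:        # token is the next pre-image in the chain
            self.last = token
            self.units_charged += 1
            return True
        return False

# usage: 1 token = 1 billable unit (e.g., one short message)
chain = make_chain(100)
provider = Provider(chain[0])
assert provider.pay(chain[1]) and provider.pay(chain[2])
print(provider.units_charged)    # 2
```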

Proposal of Process Hollowing Attack Detection Using Process Virtual Memory Data Similarity (프로세스 가상 메모리 데이터 유사성을 이용한 프로세스 할로윙 공격 탐지)

  • Lim, Su Min;Im, Eul Gyu
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.2
    • /
    • pp.431-438
    • /
    • 2019
  • Fileless malware uses memory injection attacks to hide traces of its payload while carrying out malicious work. Among memory injection attacks, "process hollowing" creates a benign process, such as a system process, in a suspended state and then injects a malicious payload into it, so that the malicious behavior masquerades as a normal process. In this paper, we propose a method to detect this memory injection when a process hollowing attack occurs, regardless of whether the malicious action is actually performed. A replica process is launched under the same execution conditions as the suspended, injected process; the data in the corresponding virtual memory areas of the two processes are compared using a fuzzy hash, and their similarity is calculated (a fuzzy-hash comparison sketch follows this entry).
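
A hedged sketch of the comparison step using the ssdeep fuzzy-hash Python binding (ssdeep.hash and ssdeep.compare). Reading another process's virtual memory is OS-specific (/proc/&lt;pid&gt;/mem on Linux, ReadProcessMemory on Windows) and is left as a stub; the similarity threshold is an assumption, not a value from the paper.

```python
import ssdeep   # pip install ssdeep (or the pure-Python 'ppdeep', which has the same API)

SIMILARITY_THRESHOLD = 50   # illustrative cut-off (assumption)

def read_region(pid: int, start: int, size: int) -> bytes:
    """Stub: platform-specific memory read (/proc/<pid>/mem, ReadProcessMemory, ...)."""
    raise NotImplementedError

def hollowing_suspected(suspect_pid, replica_pid, regions) -> bool:
    """Compare corresponding virtual-memory regions of the suspect and a clean replica."""
    for start, size in regions:
        h_suspect = ssdeep.hash(read_region(suspect_pid, start, size))
        h_replica = ssdeep.hash(read_region(replica_pid, start, size))
        score = ssdeep.compare(h_suspect, h_replica)   # 0..100 similarity
        if score < SIMILARITY_THRESHOLD:
            return True          # region diverges from the clean replica -> possible injection
    return False
```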

A Study on Key Factors Influencing Customers' Ratings of Restaurants by Using Data Mining Method (데이터 마이닝을 활용한 외식업체의 평점에 영향을 미치는 선행 요인)

  • Kim, Seon Ju;Kim, Byoung Soo
    • The Journal of Information Systems
    • /
    • v.31 no.2
    • /
    • pp.1-18
    • /
    • 2022
  • Purpose Customer reviews are a major factor in choosing a restaurant. This study investigates the key factors affecting customers' evaluations of restaurants. With the recent intensification of competition among restaurants in the service industry, the analysis results are expected to provide in-depth insights for enhancing customer experiences. Design/methodology/approach We collected the information and reviews provided for restaurants on the Kakao Map platform, covering 3,785 restaurants in Daegu registered on Kakao Map. Based on the collected information, seven independent variables were used: the number of ratings registered, the number of reviews, designation as a safe restaurant, and whether postings exist about facilities, business hours, hashtags, and break times. The dependent variable is the restaurant rating. Multiple regression between the independent variables and the restaurant rating was carried out. Findings The results confirm that the number of ratings registered, the presence of a posting about business hours, and the presence of a posting about hashtags have positive effects on the restaurant rating, while the number of reviews has a negative effect. In addition, to examine the role of customer reviews, we carried out LDA topic modeling and divided the topics into those drawn from positive reviews and those from negative reviews (a regression sketch follows this entry).
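
A hedged sketch of the multiple-regression step with pandas and scikit-learn; the file and column names mirror the variables listed in the abstract but are assumptions about the dataset layout, not the authors' actual data.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Illustrative columns: count variables plus binary posting/feature flags; target = rating.
df = pd.read_csv("kakao_map_restaurants.csv")   # hypothetical file name
features = ["n_ratings", "n_reviews", "is_safe_restaurant",
            "has_facilities_post", "has_hours_post", "has_hashtags_post", "has_break_time"]

X = df[features]
y = df["rating"]

model = LinearRegression().fit(X, y)
for name, coef in zip(features, model.coef_):
    print(f"{name:>20}: {coef:+.3f}")   # sign indicates direction of the effect on rating
```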

A Dynamic Locality Sensitive Hashing Algorithm for Efficient Security Applications

  • Mohammad Y. Khanafseh;Ola M. Surakhi
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.79-88
    • /
    • 2024
  • The information retrieval domain deals with the retrieval of unstructured data such as text documents. Searching documents is a main component of a modern information retrieval system. Locality Sensitive Hashing (LSH) is one of the most popular methods for searching documents in a high-dimensional space. The main benefit of LSH is its theoretical guarantee of query accuracy in a multi-dimensional space, and it can be further enhanced by refining its steps. In this paper, a new Dynamic Locality Sensitive Hashing (DLSH) algorithm is proposed as an improved version of LSH. It relies on a hierarchical selection of the LSH parameters (number of bands, number of shingles, and number of permutation lists), driven by the similarity achieved by the algorithm, to optimize search accuracy and increase its score. The technique was applied to several tampered file structures and its performance was evaluated. In some circumstances, the matching accuracy of DLSH exceeds 95% when optimal values are selected for the number of bands, shingles, and permutation lists. This result makes the DLSH algorithm suitable for critical applications that depend on accurate searching, such as digital forensics (a MinHash-LSH sketch follows this entry).
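
A hedged sketch of the standard shingling, MinHash, and banding pipeline to which the tuned parameters (shingle size, number of permutations, number of bands) refer; the similarity-driven, hierarchical parameter selection that defines DLSH itself is not reproduced here.

```python
import hashlib
from itertools import combinations

def shingles(text: str, k: int = 4):
    """k-character shingles of a document."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(sh, num_perm: int = 64):
    """One min value per 'permutation', simulated with seeded hashes."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8,
                                           salt=seed.to_bytes(8, "little")).digest(), "big")
            for s in sh))
    return sig

def lsh_candidates(docs, num_perm=64, bands=16):
    """Documents sharing any identical band of their signatures become candidate pairs."""
    rows = num_perm // bands
    sigs = {name: minhash_signature(shingles(t), num_perm) for name, t in docs.items()}
    buckets = {}
    for name, sig in sigs.items():
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets.setdefault(key, set()).add(name)
    return {pair for names in buckets.values() if len(names) > 1
            for pair in combinations(sorted(names), 2)}

# usage: two nearly identical documents will almost certainly collide in some band
print(lsh_candidates({"a": "the quick brown fox jumps", "b": "the quick brown fox jumped"}))
```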

Adaptive Path Index for Efficient XML Query Processing (효율적인 XML 질의 처리를 위한 적응형 경로 인덱스)

  • 민준기;심규석;정진완
    • Journal of KIISE:Databases
    • /
    • v.31 no.1
    • /
    • pp.61-71
    • /
    • 2004
  • XML can describe a wide range of data, from regular to irregular and from flat to deeply nested. Thus, XML is rapidly emerging as the de facto standard for the Web document format, since it supports efficient data exchange and integration. To retrieve data represented in XML, several XML query languages have been proposed. XML query languages such as XPath and XQuery use path expressions to traverse irregularly structured data composed of XML elements. To evaluate path expressions, various path indexes have been proposed. However, traditional path indexes are constructed using only the structure of the XML data. In this paper, we therefore propose an adaptive path index that exploits both the XML data structure and the query workload. To improve query performance, the proposed adaptive path index manages the frequently used paths and a structural summary of the XML data using a hash tree and a graph structure. Experimental results show that the adaptive path index improves query performance, typically by 2 to 69 times compared with existing indexes (a simplified path-index sketch follows this entry).
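
A hedged sketch of the core idea: a hash table maps frequently used label paths to their result node lists, while other paths fall back to a document traversal (the graph-structured summary is omitted). It uses Python's standard xml.etree; all names are illustrative.

```python
import xml.etree.ElementTree as ET

class AdaptivePathIndex:
    """Hash table from frequent label paths (e.g. '/lib/book/author') to matching elements."""
    def __init__(self, root, frequent_paths):
        self.root = root
        self.index = {p: [] for p in frequent_paths}
        self._build(root, "")

    def _build(self, elem, prefix):
        path = f"{prefix}/{elem.tag}"
        if path in self.index:
            self.index[path].append(elem)
        for child in elem:
            self._build(child, path)

    def query(self, path):
        if path in self.index:                 # frequent path: answered from the hash table
            return self.index[path]
        # infrequent path: fall back to a traversal of the whole document
        return [e for e, p in self._walk(self.root, "") if p == path]

    def _walk(self, elem, prefix):
        path = f"{prefix}/{elem.tag}"
        yield elem, path
        for child in elem:
            yield from self._walk(child, path)

# usage
root = ET.fromstring("<lib><book><author>Kim</author></book><book><author>Min</author></book></lib>")
idx = AdaptivePathIndex(root, frequent_paths=["/lib/book/author"])
print([e.text for e in idx.query("/lib/book/author")])   # ['Kim', 'Min']
```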