• Title/Summary/Keyword: Hash Data

Implementation of the Large-scale Data Signature System Using Hash Tree Replication Approach (해시 트리 기반의 대규모 데이터 서명 시스템 구현)

  • Park, Seung Kyu
    • Convergence Security Journal / v.18 no.1 / pp.19-31 / 2018
  • As ICT technologies advance, unprecedentedly large amounts of digital data are created, transferred, stored, and utilized in every industry. As data grows in scale and the technologies applied to it mature, new services built on large-scale data make our lives more convenient and useful. But cybercrimes such as data forgery and falsification of data generation times are also increasing. To secure data against such crimes, technologies for verifying data integrity and generation time are necessary. Today, public-key-based signature technology is the most commonly used, but its costly system resources and the additional infrastructure required to manage certificates and keys make it impractical in large-scale data environments. In this research, a new signature technology for large-scale data that consumes far fewer system resources, based on hash functions and the Merkle tree, is introduced. An improved method for processing distributed hash trees is also suggested to mitigate disruptions caused by server failures. A prototype system was implemented and its performance evaluated. The results show that the technology can be used effectively in areas that produce large-scale data, such as cloud computing, IoT, big data, and fin-tech.

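The scheme above replaces per-item public-key signatures with a single Merkle-tree root plus a short authentication path for each data item. Below is a minimal sketch of that core idea, assuming SHA-256 as the hash function (the abstract names only a hash function and the Merkle tree, so the digest choice is an assumption):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(items):
    """Build a Merkle tree over the item hashes; returns the levels, leaves first."""
    level = [h(x) for x in items]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                   # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def audit_path(levels, index):
    """Collect the sibling hashes from leaf `index` up to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(item, path, index, root):
    """Recompute the root from one item and its audit path."""
    node = h(item)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

items = [b"record-%d" % i for i in range(5)]
levels = build_tree(items)
root = levels[-1][0]        # only the root needs to be signed or time-stamped
assert verify(items[2], audit_path(levels, 2), 2, root)
```

Signing then amounts to committing the root once (for example, time-stamping it), and every item is verified with O(log N) hashes instead of one public-key operation per item.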

Effective Parallel Hash Join Algorithm Based on Histogram Equalization in the Presence of Data Skew (데이터 편재 하에서 히스토그램 변환기법에 기초한 효율적인 병렬 해쉬 결합 알고리즘)

  • Park, Ung-Gyu;Choe, Hwang-Gyu;Kim, Tak-Gon
    • The Transactions of the Korea Information Processing Society / v.4 no.2 / pp.338-348 / 1997
  • In this paper, we first propose a data distribution framework to resolve load imbalance and bucket overflow in parallel hash joins. Using the histogram equalization technique, the framework transforms a histogram of skewed data into the desired uniform distribution that corresponds to the relative computing power of the node processors in the system. Next, we propose an efficient parallel hash join algorithm for handling skewed data based on the proposed data distribution methodology. To compare the performance of our algorithm with other hash join algorithms, we performed simulation experiments and actual execution on the COREDB database computer with an 8-node hypercube architecture. In these experiments, the skewed distribution of the join attribute is modeled using a Zipf-like distribution. The performance studies indicate that our algorithm outperforms the other algorithms in the skewed cases.

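The histogram-equalization step maps skewed bucket weights onto node processors in proportion to their relative computing power. The sketch below is a greedy approximation of that assignment (the paper's exact transformation is not given in the abstract, so the bucket counts and processor powers are illustrative):

```python
import heapq
from collections import Counter

def assign_buckets(histogram, powers):
    """Greedily place the heaviest buckets on the processor with the
    lowest load-to-power ratio, approximating histogram equalization."""
    heap = [(0.0, p) for p in range(len(powers))]   # (normalized load, processor)
    heapq.heapify(heap)
    assignment = {}
    for bucket, weight in sorted(histogram.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[bucket] = p
        heapq.heappush(heap, (load + weight / powers[p], p))
    return assignment

# Zipf-like skew on the join attribute, as in the paper's experiments
keys = [1] * 50 + [2] * 25 + [3] * 12 + [4] * 8 + [5] * 5
histogram = Counter(k % 8 for k in keys)            # 8 hash buckets
plan = assign_buckets(histogram, powers=[1.0, 1.0, 2.0])
```

Each processor then performs a local hash join on its assigned buckets, so completion time is governed by relative capacity rather than by the skewed keys.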

A Multi-Indexes Based Technique for Resolving Collision in a Hash Table

  • Yusuf, Ahmed Dalhatu;Abdullahi, Saleh;Boukar, Moussa Mahamat;Yusuf, Salisu Ibrahim
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.339-345 / 2021
  • The rapid development of applications in networking, business, medicine, education, and other domains that rely on basic data access operations such as insert, edit, delete, and search makes data structures vital in providing efficient methods for the day-to-day operations of those numerous applications. One of the major problems of those applications is achieving constant time to search for a key in a collection. Researchers have discovered a number of methods over the years that attempt to achieve this, with different performance behaviors. This work evaluated these methods and found that almost all of the existing methods take non-constant time for adding and searching a key. In this work, we designed a multi-index hashing algorithm that handles collisions in a hash table T efficiently and achieves constant time O(1) for searching and adding a key. Our method employs two levels of hashing using pattern extraction h1(key) and h2(key). The second hash function h2(key) is used for handling collisions in T. Here, we also eliminated the wasted slots in the search space T, another problem associated with the existing methods.
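
One way to read the two-level design: h1(key) selects a primary slot and h2(key) a fixed position inside it, so both insertion and search touch a constant number of cells. The pattern-extraction functions below are hypothetical stand-ins, not the authors' definitions:

```python
class TwoLevelHash:
    """Sketch of a two-level hash table: h1 picks the primary slot,
    h2 resolves collisions inside it. Second-level collisions would
    still need handling (e.g., by resizing); omitted for brevity."""

    def __init__(self, size=101, inner=17):
        self.size, self.inner = size, inner
        self.table = [None] * size

    def _h1(self, key):
        return hash(key) % self.size                  # stand-in for pattern extraction

    def _h2(self, key):
        return (hash(key) // self.size) % self.inner  # second-level position

    def insert(self, key, value):
        i = self._h1(key)
        if self.table[i] is None:
            self.table[i] = [None] * self.inner       # allocate lazily, no wasted rows
        self.table[i][self._h2(key)] = (key, value)

    def search(self, key):
        slot = self.table[self._h1(key)]
        if slot is not None:
            entry = slot[self._h2(key)]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None

t = TwoLevelHash()
t.insert("alice", 1); t.insert("bob", 2)
assert t.search("alice") == 1 and t.search("carol") is None
```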

Key Generation and Management Scheme for Partial Encryption Based on Hash Tree Chain (부분 암호화를 위한 해쉬 트리 체인 기반 키 생성 및 관리 알고리즘)

  • Kim, Kyoung Min;Sohn, Kyu-Seek;Nam, Seung Yeob
    • Journal of the Korea Society for Simulation / v.25 no.3 / pp.77-83 / 2016
  • A new key generation scheme is proposed to support partial encryption and partial decryption of data in a cloud computing environment with minimal key-related traffic overhead. Our proposed scheme employs the concept of a hash tree chain to reduce the number of keys that need to be delivered to the decryption node. The performance of the proposed scheme is evaluated through simulation.
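
One way to realize a hash-tree-chain key hierarchy: every node key is derived from its parent by hashing, so delivering a single inner-node key lets the decryption node derive exactly the block keys beneath it. A sketch assuming SHA-256 and a binary tree (neither is specified in the abstract):

```python
import hashlib

def child_key(parent: bytes, index: int) -> bytes:
    """Derive one child key from a parent key (assumed derivation)."""
    return hashlib.sha256(parent + index.to_bytes(4, "big")).digest()

def leaf_keys(node: bytes, depth: int, path=()):
    """Enumerate the leaf (per-block) keys under a given node key."""
    if depth == 0:
        yield path, node
        return
    for i in (0, 1):
        yield from leaf_keys(child_key(node, i), depth - 1, path + (i,))

root = hashlib.sha256(b"master secret").digest()
# Handing out child_key(root, 0) alone reveals the 4 leaf keys of the
# left subtree and nothing else, so decrypting those blocks requires
# delivering one key instead of four.
left = child_key(root, 0)
assert len(list(leaf_keys(left, 2))) == 4
```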

SELECTIVE HASH-BASED WYNER-ZIV VIDEO CODING

  • Do, Tae-Won;Shim, Hiuk-Jae;Ko, Bong-Hyuck;Jeon, Byeung-Woo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.351-354 / 2009
  • Distributed video coding (DVC) is a new coding paradigm that makes it possible to exploit the statistics among sources only at the decoder and to achieve extremely low-complexity video encoding without loss of coding efficiency. Wyner-Ziv coding, a particular implementation of DVC, reconstructs video by correcting the noise on side information using a channel code. Since good-quality side information leaves less noise for the channel code to remove, generating good side information is very important for overall coding efficiency. However, when there is complex motion among frames, it is very hard to generate good side information without any information about the original frame. In this paper, we propose a method to enhance the quality of the side information using a small amount of additional information about the original frame, in the form of a hash. By having the decoder inform the encoder where the hash has to be transmitted, the side information can be improved enormously with only a small amount of hash data, so the proposed method gains considerable coding efficiency. Our experimental results verify an average PSNR gain of up to 1 dB compared to the well-known DVC codec DISCOVER.

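The essence of hash-based refinement: the encoder transmits a short hash of a coarse version of selected original blocks, and the decoder keeps the motion-compensated candidate that reproduces it. A toy sketch with assumed parameters (truncated SHA-256 over coarsely quantized pixels; the paper does not specify the hash):

```python
import hashlib

def quantize(block: bytes, step: int = 32) -> bytes:
    """Coarsen the block so near-identical candidates hash identically."""
    return bytes(b // step for b in block)

def block_hash(block: bytes, n: int = 4) -> bytes:
    """Short hash, kept small so the extra rate stays negligible."""
    return hashlib.sha256(quantize(block)).digest()[:n]

def refine_side_info(candidates, sent_hash):
    """Keep the candidate matching the encoder's hash, else fall back
    to the default motion-interpolated side information."""
    for cand in candidates:
        if block_hash(cand) == sent_hash:
            return cand
    return candidates[0]
```

Because the decoder tells the encoder which blocks need a hash, the overhead is spent only where motion is too complex for interpolation alone.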

A Hardware Implementation of Whirlpool Hash Function using 64-bit datapath (64-비트 데이터패스를 이용한 Whirlpool 해시 함수의 하드웨어 구현)

  • Kwon, Young-Jin;Kim, Dong-Seong;Shin, Kyung-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.485-487 / 2017
  • The Whirlpool hash function, adopted as standard ISO/IEC 10118-3 by the international standardization organization, is an algorithm that provides message integrity based on an SPN (Substitution-Permutation Network) structure similar to the AES block cipher. In this paper, we describe a hardware implementation of the Whirlpool hash function. The round block is designed with a 64-bit datapath, and encryption is performed over 10 rounds. To minimize area, the key expansion and encryption algorithms use the same hardware. The Whirlpool hash function was modeled using Verilog HDL, and simulation was performed with ModelSim to verify normal operation.

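The shared-hardware point follows from Whirlpool's structure: the key schedule and the data path apply the same round function, so one 64-bit round block can serve both. The sketch below mirrors only that structure; the S-box, permutation, and round constants are trivial stand-ins, not the real Whirlpool tables:

```python
ROUNDS = 10

def round_fn(state: bytes, key: bytes) -> bytes:
    """One SPN round: substitution, permutation, key addition (all toy)."""
    sub = bytes((b * 7 + 1) & 0xFF for b in state)   # toy S-box
    perm = sub[1:] + sub[:1]                         # toy byte permutation
    return bytes(a ^ b for a, b in zip(perm, key))   # key addition

def compress(h: bytes, m: bytes) -> bytes:
    """Miyaguchi-Preneel compression around the 10-round cipher:
    the SAME round_fn runs the key schedule and the data rounds."""
    key, state = h, bytes(a ^ b for a, b in zip(m, h))
    for r in range(ROUNDS):
        rc = bytes([r + 1]) + bytes(len(h) - 1)      # toy round constant
        key = round_fn(key, rc)                      # key-schedule round
        state = round_fn(state, key)                 # data round
    return bytes(a ^ b ^ c for a, b, c in zip(state, m, h))

digest = compress(bytes(8), b"01234567")             # toy 64-bit state
```

In hardware, this symmetry means the round block can be time-multiplexed between key expansion and encryption, which matches the paper's area-minimization approach.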

Improving the Lifetime of NAND Flash-based Storages by Min-hash Assisted Delta Compression Engine (MADE (Minhash-Assisted Delta Compression Engine) : 델타 압축 기반의 낸드 플래시 저장장치 내구성 향상 기법)

  • Kwon, Hyoukjun;Kim, Dohyun;Park, Jisung;Kim, Jihong
    • Journal of KIISE / v.42 no.9 / pp.1078-1089 / 2015
  • In this paper, we propose the Min-hash Assisted Delta-compression Engine (MADE) to improve the lifetime of NAND flash-based storages at the device level. MADE effectively reduces write traffic to NAND flash through a novel delta compression scheme. Delta compression performance is optimized by introducing a min-hash based LSH (Locality-Sensitive Hash) and combining it efficiently with our delta compression method. We also developed a delta encoding technique whose functionality is equivalent to deduplication plus lossless compression. Our experimental results show that MADE reduces the amount of data written to NAND flash by up to 90%, outperforming a simple combination of deduplication and lossless compression schemes by 12% on average.
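
MADE's pipeline can be approximated as: min-hash signature → most similar stored page → delta write (XOR plus lossless compression) when similar enough, full write otherwise. Every parameter below (shingle size, number of permutations, similarity threshold) is a guess for illustration:

```python
import hashlib, zlib

def minhash(page: bytes, k: int = 8, perms: int = 16):
    """Min-hash signature over the page's k-byte shingles (the LSH step)."""
    shingles = {page[i:i + k] for i in range(len(page) - k + 1)}
    return tuple(min(hashlib.sha256(bytes([seed]) + s).digest()
                     for s in shingles)
                 for seed in range(perms))

def similarity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def delta_write(new_page: bytes, store: dict):
    """Write new_page as a delta against the most similar stored page."""
    sig = minhash(new_page)
    base = max(store, key=lambda p: similarity(sig, store[p]), default=None)
    if base is not None and similarity(sig, store[base]) > 0.5:
        delta = zlib.compress(bytes(a ^ b for a, b in zip(new_page, base)))
        return "delta", delta              # far fewer bytes reach the flash
    store[new_page] = sig                  # keep this page as a future base
    return "full", zlib.compress(new_page)
```

An identical page XORs to all zeros (deduplication as a special case) and a similar page compresses to nearly nothing, which is how delta encoding can subsume both deduplication and plain lossless compression.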

A Study on Group Key Generation and Exchange using Hash Collision in M2M Communication Environment (M2M 통신 환경에서 해시 충돌을 이용한 그룹키 생성 및 교환 기법 연구)

  • Song, Jun-Ho;Kim, Sung-Soo;Jun, Moon-Seog
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.5 / pp.9-17 / 2019
  • As the IoT environment becomes more widespread, the safety of the M2M environment, which establishes communication between objects without human intervention, becomes important. Due to the nature of the wireless communication environment, there is a possibility of exposure to security threats in various respects, such as data exposure, falsification, tampering, deletion, and privacy violation, so secure communication technology is considered an important requirement. In this paper, we propose a new method for group key generation and exchange using a trapdoor hash collision in the existing M2M communication environment, together with a mechanism for mutual authentication between the device and the gateway after the group key is generated. By exploiting the specificity of the collision message and the collision hash, the proposed method resists attacks such as spoofing, man-in-the-middle, and replay attacks in the group communication section, and it demonstrates safety against the vulnerability of hash collisions.
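
The abstract does not define its trapdoor hash collision primitive. One standard construction with exactly this behavior is a trapdoor (chameleon) hash, where only the trapdoor holder can compute colliding inputs; the toy discrete-log version below is therefore an assumption about the construction, with deliberately tiny parameters:

```python
# Chameleon hash CH(m, r) = g^m * h^r mod p with trapdoor x (h = g^x).
p, q, g = 23, 11, 2        # toy group: g has prime order q modulo p
x = 7                      # trapdoor, e.g. held by the gateway
h = pow(g, x, p)

def ch(m: int, r: int) -> int:
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m1: int, r1: int, m2: int) -> int:
    """Use the trapdoor to find r2 with ch(m2, r2) == ch(m1, r1)."""
    return ((m1 - m2) * pow(x, -1, q) + r1) % q

m1, r1, m2 = 3, 5, 9
r2 = collide(m1, r1, m2)
assert ch(m1, r1) == ch(m2, r2)   # all group members see the same digest
```

The trapdoor holder can thus bind fresh group-key material to an unchanged digest, while anyone without x cannot forge such collisions.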

Analysis and Elimination of Side Channels during Duplicate Identification in Remote Data Outsourcing (원격 저장소 데이터 아웃소싱에서 발생하는 중복 식별 과정에서의 부채널 분석 및 제거)

  • Koo, Dongyoung
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.4 / pp.981-987 / 2017
  • The proliferation of cloud computing services reduces maintenance and management costs by allowing data to be outsourced to dedicated third-party remote storage. At the same time, the majority of storage service providers have adopted data deduplication for efficient utilization of storage resources. When a hash tree is employed for duplicate identification as part of the deduplication process, the size of the attested data and partial information about the tree can be deduced by eavesdropping. To mitigate such side channels, this paper presents a new duplicate identification method that exploits a multi-set hash function.
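
A multiset hash makes the digest of a collection of blocks independent of their order and grouping, which removes the tree-shape information an eavesdropper could otherwise deduce. A minimal additive construction in the style of Clarke et al. (the paper's exact function may differ):

```python
import hashlib

MOD = 1 << 256

def elem_hash(e: bytes) -> int:
    return int.from_bytes(hashlib.sha256(e).digest(), "big")

class MultisetHash:
    """Additive multiset hash: add elements in any order, same digest."""
    def __init__(self):
        self.acc = 0
    def add(self, e: bytes):
        self.acc = (self.acc + elem_hash(e)) % MOD
    def digest(self) -> int:
        return self.acc

a, b = MultisetHash(), MultisetHash()
for blk in (b"x", b"y", b"z"): a.add(blk)
for blk in (b"z", b"x", b"y"): b.add(blk)
assert a.digest() == b.digest()   # no ordering or tree structure leaks
```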

Improved User Privacy in Social Networks Based on Hash Function

  • Alrwuili, Kawthar;Hendaoui, Saloua
    • International Journal of Computer Science & Network Security / v.22 no.1 / pp.97-104 / 2022
  • In recent years, data privacy has become increasingly important. The goal of network cryptography is to protect data while it is being transmitted over the internet or a network. Social media and smartphone apps collect a great deal of personal data which, if exposed, can be damaging to privacy; as a result, sensitive data is exposed and shared without the data owner's consent. Personal information is one of the central concerns in data privacy, and protecting user data and sensitive information is the first step to keeping it private. User data from many applications can be found on other websites. In this paper, we discuss the issue of privacy and suggest a mechanism for keeping user data hidden from other applications.
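
The abstract leaves the mechanism unspecified; one common hash-based building block for this goal is keyed pseudonymization, replacing identifiers with keyed digests before they leave the owner's control. A sketch (HMAC-SHA-256 is an assumption, not necessarily the paper's choice):

```python
import hashlib, hmac, os

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a personal identifier with a keyed hash before sharing it."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

key = os.urandom(32)                      # held by the data owner only
token = pseudonymize("alice@example.com", key)
# Other applications see only `token`; without `key` they can neither
# recover the e-mail address nor correlate the user across sites.
```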