• Title/Summary/Keyword: Hash Data


An Exploratory Study on Block chain based IoT Edge Devices for Plant Operations & Maintenance(O&M) (플랜트 O&M을 위한 블록체인 기반 IoT Edge 장치의 적용에 관한 탐색적 연구)

  • Ryu, Yangsun;Park, Changwoo;Lim, Yongtaek
    • Journal of the Korean Society of Systems Engineering / v.15 no.1 / pp.34-42 / 2019
  • With IoT and the 4th Industrial Revolution receiving great attention, the need for smart and effective plant systems has come to the fore. The Smart Factory is a key realm of IoT, applying the concept of optimizing the entire process, and it presents a new, flexible production paradigm based on data collected from the numerous sensors installed in a plant. Wireless sensor network technology in particular is drawing attention as a core Smart Factory technology, and research on interfacing these technologies is actively in progress. In addition, IoT devices for plant-industry security and highly reliable network protocols are under development to cope with high-risk plant facilities. Meanwhile, blockchain can support high security and reliability because of the hash algorithm at the core of its structure and transactions, the ledger shared among all nodes, and the immutability of its data. For this reason, this research presents blockchain as a method of preserving the security and reliability of wireless communication. It establishes key concepts for applying blockchain-based IoT edge devices to plant O&M (Operations and Maintenance), and performs performance verification with test devices, presenting key indicator data such as transaction elapsed time and CPU consumption rate.
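The integrity property the abstract attributes to blockchain, with each block carrying the hash of its predecessor, can be sketched in a few lines of Python. The field names below are illustrative, not taken from the paper:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical (sorted-key) JSON encoding of the block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"sensor": "pump-01", "temp": 71.2})  # hypothetical sensor record
append_block(chain, {"sensor": "pump-01", "temp": 74.8})
assert verify_chain(chain)

chain[0]["data"]["temp"] = 99.9  # tamper with an earlier reading
assert not verify_chain(chain)
```

Any change to an earlier block invalidates every later link, which is what makes sensor records on the edge device tamper-evident.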

Distributed data deduplication technique using similarity based clustering and multi-layer bloom filter (SDS 환경의 유사도 기반 클러스터링 및 다중 계층 블룸필터를 활용한 분산 중복제거 기법)

  • Yoon, Dabin;Kim, Deok-Hwan
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.60-70 / 2018
  • Software-defined storage (SDS) is being deployed in cloud environments to let multiple users virtualize physical servers, but a solution for optimizing space efficiency with limited physical resources is needed. In conventional data deduplication systems, it is difficult to deduplicate redundant data uploaded to distributed storage. In this paper, we propose a distributed deduplication method using similarity-based clustering and a multi-layer bloom filter. A Rabin hash is applied to determine the degree of similarity between virtual machine servers and to cluster similar virtual machines, which improves deduplication efficiency compared to deduplicating each storage node individually. In addition, a multi-layer bloom filter is incorporated into the deduplication process to shorten processing time by reducing the number of false positives. Experimental results show that the proposed method improves the deduplication ratio by 9% compared to a deduplication method using IP-address-based clusters, with no difference in processing time.
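A minimal single-layer Bloom filter conveys the membership-test and false-positive trade-off that the paper's multi-layer design targets (the multi-layer structure and its parameters are not reproduced here):

```python
import hashlib

class BloomFilter:
    """Simple Bloom filter: k hashed bit positions per item over m bits."""

    def __init__(self, m: int = 8192, k: int = 4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: bytes):
        # Derive k independent positions by salting SHA-256 with a counter.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        """False means definitely absent; True may be a false positive."""
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
assert not bf.might_contain(b"chunk-fingerprint-1")  # empty filter: definitely absent
bf.add(b"chunk-fingerprint-1")
assert bf.might_contain(b"chunk-fingerprint-1")      # added items are always found
```

A deduplicator consults the filter before the expensive index lookup: a negative answer skips the lookup entirely, and only (rare) positives pay the full cost.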

Implementation of the Stone Classification with AI Algorithm Based on VGGNet Neural Networks (VGGNet을 활용한 석재분류 인공지능 알고리즘 구현)

  • Choi, Kyung Nam
    • Smart Media Journal / v.10 no.1 / pp.32-38 / 2021
  • Image classification of photographs through deep learning has been a very active research field for the past several years. In this paper, we propose a method for automatically discriminating images of domestically sourced stone through deep learning. Python's hash library is used to scan 300×300-pixel photographs of granites such as Hwangdeungseok, Goheungseok, and Pocheonseok; during data preprocessing, duplicate images of each stone are detected and images with identical hash values are removed, producing the training images for each stone. To utilize VGGNet, the images of each stone are resized to 224×224 pixels and trained in VGG16 with an 80%/20% split between training and validation data. After training, the loss-function and accuracy graphs were generated, and the deep learning model's predictions were output for the three kinds of stone images.
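The hash-based duplicate removal described in the preprocessing step can be sketched as follows; the file names and byte contents are invented for illustration:

```python
import hashlib

def dedupe_images(images: dict) -> list:
    """images maps name -> raw bytes; return names kept after hash dedup."""
    seen = set()
    kept = []
    for name, data in images.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest not in seen:  # first copy wins; later identical bytes drop
            seen.add(digest)
            kept.append(name)
    return kept

imgs = {
    "granite_a.jpg": b"raw-bytes-1",
    "granite_b.jpg": b"raw-bytes-1",  # byte-identical duplicate of a
    "granite_c.jpg": b"raw-bytes-2",
}
assert dedupe_images(imgs) == ["granite_a.jpg", "granite_c.jpg"]
```

Note that an exact hash only catches byte-identical duplicates; re-encoded or resized copies of the same photo hash differently and would need perceptual hashing instead.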

Enabling Efficient Verification of Dynamic Data Possession and Batch Updating in Cloud Storage

  • Qi, Yining;Tang, Xin;Huang, Yongfeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.6 / pp.2429-2449 / 2018
  • Dynamic data possession verification is a common requirement in cloud storage systems. After a client outsources its data to the cloud, it needs not only to check the integrity of its data but also to verify whether updates were executed correctly. Previous research has proposed various schemes based on the Merkle Hash Tree (MHT) and made initial improvements to prevent tree imbalance. This paper takes one step further and asks: is anything left to optimize? We study how to raise the efficiency of data dynamics by improving the query and rebalancing steps, using a new data structure called the Rank-Based Merkle AVL Tree (RB-MAT). Furthermore, we fill the gap of verifying multiple update operations at the same time with a novel batch updating scheme. Experimental results show that our scheme is more efficient than existing methods.
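The plain Merkle Hash Tree the paper builds on can be sketched as a pairwise fold of leaf hashes; the RB-MAT itself, with its ranks and AVL rebalancing, is beyond this sketch:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    """Fold leaf hashes pairwise up to a single root (duplicate last if odd)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"blk0", b"blk1", b"blk2", b"blk3"]  # hypothetical outsourced blocks
root = merkle_root(blocks)

blocks[2] = b"tampered"
assert merkle_root(blocks) != root  # any single-block change moves the root
```

The client only needs to store the root: the server proves possession of a block by returning it with its sibling hashes, and an incorrect or stale block cannot reproduce the stored root.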

A Stable Evidence Collection Procedure of a Volatile Data in Research (휘발성 증거자료의 무결한 증거확보 절차에 관한 연구)

  • Kim, Yong-Ho;Lee, Dong-Hwi;J. Kim, Kui-Nam
    • Convergence Security Journal / v.6 no.3 / pp.13-19 / 2006
  • This paper explains how to securely collect important volatile data from a computer system when the network is unavailable because of an incident. The main idea is that the first investigator at the crime scene collects the volatile data by running scripts stored on USB media. The investigator then generates a hash value of the volatile data and obtains a witness signature. After that, the volatile data is analyzed, with its authenticity preserved, in a forensics system.

  • PDF
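The hashing step of this procedure, fingerprinting the collected volatile data so that a witness can sign one short value, might look like the following sketch; the dump contents are invented:

```python
import hashlib

def evidence_digest(chunks) -> str:
    """Hash a volatile-data stream incrementally, chunk by chunk,
    so large dumps never need to be held in memory twice."""
    ctx = hashlib.sha256()
    for chunk in chunks:
        ctx.update(chunk)
    return ctx.hexdigest()

# The investigator records this digest and has the witness sign it;
# re-hashing the preserved copy later must reproduce the same value.
dump = [b"process list ...", b"open sockets ...", b"arp cache ..."]
record = evidence_digest(dump)
assert evidence_digest(dump) == record  # unchanged copy verifies
```

Incremental hashing also means the digest depends only on the concatenated bytes, not on how the collection script happened to chunk them.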

GOMS: Large-scale ontology management system using graph databases

  • Lee, Chun-Hee;Kang, Dong-oh
    • ETRI Journal / v.44 no.5 / pp.780-793 / 2022
  • Large-scale ontology management is one of the main issues in using ontology data practically. Although many approaches built on relational DBMSs (RDBMSs) or object-oriented DBMSs (OODBMSs) have been proposed for large-scale ontology management systems, they have several limitations because ontology data structures are intrinsically different from the traditional data structures of RDBMSs and OODBMSs. In addition, users have difficulty using ontology data because many terminologies (ontology nodes) in large-scale ontology data match a given string keyword. Therefore, in this study, we propose a graph-database-based ontology management system (GOMS) to efficiently manage large-scale ontology data. GOMS uses a graph DBMS and provides new query templates to help users find key concepts or instances. Furthermore, to run queries with multiple joins and path conditions efficiently, we propose GOMS encoding as a filtering tool and develop hash-based join processing algorithms in the graph DBMS. Finally, we experimentally show that GOMS can process various types of queries efficiently.
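A classic in-memory hash join of the kind the abstract mentions can be sketched as follows; the node and edge rows are illustrative, not GOMS's actual storage format:

```python
def hash_join(left, right, key):
    """Build a hash table on the smaller input, then probe with the other."""
    build, probe = (left, right) if len(left) <= len(right) else (right, left)
    table = {}
    for row in build:                       # build phase: O(|build|)
        table.setdefault(row[key], []).append(row)
    joined = []
    for row in probe:                       # probe phase: O(|probe|)
        for match in table.get(row[key], []):
            joined.append({**match, **row})
    return joined

nodes = [{"id": 1, "label": "Person"}, {"id": 2, "label": "Company"}]
edges = [{"id": 1, "type": "worksAt"}]
assert hash_join(nodes, edges, "id") == [
    {"id": 1, "label": "Person", "type": "worksAt"}
]
```

Building on the smaller side keeps the hash table compact, which matters when a path query chains several such joins.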

End-to-end Neural Model for Keyphrase Extraction using Twitter Hash-tag Data (트위터 해시 태그를 이용한 End-to-end 뉴럴 모델 기반 키워드 추출)

  • Lee, Young-Hoon;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2018.10a / pp.176-178 / 2018
  • Twitter is a social networking service for exchanging short messages of up to 140 characters. Twitter hash-tags usually link to the key words or main topics of a sentence, and this paper uses that information for keyword extraction. A sentence representation is obtained through a Character CNN and a Bi-LSTM, and a representation for each span is generated from the sentence representation. A score is computed for each span from its representation, and spans with high scores are used to extract keywords.

  • PDF

A Watermark for Data Embedding and Image Verification (데이터의 삽입과 무결성이 보장되는 워터마킹)

  • 윤호빈;박근수
    • Proceedings of the Korean Information Science Society Conference / 2001.04a / pp.850-852 / 2001
  • Fragile watermarking embeds imperceptible data into an original image in order to guarantee the image's integrity. This paper presents a fragile watermarking method that allows binary data to be embedded and guarantees the integrity of both the original image and the embedded data. The proposed method uses a hash function and a one-time pad based on a PRBG (pseudo-random bit generator), and can store about 2.8125 bits of information per pixel.

  • PDF
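The hash-plus-PRBG one-time pad mentioned in the abstract can be approximated with SHA-256 in counter mode; this is a generic construction for illustration, not the paper's exact generator:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Hash-based PRBG: SHA-256 over key || counter, concatenated to n bytes."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_pad(data: bytes, key: bytes) -> bytes:
    """One-time-pad-style masking of the embedded payload (self-inverse)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

payload = b"owner-id and checksum"          # hypothetical embedded payload
masked = xor_pad(payload, b"secret-key")
assert xor_pad(masked, b"secret-key") == payload  # XOR twice restores the data
```

Masking the payload this way keeps the embedded bits statistically noise-like, so only a holder of the key can extract or verify them.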

An Efficient Index Structure for Spatial Data in Main Memory Database (주기억 데이타베이스에서 공간 데이타에 대한 효율적인 인덱스 구조)

  • 강은호;김경창
    • Proceedings of the Korean Information Science Society Conference / 2003.04a / pp.794-796 / 2003
  • Unlike conventional disk-based database systems, main-memory database systems are chiefly concerned with fast processing and efficient use of main memory. This paper presents an efficient index structure for spatial data in a main-memory database. Index techniques previously proposed for main-memory databases, such as the T-tree and hash-based methods, are designed for one-dimensional data and cannot be applied to spatial data. To overcome this limitation, this paper adds R-tree concepts to the T-tree.

  • PDF
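The R-tree concept being grafted onto the T-tree rests on minimum bounding rectangles (MBRs); a minimal sketch of that primitive, independent of any particular tree layout:

```python
from dataclasses import dataclass

@dataclass
class MBR:
    """Minimum bounding rectangle: the R-tree key for spatial objects."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def intersects(self, other: "MBR") -> bool:
        # Rectangles overlap unless one lies entirely to a side of the other.
        return not (self.xmax < other.xmin or other.xmax < self.xmin or
                    self.ymax < other.ymin or other.ymax < self.ymin)

    def contains(self, x: float, y: float) -> bool:
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

a = MBR(0, 0, 10, 10)
assert a.intersects(MBR(5, 5, 15, 15))
assert not a.intersects(MBR(20, 20, 30, 30))
assert a.contains(3, 3) and not a.contains(12, 3)
```

An R-tree node stores the MBR enclosing its subtree, so a range query descends only into subtrees whose MBRs intersect the query window; a one-dimensional key, as in a plain T-tree, cannot express this pruning.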

An Efficient Multi-Signature Scheme for Shared Data in a Cloud Storage (클라우드 스토리지의 공유 데이터에 대한 효율적 다중 서명 기법)

  • Kim, Young-Sik
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.11 / pp.967-969 / 2013
  • In this paper, we propose an efficient multi-signature scheme based on a bilinear mapping for shared data in the cloud and prove its security using the hardness of the computational Diffie-Hellman problem. For verification, the scheme uses the sum of the hash values of the stored data rather than the entire data, which makes it feasible to reduce the size of the downloaded data.
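The idea of verifying against a sum of hash values rather than the full data can be illustrated as follows; the paper's actual scheme uses bilinear pairings, which this sketch omits entirely:

```python
import hashlib

MOD = 1 << 256  # aggregate digests in a fixed-width modular group

def digest_int(block: bytes) -> int:
    """Interpret a SHA-256 digest as a 256-bit integer."""
    return int.from_bytes(hashlib.sha256(block).digest(), "big")

def hash_sum(blocks) -> int:
    """Aggregate per-block digests so the verifier checks one small value."""
    return sum(digest_int(b) for b in blocks) % MOD

stored = [b"shared-doc-part-0", b"shared-doc-part-1", b"shared-doc-part-2"]
tag = hash_sum(stored)

assert hash_sum(stored) == tag       # intact copy verifies
stored[1] = b"modified"
assert hash_sum(stored) != tag       # a changed block shifts the aggregate
```

Note the caveat that a naive hash sum is order-independent (swapping two blocks verifies), which is one reason real schemes bind block indices into the signed value.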