• Title/Summary/Keyword: Hash Data

Location Generalization Method of Moving Object using $R^*$-Tree and Grid ($R^*$-Tree와 Grid를 이용한 이동 객체의 위치 일반화 기법)

  • Ko, Hyun; Kim, Kwang-Jong; Lee, Yon-Sik
    • Journal of the Korea Society of Computer and Information, v.12 no.2 s.46, pp.231-242, 2007
  • The existing pattern mining methods [1,2,3,4,5,6,11,12,13] do not apply location generalization to the location history data of moving objects; they simply extract frequent patterns that carry no spatio-temporal constraint on movement within a specific space. It is therefore difficult to apply those methods to frequent pattern mining under spatio-temporal constraints, such as finding optimal movement or scheduling paths among specific points. Those methods also require large amounts of memory, since they keep a pattern tree in memory to reduce repeated database scans. A more effective pattern mining technique is needed to solve these problems. In this paper, we propose a new location generalization method that converts detailed-level location data into meaningful spatial information, reducing both the processing time and the storage required for pattern mining over a massive history data set of moving objects. In the preprocessing stage of pattern mining, the proposed method generalizes the location attributes of a moving object into 2D spatial areas based on an $R^*$-Tree and an Area Grid Hash Table (AGHT), creates moving sequences from them, and thereby enables efficient spatial moving-pattern mining, as sketched below.

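The generalization step lends itself to a compact illustration. Below is a minimal sketch, assuming a fixed-size uniform grid and a plain dictionary standing in for the paper's Area Grid Hash Table (AGHT); the R*-Tree index and the paper's actual cell-assignment rules are not reproduced.

```python
# Hypothetical sketch: generalize raw (x, y) positions into coarse grid
# cells and collapse consecutive duplicates into a moving sequence.
CELL_SIZE = 100.0  # metres per grid cell (assumed value)

def cell_id(x: float, y: float) -> tuple:
    """Map a point to the 2D grid cell that contains it."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def moving_sequence(trajectory):
    """Convert a raw trajectory into a sequence of distinct grid cells."""
    seq = []
    for x, y in trajectory:
        c = cell_id(x, y)
        if not seq or seq[-1] != c:  # drop consecutive repeats
            seq.append(c)
    return seq

# A plain dict keyed by cell ID stands in for the AGHT here.
aght = {}
track = [(12.0, 18.5), (47.3, 60.1), (130.8, 95.2), (155.0, 110.4)]
for cell in moving_sequence(track):
    aght[cell] = aght.get(cell, 0) + 1
print(aght)  # {(0, 0): 1, (1, 0): 1, (1, 1): 1}
```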

Blockchain Oracle for Random Number Generator using Irregular Big Data (비정형 빅데이터를 이용한 난수생성용 블록체인 오라클)

  • Jung, Seung Wook
    • Convergence Security Journal, v.20 no.2, pp.69-76, 2020
  • Blockchain 2.0 supports programmable smart contracts for various distributed applications. However, the execution environment of a smart contract is confined to the blockchain, so a contract can obtain only deterministic information such as the block height and block hash. Applications that require randomness, such as lotteries or betting, must therefore use an oracle service that supplies information from outside the blockchain. This paper develops a random number generator oracle service that uses irregular big data as its entropy source. The randomness of the bit sequences generated by the oracle service is tested with NIST SP 800-22. The paper also describes the cost advantage of irregular big data in this model compared with hardware entropy sources. A sketch of the entropy-extraction idea follows.
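
As a rough illustration of the entropy-source idea, the sketch below hashes chunks of arbitrary data into a seed and stretches the seed into a byte stream by iterated hashing. The chunking and mixing scheme are assumptions for illustration, not the paper's oracle design, and `os.urandom` merely stands in for the irregular big data source.

```python
import hashlib
import os

def seed_from_chunks(chunks: list) -> bytes:
    """Fold each irregular-data chunk into one SHA-256 state."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.digest()

def random_bytes(seed: bytes, n: int) -> bytes:
    """Stretch the seed into n output bytes with a hash counter."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

chunks = [os.urandom(64) for _ in range(4)]  # stand-in for irregular big data
print(random_bytes(seed_from_chunks(chunks), 16).hex())
```

A bit sequence generated this way is the kind of output one would feed to the NIST SP 800-22 statistical test suite mentioned in the abstract.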

User Authentication Protocol through Distributed Process for Cloud Environment (클라우드 환경을 위한 분산 처리 사용자 인증 프로토콜)

  • Jeong, Yoon-Su; Lee, Sang-Ho
    • Journal of the Korea Institute of Information Security & Cryptology, v.22 no.4, pp.841-849, 2012
  • Cloud computing, which provides IT services and computing resources over the Internet, is attracting attention. However, data stored on a cloud server can be exposed even when it is saved in encrypted form. In this paper, a user authentication protocol is proposed that lets a user at a different physical location supply secret data safely while preventing others from using it illegally. When a user accesses a particular server remotely, the proposed protocol derives the user's authentication information held by the server using a one-way hash function and XOR operations, thereby addressing the user-security problem of the cloud; a sketch of this hash-and-XOR style of exchange follows.
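
The abstract does not give the message layout, so the sketch below shows only the general hash-plus-XOR style of exchange it names: the server stores a hash of the secret, and the user masks that hash with a nonce hash so the secret never crosses the wire in the clear. All field names are hypothetical, not the paper's protocol.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Registration: the server stores only a hash of the user's secret.
secret = b"user-password"
stored = h(secret)

# Login: the server sends a fresh nonce; the user replies with the
# hashed secret masked by a hash of that nonce.
nonce = os.urandom(32)
proof = xor(h(secret), h(nonce))

# Verification: the server unmasks with the same nonce hash and compares.
assert xor(proof, h(nonce)) == stored
print("authenticated")
```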

Scalable Blockchain Storage Model Based on DHT and IPFS

  • Chen, Lu; Zhang, Xin; Sun, Zhixin
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.7, pp.2286-2304, 2022
  • Blockchain is a distributed ledger that combines technologies such as cryptography, consensus mechanisms, peer-to-peer transmission, and timestamping. The rapid development of blockchain has attracted attention from all walks of life, but storage scalability issues have hindered its application. In this paper, a scalable blockchain storage model based on a Distributed Hash Table (DHT) and the InterPlanetary File System (IPFS) is proposed. The paper reviews the current research status of scalable blockchain storage models and the basic principles of DHTs and IPFS, then explains the model construction and workflow in detail, including the DHT network construction mechanism, the block heat identification mechanism, the new node initialization mechanism, and the block data read/write mechanism. Experimental results show that the model reduces the storage burden on nodes while allowing the blockchain network to accommodate more local blocks at the same block height. A sketch of the underlying DHT routing idea follows.
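
As background for the DHT side of the model, here is a minimal consistent-hashing sketch: blocks and nodes are hashed onto the same ID ring, and a block is routed to the first node clockwise from its ID. Node names and the example CID are made up; the paper's own DHT construction and block-heat mechanisms are not shown.

```python
import hashlib
from bisect import bisect_left

def ring_id(key: str) -> int:
    """Hash a name onto a 64-bit identifier ring."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((ring_id(n), n) for n in nodes)

    def node_for(self, block_ref: str) -> str:
        """Route a block to the first node clockwise from its ID."""
        ids = [i for i, _ in self.ring]
        pos = bisect_left(ids, ring_id(block_ref)) % len(self.ring)
        return self.ring[pos][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("QmExampleBlockCID"))  # hypothetical IPFS-style reference
```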

An Integrated Accurate-Secure Heart Disease Prediction (IAS) Model using Cryptographic and Machine Learning Methods

  • Syed Anwar Hussainy F; Senthil Kumar Thillaigovindan
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.2, pp.504-519, 2023
  • Heart disease is becoming the leading cause of death worldwide. Diagnosing cardiac illness is a difficult task that requires both expertise and extensive knowledge, and machine learning (ML) is becoming steadily more important in the medical field. Most prior work has concentrated on predicting cardiac disease, but the precision of the results is low and data integrity is uncertain. To address these difficulties, this research creates an Integrated Accurate-Secure Heart Disease Prediction (IAS) Model based on Deep Convolutional Neural Networks (DCNN). Heart-related medical data is collected and pre-processed; features are then extracted from both signals and acquired data and used to train a classifier. The DCNN categorizes received sensor data as normal or abnormal, and the results are safeguarded by an integrity-validation mechanism based on a hash algorithm (sketched below). The system's performance is evaluated by comparing the proposed model to existing ones: the proposed cardiac disease diagnosis model surpasses previous techniques, attaining an accuracy of 98.5% on the largest set of records, higher than the available classifiers.
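
The integrity-validation step can be pictured with a short sketch: attach a keyed hash to each prediction record so later tampering is detectable. The key handling and record fields below are assumptions, not the paper's exact mechanism.

```python
import hashlib
import hmac
import json

KEY = b"shared-integrity-key"  # assumed to be provisioned out of band

def tag(record: dict) -> str:
    """Compute a keyed hash over a canonical encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

record = {"patient_id": 42, "prediction": "abnormal", "score": 0.93}
record_tag = tag(record)

assert hmac.compare_digest(tag(record), record_tag)   # intact record passes
record["prediction"] = "normal"                       # simulated tampering
assert not hmac.compare_digest(tag(record), record_tag)
print("tampering detected")
```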

A Spatio-Temporal Clustering Technique for the Moving Object Path Search (이동 객체 경로 탐색을 위한 시공간 클러스터링 기법)

  • Lee, Ki-Young; Kang, Hong-Koo; Yun, Jae-Kwan; Han, Ki-Joon
    • Journal of Korea Spatial Information System Society, v.7 no.3 s.15, pp.67-81, 2005
  • Recently, with the development of Geographic Information Systems, interest in and research on new application services such as Location-Based Services and Telematics, which provide emergency services, neighborhood information search, and route search, have been increasing. In the spatio-temporal databases used in these fields, a user's query typically fixes the current time on the time axis and then queries spatial and aspatial attributes; when the query range on the time axis is wide, the search is hard to process efficiently. The snapshot method, which summarizes the location data of moving objects, was introduced to address this problem, but when the range of data to be stored is wide it requires more storage space, and snapshots are created even for space that is rarely searched, so storage space and memory are wasted. In this paper, we therefore propose the Hash-based Spatio-Temporal Clustering Algorithm (H-STCA), which extends the two-dimensional spatial hash algorithm previously used for spatial clustering to three dimensions, overcoming the disadvantages of the snapshot method (a bucket-key sketch follows this entry). We also propose a knowledge-extraction algorithm that, based on H-STCA, extracts knowledge for moving-object path search from past location data. In performance evaluations over huge amounts of moving-object data, the snapshot clustering method using H-STCA outperformed both spatio-temporal index methods and the original snapshot method in search time, storage structure construction time, and optimal path search time; the advantage grew as the number of moving objects increased.

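The core idea of extending a 2D spatial hash with a time axis can be sketched briefly: points that are close in space and time land in the same bucket. Cell sizes are assumed values; the snapshot summarization and knowledge-extraction steps of H-STCA are not reproduced.

```python
from collections import defaultdict

X_CELL, Y_CELL, T_CELL = 100.0, 100.0, 60.0  # metres, metres, seconds (assumed)

def st_bucket(x: float, y: float, t: float) -> tuple:
    """Hash a spatio-temporal point into a 3D grid bucket."""
    return (int(x // X_CELL), int(y // Y_CELL), int(t // T_CELL))

clusters = defaultdict(list)
points = [(10, 20, 5), (30, 40, 50), (35, 45, 200), (400, 400, 50)]
for x, y, t in points:
    clusters[st_bucket(x, y, t)].append((x, y, t))

for bucket, members in sorted(clusters.items()):
    print(bucket, members)
```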

Big Data Management Scheme using Property Information based on Cluster Group in adopt to Hadoop Environment (하둡 환경에 적합한 클러스터 그룹 기반 속성 정보를 이용한 빅 데이터 관리 기법)

  • Han, Kun-Hee; Jeong, Yoon-Su
    • Journal of Digital Convergence, v.13 no.9, pp.235-242, 2015
  • Social network technology has increased interest in big data services and their development. However, data stored across distributed servers rather than on a central server is not easy to locate and extract. In this paper, we propose a big data management technique that minimizes the time needed to retrieve desired information from the content server and the management server providing a big data service. The proposed method classifies data into groups according to the type, features, and characteristics of the big data, links the data within each group, and applies a hash chain to the attribute information (a sketch follows this entry). In addition, multi-attribute information is attached to the data so that index information recording when the data was generated and stored on the distributed servers can be extracted, improving the processing speed of data classification. Experimental results show that the average seek time improved by an average of 14.6% across the number of cluster groups, while the data processing time across the number of keywords was reduced by an average of 13%.
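
The hash-chain linkage over attribute records can be illustrated in a few lines: each record stores the hash of its predecessor, so group membership and ordering are cheap to verify. The field names are hypothetical, not the paper's schema.

```python
import hashlib
import json

def link(prev_hash: str, record: dict) -> str:
    """Chain a record to its predecessor by hashing both together."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

chain, prev = [], "0" * 64  # genesis value (assumed convention)
for rec in [{"type": "video", "feature": "hd"},
            {"type": "text", "feature": "tagged"},
            {"type": "image", "feature": "geo"}]:
    prev = link(prev, rec)
    chain.append((prev, rec))

# Verify the whole chain by recomputing every link.
prev = "0" * 64
for digest, rec in chain:
    assert digest == link(prev, rec)
    prev = digest
print("chain verified,", len(chain), "records")
```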

Securing Sensitive Data in Cloud Storage (클라우드 스토리지에서의 중요데이터 보호)

  • Lee, Shir-Ly; Lee, Hoon-Jae
    • Proceedings of the Korea Information Processing Society Conference, 2011.04a, pp.871-874, 2011
  • The fast emergence of network technology and the high demand for computing resources have prompted many organizations to outsource their storage and computing needs. Cloud-based storage services such as Microsoft's Azure and Amazon's S3 allow customers to store and retrieve any amount of data, at any time, from anywhere via the Internet, and their scalability and elasticity help customers reduce IT administration and maintenance costs. Cloud storage thus brings substantial benefits by cutting costs through increased operating and economic efficiency. Without appropriate security and privacy measures, however, it can become a major liability: as data is produced, transferred, and stored off-premises in multi-tenant cloud storage, it becomes vulnerable to unauthorized disclosure and unauthorized modification. An attacker may alter data in flight or at rest on disk, so data must be secured throughout its entire life cycle. Traditional cryptographic primitives for data protection cannot be adopted directly, for three reasons: users lose control of data held on off-premises cloud servers; cloud storage is not just a third-party data warehouse, since stored data is frequently updated by users; and cloud computing runs in a simultaneous, cooperative, and distributed manner. In our proposed mechanism we protect the integrity, authenticity, and confidentiality of cloud-based data with an encrypt-then-upload approach (sketched below), applying a modified proxy re-encryption protocol. The whole process never reveals clear data to any third party, including the cloud provider, ensuring that only an authorized user holding the corresponding token can access the data and preventing data from being shared without the owner's permission. Besides preventing cloud storage providers from gaining or granting unauthorized access, our scheme also protects data integrity using a hash function.
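
The encrypt-then-upload concept, stripped of the paper's proxy re-encryption machinery, can be sketched as follows: encrypt client-side, upload the ciphertext, and keep a digest to integrity-check the stored object later. This uses the third-party `cryptography` package and is an illustrative stand-in, not the proposed scheme.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # stays with the data owner
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"sensitive record", None)

# Upload ciphertext + nonce; retain the digest for later verification.
digest = hashlib.sha256(ciphertext).hexdigest()

# Later: verify the downloaded object before decrypting it.
assert hashlib.sha256(ciphertext).hexdigest() == digest
print(AESGCM(key).decrypt(nonce, ciphertext, None))  # b'sensitive record'
```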

A Study on Secure Routing Technique using Trust Value and Key in MANET (신뢰도와 키를 이용한 보안 라우팅 기법에 관한 연구)

  • Yang, Hwanseok
    • Journal of Korea Society of Digital Industry and Information Management, v.11 no.3, pp.69-77, 2015
  • A MANET is composed solely of mobile nodes with a limited transmission range. Its dynamic topology, caused by frequent node movement, makes routing difficult and exposes the network to security vulnerabilities. In this paper, we propose a secure routing technique consisting of a two-step mechanism that responds effectively to attacks that modify routing information and transmits data securely. The proposed method uses a hierarchical structure: within each cluster, an authentication node is elected that issues keys to the nodes of that cluster and manages the key-issuance records used to encrypt routing information from the source node. A trust value for each node is maintained in a routing trust table to secure data transmission. In the first step, the routing information is encrypted with the key issued by the authentication node and route discovery is performed. In the second step, the average trust value of the nodes on each discovered path is computed and the path with the highest average trust is selected (a selection sketch follows), improving the safety of data transmission. The improved performance of the proposed method was confirmed through comparative experiments with CBSR and SEER, which showed better transmission delay, control packet overhead, and packet delivery ratio.
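
The path-selection step in the second stage reduces to a small computation, sketched below with made-up trust values and routes.

```python
trust = {"A": 0.9, "B": 0.4, "C": 0.8, "D": 0.7, "E": 0.95}

def avg_trust(path):
    """Average the trust values of the nodes on a path."""
    return sum(trust[node] for node in path) / len(path)

routes = [["A", "B", "D"], ["A", "C", "D"], ["A", "E", "D"]]
best = max(routes, key=avg_trust)
print(best, round(avg_trust(best), 3))  # ['A', 'E', 'D'] 0.85
```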

An Advanced Parallel Join Algorithm for Managing Data Skew on Hypercube Systems (하이퍼큐브 시스템에서 데이타 비대칭성을 고려한 향상된 병렬 결합 알고리즘)

  • 원영선; 홍만표
    • Journal of KIISE: Computer Systems and Theory, v.30 no.3_4, pp.117-129, 2003
  • In this paper, we propose an advanced parallel join algorithm that efficiently processes join operations on hypercube systems. The algorithm broadcasts the relation R in a manner compatible with the hypercube structure, yielding a parallel join algorithm optimized for that topology. It provides a complete solution to the two essential problems in parallelizing the join operation: load balancing and data skew. To solve these problems, we exploit the characteristics of the clustering effect in the algorithm, which improves performance across the whole system relative to existing algorithms. Moreover, the new algorithm has the advantage that non-equijoin operations, which are difficult to implement in hash-based algorithms, can be implemented easily. Finally, a cost-model analysis shows that the algorithm outperforms existing parallel join algorithms. A sketch contrasting hash-partitioned and broadcast joins follows.
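
To make the contrast concrete, the sketch below simulates the broadcast strategy on toy data: every "node" receives all of relation R, then joins it against its local share of S, so a hot key in S cannot overload a single node the way hash partitioning on the join key can. Node counts and data are assumptions, and the hypercube communication pattern itself is not modeled.

```python
from collections import defaultdict

R = [(1, "r1"), (2, "r2")]
S = [(1, "s1"), (1, "s2"), (1, "s3"), (2, "s4")]  # key 1 is skewed

def broadcast_join(r, s_partitions):
    """Join R against partitions of S after broadcasting R everywhere."""
    r_index = defaultdict(list)
    for k, v in r:                 # every node receives all of R
        r_index[k].append(v)
    out = []
    for part in s_partitions:      # each node scans only its own S share
        out += [(k, rv, sv) for k, sv in part for rv in r_index[k]]
    return out

# S is split across two "nodes" by arrival order, not by key, so the
# skew on key 1 is spread evenly across nodes.
print(broadcast_join(R, [S[:2], S[2:]]))
```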